\section{Introduction}
The observation of the accelerated expansion of the universe
\cite{Obs} has revived interest in an old cosmological problem, namely,
the cosmological constant problem \cite{Weinberg}. The standard
explanation for the accelerated expansion is a positive definite
cosmological constant in the Einstein field equations \cite{Carroll1,
Carroll2}. A cosmological constant (CC) may be considered as a
geometrical object (e.g. the part of the curvature scalar that depends
only on the extra dimensions of a higher dimensional space), as the
energy density of a perfect fluid with negative pressure, or as a
combination of both. (Although these two attributions may be two
different manifestations of the same thing, the distinction enables a
more definite discussion of the problem, as we shall see.)
The vacuum expectation values of the energy-momentum tensors of quantum
fields (i.e. the energy-momentum tensors due to the zero modes of quantum
fields) have the form of the CC term in the Einstein field equations.
This identification is the main origin of the two (probably related)
most important cosmological constant problems: first, why is the energy
density derived from the measurements of the acceleration of the
universe ($\sim (10^{-3}$ eV$)^4$ \cite{PDG}) so small compared to the
energy scales associated with quantum phenomena (that is, why is the
CC so small)? Second, why do the zero modes of quantum fields
contribute to the accelerated expansion of the universe so much less
than expected?
There have been many attempts to answer these questions, at least
partially: symmetry principles, anthropic considerations, adjustment
mechanisms, quantum cosmology, the string landscape, etc.
\cite{Weinberg,Nobbenhuis}. None of these attempts has been wholly
satisfactory. One of the main ideas proposed towards the solution of
the problem is the use of symmetries such as supersymmetry and
supergravity. However, these symmetries are badly broken in nature, so
it seems that they do not offer a viable solution. Recently a symmetry
principle that does not suffer from such a phenomenological restriction
was introduced \cite{Erdem1,Erdem2,Erdem3}. This symmetry amounts to
invariance under the reversal of the sign of the metric and has two
different realizations. The first realization is implemented through
the requirement of the invariance of physics under the multiplication
of the coordinates by the imaginary number $i$
\cite{Erdem1,tHooft,Kamenshchik}. The second realization corresponds to
invariance under signature reversal \cite{Bonelli,Erdem2,Duff}
and may be realized through extra dimensional reflections \cite{Erdem2}.
In this paper both realizations of the symmetry are referred to by a
common name, ``metric reversal symmetry''.
In the previous studies the symmetry was implemented for a cosmological
constant that is geometrical in origin, e.g. a bulk CC
or a CC induced by the part of the curvature
scalar that depends only on the extra dimensions. The aim of the present
paper is to extend this symmetry to a possible contribution to the CC
induced by the vacuum expectation value of the energy-momentum tensor of
quantum fields (i.e. quantum zero modes). The main difficulty in applying
the symmetry to the contribution of the quantum zero modes
is that, in the simple setting considered in the previous
studies, it is not possible to impose it so that the matter Lagrangian
corresponding to a field is non-vanishing after integration over extra
dimensions (i.e. so that the field is observable in the usual four
dimensions at currently accessible energies) while the quantum vacuum
contributions of the fields are forbidden. This point will be discussed
in more detail in the following section. To this end, in this paper the
space is taken to be a union of two $2(2n+1)$ dimensional spaces and the
gravitational Lagrangian is taken to be $R^2$, where $R$ is the curvature
scalar. The Robertson-Walker metric is embedded in one of these $2(2n+1)$
dimensional spaces. Both realizations of the metric reversal symmetry
are imposed. The 4-dimensional Robertson-Walker metric reduces to the
Minkowski metric after the symmetry is imposed, and the action
corresponding to the matter Lagrangian is forbidden by the requirement of
invariance under $x^A\,\rightarrow\,ix^A$.
The requirement that (either realization of) the symmetry be
implemented on each space separately restricts the form of the
gravitational action: only part of the gravitational action survives,
and it can be identified with the usual Einstein-Hilbert action after
integration over the extra dimensions. After the
$x^A\,\rightarrow\,ix^A$ symmetry is broken (while the signature
reversal symmetry is preserved), the Minkowski metric converts to the
Robertson-Walker metric (with a slowly varying Hubble constant),
resulting in a small non-vanishing matter Lagrangian (and action). The
unbroken signature reversal symmetry requires that the resulting matter
Lagrangian generically contain at least one pair of off-diagonally
coupled Kaluza-Klein modes in each homogeneous term, and hence it
necessarily contains a mixture of different Kaluza-Klein modes. This,
in turn, causes the vacuum expectation value of the energy-momentum
tensor to vanish, as we shall see. Then the accelerated expansion of
the universe may be attributed to some alternative mechanism such as
quintessence \cite{quintessence,Copeland}, phantoms
\cite{phantom,Copeland}, etc., or a small CC may be induced classically
after the breaking of the $x^A\,\rightarrow\,ix^A$ symmetry, as we
shall see.
\section{A Brief Overview of Metric Reversal Symmetry}
We consider two different realizations of a symmetry
that reverses the sign of the metric
\begin{eqnarray}
ds^2\;=\;g_{AB}dx^A\,dx^B
~\rightarrow
~~-\,ds^2
\label{aa1}
\end{eqnarray}
and leaves the gravitational action
\begin{equation}
S_R = \frac{1}{16\pi\,G}\int \sqrt{(-1)^S g} \,R \,d^Dx \label{aa2}
\end{equation}
invariant, where $S$ and $g$ denote the number of space-like dimensions
and the determinant of the metric tensor, respectively. I call this
symmetry metric reversal symmetry.
The first realization of the symmetry \cite{Erdem1} is generated by the
transformations
that multiply all coordinates by
the imaginary number $i$
\begin{equation}
x^A\,\rightarrow\,i\,x^A~~,~~~g_{AB}\,\rightarrow\,g_{AB}~~. \label{aa3}
\end{equation}
The second realization \cite{Erdem2} is generated by the signature
reversal
\begin{equation}
x^A\,\rightarrow\,x^A~~,~~~g_{AB}\,\rightarrow\,-g_{AB}~~. \label{aa4}
\end{equation}
The requirement of the invariance of the action (\ref{aa2}) under either
of the realizations, Eq.(\ref{aa3}) or Eq.(\ref{aa4}), sets the
dimension of the space $D$ to
\begin{equation}
D=2(2n+1)~~~,~~~~n=0,1,2,3,....~~~.
\label{aa5}
\end{equation}
Hence both realizations forbid a bulk cosmological constant (CC) term
\begin{equation}
S_C = \frac{1}{8\pi\,G}\int \sqrt{(-1)^S g} \,\Lambda \,d^Dx \label{aa6}
\end{equation}
(provided that $S_R$ remains invariant), where $\Lambda$ is the bulk
CC.
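Both statements can be checked explicitly for the first realization, Eq.(\ref{aa3}): since $g_{AB}$ is unchanged, $\sqrt{(-1)^S g}$ is invariant, each derivative contributes a factor $-i$ (so $R$, which contains two derivatives of the metric, changes sign), and the measure picks up a factor $i^D$,
\begin{eqnarray}
&&\partial_A\,\rightarrow\,-i\,\partial_A~~,~~~d^Dx\,\rightarrow\,i^D\,d^Dx~~,~~~R\,\rightarrow\,-R \nonumber \\
&&\Rightarrow~~S_R\,\rightarrow\,-\,i^D\,S_R\,=\,S_R~~~\mbox{iff}~~i^D=-1~,
~~\mbox{i.e.}~~D=2(2n+1)~~, \nonumber \\
&&\Rightarrow~~S_C\,\rightarrow\,i^D\,S_C\,=\,-\,S_C~~~\mbox{for}~~D=2(2n+1)~~, \nonumber
\end{eqnarray}
so in exactly the dimensions where $S_R$ is invariant the bulk CC term is forbidden.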
In fact these conclusions remain valid for the signature reversal
symmetry in a more general setting where the whole space contains a
$2(2n+1)$ dimensional subspace whose metric transforms as in (\ref{aa4})
while the metric tensor for the rest of the space is even under the
symmetry. In other words, in a $D$-dimensional space where
\begin{eqnarray}
&&x^A\,\rightarrow\,x^A~~,~~~g_{AB}\,\rightarrow\,-g_{AB}~~;~~~
A,B=0,1,2,3,5,....,2(2n+1) ~~,
\label{aa7} \\
&&x^A\,\rightarrow\,x^A~~,~~~g_{A^\prime
B^\prime}\,\rightarrow\,g_{A^\prime B^\prime}~~;~~~
A^\prime,B^\prime=2(2n+1)+1,2(2n+1)+2,.......,D
\label{aa8}
\end{eqnarray}
$S_R$ is allowed while $S_C$ is forbidden as well.
A higher dimensional metric with
local Poincar\'{e} invariance may be written as
\cite{Rubakov}
\begin{eqnarray}
ds^2\,=\,
\Omega(y^c)[
g_{\mu\nu}(x)
\,dx^{\mu}dx^\nu\,+\,
\tilde{g}_{\tilde{a}\tilde{b}}(y)\,dy^{\tilde{a}}dy^{\tilde{b}}]\,+\,
g_{e^\prime d^\prime}(y)\,dy^{e^\prime}dy^{d^\prime}
\label{ab1}
\end{eqnarray}
where $x$ and $\mu,\nu\,=\,0,1,2,3$ denote the usual
4-dimensional coordinates and indices; $y$ denotes the extra dimensional
coordinates, and
$\tilde{a},\tilde{b}$=$4,5,...,2(2n+1)$,
$e^\prime,d^\prime$=$2(2n+1)+1,....,D$ denote
the extra dimensional indices.
We let,
\begin{eqnarray}
&&\Omega\,\rightarrow\,-\Omega
~~,~~~g_{\mu\nu}\,\rightarrow\,g_{\mu\nu}
~~,~~~g_{\tilde{a}\tilde{b}}\,\rightarrow\,g_{\tilde{a}\tilde{b}}
~~,~~~g_{e^\prime d^\prime}\,\rightarrow\,g_{e^\prime d^\prime}~~ .
\label{ab3}
\end{eqnarray}
We take the underlying symmetry that induces (\ref{ab3}) to be an extra
dimensional reflection symmetry. For example one may take
\begin{equation}
\Omega(y^c)\,=\,\cos k\,y~~~~~~~~y=y^D \label{ab4}
\end{equation}
where $k$ is some constant, and take the symmetry transformation to be a
reflection about $ky=\frac{\pi}{2}$ given by
\begin{equation}
ky\,\rightarrow\,\pi\,-\,ky \label{ab5}~~ .
\end{equation}
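Explicitly, with the choice (\ref{ab4}) the reflection (\ref{ab5}) reverses the sign of the conformal factor,
\begin{eqnarray}
\Omega\,=\,\cos{ky}\,\rightarrow\,\cos{(\pi\,-\,ky)}\,=\,-\cos{ky}\,=\,-\,\Omega~~,
\nonumber
\end{eqnarray}
while the metric factors in (\ref{ab1}) are untouched, so the $2(2n+1)$ dimensional part of the metric reverses sign as in (\ref{aa7},\ref{aa8}).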
There is a small yet important difference between simply postulating a
signature reversal symmetry and realizing it through (\ref{ab1}) and
(\ref{ab4}), although both forbid a cosmological constant (CC). In the
case of (\ref{ab1}) and (\ref{ab4}), one may start with a non-vanishing
CC, which then cancels out after integration over the extra dimensions,
while this is not possible if one simply postulates the metric reversal
symmetry.
The action functional corresponding to the matter sector is
\begin{equation}
S_M =
\int \sqrt{(-1)^Sg} \,{\cal L}_M \,d^Dx \label{ac1}
\end{equation}
where ${\cal L}_M$ is the Lagrangian for a matter field. If the symmetry
applies to the matter sector, then it must leave $S_M$ invariant. One
may take the dimension of the space where the field propagates to be
$D=2(2n+1)$ so that (at least) the kinetic
part of $S_M$ is invariant under the symmetry transformations. For
example the kinetic part of the Lagrangian of a scalar field $\phi$
\begin{equation}
{\cal L}_{\phi\,k}
= \frac{1}{2}g^{AB}\partial_A\phi\partial_B\phi
\label{ac2}
\end{equation}
transforms like $R$ under the transformations (\ref{aa3}) and/or
(\ref{aa4}), so that $S_M$ is invariant under the symmetry if $\phi$
propagates in a $2(2n+1)$ dimensional space and $\phi\rightarrow\pm\phi$
under the symmetry transformation. However, this also allows non-zero
contributions to the CC through the vacuum expectation value of the
energy-momentum tensor of quantum fields. The 4-dimensional
energy-momentum tensor for (\ref{ac2}) at low energies,
$T_\mu^\nu$,
is
\begin{eqnarray}
T_\mu^\nu\,=\int d^{D-4}y\,\Omega^{2n}
\sqrt{\tilde{g}\,g_e}\,
\{g^{\nu\tau}\partial_\tau\phi\partial_\mu\phi-
\frac{1}{2}\delta_\mu^\nu\,[
g^{\rho\tau}\partial_\rho\phi\partial_\tau\phi
\,+\,
\tilde{g}^{\tilde{a}\tilde{b}}\partial_{\tilde{a}}\phi\partial_{\tilde{b}}\phi
\,+\,
\Omega\,g^{e^\prime d^\prime}\partial_{e^\prime}\phi\partial_{d^\prime}\phi
]\}
\label{ac3}
\end{eqnarray}
where we employed the metric (\ref{ab1}), and $\tilde{g}$ and $g_e$ denote
the determinants of $(\tilde{g}_{\tilde{a}\tilde{b}})$ and
$(g_{e^\prime d^\prime})$, and $\delta_\mu^\nu$
denotes the Kronecker delta.
If the signature reversal symmetry is imposed through an extra
dimensional reflection, for example by (\ref{ab4}) and (\ref{ab5}), then
the last term in (\ref{ac3}) cancels out while the other terms survive
after the integration over the extra dimensions. So the 4-dimensional
energy-momentum tensor in general gives a non-zero contribution to the
vacuum energy density through its vacuum expectation value after
quantization.
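The cancellation of the last term can be seen from the parity of the conformal factor: the first terms in (\ref{ac3}) come with the even factor $\Omega^{2n}$, while the last term carries an extra $\Omega$ and hence the odd combination $\Omega^{2n+1}$. For $\Omega=\cos{ky}$,
\begin{eqnarray}
\int_0^{\pi/k}\cos^{2n+1}{(ky)}\,dy\,=\,0~~,~~~~
\int_0^{\pi/k}\cos^{2n}{(ky)}\,dy\,=\,\frac{\pi}{k}\,\frac{1}{4^n}{2n \choose n}\,\neq\,0~~,
\nonumber
\end{eqnarray}
so only the term odd under (\ref{ab5}) integrates to zero.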
One may make ${\cal L}_{\phi\,k}$ allowed by letting $\phi$ propagate
in a $4n$ dimensional space, but this would allow a bulk CC. In other
words one may adjust the dimension of the space where the field
propagates so that (\ref{ac1}) is allowed and hence the symmetry holds
for the matter sector, but this allows either a bulk CC or the
contribution of quantum zero modes.
The situation is the same for gauge fields and fermions. So one should
either consider this a classical symmetry \cite{tHooft} or construct a
more sophisticated framework where the symmetry applies at both the
classical and quantum levels. Constructing such a model is the aim of
the following sections.
\section{The Need for Both Realizations of the Symmetry and Its
Implications}
The requirement of the isotropy and the homogeneity of the usual
4-dimensional universe results in the metric
\begin{eqnarray}
ds^2&=&
\Omega(y)\,(\,dx_0^2\,-\,a(t)\,d\sigma^2\,)\,
+\,g_{ab}(y)\,dy^{a}dy^b
\label{ba2} \\
&&y\,\equiv\,(y_1,y_2,.....,y_{D-4})~~,~~~y_1=x_5,~y_2=x_6,~.....,~y_{D-4}=x_D
~~~~~~~a,b\,=1,2,3,.......,D-4 \nonumber \\
&&d\sigma^2\,=\,\frac{dr^2}{1-K^2r^2}+r^2d{\it \Omega}^2 ~~.
\nonumber
\end{eqnarray}
Further I impose the symmetry
\begin{eqnarray}
&&ds^2
~\rightarrow
~~-\,ds^2~~~~~\mbox{as}~~~~~~
x^A\,\rightarrow\,i\,x^A~~,~~~g_{AB}\,\rightarrow\,g_{AB} \label{ba3} \\
&&~~~~~~A=0,1,2,3,5,....,D ~~~.\nonumber
\end{eqnarray}
This requires
\begin{eqnarray}
&&\Omega(y)\,\rightarrow\,\Omega(y)
~~,~~~a(t)\,\rightarrow\,a(t)
~~,~~~
K^2r^2\,\rightarrow\,
K^2r^2~~,~~~
g_{ab}\,\rightarrow\,g_{ab}~~. \label{ba4}
\end{eqnarray}
This, together with the requirement that after integration over the
extra dimensions the metric correspond to the solution of the
4-dimensional Einstein equations with a cosmological constant (as the
only source), implies that
\begin{equation}
a(t)\,=\,\mbox{constant}~~~,~~~~K^2=0 ~~. \label{ba5}
\end{equation}
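One way to see (\ref{ba5}) explicitly: under $r\rightarrow ir$ the spatial curvature factor changes sign unless $K^2=0$, and the de Sitter solution of the Einstein equations with only a CC source is invariant under $t\rightarrow it$ only for a vanishing Hubble constant,
\begin{eqnarray}
&&1-K^2r^2\,\rightarrow\,1+K^2r^2~~~\Rightarrow~~K^2=0~~, \nonumber \\
&&a(t)\,=\,e^{Ht}\,\rightarrow\,a(it)\,=\,e^{iHt}\,=\,a(t)~~~\Rightarrow~~H=0~~,
~~\mbox{i.e.}~~a(t)=\mbox{constant}~~. \nonumber
\end{eqnarray}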
In other words the first realization of the symmetry, Eq.(\ref{ba3}),
requires the 4-dimensional part of the metric to be the usual Minkowski
metric, that is,
\begin{eqnarray}
ds^2&=&
\Omega(y)\,(
\,dx_0^2
\,-\,dx_1^2
\,-\,dx_2^2
\,-\,dx_3^2)
+\,g_{ab}(y)\,dy^{a}dy^b ~~.
\label{ba6}
\end{eqnarray}
Eq.(\ref{ba6}) suggests that one may get rid of the 4-dimensional
cosmological constant (CC) problem (provided that the extra dimensional
contributions vanish) once the first realization of the metric reversal
symmetry or (global) Poincar\'{e} symmetry is imposed. Then the
smallness of the observational value of the CC could be attributed to
the breaking of the symmetry by a tiny amount, if the renormalized
value of the CC due to vacuum fluctuations were of the order of the
observed value. On the other hand the renormalized value of the CC is
proportional to the particle masses \cite{renorm-cc}. So even a free
electron contributes to the CC by an amount that is $\sim 10^{33}$
times larger than the observational value. Therefore the first
realization of metric reversal symmetry by itself cannot be used to
make the CC vanish (or be tiny). In the next section we will see how
the signature reversal symmetry (realized through extra dimensional
reflections) can be used to make the contribution of the quantum zero
modes vanish.
However, the first realization has an advantage over the second one,
especially when the second realization is taken to be an extra
dimensional reflection of the form of (\ref{ab5}). Extra dimensional
reflections do not act on the 4-dimensional coordinates, so they cannot
forbid a contribution from the 4-dimensional part of the metric (for
example through $a(t)$), while the first realization always does, by
setting such a contribution to zero as we have seen. So in the next
section we will employ both realizations of the symmetry. The second
realization, through extra dimensional reflections, will cancel the
contributions to the CC, while the first one will allow a small CC
after it is broken by a small amount.
Next we determine the form of the conformal factor $\Omega$ when both
realizations of the symmetry are imposed. We have obtained in
(\ref{ba6}) the form of the metric after the first realization of the
symmetry is imposed. Eqs.(\ref{ba3},\ref{ba4}) set the form of the
conformal factor $\Omega$ in (\ref{ba2}) to one of the following
\begin{equation}
\Omega(y)\,=\,\Omega(|y|)~~~~~\mbox{or}~~~~~
\Omega(y)\,=\,f(y)f(iy)~~~~ (\mbox{e.g.}~\cos{ky}\cosh{ky})
\label{ba8}
\end{equation}
where $f(y)$ is an even function of $y$, i.e. $f(-y)=f(y)$.
Next we apply (\ref{ab5}) to (\ref{ba8}), require (\ref{ab3}), and take
the extra dimension $y$ to be an $S^1/Z_2$ interval. This restricts the
form of $\Omega$ to
\begin{equation}
\Omega(y)\,=\,\cos{k|y|}~~~~~\mbox{or}~~~~~
\Omega(y)\,=\,\tan{k|y|}
\label{ba9}
\end{equation}
where $\cot{k|y|}$ has been excluded because it blows up at the
locations of the branes at $k|y|=0$ and $k|y|=\pi$. For simplicity I
take
\begin{equation}
\Omega(y)\,=\,\cos{k|y|}
\label{ba10}
\end{equation}
in the next section whenever necessary.
\section{The Model: Classical Aspects}
In this section we employ both realizations of the metric reversal
symmetry in a space that is the sum of two $2(2n+1)$ dimensional spaces
(where the usual 4-dimensional space is embedded in one of them) and
modify the curvature term $S_R$ so that the metric reversal symmetry
becomes a good candidate to explain the huge discrepancy between the
observed value of the cosmological constant (CC) and the theoretically
expected contribution to it through quantum zero modes. In this study I
adopt the view that the symmetry forbids both the geometrical and the
vacuum energy density contributions to the CC. Hence the CC is forced
to be zero when the symmetry is manifest, and it is tiny when the
symmetry is broken by a tiny amount (instead of seeking a solution
where both contributions cancel each other to a very high precision to
explain the observed value of the CC). In this section the main
classical aspects of a framework to this end are introduced.
Consider the whole space to be a sum of two $2(2n+1)$ dimensional spaces
with the metric
\begin{eqnarray}
ds^2&=&
g_{AB}dx^A\,dx^B
\,+\,g_{A^\prime B^\prime}dx^{A^\prime}\,dx^{B^\prime} \nonumber \\
&=&
\Omega_z(z)[
g_{\mu\nu}(x)
\,dx^{\mu}dx^\nu\,+\,
\tilde{g}_{ab}(y)\,dy^{a}dy^b]
\,+\,
\Omega_y(y)\tilde{g}_{A^\prime B^\prime}(z)\,dz^{A^\prime}dz^{B^\prime}
\label{c1} \\
&&\Omega_y(y)\,=\,\cos{k|y|}
~~,~~~\Omega_z(z)\,=\,\cos{k^\prime|z|} \label{c1a} \\
A,B&=&0,1,2,3,5,....,N~~,~~~N=2(2n+1)~~,~~~~
A^\prime,B^\prime=1^\prime,2^\prime,....,N^\prime~~,~~~N^\prime=2(2m+1)
\nonumber \\
&&~~~~~~~~~~~\mu,\nu=0,1,2,3~,~~~a,b=1,2,...,N-4~~,~~~n,m=0,1,2,3......~~.
\nonumber
\end{eqnarray}
The usual four dimensional space is embedded in the first space,
$g_{AB}dx^A\,dx^B$, as is evident from (\ref{c1}). We take the action
to be invariant under both realizations of the metric reversal
symmetry, that is,
\begin{eqnarray}
&&ds^2
\,\rightarrow
\,-\,ds^2~~~\mbox{as}~~~~
x^A\,\rightarrow\,i\,x^A\,,~~
x^{A^\prime}\,\rightarrow\,i\,x^{A^\prime}\,,~~
g_{AB}\,\rightarrow\,g_{AB}\,,~~
g_{A^\prime B^\prime}\,\rightarrow\,g_{A^\prime B^\prime}
\label{c2} \\
&&\Rightarrow~~\Omega_z\,\rightarrow\,\Omega_z
\,,~~\Omega_y\,\rightarrow\,\Omega_y
\,,~~~g_{\mu\nu}\,\rightarrow\,g_{\mu\nu}
\,,~~\tilde{g}_{ab}\,\rightarrow\,\tilde{g}_{ab}
\,,~~\tilde{g}_{A^\prime B^\prime}\,\rightarrow\,\tilde{g}_{A^\prime
B^\prime} \label{c3}
\end{eqnarray}
and
\begin{eqnarray}
&&ds^2
\,\rightarrow
\,-\,ds^2~~~\mbox{as}~~~~
ky\,\rightarrow\,\pi\,-\,ky~,~~
k^\prime z\,\rightarrow\,\pi\,-\,k^\prime z~,~~
x^A\,\rightarrow\,x^A\,,~~
x^{A^\prime}\,\rightarrow\,x^{A^\prime}
\label{c4} \\
&&\Rightarrow~~\Omega_z\,\rightarrow\,-\Omega_z
\,,~~\Omega_y\,\rightarrow\,-\Omega_y
\,,~~g_{\mu\nu}\,\rightarrow\,g_{\mu\nu}
\,,~~\tilde{g}_{ab}\,\rightarrow\,\tilde{g}_{ab}
\,,~~\tilde{g}_{A^\prime B^\prime}\,\rightarrow\,\tilde{g}_{A^\prime
B^\prime} ~~.\label{c5}
\end{eqnarray}
As in (\ref{ba6}) and (\ref{ba10}), the requirements of the homogeneity
and isotropy of the 4-dimensional space together with the equations
(\ref{c2}-\ref{c5}) set $g_{\mu\nu}$ to the Minkowski metric
$\eta_{\mu\nu}=\mbox{diag}(1,-1,-1,-1)$ and the conformal factors to
(\ref{c1a}).
\subsection{Curvature Sector}
We replace the gravitational action in (\ref{aa2}) by an $R^2$ action
\begin{eqnarray}
S_R &=& \frac{1}{16\pi\,\tilde{G}}\int
\,dV\,\tilde{R}^2
\label{ca1} \\
&&dV\,=\,dV_1\,dV_2~,~~dV_1\,=\,\sqrt{g(-1)^S} \,d^Nx
~,~~dV_2\,=\,
\sqrt{g^\prime
(-1)^{S^\prime}}
\,d^{N^\prime}x^\prime
\label{ca2} \\
&&\tilde{R}=R(x,x^\prime)+R^\prime(x,x^\prime) \label{ca3}
\end{eqnarray}
where the
unprimed quantities denote those corresponding to the
$N=2(2n+1)$
dimensional space, and the
primed quantities denote those corresponding to the
$N^\prime=2(2m+1)$
dimensional space. Under the transformations (\ref{c4},\ref{c5})
\begin{eqnarray}
&&dV_1\,\rightarrow \,-\,dV_1~,~~dV_2\,\rightarrow \,\,dV_2~~
~~~\mbox{as}~~~~
ky\,\rightarrow\,\pi\,-\,ky~,~~
x^A\,\rightarrow\,\,x^A\,,~~
x^{A^\prime}\,\rightarrow\,\,x^{A^\prime}
\label{ca4} \\
&&dV_1\,\rightarrow \,dV_1~,~~dV_2\,\rightarrow \,-\,dV_2~~
~~~\mbox{as}~~~~
k^\prime z\,\rightarrow\,\pi\,-\,k^\prime z~,~~
x^{A}\,\rightarrow\,\,x^{A}~,~~
x^{A^\prime}\,\rightarrow\,\,x^{A^\prime}
\label{ca5} \\
&&R\,\rightarrow \,R~,~~R^\prime\,\rightarrow \,-R^\prime
~~~\mbox{as}~~~~
ky\,\rightarrow\,\pi\,-\,ky~,~~
x^A\,\rightarrow\,\,x^A\,,~~
x^{A^\prime}\,\rightarrow\,\,x^{A^\prime}
\label{ca6} \\
&&R\,\rightarrow \,-R~,~~R^\prime\,\rightarrow \,R^\prime
~~~\mbox{as}~~~~
k^\prime z\,\rightarrow\,\pi\,-\,k^\prime z~,~~
x^A\,\rightarrow\,\,x^A\,,~~
x^{A^\prime}\,\rightarrow\,\,x^{A^\prime}~~.
\label{ca7}
\end{eqnarray}
We observe that
\begin{eqnarray}
&&dV\,=\,dV_1dV_2\,\rightarrow \,-\,dV \label{ca8} \\
&&R^2\,
\rightarrow \,
R^2
~,~~
R^{\prime 2}\,
\rightarrow \,
R^{\prime 2}~,~~
R\,R^\prime\,
\rightarrow \,
-R\,R^\prime\,
\label{ca9}
\end{eqnarray}
when the symmetry transformations act on only one of the spaces, the
unprimed or the primed one. So only the cross terms $RR^\prime$ are
allowed; in other words, only these terms may survive after integration
over the extra dimensions. In fact it is obvious from the above
transformation rules that an Einstein-Hilbert type of action is not
allowed directly, because each piece $R$ and $R^\prime$ in $\tilde{R}$
is odd while $dV$ is even under a transformation applied to both
subspaces, the unprimed and the primed ones.
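The selection of the cross terms can be made explicit by combining (\ref{ca8}) and (\ref{ca9}): under the reflection applied to a single subspace,
\begin{eqnarray}
dV\,R^2\,\rightarrow\,-\,dV\,R^2~~,~~~
dV\,R^{\prime\,2}\,\rightarrow\,-\,dV\,R^{\prime\,2}~~,~~~
dV\,R\,R^\prime\,\rightarrow\,dV\,R\,R^\prime~~,
\nonumber
\end{eqnarray}
so in $\tilde{R}^2\,=\,R^2\,+\,2\,R\,R^\prime\,+\,R^{\prime\,2}$ only the cross term contributes to $S_R$.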
Since only $RR^\prime$ terms are allowed (\ref{ca1})
becomes
\begin{eqnarray} S_R &=& \frac{M^{N+N^\prime-4}}{16\pi\,\tilde{G}}\int
\sqrt{(-1)^S g}
\sqrt{(-1)^{S^\prime} g^\prime}
\,2\,R(x)\,R^\prime(x^\prime)
\,d^Nx
\,d^{N^\prime}x^\prime \nonumber \\
&=&
\frac{1}{16\pi\,G}
\int
\sqrt{(-1)^S g}
\,R(x)\, \,d^Nx
\label{ca10}
\end{eqnarray}
where
\begin{equation}
\frac{1}{16\pi\,G}\,=\,
M_{pl}^2(\frac{M}{M_{pl}})^2M^{N+N^\prime-6}
\frac{1}{16\pi\,\tilde{G}}\int
\sqrt{(-1)^{S^\prime} g^\prime}
\,2\,R^\prime(x^\prime)
\,d^{N^\prime}x^\prime \label{ca11}
\end{equation}
and $\tilde{G}$ is a dimensionless constant. In other words, in the
usual 4 dimensions at low energies (\ref{ca1}) is the same as the
Einstein-Hilbert action (\ref{aa2}). Newton's constant in $N$
dimensions, $G$, is related to Newton's constant in $N+N^\prime$
dimensions through Eq.(\ref{ca11}). The integral in (\ref{ca11}) is of
the order of $\sim L^{\prime\,N^\prime-2}\sim
\frac{1}{M^{N^\prime-2}}$. Hence Eq.(\ref{ca11}) may explain the
smallness of the gravitational interaction compared to the other
interactions if the energy scale associated with $L^\prime$ is much
smaller than the Planck mass $M_{Pl}$, i.e. if
$L^\prime\gg\frac{1}{M_{Pl}}$, as in the models with large extra
dimensions, especially when $L\,(L^\prime)\,<\,\frac{1}{M}$.
\subsection{Matter Sector}
In this subsection we consider the matter action
\begin{eqnarray}
S_M &=&
\int
\,dV\,{\cal L}_M
\label{cb1} \\
&&dV\,=\,\sqrt{(-1)^S g}\sqrt{(-1)^{S^\prime} g^\prime} \,d^Nx\,d^{N^\prime}x^\prime
\nonumber
\end{eqnarray}
and we consider the 4-dimensional form of $S_M$ after integration over
the extra dimensional spaces. We then study the vacuum expectation
value of the energy-momentum tensor induced by the corresponding
Lagrangian in the section after the next.
It is evident that
under the first realization of the symmetry
\begin{eqnarray}
&&dV
\,\rightarrow \,\,
dV
~~~\mbox{as}~~~~
x^{A(A^\prime)}
\,\rightarrow\,
i\,x^{A(A^\prime)}~,~~
g_{AB(A^\prime B^\prime)}\,\rightarrow\,g_{AB(A^\prime B^\prime)}
\label{cb2}
\end{eqnarray}
for a space consisting of the sum of two $2(2n+1)$ dimensional spaces
as in (\ref{c1}). The kinetic part of ${\cal L}_M$ is not invariant
under the transformations
$x^{A(A^\prime)}\,\rightarrow\,i\,x^{A(A^\prime)}$
for the usual fields
\cite{tHooft}. So $S_M$ is not invariant under the symmetry generated
by $x^{A(A^\prime)}\,\rightarrow\,i\,x^{A(A^\prime)}$. In other words
the first realization of the metric reversal symmetry is maximally
broken in the matter sector (and hence the scale factor $a(t)$ in the
Robertson-Walker metric may be time dependent). On the other hand I
take a higher dimensional version of the $PT$ symmetry,
$x^{A(A^\prime)}\,\rightarrow\,-\,x^{A(A^\prime)}$, to be almost exact
and broken by a tiny amount. In other words I adopt
\begin{equation}
x^A\,\rightarrow\,-\,x^A ~~,~~~~
x^{A^\prime}\,\rightarrow\,-\,x^{A^\prime}
\label{cb3}
\end{equation}
which is a
subgroup of the group generated by
\begin{eqnarray}
x^{A(A^\prime)}\,&\rightarrow&\,i\,x^{A(A^\prime)}\,
\rightarrow\,i\,(i\,x^{A(A^\prime)})\,
=-\,x^{A(A^\prime)}
\rightarrow\,i\,(i\,(i\,x^{A(A^\prime)}))
=-i\,x^{A(A^\prime)} \nonumber \\
&&\rightarrow\,i(i\,(i\,(i\,x^{A(A^\prime)})))
\,=\,x^{A(A^\prime)}~~.
\label{cb4}
\end{eqnarray}
The symmetries in (\ref{cb3}) are imposed on each subspace separately.
Next I impose an additional 4-dimensional PT symmetry generated by
\begin{equation}
x\,\rightarrow\,-x \label{cb5}~~.
\end{equation}
Eqs.(\ref{cb3},\ref{cb5})
together imply that a PT symmetry in the 4 dimensions
and an additional PT-like symmetry in the extra dimensional sector are
assumed. One observes that ${\cal L}_{M}$ is invariant under
Eqs.(\ref{cb3},\ref{cb5}) because
$S_M$ and $dV$ are invariant under these symmetries. The eigenvectors
of Eqs.(\ref{cb3},\ref{cb5}) do not mix because the Lagrangian (and so
the Hamiltonian) is invariant under these symmetries. So the fields
$\phi$ in the Lagrangian should be eigenvectors of these symmetries.
To make the argument more concrete,
consider the Fourier decomposition (i.e. Kaluza-Klein decomposition)
of a general field $\phi$ (where possible spinor or vector indices are
suppressed). For simplicity we take
$\tilde{g}_{ab}=-\delta_{ab}$,
$g_{A^\prime B^\prime}=-\delta_{A^\prime B^\prime}$, and consider only
the Fourier decomposition of $\phi$ corresponding to a single dimension
$y$ and $z$ from each of the subspaces, the unprimed and the primed
one. We show that the Fourier expansions given below are eigenvectors
of Eqs.(\ref{cb3},\ref{cb5}),
\begin{eqnarray}
\phi_{AA}(x,y,z)&=&\sum_{n,m} \,\phi^{AA}_{n,m}(x)\,
\sin{(n\,ky)}
\,\sin{(m\,k^\prime z)}
\label{cb6} \\
\phi_{AS}(x,y,z)&=&\sum_{n,m} \,\phi^{AS}_{n,m}(x)\,
\sin{(n\,ky)}
\,\cos{(m\,k^\prime z)}
\label{cb7} \\
\phi_{SA}(x,y,z)&=&\sum_{n,m} \,\phi^{SA}_{n,m}(x)\,
\cos{(n\,ky)}
\,\sin{(m\,k^\prime z)}
\label{cb8} \\
\phi_{SS}(x,y,z)&=&\sum_{n,m} \,\phi^{SS}_{n,m}(x)\,
\cos{(n\,ky)}
\,\cos{(m\,k^\prime z)}
\label{cb9} \\
&&
k=\frac{\pi}{L}~,~
k^\prime=\frac{\pi}{L^\prime}~,~~
0\leq\,y\,\leq\,L
~,~~0\leq\,z\,\leq\,L^\prime~~,~~~n,m=0,1,2,..... \nonumber
\end{eqnarray}
where we have used
$k=\frac{\pi}{L}$,
$k^\prime=\frac{\pi}{L^\prime}$ since
$0\leq\,y\,\leq\,L$,
$0\leq\,z\,\leq\,L^\prime$. In the case of fermions the integers
$n$, $m$ in (\ref{cb6}-\ref{cb9}) should be
replaced by $\frac{1}{2}n$, $\frac{1}{2}m$, respectively.
One observes that
\begin{equation}
n(m)\,\rightarrow\,-n(m)~~~\mbox{as}~~~y(z)\,\rightarrow\,-y(z)
\label{cb10}
\end{equation}
since $n(m)$ are, up to constant factors, the eigenvalues of
$\frac{\partial}{\partial y}\,
(\frac{\partial}{\partial z})$, i.e. they are the momenta corresponding
to the directions $y$ and $z$. Each transformation in (\ref{cb10}) has
the two eigenvalues $\pm\,1$, since applying the transformation twice
results in the identity transformation.
Now we show that the fields (\ref{cb6}-\ref{cb9}) are eigenstates of
the transformations (\ref{cb10}). First consider (\ref{cb6}). Applying
the transformation
(\ref{cb3}) and using (\ref{cb10}), $\phi_{AA}$ in (\ref{cb6})
transforms to
\begin{eqnarray}
\phi_{AA}(x,y,z)
&\rightarrow&\phi^\prime(x,y^\prime,z)
\,=\,\sum_{n,m} \,\phi^{AA}_{-n,m}(x)\,
\sin{(n\,ky)}
\,\sin{(m\,k^\prime z)}~~~\mbox{as}~~~~y\,\rightarrow -y
\label{cb11} \\
&\rightarrow&\phi^\prime(x,y,z^\prime)
\,=\,\sum_{n,m} \,\phi^{AA}_{n,-m}(x)\,
\sin{(n\,ky)}
\,\sin{(m\,k^\prime z)}~~~\mbox{as}~~~~z\,\rightarrow -z~~.
\label{cb12}
\end{eqnarray}
There will be no mixing of the eigenstates of (\ref{cb3}) in the
Lagrangian because the Lagrangian is invariant under (\ref{cb3}). So
$\phi_{AA}$ is either odd or even under (\ref{cb3}). In the light of
(\ref{cb10}-\ref{cb12}), the eigenstates of $\phi_{AA}$ under the
transformation are determined by $\phi^{AA}_{n,m}(x)$. The same
conclusion holds for all the $\phi$'s in (\ref{cb6}-\ref{cb9}). So,
for all of them, we have two cases for each symmetry in (\ref{cb10})
\begin{eqnarray}
&&\phi_{-n,m}(-x)\,=\,
\pm\phi_{-n,m}(x)\,=\,
\pm\phi_{n,m}(x) \label{cb13} \\
&&\phi_{n,-m}(-x)\,=\,
\pm\phi_{n,-m}(x)\,=\,
\pm\phi_{n,m}(x) ~~.\label{cb14}
\end{eqnarray}
Meanwhile one may write
(\ref{cb6}-\ref{cb9}) in the following form as well
\begin{eqnarray}
\phi_{AA}(x,y,z)&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{AA}_{n,m}(x)\,
-\,\phi^{AA}_{-n,m}(x)\,)
\sin{(n\,ky)}
\,\sin{(m\,k^\prime z)} \nonumber \\
&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{AA}_{n,m}(x)\,
-\,\phi^{AA}_{n,-m}(x)\,)
\sin{(n\,ky)}
\,\sin{(m\,k^\prime z)}
\label{cb15}\\
\phi_{AS}(x,y,z)&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{AS}_{n,m}(x)\,
-\,\phi^{AS}_{-n,m}(x)\,)
\sin{(n\,ky)}
\,\cos{(m\,k^\prime z)} \nonumber \\
&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{AS}_{n,m}(x)\,
+\,\phi^{AS}_{n,-m}(x)\,)
\sin{(n\,ky)}
\,\cos{(m\,k^\prime z)}
\label{cb16}\\
\phi_{SA}(x,y,z)&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{SA}_{n,m}(x)\,
+\,\phi^{SA}_{-n,m}(x)\,)
\cos{(n\,ky)}
\,\sin{(m\,k^\prime z)} \nonumber \\
&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{SA}_{n,m}(x)\,
-\,\phi^{SA}_{n,-m}(x)\,)
\cos{(n\,ky)}
\,\sin{(m\,k^\prime z)}
\label{cb17}\\
\phi_{SS}(x,y,z)&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{SS}_{n,m}(x)\,
+\,\phi^{SS}_{-n,m}(x)\,)
\cos{(n\,ky)}
\,\cos{(m\,k^\prime z)} \nonumber \\
&=&\frac{1}{2}\sum_{n,m} (
\,\phi^{SS}_{n,m}(x)\,
+\,\phi^{SS}_{n,-m}(x)\,)
\cos{(n\,ky)}
\,\cos{(m\,k^\prime z)}~~.
\label{cb18}
\end{eqnarray}
It is evident from Eqs.(\ref{cb15}-\ref{cb18}) that
$\phi_{AA}$ is antisymmetric under both $n\rightarrow\,-n$ and
$m\rightarrow\,-m$;
$\phi_{AS}$ is antisymmetric under $n\rightarrow\,-n$ while symmetric
under $m\rightarrow\,-m$;
$\phi_{SA}$ is symmetric under $n\rightarrow\,-n$ while antisymmetric
under $m\rightarrow\,-m$; and
$\phi_{SS}$ is symmetric under both $n\rightarrow\,-n$ and
$m\rightarrow\,-m$. This result will be important for the value of
$S_M$ after integration over the extra dimensions.
\subsubsection{Scalar Field}
First consider ${\cal L}_{\phi k}$,
the kinetic part of the matter Lagrangian
${\cal L}_{M}$ for a scalar field (in
the space given in (\ref{c1}))
\begin{eqnarray}
&&{\cal L}_{\phi\,k} \,=\,
{\cal L}_{\phi\,k1}\,+\,
{\cal L}_{\phi\,k2}
\label{cba1} \\
&&{\cal L}_{\phi\,k1}\,=\,
\frac{1}{2}g^{AB}\partial_A\phi\partial_B\phi~~,~~~
{\cal L}_{\phi\,k2}
\,=\,\frac{1}{2}g^{A^\prime B^\prime}
\partial_{A^\prime}\phi\partial_{B^\prime}\phi~~.
\label{cba2}
\end{eqnarray}
Once the breaking of the first realization of the symmetry
in the matter sector is
granted we may go on to seek the implications of the
manifestations of the residual
symmetry (\ref{cb3},\ref{cb5}) and the
second realization of the symmetry
given by Eqs.(\ref{c4},\ref{c5}) that remains unbroken.
${\cal L}_M$ (i.e. ${\cal L}_{\phi k}$ in this case) is
even under
the simultaneous application of the signature reversal symmetry to
both subspaces because $dV$ is even
under the symmetry and we require the invariance of $S_M$ (i.e. $S_{\phi
k}$ in this case).
So any $\phi$ may be
written as a sum of the eigenstates of the symmetry. The eigenvalues of
the symmetry transformation
$k^{(\prime)}y(z)\,\rightarrow\,
\pi\,-\,k^{(\prime)}y(z)$ are $\pm\,1$ because application of the
transformation twice results in the identity transformation. Because
$g^{AB}$ ($g^{A^\prime B^\prime}$) is odd, the terms
$\partial\phi\partial\phi$ must be odd as well under the symmetry
transformation. So the kinetic term in (\ref{cba1}) contains mixed
eigenstates of the symmetry. In the following paragraphs we
identify these eigenstates with the odd and even terms in the Fourier
decomposition (i.e. the Kaluza-Klein decomposition) of $\phi$; this
result will have important consequences in what follows.
In the next paragraph we see, through an example,
explicitly how
$S_M$
contains
mixing of different Kaluza-Klein modes off-diagonally. This result, in
turn, will be crucial in ensuring the vanishing of the vacuum expectation
value of the energy-momentum tensors of quantum fields two sections
below.
To illustrate the idea I avoid unnecessary
complications and consider the
simplest realistic case: $N=6$, $N^\prime=2$. The kinetic part of $S_M$
(i.e. $S_{\phi}$ in this case) for $\phi_{SS}$ of Eq.(\ref{cb9})
in the space (\ref{c1}) where the conformal factors are of the form
(\ref{c1a}) is given by ( see Appendix A)
\begin{eqnarray}
S_{\phi k} &=&
\frac{1}{8}(LL^\prime)^2\int\,d^4x\,\{
4\eta^{\mu\nu}\partial_\mu
[\,\phi_{1,2}(x)\,+\,\phi_{1,0}(x)]
\,\partial_\nu(\,\phi_{0,0}(x)\,)\nonumber \\
&&+\,
4\eta^{\mu\nu}\partial_\mu
[\,\phi_{0,2}(x)\,+\,\phi_{0,0}(x)
\,+\,\phi_{2,2}(x)\,+\,\phi_{2,0}(x)]
\,\partial_\nu(\,\phi_{1,0}(x)\,)\nonumber \\
&&+\,
4\eta^{\mu\nu}\sum_{r=1,s=1}^{\infty}
\partial_\mu
[\,\phi_{|r-1|,|s-2|}(x)\,+\,\phi_{|r-1|,s+2}(x)
\nonumber \\
&&+\,2\phi_{|r-1|,s}(x)\,+\,\phi_{r+1,|s-2|}(x)\,+\,\phi_{r+1,s+2}(x)
\,+\,2\phi_{r+1,s}(x)
\,] \partial_\nu(\,\phi_{r,s}(x)\,)
\nonumber \\
&&\,-4k^2\,\,
\sum_{r=1,s=0}\,r[\,(|r-1|)
(\,\phi_{|r-1|,|s-2|}(x)\,+\,\phi_{|r-1|,s+2}(x)
\,+\,2\,\phi_{|r-1|,s}(x)\,)
\nonumber\\
&&+\,(r+1)
(\,\phi_{r+1,|s-2|}(x)\,+\,\phi_{r+1,s+2}(x)
\,+\,2\,\phi_{r+1,s}(x)\,)\,-\,\phi_{r+1,s}(x)\,]
\phi_{r,s}(x) \nonumber \\
&&\,-4\frac{1}{2}k^{\prime 2}\,\,
\sum_{r=0,s=1}\,s
\,[\,(|s-3|)\phi_{r,|s-3|}(x)\,+\,(s+3)\,\phi_{r,s+3}(x) \nonumber \\
&&+\,3(|s-1|)\,\phi_{r,|s-1|}(x)\,+\,3(s+1)\,\phi_{r,s+1}(x)\,]
\,\phi_{r,s}(x) \}~~.
\label{cba6}
\end{eqnarray}
The expressions for
$\phi_{AS}$, $\phi_{SA}$, $\phi_{AA}$
are the same as (\ref{cba5}) up to
minus and plus signs in front of the $\phi_{n,m}$ terms. Hence the expressions
for $\phi_{AS}$, $\phi_{SA}$, $\phi_{AA}$
are the same as (\ref{cba6}), because the change in the sign of the
coefficients of the $\phi_{n,m}$ is compensated by the change of sign
due to the symmetry properties of the $\phi_{n,m}$'s under $n\rightarrow -n$,
$m\rightarrow -m$. Although the expressions for $S_{\phi k}$ for all of
$\phi_{AA}$, $\phi_{AS}$, $\phi_{SA}$, $\phi_{SS}$ are essentially the
same and given by (\ref{cba6}), the $S_{\phi k}$ for $\phi_{SS}$
differs from the others in one important respect: only it
contains the zero mode $\phi_{0,0}$, which is identified with the usual
particles. So I take
$\phi_{SS}$ as the only physically relevant state for $\phi$.
One observes that Eq.(\ref{cba6}) contains only off-diagonal mixing of
Kaluza-Klein modes. One may
easily see that a bulk mass term for $\phi$ results in essentially the
same form as the 4-dimensional kinetic term in (\ref{cba6}) where the
derivatives are absent. Any other power of $\phi$ necessarily
contains off-diagonal mixings of Kaluza-Klein modes. These
observations are important in showing that the vacuum expectation value of
the energy-momentum tensor vanishes in the exact
manifestation of the extra dimensional reflection symmetry. A
more detailed analysis of Eq.(\ref{cba6}) and of these points will be given
in the next section.
Next consider a bulk mass term (for $\phi_{SS}$)
\begin{eqnarray}
S_{\phi m} &=&
\frac{1}{2}m\int
\,\sqrt{(-1)^S g}\sqrt{(-1)^{S^\prime} g^\prime} \,d^Dx\,d^Dx^\prime
\phi^2 \nonumber \\
&=&
\frac{1}{2}m\,LL^\prime\int\,d^4x\,
\sum_{n,m,r,s}
\,\phi_{n,m}(x)\,
\phi_{r,s}(x)\nonumber \\
&&\int_0^L\,dy\,
\cos{k y}\,
\cos{(n\,k|y|)}
\cos{(r\,k|y|)}
\int_0^{L^\prime}\,dz\,
\cos^3{k^\prime z}\,
\cos{(m\,k^\prime|z|)}
\,\cos{(s\,k^\prime|z|)} \nonumber \\
&=&
\frac{1}{64}m(LL^\prime)^2\int\,d^4x\,\{
\sum_{n,m,r,s}\,\phi_{n,m}(x)\,\phi_{r,s}(x)
[(\delta_{n,-r-1}+\delta_{n,1-r}
+\delta_{n,r-1}+\delta_{n,1+r})
\nonumber \\
&&
\times\,(\delta_{m,-s-3}+
\delta_{m,3-s}+
\delta_{m,s-3}+
\delta_{m,s+3} \nonumber \\
&&+
3\delta_{m,-s-1}+
3\delta_{m,1-s}
+3\delta_{m,s-1}+
3\delta_{m,s+1})]\} ~~.\label{cba7}
\end{eqnarray}
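The Kronecker-delta structure in (\ref{cba7}) traces back to overlap integrals of the type $\int \cos^3 z\,\cos(mz)\cos(sz)\,dz$. A small symbolic check, with the dimensionful scales set to one and the integral taken over a full period for illustration, reproduces the selection rules, since $\cos^3 z=(3\cos z+\cos 3z)/4$:

```python
import sympy as sp

z = sp.Symbol('z', real=True)

def overlap(m, s):
    # z-part of the mass-term integrand, scales set to 1, over one full period
    return sp.integrate(sp.cos(z)**3 * sp.cos(m*z) * sp.cos(s*z), (z, 0, 2*sp.pi))

# modes couple only when |m - s| or m + s lies in {1, 3}
assert overlap(5, 1) == 0                          # no delta is triggered
assert sp.simplify(overlap(4, 1) - sp.pi/8) == 0   # m - s = 3 only
assert sp.simplify(overlap(2, 1) - sp.pi/2) == 0   # m - s = 1 and m + s = 3 both contribute
```

The relative weights 3 and 1 of the deltas in (\ref{cba7}) are exactly the coefficients in the expansion of $\cos^3 z$.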
The common aspect of Eqs.(\ref{cba6}) and (\ref{cba7}) is that
the Kaluza-Klein modes mix in such a way that there are no diagonal terms
i.e. the terms of the form
$\phi_{n,m}\phi_{n,m}$.
In fact this is a
generic property of all possible terms for all kinds of fields i.e.
scalars, fermions, gauge fields or any other kind of field. All terms
necessarily contain at least a pair of Kaluza-Klein modes that couple in
a non-diagonal way. This can be seen as follows: A pair of fields that
mix in a diagonal way (i.e. as
$\phi_{n,m}\phi_{n,m}$) is even under either of the transformations in
(\ref{c4}) since
it corresponds to the terms of the form $\cos^2{n\,ky}\sin^2{m\,k^\prime
z}$. If the whole term consists of such pairs then the whole term is even
under (\ref{c4}). However the volume element is odd under
either of the transformations in (\ref{c4}).
So such a term cannot exist, i.e. it must contain at least one pair of
fields that couple in an off-diagonal way. This fact plays a crucial role
in making the vacuum expectation value of the energy momentum tensor zero
in the exact manifestation of the metric reversal symmetry. In the next
subsection we consider one additional example, that is, the kinetic term
for fermions because it is not a straightforward generalization of
the scalar
case. We will see that the same conclusion also holds in that case
as expected.
\subsubsection{Fermionic Fields}
The kinetic term of the Lagrangian for fermionic fields in the space given
by (\ref{c1}) in the presence of the signature
reversal symmetry (where the conformal factors and the unprimed space are
given by (\ref{c1a}) and (\ref{ba6}) ) is
\begin{eqnarray}
{\cal L}_{fk}\,=\,
i\bar{\psi}\Gamma^{A}\partial_A\psi
\,+\,i\bar{\psi}\Gamma^{A^\prime}\partial_{A^\prime}\psi~~.
\label{cbc1}
\end{eqnarray}
For simplicity I take
\begin{equation}
g_{\mu\nu}=\eta_{\mu\nu}~,~~
\tilde{g}_{ab}=-\delta_{ab}~,~~
\tilde{g}_{A^\prime B^\prime}=-\delta_{A^\prime B^\prime} ~~.\label{cbc2}
\end{equation}
In fact $g_{\mu\nu}=\eta_{\mu\nu}$ is enforced by the symmetry, the
4-dimensional homogeneity and isotropy of the metric as we have discussed
in the previous section. So
\begin{eqnarray}
\Gamma^A&=&(\cos{\frac{k\,z}{2}}\tau_3
\,+\,i\sin{\frac{k\,z}{2}}\tau_1)^{-1} \otimes\gamma^A
\nonumber \\
\Gamma^{A^\prime}&=&(\cos{\frac{k\,y}{2}}\tau_3
\,+\,i\sin{\frac{k\,y}{2}}\tau_1)^{-1}\otimes\gamma^{A^\prime}
\label{cbc3}
\end{eqnarray}
where
\begin{eqnarray}
\{\Gamma^{A(A^\prime)},\Gamma^{B(B^\prime)}\}=2g^{AB(A^\prime
B^\prime)}~~~,~~~~
\{\gamma^A,\gamma^B\}=2\eta^{AB}~~~,~~~~
\{\gamma^{A^\prime},\gamma^{B^\prime}\}=-2\delta^{A^\prime,B^\prime}
\label{cbc4}
\end{eqnarray}
and $\tau_3$, $\tau_1$ are the diagonal and the off diagonal real Pauli
matrices, and $\otimes$ denotes tensor product. In the case of fermions
one should use the complex expansion for the Fourier expansion
\begin{eqnarray}
\psi(x,y,z)&=&\sum_{n,m} \,\psi_{n,m}(x)\,
e^{\frac{i}{2}\,n\,ky}
\,e^{\frac{i}{2}\,m\,k^\prime z}\nonumber \\
&=&\sum_{n,m} \,(\,
\psi^{nS}_{n,m}(x)\,\cos{(\frac{1}{2}n\,ky)}
\,+\,
\psi^{nA}_{n,m}(x)\,\sin{(\frac{1}{2}n\,ky)}\,)
\,e^{\frac{i}{2}\,m\,k^\prime z}\nonumber \\
&=&\sum_{n,m} \,(\,
\psi^{mS}_{n,m}(x)\,\cos{(\frac{1}{2}m\,k^\prime z)}
\,+\,
\psi^{mA}_{n,m}(x)\,\sin{(\frac{1}{2}m\,k^\prime z)}\,)
e^{\frac{i}{2}\,n\,ky}
\label{cbc5}
\end{eqnarray}
where
\begin{eqnarray}
&&\psi^{nS}_{n,m}(x)\,=\,
\frac{1}{2}(\,\psi_{n,m}(x)\,+\,\psi_{-n,m}(x)\,)~,~~
\psi^{nA}_{n,m}(x)\,=\,
\frac{i}{2}(\,\psi_{n,m}(x)\,-\,\psi_{-n,m}(x)\,) \nonumber \\
&&\psi^{mS}_{n,m}(x)\,=\,
\frac{1}{2}(\,\psi_{n,m}(x)\,+\,\psi_{n,-m}(x)\,)~,~~
\psi^{mA}_{n,m}(x)\,=\,
\frac{i}{2}(\,\psi_{n,m}(x)\,-\,\psi_{n,-m}(x)\,) ~~.\label{cbc6}
\end{eqnarray}
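The equivalence of the complex expansion in (\ref{cbc5}) and its cosine/sine form, with the coefficients defined in (\ref{cbc6}), can be checked for a representative pair of modes $\pm n$ (here $n=3$; the symbols stand for the 4-dimensional fields at fixed $m$, and the factor 2 accounts for the two members of the pair):

```python
import sympy as sp

u = sp.Symbol('u', real=True)              # u stands for k*y
psi_p, psi_m = sp.symbols('psi_p psi_m')   # psi_{n,m}(x) and psi_{-n,m}(x)

n = 3
lhs = psi_p*sp.exp(sp.I*n*u/2) + psi_m*sp.exp(-sp.I*n*u/2)

psi_S = (psi_p + psi_m)/2                  # psi^{nS}_{n,m} of Eq.(cbc6)
psi_A = sp.I*(psi_p - psi_m)/2             # psi^{nA}_{n,m} of Eq.(cbc6)
# the +-n pair contributes twice in the symmetric/antisymmetric basis
rhs = 2*(psi_S*sp.cos(n*u/2) + psi_A*sp.sin(n*u/2))

assert sp.simplify(sp.expand(lhs.rewrite(sp.cos) - rhs)) == 0
```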
Next we substitute (\ref{cbc5}) in (\ref{cbc1}) to get $S_{fk}$.
To be specific we take
$N=6$ and $N^\prime=2$ as in the previous subsubsection. Then (\ref{cbc1})
becomes (see Appendix B)
\begin{eqnarray}
S_{fk}
&=&
\frac{1}{32}(LL^\prime)^2\int\,d^4x\,\{
\sum_{n,m,r,s}
\,[\,
i\bar{\psi}_{n,m}(x)\,\tau_3\otimes\gamma^\mu\partial_\mu(\,\psi_{r,s}(x)\,)
\nonumber\\
&&\times\,
(\delta_{n,r+2}+\delta_{n,r-2})
(\delta_{m,s+5}+
\delta_{m,s-3}+
2\delta_{m,s+1}+
\delta_{m,s-5}+
\delta_{m,s+3}+
2\delta_{m,s-1}) \nonumber \\
&& -\bar{\psi}_{n,m}(x)\,\tau_3\otimes\gamma^\mu\partial_\mu(\,\psi_{r,s}(x)\,)
\nonumber\\
&&\times\,
(\delta_{n,r+2}+\delta_{n,r-2})
(\delta_{m,s+3}+
\delta_{m,s-5}-
\delta_{m,s+5}-\delta_{m,s-3}+
2\delta_{m,s-1}
-2\delta_{m,s+1}) \nonumber \\
&&-\,
\frac{1}{2}\bar{\psi}_{n,m}(x)\,(r-n)\,\tau_3\otimes\gamma^y \psi_{r,s}(x)
\nonumber\\
&&\times\,
(\delta_{n,r+2}+\delta_{n,r-2})
(\delta_{m,s+3}+
\delta_{m,s-5}-
\delta_{m,s+5}-\delta_{m,s-3}+
2\delta_{m,s-1}
-2\delta_{m,s+1}) \nonumber \\
&&+\,\frac{1}{2}
(r-n)\,
\bar{\psi}_{n,m}(x)\,\tau_1\otimes\gamma^y
\,\psi_{r,s}(x) \nonumber \\
&&\times\,
(\delta_{n,r+2}+\delta_{n,r-2})
(\delta_{m,s+3}+
\delta_{m,s-5}-
\delta_{m,s+5}-\delta_{m,s-3}+
2\delta_{m,s-1}
-2\delta_{m,s+1}) \nonumber \\
&&-\,
\frac{1}{2}\bar{\psi}_{n,m}(x)\,(s-m)\,\tau_3\otimes\gamma^z \psi_{r,s}(x)
\nonumber \\
&&\times\,
(\delta_{n,r+1}+\delta_{n,r-1})
(\delta_{m,s+6}+\delta_{m,s-6}
+3\delta_{m,s+2}+3\delta_{m,s-2})
\nonumber \\
&&+\,(s-m)\frac{1}{2}
\,
\bar{\psi}_{n,m}(x)\,\tau_1\otimes\gamma^z
\,\psi_{r,s}(x) \nonumber \\
&&\times\, (\delta_{n,r-1}-\delta_{n,r+1})
(\delta_{m,s+6}+\delta_{m,s-6}+3\delta_{m,s+2}+3\delta_{m,s-2})\,]\,\}
\label{cbc7}
\end{eqnarray}
where we have used the identity
$\cos{u}\,(\cos{\frac{u}{2}}\tau_3\,+\,i\sin{\frac{u}{2}}\tau_1)^{-1}\,=\,
\cos{\frac{u}{2}}\tau_3\,+\,i\sin{\frac{u}{2}}\tau_1$.
We see that, in this case as well, each homogeneous term consists of
one off-diagonally coupled pair of Kaluza-Klein modes.
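The matrix identity used above, equivalent to $M^2=\cos u\;\mathbb{1}$ for $M=\cos\frac{u}{2}\tau_3+i\sin\frac{u}{2}\tau_1$, is easily verified symbolically:

```python
import sympy as sp

u = sp.Symbol('u', real=True)
tau3 = sp.Matrix([[1, 0], [0, -1]])
tau1 = sp.Matrix([[0, 1], [1, 0]])

M = sp.cos(u/2)*tau3 + sp.I*sp.sin(u/2)*tau1
# M^2 = cos(u) * identity, hence cos(u) * M^{-1} = M
residual = (M*M - sp.cos(u)*sp.eye(2)).applyfunc(sp.simplify)
assert residual == sp.zeros(2, 2)
```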
\section{the relation to Linde's model}
It is
evident from (\ref{cba6}) that the 4-dimensional kinetic term contains
the zero mode $\phi_{00}$ while the other terms, i.e. the mass terms, do
not contain the zero mode. This implies that there is a zero mass
eigenstate
that contains $\phi_{00}$. However the form of (\ref{cba6}) is rather
complicated since it involves, in general, mixing of all Kaluza-Klein modes.
An important aspect of this mixing
is the absence of diagonal terms in the mixing
terms. We will see in the next section how this plays a crucial role in
making the vacuum expectation value of energy-momentum tensor zero. Before
passing to this issue, first we should make the form of (\ref{cba6}) more
manageable. In any case one should diagonalize
(\ref{cba6}) so that, at least, the
fields in the 4-dimensional kinetic term couple to each other diagonally
i.e. we should pass to the interaction basis. One observes, due to the
signature reversal symmetry (induced through extra dimensional
reflections), that all the
terms in the 4-dimensional kinetic term in (\ref{cba6}) are mixed so
that
the terms with odd $n$'s mix with the even $n$'s,
the odd $m$'s with the odd $m$'s, and
the even $m$'s with the even $m$'s. The same behavior holds for the terms
with the coefficient $k^2$, and a similar behavior for the terms with the
coefficient $k^{\prime 2}$ (the odd $n$'s mix with the odd $n$'s,
the even $n$'s mix with the even $n$'s, and the odd $m$'s mix with the
even $m$'s and vice versa). So the form given by the
4-dimensional part of (\ref{cba6}) may only be
induced by the mixture of either of
\begin{eqnarray}
&&\phi^{OO}_{SS}(x,y,z)\,=\,\sum_{j,l=0}
\,\phi^{OOSS}_{2j+1,2l+1}(x)\,
\cos{(2j+1)\,ky}
\,\cos{(2l+1)\,k^\prime z} \nonumber \\
&&\mbox{and} \nonumber \\
&&\phi^{EO}_{SS}(x,y,z)\,=\,\sum_{j,l=0}
\,\phi^{EOSS}_{2j,2l+1}(x)\,
\cos{(2j)\,ky}
\,\cos{(2l+1)\,k^\prime z}
\label{lba7}
\end{eqnarray}
or
\begin{eqnarray}
&&\phi^{EE}_{SS}(x,y,z)\,=\,\sum_{j=1,l=0}
\,\phi^{EESS}_{2j,2l}(x)\,
\cos{(2j)\,ky}
\,\cos{(2l)\,k^\prime z} \nonumber \\
&& \mbox{and} \nonumber \\
&&\phi^{OE}_{SS}(x,y,z)\,=\,\sum_{j,l=0}
\,\phi^{OESS}_{2j+1,2l}(x)\,
\cos{(2j+1)\,ky}
\,\cos{(2l)\,k^\prime z} ~~.
\label{lba8}
\end{eqnarray}
Each sum may be an infinite series if all modes are mixed,
or it may correspond to a set of finite sums if the modes mix with each
other in a set of subsets of $r$ and $s$ in (\ref{cba6}). In the expansion
of $\phi^{EE}_{SS}$ the sum over $j$ starts from one
because we take the zero mode $\phi_{00}$ in a different eigenstate, as
we will see. The
requirement that the internal symmetries that may be
induced by extra dimensional symmetries and the usual space-time
symmetries are independent requires the whole space be a direct product of
the 4-dimensional space with the extra dimensional space. This, in turn,
requires all $\phi_{n,m}(x)$'s in the above equations to be the same up to
constant coefficients, that is,
\begin{equation}
\phi^{XY}_{SS,n,m}\,=\,C_{n,m}^{XYSS}\phi^{XY}(x) \label{lba9}
\end{equation}
where $X,Y$ may take the values $O$, $E$, and $C_{n,m}^{XYSS}$ is some
constant with the condition that it leads to a finite series. For example,
one may take
\begin{equation}
C_{n,m}=\frac{|n-2|\,|m-2|}{(n^2+1)(m^2+1)} \label{lba91}
\end{equation}
where
$|n-2|\,|m-2|$
is included to make the analysis of the zero mass eigenstate more
manageable, as we will see. Then Eqs.(\ref{lba7},\ref{lba8}) become
\begin{eqnarray}
\phi^{OO}(x,y,z)&=&[\sum_{j,l=0} \,
C^{OO}_{2j+1,2l+1}\,\cos{(2j+1)ky}\,\cos{(2l+1)k^\prime z}]
\phi^{OO}(x) \nonumber \\
&=&
[\sum_{j,l=0} \,
\frac{
|2j-1|\,|2l-1|
}{((2j+1)^2+1)((2l+1)^2+1)}\,
\cos{(2j+1)ky}\,\cos{(2l+1)k^\prime z}]\phi^{OO}(x)\nonumber \\
&\mbox{and}& \nonumber \\
\phi^{EO}(x,y,z)&=&[\sum_{j,l=0} \,
C^{EO}_{2j,2l+1}\,\cos{(2j)ky}\,\cos{(2l+1)k^\prime z}]
\phi^{EO}(x)\nonumber \\
&=&[\sum_{j,l=0} \,
\frac{
|2j-2|\,|2l-1|
}{((2j)^2+1)((2l+1)^2+1)}\,
\cos{(2j)ky}\,\cos{(2l+1)k^\prime z}]\phi^{EO}(x)\,
\label{lba10}
\end{eqnarray}
or
\begin{eqnarray}
\phi^{EE}(x,y,z)&=&[\sum_{j=1,l=0} \,
C^{EE}_{2j,2l}\,\cos{(2j)ky}\,\cos{(2l)k^\prime z}]
\phi^{EE}(x)\nonumber \\
&=&[\sum_{j=1,l=0} \,
\frac{
|2j-2|\,|2l-2|
}{((2j)^2+1)((2l)^2+1)}\,
\cos{(2j)ky}\,\cos{(2l)k^\prime z}]\phi^{EE}(x)\nonumber \\
&\mbox{and}& \nonumber \\
\phi^{OE}(x,y,z)&=&[\sum_{j,l=0} \,
C^{OE}_{2j+1,2l}\,\cos{(2j+1)ky}\,\cos{(2l)k^\prime z}]
\phi^{OE}(x)\nonumber \\
&=&[\sum_{j,l=0} \,
\frac{
|2j-1|\,|2l-2|
}{((2j+1)^2+1)((2l)^2+1)}\,
\cos{(2j+1)ky}\,\cos{(2l)k^\prime z}]\phi^{OE}(x)\,
\label{lba11}
\end{eqnarray}
where the $SS$ indices are suppressed. In the light of
(\ref{lba10},\ref{lba11}) Eq.(\ref{cba6}) becomes
\begin{eqnarray}
S_{\phi k} &=&
\frac{1}{2}(LL^\prime)^2\int\,d^4x\,\{
2\eta^{\mu\nu}\partial_\mu (\,\phi_{1,0})
\partial_\nu(\,\phi_{0,0})
\,+\,2C_1\,C_2
\eta^{\mu\nu}\partial_\mu
(\phi^{EO}(x)\,)\,
\partial_\nu
\,(\phi^{OO}(x)\,) \nonumber \\
&&+\,
2C_3\,C_4
\eta^{\mu\nu}\partial_\mu \,
(\phi^{EE}(x)\,)\,
\partial_\nu
\,(\phi^{OE}(x)\,) \nonumber \\
&&\,-k^2\,\,
[\,2C_5\,C_6
\,\phi^{OO}(x)\,
\phi^{EO}(x) \,+\,2C_7\,C_8
\,\phi^{EE}(x)\,
\phi^{OE}(x) \,]\nonumber \\
&&\,-\frac{1}{2}k^{\prime 2}\,
[\,2C_9\,C_{10}
\,\phi^{OO}(x)\,
\phi^{OE}(x) \nonumber \\
&&
+\,2C_{11}\,C_{12}
\,\phi^{EE}(x)\,\phi^{EO}(x)\,] \,\}
\label{lba12}
\end{eqnarray}
where the form of the coefficients $C_i$, $i=1,2,3,...,12$ are given in
Appendix C.
The diagonalization of (\ref{lba12}) results in
\begin{eqnarray}
S_{\phi k} &=&
\frac{1}{2}(LL^\prime)^2\int\,d^4x\,\{
\eta^{\mu\nu}(\partial_\mu\phi_1)
\partial_\nu(\,\phi_1)
\,-\,\eta^{\mu\nu}(\partial_\mu\phi_2)
\partial_\nu(\,\phi_2) \nonumber \\
&&+\,C_1\,C_2\,(\,
\eta^{\mu\nu}(\partial_\mu \phi_3(x)\,)\,
(\partial_\nu\phi_3(x)\,)
\,-\,\eta^{\mu\nu}
\,\partial_\mu(\phi_4(x)\,)\,
\partial_\nu(\phi_4(x)\,)\,)
\nonumber \\
&&+\,C_3\,C_4\,(\,
\eta^{\mu\nu}\partial_\mu(\phi_5(x)\,)\,\partial_\nu (\phi_5(x)\,)
\,-\,
\eta^{\mu\nu}\partial_\mu (\phi_6(x)\,)\,
\partial_\nu\,(\phi_6(x)\,)\,)
\nonumber \\
&&-k^2\,
[\,C_5\,C_6\,(\,\phi_3(x)\,\phi_3(x)
\,-\,
\phi_4(x)\,\phi_4(x)\,)
\nonumber \\
&&+\,C_7\,C_8\,(\,\phi_5(x)\,\phi_5(x)
\,-\,
\phi_{6}(x)\,\phi_{6}(x)\,) \,]\nonumber \\
&&-\frac{1}{2}k^{\prime 2}\,
[\,C_9\,C_{10}\,(\,\phi_{7}(x)\,\phi_{7}(x)
\,-\,
\phi_{8}(x)\,\phi_{8}(x)\,)
\nonumber \\
&&+\,C_{11}\,C_{12}
\,(\,\phi_{9}(x)\,\phi_{9}(x)\,-\,\phi_{10}(x)\,\phi_{10}(x)\,)
\,] \,\}
\label{lba14}
\end{eqnarray}
where
\begin{eqnarray}
&&\phi_1\,=\,\phi_{0,0}\,+\,\phi_{1,0}~~~,~~~~
\phi_2\,=\,\phi_{0,0}\,-\,\phi_{1,0}
~~~,~~~~
\phi_3\,=\,\phi^{EO}\,+\,\phi^{OO}
~~~,~~~~
\phi_4\,=\,\phi^{EO}\,-\,\phi^{OO}
\nonumber \\
&&\phi_5\,=\,\phi^{EE}\,+\,\phi^{OE}
~~~,~~~~
\phi_6\,=\,\phi^{EE}\,-\,\phi^{OE}
~~~,~~~~
\phi_7\,=\,\phi^{OO}\,+\,\phi^{OE}
~~~,~~~~
\phi_8\,=\,\phi^{OO}\,-\,\phi^{OE}
\nonumber \\
&&\phi_9\,=\,\phi^{EE}\,+\,\phi^{EO}
~~~,~~~~
\phi_{10}\,=\,\phi^{EE}\,-\,\phi^{EO} ~~.\label{lba15}
\end{eqnarray}
It is evident from (\ref{lba14}) that the scalar kinetic Lagrangian
(\ref{cba6}) is equivalent to a Lagrangian that consists of a set of
usual scalars and a set of ghost scalars. In fact this conclusion is valid
for all quadratic terms of all fields, e.g.
$\bar{\psi}_{n,m}\psi_{r,s}$
where $n\neq r$ and/or $m\neq s$ due to the symmetry; this term is
equivalent to $\frac{1}{2}(\bar{\psi}_1\psi_1-\bar{\psi}_2\psi_2)$ where
$\psi_1= \psi_{n,m}+\psi_{r,s}$,
$\psi_2=\psi_{n,m}-\psi_{r,s}$. This setting is similar to
Linde's model \cite{Linde} and its variants \cite{LindeVar}. Mixing
between the usual particles and the ghost sector may only be induced through
quartic and higher order terms. A detailed analysis of such
possible mixings and of the suppression of these couplings needs a separate
study of its own.
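The mechanism behind this decomposition is the elementary identity $(a+b)^2-(a-b)^2=4ab$: every purely off-diagonal pairing is, up to the overall normalisation of the fields (which we do not track here), the difference of a normal square and a ghost-like square. Schematically:

```python
import sympy as sp

a, b = sp.symbols('a b')   # stand for an off-diagonally coupled pair, e.g. phi_{0,0} and phi_{1,0}
phi_1, phi_2 = a + b, a - b

# an off-diagonal bilinear = (normal square) - (ghost square), up to normalisation
assert sp.expand(phi_1**2 - phi_2**2 - 4*a*b) == 0
```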
\section{Vacuum expectation value of energy-momentum tensor in the
presence of metric reversal symmetry}
The 4-dimensional energy momentum tensor corresponding to the
action (\ref{lba12}) is
\begin{eqnarray}
T^\nu_\mu&=&\frac{2}{\sqrt{(-1)^Sg}
\sqrt{(-1)^{S^\prime}g^\prime}}g_{\mu\rho}\frac{\delta\,S_M}
{\delta\,g_{\nu\rho}}\,=\,
2\partial_\mu
\,\phi_{1,0}(x)
\,\partial^\nu\,\phi_{0,0}(x) \nonumber \\
&&+\,2C_1\,C_2
\partial_\mu
(\phi^{EO}(x)\,)\,
\partial^\nu
\,(\phi^{OO}(x)\,)
\,+\,
2C_3\,C_4
\partial_\mu
(\phi^{EE}(x)\,)\,
\partial^\nu
\,(\phi^{OE}(x)\,) \nonumber \\
&&\,-\,\delta_\mu^\nu\,\{\,
\eta^{\rho\sigma}\partial_\rho (\,\phi_{1,0})
\partial_\sigma(\,\phi_{0,0})
\,+\,C_1\,C_2
\eta^{\rho\sigma}\partial_\rho
(\phi^{EO}(x)\,)\,
\partial_\sigma
\,(\phi^{OO}(x)\,) \nonumber \\
&&+\,
C_3\,C_4
\eta^{\rho\sigma}\partial_\rho \,
(\phi^{EE}(x)\,)\,
\partial_\sigma
\,(\phi^{OE}(x)\,) \nonumber \\
&&-k^2\,\,
[\,C_5\,C_6
\,\phi^{OO}(x)\,
\phi^{EO}(x) \,+\,C_7\,C_8
\,\phi^{EE}(x)\,
\phi^{OE}(x) \,]\nonumber \\
&&\,-\frac{1}{2}k^{\prime 2}\,
[\,C_9\,C_{10}
\,\phi^{OO}(x)\,
\phi^{OE}(x)
\,+\,C_{11}\,C_{12}
\,\phi^{EE}(x)\,\phi^{EO}(x)\,] \,\}~~.
\label{d1}
\end{eqnarray}
It is evident from (\ref{d1}) that all terms consist of
off-diagonally coupled Kaluza-Klein modes. As we have remarked before, any
4-dimensional Lagrangian term (after integration over extra dimensions)
necessarily contains at least a pair of Kaluza-Klein modes that are
off-diagonally coupled in the space given by (\ref{c1}). (As we have
remarked in the previous section, this is due to the
fact that if a term wholly consists of pairs of diagonally coupled
Kaluza-Klein modes then that term is even under the signature reversal
symmetry in contradiction with the invariance of the action under the
signature reversal symmetry.) This, in turn, leads to cancellation of the
vacuum expectation value of $T^\nu_\mu$ since it is proportional to terms
of the form
\begin{equation}
<0|T^\nu_\mu|0>~\propto~~
<0|\,a_{n,m}a_{r,s}^\dagger|0>\,=\,0~,~~
<0|\,a_{r,s}^\dagger
a_{r,s}|0>\,=\,0
~~~~~~~n\neq r~~~~\mbox{and/or}
~~~~m\neq s
\label{d2}
\end{equation}
(because $a_{r,s}|0>\,=\,0$, and $[\,a_{n,m},
a_{r,s}^\dagger\,]\,=\,0$ for
$n\neq r$ and/or $m\neq s$) where
$a_{n,m}$, $a_{n,m}^\dagger$ are the creation and annihilation operators
in the expansion of the quantum fields (in Minkowski space) given by
\begin{equation}
\phi_{n,m}(x)\,=\,\sum_{\vec{k}}\,[\,
a_{n,m}(\vec{k})\,e^{-iEt}
e^{i\vec{k}.\vec{x}}
\,+\,a_{n,m}^\dagger(\vec{k})\,e^{iEt}
e^{-i\vec{k}.\vec{x}}\,]~~.
\label{d3}
\end{equation}
The same reasoning is true for all fields. Therefore the vacuum energy
density of all fields in this scheme is zero.
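The vanishing in (\ref{d2}) can also be illustrated numerically on a truncated Fock space for two independent Kaluza-Klein modes; the sketch below is illustrative (the truncation dimension is arbitrary):

```python
import numpy as np

def annihilation(dim):
    # truncated single-mode annihilation operator: a|k> = sqrt(k)|k-1>
    return np.diag(np.sqrt(np.arange(1.0, dim)), k=1)

dim = 5
a = annihilation(dim)
I = np.eye(dim)

# two distinct modes (n,m) != (r,s) act on independent tensor factors
a_nm = np.kron(a, I)
a_rs = np.kron(I, a)
vac = np.zeros(dim*dim); vac[0] = 1.0

# off-diagonal pairings have vanishing vacuum expectation value...
assert abs(vac @ a_nm @ a_rs.conj().T @ vac) < 1e-12
assert abs(vac @ a_rs.conj().T @ a_rs @ vac) < 1e-12
# ...while a diagonal pairing a a^dagger would not vanish
assert abs(vac @ a_nm @ a_nm.conj().T @ vac - 1.0) < 1e-12
```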
In this scheme the Casimir effect can be seen as follows: Introduction of
(metallic) boundaries into the vacuum results in a change in the vacuum
configuration for the usual particles while the ghost sector vacuum
remains the same. This point can be seen better when one considers
the energy momentum tensor written in terms of
the usual and ghost fields by using (\ref{lba14})
\begin{eqnarray}
T^\nu_\mu&=&
\partial_\mu\phi_1(x)\,
\partial^\nu\phi_1(x)
\,-\,
\partial_\mu\phi_2(x)\,
\partial^\nu\phi_2(x) \nonumber \\
&&+\,C_1\,C_2\,(\,
\partial_\mu \phi_3(x)
\partial^\nu\phi_3(x)
\,-\,
\partial_\mu\phi_4(x)
\partial^\nu\phi_4(x)\,)
\nonumber \\
&&+\,C_3\,C_4\,(\,
\partial_\mu\phi_5(x)\partial^\nu\phi_5(x)
\,-\,
\partial_\mu\phi_6(x)
\partial^\nu\phi_6(x)\,)
\nonumber \\
&&-\,\frac{1}{2}\delta_\mu^\nu\,\{\,
\eta^{\rho\sigma}(\partial_\rho\phi_1)
\partial_\sigma(\,\phi_1)
\,-\,\eta^{\rho\sigma}(\partial_\rho\phi_2)
\partial_\sigma(\,\phi_2) \nonumber \\
&&+\,C_1\,C_2\,(\,
\eta^{\rho\sigma}(\partial_\rho \phi_3(x)\,)\,
(\partial_\sigma\phi_3(x)\,)
\,-\,\eta^{\rho\sigma}
\,\partial_\rho(\phi_4(x)\,)\,
\partial_\sigma(\phi_4(x)\,)\,)
\nonumber \\
&&+\,C_3\,C_4\,(\,
\eta^{\rho\sigma}\partial_\rho(\phi_5(x)\,)\,\partial_\sigma (\phi_5(x)\,)
\,-\,
\eta^{\rho\sigma}\partial_\rho (\phi_6(x)\,)\,
\partial_\sigma\,(\phi_6(x)\,)\,)
\nonumber \\
&&-k^2\,
[\,C_5\,C_6\,(\,\phi_3(x)\,\phi_3(x)
\,-\,
\phi_4(x)\,\phi_4(x)\,)
\,+\,C_7\,C_8\,(\,\phi_5(x)\,\phi_5(x)
\,-\,
\phi_{6}(x)\,\phi_{6}(x)\,) \,]\nonumber \\
&&-\frac{1}{2}k^{\prime 2}\,
[\,C_9\,C_{10}\,(\,\phi_{7}(x)\,\phi_{7}(x)
\,-\,
\phi_{8}(x)\,\phi_{8}(x)\,)
\nonumber \\
&&+\,C_{11}\,C_{12}
\,(\,\phi_{9}(x)\,\phi_{9}(x)\,-\,\phi_{10}(x)\,\phi_{10}(x)\,)
\,] \,\}~~.
\label{d4}
\end{eqnarray}
To see the situation better let us consider a simple case, for example the
part of the energy-momentum tensor that contains the zero mode. After
introduction of the (metallic) boundary the vacuum expectation value of
the
corresponding part of the energy momentum tensor changes as follows
\begin{eqnarray}
<0|\,T^\nu_\mu\,|0>_0&=&
<0|\,(\partial_\mu\phi_1)
\partial^\nu(\,\phi_1)\,|0>_0
\,-\,
<0|\,(\partial_\mu\phi_2)
\partial^\nu(\,\phi_2)\,|0>_0\,=\,0\,\rightarrow\,
<0|\,T^\nu_\mu\,|0>_{\Sigma_1} \nonumber \\
&=&
<0|\,(\partial_\mu\phi_1)
\partial^\nu(\,\phi_1)\,|0>_{\Sigma_1}
\,-\,
<0|\,(\partial_\mu\phi_1)
\partial^\nu(\,\phi_1)\,|0>_0\,
\neq\,0
\label{d5}
\end{eqnarray}
where the subscript $0$ denotes complete vacuum (without any boundary) and
the subscript $\Sigma_1$ denotes the vacuum in the presence of the
(metallic) boundaries. It is evident that this scheme results in an
automatic application of the usual subtraction prescription in
the calculation of Casimir energies, i.e. an automatic subtraction of the
zero point energy from the total vacuum energy in the presence of a
boundary.
To summarize: I have shown that the quantum zero modes do not contribute
to the cosmological constant (CC) in the scheme presented here, in the presence
of the metric reversal symmetry. Now, for the sake of completeness, I discuss
the other possible contributions to CC. The first
additional contribution is a bulk CC (that is geometric in
origin). The transformations (\ref{ca4}) and/or (\ref{ca5}) (or
equivalently the form of the conformal factors given in (\ref{c1a}) )
forbid a bulk CC (or equivalently make it vanish
after integration over extra dimensions). The second possible
contribution is a 4-dimensional CC that may be induced by
the part of the scalar curvature that depends only on extra
dimensions. Eq.(\ref{ca10}) implies that such a
contribution vanishes provided that the half of the extra dimensions in
the $2(2n+1)$ dimensional space (embedding the usual 4-dimensional space)
are spacelike and half are timelike as in \cite{Erdem1}. The next possible
contribution is the vacuum energy induced by the vacuum expectation value
of the Higgs field, which is about $10^{55}$ times the
observational value of the CC. This contribution has the
form of a bulk CC, and hence vanishes provided that the
Higgs field propagates in the whole space or in a $2(2k+1)$
dimensional subspace of it. Another possible standard contribution is the vacuum
expectation value of the QCD vacuum (that is about $10^{44}$ times the
observational value of CC).
At the classical level the same condition as for the Higgs
field applies to the space where the corresponding condensate forms.
However a rigorous conclusion needs an analysis at the quantum
level.
There are many phenomenological and/or
nonperturbative
schemes aiming to explain the formation and value of QCD condensates
(hence of the QCD vacuum energy) that can give only partial insight into the
problem \cite{QCD}. So a definite conclusion
about this point needs
further additional study. However this issue is not as urgent as the issue
of zero point energies because the problem of zero point energies arises
as soon as the fields are introduced (and quantized) even in the case of
free fields while QCD vacuum is present only inside the hadrons and is not
perfectly well understood. Another important issue to be studied in future
is the following: although I have shown that quantum fields do not induce a
non-vanishing vacuum energy at the level of the fundamental Lagrangian
(i.e. quantum zero modes do not contribute to the vacuum energy) in the
presence of the metric reversal symmetry, there is no guarantee of the
absence of non-zero contributions to the vacuum energy due to higher
dimension operators (than those of the fundamental Lagrangian).
If this is the case, the resulting vacuum energy due to quantum fields will
be scale dependent through renormalization group equations. The most
reasonable consequence of this, in turn, would be a time varying
cosmological constant \cite{varying-CC}. Time varying cosmological
constant scenarios together with quintessence models have an additional
virtue
of explaining the cosmic coincidence, i.e. the fact that the energy densities
of matter and
dark energy are of the same order of magnitude, which is not addressed
by the scheme in this paper. All these points must be
studied in the future for a clearer picture of the cause and dynamics of the
accelerated expansion of the universe.
\section{inducing a small cosmological constant by breaking the symmetry
by a small amount}
We have seen that the contribution of quantum fields to the vacuum
expectation value of the
energy-momentum tensor is always zero in the presence of the
signature reversal symmetry. However this is not true for
classical fields. For example consider a classical field that depends
only on extra dimensions and has a
Fourier expansion as in (\ref{cb6},\ref{cb9}). This field gives a non-zero
contribution to the 4-dimensional cosmological constant (CC) after
integration over the extra dimensions. For example one may take
\begin{equation}
{\cal L}_{cl}\,=\,\alpha\,
v_{1,0}v_{0,1}
\cos{k\,y}\,\cos{k^\prime\,z}
\label{e1}
\end{equation}
where $\alpha\,\ll\,1$ is a constant that reflects the fact that ${\cal L}_{cl}$ is
small, since it corresponds to the breaking of the
$x^A\rightarrow\,i\,x^A$,
$x^{A^\prime}\rightarrow\,i\,x^{A^\prime}$
symmetries separately by a small amount, and $v_{1,0}$, $v_{0,1}$ are some
constants. If one takes the same space as in Section 4 and takes
$N=6$, $N^\prime=2$ (as before) then ${\cal L}_{cl}$ in (\ref{e1}) after
integration over extra dimensions results in a 4-dimensional
CC given by
\begin{equation}
\Lambda^{(4)}\,=\,\frac{3\alpha\,v_{1,0}v_{0,1}}{16}(LL^\prime)^2
~~.\label{e2}
\end{equation}
For $\alpha\,v_{1,0}v_{0,1}\simeq\,1$, Eq.(\ref{e2}) results in the observed
value
$\Lambda\simeq\,(10^{-3}\,eV)^4$ for $L$, $L^\prime$ in the millimeter scale,
while for $\alpha\,v_{1,0}v_{0,1}\simeq \,\frac{M_{ew}}{M_{pl}}\simeq
10^{-17}$,
for example, it requires
$L(L^\prime)\,<\,10^{-7}\,m$. In any case a non-zero CC,
if it exists, is a classical phenomenon in this scheme. Another point is that
the energy density due to CC obtained in a way similar
to (\ref{e2}) may be argued to be of the order of the matter (i.e. the usual
matter plus dark matter) density since both are induced by matter
Lagrangian that corresponds to breaking of the
$x^A\rightarrow\,i\,x^A$, $x^{A^\prime}\rightarrow\,i\,x^{A^\prime}$
symmetries. However there is a difference between the two cases. The
induction of $S_M$ corresponds to breaking the symmetry given by
the simultaneous application of
$x^A\rightarrow ix^A$ and
$x^{A^\prime}\rightarrow ix^{A^\prime}$, while ${\cal L}_{cl}$ in
Eq.(\ref{e1}) corresponds to breaking $x^A\rightarrow ix^A$ and
$x^{A^\prime}\rightarrow ix^{A^\prime}$ separately.
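As a numerical sanity check of the length scales quoted above (using $\hbar c \simeq 197.3$ eV$\cdot$nm, and assuming, as Eq.(\ref{e2}) suggests, that $\Lambda^{(4)}$ scales as $\alpha\,v_{1,0}v_{0,1}/L^4$ for $L\sim L^\prime$):

```python
# Order-of-magnitude check of the length scales quoted in the text.
hbar_c = 197.327e-9        # hbar*c in eV * m

# Lambda ~ (1e-3 eV)^4 with alpha*v*v ~ 1  =>  L ~ hbar*c / (1e-3 eV)
L = hbar_c / 1e-3          # in metres
assert 1e-4 < L < 1e-3     # sub-millimetre scale, as stated

# for the same Lambda with alpha*v*v ~ 1e-17, L scales by (1e-17)^(1/4)
L_small = L * (1e-17)**0.25
assert L_small < 1e-7      # below 1e-7 m, as stated
```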
\section{Conclusion}
We have considered a space that is a sum of two $2(2n+1)$ dimensional
spaces with $R^2$ gravity and metric reversal symmetry. The usual
4-dimensional space is embedded in one of these subspaces. We have shown
that the curvature sector reduces to the usual Einstein-Hilbert action,
and the 4-dimensional energy-momentum tensor of matter fields generically
mixes different Kaluza-Klein modes so that each homogeneous
term contains at least one pair of off-diagonally coupled Kaluza-Klein
modes. This, in turn, results in vanishing of the vacuum expectation value
of the energy-momentum tensor of quantum fields. I have also shown that
such a model is equivalent to a variation of Linde's model (where the
universe consists of the usual universe plus a ghost one). There may be
some relation between this scheme and the Pauli-Villars regularization
scheme \cite{Pauli-Villars} (that employs ghost-like auxiliary fields for
regularization), and also between this scheme and Lee-Wick quantum theory
\cite{Lee}. In my opinion all these points need further detailed
study in the future.
\begin{acknowledgments}
This work was supported in part by Scientific and Technical Research
Council of Turkey under grant no. 107T235.
\end{acknowledgments}
\chapter*{Abstract}
\thispagestyle{empty}
The flavour puzzle is an open problem both in the Standard Model and in its possible supersymmetric or grand unified extensions. In this thesis, we discuss possible explanations of the origin of fermion mass hierarchies and mixings by the use of non-Abelian discrete flavour symmetries. We present two realisations in which the flavour symmetry contains either the double-valued group $T'$ or the permutation group $S_4$: the spontaneous breaking of the flavour symmetry produces realistic fermion mass hierarchies, the lepton mixing matrix close to the so-called tribimaximal pattern ($\sin^2\theta_{12}=1/3$, $\sin^2\theta_{23}=1/2$ and $\theta_{13}=0$) and the quark mixing matrix comparable to the Wolfenstein parametrisation.
The exact tribimaximal scheme deviates from the experimental best-fit angles by at most about $1\sigma$. In the $T'$- and $S_4$-based models, the symmetry breaking accounts for such discrepancies by introducing corrections to the tribimaximal pattern of the order of $\lambda^2$, $\lambda$ being the Cabibbo angle. On the experimental side, the present measurements do not exclude $\theta_{13} \sim \lambda$ and therefore, if it is found that $\theta_{13}$ is close to its present upper bound, this could be interpreted as an indication that the agreement with the tribimaximal mixing is accidental. Then a scheme where instead the bimaximal mixing ($\sin^2\theta_{12}=1/2$, $\sin^2\theta_{23}=1/2$ and $\theta_{13}=0$) is the correct first approximation, modified by terms of $\mathcal{O}(\lambda)$, could be relevant. This recalls the well-known empirical quark-lepton complementarity, for which $\theta_{12}+\lambda\sim \pi/4$. We present a flavour model based on the spontaneous breaking of the $S_4$ discrete group which naturally leads to the bimaximal mixing at the leading order and, after the introduction of the breaking terms, to $\theta_{13} \sim \lambda$ and $\theta_{12}+\mathcal{O}(\lambda)\sim \pi/4$, which we call the ``weak'' complementarity relation.
Masses and mixings are evaluated at a very high energy scale, while a comparison with experimental measurements requires evolving these observables to low energies; we therefore present a general analysis of their stability under the renormalisation group running.
We also consider the constraints on flavour violating processes that arise from introducing a flavour symmetry: in particular, we concentrate on the lepton sector, analysing some lepton flavour violating decays and the discrepancy between the theoretical prediction and the experimental measurement of the anomalous magnetic moment of the muon. We develop the study both in the Standard Model scenario and in its minimal supersymmetric extension, using first an effective operator approach and then a complete one-loop computation. We find interesting hints on the scale of New Physics and for the forthcoming experimental results from the LHC.
Finally, we discuss the impact of an underlying flavour symmetry on leptogenesis, in order to explain the baryon asymmetry of the universe.
\clearpage{\pagestyle{empty}\cleardoublepage}
\newpage
\addtolength{\topmargin}{-1cm}
\chapter*{Riassunto della Tesi}
\thispagestyle{empty}
Lo studio del sapore nella fisica particellare \`e tutt'oggi un problema aperto sia nel Modello Standard sia nelle sue estensioni supersimmetriche o grande unificate. In questa tesi, affrontiamo la questione dell'origine della gerarchia di massa nei fermioni e dei loro angoli di mescolamento, utilizzando simmetrie discrete di sapore non Abeliane. In particolare, illustriamo due modelli in cui la simmetria di sapore contiene il gruppo $T'$ o il gruppo $S_4$: la rottura spontanea della simmetria di sapore produce come effetto delle gerarchie di massa realistiche per i fermioni, la matrice di mescolamento leptonica con la cosiddetta struttura tribimassimale ($\sin^2\theta_{12}=1/3$, $\sin^2\theta_{23}=1/2$ e $\theta_{13}=0$) con piccole correzioni e la matrice di mescolamento per i quark che ben si confronta con la parametrizzazione di Wolfenstein.
La struttura tribimassimale presenta delle deviazioni al massimo ad $1\sigma$ dai valori centrali trovati sperimentalmente. Nei modelli basati sui gruppi $T'$ e $S_4$, la rottura della simmetria compensa queste piccole deviazioni, introducendo delle correzioni alla struttura tribimassimale dell'ordine di $\lambda^2$, dove $\lambda$ rappresenta l'angolo di Cabibbo. Sperimentalmente, le misure attuali non escludono $\theta_{13}\sim\lambda$ e quindi se il valore dell'angolo di reattore risulter\`a vicino al suo attuale limite superiore, questo potrebbe essere interpretato come un'indicazione che l'accordo con la struttura tribimassimale \`e solo accidentale. In questo caso, la struttura bimassimale ($\sin^2\theta_{12}=1/2$, $\sin^2\theta_{23}=1/2$ e $\theta_{13}=0$) potrebbe essere in prima approssimazione una scelta migliore, corretta poi da termini dell'ordine di $\lambda$. Questo meccanismo ricorda l'osservazione del tutto empirica per cui $\theta_{12}+\lambda\sim \pi/4$, che va sotto il nome di relazione di complementariet\`a. Studiamo questa alternativa in un modello basato sulla rottura spontanea del gruppo $S_4$ che presenta la struttura bimassimale in prima approssimazione e, dopo l'introduzione dei termini di rottura, $\theta_{13}\sim\lambda$ e $\theta_{12}+\mathcal{O}(\lambda)\sim \pi/4$, che chiamiamo complementariet\`a ``debole''.
In questi modelli, le masse e gli angoli di mescolamento sono tipicamente studiati a energie molto alte e per il confronto con le misure sperimentali sviluppiamo uno studio sulla stabilit\`a di questi osservabili durante l'evoluzione a bassa scala dovuta al gruppo di rinormalizzazione.
Inoltre consideriamo i limiti su processi con violazione di sapore che sorgono dall'uso di una simmetria di sapore: in particolare analizziamo alcuni decadimenti con violazione di sapore leptonico e la discrepanza tra la predizione teorica e la misura sperimentale del momento magnetico anomalo del muone. Sviluppiamo l'analisi sia nel Modello Standard sia nella sua estensione supersimmetrica minimale, usando prima un approccio di Lagrangiana efficace e poi uno studio quantistico a un loop. Troviamo interessanti indicazioni sulla scala di energia della nuova fisica, specialmente in previsione dei prossimi risultati a LHC.
Infine discutiamo l'impatto dell'introduzione di una simmetria di sapore sulla leptogenesi, utilizzata per spiegare l'asimmetria barionica dell'universo.
\addtolength{\textheight}{-5cm}
\clearpage{\pagestyle{empty}\cleardoublepage}
\addtolength{\topmargin}{4cm}
\newpage\pagestyle{plain}
\pagenumbering{Roman}
\tableofcontents
\clearpage{\pagestyle{empty}\cleardoublepage}
\newpage
\chapter*{Introduction and Outline}
\addcontentsline{toc}{chapter}{Introduction}
\setcounter{equation}{0}
\setcounter{footnote}{3}
The Standard Model of particle physics is not completely successful in describing nature, and neutrinos are the most outstanding proof of this failure: indeed the solar and atmospheric anomalies find a simple and attractive solution in the oscillations of three massive neutrinos. It is then both interesting and fundamental to understand which theory embeds the Standard Model and at the same time describes neutrino masses and mixings.
While global fits of neutrino oscillation data have pointed out a scenario with two large angles and an approximately vanishing one, looking at the theoretical developments in the neutrino sector of the last few years we cannot feel satisfied: the number of existing models is so large that we can interpret it as the lack of a unique and compelling theoretical picture. Furthermore, several basic questions have not yet been answered: why are neutrinos much lighter than charged fermions? What is the absolute neutrino mass scale? What is the correct neutrino spectrum? Why are the lepton mixing angles so different from those of the quark sector? What is the most probable range for $|U_{e3}|$? Is the leptonic atmospheric angle maximal? What is the nature of the active neutrinos, Dirac or Majorana? Other similar questions, such as those on the number of fermion generations, on the origin of the lepton and quark mass hierarchies and on the nature of CP violation, naturally arise regarding the full flavour sector. The lack of a fundamental understanding of all these problems is referred to as the ``flavour puzzle''.
An interesting approach to search for a solution to the flavour problem consists in extending the gauge group of the Standard Model with an additional symmetry acting only on the fermion generations. In the literature there are many attempts in this direction with a varied choice of symmetry: continuous or discrete, Abelian or non-Abelian, global or local. Since the mixing patterns of leptons and quarks show large differences, it seems reasonable to introduce two different flavour symmetries, one for each sector. A common belief among many physicists, however, is that these apparent differences should be explained within a unified description, and therefore a valuable task would be to use a unique symmetry, able to describe at the same time the small quark mixings and the (two) large leptonic ones. The closeness of the leptonic atmospheric angle $\theta_{23}$ to the maximal value \cite{NeutrinoData,Fogli:Indication,Maltoni:Indication} gives relevant indications on the symmetry: it is well known \cite{LV_Theorem,FeruglioSymBreaking} that a maximal $\theta_{23}$ is not achievable with an exact realistic symmetry. This forces us to study models based on the breaking of the flavour symmetry, and a promising choice is the non-Abelian discrete group $A_4$, the group of even permutations of four objects.
The basic idea of the model is to obtain, to first approximation and in the basis where the charged lepton mass matrix is diagonal, a neutrino mixing matrix of the so-called tribimaximal (TB) pattern \cite{HPS} ($\sin^2\theta^{TB}_{12}=1/3$, $\sin^2\theta^{TB}_{23}=1/2$ and $\sin\theta^{TB}_{13}=0$); as a result the observable lepton mixing matrix also develops the tribimaximal structure, which represents a very good approximation of the experimental measurements; the corrections from the next-to-leading order terms then provide perturbations to the angles and in particular a deviation from zero for the reactor angle, in agreement with the recent indication of a positive value for $\theta_{13}$ \cite{Fogli:Indication}. In a series of papers \cite{AF_Extra,AF_Modular,AFL_Orbifold} on the $A_4$ group, it is shown how to obtain a spontaneous breaking scheme responsible for the tribimaximal mixing, through a convenient assignment of the quantum numbers of the Standard Model particles and the introduction of a suitable set of scalar fields, the ``flavons'', which, by acquiring non-zero vacuum expectation values (VEVs), are responsible for the symmetry breaking. A central aspect of the model building is the symmetry breaking chain: $A_4$ is broken down to two distinct subgroups, which correspond to the low-energy flavour symmetries of the charged leptons and of neutrinos.
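The TB angles quoted above follow directly from the explicit form of the tribimaximal mixing matrix; as a standalone numerical illustration (not part of any model Lagrangian), one can extract the mixing angles from its entries in the standard way:

```python
import numpy as np

# Tribimaximal (TB) mixing matrix in its usual explicit form.
U_TB = np.array([
    [np.sqrt(2.0 / 3.0),  1.0 / np.sqrt(3.0),  0.0],
    [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0), -1.0 / np.sqrt(2.0)],
    [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0),  1.0 / np.sqrt(2.0)],
])

# Sanity check: the TB matrix is unitary (here real, so orthogonal).
assert np.allclose(U_TB @ U_TB.T, np.eye(3))

# Standard extraction of the mixing angles from the matrix elements.
sin_theta13 = abs(U_TB[0, 2])
sin2_theta12 = U_TB[0, 1] ** 2 / (1.0 - sin_theta13 ** 2)
sin2_theta23 = U_TB[1, 2] ** 2 / (1.0 - sin_theta13 ** 2)

print(sin2_theta12, sin2_theta23, sin_theta13)  # 1/3, 1/2 and 0
```

This reproduces exactly $\sin^2\theta_{12}=1/3$, $\sin^2\theta_{23}=1/2$ and $\theta_{13}=0$.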
When extending such an $A_4$-based model to quarks, to obtain a unified description of both sectors, we find that $A_4$ is not as suitable for quarks as it is for leptons: adopting for quarks the same representations as for leptons, the CKM matrix is the identity matrix, but the sub-leading contributions do not provide the right corrections to obtain a realistic quark mixing matrix. Furthermore, it is necessary to keep leptons and quarks separated at least at the leading order, to prevent mutual (possibly dangerous) corrections between the two sectors. A possibility to overcome this problem is to enlarge the symmetry group. We find a promising candidate in $T^\prime$ \cite{FHLM_Tp}, the double covering of $A_4$: this group has three two-dimensional representations more than $A_4$, and the idea is to adopt for leptons the same representations as in \cite{AF_Extra,AF_Modular,AFL_Orbifold} and to use the doublet ones to describe quarks. As a result we manage to keep under control the interference between the two sectors, preserving the results of the $A_4$-based model, and, in addition, we obtain interesting features in the quark sector: the top Yukawa coupling arises from a renormalisable operator, while the other Yukawas come from sub-leading order terms; the vacuum misalignment of the flavons, which justifies the symmetry breaking chain, arises naturally from the minimisation of the scalar potential; and two predictions hold between quark masses and the entries of the CKM matrix,
\begin{equation}
\sqrt{\dfrac{m_d}{m_s}}=|V_{us}|\;,\qquad\qquad\sqrt{\dfrac{m_d}{m_s}}=\left|\dfrac{V_{td}}{V_{ts}}\right|\;,
\end{equation}
where the first expression is the well-known Gatto-Sartori-Tonin relation \cite{GST_Relation}.
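As a quick numerical illustration of the first relation, one can compare $\sqrt{m_d/m_s}$ with $|V_{us}|$ using indicative, roughly PDG-like light quark masses and CKM element (the values below are inputs chosen for this sketch, not outputs of the model):

```python
import math

# Indicative values (MSbar masses at 2 GeV, roughly PDG-like; illustration only).
m_d = 4.7e-3   # GeV
m_s = 95e-3    # GeV
V_us = 0.2243  # |V_us|, essentially the Cabibbo angle lambda

lhs = math.sqrt(m_d / m_s)
print(lhs, V_us)  # both sides come out near 0.22
```

The two sides agree at the level of a few per cent, which is why the Gatto-Sartori-Tonin relation is regarded as phenomenologically successful.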
There is an alternative successful realisation describing leptons and quarks simultaneously: in \cite{BMM_S4,BMM_SS}, we study a model based on the discrete permutation group $S_4$, which contains $A_4$ as a subgroup and has the same number of elements as $T'$, but different representations. This makes it possible to describe neutrinos with a mass matrix different from that of the $A_4$ model, still diagonalised by the tribimaximal mixing. This leads to a completely new neutrino phenomenology: considering only the leading order contributions, it is in principle possible, even if difficult, to distinguish among the different realisations; unfortunately, the introduction of the higher-order corrections makes the predictions overlap in all of the parameter space, apart from very small areas, which will be hard to test in the near future.
All these models indicate a value of the reactor angle very close to zero. However, if the upcoming neutrino-appearance experiments find a value of $\theta_{13}$ close to its present upper bound, of about the Cabibbo angle $\lambda$, the tribimaximal mixing should be considered as accidental. In this case a new leading principle would be necessary. In \cite{AFM_BimaxS4} we use the old idea of the quark-lepton complementarity relation \cite{Complementarity}, $\theta_{12}+\lambda\sim \pi/4$, in order to recover a neutrino mixing in agreement with the data, but with a reactor angle close to its present upper bound. We develop a model based on the $S_4$ discrete group in which the PMNS matrix coincides, in first approximation and in the basis of a diagonal charged lepton mass matrix, with the bimaximal (BM) mixing \cite{BMmixing} ($\sin^2\theta^{BM}_{12}=1/2$, $\sin^2\theta^{BM}_{23}=1/2$ and $\sin\theta^{BM}_{13}=0$); since the BM value of the solar angle exceeds the $3\sigma$ error, large corrections are needed to make the model agree with the data; we naturally constrain the perturbations to obtain the ``weak'' complementarity relation, $\theta_{12}+\mathcal{O}(\lambda)\sim \pi/4$, and $\sin\theta_{13}\sim\lambda$ in most of the parameter space. In this model we only deal with the lepton sector; in order to include a realistic description of quarks, we investigate a Pati-Salam grand unified model \cite{ABM_PSS4} in which we recover the weak complementarity relation and a value of the reactor angle close to $\lambda$. Furthermore, we analyse the Higgs scalar potential, providing a natural description of the gauge symmetry breaking steps.\\
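The arithmetic behind the complementarity relation is simple; the following sketch uses illustrative inputs ($\sin^2\theta_{12}\approx 0.31$ and $\lambda\approx 0.2257$ are assumptions of this example, not model outputs) to show that the solar angle plus the Cabibbo angle lands within a couple of degrees of $\pi/4$:

```python
import math

sin2_theta12 = 0.31   # illustrative solar-angle best fit
lam = 0.2257          # Cabibbo angle in radians (sin(lam) ~ lam)

theta12 = math.asin(math.sqrt(sin2_theta12))  # solar angle in radians
total = theta12 + lam
print(math.degrees(theta12), math.degrees(total))  # ~33.8 deg and ~46.8 deg
```

The residual deviation from $45^\circ$ is of order $\lambda^2$, which is precisely why a ``weak'' version of the relation, $\theta_{12}+\mathcal{O}(\lambda)\sim\pi/4$, is the natural target of a model.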
In all of the flavour models listed above, mass matrices and mixings are evaluated at a very high energy scale. On the other hand, for a comparison with the experimental results, it is important to evolve the observables to low energies through the renormalisation group running. In general, the deviations from the high-energy values due to this running are minor corrections, but future improvements of neutrino experiments could hopefully bring the precision down to these small quantities. For this reason we discuss \cite{LMP_RGE} the effects of the renormalisation group running on the lepton sector when masses and mixings are the result of an underlying flavour symmetry.\\
Once we consider the predictions of the models based on $A_4$, $T'$ and $S_4$, they can all fit the experimental data. However, comparing the phenomenological results of the various models, we cannot see a clear distinction among them. In order to find new ways to characterise each model, it would be highly desirable to investigate other types of observables, not directly related to neutrino properties. In \cite{FHLM_Efficace} we use an effective operator approach to discuss the Lagrangian of the model: it is a very useful tool because it does not require knowledge of the particle spectrum above the electroweak energy scale. The simplest scenario consists in the presence of two thresholds: a first, very large one, $M_{GUT}$, where right-handed neutrinos, superheavy gauge bosons and superheavy scalar fields can live, and where grand unified theories (GUTs) and flavour symmetries find their natural setting; and a second, very low one, the electroweak scale, at which particle masses and mixing angles have their measured values. While neutrino masses and mixing angles can be interpreted as the low-energy result of a larger and more complete theory at $M_{GUT}$, it is very difficult to extract from them information about the fundamental theory. We need to find some new observables which are not directly related to neutrino properties. A possibility is to introduce an intermediate energy scale, $M$, at about $1-10$ TeV: this corresponds to the presence of some kind of new physics, which we do not specify, at this scale. Other indications supporting this choice come, for example, from the discrepancy in the anomalous magnetic moment of the muon, the presence of Dark Matter, the convergence of the gauge coupling constants to a unique value and the solution to the hierarchy problem, all of which would benefit from the presence of new physics at $1-10$ TeV.
Studying the effective Lagrangian of the model, we can point out the presence of a unique five-dimensional operator, which is responsible for the neutrino masses, and of many six-dimensional operators, which represent the new observables we are interested in: electric dipole moments $d_i$, magnetic dipole moments $a_i$ and lepton flavour violating transitions such as $\mu\to e\gamma$, $\tau\to\mu\gamma$ and $\tau\to e\gamma$. A first distinctive feature of the model is that it predicts equal branching ratios for all the previous decays; as a first result, given the present MEGA bound, the $\tau$ decays lie below the future expected sensitivities. Afterwards, constraining the operators with the experimental values or bounds of the corresponding observables, we obtain interesting bounds on the scale $M$: while the discrepancy in $a_\mu$ indicates a value of about $3$ TeV, very interesting for the LHC, $d_e$ and $BR(\mu\to e\gamma)$ push it up to $10$ TeV in the best case. Other very stringent bounds come from the $4$-fermion operators, which fix the lower bound on $M$ at about $15$ TeV. For this reason we conclude that these values are probably above the region of interest for explaining the discrepancy in $a_{\mu}$ and for the LHC.
Subsequently, we specify the kind of new physics that could be present at the scale $M$ and we study a supersymmetric version of the effective model. The results are very attractive due to a cancellation in the right-left block of the charged slepton mass matrix: the indication from the discrepancy in $a_\mu$ remains the same (in a low $\tan\beta$ regime), but the bound from $BR(\mu\to e\gamma)$ is softened and the final results indicate values of $M$ at a few TeV, which allows us to explain the discrepancy in $a_\mu$ and leaves room for a possible positive signal for $\mu\to e\gamma$ at MEG. Finally, the model indicates an upper bound for $\theta_{13}$ of a few degrees, which is close to the future expected sensitivity.
In \cite{FHLM_LFV,FHM_VEV}, we move from the effective approach to a full supersymmetric scenario. In this way we have a stricter control on the contributions to the observables discussed above and we can investigate the supersymmetric particle spectrum. Through a detailed calculation of the slepton mass matrices in the physical basis, and evaluating the branching ratios of the mentioned lepton flavour violating decays in the mass insertion approximation, we find that their behaviour, expected from the supersymmetric variant of the effective Lagrangian approach, is violated by a single, flavour-independent contribution to the right-left block of the slepton mass matrix, associated to the sector necessary to maintain the correct breaking of the flavour symmetry. We also enumerate the conditions under which such a contribution is absent and the original behaviour is recovered, though we could not find a dynamical explanation justifying the realisation of these conditions in our model.
Concerning the agreement of our results with the experimental measurements and bounds, assuming a supergravity framework with a common mass scale $m_{SUSY}$ for soft sfermion and Higgs masses and a common mass $m_{1/2}$ for gauginos at high energies, we numerically study the normalised branching ratios of $\ell_i\to \ell_j\gamma$ using full one-loop expressions and explore the parameter space of the model. We find that the branching ratios for $\mu\to e \gamma$, $\tau\to \mu\gamma$ and $\tau\to e\gamma$ are all of the same order of magnitude. Therefore, applying the present MEGA bound on $BR(\mu\to e \gamma)$, $\tau\to \mu\gamma$ and $\tau\to e\gamma$ have rates much smaller than the present (and near-future) sensitivity. Moreover, still considering the MEGA limit, we find that small values of the symmetry breaking terms and of $\tan\beta$ are favoured for $m_{SUSY}$ and $m_{1/2}$ below $1000$ GeV, i.e. in the range of a possible detection of sparticles at the LHC. Furthermore, it turns out to be rather unnatural to reconcile the values of the superparticle masses necessary to account for the measured deviation of the muon anomalous magnetic moment from the Standard Model value with the present bound on the branching ratio of $\mu\to e\gamma$. In our model, values of such a deviation smaller than $100 \times 10^{-11}$ are favoured.\\
In the last few years there has been great interest in studying the link between leptogenesis, as an explanation of the baryon asymmetry of the universe, and flavour symmetries: the See-Saw mechanisms explain the smallness of the light neutrino masses, but a flavour symmetry is needed in order to predict the mixing angles; type I is the best known mechanism, and in this case the symmetry fixes the spectrum of the heavy right-handed neutrinos and the flavour structure of the Dirac and Majorana mass matrices. While in a general context there is no relationship between the low-energy parameters and $\epsilon$, the CP asymmetry from the heavy right-handed neutrino decays entering the definition of the baryon asymmetry, adding a flavour symmetry may make it possible to recover such a connection.
In \cite{ABMMM_Lepto} we provide a strict link between the nature of the PMNS mixing matrix and $\epsilon$: we find a model-independent argument showing that when the neutrino mixing matrix is mass-independent (i.e. independent of any mass parameter), then $\epsilon$ vanishes. This result applies to all the flavour models which present, in the limit of the exact symmetry, the tribimaximal pattern, as well as the bimaximal and the golden-ratio schemes and some cases of the trimaximal one.
When the symmetry is broken, some corrections are introduced and, in some special cases, when the number of new parameters is sufficiently small, it is possible to express $\epsilon$ as a function of some low-energy observables.\\
The thesis is structured as follows. In chapter \ref{Sec:Overview} we first fix the notation and briefly review the main explanations for the light neutrino masses; we then discuss fermion masses and mixings in the physical basis, reporting their experimental determinations. Chapter \ref{Sec:FlavourPuzzle} is devoted to the flavour problem and summarises some well-known approaches to explain the observed data, such as M(L)FV, texture zeros, mass-independent textures and flavour symmetries, focussing on discrete non-Abelian symmetry groups. In chapter \ref{Sec:FlavourModelsTBM} we deal with three different flavour models in which the lepton mixing matrix presents the tribimaximal structure in first approximation: the first one, which accounts only for the lepton sector, is the well-known Altarelli-Feruglio model based on the $A_4$ group, whose main features will be recalled; the other two, based on the groups $T'$ and $S_4$, represent possible alternatives to the Altarelli-Feruglio model in which the quark sector is also studied. In chapter \ref{Sec:FlavourModelsBM} we illustrate a flavour model based on the group $S_4$, in which the PMNS matrix corresponds to the bimaximal pattern in first approximation; considering the symmetry breaking corrections, we find the weak complementarity relation and a reactor angle close to the present upper bound. Chapter \ref{Sec:Running} deals with the stability of lepton masses and mixings under the evolution from high to low energies through the renormalisation group running. In the last two chapters, \ref{Sec:FlavourViolation} and \ref{Sec:Leptogenesis}, we study the impact of an underlying flavour symmetry on flavour violating processes and on leptogenesis, respectively.
In particular, in chapter \ref{Sec:FlavourViolation} we focus on flavour models based on the group $A_4$, analysing their predictions for some rare decays, such as $\mu\to e\gamma$, $\tau\to e\gamma$ and $\tau\to\mu\gamma$, and the possibility to explain the discrepancy between the Standard Model prediction and the experimental measurement of the anomalous magnetic moment of the muon through the presence of new physics at $1-10$ TeV. Furthermore, we investigate the particle spectrum in the case of supersymmetric new physics. In chapter \ref{Sec:Leptogenesis} we present an argument for which $\epsilon$, the CP-violating parameter relevant for leptogenesis, vanishes when the leptonic mixing matrix corresponds to a mass-independent texture in the exact symmetry phase. Finally, in chapter \ref{Sec:Conclusions} we conclude, and in the appendices we report details and useful tools.
\clearpage{\pagestyle{empty}\cleardoublepage}
\newpage \pagestyle{fancy}
\setlength{\headheight}{15pt}
\addtolength{\headwidth}{1cm}
\renewcommand{\headrulewidth}{0.4pt}
\textwidth 16.2 cm
\textheight 24 cm
\topmargin -0.3 cm
\renewcommand{\chaptermark}[1]{\markboth{{\sc\chaptername\ \thechapter.\ #1}}{}}
\renewcommand{\sectionmark}[1]{\markright{{\sc\thesection\ #1}}{}}
\cfoot{}
\rhead[\fancyplain{}{\bfseries\leftmark}]{\fancyplain{}{\bfseries\thepage}}
\lhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries\rightmark}}
\renewcommand{\theequation}{\arabic{chapter}.\arabic{equation}}
\def\thefootnote{\fnsymbol{footnote}}
\pagenumbering{arabic}\setcounter{page}{1}
\input{TesiDottorato_Chapter1.tex}
\input{TesiDottorato_Chapter2.tex}
\input{TesiDottorato_Chapter3.tex}
\input{TesiDottorato_Chapter4.tex}
\input{TesiDottorato_Chapter5.tex}
\input{TesiDottorato_Chapter6.tex}
\input{TesiDottorato_Chapter7.tex}
\clearpage{\pagestyle{empty}\cleardoublepage}
\newpage
\chapter{Summary and Final Remarks}
\label{Sec:Conclusions}
\setcounter{equation}{0}
\setcounter{footnote}{3}
Here we present a very brief concluding summary, while detailed remarks can be found at the end of each chapter.
The thesis collects the results of a series of projects which investigate the use of flavour symmetries, in particular discrete and non-Abelian ones, to account for the flavour problem. The thesis itself can be seen as a unique subject developed in several directions: we first deal with model building and subsequently we analyse the impact of the underlying flavour symmetries on flavour violation and on leptogenesis.
In the first part, we presented a series of flavour models which produce phenomenologically successful patterns for fermion masses and mixings: while in the quark sector the mixing matrix shows a Wolfenstein-type structure in all the realisations in which quarks are considered, in the lepton sector we focussed on two distinct textures, the tribimaximal and the bimaximal schemes. They answer two different requirements: in the tribimaximal models the reactor angle is almost vanishing, while in the bimaximal realisations it can reach values close to its present upper bound. The future experiments on $\nu_e$ appearance will be able to discriminate between these two proposals.
In the second part of the thesis, we focused on flavour models based on the group $A_4$, analysing their predictions for some rare decays in the lepton sector, such as $\mu\to e\gamma$, $\tau\to e\gamma$ and $\tau\to\mu\gamma$, and the possibility to explain the discrepancy between the Standard Model prediction and the experimental measurement of the anomalous magnetic moment of the muon through the presence of new physics at $1-10$ TeV. We first adopted an effective operator approach \`a la MLFV, in which these observables are described by six-dimensional operators invariant under the flavour symmetry; thereafter we identified the new physics with Supersymmetry and introduced a complete set of Supersymmetry breaking terms consistent with the flavour symmetry. In this second study, we found that the present and future experimental bounds on $\mu\to e\gamma$ represent strong constraints on the models, forbidding parts of the parameter space, but they do not exclude possible observations of supersymmetric particles at the LHC. Furthermore, the normalised branching ratios for $\mu\to e\gamma$, $\tau\to e\gamma$ and $\tau\to\mu\gamma$ are found to be of the same order of magnitude and, given the present limit on $\mu\to e\gamma$, we can conclude that the $\tau$ decays are practically unobservable. Regarding the anomalous magnetic moment of the muon, the deviation of the experimentally observed value from the Standard Model prediction cannot be naturally explained in our framework for $BR(\mu\to e\gamma)$ below the current bound: indeed the maximal value of such a deviation is around $100 \times 10^{-11}$ for $BR(\mu\to e\gamma) \lesssim 10^{-11}$.
Finally, in chapter \ref{Sec:Leptogenesis} we presented an argument for which $\epsilon$, the CP-violating parameter relevant for leptogenesis, vanishes when the leptonic mixing matrix develops a mass-independent texture in the exact symmetry phase. Only by allowing symmetry breaking contributions does $\epsilon$ receive non-vanishing corrections, so that leptogenesis represents a viable explanation of the baryon asymmetry of the universe.
\clearpage{\pagestyle{empty}\cleardoublepage}
\newpage
\chapter*{Acknowledgments}
\addcontentsline{toc}{chapter}{Acknowledgments}
\thispagestyle{empty}
{\it I am deeply grateful to Ferruccio Feruglio for his teachings, clear explanations, encouragements and for having transmitted to me his professional and genuine enthusiasm in doing physics over these years. Together with Ferruccio, I warmly thank Guido Altarelli for his stimulating collaboration and formative hospitality at CERN.}\\
{\it Then I would like to remember Reinier de Adelhart Toorop, Diego Aristizabal Sierra, Federica Bazzocchi, Claudia Hagedorn, Yin Lin, Ivo de Medeiros Varzielas, Stefano Morisi and Alessio Paris for many fruitful collaborations.}\\
{\it I also thank all the members of the theory group for suggestions and advice: in particular, many thanks to Gianguido Dall'Agata, Massimo Passera, Stefano Rigolin and Fabio Zwirner.}\\
{\it Finally I warmly thank the other Ph.D. students and my office mates, in particular Alessandra Albano, Alessandra Cagnazzo, Francesca Catino, Angela Fava, Alessandra Gnecchi, Elena Moretti and Mia Tosi.}\\
\clearpage{\pagestyle{empty}\cleardoublepage}
\newpage
\renewcommand{\chaptermark}[1]{\markboth{{\sc Appendix\ \thechapter.\ #1}}{}}
\chapter{Group Theory Details}
\label{AppendixA}
\lhead[\fancyplain{}{\bfseries\leftmark}]{\fancyplain{}{\bfseries\thepage}}
\rhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries\rightmark}}
\renewcommand{\theequation}{A.\;\arabic{equation}}
\setcounter{equation}{0}
\setcounter{footnote}{3}
\setcounter{chapter}{1}
\setcounter{section}{0}
In this appendix we report the character tables and the Clebsch-Gordan coefficients of the discrete groups $A_4$, $T'$ and $S_4$. Notice that two distinct descriptions are reported for the group $S_4$: they correspond to two distinct but equivalent choices of the generators, which are convenient in the model building.
In the character tables, $C_{i}$ are the classes of the group, $^{\circ} C_{i}$ is the order of the $i^{\mathrm{th}}$ class, i.e. the number of distinct elements contained in it, and $^{\circ} h_{C_{i}}$ is the order of the elements $A$ in the class $C_{i}$, i.e. the smallest integer ($>0$) for which the equation $A ^{^{\circ} h_{C_{i}}}= \mathbb{1}$ holds. Furthermore, the tables contain one representative $\rm G$ for each class $C_{i}$, given as a product of the generators $S$ and $T$ of the group.
\mathversion{bold}
\section{The Group $A_4$}
\label{AppA:A4}
\setcounter{footnote}{3}
\mathversion{normal}
\begin{table}[ht]
\begin{center}
\begin{tabular}{l|cccc|}
&\multicolumn{4}{|c|}{classes} \\
\cline{2-5}
& $C_{1}$ & $C_{2}$ & $C_{3}$ & $C_{4}$ \\
\cline{1-5}
\rule[0.15in]{0cm}{0cm} $\rm G$ &$\rm \mathbb{1}$ & $S$ & $T^2$ & $T$ \\
\cline{1-5}
$^{\circ} C_{i}$ & 1 & 3 & 4 & 4 \\
\cline{1-5}
$^{\circ} h_{C_{i}}$ & 1 & 2 & 3 & 3 \\
\hline
${\bf1}$ & 1 & 1 & 1 & 1 \\[3pt]
${\bf1}'$ & 1 & 1 & $\omega$ & $\omega^2$ \\[3pt]
${\bf1}''$ & 1 & 1 & $\omega^2$ & $\omega$ \\[3pt]
${\bf3}$ & 3 & -1 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{\it Character table of the group $A_4$. $\omega$ is a cube root of unity, i.e. $\omega= e^{\frac{2 \pi i}{3}} = -\frac{1}{2} + i \frac{\sqrt{3}}{2}$.}
\label{AppA:table:A4chartab}
\end{table}
The group $A_4$ is generated by two elements $S$ and $T$ obeying the relations~\cite{GroupRepresentations}:
\begin{equation}
S^2=(ST)^3=T^3=1\;.
\end{equation}
It has three independent one-dimensional representations, $\bf1$, $\bf1'$ and $\bf1''$ and one three-dimensional representation $\bf3$.
The one-dimensional representations are given by:
\begin{equation}
\begin{array}{lll}
{\bf1} & S=1 & T=1 \\[3mm]
{\bf1'} & S=1 & T=e^{i 4 \pi/3} \equiv \omega^2\\[3mm]
{\bf1''}& S=1 & T=e^{i 2\pi/3} \equiv\omega\\[3mm]
\end{array}
\end{equation}
The three-dimensional representation, in a basis where the generator $T$ is diagonal, is given by:
\begin{equation}
T=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \omega^2 & 0 \\
0 & 0 & \omega \\
\end{array}
\right),\qquad\qquad
S=\dfrac{1}{3}
\left(
\begin{array}{ccc}
-1 & 2 & 2 \\
2 & -1 & 2 \\
2 & 2 & -1 \\
\end{array}
\right)\;.
\label{ST}
\end{equation}
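As a quick cross-check of eq. (\ref{ST}), one can verify numerically that these matrices indeed satisfy the defining relations of $A_4$. A minimal sketch in Python with NumPy (purely illustrative, not part of the model):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # omega = e^{2 pi i / 3}, a cube root of unity

# A4 triplet generators in the basis where T is diagonal
T = np.diag([1, w**2, w])
S = np.array([[-1, 2, 2],
              [2, -1, 2],
              [2, 2, -1]]) / 3

I3 = np.eye(3)
# defining relations of A4: S^2 = (S T)^3 = T^3 = 1
checks = {
    "S^2 = 1":    np.allclose(S @ S, I3),
    "(ST)^3 = 1": np.allclose(np.linalg.matrix_power(S @ T, 3), I3),
    "T^3 = 1":    np.allclose(np.linalg.matrix_power(T, 3), I3),
}
```

All three checks succeed, confirming that the explicit matrices furnish a representation of the abstract presentation.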
We now report the multiplication rules between the various representations. In the following we use $\alpha=(\alpha_1,\,\alpha_2,\,\alpha_3)$ to indicate the elements of the first representation of the product and $\beta=(\beta_1,\,\beta_2,\,\beta_3)$ to indicate those of the second one. Moreover $a,b=0,\pm1$ and we denote ${\bf1}^0\equiv{\bf1}$, ${\bf1}^1\equiv{\bf1}^\prime$, ${\bf1}^{-1}\equiv{\bf1}^{\prime\prime}$, and similarly for the doublet representations of $T'$ below. On the right-hand side the sum $a+b$ is understood modulo 3.
We start with all the multiplication rules which include the one-dimensional representations:
\begin{equation}
\begin{array}{l}
{\bf1}\times {\bf r}={\bf r}\times{\bf1}={\bf r}\qquad\text{with ${\bf r}$ any representation}\;,\\[3mm]
{\bf1}^a\times{\bf1}^b={\bf1}^b\times{\bf1}^a={\bf1}^{a+b}\sim\alpha\beta\;,\\[3mm]
{\bf1}^\prime\times{\bf3}={\bf3}\sim\left(\begin{array}{c}
\alpha\beta_3 \\
\alpha\beta_1 \\
\alpha\beta_2\\
\end{array}\right)\;,\qquad
{\bf1}^{\prime\prime}\times{\bf3}={\bf3}\sim\left(\begin{array}{c}
\alpha\beta_2 \\
\alpha\beta_3 \\
\alpha\beta_1\\
\end{array}\right)\;.
\end{array}
\end{equation}
The multiplication rule with the three-dimensional representation is
\begin{equation}
\begin{array}{l}
{\bf3}\times{\bf3}={\bf3}_S+{\bf3}_A+{\bf1}+{\bf1}'+{\bf1}''\quad\text{with}\quad\!\left\{
\begin{array}{l}
{\bf1}\;\,\sim\alpha_1\beta_1+\alpha_2\beta_3+\alpha_3\beta_2\;,\\[3mm]
{\bf1}'\;\sim\alpha_3\beta_3+\alpha_1\beta_2+\alpha_2\beta_1\;,\\[3mm]
{\bf1}''\,\sim\alpha_2\beta_2+\alpha_1\beta_3+\alpha_3\beta_1\;,\\[3mm]
{\bf3}_S\sim\dfrac{1}{3}\left(\begin{array}{c}
2\alpha_1\beta_1-\alpha_2\beta_3-\alpha_3\beta_2\\
2\alpha_3\beta_3-\alpha_1\beta_2-\alpha_2\beta_1\\
2\alpha_2\beta_2-\alpha_1\beta_3-\alpha_3\beta_1\\
\end{array}
\right)\\[3mm]
{\bf3}_A\sim\dfrac{1}{2}\left(\begin{array}{c}
\alpha_2\beta_3-\alpha_3\beta_2\\
\alpha_1\beta_2-\alpha_2\beta_1\\
\alpha_3\beta_1-\alpha_1\beta_3\\
\end{array}\right)
\end{array}\right.
\end{array}
\end{equation}
Note that, due to the choice of complex representation matrices for the real representation ${\bf3}$, the conjugate $\alpha^*$ of $\alpha \sim {\bf3}$ does not transform as ${\bf3}$; rather $(\alpha_1^*,\, \alpha_3^*,\, \alpha_2^*)$ transforms as a triplet under $A_4$. The reason is that $T^*= U_{23}^T\,T\,U_{23}$ and $S^*=U_{23}^T\,S\,U_{23}=S$, where $U_{23}$ is the matrix which exchanges the second and third rows and columns.
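The conjugation property quoted above can be checked explicitly; a short numerical verification (illustrative only):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
T = np.diag([1, w**2, w])
S = np.array([[-1, 2, 2], [2, -1, 2], [2, 2, -1]]) / 3

# U_23 exchanges the second and third rows and columns
U23 = np.array([[1, 0, 0],
                [0, 0, 1],
                [0, 1, 0]])

# complex conjugation acts on the triplet generators as conjugation by U_23
conj_T_ok = np.allclose(np.conj(T), U23.T @ T @ U23)
# S is real and symmetric under the 2 <-> 3 exchange
conj_S_ok = np.allclose(np.conj(S), S) and np.allclose(U23.T @ S @ U23, S)
```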
\mathversion{bold}
\section{The Group $T'$}
\label{AppA:Tp}
\setcounter{footnote}{3}
\mathversion{normal}
\begin{table}[ht]
\begin{center}
\begin{tabular}{l|ccccccc|}
&\multicolumn{7}{|c|}{classes}\\
\cline{2-8}
&$C_{1}$&$C_{2}$&$C_{3}$&$C_{4}$&$C_{5}$&$C_{6}$&$C_{7}$\\
\cline{1-8}
\rule[0.15in]{0cm}{0cm} $\rm G$ &$\rm \mathbb{1}$&$\rm \mathbb{R}$&$S$&$S T \mathbb{R}$&$T^{2}$&$T$&$(S T)^{2} \mathbb{R}$\\
\cline{1-8}
$^{\circ} C_{i}$ & 1 & 1 & 6 & 4 & 4 & 4 & 4 \\
\cline{1-8}
$^{\circ} h_{C_{i}}$ & 1 & 2 & 4 & 6 & 3 & 3 & 6 \\
\hline
$\bf1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
$\bf1'$ & 1 & 1 & 1 & $\omega$ & $\omega^{2}$ & $\omega$ & $\omega^{2}$ \\
$\bf1''$ & 1 & 1 & 1 & $\omega ^{2}$ & $\omega$ & $\omega ^{2}$ & $\omega$ \\
$\bf2$ & 2 & $-2$ & 0 & 1 & $-1$ & $-1$ & 1 \\
$\bf2'$ & 2 & $-2$ & 0 & $\omega$ & $-\omega^{2}$ & $-\omega$ & $\omega ^{2}$ \\
$\bf2''$ & 2 & $-2$ & 0 & $\omega ^{2} $ & $-\omega$ & $-\omega^{2}$ & $\omega$ \\
$\bf3$ & 3 & 3 & $-1$ & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{\it Character table of the group $T^{\prime}$. $\omega$ is a cube root of unity, i.e. $\omega= e^{\frac{2 \pi i}{3}} = -\frac{1}{2} + i \frac{\sqrt{3}}{2}$.}
\label{AppA:table:Tpchartab}
\end{table}
The matrices $S$ and $T$ representing the generators depend on the representation of the group:
\begin{equation}
\begin{array}{ccccc}
{\bf1} &\qquad& S=1 &\quad& T=1\\[3mm]
{\bf1}^\prime &\qquad& S=1 &\quad& T=\omega\\[3mm]
{\bf1}^{\prime\prime} &\qquad& S=1 &\quad& T=\omega^2\\[3mm]
{\bf2} &\qquad& S=A_1 &\quad& T=\omega A_2\\[3mm]
{\bf2}^\prime &\qquad& S=A_1 &\quad& T=\omega^2 A_2\\[3mm]
{\bf2}^{\prime\prime} &\qquad& S=A_1 &\quad& T=A_2\\[3mm]
{\bf3} &\qquad& S=\dfrac{1}{3}\left(\begin{array}{ccc}
-1 & 2\omega & 2\omega^2 \\
2\omega^2 & -1 & 2\omega \\
2\omega & 2\omega^2 & -1 \\
\end{array}\right)
&\quad& T=\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & \omega & 0 \\
0 & 0 & \omega^2 \\
\end{array}\right)
\end{array}
\end{equation}
where we have used the matrices
\begin{equation}
A_1=-\dfrac{1}{\sqrt{3}}\left(\begin{array}{cc}
i & \sqrt2e^{i\pi/12} \\
-\sqrt2e^{-i\pi/12} & -i \\
\end{array}\right)\;,\qquad
A_2=\left(\begin{array}{cc}
\omega & 0 \\
0 & 1 \\
\end{array}\right)\;.
\end{equation}
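As a consistency check of the matrices above, one can verify that on the doublets $S^2=-\mathbb{1}$ (so that $S$ has order $4$, in agreement with $^{\circ}h_{C_{3}}=4$, with $\mathbb{R}=-\mathbb{1}$ on the doublets), while on the triplet $S^2=\mathbb{1}$, and $T^3=\mathbb{1}$ in both cases. A minimal numerical sketch (illustrative only):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

# T' doublet generators for the representation 2: S = A1, T = omega * A2
A1 = -np.array([[1j, np.sqrt(2) * np.exp(1j * np.pi / 12)],
                [-np.sqrt(2) * np.exp(-1j * np.pi / 12), -1j]]) / np.sqrt(3)
A2 = np.diag([w, 1])
S2, T2 = A1, w * A2

# T' triplet generators
S3 = np.array([[-1, 2 * w, 2 * w**2],
               [2 * w**2, -1, 2 * w],
               [2 * w, 2 * w**2, -1]]) / 3
T3 = np.diag([1, w, w**2])

# S^2 = R = -1 on the doublet, R = +1 on the triplet; T^3 = 1 in both cases
doublet_ok = (np.allclose(S2 @ S2, -np.eye(2))
              and np.allclose(np.linalg.matrix_power(T2, 3), np.eye(2)))
triplet_ok = (np.allclose(S3 @ S3, np.eye(3))
              and np.allclose(np.linalg.matrix_power(T3, 3), np.eye(3)))
```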
The multiplication rules involving the one- and three-dimensional representations coincide with those of the $A_4$ group; here we report only the rules involving the two-dimensional representations:
\begin{eqnarray}
{\bf1}^a\times{\bf2}^b={\bf2}^b\times{\bf1}^a={\bf2}^{a+b}&&\hspace{-5mm}\sim\left(\begin{array}{c}
\alpha\beta_1 \\
\alpha\beta_2 \\
\end{array}\right)\\[3mm]
{\bf2}\times{\bf2}={\bf2}^\prime\times{\bf2}^{\prime\prime}={\bf2}^{\prime\prime}\times{\bf2}^\prime={\bf3}+{\bf1}&&
\text{with}\quad\left\{\begin{array}{ll}
{\bf3}\sim\left(\begin{array}{c}
\dfrac{1-i}{2}(\alpha_1\beta_2+\alpha_2\beta_1) \\
i\alpha_1\beta_1 \\
\alpha_2\beta_2 \\
\end{array}\right)\\[3mm]
{\bf1}\sim\alpha_1\beta_2-\alpha_2\beta_1\\
\end{array}
\right.\\[3mm]
{\bf2}\times{\bf2}^\prime={\bf2}^{\prime\prime}\times{\bf2}^{\prime\prime}={\bf3}+{\bf1}^\prime&&
\text{with}\quad\left\{\begin{array}{ll}
{\bf3}\sim\left(\begin{array}{c}
\alpha_2\beta_2 \\
\dfrac{1-i}{2}(\alpha_1\beta_2+\alpha_2\beta_1) \\
i\alpha_1\beta_1 \\
\end{array}\right)\\[3mm]
{\bf1}^\prime\sim\alpha_1\beta_2-\alpha_2\beta_1
\end{array}
\right.\\[3mm]
{\bf2}\times{\bf2}^{\prime\prime}={\bf2}^\prime\times{\bf2}^\prime={\bf3}+{\bf1}^{\prime\prime}&&
\text{with}\quad\left\{\begin{array}{ll}
{\bf3}\sim\left(\begin{array}{c}
i\alpha_1\beta_1 \\
\alpha_2\beta_2\\
\dfrac{1-i}{2}(\alpha_1\beta_2+\alpha_2\beta_1) \\
\end{array}\right)\\[3mm]
{\bf1}''\sim\alpha_1\beta_2-\alpha_2\beta_1
\end{array}
\right.\\[3mm]
{\bf2}\times{\bf3}={\bf2}+{\bf2}^\prime+{\bf2}^{\prime\prime}&&
\text{with}\quad\left\{\begin{array}{ll}
{\bf2}\;\sim\;\left(\begin{array}{c}
(1+i)\alpha_2\beta_2+\alpha_1\beta_1 \\
(1-i)\alpha_1\beta_3-\alpha_2\beta_1 \\
\end{array}\right)\\[3mm]
{\bf2}'\sim\;\left(\begin{array}{c}
(1+i)\alpha_2\beta_3+\alpha_1\beta_2 \\
(1-i)\alpha_1\beta_1-\alpha_2\beta_2 \\
\end{array}\right)\\[3mm]
{\bf2}''\sim\left(\begin{array}{c}
(1+i)\alpha_2\beta_1+\alpha_1\beta_3 \\
(1-i)\alpha_1\beta_2-\alpha_2\beta_3 \\
\end{array}\right)
\end{array}
\right.\\[3mm]
{\bf2}^\prime\times{\bf3}={\bf2}+{\bf2}^\prime+{\bf2}^{\prime\prime}&&
\text{with}\quad\left\{\begin{array}{ll}
{\bf2}\;\,\sim\left(\begin{array}{c}
(1+i)\alpha_2\beta_1+\alpha_1\beta_3 \\
(1-i)\alpha_1\beta_2-\alpha_2\beta_3 \\
\end{array}\right)\\[3mm]
{\bf2}^\prime\,\sim\left(\begin{array}{c}
(1+i)\alpha_2\beta_2+\alpha_1\beta_1 \\
(1-i)\alpha_1\beta_3-\alpha_2\beta_1 \\
\end{array}\right)\\[3mm]
{\bf2}^{\prime\prime}\sim\left(\begin{array}{c}
(1+i)\alpha_2\beta_3+\alpha_1\beta_2 \\
(1-i)\alpha_1\beta_1-\alpha_2\beta_2 \\
\end{array}\right)
\end{array}
\right.\\[3mm]
{\bf2}^{\prime\prime}\times{\bf3}={\bf2}+{\bf2}^\prime+{\bf2}^{\prime\prime}&&
\text{with}\quad\left\{\begin{array}{ll}
{\bf2}\;\,\sim\left(\begin{array}{c}
(1+i)\alpha_2\beta_3+\alpha_1\beta_2 \\
(1-i)\alpha_1\beta_1-\alpha_2\beta_2 \\
\end{array}\right)\\[3mm]
{\bf2}^\prime\,\sim\left(\begin{array}{c}
(1+i)\alpha_2\beta_1+\alpha_1\beta_3 \\
(1-i)\alpha_1\beta_2-\alpha_2\beta_3 \\
\end{array}\right)\\[3mm]
{\bf2}^{\prime\prime}\sim\left(\begin{array}{c}
(1+i)\alpha_2\beta_2+\alpha_1\beta_1 \\
(1-i)\alpha_1\beta_3-\alpha_2\beta_1 \\
\end{array}\right)
\end{array}
\right.
\end{eqnarray}
\mathversion{bold}
\section[The Group $S_4$ -- I Version]{The Group $\mathbf{S_4}$ -- I Version}
\label{AppA:S4}
\setcounter{footnote}{3}
\mathversion{normal}
\begin{table}[ht]
\begin{center}
\begin{tabular}{l|ccccc|}
&\multicolumn{5}{|c|}{classes}\\
\cline{2-6}
& $C_{1}$ & $C_{2}$ & $C_{3}$ & $C_{4}$ & $C_{5}$ \\
\cline{1-6}
\rule[0.15in]{0cm}{0cm} $\rm G$ &$\rm \mathbb{1}$& $S^2$ & $T$ & $ST^{2}$ & $S$ \\
\cline{1-6}
$^{\circ} C_{i}$ & 1 & 3 & 8 & 6 & 6 \\
\cline{1-6}
$^{\circ} h_{C_{i}}$ & 1 & 2 & 3 & 2 & 4 \\
\hline
$\bf1$ & 1 & 1 & 1 & 1 & 1 \\[3pt]
$\bf1'$ & 1 & 1 & 1 & $-1$ & $-1$ \\[3pt]
$\bf2$ & 2 & 2 & $-1$ & 0 & 0 \\[3pt]
$\bf3$ & 3 & $-1$ & 0 & 1 & $-1$ \\[3pt]
$\bf3'$ & 3 & $-1$ & 0 & $-1$ & 1 \\
\hline
\end{tabular}
\end{center}
\caption{\it Character table of the group $S_4$ -- I version.}
\label{AppA:table:S4chartabI}
\end{table}
The generators $S$ and $T$ obey the following relations
\begin{equation}
S^4= T^3= (ST^2)^2=1
\end{equation}
and can be written in the different representations as
\begin{equation}
\begin{array}{ccccc}
{\bf1} &\qquad& S=1 &\quad& T=1\\[3mm]
{\bf1}^\prime &\qquad& S=-1 &\quad& T=1\\[3mm]
{\bf2} &\qquad& S=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)
&\quad& T=\left(
\begin{array}{cc}
\omega & 0 \\
0 & \omega^2 \\
\end{array}
\right)\\[3mm]
{\bf3} &\qquad& S=\dfrac{1}{3}\left(\begin{array}{ccc}
-1 & 2\omega & 2\omega^2 \\
2\omega & 2\omega^2 & -1 \\
2\omega^2 & -1 & 2\omega \\
\end{array}\right)
&\quad& T=\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & \omega^2 & 0 \\
0 & 0 & \omega \\
\end{array}\right)\\[3mm]
{\bf3'} &\qquad& S=\dfrac{1}{3}\left(\begin{array}{ccc}
1 & -2\omega & -2\omega^2 \\
-2\omega & -2\omega^2 & 1 \\
-2\omega^2 & 1 & -2\omega \\
\end{array}\right)
&\quad& T=\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & \omega^2 & 0 \\
0 & 0 & \omega \\
\end{array}\right)
\end{array}
\end{equation}
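One can verify numerically that the doublet and triplet matrices above satisfy the presentation $S^4=T^3=(ST^2)^2=1$; an illustrative sketch:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

# S4 (I version) doublet generators
S2 = np.array([[0, 1], [1, 0]])
T2 = np.diag([w, w**2])
# S4 (I version) triplet generators
S3 = np.array([[-1, 2 * w, 2 * w**2],
               [2 * w, 2 * w**2, -1],
               [2 * w**2, -1, 2 * w]]) / 3
T3 = np.diag([1, w**2, w])

def presentation_ok(S, T):
    """Check S^4 = T^3 = (S T^2)^2 = 1 for a given representation."""
    I = np.eye(S.shape[0])
    return (np.allclose(np.linalg.matrix_power(S, 4), I)
            and np.allclose(np.linalg.matrix_power(T, 3), I)
            and np.allclose(np.linalg.matrix_power(S @ T @ T, 2), I))

ok = presentation_ok(S2, T2) and presentation_ok(S3, T3)
```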
The $24$ elements define different subgroups of $S_4$: the elements of the classes $C_{2}$ and $C_{4}$ define two different sets of $Z_2$ subgroups, corresponding to $S^2$ and $S T^2$ respectively, those of the class $C_{3}$ a set of $Z_3$ Abelian discrete symmetries associated to $T$, and those belonging to $C_{5}$ a set of $Z_4$ Abelian discrete symmetries corresponding to $S$. From the three relations that define the group $S_4$ we see that it also contains a non-Abelian subgroup, $S_3$. Indeed, defining $S'= S^2$ and using $S^2 T S^2= T^2$, we get the relations that define $S_3$, namely
\begin{equation}
T^3= S^{\prime2}= (S' T)^2=1\;.
\end{equation}
We now report the Clebsch-Gordan coefficients for our basis. We start with all the multiplication rules which include the one-dimensional representations:
\begin{equation}
\begin{array}{lcl}
{\bf1}\times{\bf r}&=&{\bf r}\times{\bf1}={\bf r}\quad\text{with ${\bf r}$ any representation}\;,\\[5mm]
{\bf1}'\times{\bf1}'&=&{\bf1}\sim\alpha\beta\;,\qquad\qquad\quad\;\;
{\bf1}'\times{\bf2}\;=\;{\bf2}\sim\left(\begin{array}{c}
\alpha\beta_1 \\
-\alpha\beta_2 \\
\end{array}\right)\;,\\[3mm]
{\bf1}'\times{\bf3}&=&{\bf3}'\sim\left(\begin{array}{c}
\alpha\beta_1 \\
\alpha\beta_2 \\
\alpha\beta_3 \\
\end{array}\right)\;,\qquad
{\bf1}'\times{\bf3}'\;=\;{\bf3}\sim\left(\begin{array}{c}
\alpha\beta_1 \\
\alpha\beta_2 \\
\alpha\beta_3 \\
\end{array}\right)\;.
\end{array}
\end{equation}
The multiplication rules with the two-dimensional representation are the following:
\begin{equation}
\begin{array}{ll}
{\bf2}\times{\bf2}={\bf1}+{\bf1}'+{\bf2}&\quad
\text{with}\quad\left\{\begin{array}{l}
{\bf1}\sim\alpha_1\beta_2+\alpha_2\beta_1\\[3mm]
{\bf1}'\sim\alpha_1\beta_2-\alpha_2\beta_1\\[3mm]
{\bf2}\sim\left(\begin{array}{c}
\alpha_2\beta_2 \\
\alpha_1\beta_1 \\
\end{array}\right)
\end{array}
\right.\\[3mm]
{\bf2}\times{\bf3}={\bf3}+{\bf3}'&\quad
\text{with}\quad\left\{\begin{array}{l}
{\bf3}\sim\left(\begin{array}{c}
\alpha_1\beta_2+\alpha_2\beta_3 \\
\alpha_1\beta_3+\alpha_2\beta_1 \\
\alpha_1\beta_1+\alpha_2\beta_2 \\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
\alpha_1\beta_2-\alpha_2\beta_3\\
\alpha_1\beta_3-\alpha_2\beta_1 \\
\alpha_1\beta_1-\alpha_2\beta_2 \\
\end{array}\right)
\end{array}
\right.\\[3mm]
{\bf2}\times{\bf3}'={\bf3}+{\bf3}'&\quad
\text{with}\quad\left\{\begin{array}{l}
{\bf3}\sim\left(\begin{array}{c}
\alpha_1\beta_2-\alpha_2\beta_3\\
\alpha_1\beta_3-\alpha_2\beta_1 \\
\alpha_1\beta_1-\alpha_2\beta_2 \\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
\alpha_1\beta_2+\alpha_2\beta_3 \\
\alpha_1\beta_3+\alpha_2\beta_1 \\
\alpha_1\beta_1+\alpha_2\beta_2 \\
\end{array}\right)
\end{array}
\right.
\end{array}
\end{equation}
The multiplication rules with the three-dimensional representations are the following:
\begin{equation}
\begin{array}{c}
{\bf3}\times{\bf3}={\bf3}'\times{\bf3}'={\bf1}+{\bf2}+{\bf3}+{\bf3}'
\quad\;\text{with}\quad\left\{
\begin{array}{l}
{\bf1}\sim\alpha_1\beta_1+\alpha_2\beta_3+\alpha_3\beta_2 \\[3mm]
{\bf2}\sim\left(
\begin{array}{c}
\alpha_2\beta_2+\alpha_1\beta_3+\alpha_3\beta_1 \\
\alpha_3\beta_3+\alpha_1\beta_2+\alpha_2\beta_1 \\
\end{array}
\right)\\[3mm]
{\bf3}\sim\left(\begin{array}{c}
2\alpha_1\beta_1-\alpha_2\beta_3-\alpha_3\beta_2 \\
2\alpha_3\beta_3-\alpha_1\beta_2-\alpha_2\beta_1 \\
2\alpha_2\beta_2-\alpha_1\beta_3-\alpha_3\beta_1 \\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
\alpha_2\beta_3-\alpha_3\beta_2 \\
\alpha_1\beta_2-\alpha_2\beta_1 \\
\alpha_3\beta_1-\alpha_1\beta_3 \\
\end{array}\right)
\end{array}\right.
\end{array}
\end{equation}
\begin{equation}
\begin{array}{ll}
{\bf3}\times{\bf3}'={\bf1}'+{\bf2}+{\bf3}+{\bf3}'\quad\;\text{with}\quad\left\{
\begin{array}{l}
{\bf1}'\sim\alpha_1\beta_1+\alpha_2\beta_3+\alpha_3\beta_2\\[3mm]
{\bf2}\sim\left(
\begin{array}{c}
\alpha_2\beta_2+\alpha_1\beta_3+\alpha_3\beta_1 \\
-\alpha_3\beta_3-\alpha_1\beta_2-\alpha_2\beta_1 \\
\end{array}
\right)\\[3mm]
{\bf3}\sim\left(\begin{array}{c}
\alpha_2\beta_3-\alpha_3\beta_2 \\
\alpha_1\beta_2-\alpha_2\beta_1 \\
\alpha_3\beta_1-\alpha_1\beta_3 \\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
2\alpha_1\beta_1-\alpha_2\beta_3-\alpha_3\beta_2 \\
2\alpha_3\beta_3-\alpha_1\beta_2-\alpha_2\beta_1 \\
2\alpha_2\beta_2-\alpha_1\beta_3-\alpha_3\beta_1 \\
\end{array}\right)
\end{array}\right.
\end{array}
\end{equation}
\section[The Group $S_4$ -- II Version]{The Group $\mathbf{S_4}$ -- II Version}
\label{AppA:S4BM}
\setcounter{footnote}{3}
\begin{table}[ht]
\begin{center}
\begin{tabular}{l|ccccc|}
&\multicolumn{5}{|c|}{classes}\\
\cline{2-6}
& $C_{1}$ & $C_{2}$ & $C_{3}$ & $C_{4}$ & $C_{5}$ \\
\cline{1-6}
\rule[0.15in]{0cm}{0cm} $\rm G$ &$\rm \mathbb{1}$& $T^2$ & $ST$ & $S$ & $T$ \\
\cline{1-6}
$^{\circ} C_{i}$ & 1 & 3 & 8 & 6 & 6 \\
\cline{1-6}
$^{\circ} h_{C_{i}}$ & 1 & 2 & 3 & 2 & 4 \\
\hline
$\bf1$ & 1 & 1 & 1 & 1 & 1 \\[3pt]
$\bf1'$ & 1 & 1 & 1 & $-1$ & $-1$ \\[3pt]
$\bf2$ & 2 & 2 & $-1$ & 0 & 0 \\[3pt]
$\bf3$ & 3 & $-1$ & 0 & 1 & $-1$ \\[3pt]
$\bf3'$ & 3 & $-1$ & 0 & $-1$ & 1 \\
\hline
\end{tabular}
\end{center}
\caption{\it Character table of the group $S_4$ -- II version.}
\label{AppA:table:S4chartabII}
\end{table}
In order to realise the bimaximal mixing pattern, it is useful to define group generators different from those used to study the tribimaximal pattern: the two new operators $S$ and $T$ satisfy
\begin{equation}
T^4=S^2=(ST)^3=(TS)^3=1\;.
\end{equation}
Explicit forms of $S$ and $T$ in each of the irreducible representations can be simply obtained:
\begin{equation}
\begin{array}{ccccc}
{\bf1} &\qquad& S=1 &\quad& T=1\\[3mm]
{\bf1}^\prime &\qquad& S=-1 &\quad& T=1\\[3mm]
{\bf2} &\qquad& S=\dfrac{1}{2}\left(
\begin{array}{cc}
-1 & \sqrt3 \\
\sqrt3 & 1 \\
\end{array}
\right)
&\quad& T=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right)\\[3mm]
{\bf3} &\qquad& S=\left(\begin{array}{ccc}
0 & -1/\sqrt2 & -1/\sqrt2 \\
-1/\sqrt2 & 1/2 & -1/2 \\
-1/\sqrt2 & -1/2 & 1/2 \\
\end{array}\right)
&\quad& T=\left(\begin{array}{ccc}
-1 & 0 & 0 \\
0 & -i & 0 \\
0 & 0 & i \\
\end{array}\right)\\[3mm]
{\bf3'} &\qquad& S=\left(\begin{array}{ccc}
0 & 1/\sqrt2 & 1/\sqrt2 \\
1/\sqrt2 & -1/2 & 1/2 \\
1/\sqrt2 & 1/2 & -1/2 \\
\end{array}\right)
&\quad& T=\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & i & 0 \\
0 & 0 & -i \\
\end{array}\right)
\end{array}
\end{equation}
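One can check numerically that the triplet matrices above, for both ${\bf3}$ and ${\bf3}'$, satisfy the presentation $T^4=S^2=(ST)^3=1$ (the relation $(TS)^3=1$ then follows by conjugation with $S$); an illustrative sketch:

```python
import numpy as np

s = 1 / np.sqrt(2)
# S4 (II version) triplet generators for the representation 3
S3 = np.array([[0, -s, -s],
               [-s, 0.5, -0.5],
               [-s, -0.5, 0.5]])
T3 = np.diag([-1, -1j, 1j])
# for the representation 3' the matrices flip sign: S' = -S, T' = -T
S3p = -S3
T3p = np.diag([1, 1j, -1j])

def presentation_ok(S, T):
    """Check T^4 = S^2 = (S T)^3 = 1 for a given representation."""
    I = np.eye(3)
    return (np.allclose(np.linalg.matrix_power(T, 4), I)
            and np.allclose(S @ S, I)
            and np.allclose(np.linalg.matrix_power(S @ T, 3), I))

ok = presentation_ok(S3, T3) and presentation_ok(S3p, T3p)
```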
We recall here the multiplication table for $S_4$ and we list the Clebsch-Gordan coefficients in our basis. We start with all the multiplication rules which include the one-dimensional representations:
\begin{equation}
\begin{array}{l}
{\bf1}\times {\bf r}={\bf r}\times{\bf1}={\bf r}\quad\text{with ${\bf r}$ any representation}\;,\\[5mm]
{\bf1}'\times{\bf1}'={\bf1}\sim\alpha\beta\;,\qquad\qquad\quad\;
{\bf1}'\times{\bf2}={\bf2}\sim\left(\begin{array}{c}
\alpha\beta_2 \\
-\alpha\beta_1 \\
\end{array}\right)\;,\\[3mm]
{\bf1}'\times{\bf3}={\bf3}'\sim\left(\begin{array}{c}
\alpha\beta_1 \\
\alpha\beta_2 \\
\alpha\beta_3\\
\end{array}\right)\;,\qquad
{\bf1}'\times{\bf3}'={\bf3}\sim\left(\begin{array}{c}
\alpha\beta_1 \\
\alpha\beta_2 \\
\alpha\beta_3\\
\end{array}\right)\;.
\end{array}
\end{equation}
The multiplication rules with the two-dimensional representation are the following ones:
\begin{equation}
\begin{array}{ll}
{\bf2}\times{\bf2}={\bf1}+{\bf1}'+{\bf2}&\quad
\rm{with}\quad\left\{\begin{array}{l}
{\bf1}\sim\alpha_1\beta_1+\alpha_2\beta_2\\[3mm]
{\bf1}'\sim\alpha_1\beta_2-\alpha_2\beta_1\\[3mm]
{\bf2}\sim\left(\begin{array}{c}
\alpha_2\beta_2-\alpha_1\beta_1 \\
\alpha_1\beta_2+\alpha_2\beta_1\\
\end{array}\right)
\end{array}
\right.\\[3mm]
{\bf2}\times{\bf3}={\bf3}+{\bf3}^\prime&\quad
\rm{with}\quad\left\{\begin{array}{l}
{\bf3}\sim\left(\begin{array}{c}
\alpha_1\beta_1\\
\frac{\sqrt3}{2}\alpha_2\beta_3-\frac{1}{2}\alpha_1\beta_2 \\
\frac{\sqrt3}{2}\alpha_2\beta_2-\frac{1}{2}\alpha_1\beta_3 \\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
-\alpha_2\beta_1\\
\frac{\sqrt3}{2}\alpha_1\beta_3+\frac{1}{2}\alpha_2\beta_2 \\
\frac{\sqrt3}{2}\alpha_1\beta_2+\frac{1}{2}\alpha_2\beta_3 \\
\end{array}\right)\\
\end{array}
\right.\\[3mm]
{\bf2}\times{\bf3}'={\bf3}+{\bf3}^\prime&\quad
\rm{with}\quad\left\{\begin{array}{l}
{\bf3}\sim\left(\begin{array}{c}
-\alpha_2\beta_1\\
\frac{\sqrt3}{2}\alpha_1\beta_3+\frac{1}{2}\alpha_2\beta_2 \\
\frac{\sqrt3}{2}\alpha_1\beta_2+\frac{1}{2}\alpha_2\beta_3 \\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
\alpha_1\beta_1\\
\frac{\sqrt3}{2}\alpha_2\beta_3-\frac{1}{2}\alpha_1\beta_2 \\
\frac{\sqrt3}{2}\alpha_2\beta_2-\frac{1}{2}\alpha_1\beta_3 \\
\end{array}\right)\\
\end{array}
\right.
\end{array}
\end{equation}
The multiplication rules involving the three-dimensional representations are:
\begin{equation}
\begin{array}{l}
{\bf3}\times{\bf3}={\bf3}'\times{\bf3}'={\bf1}+{\bf2}+{\bf3}+{\bf3}'\quad\text{with}\quad\!\left\{
\begin{array}{l}
{\bf1}\sim\alpha_1\beta_1+\alpha_2\beta_3+\alpha_3\beta_2\\[3mm]
{\bf2}\sim\left(
\begin{array}{c}
\alpha_1\beta_1-\frac{1}{2}(\alpha_2\beta_3+\alpha_3\beta_2)\\
\frac{\sqrt3}{2}(\alpha_2\beta_2+\alpha_3\beta_3)\\
\end{array}
\right)\\[3mm]
{\bf3}\sim\left(\begin{array}{c}
\alpha_3\beta_3-\alpha_2\beta_2\\
\alpha_1\beta_3+\alpha_3\beta_1\\
-\alpha_1\beta_2-\alpha_2\beta_1\\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
\alpha_3\beta_2-\alpha_2\beta_3\\
\alpha_2\beta_1-\alpha_1\beta_2\\
\alpha_1\beta_3-\alpha_3\beta_1\\
\end{array}\right)
\end{array}\right.
\end{array}
\end{equation}
\begin{equation}
\begin{array}{l}
{\bf3}\times{\bf3}'={\bf1}'+{\bf2}+{\bf3}+{\bf3}'\quad\;\text{with}\quad\left\{
\begin{array}{l}
{\bf1}'\sim\alpha_1\beta_1+\alpha_2\beta_3+\alpha_3\beta_2\\[3mm]
{\bf2}\sim\left(
\begin{array}{c}
\frac{\sqrt3}{2}(\alpha_2\beta_2+\alpha_3\beta_3)\\
-\alpha_1\beta_1+\frac{1}{2}(\alpha_2\beta_3+\alpha_3\beta_2)\\
\end{array}
\right)\\[3mm]
{\bf3}\sim\left(\begin{array}{c}
\alpha_3\beta_2-\alpha_2\beta_3\\
\alpha_2\beta_1-\alpha_1\beta_2\\
\alpha_1\beta_3-\alpha_3\beta_1\\
\end{array}\right)\\[3mm]
{\bf3}'\sim\left(\begin{array}{c}
\alpha_3\beta_3-\alpha_2\beta_2\\
\alpha_1\beta_3+\alpha_3\beta_1\\
-\alpha_1\beta_2-\alpha_2\beta_1\\
\end{array}\right)
\end{array}\right.
\end{array}
\end{equation}
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{Vacuum Alignment}
\label{AppendixB}
\lhead[\fancyplain{}{\bfseries\leftmark}]{\fancyplain{}{\bfseries\thepage}}
\rhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries\rightmark}}
\renewcommand{\theequation}{B.\;\arabic{equation}}
\setcounter{equation}{0}
\setcounter{footnote}{3}
\setcounter{section}{0}
\mathversion{bold}
\section{The Group $T'$}
\label{AppB:Tp}
\setcounter{footnote}{3}
\mathversion{normal}
Here we discuss particular aspects of the vacuum alignment in the $T'$-based model. To this purpose we must complete the definition of the superpotential $w$ by specifying the last term in eq. (\ref{TpTBM:fullw}). This term is responsible for the spontaneous breaking of $T'$ and it involves a new set of fields, the ``driving'' fields, which are gauge singlets and transform non-trivially only under the flavour symmetry, as shown in table \ref{AppB:table:Tpflavoncharges}. These fields do not develop VEVs and their $F$-term conditions determine the VEVs of the flavons.
\begin{table}[ht!]
\centering
\begin{tabular}{|c||c|c|c|c|c||c|c|c|c|c|}
\hline
&&&&&&&&&&\\[-9pt]
Field & $\varphi_T$ & $\varphi_S$ & $\eta$ & $\xi$,$\tilde{\xi}$ & $\xi''$ & $\varphi^0_T$ & $\varphi^0_S$ & $\eta^0$ & $\xi^0$ & $ \xi^{\prime0}$ \\
&&&&&&&&&&\\[-9pt]
\hline
&&&&&&&&&&\\[-9pt]
$T^{\prime}$ & $\bf3$ & $\bf3$ & $\bf2'$ & $\bf1$ & $\bf1''$ & $\bf3$ & $\bf3$ & $\bf2''$ & $\bf1$ & $\bf1'$ \\[3pt]
$Z_3$ & $1$ & $\omega$ & $1$ & $\omega$ & $1$ & $1$ & $\omega$ & $1$ & $\omega$ & $1$ \\ [3pt]
$U(1)_R$ & $0$ & $0$ & $0$ & $0$ & $0$ & $2$ & $2$ & $2$ & $2$ & $2$ \\ [3pt]
\hline
\end{tabular}
\caption{\it The transformation rules of flavons and driving fields under the symmetries associated to the groups $T'$, $Z_3$ and $U(1)_{R}$.}
\label{AppB:table:Tpflavoncharges}
\end{table}
Driving fields have $R$-charge $2$, which prevents them from coupling directly to the Standard Model fermions. Moreover, all the terms in the driving superpotential $w_d$ are necessarily linear in the driving fields:
\begin{equation}
\begin{split}
w_d\,=&\phantom{+} M \,(\varphi^0_T\,\varphi_T)+g\,(\varphi^0_T\,\varphi_T\,\varphi_T)+g_{7}\,\xi''\,(\varphi^0_T\,\varphi_T)^{\prime}+ g_{8}\,(\varphi^0_T\,\eta\,\eta)+\\[3mm]
&+g_{1}\,(\varphi^0_S\,\varphi_S\,\varphi_S)+g_{2}\,\tilde{\xi}\,(\varphi^0_S\,\varphi_S)+\\[3mm]
&+g_{3}\,\xi^0\,(\varphi_S\,\varphi_S)+g_{4}\,\xi^0\,\xi^2 +g_{5}\,\xi^0\,\xi\,\tilde{\xi}+g_{6}\,\xi^0\,\tilde{\xi}^2+\\[3mm]
&+M_{\eta}\,(\eta^0\,\eta)+g_9\,(\varphi_T\,\eta^0\,\eta)+\\[3mm]
&+M_{\xi}\,\xi^{\prime\,0}\,\xi'' +g_{10}\,\xi^{\prime\,0}\,(\varphi_T\,\varphi_T)^{\prime\,\prime}+\ldots
\end{split}
\label{TpTBM:DrivingSuperP}
\end{equation}
At this level there is no fundamental distinction between the singlets $\xi$ and $\tilde{\xi}$. Thus we are free to define $\tilde{\xi}$ as the combination that couples to $(\varphi^0_S \varphi_S)$ in the superpotential $w_d$. We notice that at the leading order there are no terms involving the Higgs fields $H_{u,d}$. We assume that the electroweak symmetry is broken by some mechanism, such as radiative effects when Supersymmetry is broken. It is interesting that at the leading order the electroweak scale does not mix with the potentially large scales of the VEVs. The scalar potential is given by:
\begin{equation}
V=\sum_i\left\vert\frac{\partial w}{\partial \phi_i}\right\vert^2 +m_i^2 \vert \phi_i\vert^2+...
\end{equation}
where $\phi_i$ denote collectively all the scalar fields of the theory, $m_i^2$ are soft masses and dots stand for $D$-terms for the fields charged under the gauge group and possible additional soft breaking terms. Since $m_i$ are expected to be much smaller than the mass scales involved in $w_d$, it makes sense to minimise $V$ in the supersymmetric limit and to account for soft breaking effects subsequently. Calculating the $F$-terms for the driving fields leads to two sets of equations
\begin{equation}
\begin{split}
\dfrac{\partial w}{\partial {\varphi^0_S}_1}\,=&\,\,
g_2\tilde{\xi}{\varphi_S}_1+\dfrac{2g_1}{3}({\varphi_S}_1^2-{\varphi_S}_2{\varphi_S}_3)=0\\[3mm]
\dfrac{\partial w}{\partial {\varphi^0_S}_2}\,=&\,\,g_2\tilde{\xi} {\varphi_S}_3+
\dfrac{2g_1}{3}({\varphi_S}_2^2-{\varphi_S}_1{\varphi_S}_3)=0\\[3mm]
\dfrac{\partial w}{\partial {\varphi^0_S}_3}\,=&\,\,g_2\tilde{\xi} {\varphi_S}_2+
\dfrac{2g_1}{3}({\varphi_S}_3^2-{\varphi_S}_1{\varphi_S}_2)=0\\[3mm]
\dfrac{\partial w}{\partial \xi^0}\,=&\,\,g_4 \xi^2+g_5 \xi \tilde{\xi}+g_6\tilde{\xi}^2 +g_3({\varphi_S}_1^2+2{\varphi_S}_2{\varphi_S}_3)=0
\end{split}
\label{TpTBM:neutrinomin}
\end{equation}
and
\begin{equation}
\begin{split}
\dfrac{\partial\,w}{\partial\,{\varphi^0_T}_1}\,=&\,\,M\,{\varphi_T}_1+\dfrac{2\,g}{3}\,({\varphi^2_T}_1-{\varphi_T}_2\,{\varphi_T}_3)+ g_7\,\xi''\,{\varphi_T}_2+i\,g_8\,\eta_1^2=0\\[3mm]
\dfrac{\partial\,w}{\partial\,{\varphi^0_T}_2}\,=&\,\,M\,{\varphi_T}_3+\dfrac{2\,g}{3}\,({\varphi^2_T}_2-{\varphi_T}_1\,{\varphi_T}_3)+ g_7\,\xi''\,{\varphi_T}_1+(1-i)\,g_8\,\eta_1\,\eta_2=0\\[3mm]
\dfrac{\partial\,w}{\partial\,{\varphi^0_T}_3}\,=&\,\,M\,{\varphi_T}_2+\dfrac{2\,g}{3}\,({\varphi^2_T}_3-{\varphi_T}_1\,{\varphi_T}_2)+ g_7\,\xi''\,{\varphi_T}_3+g_8\,\eta_2^2=0\\[3mm]
\dfrac{\partial\,w}{\partial\,\eta^0_1}\,=&\,\,-M_\eta\,\eta_2+g_9\,((1-i)\,\eta_1\,{\varphi_T}_3-\eta_2\,{\varphi_T}_1)=0\\[3mm]
\dfrac{\partial\,w}{\partial\,\eta^0_2}\,=&\,\,M_\eta\,\eta_1-g_9\,((1+i)\,\eta_2\,{\varphi_T}_2+\eta_1\,{\varphi_T}_1)=0\\[3mm]
\dfrac{\partial\,w}{\partial\,\xi^{\prime\,0}}\,=&\,\,M_\xi\,\xi''+g_{10}\,({\varphi^2_T}_2+2\,{\varphi_T}_1\,{\varphi_T}_3)=0
\end{split}
\label{TpTBM:chrgdmn}
\end{equation}
Concerning the first set of equations, (\ref{TpTBM:neutrinomin}), there are flat directions in the supersymmetric limit. We can enforce $\langle\tilde{\xi}\rangle=0$ by adding to the scalar potential a soft Supersymmetry breaking mass term
for the scalar field $\tilde{\xi}$, with $m^2_{\tilde{\xi}}>0$. In this case, in a finite portion of the parameter space, we find the solution
\begin{equation}
\begin{array}{rcl}
\langle\tilde{\xi}\rangle&=&0\\[3mm]
\langle\xi\rangle&=&v_\xi\\[3mm]
\langle\varphi_S\rangle&=&(v_S,\,v_S,\,v_S)\;,\qquad v_S^2=-\dfrac{g_4}{3 g_3} v_\xi^2\;,
\end{array}
\label{TpTBM:solS}
\end{equation}
with $\xi$ undetermined. By choosing $m^2_{\varphi_S}, m^2_\xi<0$, $\xi$ slides to a large scale, which we assume to be eventually
stabilised by one-loop radiative corrections in the phase of broken Supersymmetry. The VEVs in (\ref{TpTBM:solS}) break $T'$ down to the subgroup $G_S$. Remarkably, two other equivalent VEV configurations are allowed:
\begin{equation}
\langle\varphi_S\rangle=v_S(1,\,\omega^2,\,\omega)\;,\qquad\qquad
\langle\varphi_S\rangle=v_S(1,\,\omega,\,\omega^2)\;.
\end{equation}
These configurations break $T'$ down to different (conjugate) $Z_4$ subgroups, but they are equivalent to eq. (\ref{TpTBM:solS}): indeed they are obtained by acting on eq. \eqref{TpTBM:solS} with the elements of $T'$. Any of these solutions produces the same neutrino mass matrix: for example $\langle\varphi_S\rangle=v_S(1,\,\omega,\,\omega^2)$ and $\langle\varphi_S\rangle=v_S(1,\,1,\,1)$ are equivalent, being related by the field redefinition $\nu_e\rightarrow\nu_e$, $\nu_\mu\rightarrow\omega^2\nu_\mu$ and $\nu_\tau\rightarrow\omega\nu_\tau$.
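It is straightforward to verify that all three configurations solve eqs. (\ref{TpTBM:neutrinomin}) for $\langle\tilde\xi\rangle=0$; a minimal numerical sketch with illustrative (hypothetical) values of the couplings:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

# illustrative couplings; g3 > 0, g4 < 0 chosen so that v_S is real
g1, g2, g3, g4 = 0.7, 1.3, 1.0, -3.0
v_xi = 1.0
v_S = np.sqrt(-g4 / (3 * g3)) * v_xi   # v_S^2 = -g4/(3 g3) v_xi^2
xit = 0.0                              # <xi~> = 0

def F_terms(phi):
    """The four F-term conditions of the driving fields phi_S^0 and xi^0."""
    p1, p2, p3 = phi
    return [
        g2 * xit * p1 + 2 * g1 / 3 * (p1**2 - p2 * p3),
        g2 * xit * p3 + 2 * g1 / 3 * (p2**2 - p1 * p3),
        g2 * xit * p2 + 2 * g1 / 3 * (p3**2 - p1 * p2),
        # g5, g6 terms drop out for <xi~> = 0
        g4 * v_xi**2 + g3 * (p1**2 + 2 * p2 * p3),
    ]

# the aligned vacuum and its two T'-equivalent images
solutions = [v_S * np.array([1, 1, 1]),
             v_S * np.array([1, w**2, w]),
             v_S * np.array([1, w, w**2])]
all_vanish = all(np.allclose(F_terms(phi), 0) for phi in solutions)
```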
Concerning the last six equations, (\ref{TpTBM:chrgdmn}), excluding the trivial solution in which all VEVs vanish, in the supersymmetric limit we find three classes of solutions. One class preserves the subgroup $G_S$, as does the set of VEVs given in (\ref{TpTBM:solS}). It is characterised by $\langle\xi''\rangle\ne 0$ and $\langle\eta\rangle=(0,\,0)$.
A representative VEV configuration in this class is:
\begin{equation}
\langle \xi'' \rangle= - \dfrac{M}{g_{7}}\;,\qquad
\langle \eta \rangle=(0,0)\;,\qquad
\langle \varphi _T \rangle =(v_T,v_T,v_T)\;,\qquad
v_T^2=\dfrac{M \, M_{\xi}}{3 \, g_{7} \, g_{10}}\;.
\label{TpTBM:cl1}
\end{equation}
The second class preserves a subgroup $Z_6$ generated by the elements $T$ and $\mathbb{R}$. It is characterised by $\langle\xi''\rangle=0$ and $\langle\eta\rangle=(0,\,0)$:
\begin{equation}
\langle \xi'' \rangle =0\;,\qquad
\langle \eta\rangle =(0,0)
\;,\qquad\langle \varphi_T\rangle =(v_T ,0,0)\;,\qquad
v_T=- \dfrac{3M}{2g}\;.
\label{TpTBM:cl21}
\end{equation}
The third class preserves the subgroup $G_T$. It is characterised by $\langle\xi''\rangle=0$ and $\langle\eta\rangle\ne 0$:
\begin{equation}
\begin{array}{c}
\langle \xi'' \rangle =0\;,\qquad
\langle \eta\rangle = \pm(v_1 ,0)
\;,\qquad\langle \varphi_T\rangle =(v_T,0,0)\\[3mm]
\text{with}\qquad
v_1=\dfrac{1}{g_9 \, \sqrt{3 \, g_8}} \, \sqrt{i \, (2 \, M_{\eta} ^{2} \, g + 3 \, M \, M_{\eta} \, g_9)}\;,\qquad
v_T=\dfrac{M_{\eta}}{g_{9}}\;.
\end{array}
\label{TpTBM:cl22}
\end{equation}
The three sets of minima in eqs. (\ref{TpTBM:cl1}), (\ref{TpTBM:cl21}) and (\ref{TpTBM:cl22}) are all degenerate in the supersymmetric limit and we will simply choose the one in eq. (\ref{TpTBM:cl22}). We have checked that, by adding soft masses $m_{\xi''}^2>0$, $m_\eta^2<0$, the desired vacuum is selected as the absolute minimum, thus reproducing the results in eqs. (\ref{TpTBM:love1}, \ref{TpTBM:love2}). In summary, we have shown that the VEVs in eqs. (\ref{TpTBM:love1}, \ref{TpTBM:love2}) represent a local minimum of the scalar potential of the theory in a finite portion of the parameter space, without any ad hoc relation among the parameters of the theory. As we will see below, these VEVs will be slightly perturbed by higher-order corrections induced by higher dimensional operators contributing to the driving potential $w_d$.
Such corrections will be important to achieve a realistic mass spectrum in the quark sector. Finally, concerning the numerical values of the VEVs, radiative corrections typically stabilise $v_\xi$ and $v_S$ well below the cutoff scale $\Lambda_f$. Similarly, mass parameters in the superpotential $w_d$ can be chosen in such a way that $v_1$ and $v_T$ are below $\Lambda_f$. It is not unreasonable to assume that all the VEVs are of the same order of magnitude:
\begin{equation}
\mathrm{VEV}\approx \lambda^2\, \Lambda_f\;.
\end{equation}
For the Froggatt-Nielsen (FN) field $\theta$ to acquire a VEV, we assume that the symmetry $U(1)_{FN}$ is gauged such that $\theta$ gets its VEV through a $D$-term. The corresponding potential is of the form:
\begin{equation}
V_{D, FN}=\dfrac{1}{2}(M_{FI}^2- g_{FN}\vert\theta\vert^2+...)^2
\label{TpTBM:Dterm}
\end{equation}
where $g_{FN}$ is the gauge coupling constant of $U(1)_{FN}$ and $M_{FI}^2$ denotes the contribution of the Fayet-Iliopoulos (FI) term.
Dots in eq. (\ref{TpTBM:Dterm}) represent e.g. terms involving the right-handed charged leptons $e^c$ and $\mu^c$ which are charged under $U(1)_{FN}$. These terms are however not relevant to calculate the VEV of the Froggatt-Nielsen field and we omit them in the present discussion.
In the supersymmetric limit, minimisation of $V_{D,FN}$ leads to:
\begin{equation}
|\langle\theta\rangle|^2= \dfrac{M_{FI}^2}{g_{FN}}
\end{equation}
which we parametrise as:
\begin{equation}
\dfrac{\langle\theta\rangle}{\Lambda_f}=t
\label{TpTBM:vevtheta}
\end{equation}
with $t$ being the second small symmetry breaking parameter in our model.
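As a consistency check, the condition defining this vacuum follows from a one-line minimisation of eq. (\ref{TpTBM:Dterm}), neglecting the terms in the dots:

```latex
\begin{equation*}
\dfrac{\partial V_{D,FN}}{\partial\theta^\dagger}=
-g_{FN}\,\theta\left(M_{FI}^2-g_{FN}\vert\theta\vert^2\right)=0
\qquad\Longrightarrow\qquad
\vert\langle\theta\rangle\vert^2=\dfrac{M_{FI}^2}{g_{FN}}\;.
\end{equation*}
```

The non-trivial solution corresponds to a vanishing $D$-term, $V_{D,FN}=0$, as required by unbroken supersymmetry.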
Before moving on to discuss the higher-order terms in the superpotential, we comment on the $F$-terms of the flavons, since they determine the VEVs of the driving fields. In the limit of unbroken Supersymmetry, we have
\begin{equation}
\begin{array}{rcl}
\dfrac{\partial\,w_d}{\partial\,\varphi_{T1}}\, &=& \,M\,\varphi^0_{T1}\,+
\,\dfrac{2\,g}{3}\,(2\,\varphi^0_{T\,1}\,\varphi_{T1}\,-\,\varphi^0_{T\,2}\,\varphi_{T3}-
\varphi^0_{T\,3}\,\varphi_{T2})+\,g_7\,\varphi^0_{T2}\,\xi''\\[3mm]
&&\,-\,g_9\,(\eta_1^0\,\eta_2+\eta_2^0\,\eta_1)\,+\,2\,g_{10}\,\xi^{\prime\,0}\,\varphi_{T3}=0\\[3mm]
\dfrac{\partial\,w_d}{\partial\,\varphi_{T2}}\, &=& \,M\,\varphi^0_{T3}\,+
\,\dfrac{2\,g}{3}\,(2\,\varphi^0_{T\,2}\,\varphi_{T2}\,-\,\varphi^0_{T\,1}\,\varphi_{T3}-
\varphi^0_{T\,3}\,\varphi_{T1})+\,g_7\,\varphi^0_{T1}\,\xi''\\[3mm]
&&\,-(1+i)\,g_9\,\eta_2^0\,\eta_2\,+\,2\,g_{10}\,\xi^{\prime\,0}\,\varphi_{T2}=0\\[3mm]
\dfrac{\partial\,w_d}{\partial\,\varphi_{T3}}\, &=& \,M\,\varphi^0_{T2}\,+
\,\dfrac{2\,g}{3}\,(2\,\varphi^0_{T\,3}\,\varphi_{T3}\,-\,\varphi^0_{T\,1}\,\varphi_{T2}-
\varphi^0_{T\,2}\,\varphi_{T1})+\,g_7\,\varphi^0_{T3}\,\xi''\\[3mm]
&&\,+(1-i)\,g_9\,\eta_1^0\,\eta_1\,+\,2\,g_{10}\,\xi^{\prime\,0}\,\varphi_{T1}=0
\end{array}
\end{equation}
\begin{equation}
\hspace{-5mm}
\begin{array}{rcl}
\dfrac{\partial\,w_d}{\partial\,\eta_1}\, &=& \,M_{\eta}\,\eta^0_2\,+\, g_8\,(2\,i\,\varphi^0_{T1}\,\eta_1\,+\, (1\,-\,i)\,\varphi^0_{T2}\,\eta_2)\,+\, g_9\,\left((1\,-\,i)\,\eta_1^0\,\varphi_{T3}\,-\,\eta_2^0\,\varphi_{T1}\right)=0\\[3mm]
\dfrac{\partial\,w_d}{\partial\,\eta_2}\, &=& -\,M_{\eta}\,\eta^0_1\,+\, g_8\,\left((1\,-\,i)\,\varphi^0_{T2}\,\eta_1\,+\, 2\,\varphi^0_{T3}\,\eta_2\right)\,-\, g_9\,\left(\eta_1^0\,\varphi_{T1}\,+(1\,+\,i)\,\eta_2^0\,\varphi_{T2}\right)=0
\end{array}
\end{equation}
\begin{equation}
\begin{array}{rcl}
\dfrac{\partial\,w_d}{\partial\,\varphi_{S1}}\, &=&
\,\dfrac{2\,g_1}{3}\,(2\,\varphi^0_{S1}\,\varphi_{S1}\,-\,\varphi^0_{S2}\,\varphi_{S3}-
\varphi^0_{S3}\,\varphi_{S2})+\,g_2\,\varphi^0_{S1}\,\tilde{\xi}\,+ \,2\,g_3\,\xi^0\,\varphi_{S1}=0\\[3mm]
\dfrac{\partial\,w_d}{\partial\,\varphi_{S2}}\, &=&
\,\dfrac{2\,g_1}{3}\,(2\,\varphi^0_{S2}\,\varphi_{S2}\,-\,\varphi^0_{S1}\,\varphi_{S3}-
\varphi^0_{S3}\,\varphi_{S1})+\,g_2\,\varphi^0_{S3}\,\tilde{\xi}\,+ \,2\,g_3\,\xi^0\,\varphi_{S3}=0\\[3mm]
\dfrac{\partial\,w_d}{\partial\,\varphi_{S3}}\, &=&
\,\dfrac{2\,g_1}{3}\,(2\,\varphi^0_{S3}\,\varphi_{S3}\,-\,\varphi^0_{S1}\,\varphi_{S2}-
\varphi^0_{S2}\,\varphi_{S1})+\,g_2\,\varphi^0_{S2}\,\tilde{\xi}\,+ \,2\,g_3\,\xi^0\,\varphi_{S2}=0
\end{array}
\end{equation}
\begin{equation}
\begin{array}{rcl}
\dfrac{\partial\,w_d}{\partial\,\xi} \,&=&\, 2\,g_4\,\xi^0\,\xi\,+\, g_5\,\xi^0\,\tilde{\xi}=0\\[3mm]
\dfrac{\partial\,w_d}{\partial\,\tilde{\xi}} \,&=&\, g_2\,\left(\varphi^0_{S1}\,\varphi_{S1}\,+\, \varphi^0_{S3}\,\varphi_{S2}\,+\, \varphi^0_{S2}\,\varphi_{S3}\right)\,+\, g_5\,\xi^0\,\xi\,+\, 2\,g_6\,\xi^0\,\tilde{\xi}=0\\[3mm]
\dfrac{\partial\,w_d}{\partial\,\xi''} \,&=&\, M _{\xi}\,\xi^{\prime0}\,+\, g_7\,\left(\varphi^0_{T2}\,\varphi_{T1}\,+\, \varphi^0_{T1}\,\varphi_{T2}\,+\, \varphi^0_{T3}\,\varphi_{T3}\right)=0\;.
\end{array}
\end{equation}
Notice that we did not consider the $F$-terms involving squarks and sleptons, since they do not develop VEVs, so that $SU(3)_c$ and $U(1)_{em}$ are preserved. It is easy to see from the equations above that each expression contains a driving field and, as a result, the minimum consists in the trivial solution in which all the VEVs of the driving fields vanish. This result strictly holds in the exact supersymmetric limit. In section \ref{Sec:FlavourViolation} we comment on the fact that, once supersymmetric soft terms are included, the driving fields also develop VEVs of the order of the soft breaking mass scale.
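As an illustration, evaluating the last condition above, $\partial w_d/\partial\xi''=0$, at the leading order flavon VEVs $\langle\varphi_T\rangle=(v_T,0,0)$ gives a relation that is linear and homogeneous in the driving fields:

```latex
\begin{equation*}
M_{\xi}\,\xi^{\prime\,0}+g_7\,v_T\,\varphi^0_{T2}=0\;.
\end{equation*}
```

The same happens for all the other conditions, so that for generic couplings the unique solution is the trivial one, with all the driving field VEVs vanishing.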
We now move on to discuss the subleading contributions to the superpotential $w_d$, which is modified into
\begin{equation}
w_d+\delta w_d\;,
\end{equation}
where $w_d$ is the leading order contribution already introduced above, in which, for convenience, we have redefined
\begin{equation}
g_3\equiv3\,\tilde{g}_3^2\;,\qquad g_4\equiv-\tilde{g}_4^2\qquad\text{and}\qquad g_8\equiv i\,\tilde{g}_8^2\;.
\end{equation}
The remaining term, $\delta w_d$, is the most general quartic, $T'$-invariant polynomial linear in the driving fields and is responsible for the corrections to the VEV alignment. We can parametrise the new VEVs of the flavons by introducing shifts from the values in eqs. (\ref{TpTBM:solS}, \ref{TpTBM:cl22}):
\begin{equation}
\begin{array}{c}
\mean{\varphi_T}=(v_T+\delta v_{T1},\delta v_{T2},\delta v_{T3})\;,\qquad
\mean{\varphi_S}=(v_S+\delta v_{S1},v_S+\delta v_{S2},v_S+\delta v_{S3})\;\\[3mm]
\mean{\xi}=v_\xi\;,\qquad
\mean{\tilde{\xi}}=\delta v_{\tilde{\xi}} \;,\qquad
\mean{\eta}=(v_1+\delta v_1,\delta v_2)\;,\qquad\langle\xi''\rangle=\delta v_{\xi''}\;,
\end{array}
\end{equation}
where the corrections $\delta v_{Ti}$, $\delta v_{Si}$, $\delta v_{i}$, $\delta v_{\tilde{\xi}}$ and $\delta v_{\xi''}$ are independent of each other. Note that there might also be a correction to the VEV $v_\xi$, but we do not need to indicate it explicitly by adding a term $\delta v_\xi$, since $v_\xi$ is undetermined at tree level anyway. The shifts can be determined by studying in detail the term $\delta w_d$:
\begin{equation}
\delta w_d=\dfrac{1}{\Lambda_f}\left(
\sum_{k=3}^{18} t_k I_k^T+
\sum_{k=1}^{15} s_k I_k^S+
\sum_{k=1}^{4} x_k I_k^X+
\sum_{k=1}^{4} n_k I_k^N+
\sum_{k=1}^{4} y_k I_k^Y
\right)
\end{equation}
where $t_k$, $s_k$, $x_k$, $n_k$ and $y_k$ are coefficients and $\{I_k^T,I_k^S,I_k^X,I_k^N,I_k^Y\}$ represent a basis of independent quartic invariants:
\begin{equation}
\begin{array}{ll}
I_3^T=(\varphi^0_T\varphi_T) (\varphi_T\varphi_T)&\quad
I_{11}^T=(\varphi^0_T\varphi_S) \xi^2\\
I_4^T=(\varphi^0_T\varphi_T)' (\varphi_T\varphi_T)''&\quad
I_{12}^T=(\varphi^0_T\varphi_S) \xi \tilde{\xi}\\
I_5^T=(\varphi^0_T\varphi_T)'' (\varphi_T\varphi_T)'&\quad
I_{13}^T=(\varphi^0_T\varphi_S) {\tilde{\xi}}^2\\
I_6^T=(\varphi^0_T\varphi_S) (\varphi_S\varphi_S)&\quad
I_{14}^T=(\varphi^0_T\varphi_T)''\xi''\xi''\\
I_7^T=(\varphi^0_T\varphi_S)' (\varphi_S\varphi_S)''&\quad
I_{15}^T=((\varphi_T\eta)_2(\varphi^0_T\eta)_2)\\
I_8^T=(\varphi^0_T\varphi_S)'' (\varphi_S\varphi_S)'&\quad
I_{16}^T=((\varphi_T\eta)_{2'}(\varphi^0_T\eta)_{2''})\\
I_9^T=\left(\varphi^0_T(\varphi_S\varphi_S)_S\right) \xi&\quad
I_{17}^T=\left((\eta\xi'')_2(\varphi^0_T\eta)_2\right)\\
I_{10}^T=\left(\varphi^0_T(\varphi_S\varphi_S)_S\right) \tilde{\xi}&\quad
I_{18}^T=\left((\varphi_T\varphi_T)_S(\varphi^0_T\xi'')_3\right)
\end{array}
\end{equation}
\begin{equation}
\begin{array}{ll}
I_1^S=\left((\varphi^0_S\varphi_T)_S(\varphi_S\varphi_S)_S\right)&\quad
I_9^S=\left(\varphi^0_S(\varphi_T\varphi_S)_A\right) \tilde{\xi}\\
I_2^S=\left((\varphi^0_S\varphi_T)_A(\varphi_S\varphi_S)_S\right)&\quad
I_{10}^S=(\varphi^0_S\varphi_T) \xi^2\\
I_3^S=(\varphi^0_S\varphi_T) (\varphi_S\varphi_S)&\quad
I_{11}^S=(\varphi^0_S\varphi_T) \xi \tilde{\xi}\\
I_4^S=(\varphi^0_S\varphi_T)' (\varphi_S\varphi_S)''&\quad
I_{12}^S=(\varphi^0_S\varphi_T) {\tilde{\xi}}^2\\
I_5^S=(\varphi^0_S\varphi_T)'' (\varphi_S\varphi_S)'&\quad
I_{13}^S=(\varphi^0_S\varphi_S)' \xi \xi''\\
I_6^S=\left(\varphi^0_S(\varphi_T\varphi_S)_S\right) \xi&\quad
I_{14}^S=(\varphi^0_S\varphi_S)' \tilde{\xi} \xi''\\
I_7^S=\left(\varphi^0_S(\varphi_T\varphi_S)_S\right) \tilde{\xi}&\quad
I_{15}^S=\left((\varphi_S\xi'')_3(\varphi^0_S\varphi_S)_S\right)\\
I_8^S=\left(\varphi^0_S(\varphi_T\varphi_S)_A\right) \xi&\quad
\end{array}
\end{equation}
\begin{equation}
\begin{array}{ll}
I_1^X=\xi^0 \left(\varphi_T(\varphi_S\varphi_S)_S\right)&\quad
I_3^X=\xi^0 (\varphi_T\varphi_S) \tilde{\xi}\\
I_2^X=\xi^0 (\varphi_T\varphi_S) \xi&\quad
I_4^X=\xi^0 (\varphi_S\varphi_S)' \xi''
\end{array}
\end{equation}
\begin{equation}
\begin{array}{ll}
I_1^N=(\eta^0\eta)(\varphi_T\varphi_T)&\quad
I_3^N=((\eta\xi'')_2(\eta^0\varphi_T)_2)\\
I_2^N=((\eta\varphi_T)_2(\eta^0\varphi_T)_2)&\quad
I_4^N=\left((\eta^0\eta)_3(\eta\eta)_3\right)
\end{array}
\end{equation}
\begin{equation}
\begin{array}{ll}
I_1^Y=\xi^{\prime\,0}(\varphi_T\varphi_T)\xi''&\quad
I_3^Y=\xi^{\prime\,0}(\varphi_S\varphi_S)''\xi\\
I_2^Y=((\eta\varphi_T)_{2'}(\xi^{\prime\,0}\eta)_{2''})&\quad
I_4^Y=\xi^{\prime\,0}(\varphi_S\varphi_S)''\tilde{\xi}\;.
\end{array}
\end{equation}
We only take into account terms at most linear in $\delta v$ and neglect terms of order $\cO(\delta v/\Lambda_f)$. Plugging in the VEVs $v_{T}$ and $v_1$, the equations for the shifts of the VEVs take the following form:
\begin{eqnarray}
&&\hspace{-1cm}\begin{split}
&\dfrac{\tilde{g}_4\, v_{\xi}^3}{3\,\tilde{g}_3\,\Lambda_f}\,\left(t_{11}+ \dfrac{\tilde{g}^2_4}{3\,\tilde{g}_3^2}\,\left(t_6+t_7+t_8\right)\,\right)+\dfrac{t_3}{\Lambda_f}\,v_T^3+ (1-i)\,\dfrac{t_{16}}{\Lambda_f}\,v_1^2\,v_T+\\
&\qquad\qquad\qquad
-2\,v_T\,\left(\dfrac{2\,g\,v_T}{3}+M\right)\,\dfrac{\delta v_1}{v_1}+ \left(M+\dfrac{4\,g\,v_T}{3}\right)\,\delta v_{T1}=0
\end{split}\label{AppB:deltavT1}\\[3mm]
&&\hspace{-1cm}\dfrac{\tilde{g}_4\, v_{\xi}^3}{3\,\tilde{g}_3\,\Lambda_f }\,\left(t_{11}+ \dfrac{\tilde{g}^2_4}{3\,\tilde{g}_3^2}\,\left(t_6+t_7+t_8\right)\,\right)+\left(M-\dfrac{2\,g\,v_T}{3}\right)\,\delta v_{T2}=0
\label{AppB:deltavT2}\\[3mm]
&&\hspace{-1cm}\begin{split}
&\dfrac{\tilde{g}_4\, v_{\xi}^3}{3\,\tilde{g}_3\,\Lambda_f }\,\left(t_{11}+ \dfrac{\tilde{g}^2_4}{3\,\tilde{g}_3^2}\,\left(t_6+t_7+t_8\right)\,\right)+g_7\,v_T\,\delta v_{\xi''}+\\
&\qquad\qquad\qquad
+(1+i)\,v_T\,\left(\dfrac{2\,g\,v_T}{3}+M\right)\,\dfrac{\delta v_2}{v_1}+ \left(M-\dfrac{2\,g\,v_T}{3}\right)\,\delta v_{T3}=0
\end{split}\label{AppB:deltavT3}\\[3mm]
&&\hspace{-1cm}\left(\dfrac{9\,\tilde{g}_3\,s_{10}}{\tilde{g}_4}
+\dfrac{3\,\tilde{g}_4\,s_3}{\tilde{g}_3}+ 2\,s_6\right)\,\dfrac{v_T\, v_{\xi}}{\Lambda_f}+ 3\,g_2\,\delta v_{\tilde{\xi}}+ 2\,g_1\,\left(2\,\delta v_{S1}-\delta v_{S2}-\delta v_{S3}\right)=0
\label{AppB:deltavS1}\\[3mm]
&&\hspace{-1cm}\left(\dfrac{3\,\tilde{g}_4\,s_4}{\tilde{g}_3}-s_6- \dfrac{3}{2}\,s_8\right)\,\dfrac{v_T\, v_{\xi}}{\Lambda_f}+ 3\,g_2\delta v_{\tilde{\xi}}+ 2\,g_1\,\left(2\,\delta v_{S2}-\delta v_{S1}-\delta v_{S3}\right)=0
\label{AppB:deltavS2}\\[3mm]
&&\hspace{-1cm}\left(\dfrac{3\,\tilde{g}_4\,s_5}{\tilde{g}_3}-s_6+ \dfrac{3}{2}\,s_8\right)\,\dfrac{v_T\, v_{\xi}}{\Lambda_f}+ 3\,g_2\,\delta v_{\tilde{\xi}}+2\,g_1\,\left(2\,\delta v_{S3}-\delta v_{S1}-\delta v_{S2}\right)=0
\label{AppB:deltavS3}\\[3mm]
&&\hspace{-1cm}\dfrac{x_2\,v_T\, v_{\xi}}{3\,\tilde{g}_3\,\Lambda_f}+\dfrac{g_5}{\tilde{g}_4}\,\delta v_{\tilde{\xi}}+
2\,\tilde{g}_3\,\left(\delta v_{S1}+\delta v_{S2}+\delta v_{S3}\right)=0
\label{AppB:deltau}\\[3mm]
&&\hspace{-1cm}v_T\,\delta v_2-\dfrac{1}{2}(1-i)\,v_1\,\delta v_{T3}=0
\label{AppB:deltav2}\\[3mm]
&&\hspace{-1cm}-\dfrac{1}{2\,\Lambda_f }\,(1+i)\,n_4\,v_1^2+\dfrac{n_1}{\Lambda_f}\,v_T^2+g_9\,\delta v_{T1}=0
\label{AppB:deltav1}\\[3mm]
&&\hspace{-1cm}\dfrac{\tilde{g}_4^2\,y_3\,v_{\xi}^3}{3\,\tilde{g}_3^2\,\Lambda_f}+ M_{\xi}\,\delta v_{\xi''}+ 2\,g_{10}\,v_T\,\delta v_{T3}=0
\label{AppB:deltaupr2}
\end{eqnarray}
The typical order of all the shifts is $VEV^2/\Lambda_f\sim\lambda^4\Lambda_f$, given that $VEV/\Lambda_f\sim\lambda^2$. As a result, the relative size of a shift compared to a non-vanishing VEV is $\lambda^2$. It is therefore reasonable that all the masses $M$, $M_\xi$ and $M_\eta$ are of the order of the VEVs, since they are (at least partly) correlated to the VEVs, as one can read off from eqs. (\ref{TpTBM:solS}, \ref{TpTBM:cl22}).
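As a rough numerical illustration, if the symmetry breaking parameter is of the order of the Cabibbo angle (an assumption consistent with the estimates above, $\lambda\approx 0.22$), the shifts represent few-percent corrections to the alignment:

```latex
\begin{equation*}
\dfrac{\delta v}{VEV}\sim\dfrac{VEV}{\Lambda_f}\sim\lambda^2\approx 0.05\;.
\end{equation*}
```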
\newpage
\mathversion{bold}
\section[The Group $S_4$ -- I Version]{The Group $\mathbf{S_4}$ -- I Version}
\label{AppB:S4}
\setcounter{footnote}{3}
\mathversion{normal}
In the following we present the mechanism that produces the particular VEV alignment used in the previous sections. In table \ref{AppendixB:table:flavon_transformation} we list all the flavons and the driving fields of the model. In order to distinguish between the matter fields, the flavons and the driving fields we use the $U(1)_R$ symmetry, under which these fields have quantum numbers 1, 0 and 2, respectively.
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|c||c|c|c||c|c|}
\hline
&&&&&&&&& \\[-0,3cm]
& $\Upsilon$ & $\varphi$ & $\Upsilon^0$ & $\varphi^0$ & $\psi$ & $\eta$ & $\psi^0$ & $\xi'$ & $\xi'^0$ \\
&&&&&&&&& \\[-0,3cm]
\hline
&&&&&&&&& \\[-0,3cm]
$S_4$ & $\bf3$ & $\bf2$ & $\bf3'$ & $\bf2$ & $\bf3$ & $\bf2$ & $\bf3$ & $\bf1'$ & $\bf1'$ \\
&&&&&&&&& \\[-0,3cm]
$Z_5$ & $\omega^3$ & $\omega^3$ & $\omega^4$ & $\omega^4$ & $\omega^2$ & $\omega^2$ & $\omega$ & 1 & 1 \\
&&&&&&&&& \\[-0,3cm]
$U(1)_R$ & $0$ & $0$ & $2$ & $2$ & $0$ & $0$ & $2$ & 0 & 2 \\
\hline
\end{tabular}
\caption{\it Transformation properties of the flavons and the driving fields.}
\label{AppendixB:table:flavon_transformation}
\end{table}
The driving superpotential is given by
\begin{equation}\begin{array}{rcl}
w_d&=&g_1(\Upsilon^0\Upsilon\varphi)+g_2(\varphi^0\Upsilon\Upsilon)+g_3(\varphi^0\varphi\varphi)+\\[0.3cm]
&&+f_1(\psi^0\psi\psi)+f_2(\psi^0\psi\eta)+\\[0.3cm]
&&+M_{\xi'}\xi^{\prime\,0}\xi'+H_1\xi^{\prime\,0}(\eta\varphi)'\;.
\label{S4TBM:eq:wd:driving}
\end{array}\end{equation}
The equations for the minimum of the scalar potential are obtained by differentiating $w_d$ with respect to the driving fields:
\begin{equation}
\begin{split}
\dfrac{\partial w_d}{\partial\Upsilon^0_1}\;=&\;g_1(\varphi_1\Upsilon_2-\varphi_2\Upsilon_3)=0\\[3mm]
\dfrac{\partial w_d}{\partial\Upsilon^0_2}\;=&\;g_1(\varphi_1\Upsilon_1-\varphi_2\Upsilon_2)=0\\[3mm]
\dfrac{\partial w_d}{\partial\Upsilon^0_3}\;=&\;g_1(\varphi_1\Upsilon_3-\varphi_2\Upsilon_1)=0\;,\\[3mm]
\end{split}
\label{S4TBM:eq:wd:Neutrinos1}
\end{equation}
\begin{equation}
\begin{split}
\dfrac{\partial w_d}{\partial\varphi^0_1}\;=&\;g_2(\Upsilon_3^2+2\Upsilon_1\Upsilon_2)+g_3\varphi_1^2=0\\[3mm]
\dfrac{\partial w_d}{\partial\varphi^0_2}\;=&\;g_2(\Upsilon_2^2+2\Upsilon_1\Upsilon_3)+g_3\varphi_2^2=0\;,\\[3mm]
\end{split}
\label{S4TBM:eq:wd:Neutrinos2}
\end{equation}
\begin{equation}
\begin{split}
\dfrac{\partial w_d}{\partial\psi^0_1}\;=&\;2f_1(\psi_1^2-\psi_2\psi_3)+f_2(\eta_1\psi_2+\eta_2\psi_3)=0\\[3mm]
\dfrac{\partial w_d}{\partial\psi^0_2}\;=&\;2f_1(\psi_2^2-\psi_1\psi_3)+f_2(\eta_1\psi_1+\eta_2\psi_2)=0\\[3mm]
\dfrac{\partial w_d}{\partial\psi^0_3}\;=&\;2f_1(\psi_3^2-\psi_1\psi_2)+f_2(\eta_1\psi_3+\eta_2\psi_1)=0\;,\\[3mm]
\end{split}
\label{S4TBM:eq:wd:ChargedLeptons}
\end{equation}
\begin{equation}
\label{S4TBM:eq:wd:Quarks}
\dfrac{\partial w_d}{\partial\xi^{\prime\,0}}\;=\;M_{\xi'}\xi'+H_1(\eta_1\varphi_2-\eta_2\varphi_1)=0\;.
\end{equation}
The equations can be divided into almost decoupled groups. The first five equations, in (\ref{S4TBM:eq:wd:Neutrinos1}, \ref{S4TBM:eq:wd:Neutrinos2}), are satisfied by the alignment
\begin{equation}
\mean{\Upsilon}=(v_\Upsilon,\,v_\Upsilon,\,v_\Upsilon)\;,\qquad
\mean{\varphi}=(v_\varphi,\,v_\varphi)\;,
\label{S4TBM:vev:neutrinos}
\end{equation}
which is a stable solution of the scalar potential, with
\begin{equation}
v_\Upsilon^2=-\dfrac{g_3}{3g_2}v_\varphi^2\;,\qquad\qquad v_\varphi\;\text{undetermined}\,.
\end{equation}
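This solution can be verified directly: plugging eq. (\ref{S4TBM:vev:neutrinos}) into eq. (\ref{S4TBM:eq:wd:Neutrinos1}), each condition vanishes identically, e.g. $g_1(v_\varphi v_\Upsilon-v_\varphi v_\Upsilon)=0$, while both conditions in eq. (\ref{S4TBM:eq:wd:Neutrinos2}) collapse to the same relation,

```latex
\begin{equation*}
g_2\left(v_\Upsilon^2+2\,v_\Upsilon^2\right)+g_3\,v_\varphi^2=
3\,g_2\,v_\Upsilon^2+g_3\,v_\varphi^2=0\;,
\end{equation*}
```

which reproduces $v_\Upsilon^2=-g_3\,v_\varphi^2/(3\,g_2)$.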
The three equations in (\ref{S4TBM:eq:wd:ChargedLeptons}), almost decoupled from the others, are satisfied by two different patterns: the first is
\begin{equation} \mean{\psi}=(0,\,v_\psi,\,0)\;,\qquad
\mean{\eta}=(0,\,v_\eta)
\label{S4TBM:vev:leptons}
\end{equation}
with
\begin{equation}
v_\psi=-\dfrac{f_2}{2f_1}v_\eta\;,\qquad v_\eta\text{ undetermined}
\end{equation}
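The origin of this relation becomes transparent when eq. (\ref{S4TBM:vev:leptons}) is substituted into eq. (\ref{S4TBM:eq:wd:ChargedLeptons}): the first and third conditions are trivially satisfied, while the second reduces to

```latex
\begin{equation*}
2\,f_1\,v_\psi^2+f_2\,v_\eta\,v_\psi=
v_\psi\left(2\,f_1\,v_\psi+f_2\,v_\eta\right)=0\;,
\end{equation*}
```

whose non-trivial solution is $v_\psi=-f_2\,v_\eta/(2\,f_1)$.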
and the second is
\begin{equation}
\mean{\psi}=(v_\psi,\,v_\psi,\,v_\psi)\;,\qquad
\mean{\eta}=(v_\eta,\,-v_\eta)\;,
\label{S4TBM:vev:leptonsno}
\end{equation}
with $v_\eta$ and $v_\psi$ undetermined. Only the first solution provides the results presented in the previous sections, and we need some soft masses in order to select it as the lowest minimum of the scalar potential. We achieve this by considering some $Z_5$-breaking soft terms involving $\psi$ and $\eta$, which in their most general form can be written as
\begin{equation}
m^2_\psi |\psi|^2+ m^2_\eta |\eta|^2 +\tilde{m}^2_\psi \psi \psi + \tilde{m}^2_\eta \eta \eta\,.
\end{equation}
Assuming that $m^2_{\psi,\eta} <0$, the first two terms stabilise the potential for both vacuum configurations. On the other hand, the last two terms vanish for the first vacuum configuration and acquire a non-vanishing value in the second one. With a suitable choice of the soft parameters, these contributions can be made positive, distinguishing the two VEV configurations and ensuring that the one in eq. (\ref{S4TBM:vev:leptons}) corresponds to the lowest minimum.
Acting on the configurations of eq. (\ref{S4TBM:vev:neutrinos}) or eq. (\ref{S4TBM:vev:leptons}) with elements of the flavour symmetry group $S_4$, we can generate other minima of the scalar potential. These new minima are physically equivalent to those of the original sets, and it is not restrictive to analyse the model by choosing as local minima exactly those in eqs. (\ref{S4TBM:vev:neutrinos}) and (\ref{S4TBM:vev:leptons}) (it is possible to show that the different scenarios are related by field redefinitions, in a similar way as in the Altarelli-Feruglio model of section \ref{Sec:AFTBM} and in the $T'$-based model of section \ref{Sec:TpTBM}).
The last equation, (\ref{S4TBM:eq:wd:Quarks}), connects all the sectors and fixes the VEV of $\xi'$:
\begin{equation}
\mean{\xi'}=v_{\xi'}=\dfrac{H_1}{M_{\xi'}}v_\eta v_\varphi\;.
\end{equation}
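This expression follows immediately from evaluating eq. (\ref{S4TBM:eq:wd:Quarks}) on the VEVs in eqs. (\ref{S4TBM:vev:neutrinos}) and (\ref{S4TBM:vev:leptons}):

```latex
\begin{equation*}
M_{\xi'}\,\xi'+H_1\left(\eta_1\varphi_2-\eta_2\varphi_1\right)=
M_{\xi'}\,\xi'-H_1\,v_\eta\,v_\varphi=0\;.
\end{equation*}
```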
For the flavon field $\theta$, related to the Froggatt-Nielsen symmetry, the non-vanishing VEV is determined by the $D$-term associated with the $U(1)_{FN}$ symmetry: the mechanism works exactly in the same way as in the $T'$ model and we refer to section \ref{AppB:Tp} for the details.
When higher dimensional operators are considered, corrections to the VEV alignment are introduced. The part of the superpotential depending on the driving fields $\Upsilon^0$, $\varphi^0$, $\psi^0$ and $\xi'^0$ is modified into
\begin{equation}
w_d+\delta w_d\;,
\end{equation}
where $w_d$ is the leading order contribution studied above. The remaining part, $\delta w_d$, is the most general quartic, $S_4$-invariant polynomial linear in the driving fields:
\begin{equation}
\delta w_d=\dfrac{1}{\Lambda_f}\left(\sum_{i=1}^5x_iI_i^{\Upsilon^0}+\sum_{i=1}^{6}w_iI_i^{\varphi^0}+\sum_{i=1}^7s_iI_i^{\psi^0}+
\sum_{i=1}^3v_iI_i^{\xi'^0}\right)
\end{equation}
where $x_i$, $w_i$, $s_i$ and $v_i$ are coefficients and $\left\{I_i^{\Upsilon^0},\;I_i^{\varphi^0},\;I_i^{\psi^0},\;I_i^{\xi'^0}\right\}$ represents a basis of independent quartic invariants,
\begin{equation}\begin{array}{ll}
I_1^{\Upsilon^0}=(\Upsilon^0(\Upsilon\varphi)_3)'\xi'\qquad\qquad&
I_4^{\Upsilon^0}=((\Upsilon^0\eta)_3(\psi\psi)_3)\\
I_2^{\Upsilon^0}=(\Upsilon^0(\Upsilon\Upsilon)_3)'\xi'\qquad\qquad&
I_5^{\Upsilon^0}=((\Upsilon^0\psi)_2(\eta\eta)_2)\\
I_3^{\Upsilon^0}=((\Upsilon^0\psi)_2(\psi\psi)_2)&\\
\\
I_1^{\varphi^0}=(\varphi^0(\Upsilon\Upsilon)_2)'\xi'\qquad\qquad&
I_4^{\varphi^0}=(\varphi^0\eta)(\psi\psi)\\
I_2^{\varphi^0}=(\varphi^0(\varphi\varphi)_2)'\xi'\qquad\qquad&
I_5^{\varphi^0}=(\varphi^0\eta)(\eta\eta)\\
I_3^{\varphi^0}=((\varphi^0\eta)_2(\psi\psi)_2)\qquad\qquad&\\
\\
I_1^{\psi^0}=((\psi^0\psi)_2\eta)'\xi'\qquad\qquad&
I_4^{\psi^0}=((\psi^0\varphi)_3(\Upsilon\Upsilon)_3)\\
I_2^{\psi^0}=((\psi^0\Upsilon)_2(\Upsilon\Upsilon)_2)\qquad\qquad&
I_5^{\psi^0}=((\psi^0\Upsilon)_2(\varphi\varphi)_2)\\
I_3^{\psi^0}=(\psi^0\Upsilon)(\Upsilon\Upsilon)\qquad\qquad&
I_6^{\psi^0}=(\psi^0\Upsilon)(\varphi\varphi)\\
\\
I_1^{\xi'^0}=\xi'^0\xi'\xi'\xi'\qquad\qquad&
I_2^{\xi'^0}=\xi'^0\xi'(\varphi\eta)\\
I_3^{\xi'^0}=\xi'^0\xi'(\Upsilon\psi)\;.\qquad\qquad&\\
\end{array}\end{equation}
The new minimum for $\Upsilon$, $\varphi$, $\psi$, $\eta$ and $\xi'$ is obtained by searching for the zeros of the $F$-terms, i.e. of the first derivatives of $w_d+\delta w_d$ with respect to the driving fields $\Upsilon^0$, $\varphi^0$, $\psi^0$ and $\xi'^0$. We look for a solution that perturbs eqs. (\ref{S4TBM:vev:neutrinos}, \ref{S4TBM:vev:leptons}) to first order in the $1/\Lambda_f$ expansion: denoting the generic flavon field by $\Phi$, we can write the new VEVs as
\begin{equation}
\mean{\Phi_i}=\mean{\Phi_i}^{(LO)}+\delta\Phi_i\;.
\end{equation}
The minimum conditions become equations in the unknowns $\delta\Phi_i$, $v_\varphi$ and $v_\eta$. Keeping only the first order in the expansion, we see that the equations can be separated into different groups: the first five concern only the neutrino sector, the following three only the charged lepton one, and the last one connects the two sectors. All the perturbations turn out to be non-vanishing, apart from $\delta\eta_1$, $\delta\eta_2$ and one of the perturbations in the neutrino sector, which remain undetermined. On the other hand, the NLO terms fix the relation between $v_\varphi$ and $v_\eta$. We can conclude that the VEV alignment in eqs. (\ref{S4TBM:vev:neutrinos}, \ref{S4TBM:vev:leptons}) is stable under the NLO corrections and that the deviations are of relative order $u$ with respect to the leading order results.
\mathversion{bold}
\section[The Group $S_4$ -- II Version]{The Group $\mathbf{S_4}$ -- II Version}
\label{AppB:S42}
\setcounter{footnote}{3}
\mathversion{normal}
In this section we show that the superpotential $w_d$ given by
\begin{equation}
\begin{split}
w_d\;=&\;M_\varphi\Lambda_f (\varphi_\nu^0\varphi_\nu)+g_1\left(\varphi_\nu^0(\varphi_\nu\varphi_\nu)_3\right)+g_2\left(\varphi_\nu^0\varphi_\nu\right)\xi_\nu+\\[3mm]
&+M_\xi^2\Lambda_f^2\xi_\nu^0+M'_\xi\Lambda_f\xi_\nu^0\xi_\nu+g_3\xi_\nu^0\xi_\nu\xi_\nu+g_4\xi_\nu^0(\varphi_\nu\varphi_\nu)+\\[3mm]
&+f_1\left(\psi_\ell^0(\varphi_\ell\varphi_\ell)_2\right)+f_2\left(\psi_\ell^0(\chi_\ell\chi_\ell)_2\right) +f_3\left(\psi_\ell^0(\varphi_\ell\chi_\ell)_2\right)+\\[3mm]
&+f_4\left(\chi_\ell^0(\varphi_\ell\chi_\ell)_{3'}\right)+\dots
\end{split}
\label{AFM:wd}\;,
\end{equation}
has an isolated minimum that corresponds to the VEVs in eqs. (\ref{AFM:vev:charged:best}) and (\ref{AFM:vev:neutrinos}). At the leading order, the equations for the minimum of the potential can be divided into two decoupled parts: one for the neutrino sector and one for the charged lepton sector. Given this separation, for brevity we omit the indices $\ell$ and $\nu$ on flavons and driving fields. It is easy to see by explicit computation that the driving fields have vanishing VEVs in the limit of exact Supersymmetry. In the neutrino sector, the equations for the vanishing of the derivatives of $w_d$ with respect to each component of the driving fields are:
\begin{equation}
\begin{array}{c}
M_\varphi\Lambda_f\varphi_1+g_1(\varphi_3^2-\varphi_2^2)+g_2\xi\varphi_1=0\\[3mm]
M_\varphi\Lambda_f\varphi_3-2g_1\varphi_1\varphi_2+g_2\xi\varphi_3=0\\[3mm]
M_\varphi\Lambda_f\varphi_2+2g_1\varphi_1\varphi_3+g_2\xi\varphi_2=0\\
\\
M_\xi^2\Lambda_f^2+M'_\xi\Lambda_f\xi+g_3\xi^2+g_4(\varphi_1^2+2\varphi_2\varphi_3)=0\;.
\end{array}
\end{equation}
A solution to these equations is given in eqs. (\ref{AFM:vev:neutrinos}, \ref{AFM:CD}).
For the charged lepton sector the equations are:
\begin{equation}
\begin{array}{c}
f_1(\varphi_1^2-\varphi_2\varphi_3)+f_2(\chi_1^2-\chi_2\chi_3)+\dfrac{\sqrt{3}}{2}f_3(\varphi_2\chi_2+\varphi_3\chi_3)=0\\[3mm]
\dfrac{\sqrt{3}}{2}f_1(\varphi_2^2+\varphi_3^2)+\dfrac{\sqrt{3}}{2}f_2\left(\chi_2^2+\chi_3^2\right)+f_3\left[-\varphi_1\chi_1+ \dfrac{1}{2}\left(\varphi_2\chi_3+\varphi_3\chi_2\right)\right]=0\\
\\
f_4(\varphi_3\chi_3-\varphi_2\chi_2)=0\\[3mm]
f_4(-\varphi_1\chi_2-\varphi_2\chi_1)=0\\[3mm]
f_4(\varphi_1\chi_3+\varphi_3\chi_1)=0\;.
\end{array}
\label{AFM:Feq:wd:charged}
\end{equation}
There are two independent solutions of this set of equations. One is given in eqs. (\ref{AFM:vev:charged:best}, \ref{AFM:AB}).
A second solution is given by:
\begin{equation}
\dfrac{\mean{\varphi_\ell}}{\Lambda_f}=\left(
\begin{array}{c}
-\sqrt{2} \\
1 \\
1 \\
\end{array}
\right)A\;,\qquad
\qquad
\dfrac{\mean{\chi_\ell}}{\Lambda_f}=\left(
\begin{array}{c}
\sqrt{2} \\
1 \\
1 \\
\end{array}
\right)B\;,
\label{AFM:Fvev:charged:wrong}
\end{equation}
where the factors $A$ and $B$ should obey the relation
\begin{equation}
f_1A^2+f_2B^2+\sqrt{3}f_3AB=0\;.
\label{AFM:ABbis}
\end{equation}
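It is straightforward to check eq. (\ref{AFM:ABbis}): inserting the VEVs of eq. (\ref{AFM:Fvev:charged:wrong}) into the first condition of eq. (\ref{AFM:Feq:wd:charged}) gives

```latex
\begin{equation*}
f_1\left(2A^2-A^2\right)+f_2\left(2B^2-B^2\right)+
\dfrac{\sqrt{3}}{2}\,f_3\left(AB+AB\right)=
f_1A^2+f_2B^2+\sqrt{3}\,f_3AB=0\;,
\end{equation*}
```

while the second condition reproduces the same relation up to an overall factor $\sqrt{3}$ and the three conditions proportional to $f_4$ vanish identically.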
At this level we assume that some additional input neglected so far in the analysis, such as for instance a choice of the soft Supersymmetry breaking parameters, selects the first solution as the lowest minimum of the scalar potential.
Notice the existence of a flat direction related to an arbitrary, common rescaling of $A$ and $B$: denoting by $m^2_{\varphi_\ell}$ and $m^2_{\chi_\ell}$ the soft masses of the two flavons $\varphi_\ell$ and $\chi_\ell$, we can assume $m^2_{\varphi_\ell},m^2_{\chi_\ell}<0$, so that $\mean{\varphi_\ell}$ and $\mean{\chi_\ell}$ slide to a large scale, which we assume to be stabilised by one-loop radiative corrections, thereby fixing $A$ and $B$.
It is important to note that the stability of the alignment in eqs. (\ref{AFM:vev:charged:best}) and (\ref{AFM:vev:neutrinos}) under small perturbations can be proven. If one introduces small parameters in the VEVs of the fields as follows
\begin{equation}
\dfrac{\mean{\varphi_\ell}}{\Lambda_f}=\left(
\begin{array}{c}
x_1 \\
1 \\
x_2 \\
\end{array}
\right)A\;,\qquad
\qquad
\dfrac{\mean{\chi_\ell}}{\Lambda_f}=\left(
\begin{array}{c}
y_1 \\
y_2 \\
1 \\
\end{array}
\right)B\;,\qquad
\qquad
\dfrac{\mean{\varphi_\nu}}{\Lambda_f}=\left(
\begin{array}{c}
z \\
1 \\
-1 \\
\end{array}
\right)C\;,
\end{equation}
it is only a matter of simple algebra to show that, for small $(z,\,x_1,\,x_2,\,y_1,\,y_2)$, the only solution minimising the scalar potential in the supersymmetric limit is indeed the one with \mbox{$(z,\,x_1,\,x_2,\,y_1,\,y_2)=(0,\,0,\,0,\,0,\,0)$}.
Given the symmetry of the superpotential $w_d$, starting from the field configurations of eqs. (\ref{AFM:vev:neutrinos}, \ref{AFM:CD}),
(or (\ref{AFM:vev:charged:best}, \ref{AFM:AB})) and acting on them with elements of the flavour symmetry group $S_4\times Z_4$, we can generate other minima of the scalar potential. Some of them are
\begin{equation}
\mean{\varphi_\nu}\propto\left(
\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}
\right)\;,\qquad\qquad
\mean{\varphi_\nu}\propto\left(
\begin{array}{c}
0 \\
1 \\
1 \\
\end{array}
\right)\;.
\end{equation}
The new minima are however physically equivalent to those of the original set and it is not restrictive to analyse the model by choosing as local minimum the specific field configuration discussed so far.
We can now consider the higher-order terms and their effects on the flavon VEV alignment. The superpotential $w_d$, linear in the driving fields $\psi_\ell^0$, $\chi_\ell^0$, $\xi_\nu^0$ and $\varphi_\nu^0$, is modified into:
\begin{equation}
w_d+\Delta w_d\;,
\end{equation}
where $\Delta w_d$ is the NLO contribution, suppressed by one power of $1/\Lambda_f$ with respect to $w_d$. The corrective term $\Delta w_d$ is given by the most general quartic, $S_4\times Z_4 \times U(1)_{FN}$-invariant polynomial linear in the driving fields, and can be obtained by inserting an additional flavon field in all the leading order terms. The $Z_4$-charges prevent any addition of the flavons $\varphi_\ell$ and $\chi_\ell$ at the NLO, while a factor of $\xi_\nu$ or $\varphi_\nu$ can be added to all the leading order terms. The full expression of $\Delta w_d$ is the following:
\begin{equation}
\Delta w_d=\frac{1}{\Lambda_f}\left(\sum_{i=1}^3x_iI_i^{\xi_\nu^0}+\sum_{i=1}^{5}w_iI_i^{\varphi_\nu^0}+\sum_{i=1}^6s_iI_i^{\psi_\ell^0}+
\sum_{i=1}^5v_iI_i^{\chi_\ell^0}\right)
\end{equation}
where $x_i$, $w_i$, $s_i$ and $v_i$ are coefficients and $\left\{I_i^{\xi_\nu^0},\;I_i^{\varphi_\nu^0},\;I_i^{\psi_\ell^0},\;I_i^{\chi_\ell^0}\right\}$ represent a basis of independent quartic invariants:
\begin{equation}\begin{array}{ll}
I_1^{\xi_\nu^0}=\xi_\nu^0\xi_\nu\xi_\nu\xi_\nu\qquad\qquad&
I_3^{\xi_\nu^0}=\xi_\nu^0\xi_\nu(\varphi_\nu\varphi_\nu)\\
I_2^{\xi_\nu^0}=\xi_\nu^0(\varphi_\nu(\varphi_\nu\varphi_\nu)_3)\qquad\qquad&
\end{array}\end{equation}
\begin{equation}\begin{array}{ll}
I_1^{\varphi_\nu^0}=(\varphi_\nu^0\varphi_\nu)(\varphi_\nu\varphi_\nu)\qquad\qquad&
I_4^{\varphi_\nu^0}=\left(\varphi_\nu^0(\varphi_\nu\varphi_\nu)_3\right)\xi_\nu\\
I_2^{\varphi_\nu^0}=\left((\varphi_\nu^0\varphi_\nu)_2(\varphi_\nu\varphi_\nu)_2\right)\qquad\qquad& I_5^{\varphi_\nu^0}=(\varphi_\nu^0\varphi_\nu)\xi_\nu\xi_\nu\\
I_3^{\varphi_\nu^0}=\left((\varphi_\nu^0\varphi_\nu)_3(\varphi_\nu\varphi_\nu)_3\right)\qquad\qquad&
\end{array}\end{equation}
\begin{equation}\begin{array}{ll}
I_1^{\psi_\ell^0}=\left((\psi_\ell^0\varphi_\nu)_3(\varphi_\ell\chi_\ell)_3\right)\qquad\qquad&
I_4^{\psi_\ell^0}=\left(\psi_\ell^0(\varphi_\ell\varphi_\ell)_2\right)\xi_\nu\\
I_2^{\psi_\ell^0}=\left((\psi_\ell^0\varphi_\nu)_{3'}(\varphi_\ell\chi_\ell)_{3'}\right)\qquad\qquad&
I_5^{\psi_\ell^0}=\left(\psi_\ell^0(\chi_\ell\chi_\ell)_2\right)\xi_\nu\\
I_3^{\psi_\ell^0}=\left((\psi_\ell^0\varphi_\nu)_3(\chi_\ell\chi_\ell)_3\right)\qquad\qquad&
I_6^{\psi_\ell^0}=\left(\psi_\ell^0(\varphi_\ell\chi_\ell)_2\right)\xi_\nu\\
\end{array}\end{equation}
\begin{equation}\begin{array}{ll}
I_1^{\chi_\ell^0}=(\chi_\ell^0\varphi_\nu)'(\varphi_\ell\chi_\ell)'\qquad\qquad&
I_4^{\chi_\ell^0}=\left((\chi_\ell^0\varphi_\nu)_{3'}(\varphi_\ell\chi_\ell)_{3'}\right)\\
I_2^{\chi_\ell^0}=\left((\chi_\ell^0\varphi_\nu)_2(\varphi_\ell\chi_\ell)_2\right)\qquad\qquad&
I_5^{\chi_\ell^0}=\left(\chi_\ell^0(\varphi_\ell\chi_\ell)_{3'}\right)\xi_\nu\\
I_3^{\chi_\ell^0}=\left((\chi_\ell^0\varphi_\nu)_3(\varphi_\ell\chi_\ell)_3\right)\;.\qquad\qquad&
\end{array}\end{equation}
The new VEV configuration is obtained by imposing the vanishing of the first derivative of $w_d+\Delta w_d$ with respect to the driving fields $\xi_\nu^0$, $\varphi_\nu^0$, $\psi_\ell^0$ and $\chi_\ell^0$. We look for a solution that perturbs eqs. (\ref{AFM:vev:charged:best}) and (\ref{AFM:vev:neutrinos}) to first order in the $1/\Lambda_f$ expansion: for all components of the flavons $\Phi=(\xi_\nu,~\varphi_\nu, ~\varphi_\ell, ~\chi_\ell)$, we denote the shifted VEVs by
\begin{equation}
\langle \Phi \rangle=\langle \Phi \rangle_{LO}+\delta \Phi
\end{equation}
where $\langle \Phi \rangle_{LO}$ are given by eqs. (\ref{AFM:vev:charged:best}) and (\ref{AFM:vev:neutrinos}).
After some straightforward algebra, the results can be described as follows. In the neutrino sector the shifts $\delta \xi_\nu$ and $\delta \varphi_\nu$ turn out to be proportional to the leading order VEVs $\langle \Phi \rangle_{LO}$ and can be absorbed in a redefinition of the parameters $C$ and $D$. In the charged lepton sector, instead, the shifts $\delta \varphi_\ell$ and $\delta \chi_\ell$ have a non-trivial structure, so that the leading order texture is modified:
\begin{equation}
\mean{\varphi_\ell}=\left(
\begin{array}{c}
{\delta \varphi_\ell}_1\\
A' \Lambda_f \\
0\\
\end{array}
\right)\qquad
\qquad\mean{\chi_\ell}=\left(
\begin{array}{c}
{\delta \chi_\ell}_1 \\
0 \\
B' \Lambda_f \\
\end{array}
\right)
\label{AFM:vev:charged:nlo}
\end{equation}
where $A'$ and $B'$ satisfy a relation similar to that in eq. (\ref{AFM:AB}) and the shifts ${\delta \varphi_\ell}_1$ and ${\delta \chi_\ell}_1$ are proportional to $v'v\Lambda_f$, i.e. they are suppressed by a factor $v'$ with respect to the leading order entries $A\Lambda_f$ and $B\Lambda_f$, respectively.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{Renormalisation Group Equations}
\label{AppendixC}
\lhead[\fancyplain{}{\bfseries\leftmark}]{\fancyplain{}{\bfseries\thepage}}
\rhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries\rightmark}}
\renewcommand{\theequation}{C.\;\arabic{equation}}
\setcounter{equation}{0}
\setcounter{footnote}{3}
\setcounter{section}{0}
In order to calculate the evolution of the fermion mass matrix from the cutoff of the low-energy theory down to the electroweak energy scale, the renormalisation group equations for all the parameters have to be solved simultaneously. We use the notation defined in section \ref{Sec:Running}, where a superscript $(n)$ denotes a quantity between the $n$th and the $(n+1)$th mass threshold. When all the right-handed neutrinos are integrated out, the renormalisation group equations can be recovered by setting the neutrino Yukawa coupling $Y_\nu$ to zero, while in the full theory above the highest See-Saw scale, the superscript $(n)$ has to be omitted.
In the following, $t:=\ln(\mu/\mu_0)$, $Y_{u(d)}$ is the Yukawa coupling of the up- (down-) quarks, and the gauge couplings are taken in the GUT normalisation, such that $g_2=g$ and $g_1=\sqrt{5/3}\, g'$.
In the MSSM context the 1-loop renormalisation group equations for $\accentset{(n)}{Y_e}$, $\accentset{(n)}{Y}_\nu$, $\accentset{(n)}{M}$, $\accentset{(n)}{\kappa}$, $\accentset{(n)}{Y_d}$, and $\accentset{(n)}{Y_u}$ are given by
\begin{equation}
\label{LMP:EqRGEMSSM}
\hspace{-4mm}
\begin{array}{ccl}
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{Y_e} & = & Y_e\left\{ 3Y_e^\dagger Y_e +\accentset{(n)}{Y}_\nu ^\dagger \accentset{(n)}{Y}_\nu + \Tr\Big[3Y_d^\dagger Y_d +Y_e^\dagger Y_e\Big] - \dfrac{9}{5}g_1^2 - 3g_2^2\right\}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{Y_\nu} &=& \accentset{(n)}{Y}_\nu \left\{ 3 \accentset{(n)}{Y}^\dagger_\nu \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e + \Tr\Big[3Y_u^\dagger Y_u + \accentset{(n)}{Y}^{\dagger}_\nu \accentset{(n)}{Y}_\nu\Big] - \dfrac{3}{5} g_1^2 - 3 g_2^2 \right\}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t} \accentset{(n)}{M}_R &=& \vphantom{\dfrac{1}{2}} 2\Big(\accentset{(n)}{Y}_\nu \accentset{(n)}{Y}^\dagger_\nu\Big) \accentset{(n)}{M}_R + 2\accentset{(n)}{M}_R\Big(\accentset{(n)}{Y}_\nu \accentset{(n)}{Y}^\dagger_\nu\Big)^T\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{\kappa} & = & \Big[\accentset{(n)}{Y}^\dagger_\nu \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e \Big]^T \accentset{(n)}{\kappa} + \accentset{(n)}{\kappa} \Big[\accentset{(n)}{Y}^\dagger_\nu\accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e \Big] + 2 \Tr\Big[3 Y_u^\dagger Y_u + \accentset{(n)}{Y}^{\dagger}_\nu \accentset{(n)}{Y}_\nu \Big]\accentset{(n)}{\kappa}+\\[3mm]
&&-\dfrac{6}{5} g_1^2 \accentset{(n)}{\kappa}- 6 g_2^2 \accentset{(n)}{\kappa}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{Y_d}& = & Y_d\left\{ 3Y_d^\dagger Y_d + Y_u^\dagger Y_u + \Tr\Big[3 Y_d^\dagger Y_d + Y_e^\dagger Y_e\Big] - \dfrac{7}{15}g_1^2 - 3g_2^2 - \dfrac{16}{3}g_3^2 \right\}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t} \accentset{(n)}{Y_u} & = & Y_u\left\{ Y_d^\dagger Y_d + 3 Y_u^\dagger Y_u + \Tr\Big[3Y_u^\dagger Y_u + \accentset{(n)}{Y}_\nu ^\dagger \accentset{(n)}{Y}_\nu\Big] - \dfrac{13}{15}g_1^2- 3g_2^2 - \dfrac{16}{3}g_3^2\right\}\;.
\end{array}
\end{equation}
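As an illustration of how equations of this type behave, consider the much simpler one-loop gauge sector, where the equations decouple and the inverse couplings $\alpha_i^{-1}$ run linearly in $t=\ln(\mu/\mu_0)$. The sketch below is a toy cross-check of the familiar approximate MSSM gauge coupling unification near $2\times10^{16}$ GeV; the input values $\alpha_i^{-1}(M_Z)$ are rounded reference numbers, not taken from the text:

```python
import math

# One-loop MSSM beta coefficients in the GUT normalisation (g1 = sqrt(5/3) g')
B = (33.0 / 5.0, 1.0, -3.0)

# Rounded inverse couplings at mu0 = M_Z (assumed reference values, not fitted here)
ALPHA_INV_MZ = (59.0, 29.6, 8.5)
M_Z = 91.19  # GeV

def alpha_inv(mu):
    """One-loop running: d(alpha_i^-1)/dt = -b_i/(2 pi), with t = ln(mu/mu0)."""
    t = math.log(mu / M_Z)
    return [a0 - b * t / (2.0 * math.pi) for a0, b in zip(ALPHA_INV_MZ, B)]

if __name__ == "__main__":
    a = alpha_inv(2.0e16)
    # the three inverse couplings approximately meet, each close to ~24
    print(a)
```

At one loop each $\alpha_i^{-1}$ is an exact straight line in $t$, so no numerical integration is needed; the full Yukawa system above instead couples all flavour matrices and has to be solved numerically.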
In the Standard Model extended by singlet neutrinos, the renormalisation group equations for the same quantities are given by
\begin{equation}
\label{LMP:EqRGESM}
\hspace{-4mm}
\begin{array}{ccl}
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{Y_e} & = & Y_e \left\{ \dfrac{3}{2} Y_e^\dagger Y_e -\dfrac{3}{2} \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu +\Tr\left[ 3\,Y_u^\dagger Y_u + 3\,Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e\right] -\dfrac{9}{4} g_1^2 - \dfrac{9}{4} g_2^2\right\}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{Y_\nu} & = & \accentset{(n)}{Y}_\nu \left\{ \dfrac{3}{2} \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu - \dfrac{3}{2} Y_e^\dagger Y_e +\Tr\left[ 3\,Y_u^\dagger Y_u + 3\,Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e\right] -\dfrac{9}{20} g_1^2 -\dfrac{9}{4} g_2^2 \right\}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{M} &=& \Big(\accentset{(n)}{Y}_\nu \accentset{(n)}{Y}^\dagger_\nu\Big) \accentset{(n)}{M} + \accentset{(n)}{M} \Big(\accentset{(n)}{Y}_\nu \accentset{(n)}{Y}^\dagger_\nu\Big)^T \;,\\[3mm]
16\pi^2\dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{\kappa} & = & \dfrac{1}{2} \Big[ \accentset{(n)}{Y}^\dagger_\nu \accentset{(n)}{Y}_\nu -3 Y_e^\dagger Y_e \Big]^T \accentset{(n)}{\kappa} +\dfrac{1}{2}\accentset{(n)}{\kappa} \Big[\accentset{(n)}{Y}^\dagger_\nu \accentset{(n)}{Y}_\nu -3 Y_e^\dagger Y_e \Big]+\\[3mm]
&&+2 \Tr\left[ 3\,Y_u^\dagger Y_u + 3\,Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e\right]\accentset{(n)}{\kappa}
- 3 g_2^2\accentset{(n)}{\kappa} +\lambda_H\accentset{(n)}{\kappa}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{Y_d} & = & Y_d \left\{ \dfrac{3}{2} Y_d^\dagger Y_d - \dfrac{3}{2} Y_u^\dagger Y_u + \Tr\Big[3\,Y_u^\dagger Y_u + 3\,Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e \Big]+\right.\\[3mm]
&&\left.\qquad-\dfrac{1}{4} g_1^2 - \dfrac{9}{4} g_2^2 - 8g_3^2\right\}\;,\\[3mm]
16\pi^2 \dfrac{\mathrm{d}}{\mathrm{d} t} \accentset{(n)}{Y_u} & = & Y_u \left\{ \dfrac{3}{2} Y_u^\dagger Y_u - \dfrac{3}{2} Y_d^\dagger Y_d + \Tr\Big[3\,Y_u^\dagger Y_u + 3\,Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e \Big]+\right.\\[3mm]
&&\left.\qquad-\dfrac{17}{20} g_1^2 - \dfrac{9}{4} g_2^2 - 8g_3^2 \right\}\;,\\[3mm]
16\pi^2\dfrac{\mathrm{d}}{\mathrm{d} t}\accentset{(n)}{\lambda_H} & = & 6\lambda_H^2 -3\lambda_H \left(3g_2^2+\dfrac{3}{5} g_1^2\right) +3 g_2^4 +\dfrac{3}{2}\left(\dfrac{3}{5} g_1^2+g_2^2\right)^2 +\\[3mm]
&&+4\lambda_H \Tr\Big[3\,Y_u^\dagger Y_u + 3\,Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e \Big]+\\[3mm]
&&-8 \Tr\Big[ 3\,Y_u^\dagger Y_u\,Y_u^\dagger Y_u + 3\,Y_d^\dagger Y_d\,Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger\accentset{(n)}{Y}_\nu \accentset{(n)}{Y}_\nu^\dagger\accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e\,Y_e^\dagger Y_e \Big]\;.
\end{array}
\end{equation}
We use the convention that the Higgs self-interaction term in the Lagrangian is $-\lambda_H (H^\dagger H)^2/4$.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{Mass Insertion and $1$-Loop Formulae}
\label{AppendixE}
\mathversion{normal}
\lhead[\fancyplain{}{\bfseries\leftmark}]{\fancyplain{}{\bfseries\thepage}}
\rhead[\fancyplain{}{\bfseries\thepage}]{\fancyplain{}{\bfseries\rightmark}}
\renewcommand{\theequation}{D.\;\arabic{equation}}
\setcounter{equation}{0}
\setcounter{footnote}{3}
\setcounter{section}{0}
\section{Mass Insertion Formulae}
\label{AppE:MI}
\setcounter{footnote}{3}
The ratios $R_{ij}$ can be expressed as:
\begin{equation}
R_{ij}= \frac{48\pi^3 \alpha}{G_F^2 m_{SUSY}^4}\left(\vert A_L^{ij} \vert^2+\vert A_R^{ij} \vert^2 \right)\;.
\label{AppE:LFV:rij}
\end{equation}
At the leading order, the amplitudes $A_L^{ij}$ and $A_R^{ij}$ are given by:
\begin{eqnarray}
A_L^{ij}&=&a_{LL} (\delta_{ij})_{LL} + a_{RL} \frac{m_{SUSY}}{m_i} (\delta_{ij})_{RL}\nn\\
A_R^{ij}&=&a_{RR} (\delta_{ij})_{RR} + a_{LR} \frac{m_{SUSY}}{m_i} (\delta_{ij})_{LR}
\label{AppE:LFV:ALAR}
\end{eqnarray}
where $a_{CC'}$ $(C,C'=L,R)$ are dimensionless functions of the ratios $M_{1,2}/m_{SUSY}$, $\mu/m_{SUSY}$ and of $\tan\theta_W$. Their typical size is one tenth of $g^2/(16\pi^2)$, $g$ being the $SU(2)_L$ gauge coupling constant. In our conventions their explicit expression is given by:
\begin{equation}
\hspace{-1cm}
\begin{array}{rcl}
a_{LL}&=&\dfrac{g^2}{16\pi^2}\left[ f_{1n}(a_2)+f_{1c}(a_2)+
\dfrac{M_2\mu\tan\beta}{M_2^2-\mu^2}\Big(f_{2n}(a_2,b)+f_{2c}(a_2,b)\Big)\right.\\
&&+\left.\tan^2\theta_W\left(f_{1n}(a_1)- \dfrac{M_1\mu\tan\beta}{M_1^2-\mu^2}f_{2n}(a_1,b)- M_1\left(\left(\dfrac{z_i}{y_i}+\zeta\right) m_{SUSY}-\mu\tan\beta\right)\dfrac{f_{3n}(a_1)}{m_{SUSY}^2}\right)\right]\\[3mm]
a_{RL}&=&\dfrac{g^2}{16\pi^2}\tan^2\theta_W\dfrac{M_1}{m_{SUSY}} 2 f_{2n}(a_1)\\[3mm]
a_{RR}&=&\dfrac{g^2}{16\pi^2}\tan^2\theta_W \left[4 f_{1n}(a_1)+ 2\dfrac{M_1\mu\tan\beta}{M_1^2-\mu^2}f_{2n}(a_1,b)- M_1\left(\left(\dfrac{z_i}{y_i}+\zeta\right) m_{SUSY}-\mu\tan\beta\right)\dfrac{f_{3n}(a_1)}{m_{SUSY}^2}\right]\\[3mm]
a_{LR}&=&\dfrac{g^2}{16\pi^2}\tan^2\theta_W\dfrac{M_1}{m_{SUSY}} 2 f_{2n}(a_1)\;,
\end{array}
\label{LFV:MIcoefficients}
\end{equation}
where $a_{1,2}=M^2_{1,2}/m_{SUSY}^2$, $b=\mu^2/m_{SUSY}^2$ and $f_{i(c,n)}(x,y)=f_{i(c,n)}(x)-f_{i(c,n)}(y)$. The functions $f_{in}(x)$ and $f_{ic}(x)$ are given by:
\begin{equation}
\begin{array}{rcl}
f_{1n}(x)&=&(-17 x^3+9 x^2+9 x-1+6 x^2(x+3) \log x)/(24(1-x)^5)\\[3mm]
f_{2n}(x)&=&(-5 x^2+4 x+1+2x(x+2)\log x)/(4(1-x)^4)\\[3mm]
f_{3n}(x)&=&(1+9x -9x^2-x^3+6x(x+1) \log x)/(2(1-x)^5)\\[3mm]
f_{1c}(x)&=&(-x^3-9x^2+9x+1+6x(x+1) \log x)/(6(1-x)^5)\\[3mm]
f_{2c}(x)&=&(-x^2-4 x+5+2(2x+1)\log x)/(2(1-x)^4)\;.
\end{array}
\label{LFV:MIfunctions}
\end{equation}
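As a numerical cross-check of eq. (\ref{LFV:MIfunctions}): despite the $(1-x)^4$ and $(1-x)^5$ denominators, all five functions are regular at $x=1$, the point of degenerate masses. The sketch below evaluates them just off that point; the finite limits quoted in the comment come from our own Taylor expansion around $x=1$ and are not stated in the text:

```python
import math

def f1n(x): return (-17*x**3 + 9*x**2 + 9*x - 1 + 6*x**2*(x + 3)*math.log(x)) / (24*(1 - x)**5)
def f2n(x): return (-5*x**2 + 4*x + 1 + 2*x*(x + 2)*math.log(x)) / (4*(1 - x)**4)
def f3n(x): return (1 + 9*x - 9*x**2 - x**3 + 6*x*(x + 1)*math.log(x)) / (2*(1 - x)**5)
def f1c(x): return (-x**3 - 9*x**2 + 9*x + 1 + 6*x*(x + 1)*math.log(x)) / (6*(1 - x)**5)
def f2c(x): return (-x**2 - 4*x + 5 + 2*(2*x + 1)*math.log(x)) / (2*(1 - x)**4)

# x -> 1 limits (our Taylor expansion): -1/80, 1/24, 1/20, 1/60, -1/12
LIMITS = [(f1n, -1/80), (f2n, 1/24), (f3n, 1/20), (f1c, 1/60), (f2c, -1/12)]

if __name__ == "__main__":
    for f, lim in LIMITS:
        # the apparent poles cancel: values just above and below x = 1 agree
        assert abs(f(1.01) - lim) < 1e-2 and abs(f(0.99) - lim) < 1e-2
```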
\mathversion{bold}
\section{Notations and $1$-Loop Formulae for $R_{ij}$ and $\delta a_\mu^{SUSY}$}
\label{AppE:one-loop}
\setcounter{footnote}{3}
\mathversion{normal}
In this part we fix the notations and we report the formulae for $R_{ij}$ and for $\delta a_\mu^{SUSY}$ which we have used in section \ref{Sec:FlavourViolation}. The main references are \cite{deltaamu_susy,Arganda,HisanoFukuyama,g2BR}. The mass matrix of the charginos is given by:
\begin{equation}
-\mscr{L}_m\supset\left(\overline{\widetilde{W}^-_R}\quad\overline{\widetilde{H}^-_{2R}}\right)M_c\left(
\begin{array}{c}
\widetilde{W}^-_L \\
\widetilde{H}^-_{1L} \\
\end{array}
\right)+\text{h.c.}
\end{equation}
with
\begin{equation}
M_c=
\left(
\begin{array}{cc}
M_2 & \sqrt{2}m_W\cos{\beta} \\
\sqrt{2}m_W\sin{\beta} & \mu \\
\end{array}
\right)\;.
\end{equation}
This matrix is diagonalised by $2\times 2$ rotation matrices $O_L$ and $O_R$ as:
\begin{equation}
O_R M_c O_L^T=\diag\left(M_{ \widetilde{\chi}^-_1}, M_{ \widetilde{\chi}^-_2}\right)\;,
\end{equation}
where the diagonalising matrices connect mass and interaction eigenstates in the following way:
\begin{equation}
\left(
\begin{array}{c}
\widetilde{\chi}^-_{1L} \\
\widetilde{\chi}^-_{2L} \\
\end{array}
\right)=O_L\left(
\begin{array}{c}
\widetilde{W}^-_L \\
\widetilde{H}^-_{1L} \\
\end{array}
\right)\;,\qquad
\left(
\begin{array}{c}
\widetilde{\chi}^-_{1R} \\
\widetilde{\chi}^-_{2R} \\
\end{array}
\right)=O_R\left(
\begin{array}{c}
\widetilde{W}^-_R \\
\widetilde{H}^-_{2R} \\
\end{array}
\right)
\end{equation}
and the mass eigenstates are written as $\widetilde{\chi}^-_A= \widetilde{\chi}^-_{AL}+ \widetilde{\chi}^-_{AR}$ ($A=1,2$) with masses $M_{ \widetilde{\chi}^-_A}$.
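As a minimal numerical sketch of this diagonalisation (with illustrative parameter values, not taken from the text), the chargino masses can be computed as the singular values of $M_c$, i.e. the square roots of the eigenvalues of $M_c M_c^T$; their product must reproduce $|\det M_c| = |M_2\mu - m_W^2\sin 2\beta|$:

```python
import math

M_W = 80.4  # GeV

def chargino_masses(M2, mu, tan_beta):
    """Singular values of the 2x2 chargino mass matrix M_c (real entries assumed)."""
    beta = math.atan(tan_beta)
    a = math.sqrt(2.0) * M_W * math.cos(beta)   # upper-right entry of M_c
    b = math.sqrt(2.0) * M_W * math.sin(beta)   # lower-left entry of M_c
    # eigenvalues of the symmetric matrix M_c M_c^T via the closed 2x2 formula
    h11 = M2**2 + a**2
    h22 = b**2 + mu**2
    h12 = M2*b + a*mu
    tr, det = h11 + h22, h11*h22 - h12**2
    disc = math.sqrt(tr**2 - 4.0*det)
    return math.sqrt((tr - disc)/2.0), math.sqrt((tr + disc)/2.0)

if __name__ == "__main__":
    m1, m2 = chargino_masses(M2=200.0, mu=500.0, tan_beta=10.0)
    det_mc = abs(200.0*500.0 - M_W**2 * math.sin(2.0*math.atan(10.0)))
    assert abs(m1*m2 - det_mc) < 1e-6 * det_mc   # product of masses = |det M_c|
```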
The neutralino mass matrix is given by:
\begin{equation}
-\mscr{L}_m\supset\dfrac{1}{2}\left(\begin{array}{cccc}\widetilde{B}_L&\widetilde{W}^0_L&
\widetilde{H}^0_{1L}&\widetilde{H}^0_{2L}\end{array}\right)M_N
\left(
\begin{array}{c}
\widetilde{B}_L \\
\widetilde{W}^0_L \\
\widetilde{H}^0_{1L} \\
\widetilde{H}^0_{2L} \\
\end{array}
\right)+\text{h.c.}
\end{equation}
with
\begin{equation}
M_N=\left(
\begin{array}{cccc}
M_1 & 0 & -m_Z\sin{\theta_W}\cos{\beta} & m_Z\sin{\theta_W}\sin{\beta} \\
0 & M_2 & m_Z\cos{\theta_W}\cos{\beta} & -m_Z\cos{\theta_W}\sin{\beta} \\
-m_Z\sin{\theta_W}\cos{\beta} & m_Z\cos{\theta_W}\cos{\beta} & 0 & -\mu \\
m_Z\sin{\theta_W}\sin{\beta} & -m_Z\cos{\theta_W}\sin{\beta} & -\mu & 0 \\
\end{array}
\right)
\label{MN}
\end{equation}
We can diagonalise $M_N$ by a rotation matrix $O_N$:
\begin{equation}
O_N M_N O_N^T=\diag\left(M_{ \widetilde{\chi}^0_1}, M_{ \widetilde{\chi}^0_2}, M_{ \widetilde{\chi}^0_3}, M_{ \widetilde{\chi}^0_4}\right)\;,
\end{equation}
where $O_N$ connects mass and interaction eigenstates in the following way:
\begin{equation}
\left(
\begin{array}{c}
\widetilde{\chi}^0_{1L} \\
\widetilde{\chi}^0_{2L} \\
\widetilde{\chi}^0_{3L} \\
\widetilde{\chi}^0_{4L} \\
\end{array}
\right)=O_N\left(
\begin{array}{c}
\widetilde{B}_L \\
\widetilde{W}^0_L \\
\widetilde{H}^0_{1L} \\
\widetilde{H}^0_{2L} \\
\end{array}
\right)
\end{equation}
and the mass eigenstates are given by $\widetilde{\chi}^0_A= \widetilde{\chi}^0_{AL}+ \widetilde{\chi}^0_{AR}$ ($A=1,2,3,4$) with masses $M_{ \widetilde{\chi}^0_A}$.\\
The mass matrices for the charged sleptons and for sneutrinos are given by:
\begin{equation}
-\mscr{L}_m\supset\left(\overline{\tilde{\ell}}\quad\tilde{\ell}^c\right)
\hat{\cM}_e^2
\left(
\begin{array}{c}
\tilde{\ell} \\
\overline{\tilde{\ell}^{c}} \\
\end{array}
\right)+\overline{\tilde{\nu}}\,\hat{m}^2_{\nu LL}\tilde{\nu}
\end{equation}
with
\begin{equation}
\hat{\cM}_e^2=\left(
\begin{array}{cc}
\hat{m}^2_{eLL} & \hat{m}_{eLR}^2 \\
\hat{m}_{eRL}^2 & \hat{m}^2_{e RR} \\
\end{array}
\right)
\end{equation}
where $\hat{m}^2_{(e,\nu)LL}$ and $\hat{m}^2_{eRR}$ are in general hermitian matrices and $\hat{m}^2_{eLR}=\left(\hat{m}^2_{eRL}\right)^\dag$.
We diagonalise the mass matrix $\hat{\cM}_e^2$ by a $6\times 6$ rotation matrix $U^{\tilde{\ell}}$ as:
\begin{equation}
U^{\tilde{\ell}}\,\hat{\cM}_e^2U^{\tilde{\ell}\,T}=m_{\tilde{\ell}}^2
\end{equation}
where the mass eigenstates are:
\begin{equation}
\tilde{\ell}_X=U^{\tilde{\ell}}_{X,i}\tilde{\ell}_i+U^{\tilde{\ell}}_{X,i+3}\bar{\tilde{\ell}}^{c}_i
\end{equation}
with masses $m^2_{\tilde{\ell}_X}$ ($X=1,\ldots,6$).
\noindent Analogously, the sneutrino mass matrix is diagonalised by:
\begin{equation}
U^{\tilde{\nu}} \hat{m}_{\nu LL}^2U^{\tilde{\nu} T}=m_{\tilde{\nu}}^2
\end{equation}
where the mass eigenstates are:
\begin{equation}
\tilde{\nu}_X=U^{\tilde{\nu}}_{X,i}\tilde{\nu}_i
\end{equation}
with masses $m^2_{\tilde{\nu}_X}$ ($X=1,2,3$).\\
\\
The normalised branching ratios, $R_{ij}$, for the LFV transitions $\ell_i\rightarrow \ell_j\gamma$ are:
\begin{equation}
R_{ij}=\dfrac{BR(\ell_i\rightarrow \ell_j\gamma)}{BR(\ell_i\rightarrow \ell_j\nu_i\overline{\nu}_j)}=\dfrac{48\pi^3\alpha}{G_F^2}\left(|A_2^L|^2+|A_2^R|^2\right)
\end{equation}
and the decay rates are given by:
\begin{equation}
\begin{array}{ccl}
\Gamma(\ell_i\rightarrow \ell_j\nu_i\overline{\nu}_j)&=&\dfrac{G_F^2}{192\pi^3}m_i^5\;,\\[3mm]
\Gamma(\ell_i\rightarrow \ell_j\gamma)&=&\dfrac{e^2}{16\pi}m_i^5\left(|A_2^L|^2+|A_2^R|^2\right)\;.
\end{array}
\end{equation}
Each coefficient $A_2^{L,R}$ can be written as a sum of two terms:
\begin{equation}
A_2^{L,R}=A_2^{(n)L,R}+A_2^{(c)L,R}\;,
\end{equation}
where $A_2^{(n)L,R}$ and $A_2^{(c)L,R}$ stand for the contributions from the neutralino and from the chargino loops, respectively.
These coefficients are explicitly given by:
\begin{equation}
A_2^{(n)L}=\dfrac{1}{32\pi^2}\dfrac{1}{m_{\tilde{\ell}_X}^2}\bigg[N_{jAX}^L \bar{N}_{iAX}^{L} g_{1n}(x_{AX})+N_{jAX}^R \bar{N}_{iAX}^{R}\dfrac{m_j}{m_i} g_{1n}(x_{AX})
+N_{jAX}^L \bar{N}_{iAX}^{R}\dfrac{M_{\tilde{\chi}^0_A}}{m_i} g_{2n}(x_{AX})\bigg]
\end{equation}
and $A_2^{(n)R}=A_2^{(n)L}\vert_{L\leftrightarrow R}$ with $x_{AX}=M_{\tilde{\chi}^0_A}^2 / m_{\tilde{\ell}_X}^2$, and
\begin{equation}
A_2^{(c)L}=-\dfrac{1}{32\pi^2}\dfrac{1}{m_{\tilde{\nu}_X}^2}\bigg[C_{jAX}^L \bar{C}_{iAX}^{L} g_{1c}(x_{AX})+C_{jAX}^R \bar{C}_{iAX}^{R}\dfrac{m_j}{m_i} g_{1c}(x_{AX})
+C_{jAX}^L \bar{C}_{iAX}^{R}\dfrac{M_{\tilde{\chi}^-_A}}{m_i} g_{2c}(x_{AX})\bigg]
\end{equation}
and $A_2^{(c)R}=A_2^{(c)L}\vert_{L\leftrightarrow R}$ with $x_{AX}=M_{\tilde{\chi}^-_A}^2 / m_{\tilde{\nu}_X}^2$.\\
The terms $N_{iAX}$ and $C_{iAX}$ and the loop functions $g_{in}$ and $g_{ic}$ read as follows:
\begin{equation}
\hspace{-1.9mm}
\begin{array}{rcl}
N_{iAX}^L &=& -\dfrac{g_2}{\sqrt{2}}\left\{\left[-(O_N)_{A,2}-(O_N)_{A,1}\tan{\theta_W}\right]
U^{\tilde{\ell}}_{X,i}+\dfrac{m_i}{m_W\cos{\beta}}(O_N)_{A,3}U^{\tilde{\ell}}_{X,i+3}\right\}\\[3mm]
N_{iAX}^R &=& -\dfrac{g_2}{\sqrt{2}}\left\{\dfrac{m_i}{m_W\cos{\beta}}(O_N)_{A,3}U^{\tilde{\ell}}_{X,i}+ 2(O_N)_{A,1}\tan{\theta_W}U^{\tilde{\ell}}_{X,i+3}\right\}\\[3mm]
C_{iAX}^L &=& -g_2(O_R)_{A,1}U^{\tilde{\nu}}_{X,i}\\[3mm]
C_{iAX}^R &=& g_2\dfrac{m_i}{\sqrt{2}m_W\cos{\beta}}(O_L)_{A,2}U^{\tilde{\nu}}_{X,i}
\end{array}
\label{AppE:functions:NC}
\end{equation}
and
\begin{equation}
\begin{array}{rcl}
g_{1n}(x_{AX})&=&\dfrac{1}{6(1-x_{AX})^4}\left(1-6x_{AX}+3x_{AX}^2+2x_{AX}^3-6x_{AX}^2\ln{x_{AX}}\right)\\[3mm]
g_{2n}(x_{AX})&=&\dfrac{1}{(1-x_{AX})^3}\left(1-x_{AX}^2+2x_{AX}\ln{x_{AX}}\right)\\[3mm]
g_{1c}(x_{AX})&=&\dfrac{1}{6(1-x_{AX})^4}\left(2+3x_{AX}-6x_{AX}^2+x_{AX}^3+6x_{AX}\ln{x_{AX}}\right)\\[3mm]
g_{2c}(x_{AX})&=&\dfrac{1}{(1-x_{AX})^3}\left(-3+4x_{AX}-x_{AX}^2-2\ln{x_{AX}}\right)\;.
\end{array}
\label{AppE:functions}
\end{equation}
We note that the functions $f_{in}$ and $f_{ic}$, displayed in section \ref{Sec:LFV:MI_AnaliticResults}, are related to the loop functions $g_{in}$ and $g_{ic}$, essentially through their first derivatives.
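In analogy with the mass-insertion functions, the loop functions of eq. (\ref{AppE:functions}) are regular at $x=1$ (degenerate masses). The sketch below checks this numerically; the limits quoted in the comment come from our own Taylor expansion around $x=1$ and are not stated in the text:

```python
import math

def g1n(x): return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2*math.log(x)) / (6*(1 - x)**4)
def g2n(x): return (1 - x**2 + 2*x*math.log(x)) / (1 - x)**3
def g1c(x): return (2 + 3*x - 6*x**2 + x**3 + 6*x*math.log(x)) / (6*(1 - x)**4)
def g2c(x): return (-3 + 4*x - x**2 - 2*math.log(x)) / (1 - x)**3

# x -> 1 limits (our Taylor expansion): 1/12, 1/3, 1/12, 2/3
LIMITS = [(g1n, 1/12), (g2n, 1/3), (g1c, 1/12), (g2c, 2/3)]

if __name__ == "__main__":
    for g, lim in LIMITS:
        # the apparent (1-x)^{3,4} poles cancel at the equal-mass point
        assert abs(g(1.01) - lim) < 2e-2 and abs(g(0.99) - lim) < 2e-2
```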
The supersymmetric contribution to the anomalous magnetic moment of the lepton $\ell_i$ (for the muon, $\delta a_\mu^{SUSY}$) can be written as:
\begin{equation}
\delta a_{\ell_i}^{SUSY}=\dfrac{g_{\ell_i}^{(n)}+g_{\ell_i}^{(c)}}{2}
\end{equation}
with
\begin{equation}
g_{\ell_i}^{(n,c)}=g_{\ell_i}^{(n,c)L}+g_{\ell_i}^{(n,c)R}\;.
\end{equation}
These terms are explicitly given by:
\begin{equation}
g_{\ell_i}^{(c)L}=\dfrac{1}{16\pi^2}\dfrac{m_i^2}{m_{\tilde{\nu}_X}^2}\bigg[C_{iAX}^L \bar{C}_{iAX}^{L} g_{1c}(x_{AX})+C_{iAX}^R \bar{C}_{iAX}^{R} g_{1c}(x_{AX})
+C_{iAX}^L \bar{C}_{iAX}^{R}\dfrac{M_{\tilde{\chi}^-_A}}{m_i} g_{2c}(x_{AX})\bigg]
\end{equation}
and $g_{\ell_i}^{(c)R}=g_{\ell_i}^{(c)L}\vert_{L\leftrightarrow R}$ with $x_{AX}=M_{\tilde{\chi}^-_A}^2/m_{\tilde{\nu}_X}^2$, and
\begin{equation}
g_{\ell_i}^{(n)L}=-\dfrac{1}{16\pi^2}\dfrac{m_i^2}{m_{\tilde{\ell}_X}^2}\bigg[N_{iAX}^L \bar{N}_{iAX}^{L} g_{1n}(x_{AX})+N_{iAX}^R \bar{N}_{iAX}^{R} g_{1n}(x_{AX})
+N_{iAX}^L \bar{N}_{iAX}^{R}\dfrac{M_{\tilde{\chi}^0_A}}{m_i} g_{2n}(x_{AX})\bigg]
\end{equation}
and $g_{\ell_i}^{(n)R}=g_{\ell_i}^{(n)L}\vert_{L\leftrightarrow R}$ with $x_{AX}=M_{\tilde{\chi}^0_A}^2 / m_{\tilde{\ell}_X}^2$.
The terms $N_{iAX}$ and $C_{iAX}$ and the functions $g_{in}$ and $g_{ic}$
have been already introduced in eqs. (\ref{AppE:functions:NC}) and in eqs. (\ref{AppE:functions}).
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{The Standard Model and Beyond}
\label{Sec:Overview}
\setcounter{equation}{0}
\setcounter{footnote}{3}
\section{The Standard Model and the Neutrino Masses}
\label{Sec:SM}
\setcounter{footnote}{3}
The fundamental particles and their interactions are described by the Standard Model (SM), a quantum field theory in a 4-dimensional relativistic framework based on the gauge group $SU(3)_c\times SU(2)_L\times U(1)_Y$ \cite{SM}. The first term, $SU(3)_c$, refers to quantum chromodynamics, the theory of the strong interactions of coloured particles, such as gluons and quarks. The $SU(2)_L\times U(1)_Y$ term is the group of the electroweak force, which describes the behaviour of the weak gauge bosons, $W^+$, $W^-$ and $Z^0$, as well as the electromagnetic one, the photon $\gamma$, in their mutual interactions and in the presence of fermions.
The constituents of matter are leptons and quarks, which transform as spinors under the Lorentz group. For each spinor, it is possible to define a left-handed (LH) and a right-handed (RH) part, which behave differently under the Standard Model gauge group. For this reason it is usually more convenient to adopt a two-component Weyl spinor representation instead of a four-component Dirac one: the two descriptions are equivalent, since a Dirac spinor $\Psi$ is composed of a left-handed Weyl spinor $\Psi_L$ and a right-handed Weyl spinor $\Psi_R$,
\begin{equation}
\Psi=\left(
\begin{array}{c}
\Psi_L \\
\Psi_R \\
\end{array}
\right)\;.
\end{equation}
If the left- and right-handed Weyl spinors of a Dirac spinor satisfy $\Psi_L=i\sigma^{2*}\Psi_R^*$, it is called a Majorana spinor and in this case the two parts are equivalent.
It is also convenient to adopt another notation: we consider a basis in which all the fields are left-handed, so that $\Psi$ refers to the true left-handed component and $\Psi^c$ to the charge conjugate of the right-handed part. Using this convention, we clarify the equivalence between the two- and the four-component notations. For example $e$ ($e^c$) denotes the left-handed (right-handed) component of the electron field. In terms of the four-component spinor $\psi^T_e = (e,\,\overline{e}^c)$, the bilinears $\overline{e}\, {\ov\sigma}^{\nu} e $ and $e^c \sigma^{\nu} \overline{e}^c$ correspond to $\overline{\psi}_{e} \gamma^\nu P_L \psi_{e}$ and $\overline{\psi}_e \gamma^\nu P_R \psi_e $ (where $P_{L,R} = \frac12 (1\mp \gamma^5)$) respectively. We take $\sigma^{\mu} \equiv (1,\vec{\sigma})$, ${\ov\sigma}^{\mu}\equiv (1,-\vec{\sigma})$, $\sigma^{\mu\nu} \equiv \frac14 (\sigma^{\mu} {\ov\sigma}^{\nu} -\sigma^{\nu}{\ov\sigma}^{\mu})$, ${\ov\sigma}^{\mu\nu} \equiv \frac14 ({\ov\sigma}^{\mu} \sigma^{\nu} -{\ov\sigma}^{\nu}\sigma^{\mu})$ and $g_{\mu\nu}= \diag(+1, -1, -1, -1)$, where $\vec{\sigma} = (\sigma^1, \sigma^2, \sigma^3)$ are the $2\times 2$ Pauli matrices:
\begin{equation}
\sigma^1=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)\qquad
\sigma^2=\left(
\begin{array}{cc}
0 & -i \\
i & 0 \\
\end{array}
\right)\qquad
\sigma^3=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right)\;.
\end{equation}
Here the four-component matrix $\gamma^\mu$ is in the chiral basis, where the 2$\times$2 blocks along the diagonal vanish, the upper-right block is given by $\sigma^{\mu}$ and the lower-left block is equal to ${\ov\sigma}^{\mu}$.
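The defining algebra $\sigma^i\sigma^j=\delta_{ij}\,\mathbb{1}+i\,\epsilon_{ijk}\sigma^k$, which underlies all of the above, can be checked mechanically; the sketch below represents the $2\times2$ matrices as plain nested lists, so it is self-contained:

```python
# Pauli matrices and identity as 2x2 nested lists of (complex) numbers
S1 = [[0, 1], [1, 0]]
S2 = [[0, -1j], [1j, 0]]
S3 = [[1, 0], [0, -1]]
ID = [[1, 0], [0, 1]]
PAULI = [S1, S2, S3]

def mul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scale(c, a):
    return [[c*a[i][j] for j in range(2)] for i in range(2)]

def eps(i, j, k):
    """Levi-Civita symbol for indices 0..2."""
    return (i - j) * (j - k) * (k - i) / 2

if __name__ == "__main__":
    for i in range(3):
        for j in range(3):
            lhs = mul(PAULI[i], PAULI[j])
            rhs = scale(1 if i == j else 0, ID)
            for k in range(3):
                rhs = add(rhs, scale(1j * eps(i, j, k), PAULI[k]))
            assert lhs == rhs  # sigma^i sigma^j = delta_ij + i eps_ijk sigma^k
```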
Leptons and quarks are present in three generations or families and each of them accounts for four left-handed $SU(2)_L$-doublets, one in the lepton sector, $\ell=(\nu,\,e)$, and three in the quark sector, $q=(u,\,d)$, and seven right-handed $SU(2)_L$-singlets, the charged lepton $e^c$ and the up- and down-quarks $u^c$ and $d^c$. The symmetry charge assignments of one such family under the Standard Model gauge group are displayed in table \ref{SM:SMFermions}, along with the symmetry assignments of the Higgs boson $H=(H^+,\,H^0)$. The electric charge is given by $Q_\text{em}=T_{3L}+Y$, where $T_{3L}$ is the weak isospin and $Y$ the hypercharge.
\begin{table}[h]
\centering
\begin{math}
\begin{array}{|c||cc|ccc|c|}
\hline
&&&&&&\\[-3mm]
& \ell & e^c & q & u^c & d^c & H \\[3mm]
\hline
&&&&&&\\[-3mm]
SU(3)_c & \bf1 & \bf1 & \bf3 & \bf\overline3 & \bf\overline3 & \bf1 \\[3mm]
SU(2)_L & \bf2 & \bf1 & \bf2 & \bf1 & \bf1 & \bf2 \\[3mm]
U(1)_Y & -1/2 & +1 & +1/6 & -2/3 & +1/3 & +1/2 \\[3mm]
\hline
\end{array}
\end{math}
\caption{\emphcap{Charge assignments of leptons, quarks and the Higgs field under the Standard Model gauge group.}}
\label{SM:SMFermions}
\end{table}
The most general $SU(3)_c\times SU(2)_L\times U(1)_Y$ gauge invariant renormalisable Lagrangian density can be written as follows:
\begin{equation}
\mscr{L}_\text{SM}=\mscr{L}_K+\mscr{L}_Y-V\;,
\end{equation}
where $\mscr{L}_K$ contains the kinetic terms and the gauge interactions for fermions and bosons, while $\mscr{L}_Y$ refers to the fermion Yukawa terms and $V$ is the scalar potential. The Yukawa Lagrangian can be written as
\begin{equation}
\mscr{L}_Y = (Y_e)_{ij}\,e^c_iH^\dag\ell_j + (Y_d)_{ij}\,d^c_iH^\dag q_j + (Y_u)_{ij}\,u^c_i\widetilde{H}^\dag q_j + \text{h.c.}
\label{SM:LagrangianY}
\end{equation}
where $\widetilde{H}\equiv i\sigma^2H^*$. The Standard Model gauge group prevents direct fermion mass terms in the Lagrangian. However, when the neutral component of the Higgs field acquires a non-vanishing vacuum expectation value (VEV), $\mean{H^0}=v/\sqrt2$ with $v\simeq246$ GeV, the electroweak symmetry is spontaneously broken,
\begin{equation}
SU(2)_L\times U(1)_Y\longrightarrow U(1)_{em}\;,
\end{equation}
and as a result all the fermions, apart from neutrinos, acquire non-vanishing masses:
\begin{equation}
\mscr{L}_Y = (M_u)_{ij}\, u^c_i u_j + (M_d)_{ij}\, d^c_i d_j + (M_e)_{ij}\, e^c_i e_j \qquad\text{where}\qquad M_f=Y_f \dfrac{v}{\sqrt2}\;,\quad f=u,d,e\;.
\label{SM:SMFermionsMasses}
\end{equation}
Observations of massive neutrinos require an extension of the Standard Model. The minimal variation consists in the introduction of a set of right-handed neutrinos, $\nu^c$, which transform under the gauge group of the Standard Model as $({\bf1},\,{\bf1},\,0)$, i.e. they do not have any interactions with the vector bosons. In this way, it is possible to construct a Yukawa term for neutrinos similar to the up-quark Yukawa:
\begin{equation}
(Y_\nu)_{ij}\,\nu^c_i\widetilde{H}^\dag\ell_j+\text{h.c.}
\end{equation}
which in the electroweak symmetry broken phase becomes a neutrino Dirac mass term
\begin{equation}
(m_\nu)_{ij}\,\nu^c_i\nu_j\qquad\mrm{where}\qquad m_\nu=Y_{\nu}\dfrac{v}{\sqrt2}\;.
\label{SM:mD}
\end{equation}
According to the observations, $m_\nu\sim0.1$ eV, which requires $Y_\nu\sim 10^{-12}$, a value that does not find any natural explanation.
An alternative minimal extension of the Standard Model consists in assuming the explicit violation of the accidental global symmetry $L$, the lepton number, at a very high energy scale, $\Lambda_L$. It is then possible to write the Weinberg operator \cite{Weinberg}, a five-dimensional non-renormalisable term suppressed by $\Lambda_L$:
\begin{equation}
\cO_5 = \dfrac{1}{2} (Y_\nu)_{ij}\dfrac{(\ell_i\widetilde{H}^*)\,(\widetilde{H}^\dag\ell_j)}{\Lambda_L}\;.
\label{SM:O5operator}
\end{equation}
When the Higgs field develops the VEV, this produces a neutrino Majorana mass term
\begin{equation}
(m_\nu)_{ij} \nu_i\nu_j\qquad\mrm{where}\qquad m_\nu=Y_\nu\dfrac{v^2}{4\Lambda_L}\;.
\end{equation}
Considering again an average value for the neutrino masses of the order of $0.1$ eV, $\Lambda_L$ can reach $10^{14\div15}$ GeV for $(Y_\nu)_{ij}=\cO(1)$. Once we accept explicit lepton number violation, we gain an elegant explanation for the lightness of neutrinos, as their masses turn out to be inversely proportional to the large scale at which lepton number is violated.
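The two numerical statements above are easy to reproduce with order-of-magnitude arithmetic (a sketch, with $v=246$ GeV and $m_\nu=0.1$ eV as representative inputs): the pure-Dirac option needs $Y_\nu=\sqrt2\,m_\nu/v\sim10^{-12}$, while the Weinberg operator with $Y_\nu=\cO(1)$ points to $\Lambda_L = Y_\nu v^2/(4 m_\nu)\sim10^{14}$ GeV:

```python
V = 246.0        # GeV, Higgs VEV
M_NU = 0.1e-9    # GeV, representative light neutrino mass (0.1 eV)
SQRT2 = 2.0 ** 0.5

# Dirac case: m_nu = Y_nu v / sqrt(2)  =>  Y_nu = sqrt(2) m_nu / v
y_nu_dirac = SQRT2 * M_NU / V

# Weinberg operator: m_nu = Y_nu v^2 / (4 Lambda_L), taking Y_nu = 1
lambda_L = V**2 / (4.0 * M_NU)

if __name__ == "__main__":
    assert 1e-13 < y_nu_dirac < 1e-11   # ~6e-13, i.e. of order 10^-12
    assert 1e14 < lambda_L < 1e15       # ~1.5e14 GeV
```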
\subsection{The See-Saw Mechanisms}
\label{Sec:SM:SeeSaw}
\setcounter{footnote}{3}
It is then interesting to investigate the kind of new physics which accounts for the lepton number violation and generates the Weinberg operator in a renormalisable extension of the Standard Model. Tree-level exchange of three different types of new particles does the job: right-handed neutrinos, fermion $SU(2)_L$-triplets and scalar $SU(2)_L$-triplets. In the following we summarise these proposals, which are known as ``See-Saw'' mechanisms.
\begin{figure}[h!]
\centering
\includegraphics[width=4.5cm]{See-SawI.jpg}\qquad\qquad
\includegraphics[width=2.4cm]{See-SawII.jpg}\qquad\qquad
\includegraphics[width=4.5cm]{See-SawIII.jpg}
\caption{\it The Weinberg operator as generated by tree-level exchange of fermion singlets (type I), scalar triplets (type II) and fermion triplets (type III).}
\label{fig:See-Saw}
\end{figure}
\subsubsection{Type I See-saw Mechanism}
\setcounter{footnote}{3}
The presence of new fermions with no gauge interactions, which play the role of right-handed neutrinos, is quite plausible because any grand unified theory (GUT) group larger than $SU(5)$ requires them: for example, $\nu^c$ complete the representation $\bf{16}$ of $SO(10)$. As already anticipated they have a Dirac Yukawa interaction $Y_\nu$ with the left-handed neutrinos. Assuming explicit lepton number violation a second term appears, a Majorana mass $M_R$: the Lagrangian can then be written as
\begin{equation}
\mscr{L}_\text{type I} = (Y_\nu\,)_{ij}\nu^c_i\widetilde{H}^\dag \ell_j+\dfrac{1}{2}(M_R\,)_{ij}\nu^c_i\nu^c_j+\text{h.c.}\;.
\label{SM:LagrangianTypeI}
\end{equation}
$Y_\nu$ and $M_R$ are $3\times3$ matrices in the flavour space: $M_R$ is symmetric, while $Y_\nu$ is in general neither hermitian nor symmetric. The Dirac mass term originates through the Higgs mechanism as in eq. (\ref{SM:mD}). On the other hand, the Majorana mass term is $SU(3)_c\times SU(2)_L\times U(1)_Y$ invariant and therefore the Majorana masses are unprotected and naturally of the order of the cutoff of the low-energy theory. The full neutrino mass matrix is a $6\times6$ matrix in the flavour space and can be written as
\begin{equation}
M_\nu=\bordermatrix{ & \nu & \nu^c \cr
\nu & 0 & m_D^T \cr
\nu^c & m_D & M_R}\;,
\end{equation}
where $m_D=Y_\nu\,v/\sqrt2$. By block-diagonalising $M_\nu$, the light neutrino mass matrix is obtained as
\begin{equation}
m_\nu=-m_D^TM_R^{-1}m_D\;.
\end{equation}
This is the well-known type I See-Saw mechanism \cite{SeeSawI}: the light neutrino masses are quadratic in the Dirac masses and inversely proportional to the large Majorana ones, justifying the lightness of the left-handed neutrinos.
The same result can be derived by integrating out the heavy neutrinos, which gives a non-renormalisable effective Lagrangian containing only the observable low-energy fields. Figure \ref{fig:See-Saw} shows the tree-level exchange of $\nu^c$, from which the $\cO_5$ operator of eq. (\ref{SM:O5operator}) is easily derived, with $\Lambda_L$ corresponding to the Majorana masses of the right-handed neutrinos.
This construction holds true for any number of heavy $\nu^c$ coupled to the three known light neutrinos. In the case of only two $\nu^c$, one light neutrino remains massless, which is a possibility not excluded by the present data.
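The See-Saw suppression can be checked numerically in a one-generation toy version of the block-diagonalisation (illustrative numbers, not a fit): the exact light eigenvalue of $\left(\begin{smallmatrix}0&m_D\\ m_D&M_R\end{smallmatrix}\right)$ is compared with the approximation $-m_D^2/M_R$:

```python
import math

def seesaw_eigenvalues(m_d, m_r):
    """Exact eigenvalues of the symmetric 2x2 matrix [[0, m_d], [m_d, m_r]]."""
    disc = math.sqrt(m_r**2 + 4.0*m_d**2)
    return (m_r - disc)/2.0, (m_r + disc)/2.0

if __name__ == "__main__":
    # electroweak-scale Dirac mass vs a heavy Majorana mass (illustrative values)
    m_d, m_r = 100.0, 1.0e8  # GeV
    light, heavy = seesaw_eigenvalues(m_d, m_r)
    approx = -m_d**2/m_r     # one-generation See-Saw formula
    assert abs(light - approx) < 1e-3 * abs(approx)
    assert abs(heavy - m_r) < 1.0   # the heavy state stays at ~M_R
```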
\subsubsection{Type II See-saw Mechanism}
\setcounter{footnote}{3}
The type II See-Saw mechanism \cite{SeeSawII} refers to the general scenario in which neutrinos get a mass thanks to the coupling of the lepton $SU(2)_L$-doublet $\ell$ with a scalar $SU(2)_L$-triplet $\Delta\equiv(\delta_1,\,\delta_2,\,\delta_3)$ transforming as $({\bf1},\,{\bf3},\,+1)$ under the Standard Model. It is useful to express the scalar triplet in terms of three complex scalars: the electrically neutral $\Delta^0$, the singly charged $\Delta^+$ and the doubly charged $\Delta^{++}$:
\begin{equation}
\Delta=\dfrac{1}{\sqrt2}\sum_{i}\sigma^i\Delta_i=\left(
\begin{array}{cc}
\Delta^+/\sqrt2 & \Delta^{++} \\
\Delta^0 & -\Delta^+/\sqrt2 \\
\end{array}
\right)\;,
\end{equation}
where $\sigma^i$ are the Pauli matrices and $\Delta^0=(\delta_1+i\delta_2)/\sqrt2$, $\Delta^+=\delta_3$ and $\Delta^{++}=(\delta_1-i\delta_2)/\sqrt2$.
The relevant Lagrangian for the type II See-Saw can then be written as
\begin{equation}
\begin{split}
\mscr{L}_\text{type II} & =\, -\dfrac{1}{2}(Y_\Delta)_{ij}\overline{\ell^c}_i i\sigma^2\Delta\ell_j+\text{h.c.}\\[3mm]
& =\, -\dfrac{1}{2}(Y_\Delta)_{ij}\nu_i\Delta^0\nu_j +\dfrac{1}{\sqrt2}(Y_\Delta)_{ij}\nu_i\Delta^+e_j
+\dfrac{1}{2}(Y_\Delta)_{ij}e_i\Delta^{++}e_j+\text{h.c.}\;.
\end{split}
\end{equation}
The neutrino masses are generated when the neutral component of the scalar triplet develops a VEV, $\mean{\Delta^0}=v_\Delta/\sqrt2$:
\begin{equation}
m_\nu = Y_\Delta\dfrac{v_\Delta}{\sqrt2}\;.
\end{equation}
The same result is achieved by integrating out the Higgs triplet: the relevant tree-level diagram is shown in figure \ref{fig:See-Saw}. In this case we get an effective five-dimensional operator similar to $\cO_5$ of eq. (\ref{SM:O5operator}), where $\Lambda_L$ corresponds to $M_\Delta$, the mass of the Higgs triplet. The two pictures are phenomenologically identical, since the minimisation of the potential leads to $v_\Delta \propto v^2/M_\Delta$, as shown below.
The VEV of the Higgs triplet is usually induced by the VEV of the Higgs doublet $H$: indeed in the Lagrangian it is possible to write these two terms
\begin{equation}
M_\Delta^2 \tr[\Delta^\dag \Delta] +\dfrac{1}{2}\left(\lambda_\Delta M_\Delta \widetilde{H}^\dag\Delta^\dag H+\text{h.c.}\right)
\end{equation}
and when the Higgs doublet develops a non-zero VEV, the last term in the previous equation induces a tadpole for $\Delta$; as a consequence, a VEV for the Higgs triplet is generated,
\begin{equation}
\mean{\Delta}\sim\lambda_\Delta \dfrac{v^2}{M_\Delta}\,.
\label{SM:SSII:vevDelta}
\end{equation}
Note that $v_\Delta$ contributes to the weak gauge boson masses and introduces a deviation of the $\rho$-parameter from the tree-level Standard Model prediction $\rho=1$. The current precision measurements constrain this deviation and consequently the ratio $v_\Delta/v$ \cite{BoundsTypeII}: $\Delta\rho=\rho-1\simeq v_\Delta/v\lesssim0.01$. From eq. (\ref{SM:SSII:vevDelta}) we see that this implies $M_{\Delta}\gtrsim(25\,\lambda_{\Delta})$ TeV.
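As an illustrative numerical check (assuming, as usual, $v\simeq246$ GeV), combining $m_\nu=Y_\Delta v_\Delta/\sqrt2$ with eq. (\ref{SM:SSII:vevDelta}) and the bound on $v_\Delta/v$ gives
\begin{equation*}
m_\nu \simeq Y_\Delta\,\lambda_\Delta\,\dfrac{v^2}{\sqrt2\,M_\Delta}\;,\qquad
v_\Delta \lesssim 10^{-2}\,v \;\Rightarrow\;
M_\Delta \gtrsim \lambda_\Delta\,\dfrac{v^2}{v_\Delta}\simeq 10^{2}\,\lambda_\Delta\,v \simeq 25\,\lambda_\Delta\;\mathrm{TeV}\;,
\end{equation*}
which is the bound quoted above.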
The type II See-Saw involves lepton number violation because the co-existence of $Y_\Delta$ and $\lambda_\Delta$ does not allow a
consistent way of assigning a lepton charge to $\Delta$.
\subsubsection{Type III See-saw Mechanism}
\setcounter{footnote}{3}
In the type III See-Saw mechanism \cite{SeeSawIII}, three fermion $SU(2)_L$-triplets $\Sigma^a$ are added to the Standard Model content.\footnote{It is also possible to introduce only two such fermion triplets: indeed it is the minimal number of $\Sigma$ in order to fit the data with a massless neutrino. With three $\Sigma$, however, it is possible to describe three distinct non-vanishing neutrino masses.} These new particles should be colour singlets and carry hypercharge $0$. Each $\Sigma$ has three components defined as $\Sigma=(\varsigma_1,\,\varsigma_2,\,\varsigma_3)$ and can be written in the fundamental representation of $SU(2)_L$ as
\begin{equation}
\Sigma=\dfrac{1}{\sqrt2}\sum_i\sigma^i\varsigma_i=\left(\begin{array}{cc}
\Sigma^0/\sqrt2 & \Sigma^{+} \\
\Sigma^- & -\Sigma^0/\sqrt2 \\
\end{array}
\right)\;,
\end{equation}
where $\Sigma^0=\varsigma_3$ and $\Sigma^{\pm}=(\varsigma_1\mp i\varsigma_2)/\sqrt2$ are electric charge neutral and positive or negative singly charged. The relevant Lagrangian terms have a form that is similar to the singlet-fermion case, but the contractions of the $SU(2)_L$ indices are different:
\begin{equation}
\mscr{L}_\text{type III}=(Y_\Sigma)_{ai}\widetilde{H}^\dag\Sigma^a\ell_i-\dfrac{1}{2}(M_\Sigma)_{ab} \tr[\Sigma^a\Sigma^b]+\text{h.c.}\,.
\end{equation}
Here $Y_\Sigma$ is a $3\times3$ matrix of dimensionless, complex Yukawa couplings. When the electroweak symmetry is broken, both charged leptons and neutrinos mix with the $\Sigma$ components. In the basis $(\nu,\,\Sigma)$, the following mass term can be written
\begin{equation}
M_\nu=\bordermatrix{ & \nu & \Sigma \cr
\nu & 0 & m_D^T \cr
\Sigma & m_D & M_\Sigma}\;,
\label{SM:SSIII:FullMassTypeIII}
\end{equation}
where $m_D=Y_\Sigma\,v/\sqrt2$ and $M_\Sigma$ is a $3\times3$ real mass matrix. As in the case of $M_R$, the mass of the right-handed neutrinos in the type I See-Saw mechanism, $M_\Sigma$ is unprotected by any symmetry and it could reach values close to the cutoff of the low-energy theory. The exchange of fermion triplets, as shown in figure \ref{fig:See-Saw}, generates an effective five-dimensional operator similar to $\cO_5$ of eq. (\ref{SM:O5operator}), where $\Lambda_L$ corresponds to $M_\Sigma$, which leads to the neutrino masses:
\begin{equation}
m_\nu=-m_D^TM_\Sigma^{-1}m_D\;.
\label{SM:SSIII:nuMassesTypeIII}
\end{equation}
The same result can be achieved by simply diagonalising the matrix in eq. (\ref{SM:SSIII:FullMassTypeIII}).
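To get a feeling for the scales involved, we can invert eq. (\ref{SM:SSIII:nuMassesTypeIII}) in a one-family approximation. Assuming, purely for illustration, a Yukawa coupling of order one and $m_\nu\simeq0.05$ eV $\simeq\sqrt{\Delta m^2_{atm}}$,
\begin{equation*}
M_\Sigma \sim \dfrac{m_D^2}{m_\nu}=\dfrac{Y_\Sigma^2\,v^2}{2\,m_\nu}
\simeq\dfrac{(246\;\mathrm{GeV})^2}{2\times0.05\;\mathrm{eV}}
\simeq6\times10^{14}\;\mathrm{GeV}\;,
\end{equation*}
remarkably close to the grand unification scale. The same estimate holds for $M_R$ in the type I See-Saw.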
It is clear that, as far as the phenomenology of the light neutrinos is concerned, the type I and the type III See-Saw mechanisms cannot be distinguished. The two descriptions may however be discriminated through processes involving, for example, the charged lepton sector, which acquires a small mixing with the charged components of $\Sigma$. In the basis $(e,\,\Sigma)$, the following mass term can be written
\begin{equation}
\bordermatrix{ & e & \Sigma \cr
e^c & M_e & 0 \cr
\Sigma & m_D & M_\Sigma}\;,
\label{SM:SSIII:FullMassTypeIIICharged}
\end{equation}
where $M_e$ is the usual charged lepton mass matrix. The perturbations induced by $\Sigma$ to $M_e$ have the same form and order of magnitude of the neutrino masses in eq. (\ref{SM:SSIII:nuMassesTypeIII}) and therefore they are negligible. However, the presence of the triplets induces some lepton flavour violating decays which could have interesting experimental hints \cite{AbadaTypeIII}.
The type III See-Saw involves lepton number violation because the co-existence of $Y_\Sigma$ and $M_\Sigma$ does not allow a consistent way of assigning a lepton charge to $\Sigma^a$.
\subsection{The Physical Basis and the Mixing Matrices}
\label{Sec:SM:PhysicalBasis}
\setcounter{footnote}{3}
If we simply assume that the lepton number is explicitly violated by the introduction of the Weinberg operator in the Lagrangian of the Standard Model, when the electroweak symmetry is broken, all the fermions develop a mass term:
\begin{equation}
\mscr{L}_\text{mass}=(M_u)_{ij}\, u^c_i u_j + (M_d)_{ij}\, d^c_i d_j + (M_e)_{ij}\, e^c_i e_j + \dfrac{1}{2} (m_\nu)_{ij} \nu_i \nu_j\;,
\end{equation}
where $M_i$ and $m_\nu$ are $3\times3$ mass matrices in the flavour space. Counting the number of free parameters, there are 9 complex entries
for each mass matrix, except for $m_\nu$, which is symmetric and has only 6. To reduce this number, we can move to the physical basis, in which the kinetic terms are canonical and all the mass matrices are diagonal. In this basis the Yukawa coupling matrices are also diagonal and therefore there are no tree-level flavour-changing currents mediated by the neutral Higgs boson. This feature is in general lost when the Standard Model is extended by introducing multiple Higgs doublets or extra fermions.
We make unitary transformations on the fermions in the family space in order to move to the physical basis. Unitarity of these matrices ensures that the kinetic terms remain canonical. Specifically, we define $V_{u,\,u^c,\,d,\,d^c}$ and $U_{e,\,e^c,\,\nu}$ such that the transformations produce the following diagonal matrices:
\begin{equation}
\begin{array}{rclrcl}
V_{u^c}^\dag\,M_u\,V_u&=&\diag(m_u,\,m_c,\,m_t)\;,&\qquad
V_{d^c}^\dag\,M_d\,V_d&=&\diag(m_d,\,m_s,\,m_b)\;,\\[3mm]
U_{e^c}^\dag\,M_e\,U_e&=&\diag(m_e,\,m_\mu,\,m_\tau)\;,&\qquad
U_{\nu}^T\,m_\nu\,U_\nu&=&\diag(m_1,\,m_2,\,m_3)\;.
\end{array}
\end{equation}
Experiments showed that quarks and charged leptons have strongly hierarchical masses: the masses of the first family are smaller than those of the second, and the third family is the heaviest. The quark masses are given by \cite{PDG08}\footnote{The u-, d-, and s-quark masses are estimates of so-called ``current-quark masses'', in a mass-independent subtraction scheme such as $\overline{\mathrm{MS}}$ at a scale $\mu\approx 2$ GeV. The c- and b-quark masses are the ``running'' masses, $m(\mu=m)$, in the $\overline{\mathrm{MS}}$ scheme. Only the mass of the t-quark is a result of direct observation of top events.}
\begin{equation}
\begin{array}{lll}
m_u=1.5\div3.3\;\mathrm{MeV}\;, &\qquad m_c=1.27^{+0.07}_{-0.11}\;\mathrm{GeV}\;, &\qquad m_t=171.2\pm2.1\;\mathrm{GeV}\;,\\[3mm]
m_d=3.5\div6\;\mathrm{MeV}\;, &\qquad m_s=104^{+26}_{-34}\;\mathrm{MeV}\;, &\qquad m_b=4.20^{+0.17}_{-0.07}\;\mathrm{GeV}\;.
\end{array}
\end{equation}
The charged lepton masses are very precisely known and they read \cite{PDG08}
\begin{eqnarray}
m_e &=& 0.510998910\pm0.000000013\;\mathrm{MeV}\;,\nn\\[3mm]
m_\mu &=& 105.658367\pm0.000004\;\mathrm{MeV}\;,\\[3mm]
m_\tau &=& 1776.84\pm0.17\;\mathrm{MeV}\nn\;.
\end{eqnarray}
In the neutrino sector the mass hierarchy is much milder and only two mass squared differences have been measured in oscillation experiments.\footnote{There is an indication for the existence of a third independent mass squared difference from the LSND experiment \cite{LNSD}, which could be explained only if an additional (sterile) neutrino is introduced. However, the MiniBooNE collaboration \cite{MiniBoone} has recently not confirmed the LSND result.} The mass squared differences are defined as
\begin{equation}
\Delta m^2_{21} = m_2^2-m_1^2 \equiv \Delta m^2_{sol}\;,\qquad\quad\Delta m^2_{31} = m_3^2-m_1^2 \equiv \pm\Delta m^2_{atm}
\label{SM:PhysicalBasisDeltamSol+Atm}
\end{equation}
and in table \ref{table:OscillationData} we report the results of two independent global fits of neutrino oscillation data, from \cite{Fogli:Indication} and \cite{Maltoni:Indication}. It is also interesting to report the ratio between the two mass squared differences \cite{Maltoni:Indication}:
\begin{equation}
r=\dfrac{\Delta m^2_{sol}}{\Delta m^2_{atm}}=0.032^{+0.006}_{-0.005}\;.
\end{equation}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l||cc|cc|}
\hline
&&&&\\[-2mm]
& \multicolumn{2}{c}{Ref.~\cite{Fogli:Indication}} & \multicolumn{2}{|c|}{Ref.~\cite{Maltoni:Indication}}\\[2mm]
parameter & best fit (\@$1\sigma$) & 3$\sigma$-interval & best fit (\@$1\sigma$) & 3$\sigma$-interval\\[2mm]
\hline
&&&&\\[-2mm]
$\Delta m^2_{sol}\:[\times10^{-5}\mathrm{eV}^2]$
& $7.67^{+0.16}_{-0.19}$ & $7.14-8.19$
& $7.65^{+0.23}_{-0.20}$ & $7.05-8.34$\\[2mm]
$\Delta m^2_{atm}\: [\times10^{-3}\mathrm{eV}^2]$
& $2.39^{+0.11}_{-0.08}$ & $2.06-2.81$
& $2.40^{+0.12}_{-0.11}$ & $2.07-2.75$\\[2mm]
$\sin^2\theta_{12}$
& $0.312^{+0.019}_{-0.018}$ & $0.26-0.37$
& $0.304^{+0.022}_{-0.016}$ & $0.25-0.37$\\[2mm]
$\sin^2\theta_{23}$
& $0.466^{+0.073}_{-0.058}$ & $0.331-0.644$
& $0.50^{+0.07}_{-0.06}$ & $0.36-0.67$\\[2mm]
$\sin^2\theta_{13}$
& $0.016^{+0.010}_{-0.010}$ & $\leq$ $0.046$
& $0.010^{+0.016}_{-0.011}$ & $\leq$ $0.056$\\[2mm]
\hline
\end{tabular}
\end{center}
\caption{\it Neutrino oscillation parameters from two independent global fits \cite{Fogli:Indication, Maltoni:Indication}.}
\label{table:OscillationData}
\end{table}
Due to the ambiguity in the sign of the atmospheric mass squared difference, neutrinos can have two mass hierarchies: they can be normally hierarchical (NH) if $m_1<m_2<m_3$ or inversely hierarchical (IH) if $m_3<m_1<m_2$. Furthermore, if the absolute mass scale is much larger than the mass squared differences then we cannot speak about hierarchy: in this case the neutrino spectrum is quasi degenerate (QD) and we speak about mass ordering. In figure \ref{fig:HierarchyFig} we display the two possible hierarchical spectra. It is common to redefine the atmospheric mass squared difference to account for the type of the hierarchy: indeed $\Delta m^2_{atm}$ is taken to be the mass squared difference between the heaviest and the lightest mass eigenstates and therefore
\begin{equation}
|m_3^2-m_{1(2)}^2| \equiv \Delta m^2_{atm}
\label{SM:PhysicalBasisDeltamAtm2}
\end{equation}
for the normal (inverse) hierarchy.
There are some weak indications in favour of the normal mass hierarchy from supernova SN1987A data \cite{SN87}. However, in view of the small statistics and of the uncertainties in the original fluxes, it is not possible to make a firm statement.
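As a hedged numerical illustration, taking the best-fit values of table \ref{table:OscillationData} and a vanishing lightest mass, the two hierarchical spectra read
\begin{equation*}
\begin{array}{llll}
\text{NH:} & m_1=0\;, & m_2=\sqrt{\Delta m^2_{sol}}\simeq8.7\times10^{-3}\;\mathrm{eV}\;, & m_3=\sqrt{\Delta m^2_{atm}}\simeq4.9\times10^{-2}\;\mathrm{eV}\;,\\[2mm]
\text{IH:} & m_3=0\;, & m_1=\sqrt{\Delta m^2_{atm}-\Delta m^2_{sol}}\simeq4.8\times10^{-2}\;\mathrm{eV}\;, & m_2=\sqrt{\Delta m^2_{atm}}\simeq4.9\times10^{-2}\;\mathrm{eV}\;,
\end{array}
\end{equation*}
which makes explicit that in the inverse hierarchy the two heaviest states are nearly degenerate.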
\begin{figure}[h!]
\centering
\includegraphics[width=5.5cm]{HierarchyN.pdf}\qquad
\includegraphics[width=5.5cm]{HierarchyI.pdf}
\caption{\it Neutrino mass and flavour spectra for the
normal (left) and inverse (right) mass hierarchies.
The distribution of the flavours in the mass eigenstates
corresponds to the best-fit values of mixing parameters and
$\sin^2{\theta_{13}}=0.05$.}
\label{fig:HierarchyFig}
\end{figure}
Regarding the absolute neutrino mass scale there are several sources of information from non-oscillation experiments and from cosmological analysis. They are:
\begin{itemize}
\item $\beta$-decay experiments, which measure the endpoint of the tritium decay spectrum and to good approximation probe $m_{\nu_e}^2=\sum_i|U_{ei}|^2m_i^2$. The most recent experiment is \textsc{Mainz} \cite{mainz}, which puts an upper bound at $99\%$ of CL of $m_{\nu_e}<2.1$ eV.
The \textsc{Katrin} experiment will improve the sensitivity to $m_{\nu_e}$ by one order of magnitude down to $\sim0.2$ eV \cite{katrin}.
\item the neutrinoless-double-beta ($0\nu2\beta$) decay, a viable decay channel only for a small class of nuclei and only under the hypothesis that neutrinos are Majorana particles. Dedicated experiments could probe the $ee$ element of the neutrino Majorana mass matrix:
\begin{equation}
m_{ee}=\sum_iU^2_{ei}m_i=\cos^2\theta_{13}\left(m_1\cos^2\theta_{12}\,e^{i\varphi_1}+ m_2\sin^2\theta_{12}\,e^{i\varphi_2}\right)+m_3\sin^2\theta_{13}\,e^{-2i\delta}\;,
\end{equation}
where $\varphi_i$ are the Majorana phases, which will be defined in the following. To date, only an upper bound of $0.35$ eV on this quantity has been set, by the Heidelberg-Moscow collaboration \cite{HM}, but future experiments are expected to reach better sensitivities: $90$ meV \cite{gerda} (GERDA), $20$ meV \cite{majorana} (Majorana), $50$ meV \cite{supernemo} (SuperNEMO), $15$ meV \cite{cuore} (CUORE) and $24$ meV \cite{exo} (EXO). In figure \ref{fig:0nu2betaGeneral} we show the $0\nu2\beta$-decay effective mass as a function of the lightest neutrino mass for both the hierarchies, together with the future experimental sensitivities.
\item cosmology, which can set an upper bound on the sum of the neutrino masses: considering the typical five representative combinations of the cosmological data, which lead to increasingly stronger bounds, one finds \cite{CosmoNu} $\sum_i m_i<0.19\div2.6$ eV.
\end{itemize}
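As a rough, hedged estimate of what figure \ref{fig:0nu2betaGeneral} encodes: in the inverse hierarchy with a negligible lightest mass ($m_3\simeq0$, $m_1\simeq m_2\simeq\sqrt{\Delta m^2_{atm}}$), the effective mass is bounded from below even for maximal cancellation between the Majorana phases,
\begin{equation*}
|m_{ee}|\gtrsim \cos^2\theta_{13}\,\cos2\theta_{12}\,\sqrt{\Delta m^2_{atm}}
\simeq 0.98\times0.38\times0.049\;\mathrm{eV}\simeq 0.018\;\mathrm{eV}\;,
\end{equation*}
where the best-fit angles of table \ref{table:OscillationData} have been used. This floor lies within reach of the CUORE and Majorana sensitivities quoted above, while in the normal hierarchy with small $m_1$ no such lower bound survives.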
\begin{figure}[h!]
\centering
\vspace{-4mm}
\includegraphics[width=7.8cm]{0nu2betaGeneral.pdf}
\vspace{-4mm}
\caption{\it $|m_{ee}|$ as a function of the lightest neutrino mass for the normal ($\Delta m_{31}^2>0$) and inverse ($\Delta m_{31}^2<0$) mass hierarchies. The coloured regions show the allowed range for the best-fit values of the parameters from \cite{Fogli:Indication}. The dashed lines refer to the allowed region when the $3\sigma$ errors are considered. The black continuous lines represent future experimental sensitivities as described in the text.}
\label{fig:0nu2betaGeneral}
\end{figure}
\subsubsection{The CKM and PMNS Mixing Matrices}
\setcounter{footnote}{3}
Moving to the physical basis, the unitary matrices $V_i$ and $U_i$ enter all the fermion interactions. As already noted, the associated transformations bring the Yukawa couplings of the fermions to the Higgs boson into diagonal form. The couplings of the $Z^0$ boson and of the photon keep their original diagonal form even after these rotations: it follows that in the Standard Model there are no tree-level flavour-changing neutral currents mediated by the $Z^0$ boson or by the photon. Most significantly, these transformations bring the charged current interactions into a non-diagonal form: considering for simplicity only the negative charged current $J_{\mu}^-$, we see that
\begin{equation}
J_{\mu}^-=\overline\nu\gamma_\mu e + \overline{u}\gamma_\mu d\longrightarrow
\overline\nu\gamma_\mu U_\nu^{\dag}U_e e +\overline{u}\gamma_\mu V_u^{\dag}V_d d\;.
\end{equation}
The products of the diagonalising unitary matrices are defined as the mixing matrices for leptons, the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix \cite{PMNS}, and for quarks, the Cabibbo-Kobayashi-Maskawa (CKM) matrix \cite{Cabibbo,KM},
respectively:
\begin{equation}
U=U_e^{\dag}U_\nu\qquad\text{and}\qquad
V=V_u^{\dag}V_d\;.
\end{equation}
Each of these matrices is also unitary and has nine parameters, but only some of them are physical. It is possible to absorb the non-physical parameters through further unitary transformations on the fields: in the end $V$ is described by 3 angles and 1 phase, and $U$ by 3 angles and 3 phases. The difference is due to the Majorana nature of the neutrino mass term. However, the meaning of the parameters is the same in both sectors: the angles rule the mixing between the flavour eigenstates and the phases are responsible for CP violation. If we instead assume the conservation of the lepton number, neutrinos develop masses only through the introduction of the right-handed neutrinos. In this case, as already discussed, the mass term is of the Dirac form $(m_\nu)_{ij}\nu^c_i\nu_j$ and, going through the same steps as before, we find one main difference: the physical parameters of $U$ reduce to only 3 angles and 1 phase.\\
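The counting quoted above can be made explicit (a standard exercise, sketched here for completeness): a $3\times3$ unitary matrix contains 3 angles and 6 phases, and field rephasings remove the unphysical ones,
\begin{equation*}
\begin{array}{lll}
V: & 6-(2\times3-1)=1\;\text{phase}\;, & \text{(5 relative phases of the 6 quark fields can be absorbed)}\;,\\[2mm]
U: & 6-3=3\;\text{phases}\;, & \text{(only the 3 charged lepton fields can be rephased,}\\
& & \;\;\text{since the Majorana mass term forbids rephasing the $\nu_i$)}\;.
\end{array}
\end{equation*}
In the Dirac case the neutrino fields can be rephased as well, and $U$ falls back to 3 angles and 1 phase, exactly as for $V$.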
There are several parametrisations of the CKM matrix. We recall now the standard one, by the use of the angles $\theta_{12}$, $\theta_{13}$ and $\theta_{23}$ and of the phase $\delta$:
\begin{equation}
\begin{split}
V&=\,\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & c_{23} & s_{23} \\
0 & -s_{23} & c_{23} \\
\end{array}\right)\cdot
\left(\begin{array}{ccc}
c_{13} & 0 & s_{13}e^{-i\delta} \\
0 & 1 & 0 \\
-s_{13}e^{i\delta} & 0 & c_{13} \\
\end{array}\right)\cdot
\left(\begin{array}{ccc}
c_{12} & s_{12} & 0 \\
-s_{12} & c_{12} & 0 \\
0 & 0 & 1 \\
\end{array}\right)\\[3mm]
&=\,\left(\begin{array}{ccc}
c_{12}c_{13} & c_{13}s_{12} & s_{13}e^{-i\delta} \\
-c_{23}s_{12}-c_{12}s_{13}s_{23}e^{i\delta} &
c_{12}c_{23}-s_{12}s_{13}s_{23}e^{i\delta} & c_{13}s_{23} \\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} &
-c_{12}s_{23}-c_{23}s_{12}s_{13}e^{i\delta} & c_{13}c_{23} \\
\end{array}
\right)\;,
\end{split}
\label{SM:PhysBasis:MixingMatrixVCKM}
\end{equation}
where $c_{ij}$ and $s_{ij}$ stand for $\cos\theta_{ij}$ and $\sin\theta_{ij}$ (with $0\leq\theta_{ij}\leq\pi/2$), respectively, and the Dirac CP-violating phase lies in the range $0\leq\delta<2\pi$. This notation has various advantages: the rotation angles are defined and labelled in a way which is related to the mixing of two specific generations; as a result, if one of these angles vanishes, so does the mixing between the two corresponding generations. Moreover, in the limit $\theta_{23}=\theta_{13}=0$ the third generation decouples and the situation reduces to the usual Cabibbo mixing of the first two generations, with $\sin\theta_{12}$ identified with the Cabibbo angle \cite{Cabibbo}.
Experimentally, the mixing matrix has well defined entries \cite{PDG08}: a fit on the data, considering the unitary conditions, gives the following results,
\begin{equation}
\begin{split}
|V|&=\,\left(
\begin{array}{ccc}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb} \\
\end{array}
\right)\\[3mm]
&=\,\left(
\begin{array}{ccc}
0.97419\pm0.00022 & 0.2257\pm0.0010 & (3.59\pm0.16)\times10^{-3} \\[2mm]
0.2256\pm0.0010 & 0.97334\pm0.00023 & (41.5^{+1.0}_{-1.1})\times10^{-3} \\[2mm]
(8.74^{+0.26}_{-0.37})\times10^{-3} & (40.7\pm1.0)\times10^{-3} & 0.999133^{+0.000044}_{-0.000043} \\
\end{array}
\right)\;.
\end{split}
\label{SM:PhysBasis:QuarkCKMmatrix}
\end{equation}
Making use of the standard parametrisation, it is possible to extract the values of the quark mixing angles: in terms of $\sin\theta_{ij}$ we naively have
\begin{equation}
\sin\theta_{12}\simeq 0.2243\;,\qquad
\sin\theta_{23}\simeq 0.0413\;,\qquad
\sin\theta_{13}\simeq 0.0037\;.
\label{SM:PhysBasis:QuarkAngles}
\end{equation}
In the same way it is possible to recover the value of the Dirac CP-violating phase:
\begin{equation}
\delta=(77^{+30}_{-32})^\circ\;.
\label{SM:PhysBasis:QuarkPhase}
\end{equation}
It is also useful to report the value of the Jarlskog invariant \cite{Jarlskog}, which measures the amount of CP violation: it is defined as
\begin{eqnarray}
J_{CP} & = & \frac{1}{2} \left| \mbb{I}\mrm{m} (V_{ud}^*V_{us}V_{cd}V_{cs}^*)\right| \,=\, \frac{1}{2} \left| \mbb{I}\mrm{m} (V_{ud}^*V_{ub}V_{td}V_{tb}^*)\right| \nonumber\\
& = &\frac{1}{2} \left| \mbb{I}\mrm{m} (V_{cs}^*V_{cb}V_{ts}V_{tb}^*)\right| \, = \, \frac{1}{2}\left|c_{12}\,c_{13}^2\, c_{23}\,\sin \delta \, s_{12}\,s_{13}\,s_{23}\right|\;.
\end{eqnarray}
The result of the data fit is
\begin{equation}
J_{CP}=(3.05^{+0.19}_{-0.20})\times10^{-5}\;.
\end{equation}
From eqs. (\ref{SM:PhysBasis:QuarkAngles}) and (\ref{SM:PhysBasis:QuarkPhase}), the presence of a hierarchy among the angles, $\sin\theta_{12}\gg \sin\theta_{23}\gg \sin\theta_{13}$, is evident, and an approximation to the standard parametrisation which emphasises this feature has been proposed by Wolfenstein \cite{Wolfenstein}. It is possible to use only four parameters to describe the CKM matrix: $\lambda$, $A$, $\rho$ and $\eta$, defined as
\begin{equation}
\lambda\equiv\dfrac{|V_{us}|}{\sqrt{|V_{ud}|^2+|V_{us}|^2}}\;,\qquad
A\equiv\dfrac{1}{\lambda}\left|\dfrac{V_{cb}}{V_{us}}\right|\;,\qquad
\rho+i\eta\equiv\dfrac{V_{ub}^*}{A\lambda^3}\;.
\end{equation}
In terms of powers of $\lambda$, up to $\cO(\lambda^4)$ we have
\begin{equation}
V=\left(\begin{array}{ccc}
1-\lambda^2/2 & \lambda & A\lambda^3(\rho-i\eta) \\
-\lambda & 1-\lambda^2/2 & A\lambda^2 \\
A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \\
\end{array}\right)+\mcal{O}(\lambda^4)\;.
\end{equation}
These four parameters are experimentally determined as follows:
\begin{equation}
\begin{array}{ll}
\lambda=0.2257^{+0.0009}_{-0.0010}\;,\quad
&A=0.814^{+0.021}_{-0.022}\;,\\[3mm]
\rho\left(1-\dfrac{\lambda^2}{2}\right)=0.135^{+0.031}_{-0.016}\;,\quad
&\eta\left(1-\dfrac{\lambda^2}{2}\right)=0.349^{+0.015}_{-0.017}\;.
\end{array}
\end{equation}
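As a consistency check of the expansion (hedged to the quoted central values, with $\rho\simeq0.139$ and $\eta\simeq0.358$ extracted from the combinations above), the Wolfenstein parameters reproduce the moduli in eq. (\ref{SM:PhysBasis:QuarkCKMmatrix}):
\begin{equation*}
|V_{cb}|\simeq A\lambda^2=0.814\times(0.2257)^2\simeq0.0415\;,\qquad
|V_{ub}|\simeq A\lambda^3\sqrt{\rho^2+\eta^2}\simeq3.6\times10^{-3}\;,
\end{equation*}
in good agreement with the measured values $(41.5)\times10^{-3}$ and $(3.59)\times10^{-3}$.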
Since the Cabibbo angle $\sin\theta_{12}$ is very close to the parameter $\lambda$, it is common to identify the two quantities; the error potentially introduced in this way is of $\cO(\lambda^4)$.\\
The standard parametrisation of the lepton mixing is similar to eq. (\ref{SM:PhysBasis:MixingMatrixVCKM}): we can write the PMNS matrix as the product of four parts
\begin{equation}
\begin{split}
U&=\,R_{23}(\theta_{23})\cdot R_{13}(\theta_{13},\,\delta)\cdot R_{12}(\theta_{12})\cdot P\\[3mm]
&=\,\left(\begin{array}{ccc}
c_{12}c_{13} & c_{13}s_{12} & s_{13}e^{-i\delta} \\
-c_{23}s_{12}-c_{12}s_{13}s_{23}e^{i\delta} &
c_{12}c_{23}-s_{12}s_{13}s_{23}e^{i\delta} & c_{13}s_{23} \\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} &
-c_{12}s_{23}-c_{23}s_{12}s_{13}e^{i\delta} & c_{13}c_{23} \\
\end{array}
\right)\cdot P\;,
\end{split}
\label{SM:PhysBasis:MixingMatrixUPMNS}
\end{equation}
where $P$ is the matrix of the Majorana phases, $P=\diag(e^{i\varphi_1/2},\,e^{i\varphi_2/2},\,1)$, $c_{ij}$ and $s_{ij}$ represent $\cos\theta_{ij}$ and $\sin\theta_{ij}$, respectively, and $\delta$ is the Dirac CP-violating phase. Angles and phases have well defined ranges: $0\leq\theta_{12},\,\theta_{23},\,\theta_{13}\leq\dfrac{\pi}{2}$ and $0\leq\delta,\,\varphi_1,\,\varphi_2<2\pi$. It is interesting to note that the Dirac CP-violating phase always appears in the combination $s_{13}e^{\pm i\delta}$: this means that if the reactor angle vanishes, a possibility not excluded by the experimental data in table \ref{table:OscillationData}, $\delta$ is undetermined and drops out of the mixing matrix. A further consideration is that the Majorana phases are absent from eq. (\ref{SM:PhysBasis:MixingMatrixUPMNS}) if lepton number is assumed to be conserved and the right-handed neutrinos are responsible for the neutrino mass term.
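To make the statement about the vanishing reactor angle explicit, setting $s_{13}=0$ ($c_{13}=1$) in eq. (\ref{SM:PhysBasis:MixingMatrixUPMNS}) gives
\begin{equation*}
U\,\big|_{\theta_{13}=0}=\left(\begin{array}{ccc}
c_{12} & s_{12} & 0 \\
-c_{23}s_{12} & c_{12}c_{23} & s_{23} \\
s_{12}s_{23} & -c_{12}s_{23} & c_{23} \\
\end{array}\right)\cdot P\;,
\end{equation*}
in which $\delta$ has indeed disappeared, so no Dirac-type CP violation survives in this limit.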
A convenient summary of the neutrino oscillation data is given in table \ref{table:OscillationData}. The pattern of the mixings is characterised by two large angles and a small one: $\theta_{23}$ is compatible with a maximal value, but the accuracy admits relatively large deviations; $\theta_{12}$ is large, but about $5\sigma$ far from the maximal value; $\theta_{13}$ has only an upper bound. According to the type of the experiments which measured them, the mixing angle $\theta_{23}$ is called atmospheric, $\theta_{12}$ solar and $\theta_{13}$ reactor.
We underline that there is a tension between the two global fits presented in table \ref{table:OscillationData} on the central value of the reactor angle: ref. \cite{Fogli:Indication} suggests a positive value, $\sin^2\theta_{13}\simeq0.016\pm0.010$ [$1.6\sigma$], while ref. \cite{Maltoni:Indication} finds a best-fit value consistent with zero within less than $1\sigma$. A direct measurement is therefore needed \cite{MezzettoSchwetz}, by experiments like DOUBLE CHOOZ \cite{doublechooz}, Daya Bay \cite{dayabay}, MINOS \cite{minos}, RENO \cite{reno}, T2K \cite{T2K} and NOvA \cite{NOvA}.
It is interesting to note that the large lepton mixing angles contrast with the small angles of the CKM matrix. Furthermore, to compare with eq. (\ref{SM:PhysBasis:QuarkCKMmatrix}), we display the allowed ranges of the entries of the PMNS matrix \cite{PMNSEntriesOld}:
\begin{equation}
|U|=\,\left(
\begin{array}{ccc}
U_{e1} & U_{e2} & U_{e3} \\
U_{\mu1} & U_{\mu2} & U_{\mu3} \\
U_{\tau1} & U_{\tau2} & U_{\tau3} \\
\end{array}
\right)\;
=\, \left(
\begin{array}{ccc}
0.79-0.88 & 0.47-0.61 & <0.20 \\
0.19-0.52 & 0.42-0.73 & 0.58-0.82\\
0.20-0.53 & 0.44-0.74 & 0.56-0.81 \\
\end{array}
\right)\;.
\end{equation}\\
In analytical and numerical analysis, quark and lepton mixing matrices are not in the standard form as in eqs. (\ref{SM:PhysBasis:MixingMatrixVCKM}, \ref{SM:PhysBasis:MixingMatrixUPMNS}) but it is possible to recover the mixing angles $\theta_{ij}$ and the phases $\delta$, $\varphi_1$ and $\varphi_2$ through the following procedure. Note that, when considering the CKM matrix, the Majorana phases are absent. Denoting the generic mixing matrix as $W$, the mixing angles are given by
\begin{equation}
\sin\theta_{13}=|W_{13}|\;,\qquad
\tan\theta_{12}=\left(\dfrac{|W_{12}|}{|W_{11}|}\right)\;,\qquad
\tan\theta_{23}=\left(\dfrac{|W_{23}|}{|W_{33}|}\right)\;,
\end{equation}
if $|W_{11}|$ ($|W_{33}|$) is non-vanishing, otherwise $\theta_{12}$ ($\theta_{23}$) is equal to $\pi/2$. For the Dirac CP-violating phase we use the relation
\begin{equation}
W_{ii}^*W_{ij}W_{ji}W_{jj}^* = c_{12}\,c_{13}^2\, c_{23}\,s_{13} \left(e^{-i\delta}\,s_{12}\,s_{23} - c_{12}\,c_{23}\,s_{13}\right)
\end{equation}
which holds for $i,j\in\{1,2,3\}$ and $i\ne j$. Then the phase $\delta$ is given by
\begin{equation}
\delta=-\arg\left(\dfrac{\displaystyle\dfrac{W_{ii}^*W_{ij}W_{ji}W_{jj}^*}
{c_{12}\,c_{13}^2\,c_{23}\,s_{13}}
+c_{12}\,c_{23}\,s_{13}}
{s_{12}\,s_{23}}\right)
\end{equation}
where $i,j\in\{1,2,3\}$ and $i\ne j$. Similarly, we can write the Jarlskog invariant in terms of mixing angles and the phase $\delta$:
\begin{equation}
\begin{split}
J_\mathrm{CP} =&\, \frac{1}{2} \left| \mbb{I}\mrm{m} (W_{11}^*W_{12}W_{21}W_{22}^*)\right| \,=\, \frac{1}{2} \left| \mbb{I}\mrm{m} (W_{11}^*W_{13}W_{31}W_{33}^*)\right|\\
= &\, \frac{1}{2} \left| \mbb{I}\mrm{m} (W_{22}^*W_{23}W_{32}W_{33}^*)\right| \, = \, \frac{1}{2}\left|c_{12}\,c_{13}^2\, c_{23}\,\sin \delta \, s_{12}\,s_{13}\,s_{23}\right|\;.
\end{split}
\end{equation}
To conclude, the Majorana phases are given by
\begin{equation}
\varphi_1=2\arg(e^{i\delta_e}\,W_{11}^*)\;,\qquad\qquad
\varphi_2=2\arg(e^{i\delta_e}\,W_{12}^*)\;,
\end{equation}
where $\delta_e=\arg(e^{i\delta}\,W_{13})$.
\section{Supersymmetry}
\label{Sec:SUSY}
\setcounter{footnote}{3}
In the Standard Model two Higgs parameters appear in the scalar potential: $m_H$ and $\lambda_H$, the mass parameter and the quartic coupling of the Higgs boson, respectively. The Higgs VEV is linked to these parameters as
\begin{equation}
\mean{H}\equiv\dfrac{v}{\sqrt2}=\sqrt{-\dfrac{m_H^2}{2\lambda_H}}\;.
\end{equation}
Since $\lambda_H$ is bounded from above by various consistency conditions (such as perturbative unitarity), it follows that roughly $-m^2_H\sim (100\;\mathrm{GeV})^2$. However, the mass parameter $m_H$ is expected to receive large radiative corrections, since it depends on the cutoff scale at which new physics is introduced: this leads to the well known ``hierarchy problem'' of particle physics.
If the cutoff scale is taken to be close to the Planck scale $M_P\approx10^{19}$ GeV, the corrections due to the fermion loops are much larger than the weak scale.
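A standard way to quantify this statement (sketched here up to $\cO(1)$ factors, with the top Yukawa $y_t$ taken as the dominant contribution): a fermion loop with cutoff $\Lambda$ shifts the Higgs mass parameter quadratically,
\begin{equation*}
\delta m_H^2 \simeq -\dfrac{3\,|y_t|^2}{8\pi^2}\,\Lambda^2\;,\qquad
\Lambda\sim M_P \;\Rightarrow\; |\delta m_H^2|\sim10^{36}\;\mathrm{GeV}^2\gg(100\;\mathrm{GeV})^2\;,
\end{equation*}
so an enormous fine-tuning of the bare parameter would be needed to keep $-m_H^2\sim(100\;\mathrm{GeV})^2$.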
An elegant solution to the hierarchy problem is the low-energy supersymmetric theory. Supersymmetry (SUSY) can be considered an extension of the usual 4-dimensional space-time Poincar\'e symmetry, in which new fermionic transformations, that change the spin of fields, are introduced. We consider only $N=1$ Supersymmetry, which is the simplest supersymmetric theory, in which a single set of supersymmetric generators is introduced.
It is not difficult to generalise the Standard Model description of section \ref{Sec:SM} to the supersymmetric context: each Standard Model field is considered to be a part of a superfield, $z$, and the Lagrangian of the model can be written as a sum of different terms in the following way
\begin{equation}
\mscr{L}\,=\;\int d^2\theta d^2\overline{\theta} {\cal K}(\overline{z}, e^{2 V} z)+\left[\int d^2 \theta w(z)+\text{h.c.}\right]
+\frac{1}{4}\left[\int d^2\theta f(z) {\cal W W}+\text{h.c.}\right]\;,
\label{leel}
\end{equation}
where $\cK(\overline{z},z)$ is the K\"ahler potential, a real gauge-invariant function of the chiral superfields $z$ and their conjugates, of dimensionality $(\mathrm{mass})^2$; $w(z)$ is the superpotential, an analytic gauge-invariant function of the chiral superfields, of dimensionality $(\mrm{mass})^3$; $f(z)$ is the gauge kinetic function, a dimensionless analytic gauge-invariant function; $V$ is the Lie-algebra valued vector supermultiplet, describing the gauge fields and their superpartners. Finally ${\cal W}$ is the chiral superfield describing, together with the function $f(z)$, the kinetic terms of the gauge bosons and their superpartners.
In the minimal supersymmetric Standard Model (MSSM) the field content of the Standard Model is increased by an extra Higgs $SU(2)_L$-doublet: the usual Standard Model Higgs $H$, defined in table \ref{SM:SMFermions}, is renamed to $H_d$ and it is responsible for giving mass to the down quarks, to the charged leptons and to their superpartners. The extra Higgs, $H_u$, is required to generate the Dirac mass of the up quarks (and of neutrinos if $\nu^c$ are included) and of their superpartners, as the holomorphicity requirement of the superpotential prevents the charge conjugate of $H_d$ from playing that role (in contrast to what happens with $H$ in the Standard Model).
The usual Standard Model fermions are contained in chiral superfields with their respective superpartners, bosons with spin 0 usually denoted as sfermions (squarks and sleptons). Analogously, the vector superfields contain the usual Standard Model gauge bosons and their own superpartners, spin $1/2$ fermions usually called gauginos (gluinos, photino, bino and winos). Finally, the two Higgs belong to chiral superfields with their superpartners, spin $1/2$ fermions denoted as Higgsinos. It is common to indicate with a ``$\sim$'' the component of a superfield which represents the superpartner of a Standard Model field.
The scalar potential $V\equiv V(\tilde{z},\,\tilde{z}^\dag)$ is composed of two contributions. One is usually called the $F$-term, obtained from the superpotential as $F_{i} \equiv \partial w(\tilde{z})/\partial\tilde{z}_{i}$, where $i$ is an index labelling the components of whatever representation the field has under the gauge group (for example, two components if the chiral superfield containing $\tilde{z}_{i}$ is a doublet of $SU(2)$). The other contribution is usually called the $D$-term, and is associated with the gauge group: $D^{a} \equiv g_a (M^a_{FI})^2-g_a \tilde{z}^{\dagger} T^{a} \tilde{z}$, where $a$ labels the generators $T^{a}$ of the group and $(M^a_{FI})^2$ denotes the contribution of the Fayet-Iliopoulos (FI) term, which may be non-zero only for Abelian $U(1)$ factors of the group. Assuming a canonical K\"ahler potential, $\cK=\overline{z}_iz_i$ and summarising the two contributions we have
\begin{equation}
V = F^\dag F + \dfrac{1}{2} D^{2} = \sum_{i} \left| \dfrac{dw(z)}{d\tilde{z}_{i}} \right|^{2} + \dfrac{1}{2} \sum_{a}\left(g_a (M^a_{FI})^2- g_{a}\tilde{z}^\dag T^{a} \tilde{z} \right)^2\;.
\end{equation}
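As a rough numerical illustration (not part of the original discussion), the structure $V=|F|^2+D^2/2$ can be sketched for a toy model with a single chiral superfield $z$ of charge $q$ under a $U(1)$, a mass-term superpotential $w(z)=m z^2/2$ and FI parameter $\xi=M_{FI}^2$; all parameter names here are hypothetical choices for the sketch.

```python
# Toy-model sketch of V = |F|^2 + D^2/2 for one charged field z
# under a U(1), with superpotential w(z) = m z^2 / 2.
# All parameter values are illustrative, not taken from the text.
def scalar_potential(z, m=1.0, g=0.5, q=1.0, xi=0.0):
    F = m * z                      # F = dw/dz
    D = g * (xi - q * abs(z)**2)   # D-term including the FI contribution
    return abs(F)**2 + 0.5 * D**2  # V = |F|^2 + D^2/2

# With xi = 0 the minimum sits at z = 0 with V = 0
# (vanishing vacuum energy, unbroken Supersymmetry).
print(scalar_potential(0.0))          # 0.0
print(scalar_potential(1.0 + 0.0j))   # 1 + 0.125 = 1.125
```

Note that $V$ is a sum of squares, so $V\geq0$, with $V=0$ exactly when all $F$- and $D$-terms vanish.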
In terms of the hierarchy problem, $m_H$ receives new contributions from the superpartners of the Standard Model particles: the loop diagrams with superparticles in the loop have exactly the same value as those with Standard Model particles in the loop, but with opposite sign (due to the minus sign coming from the fermion loop). Supersymmetry thus enables the exact cancellation of the quadratic divergences, leaving only milder logarithmic divergences.
While supersymmetric theories provide a natural explanation of the hierarchy problem, dangerous gauge-invariant, renormalisable operators appear: the most general superpotential would also include terms which violate either the baryon number ($B$) or the total lepton number ($L$). The existence of these terms corresponds to $B$- and $L$-violating processes, which however have not been observed: a strong constraint comes from the non-observation of proton decay. A possible way out of this problem is the introduction of a new symmetry in the MSSM which allows the Yukawa terms, but suppresses the $B$- and $L$-violating terms in the renormalisable superpotential. This new symmetry is called ``matter parity'' or, equivalently, ``$R$-parity''. Matter parity is a multiplicative conserved quantum number defined as
\begin{equation}
P_M=(-1)^{3(B-L)}
\end{equation}
for each particle in the theory. It is easy to check that quark and lepton supermultiplets have $P_M=-1$, while Higgs supermultiplets, gauge bosons and gauginos have $P_M=+1$. In the superpotential only terms with $P_M=+1$ are allowed. The advantage of such a solution is that $B$ and $L$ are violated only by non-renormalisable terms in the Lagrangian and therefore only in tiny amounts.
It is also common to use a second, equivalent definition of this symmetry, the $R$-parity:
\begin{equation}
P_R=(-1)^{3(B-L)+2s}\;,
\end{equation}
where $s$ is the spin of the particle. The two definitions are precisely equivalent, since the product of the factors $(-1)^{2s}$ over the particles at any vertex conserving angular momentum is always $+1$. Under this symmetry all the Standard Model particles and the Higgs bosons have even $R$-parity ($P_R=+1$), while all their superpartners have odd $R$-parity ($P_R=-1$).
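The two parity assignments can be checked mechanically. The following sketch (with a hypothetical particle table, each entry a $(B,\,L,\,s)$ triple) evaluates $P_M=(-1)^{3(B-L)}$ and $P_R=(-1)^{3(B-L)+2s}$ on a few sample states:

```python
from fractions import Fraction

# Matter parity and R-parity; the exponent 3(B-L)+2s is always an
# integer for physical states, so we reduce it mod 2.
def matter_parity(B, L, spin):
    return (-1) ** (int(3 * (B - L)) % 2)

def r_parity(B, L, spin):
    return (-1) ** (int(3 * (B - L) + 2 * spin) % 2)

third, half = Fraction(1, 3), Fraction(1, 2)
particles = {                    # (B, L, spin) -- illustrative table
    "quark":   (third, 0, half), # P_M = -1, P_R = +1
    "squark":  (third, 0, 0),    # P_M = -1, P_R = -1
    "lepton":  (0, 1, half),     # P_M = -1, P_R = +1
    "Higgs":   (0, 0, 0),        # P_M = +1, P_R = +1
    "gaugino": (0, 0, half),     # P_M = +1, P_R = -1
}
for name, (B, L, s) in particles.items():
    print(name, matter_parity(B, L, s), r_parity(B, L, s))
```

The output reproduces the assignments stated above: Standard Model particles and Higgs bosons carry $P_R=+1$, superpartners $P_R=-1$.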
The phenomenological consequences of $R$-parity conservation in a theory are extremely important: the lightest sparticle with odd $R$-parity is called the lightest supersymmetric particle (LSP) and is absolutely stable; each sparticle other than the LSP must eventually decay into a state with an odd number of LSPs; at colliders, sparticles can only be produced in even numbers.\\
Since no superparticle has been observed yet, Supersymmetry must be broken at some scale higher than the electroweak scale. However, in order to solve the hierarchy problem, the breaking scale $m_{SUSY}$ has to be relatively low, not much higher than $1$ TeV.
The superparticle mass spectrum depends strongly on the Supersymmetry breaking mechanism. Figure \ref{fig:SUSYmasses} shows an example of the evolution of the superparticle masses with the energy scale $Q$, driven by radiative corrections, with positive gauge and negative Yukawa contributions. Supergravity-inspired boundary conditions have been implemented in the plot: a common mass $m_0$ for the scalar partners and $m_{1/2}$ for the gauginos, imposed at a unification scale $M_{GUT}$ of about $10^{16}$ GeV \cite{Martin}.
\begin{figure}[ht!]
\centering
\subfigure
{\includegraphics[width=7.8cm]{SUSYmasses.pdf}}
\subfigure
{\includegraphics[width=7.8cm]{SUSYgauge.pdf}}
\caption{\it On the left, running of the superpartner masses (from \protect\cite{Martin}). On the right, running coupling constants (from \protect\cite{Martin}). The strong coupling represented by $\alpha_{3}(m_{Z})$ is varied from $0.113$ to $0.123$ and the mass thresholds are varied between $250$ and $1000$ GeV.}
\label{fig:SUSYmasses}
\end{figure}
In figure \ref{fig:SUSYmasses}, $\mu$ represents the so-called $\mu$-term, the coupling of the two Higgs $\mu H_{u} H_{d}$. $M_{3}$, $M_{2}$ and $M_{1}$ are the gaugino masses, corresponding to the $SU(3)_{c}$, $SU(2)_{L}$ and $U(1)_{Y}$ gauge groups respectively, running from the common gaugino mass $m_{1/2}$. The dashed lines refer to the third-generation sfermions, and the solid lines to the other sfermions, all running from the common scalar mass $m_0$. It is interesting to note that the radiative corrections due to the strong interactions dominate, driving the gluinos and the squarks considerably heavier than the other gauginos and the sleptons. Furthermore the third-generation sfermions (particularly the stop and the sbottom) are lighter than those of the first two generations, as they receive larger negative Yukawa contributions.
From figure \ref{fig:SUSYmasses} we find another interesting feature of supersymmetric models: in the Standard Model there is no reason to have a negative $m_H^2$, but in figure \ref{fig:SUSYmasses} we can see that the mass squared of $H_u$ can be driven negative at low $Q$, because the negative Yukawa contributions (largely due to the coupling to the top quark) dominate over the gauge contributions. As a result, in the supersymmetric context there is a natural explanation of the electroweak symmetry breaking.
The unification of the gauge coupling constants is strictly connected to the radiative corrections of the supersymmetric spectrum: there is no a priori reason to require this feature in a model, but in the Standard Model the gauge couplings seem to converge towards a common value (dashed lines in figure \ref{fig:SUSYmasses}) and this suggests that the introduction of new physics could improve the unification. This is the case for the superparticles: considering again the Supersymmetry breaking scale $m_{SUSY}$ at about $1$ TeV, the evolution is changed and the three couplings run together, as shown by the solid lines of figure \ref{fig:SUSYmasses}. Clearly, if $m_{SUSY}$ had been of a different order of magnitude, the unification would be lost. $\alpha_3$, $\alpha_2$ and $\alpha_1$ are the fine-structure constants defined as $\alpha_{a}=g_{a}^{2}/4 \pi$ and associated with $SU(3)_c$, $SU(2)_L$ and $U(1)_Y$, in the GUT normalisation, such that $g_2=g$ and $g_{1}=\sqrt{5/3} g'$.
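This behaviour can be illustrated with a one-loop running of the inverse couplings, $\alpha_a^{-1}(Q)=\alpha_a^{-1}(m_Z)-\frac{b_a}{2\pi}\ln(Q/m_Z)$. The input values at $m_Z$ and the MSSM one-loop coefficients $b_a=(33/5,\,1,\,-3)$ below are indicative textbook numbers (GUT-normalised $U(1)$); a careful treatment would include thresholds and two-loop terms.

```python
import math

mZ = 91.19                          # GeV
inv_alpha_mZ = [59.0, 29.6, 8.5]    # alpha_1^-1, alpha_2^-1, alpha_3^-1 at m_Z
b_mssm = [33.0 / 5.0, 1.0, -3.0]    # one-loop MSSM beta coefficients

def inv_alpha(Q, b):
    """One-loop inverse couplings at scale Q (GeV)."""
    t = math.log(Q / mZ)
    return [a0 - b_a * t / (2.0 * math.pi) for a0, b_a in zip(inv_alpha_mZ, b)]

vals = inv_alpha(2.0e16, b_mssm)    # near the expected unification scale
print(vals)                         # all three close to ~24
print(max(vals) - min(vals))        # small spread
```

With these inputs the three inverse couplings nearly coincide around $Q\sim2\times10^{16}$ GeV, in line with the unification scale quoted in the text; repeating the exercise with the Standard Model coefficients $b_a=(41/10,\,-19/6,\,-7)$ shows no common crossing point.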
\section{Grand Unified Theories}
\label{Sec:GUT}
\setcounter{footnote}{3}
Grand unified theories (GUTs) are the result of a common belief among many physicists that the apparent variety of interactions in Nature should eventually find a unified description, i.e. a theory with only one gauge coupling constant which is spontaneously broken at a very high energy scale. The idea is to have, at energies up to $M_{GUT} \gg m_Z$, a simple gauge group G which is spontaneously broken down to the Standard Model gauge group:
\begin{equation}
G\stackrel{M_{GUT}}{\longrightarrow} SU(3)_c\times SU(2)_L\times U(1)_Y\stackrel{m_Z}{\longrightarrow}SU(3)_c\times U(1)_{em}\;.
\end{equation}
To have complete unification (a single gauge coupling constant) and to have the Standard Model as the low-energy effective representative (Standard Model as a subgroup), the group $G$ must be simple and of rank $r\geq 4$. Furthermore, $G$ must allow for complex but anomaly-free representations in order to correctly embed the Standard Model fermions. Only a few groups fulfil all these requirements and the simplest solutions are $SU(5)$ and $SO(10)$, of rank 4 and 5 respectively. It is relevant to note that the minimal versions of GUTs are not realistic: they suffer from serious problems, such as the explanation of the correct symmetry breaking scheme, the prediction of a sufficiently long proton lifetime and the correct description of fermion masses and mixings.\\
The simplest GUT is the minimal $SU(5)$ model by Georgi and Glashow \cite{MinimalSU5}. The gauge bosons of this model belong to the adjoint $\bf24$-dimensional representation of $SU(5)$, which decomposes under the Standard Model gauge group as:
\begin{equation}
{\bf24}=({\bf8},\,{\bf1},0)+({\bf1},\,{\bf3},0)+({\bf1},\,{\bf1},0)+({\bf3},\,{\bf2},-5/6)+({\bf\overline{3}},\,{\bf2},5/6)
\end{equation}
where the first three terms are the usual Standard Model gauge bosons and the others are $12$ new bosons with both colour and weak isospin. Each fermion generation must be arranged in a representation of $SU(5)$ which decomposes under $SU(3)_c\times SU(2)_L$ as
\begin{equation}
2\times({\bf\overline{3},{\bf1}})+({\bf3},{\bf2})+({\bf1},{\bf2})+({\bf1},{\bf1})
\end{equation}
and this property is present in the reducible ${\bf\overline{5}}+{\bf 10}$ representation of $SU(5)$:
\begin{eqnarray}
{\bf\overline{5}}&=&({\bf\overline{3},{\bf1}})+({\bf1},{\bf2})=\left(d_1^c,\,d_2^c,\,d_3^c,\,e,\,-\nu\right)^T\\[3mm]
{\bf10}&=&({\bf\overline{3},{\bf1}})+({\bf3},{\bf2})+({\bf1},{\bf1})=\left(
\begin{array}{ccc|cc}
0 & u_3^c & -u_2^c & -u_1 & -d_1 \\
-u_3^c & 0 & u_1^c & -u_2 & -d_2 \\
u_2^c & -u_1^c & 0 & -u_3 & -d_3 \\
&&&&\\[-5mm]
\hline\\[-5mm]
u_1 & u_2 & u_3 & 0 & -e^c \\
d_1 & d_2 & d_3 & e^c & 0 \\
\end{array}
\right)\;.
\end{eqnarray}
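The dimension bookkeeping of these decompositions is easy to verify. The following sketch (with a hypothetical helper `dim` summing products of $SU(3)_c$ and $SU(2)_L$ dimensions, conjugate representations counted by their dimension) checks the $\bf24$ and the $\bf\overline{5}+\bf10$ content:

```python
# Sum the dimensions of a list of (dim SU(3), dim SU(2)) components.
def dim(pieces):
    return sum(d3 * d2 for d3, d2 in pieces)

# 24 = (8,1) + (1,3) + (1,1) + (3,2) + (3bar,2)
assert dim([(8, 1), (1, 3), (1, 1), (3, 2), (3, 2)]) == 24

# 5bar = (3bar,1) + (1,2);  10 = (3bar,1) + (3,2) + (1,1)
assert dim([(3, 1), (1, 2)]) == 5
assert dim([(3, 1), (3, 2), (1, 1)]) == 10

# One generation fits in 5bar + 10: 15 Weyl fermions
# (16 once a right-handed neutrino is added, as in SO(10)).
print(dim([(3, 1), (1, 2)]) + dim([(3, 1), (3, 2), (1, 1)]))  # 15
```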
The scalar content of the model contains two fields, $\phi_{24}$ and $\phi_{\ov5}$, which transform as $\bf24$ and $\bf{\overline{5}}$ of $SU(5)$, respectively. When these fields get VEVs, $\mean{\phi_{24}}\approx M_{GUT}$ breaks $SU(5)$ down to $SU(3)_c\times SU(2)_L\times U(1)_Y$ and subsequently $\mean{\phi_{\ov5}}\approx m_Z$ breaks it down to $SU(3)_c\times U(1)_\mathrm{em}$. The two breakings occur at two very distinct energy scales: the first at $M_{GUT}\sim10^{15\div16}$ GeV and the second at $m_Z$. As a result there is a large ratio between the two VEVs: this is the well-known hierarchy problem and corresponds to a fine-tuning of the parameters of about $14$ orders of magnitude. Many attempts have been proposed to solve this problem, but all of them require a non-minimal extension of the model.\\
In order to go further, it is possible to extend the symmetry to $SO(10)$ \cite{SO10Original}, where there is not only gauge coupling unification, but also every fermion of one generation, including right-handed neutrinos, fits in one single fundamental representation, the $\bf16$ of $SO(10)$: it decomposes under $SU(3)_c\times SU(2)_L$ as
\begin{equation}
\begin{split}
{\bf16}&=\,({\bf3},\,{\bf2})+2\times({\bf{\ov3}},\,{\bf1})+({\bf1},\,{\bf2})+({\bf1},\,{\bf1})+({\bf1},\,{\bf1})\\[3mm]
&=\,\left(\nu,\,u_1,\,u_2,\,u_3,\,e,\,d_1,\,d_2,\,d_3,\,| \,-d_3^c,\,d_2^c,\,d_1^c,\,-e^c,\,u_3^c,\,-u_2^c,\,-u_1^c,\,\nu^c\right)^T\;.
\end{split}
\end{equation}
The gauge bosons belong to the $\bf45$ representation of $SO(10)$: it contains the same gauge bosons as $SU(5)$ plus $21$ additional new states.
In order to break $SO(10)$ down to the Standard Model gauge group it is necessary to use the spinorial representation $\bf{\overline{16}}$ and the representation $\bf126$, which is contained in the symmetric part of the $\bf{\overline{16}}\times\bf{\overline{16}}$ product (the VEV of an adjoint representation does not lower the rank of the group). In this minimal $SO(10)$ model, the VEVs of these scalar fields contain three free parameters which determine the type of the breaking: it is possible to have a one-step breaking (directly down to $SU(3)_c\times SU(2)_L\times U(1)_Y$) at $M_{GUT}\approx10^{15\div16}$ GeV as well as a two-step breaking
\begin{equation}
SO(10)\stackrel{M_X}{\longrightarrow}G'\stackrel{M_{GUT}}{\longrightarrow} SU(3)_c\times SU(2)_L\times U(1)_Y\;,
\end{equation}
with $M_X>M_{GUT}$. There are two inequivalent maximal breaking patterns: \linebreak\mbox{$SO(10)\longrightarrow SU(5)\times U(1)_X$} or $SO(10)\longrightarrow SU(4)_c\times SU(2)_L\times SU(2)_R$. The first possibility corresponds to the case discussed above, where the Standard Model gauge group is achieved through the subsequent breaking of $SU(5)$. What is relevant is that the additional $U(1)_X$ can remain unbroken up to a scale close to $m_Z$, thus giving rise to modifications of the usual neutral current phenomenology. On the other hand the second possibility corresponds to the well-known Pati-Salam (PS) GUT \cite{PatiSalam}.\\
The Pati-Salam group is one example of a partial GUT which ties quarks and leptons together: the leptons are seen as the extra ``colour'' of $SU(4)_c$. Furthermore the $SU(2)_R$ factor makes the model left-right symmetric. Although the Pati-Salam model unifies the fermions to some extent, the gauge couplings remain independent parameters; for this reason it is only a partial GUT.
Each of the three families has one left-handed multiplet including left-handed quark and lepton doublets $F=(Q,\ell)$, and one right-handed multiplet $F^c=(Q^{c},\ell^{c})$ including the charge conjugates of the right-handed states that now belong to their own doublets ($Q^{c}$ and $\ell^{c}$). The explicit matrix representation of the two multiplets is given by
\begin{equation}
F\sim({\bf4},\,{\bf2},\,{\bf1})=\left(
\begin{array}{cccc}
u_1 & u_2 & u_3 & \nu \\
d_1 & d_2 & d_3 & e \\
\end{array}
\right)\;,\qquad
F^c\sim({\bf{\overline{4}}},\,{\bf1},\,{\bf{\overline{2}}})=\left(
\begin{array}{cccc}
d_1^c & d_2^c & d_3^c & e^c \\
u_1^c & u_2^c & u_3^c & \nu^c \\
\end{array}
\right)\;.
\label{GUT:PSassignment}
\end{equation}
From eq. (\ref{GUT:PSassignment}), we can see that the right-handed neutrinos are now naturally introduced together with the charge conjugates of the right-handed charged leptons, $e^{c}$.
The breaking to the Standard Model gauge group originates from the introduction of three different scalar fields: $\Delta_L\sim({\bf\overline{10}},\,{\bf2},\,{\bf1})$, $\Delta_R\sim({\bf\overline{10}},\,{\bf1},\,{\bf2})$ and $\phi\sim({\bf1},\,{\bf2},\,{\bf1})$, where the $\bf10$ is the symmetric part of the ${\bf4}\times{\bf4}$ product, which under $SU(3)_c$ decomposes as ${\bf10}={\bf6}+{\bf3}+{\bf1}$. The VEV of the $SU(3)_c$-singlet part of $\Delta_R$ does not break $SU(3)_c$, $SU(2)_L$ or $U(1)_\mathrm{em}$, but since $\Delta_R$ is charged under $SU(4)_c$ and $SU(2)_R$, these two groups are spontaneously broken. As a result, when $\Delta_R$ develops a VEV, the first breaking step occurs:
\begin{equation}
SU(4)_c\times SU(2)_L\times SU(2)_R \stackrel{\mean{\Delta_R}}{\longrightarrow} SU(3)_c\times SU(2)_L\times U(1)_Y\;.
\end{equation}
The last step, the electroweak symmetry breaking, is accomplished by the VEV of $\phi$.\\
A common interesting feature of GUTs is the presence of new interactions, which could have a strong phenomenological impact: among them, proton decay is a strong test for any GUT. Considering for example the minimal $SU(5)$ model, proton decay arises from four-fermion operators, with a prediction for the proton lifetime smaller than the present lower bound. This represents a failure of the minimal $SU(5)$ model, which can however be overcome by considering non-minimal extensions of the model.
Before concluding the section, we briefly comment on the possibility of supersymmetric GUTs. This kind of model offers the possibility of solving both the hierarchy problem and the proliferation of free parameters. Taking as an example the minimal supersymmetric $SU(5)$ model \cite{MinimalSUSYSU5}, the simplest supersymmetric GUT realisation, gauge bosons and fermions are assigned to the same representations as in the non-supersymmetric minimal $SU(5)$, but are promoted to the corresponding supermultiplets. The Higgs sector is enlarged by the introduction of an additional $\phi_{5}$, which transforms as a $\bf 5$ under the gauge group. Also in the supersymmetric variant of the model a fine-tuning of the parameters is necessary in order to keep the mass of the Higgs doublets small.
In this minimal supersymmetric $SU(5)$ model the hierarchy problem finds a natural solution, the gauge coupling constant unification is improved with respect to the MSSM and the proton decay problem can be addressed by the introduction of $R$-parity. On the other hand, the Supersymmetry breaking mechanism (whatever it is) introduces many new parameters.
We have commented on different possibilities for a GUT scenario, including the interplay with Supersymmetry, but in all of these models the existence of three families and the explanation of their mass hierarchies and mixings remain an open problem, which could only be solved by extending these theories in a non-minimal way. In the following chapter we directly face the flavour problem.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{The Flavour Puzzle}
\label{Sec:FlavourPuzzle}
\setcounter{equation}{0}
\setcounter{footnote}{3}
In this chapter we focus on the flavour sector of the Standard Model, which is the origin of the majority of the free parameters. Including the neutrino masses, the complete list accounts for $26$ or $28$ low-energy free parameters, depending on lepton number conservation or violation (or, alternatively, on the Dirac or Majorana nature of neutrinos). Five of these are flavour universal: the three gauge coupling constants $g,\,g',\,g_3$, one Higgs quartic coupling $\lambda_H$ and one Higgs mass squared $m_H^2$. The rest are parameters associated with the fermion masses and mixings: the masses of the six quarks, the three charged leptons and the three neutrinos; three angles and one Dirac phase in the quark sector; three angles, one Dirac phase and possibly two Majorana phases in the lepton sector. The last parameter is the strong CP-violating parameter $\overline{\theta}_{CP}$, which is intimately related to the quark masses.
While from the experimental point of view there is abundant information on the numerical values of (almost all) these parameters, from the theoretical side there are several fundamental open questions: why are there three generations of fermions? why are quarks and charged leptons strongly hierarchical? why do neutrinos not show the same hierarchy? why is the neutrino absolute mass scale much smaller than the charged fermion masses? why are the quark mixing angles much smaller than (at least two of) the lepton mixing ones? are the masses and the mixings free parameters? are the mixing parameters related to the masses? why is $\overline{\theta}_{CP}<10^{-9}$? what is the origin of the CP violation? The lack of a fundamental understanding of all these problems is addressed as the ``flavour puzzle''.\\
Moving to more general considerations, the flavour problem is one of the key aspects of all the extensions of the Standard Model. Grand unified theories (GUTs) can help in (partially) improving the situation: besides their aesthetic appeal, GUTs reduce the number of free parameters. Apart from the unification of the gauge coupling constants, several relations among fermion masses have been found: in the minimal $SU(5)$ model, at the GUT scale, the following relation holds
\begin{equation}
M_d=M_e^T\;,
\end{equation}
which leads to the following expressions for the mass eigenvalues
\begin{equation}
m_b=m_\tau\;,\qquad m_s=m_\mu\;,\qquad m_d=m_e\;.
\label{FP:MassesMinimalSU5}
\end{equation}
It is easy, however, to see that this result is not in agreement with the experimental measurements. In order to test the validity of these predictions we should extrapolate the masses from the GUT threshold down to the low-energy scale. We can see, however, that the last two relations of eq. (\ref{FP:MassesMinimalSU5}) turn out not to be acceptable even without going through the renormalisation group (RG) evolution: eq. (\ref{FP:MassesMinimalSU5}) implies $m_s/m_d=m_\mu/m_e$, which is independent of the running. Comparing these mass ratios directly with the observations, we find $m_s/m_d\simeq20$ and $m_\mu/m_e\simeq200$, concluding that this relation is off by an order of magnitude.
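The argument can be checked with a few indicative numbers (rough running quark masses at low scale and PDG charged-lepton masses; all values approximate):

```python
# Indicative masses in GeV: m_d, m_s are MS-bar values around 2 GeV,
# the lepton masses are pole masses.  Values approximate.
m_d, m_s = 4.7e-3, 95e-3
m_e, m_mu = 0.511e-3, 105.66e-3

ratio_quarks = m_s / m_d        # ~ 20
ratio_leptons = m_mu / m_e      # ~ 207
print(ratio_quarks, ratio_leptons, ratio_leptons / ratio_quarks)
# The two ratios differ by roughly a factor of 10, so m_s/m_d = m_mu/m_e
# fails by an order of magnitude, independently of the RG running.
```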
With this simple example, we understand that a non-minimal extension of the model should be considered and, usually, this introduces new parameters which reduce the advantages of using a GUT to explain the origin of fermion masses and mixings.\\
When we consider (broken) supersymmetric theories, mainly motivated by the solution of the hierarchy problem, the flavour sector suffers from the introduction of many new mass and mixing parameters. Furthermore, in the Standard Model without right-handed neutrinos the flavour symmetry $[U(3)]^5$ is strongly violated only by the CKM matrix, so that there is a natural suppression of all flavour-changing and CP-violating effects; in a general supersymmetric theory, instead, once non-universal soft breaking terms are added, new sources of flavour and CP violation are introduced and the theory is subject to very stringent constraints from the experimental measurements. A dangerous source of these effects are the off-diagonal entries of the sfermion mass matrices in generation space: strong constraints come from the $\mu\to e\gamma$ and $b\to s\gamma$ decays, the $K^0-\overline{K}^0$ and $B^0-\overline{B}^0$ systems, electric and magnetic dipole moments, etc.
It is interesting to note, however, that the minimal supersymmetric Standard Model (MSSM) respects all these constraints when universal boundary conditions on the soft terms are assigned: indeed the sfermion masses are universal at the unification scale and the only non-universality is the one induced by the renormalisation group running down to the electroweak scale. However, this result is not a peculiar feature of the MSSM, but characterises a larger class of models in which the flavour sector is determined by universality conditions. In the next section we briefly discuss this general approach to soften the flavour problem.
In section \ref{Sec:FS} we illustrate a second interesting approach, which consists in studying the presence of vanishing entries in the fermion mass or mixing matrices: such studies usually go under the name of texture zeros and are useful to understand, from a theoretical point of view, which flavour pattern might be a good approximation of the experimental measurements. Among these, we focus on the bimaximal and the tribimaximal mixing patterns, which have received particular attention in recent years.
In section \ref{Sec:FSym} we give a brief overview on the flavour symmetries which have been introduced to recover particular flavour structures, able to reproduce the measured fermion masses and mixings: in particular, we comment on the advantages and disadvantages of using symmetries which can be either Abelian or non-Abelian, either local or global, either continuous or discrete. We focus only on flavour symmetries which commute with the underlying gauge symmetry group. In this section we also comment on the necessary requirement to implement a (spontaneous) breaking mechanism of the flavour symmetry in order to correctly describe fermion masses and mixings.
\section{The Minimal (Lepton) Flavour Violation}
\label{Sec:MFV}
\setcounter{footnote}{3}
The Minimal Flavour Violation (MFV) models \cite{MFV} are the simplest class of extensions of the Standard Model attempting to solve the flavour problem. They are based on the introduction of the symmetry $G_f=[U(3)]^5$, acting only among the fermion generations, which corresponds to the largest group of unitary field transformations that commutes with the Standard Model gauge group, not including right-handed neutrinos. $G_f$ can be decomposed as
\begin{equation}
G_f=SU(3)_Q^3\times SU(3)_L^2\times U(1)_B\times U(1)_L\times U(1)_Y\times U(1)_{PQ}\times U(1)_{e^c}
\end{equation}
where
\begin{eqnarray}
SU(3)_Q^3 &=& SU(3)_q\times SU(3)_{u^c}\times SU(3)_{d^c}\\[3mm]
SU(3)_L^2 &=& SU(3)_\ell\times SU(3)_{e^c}\;.
\end{eqnarray}
The $U(1)$ factors can be identified with the baryon number $U(1)_B$, the lepton number $U(1)_L$, the hypercharge $U(1)_Y$, the Peccei-Quinn symmetry $U(1)_{PQ}$ of two-Higgs-doublet models \cite{PecceiQuinnMFV} and with a rotation which affects $e^c$ only, $U(1)_{e^c}$. Under $SU(3)_Q^3\times SU(3)_L^2$ the fermions transform as
\begin{eqnarray}
&q\sim({\bf\ov3},\,{\bf1},\,{\bf1};\,{\bf1},\,{\bf1})\;,\qquad
u^c\sim({\bf1},\,{\bf3},\,{\bf1};\,{\bf1},\,{\bf1})\;,\qquad
d^c\sim({\bf1},\,{\bf1},\,{\bf3};\,{\bf1},\,{\bf1})\;,&\nn\\[3mm]
&\ell\sim({\bf1},\,{\bf1},\,{\bf1};\,{\bf\ov3},\,{\bf1})\;,\qquad
e^c\sim({\bf1},\,{\bf1},\,{\bf1};\,{\bf1},\,{\bf3})\;.&\nn
\end{eqnarray}
In the Standard Model the Yukawa interactions break the symmetry group $SU(3)_Q^3\times SU(3)_L^2\times U(1)_{PQ}\times U(1)_{e^c}$, but preserve $B$, $L$ and $Y$. We can recover flavour invariance by introducing dimensionless auxiliary fields $Y_e$, $Y_d$ and $Y_u$ transforming under $SU(3)_Q^3\times SU(3)_L^2$ as
\begin{equation}
Y_e\sim ({\bf1},\,{\bf1},\,{\bf1};\,{\bf3},\,{\bf\ov3})\;,\qquad
Y_d\sim ({\bf3},\,{\bf1},\,{\bf\ov3};\,{\bf1},\,{\bf1})\;,\qquad
Y_u\sim ({\bf3},\,{\bf\ov3},\,{\bf1};\,{\bf1},\,{\bf1})\;.
\end{equation}
This allows us to write a Lagrangian with the same form as the Yukawa interactions
\begin{equation}
\mscr{L}_{\rm MFV} = e^c\,Y_e\,H^\dag\ell + d^c\,Y_d\,H^\dag q + u^c\,Y_u\,\widetilde{H}^\dag q + \text{h.c.}\;.
\end{equation}
This expression describes the most general coupling of the fields $Y_i$ to the renormalisable Standard Model operators.
The fermion masses and the quark mixing are then recovered allowing the auxiliary fields to develop a VEV. Using the flavour symmetry, it is possible to write these VEVs as
\begin{eqnarray}
\mean{Y_e} &=& \dfrac{\sqrt2}{v}\diag(m_e,\,m_\mu,\,m_\tau)\;,\nn\\[3mm]
\mean{Y_d} &=& \dfrac{\sqrt2}{v}\diag(m_d,\,m_s,\,m_b)\;,
\label{MFV:DiagonalBasis}\\[3mm]
\mean{Y_u} &=&\dfrac{\sqrt2}{v}\diag(m_u,\,m_c,\,m_t)V\;,\nn
\end{eqnarray}
where $m_i$ are the fermion masses, $v/\sqrt2$ is the VEV of the neutral component of the Higgs field and $V$ is the CKM matrix.
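As a consistency sketch, one can build $\mean{Y_u}$ in this basis and check that its singular values return $\sqrt2\,m_i/v$, since the CKM matrix is unitary. Here the CKM matrix is approximated, purely for illustration, by a Cabibbo rotation, and all numerical values are indicative:

```python
import numpy as np

v = 246.0                                   # GeV, electroweak VEV
masses_u = np.array([2.2e-3, 1.27, 172.9])  # m_u, m_c, m_t in GeV (indicative)
lam = 0.225                                 # Cabibbo angle sine

# Cabibbo-only approximation of the CKM matrix (unitary by construction).
V = np.array([[np.sqrt(1 - lam**2), lam, 0.0],
              [-lam, np.sqrt(1 - lam**2), 0.0],
              [0.0, 0.0, 1.0]])

Yu = (np.sqrt(2.0) / v) * np.diag(masses_u) @ V

# Since V is unitary, the singular values of <Y_u> are sqrt(2) m_i / v.
sv = np.sort(np.linalg.svd(Yu, compute_uv=False))
print(np.allclose(sv, np.sort(np.sqrt(2.0) * masses_u / v)))
```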
We can now rewrite the MFV leading principle in different terms: an effective low-energy theory satisfies the criterion of MFV if all higher-dimensional operators, constructed from the Standard Model fields and the $Y_i$ auxiliary fields, are invariant under CP and under the flavour group $G_f$. In this way any flavour and CP violation contribution is completely determined by the CKM matrix.\\
In order to account for the neutrino masses it is necessary to extend such a treatment to the Minimal Lepton Flavour Violation (MLFV) context \cite{MLFV1,MLFVother}. Similarly to the discussion in section \ref{Sec:SM}, we should distinguish the case in which the source of neutrino masses is the Weinberg operator or the introduction of new fields:
\begin{description}
\item[Minimal field content.] In this case the MLFV rests on two hypotheses: the breaking of the lepton number occurs at a very high energy scale $\Lambda_L$, which is unrelated to the flavour symmetry $G_f$; with respect to the MFV scenario, there is only one additional source of flavour violation in the lepton sector, $Y_\nu\sim({\bf1},\,{\bf1},\,{\bf1};\,{\bf6},\,{\bf1})$, defined by
\begin{equation}
\mscr{L}_{\rm MLFV} = \mscr{L}_{\rm MFV} + \dfrac{1}{2\Lambda_L} (\widetilde{H}^\dag\ell)^T\,Y_\nu\,(\widetilde{H}^\dag\ell)+ \text{h.c.}\;.
\end{equation}
The neutrino masses originate when $Y_\nu$ develops a VEV and, using the invariance under $G_f$ to rotate the fields in the basis of eq. (\ref{MFV:DiagonalBasis}), we can express $\mean{Y_\nu}$ in terms of neutrino masses and lepton mixings:
\begin{equation}
\mean{Y_\nu}=\dfrac{\Lambda_L}{4v^2}U^*\diag(m_1,\,m_2,\,m_3)U^\dag\;,
\end{equation}
where $m_i$ are the neutrino masses and $U$ is the PMNS mixing matrix.
\item[Extended field content.] In this scenario three right-handed neutrinos are considered and the flavour group enlarges to account for an additional $SU(3)$ term: $G_f\times SU(3)_{\nu^c}$. Only the right-handed neutrinos transform under the additional symmetry term, $\nu^c\sim({\bf1},\,{\bf1},\,{\bf1};\,{\bf1},\,{\bf1},\,{\bf3})$. There is a large freedom in the way of breaking this group to generate the observed masses and mixings: it is possible to introduce neutrino mass terms transforming as $({\bf1},\,{\bf1},\,{\bf1};\,{\bf6},\,{\bf1},\,{\bf1})$, $({\bf1},\,{\bf1},\,{\bf1};\,{\bf1},\,{\bf1},\,{\bf\ov6})$ and $({\bf1},\,{\bf1},\,{\bf1};\,{\bf3},\,{\bf1},\,{\bf\ov3})$. All these possibilities correspond to a Majorana mass term for the left-handed neutrinos, for the right-handed neutrinos and to a Dirac mass term, respectively. In \cite{MLFV1} a discussion of these cases is presented and some differences with respect to the previous scenario are found.
\end{description}
In \cite{MFV,MLFV1,MLFVother} a general classification of six-dimensional effective operators is presented, allowing the study of several flavour violating transitions. The conclusion is that MFV respects the strong constraints coming from LFV and FCNC processes, such as $\mu\to e\gamma$, $b\to s\gamma$, etc. However, while such a choice has the advantage that it can accommodate any pattern of fermion masses and mixing angles, it does not provide any explanation for the origin of the particular structure of the VEVs of the auxiliary fields which break the flavour symmetry or, in other words, for the fermion mass hierarchies and for the specific patterns of the mixing matrices. This shortcoming suggests searching for other flavour symmetries which, while leading to correct fermion masses and mixings without introducing unwanted flavour-changing phenomena, explain the origin of the observed flavour phenomenology.
\section{Flavour Structure Approach}
\label{Sec:FS}
\setcounter{footnote}{3}
In this section we review some ideas to go beyond the MFV approach. Instead of starting from a universality principle, the idea is to study particular flavour patterns for the fermion mass matrices which lead to realistic mixing angles and fermion hierarchies. A very simple strategy is to require that certain elements of the fermion mass matrices are negligible so that, as a result, some matrix entries can be set to zero. These constructions are usually referred to as texture zeros: they are not motivated by any specific principle, but serve to increase the predictivity of the model, since this approach reduces the number of free parameters.
In the quark sector, four \cite{4ZerosQuarks}, five and six \cite{56ZerosQuarks} texture zeros have been extensively studied and some of them are able to correlate the quark masses with some elements of the CKM matrix, getting relations as the following \cite{GST_Relation,QuarkRelations}:
\begin{equation}
\left|\dfrac{V_{td}}{V_{ts}}\right| = \sqrt{\dfrac{m_d}{m_s}}\;,\qquad\qquad
\left|\dfrac{V_{ub}}{V_{cb}}\right| = \sqrt{\dfrac{m_u}{m_c}}\;.
\end{equation}
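A rough numerical test of these relations, with indicative PDG central values for the quark masses and CKM elements (all approximate), shows that the first is well satisfied while the second holds only at the order-of-magnitude level:

```python
import math

# Indicative values: running quark masses in GeV and CKM moduli.
m_d, m_s = 4.7e-3, 95e-3
m_u, m_c = 2.2e-3, 1.27
Vtd, Vts = 0.0087, 0.0404
Vub, Vcb = 0.0037, 0.0410

lhs1, rhs1 = Vtd / Vts, math.sqrt(m_d / m_s)
lhs2, rhs2 = Vub / Vcb, math.sqrt(m_u / m_c)
print(lhs1, rhs1)   # ~0.215 vs ~0.222: good agreement
print(lhs2, rhs2)   # ~0.090 vs ~0.042: only a rough, order-of-magnitude match
```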
In the lepton sector, mass matrices with different numbers of zeros and with zeros in various places have been considered \cite{ZerosNeutrinos}. In particular, the charged lepton mass matrix is usually assumed to be diagonal and all the flavour information is encoded in the neutrino sector: if neutrinos are of Majorana type the mass matrix must be symmetric and, as a result, there are several viable textures with only two independent zeros, while schemes with a larger number of zeros appear to be excluded by experiments; on the contrary, if neutrinos are of Dirac nature, textures with more than two zeros are allowed. A common problem of these studies is the stability of the texture zeros under corrections due to the renormalisation group running from the cutoff, at which the zeros are imposed, down to the electroweak scale: this effect is particularly relevant when the neutrino spectrum is quasi-degenerate or inversely hierarchical, as discussed in more detail in section \ref{Sec:Running}.
The texture-zeros approach should be taken with some caution: for instance, the origin of the zeros which appear in the mass matrices is not clear; furthermore, the zeros might not be exact, partially reducing the predictivity of the model. In any case, these flavour patterns could help in understanding the correct approach to explain fermion mass hierarchies and mixings.\\
Following the philosophy of the textures zeros, which can be seen as particular flavour patterns of the mass matrices, a similar approach has been used to discuss interesting flavour structures for the mixing matrices, with particular attention to the lepton sector.
As already discussed in section \ref{Sec:SM:PhysicalBasis}, the pattern of the lepton mixings is characterised by two large angles and a small one: the atmospheric angle is compatible with a maximal value; the solar angle is large, but not maximal; the reactor angle only has an upper bound and is well compatible with a vanishing value. In view of this scenario, for a long time efforts concentrated on models with a lepton mixing matrix characterised by a maximal angle and a vanishing one, $\theta_{23}=\pi/4$ and $\theta_{13}=0$: apart from sign conventions,
\begin{equation}
U_{\mu-\tau}=\left(
\begin{array}{ccc}
c_{12} & s_{12} & 0 \\
-s_{12}/\sqrt2 & c_{12}/\sqrt2 & -1/\sqrt2 \\
-s_{12}/\sqrt2 & c_{12}/\sqrt2 & +1/\sqrt2 \\
\end{array}
\right)\;,
\end{equation}
in the basis where charged leptons are diagonal. The most general neutrino mass matrix which can be diagonalised by $U_{\mu-\tau}$ is $\mu-\tau$ symmetric, for which $(m_\nu)_{2,2}=(m_\nu)_{3,3}$ and $(m_\nu)_{1,2}=(m_\nu)_{1,3}$, and is given by \cite{MuTauSymMatrix}
\begin{equation}
m_\nu=\left(
\begin{array}{ccc}
x & y & y \\
y & z & w \\
y & w & z \\
\end{array}
\right)\;.
\end{equation}
Since the reactor angle vanishes, there is no Dirac CP violation and the only phases are of Majorana type. Factoring out the Majorana phases, the mass matrix depends on four real parameters: the three masses and the remaining angle, the solar one, which can be written in terms of the mass parameters as
\begin{equation}
\sin^2{2\theta_{12}}=\dfrac{8y^2}{(x-z-w)^2+8y^2}\;,
\label{FS:MuTauSolarAng}
\end{equation}
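This closed form can be cross-checked numerically: diagonalising a $\mu-\tau$ symmetric matrix, the state with vanishing first component is $\nu_3$, and $\sin^2{2\theta_{12}}=4|U_{e1}|^2|U_{e2}|^2$ can be compared with the combination $8y^2/[(x-z-w)^2+8y^2]$ coming from the $2\times2$ block in the $(\nu_e,(\nu_\mu+\nu_\tau)/\sqrt2)$ basis. The parameter values below are arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary illustrative values of the mass parameters
x, y, z, w = 0.3, 0.15, 0.5, 0.1
m = np.array([[x, y, y],
              [y, z, w],
              [y, w, z]])

vals, vecs = np.linalg.eigh(m)

# nu_3 = (0, 1, -1)/sqrt(2): the eigenvector whose first component vanishes
i3 = np.argmin(np.abs(vecs[0, :]))
a, b = [i for i in range(3) if i != i3]

sin2_2th12 = 4 * vecs[0, a]**2 * vecs[0, b]**2   # = 4 c12^2 s12^2
closed_form = 8 * y**2 / ((x - z - w)**2 + 8 * y**2)
print(sin2_2th12, closed_form)  # the two expressions agree
```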
Many models have been constructed which introduce a flavour symmetry in addition to the gauge group of the Standard Model in order to reproduce the $\mu-\tau$ symmetric pattern for the neutrino mass matrix, but in all of them the solar angle remains undetermined. Some new ingredient beyond the $\mu-\tau$ symmetry is therefore necessary to correctly describe the neutrino mixings from the theoretical point of view. In what follows we review two relevant flavour structures, which can be considered upgrades of the $\mu-\tau$ symmetry: the bimaximal (BM) and the tribimaximal (TB) patterns.
\subsection{The Bimaximal Mixing Pattern}
\label{Sec:FS:BM}
\setcounter{footnote}{3}
In the so-called bimaximal pattern \cite{BMmixing}, $\theta_{13}=0$ while $\theta_{23}$ and $\theta_{12}$ are assumed to be maximal. A maximal solar angle can alternatively be written as $\sin^2{\theta_{12}}=1/2$, which, comparing with eq. (\ref{FS:MuTauSolarAng}), corresponds to a well defined relation between the mass parameters: $w=x-z$. The most general mass matrix of the BM-type can be written as
\begin{equation}
m_\nu=\left(
\begin{array}{ccc}
x & y & y \\
y & z & x-z \\
y & x-z & z \\
\end{array}
\right)
\label{FS:BM:GeneralMassMatrix}
\end{equation}
and satisfies the $\mu-\tau$ symmetry together with an additional relation, for which $(m_\nu)_{1,1}=(m_\nu)_{2,2}+(m_\nu)_{2,3}$. The $\mu-\tau$ symmetry is responsible for $\theta_{13}=0$ and, as discussed in section \ref{Sec:SM:PhysicalBasis}, the CP-violating phase does not contribute. Apart from the Majorana phases, eq. (\ref{FS:BM:GeneralMassMatrix}) depends on only three real parameters, the masses, which can be written in terms of the mass parameters $x$, $y$ and $z$:
\begin{equation}
m_1=x+\sqrt2y\;,\qquad
m_2=x-\sqrt2y\;,\qquad
m_3=2z-x\;.
\end{equation}
These masses are the eigenvalues of eq. (\ref{FS:BM:GeneralMassMatrix}), while the eigenstates define the unitary transformation which diagonalises the mass matrix in such a way that $m_\nu^\diag=U_{BM}^T m_\nu U_{BM}$, where the unitary matrix is given by
\begin{equation}
U_{BM}=\left(
\begin{array}{ccc}
1/\sqrt2 & -1/\sqrt2 & 0 \\
1/2 & 1/2 & -1/\sqrt2 \\
1/2 & 1/2 & +1/\sqrt2 \\
\end{array}
\right)\;.
\label{FS:BM:BMmixing}
\end{equation}
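A quick numerical check (with arbitrary illustrative values of $x$, $y$, $z$) confirms that $U_{BM}$ diagonalises the general matrix of eq. (\ref{FS:BM:GeneralMassMatrix}) with the eigenvalues quoted above:

```python
import numpy as np

s2 = np.sqrt(2)
U_BM = np.array([[1/s2, -1/s2,  0   ],
                 [0.5,   0.5,  -1/s2],
                 [0.5,   0.5,   1/s2]])

# Arbitrary illustrative mass parameters; w = x - z enforces the BM structure
x, y, z = 0.2, 0.05, 0.45
m = np.array([[x, y,     y    ],
              [y, z,     x - z],
              [y, x - z, z    ]])

d = U_BM.T @ m @ U_BM
print(np.round(d, 12))
# diagonal with entries (x + sqrt(2) y, x - sqrt(2) y, 2 z - x)
```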
Notice that $U_{BM}$ does not depend on the mass parameters $x$, $y$, $z$, or on the mass eigenvalues, in contrast with the quark sector, where the entries of the CKM matrix can be written in terms of the ratio of the quark masses. This feature puts the bimaximal pattern in the class of the mass-independent textures \cite{LV_Theorem}.
It is useful to express eq. (\ref{FS:BM:GeneralMassMatrix}) in terms of $m_i$ instead of $x$, $y$ and $z$:
\begin{equation}
\begin{split}
m_\nu&=\,U_{BM}\,\diag(m_1,\,m_2,\,m_3)\,U_{BM}^T\\[3mm]
&=\,\dfrac{m_1}{4}\left(\begin{array}{ccc}
2 & \sqrt2 & \sqrt2 \\
\sqrt2 & 1 & 1 \\
\sqrt2 & 1 & 1 \\
\end{array}
\right)
+\dfrac{m_2}{4}\left(\begin{array}{ccc}
2 & -\sqrt2 & -\sqrt2 \\
-\sqrt2 & 1 & 1 \\
-\sqrt2 & 1 & 1 \\
\end{array}
\right)
+\dfrac{m_3}{2}\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & -1 \\
0 & -1 & 1 \\
\end{array}
\right)\;.
\end{split}
\end{equation}
Clearly, all types of hierarchies among the neutrino masses can be accommodated. The smallness of the ratio $r=\Delta m^2_{sun}/\Delta m^2_{atm}$ requires either $\vert xy\vert \ll \vert z^2\vert$ (normal hierarchy), or $\vert x\vert \sim \vert z\vert\ll \vert y\vert$ (inverse hierarchy), or $\vert y\vert \ll \vert x\vert \sim \vert z\vert$ (approximate degeneracy, except for $x\sim 2z$).
A final comment on the agreement of this scheme with the experimental data is in order. In the bimaximal pattern the solar angle is assumed to be maximal, $\sin^2{\theta_{12}}= 1/2$, to be compared with the latest experimental determinations: at the $3\sigma$ level,
\mbox{$\sin^2{\theta_{12}}= 0.26-0.37$} from \cite{Fogli:Indication} or $\sin^2{\theta_{12}}= 0.25-0.37$ from \cite{Maltoni:Indication}. The bimaximal pattern can therefore be considered at most a zeroth order approximation that needs large corrections.
\subsection{The Tribimaximal Mixing Pattern}
\label{Sec:FS:TB}
\setcounter{footnote}{3}
In the so-called tribimaximal or Harrison-Perkins-Scott pattern \cite{HPS}, a vanishing reactor angle, a maximal atmospheric angle and $\sin^2{\theta_{12}}=1/3$ are assumed. From eq. (\ref{FS:MuTauSolarAng}) it follows that $w=x+y-z$ and therefore the most general mass matrix of the TB-type is given by
\begin{equation}
m_\nu=\left(
\begin{array}{ccc}
x & y & y \\
y & z & x+y-z \\
y & x+y-z & z \\
\end{array}
\right)\;.
\label{FS:TB:GeneralMassMatrix}
\end{equation}
This matrix satisfies the $\mu-\tau$ symmetry and the so-called magic symmetry, for which \mbox{$(m_\nu)_{1,1}=(m_\nu)_{2,2}+(m_\nu)_{2,3}-(m_\nu)_{1,3}$}. The $\mu-\tau$ symmetry ensures that the reactor angle vanishes and as a result the CP phase disappears. Disregarding the Majorana phases, the mass matrix depends on only three real parameters, the masses, which can be written in terms of the parameters $x$, $y$ and $z$:
\begin{equation}
m_1=x-y\;,\qquad
m_2=x+2y\;,\qquad
m_3=2z-x-y\;.
\end{equation}
These eigenvalues come from the diagonalisation of eq. (\ref{FS:TB:GeneralMassMatrix}) by the use of a unitary transformation in such a way that $m_\nu^\diag=U_{TB}^T m_\nu U_{TB}$, where the unitary matrix is given by
\begin{equation}
U_{TB}=\left(
\begin{array}{ccc}
\sqrt{2/3} & 1/\sqrt3 & 0 \\
-1/\sqrt6 & 1/\sqrt3 & -1/\sqrt2 \\
-1/\sqrt6 & 1/\sqrt3 & +1/\sqrt2 \\
\end{array}
\right)\;.
\label{FS:TB:TBmixing}
\end{equation}
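As for the bimaximal case, a quick numerical check (with arbitrary illustrative values of $x$, $y$, $z$) confirms that $U_{TB}$ diagonalises the general matrix of eq. (\ref{FS:TB:GeneralMassMatrix}) with the eigenvalues quoted above:

```python
import numpy as np

U_TB = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0           ],
                 [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
                 [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

# Arbitrary illustrative mass parameters; w = x + y - z enforces the TB structure
x, y, z = 0.2, 0.05, 0.45
m = np.array([[x, y,         y        ],
              [y, z,         x + y - z],
              [y, x + y - z, z        ]])

d = U_TB.T @ m @ U_TB
print(np.round(d, 12))
# diagonal with entries (x - y, x + 2 y, 2 z - x - y)
```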
Notice that $U_{TB}$ does not depend on the mass eigenvalues, in complete analogy to the bimaximal pattern of eq. (\ref{FS:BM:BMmixing}), and therefore it belongs to the class of mass-independent textures.
It is useful to write eq. (\ref{FS:TB:GeneralMassMatrix}) in terms of $m_i$ instead of $x$, $y$ and $z$:
\begin{equation}
\begin{split}
m_\nu&=\,U_{TB}\,\diag(m_1,\,m_2,\,m_3)\,U_{TB}^T\\[3mm]
&=\,\dfrac{m_3}{2}\left(\begin{array}{ccc}
0&0&0\\
0&1&-1\\
0&-1&1\end{array}\right)
+\dfrac{m_2}{3}\left(\begin{array}{ccc}
1&1&1\\
1&1&1\\
1&1&1\end{array}\right)
+\dfrac{m_1}{6}\left(\begin{array}{ccc}
4&-2&-2\\
-2&1&1\\
-2&1&1\end{array}\right)\;.
\end{split}
\end{equation}
All types of neutrino spectra can be accommodated: $m_3\gg m_2\gg m_1$ defines a normal hierarchy; a degenerate spectrum is obtained by choosing $m_3\approx-m_2 \approx m_1$; for $m_1 \approx - m_2$ and $m_3 \approx 0$ the inverse hierarchy case is achieved. However, stability under renormalisation group running strongly prefers opposite signs for the first and the second eigenvalues, which are related to solar oscillations and have the smallest mass squared splitting (see section \ref{Sec:Running} for details).
Finally we underline that this mixing pattern is a very good approximation of the experimental data: the tribimaximal values for the atmospheric and the reactor angles lie inside the $1\sigma$ ranges, while the value for the solar angle is very close to the upper $1\sigma$ bound.\\
The study of promising patterns for the mass matrices or for the mixing matrices is a useful and predictive tool to describe the experimental data, but such patterns suffer from the lack of an explanation of their origin and of their stability under corrections. In order to improve the situation, it is necessary to search for a deeper reason for their appearance. In the next section we will examine the use of flavour symmetries to recover such patterns.
\section{Flavour Symmetry Overview}
\label{Sec:FSym}
\setcounter{footnote}{3}
In section \ref{Sec:MFV} we discussed the maximal size of a flavour symmetry in the Standard Model: $[U(3)]^5$ without right-handed neutrinos, or otherwise $[U(3)]^6$. If we deal with a GUT model, however, the maximal size is reduced (for example in $SO(10)$ GUTs it is a single $U(3)$). Furthermore, in the current literature, there is a tendency to adopt a flavour symmetry which can be embedded into $SU(3)$ or $SO(3)$, to motivate some kind of rotation among the families. In any case, as we will see shortly, the symmetry which is introduced has to be broken: this is a general requirement of the Yukawa interactions and a necessary condition in order to be consistent with the observed fermion masses and mixings.
There is a large variety of symmetries which can be used: they can be either Abelian or non-Abelian, either local or global (or even a combination of them) and finally either discrete or continuous. Historically, flavour symmetries were first used to describe the quark sector, and the Abelian $U(1)$ symmetry has been shown to be able to explain the observed quark mass hierarchies and mixings. In this approach, developed by Froggatt and Nielsen in 1979 \cite{FN}, there is a flavon field $S$, a gauge-invariant scalar, which acquires a vacuum expectation value (VEV) and breaks the $U(1)$ symmetry. It is possible to define a small parameter $\epsilon=\mean{S}/\Lambda_f$, where the cutoff $\Lambda_f$ is the scale of flavour dynamics, usually associated with some heavy fermions which are integrated out. The symmetry breaking is then communicated to the fermions through a non-universal mechanism, in such a way that different fermions receive different powers of $\epsilon$. The advantage of this mechanism is that the Yukawa couplings can be of $\cO(1)$, with the fermion masses and mixings explained by powers of the expansion parameter $\epsilon$. The main disadvantage is the lack of sharp predictions: masses and mixing angles are only predicted up to unknown $\cO(1)$ coefficients. Furthermore, certain mixing patterns such as the bimaximal and the tribimaximal schemes cannot be achieved with an Abelian symmetry. We can therefore conclude that the predictive power of a non-Abelian symmetry is in general larger than that of an Abelian one.
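A minimal numerical sketch of the Froggatt-Nielsen idea (the charges and the $\cO(1)$ coefficients below are purely hypothetical choices, not taken from \cite{FN}): with Yukawa entries $Y_{ij}\sim c_{ij}\,\epsilon^{q_i+q_j}$, the singular values come out hierarchical and the left-handed 1-2 mixing is of order $\epsilon$, even though all the $c_{ij}$ are of order one.

```python
import numpy as np

eps = 0.22                     # epsilon = <S>/Lambda_f, of the order of the Cabibbo angle
qL = np.array([3, 2, 0])       # hypothetical U(1)_FN charges of the left-handed doublets
qR = np.array([3, 2, 0])       # hypothetical charges of the right-handed up quarks

# Fixed O(1) coefficients (any order-one numbers would do)
c = np.array([[ 1.3, -0.8,  0.9],
              [ 0.7,  1.1, -1.2],
              [-0.6,  0.8,  1.0]])
Y = c * eps ** (qL[:, None] + qR[None, :])

u, s, vh = np.linalg.svd(Y)    # s is sorted in decreasing order
print(s / s[0])                # hierarchical "masses": roughly (1, eps^4, eps^6) x O(1)
print(abs(u[0, 1]))            # left-handed 1-2 mixing of order eps
```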
Concerning the local or global nature of a flavour symmetry, we have to remember that the requirement of anomaly freedom for a local symmetry can put strong constraints on the charge assignments of the fermions. Furthermore, locality protects the symmetry from being broken by quantum gravity effects at the Planck scale.
We now discuss the advantages and disadvantages of using a continuous or a discrete group. In the case of a spontaneously broken symmetry, a continuous one leads to the appearance of Goldstone or gauge bosons. The breaking of a discrete group is safe from such a consequence, but could be affected by the problem of domain walls \cite{DomainWalls} (solvable by inflation). Furthermore, using continuous groups such as $SO(3)$ or $SU(3)$, we have only a single non-trivial possibility to describe the three fermion families, and the type of contractions is also strongly limited. On the contrary, adopting a discrete symmetry, there are several small representations which can be usefully employed. Despite these disadvantages, continuous symmetries have been extensively studied in the literature. A first attempt was proposed in \cite{SymmU2}, where the investigated flavour symmetry is the group $U(2)$, under which the three families transform as a ${\bf2}+{\bf1}$. This reflects the fact that the third family is the heaviest: in this way it is possible to explain a relatively large mixing between the first two families, while the mixings with the third generation are smaller. This fits well the quark sector, where the Cabibbo angle is the largest, but it does not work for the lepton sector, where two of the angles are large.
An upgraded approach has been pursued with the use of the $SO(3)$ \cite{SymmSO3,SymmSO3King} and $SU(3)$ \cite{SymmSU3} symmetries, which account for all three generations. In these models, realistic fermion mass spectra and mixings are achieved, but at the price of introducing additional heavy degrees of freedom and auxiliary symmetries, which suppress unwanted operators. It is worth noting that in all of these models it is non-trivial to explicitly realise an embedding of the flavour symmetry in an underlying GUT.\\
After this brief summary, we restrict ourselves to the context of non-Abelian discrete flavour symmetries, which are in general more predictive than Abelian ones and are safe from dangerous effects such as the appearance of Goldstone or gauge bosons. Furthermore, the particular mixing pattern in the lepton sector can be very well explained by the use of certain discrete symmetries, which are all subgroups of $SU(3)$. In the rest of this section we do not enter into the details of each group, referring to \cite{GroupRepresentations} for the general group theory and to the following chapters, where some of these groups are treated in detail.
\subsection{Discrete Flavour Symmetries}
\label{Sec:FS:Discrete}
\setcounter{footnote}{3}
Here we give a brief overview of the main discrete flavour symmetries which have been (recently) adopted in order to describe quarks and leptons.
Of particular relevance is the $A_4$ flavour group \cite{TBA4,MR_A4,BMV_A4,AF_Extra,BH_Geometric,Ma_TBMandSUSYwithA4,AF_Modular,HKV_A4, AFL_Orbifold,He_Renorm,MPT_SO10xA4,Yin_A4,BKM_A4Zn,Altarelli_Lectures,AFH_SU5,BMPT_SU3,AG_AF,HMV_Nie,LinPredictive,CDGG_Extra,JM_A4Lepto,BFM_SO10xA4, Riazuddin,CK_Form,Lin_Lepto,BFRS_A4Lepto,AM_Lin,HMV_ILSS,Lin_LargeReactor,BBFN_Lepto,HMP_Lepto,ABMMM_Lepto}. $A_4$ has been widely used as a flavour symmetry since it is the smallest group with an irreducible three-dimensional representation. In this way, it is possible to collect all three families in a unique representation, as with $SO(3)$ or $SU(3)$, but with more freedom in the type of couplings. The introduction of $A_4$ as the flavour symmetry of the lepton sector can produce the tribimaximal pattern as the lepton mixing matrix, and a great effort has been devoted to the study of the related neutrino phenomenology. However, when the quark sector is considered, it turns out to be a highly non-trivial task to obtain a (even non-grand) unified description of leptons and quarks. We will illustrate this point in more detail in section \ref{Sec:FlavourModelsTBM}. We only note here that a possible strategy to overcome the problem is to enlarge the symmetry.
Two groups have been used in order to mimic the behaviour of $A_4$ in the lepton sector, while pursuing a correct description of the quark mass hierarchies and mixings as well: the group $T'$ \cite{FHLM_Tp,Tp_other,CF_Tp}, whose features will be illustrated in section \ref{Sec:TpTBM}, and the group $S_4$ \cite{BM_S4,BMM_S4,BMM_SS,Lam_S4,MP_S4}, which will be discussed in section \ref{Sec:S4TBM}. It is worth reporting that the group $S_4$ had already been studied in the literature with different aims \cite{S4Old}; it has also recently been used for a revival of the bimaximal pattern in the context of the MSSM \cite{AFM_BimaxS4}, which will be the subject of section \ref{Sec:FlavourModelsBM}, and for the construction of a realistic Pati-Salam GUT \cite{ABM_PSS4}, discussed in section \ref{Sec:AFM:PS}.
The groups $T'$ and $S_4$ are the smallest groups which fit the lepton and the quark sectors well at the same time, but several other studies based on different flavour symmetries show interesting results: in particular the recently analysed symmetry group $\Delta(27)$ \cite{SymmDelta27} is worth mentioning.\\
Before concluding this section, it is relevant to underline a general feature of models based on (discrete) groups: in the majority of cases the symmetry alone is not sufficient to fully account for the fermion mass hierarchies and mixings. A first problem concerns the differences between leptons and quarks: two (of three) large lepton mixing angles compared with three small and hierarchical quark ones; neutrinos with a much milder mass hierarchy than the charged fermions. A viable solution consists in avoiding interference between the two sectors, at least in first approximation; in order to keep them separated, additional groups, such as Abelian $Z_n$ factors, are implemented in the complete flavour symmetry group. A second problem refers to the use of the three-dimensional representation, which is usually adopted to describe leptons: the components of a triplet have degenerate masses, unless some breaking parameter is introduced. From this follows the problem of how to describe the charged lepton mass hierarchy, and two kinds of solutions have been proposed: the most used is the Froggatt-Nielsen (FN) mechanism, which consists in introducing an additional (global or local) $U(1)_{FN}$ factor under which the right-handed fermions transform; alternatively, it is possible to introduce additional symmetry groups, usually very small, such as $Z_3$ or $S_3$. In the following chapters we will focus only on models where the Froggatt-Nielsen mechanism is responsible for the charged lepton mass hierarchy.
\subsection{The Flavour Symmetry Breaking}
\label{Sec:FS:SSB}
\setcounter{footnote}{3}
In this section we comment on the necessity of a flavour symmetry breaking mechanism, which can occur explicitly or spontaneously. Since explicit breaking generally introduces several additional parameters, we focus only on spontaneous symmetry breaking.
The gauge group of the Standard Model prevents direct fermion mass terms, and the Higgs mechanism is responsible for generating them. When a flavour symmetry is implemented in a model, some new fields are needed: the simplest example is the M(L)FV approach discussed in section \ref{Sec:MFV}, where the Yukawa couplings are promoted to gauge singlet scalars. Even in this simple approach, the Yukawa fields must develop a VEV in order to generate mass terms for the fermions. A similar requirement holds when the flavour symmetries presented in the previous section are introduced. We have already seen this aspect when discussing the FN symmetry: a new scalar field $S$ is introduced and its VEV, communicated to the fermions, accounts for masses and mixings. These new degrees of freedom are usually called ``flavons'': they are typically invariant under the gauge group of the underlying theory, either Standard Model or GUT, and transform only under the flavour symmetry; their masses are typically much larger than the electroweak scale, which means that they introduce a further energy scale into the model; they develop a VEV which defines the energy scale at which the flavour symmetry is broken. In order not to introduce further scales into the theory, an alternative approach has been pursued: the flavour and the electroweak symmetries are broken together through the introduction of several copies of the Standard Model Higgs doublet which transform non-trivially under the flavour group. It is well-known that such multi-Higgs models are strongly constrained by direct searches for Higgs bosons and by indirect bounds from flavour changing neutral current and lepton flavour violating processes. For this reason we confine ourselves to models in which the flavour symmetry is broken at a higher energy scale than the electroweak one by the VEVs of some flavon fields.\\
The requirement of a broken flavour symmetry is also the result of a well-known no-go theorem \cite{LV_Theorem,FeruglioSymBreaking}, which deals with the lepton mixing angles and in particular with the atmospheric angle. We report here the main argument, according to which, under quite general conditions, $\theta_{23}=\pi/4$ can never result from an exact flavour symmetry.
A general assumption is that the flavour symmetry (either global or local, either continuous or discrete) is only broken by small effects. Furthermore, in the basis of canonical kinetic terms, the symmetry acts on the field content of the Standard Model, potentially including also the right-handed neutrinos, through unitary transformations. Finally, the proof is restricted to the limit of exact symmetry, so that the symmetry breaking sector can be neglected.
The lepton mass matrices can be written as the sum of two terms: the first, denoted by $M_e^0$ and $m_\nu^0$ respectively, gives the dominant contributions and corresponds to the mass matrices in the exact symmetry phase; the second contains the subleading symmetry breaking effects. Comparing with the measured values of the charged lepton masses, it is natural to require that $M_e^0$ have rank less than or equal to $1$: otherwise the differences between the two or three non-vanishing masses (all of the same order in the exact symmetry limit) would have to be explained by large breaking effects or by fine-tunings of the parameters, both of which we have excluded. On the other hand, a vanishing rank is also a bad starting point, since in this case the charged lepton mixing angles are undetermined and, as we will see shortly, $\theta_{23}$ is completely undetermined as well. The only remaining possibility is that the rank is equal to $1$.
By unitary transformations it is always possible to go to the basis where
\begin{equation}
M_e^0=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & m_\tau^0\\
\end{array}
\right)\;.
\end{equation}
Denoting by $U_\nu$ and $U_e$ the unitary matrices that diagonalise $m_\nu^0$ and $M_e^{0\dagger} M_e^0$, we can adopt the parametrisation in eq. (\ref{SM:PhysBasis:MixingMatrixUPMNS}) for $U_\nu$, putting a suffix $\nu$ on angles and phases, and
\begin{equation}
U_e=R_{12}(\theta^e_{12})\;,
\end{equation}
where the angle $\theta^e_{12}$ is completely undetermined.
The physical mixing matrix is defined as the product $U=U_e^\dagger U_\nu$ and it follows that
\begin{equation}
\left|\tan\theta_{23}\right|=\left|\cos\theta^e_{12} \tan\theta^\nu_{23} + \sin\theta^e_{12}\dfrac{\tan\theta^\nu_{13}}{\cos\theta^\nu_{23}} e^{-i\delta}\right|\;.
\end{equation}
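This relation can be verified numerically with a PDG-like parametrisation of $U_\nu$ (an assumption on the convention of the parametrisation referred to above; the angle values are arbitrary illustrative choices), extracting $|\tan\theta_{23}|=|U_{\mu3}/U_{\tau3}|$ from the product $U=U_e^\dagger U_\nu$:

```python
import numpy as np

def R(i, j, th, delta=0.0):
    """Complex rotation in the (i, j) plane with a CP phase (PDG-like convention)."""
    m = np.eye(3, dtype=complex)
    c, s = np.cos(th), np.sin(th)
    m[i, i] = m[j, j] = c
    m[i, j] = s * np.exp(-1j * delta)
    m[j, i] = -s * np.exp(1j * delta)
    return m

# Arbitrary illustrative angles of the neutrino sector and of U_e
th12, th13, th23, delta = 0.58, 0.20, 0.70, 1.2
th_e = 0.30

U_nu = R(1, 2, th23) @ R(0, 2, th13, delta) @ R(0, 1, th12)
U = R(0, 1, th_e).conj().T @ U_nu

lhs = abs(U[1, 2] / U[2, 2])   # |tan(theta23)| of the physical mixing matrix
rhs = abs(np.cos(th_e) * np.tan(th23)
          + np.sin(th_e) * np.tan(th13) / np.cos(th23) * np.exp(-1j * delta))
print(lhs, rhs)  # the two expressions coincide
```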
Therefore, in general, the atmospheric mixing angle is undetermined in the limit of exact symmetry. Only when small breaking parameters are included in the mass matrices is it possible to recover $\theta_{23}=\pi/4$. This is achieved if the breaking terms have suitable orientations in flavour space, which is connected to the VEV (mis)alignment of the flavons: if the breaking terms are produced by a spontaneous symmetry breaking, in general two independent sectors of flavons are needed, one of which communicates the breaking to the charged fermions and the other to the neutrinos. It is worth underlining that the VEV (mis)alignment of the flavons is a highly non-trivial problem to solve, which can put severe constraints on the choice of the group representations and on the minimal number of new degrees of freedom. In the following chapters, we will face this problem, providing for each model a suitable explanation of the specific VEV (mis)alignment of all the flavons.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{Flavour Models with the Tribimaximal Mixing}
\label{Sec:FlavourModelsTBM}
\setcounter{equation}{0}
\setcounter{footnote}{3}
In this chapter we enter into the details of three flavour models which share the prediction of a lepton mixing matrix of the tribimaximal (TB) form. As already pointed out in the previous sections, the tribimaximal pattern is a very good approximation of the measured mixings and for this reason it represents an attractive starting point to describe leptons.
There is a model based on the symmetry group $A_4$ \cite{AF_Extra,AF_Modular,AFL_Orbifold} which is extremely appealing, thanks to its simplicity and predictivity. $A_4$ is the group of the even permutations of four objects; it has twelve elements and four irreducible representations, namely three singlets, $\bf1$, $\bf1'$ and $\bf1''$, and one triplet $\bf3$. This model succeeds in deriving the tribimaximal mixing and presents a prediction for the neutrinoless-double-beta decay parameter $|m_{ee}|$ as a function of the lightest neutrino mass: considering the Weinberg operator as the origin of the neutrino masses, only the normally hierarchical spectrum is admitted and the prediction reads
\begin{equation}
|m_{ee}|^2=\dfrac{1}{9}\left(9 m_1^2+5 \Delta m^2_{sol}- \Delta m^2_{atm}\right)\;.
\end{equation}
These results follow from the assumption that the $A_4$ symmetry is realised at a very high energy scale $\Lambda_f$ and that leptons transform in a non-trivial way under $A_4$. The symmetry is then spontaneously broken by a set of scalar multiplets $\Phi$, the flavons, whose vacuum expectation values (VEVs) acquire a specific alignment. As a consequence, the tribimaximal mixing is corrected by higher-order terms of order $\mean{\Phi}/\Lambda<1$, and the reactor angle is no longer vanishing but becomes proportional to $\mean{\Phi}/\Lambda$.
The drawback of this model is the difficulty of correctly describing the quark sector. First of all, the quark mixing matrix is completely different from its lepton counterpart: the former shows small angles while the latter presents two large angles. As a result, while the lepton mixing matrix can be fairly well reproduced through the $A_4$ flavour symmetry, the quark mixings seem to be better described by some continuous symmetries, like $U(2)$ \cite{SymmU2}. Indeed, depending on the left- and right-handed quark representation assignments, the $A_4$ flavour symmetry tends to predict either no mixing at all in the quark sector, $V_{CKM}=\mathbb{1}$, or too large mixing angles. On the other hand, the results obtained by the $U(2)$-based models suggest that the use of doublet representations in the quark sector should help in describing the quark mixing. However, this possibility is not available in the $A_4$-based models, since $A_4$ has no doublet representations. The solutions which have been proposed consist either in adding several $Z_n$ symmetries \cite{BKM_A4Zn}, in order to suppress the unwanted terms, or in adopting a larger group, which reproduces the lepton sector in a similar way to the $A_4$-based model and possesses some doublet representations useful to describe quarks. We followed this second strategy, studying two discrete symmetries: $T'$ \cite{FHLM_Tp}, the double covering of the tetrahedral group, and $S_4$ \cite{BMM_SS,BMM_S4}, the group of permutations of four objects. Both of these groups have $24$ elements, but they differ in the type of representations: $T'$ contains exactly the same representations as $A_4$, i.e. three singlets, $\bf1$, $\bf1'$ and $\bf1''$, and one triplet $\bf3$, plus three additional doublets, $\bf2$, $\bf2'$ and $\bf2''$; $S_4$ has only five representations, i.e. two singlets, $\bf1$ and $\bf1'$, one doublet $\bf2$ and two triplets, $\bf3$ and $\bf3'$.
As we will see in the next sections, the lepton sector of the $T'$ model is described in exactly the same way as in the $A_4$ model, while the doublets account for quark masses and mixings: to recover a realistic description, in particular the correct order of magnitude of the ratio $m_u/m_c$, a moderate fine-tuning of order $\lambda$ is however necessary. Apart from the predictions in the lepton sector, two relations hold in the quark sector:
\begin{equation}
\sqrt{\dfrac{m_d}{m_s}}=\left\vert V_{us}\right\vert+O(\lambda^2)\;,\qquad\qquad
\sqrt{\dfrac{m_d}{m_s}}=\left\vert\dfrac{V_{td}}{V_{ts}}\right\vert+O(\lambda^2)\;.
\end{equation}
These relations can be confronted with the experimental data: from \cite{PDG08} we have $\sqrt{m_d/m_s}=0.213\div 0.243$, $\vert V_{us}\vert=0.2257\pm0.0010$ and $\vert V_{td}/V_{ts}\vert=0.209\pm0.001\pm0.006$. Unfortunately, the theoretical errors affecting the predictions, dominated respectively by the unknown $O(\lambda^2)$ term in $V_{us}$ and by the unknown $O(\lambda^4)$ term in $V_{td}$, are of order $20\%$. For this reason, and because of the large uncertainty on the ratio $m_d/m_s$, it is not possible to turn these predictions into precise tests of the model.
The presence of the doublet representations of $S_4$ introduces new features in the lepton sector: it is possible to obtain the same neutrino mass matrix as in the $A_4$ model or, alternatively, a phenomenologically distinct mass texture. We investigate this second possibility and find small, but non-negligible, regions of the parameter space in which the two models differ. Apart from the lepton mixing angles, a prediction for $|m_{ee}|$ is present in this model as well:
\beq\ba{rcl}
\text{NH}\qquad\quad |m_{ee}| &=& \dfrac{1}{3}\sqrt{3m_1^2+2\Delta m^2_{atm}-\Delta m^2_{sol}}\;,\\[3mm]
\text{IH}\qquad\quad |m_{ee}| &=& \dfrac{1}{3}\sqrt{3m_3^2+\Delta m^2_{atm}-2\Delta m^2_{sol}}\;.
\ea\eeq
In the quark sector the model explains well the observed mass hierarchies, but a moderate fine-tuning is necessary to recover the Cabibbo angle: the $(12)$ entry of the CKM matrix is the difference of two complex terms with absolute values of order $\lambda^2$, and therefore we require a definite relation between the phases of the two terms in order to obtain an accidental enhancement of order $1/\lambda$.
In discussing all these models we do not consider the renormalisation group (RG) running, postponing a detailed analysis to section \ref{Sec:Running}: we only anticipate that RG corrections have a minor impact on the predictions and on the observables of the models, apart from the case of an inverse hierarchical spectrum and only for particular relations among the Majorana phases.
\mathversion{bold}
\section{$A_4$-Based Model}
\label{Sec:AFTBM}
\setcounter{footnote}{3}
\mathversion{normal}
We recall here the main features of the Altarelli-Feruglio (AF) model \cite{AF_Extra,AF_Modular,AFL_Orbifold}, which is based on the flavour group $G_f=A_4\times Z_3\times U(1)_{FN}$: the spontaneous breaking of $A_4$ is responsible for the tribimaximal mixing; the cyclic symmetry $Z_3$ prevents the appearance of dangerous couplings and helps to keep the charged lepton sector separated from the neutrino one; the $U(1)_{FN}$ provides a natural hierarchy among the charged lepton masses.
$A_4$ is the group of the even permutations of $4$ objects, isomorphic to the group of discrete rotations in three-dimensional space that leave a regular tetrahedron invariant. It is generated by two elements $S$ and $T$ obeying the relations \cite{GroupRepresentations}:
\begin{equation}
S^2=(ST)^3=T^3=1\;.
\end{equation}
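These relations can be checked explicitly in the three-dimensional representation. The sketch below assumes the common basis in which $T$ is diagonal (the explicit generators listed in appendix \ref{AppA:A4} may differ from these by a change of basis):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # primitive cube root of unity

# Triplet-representation generators in the basis where T is diagonal
S = np.array([[-1.0,  2.0,  2.0],
              [ 2.0, -1.0,  2.0],
              [ 2.0,  2.0, -1.0]]) / 3.0
T = np.diag([1.0, w, w**2])

I = np.eye(3)
assert np.allclose(S @ S, I)                             # S^2 = 1
assert np.allclose(np.linalg.matrix_power(T, 3), I)      # T^3 = 1
assert np.allclose(np.linalg.matrix_power(S @ T, 3), I)  # (ST)^3 = 1
```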
It has three independent one-dimensional representations, $\bf1$, $\bf1'$ and $\bf1''$ and one three-dimensional representation $\bf3$. We present a set of generators $S$ and $T$ for the various representations, and the relevant multiplication rules in appendix \ref{AppA:A4}. The group $A_4$ has two obvious subgroups: $G_S$, which is a reflection subgroup generated by $S$, and $G_T$, which is the group generated by $T$, isomorphic to $Z_3$. These subgroups are of interest for us because $G_S$ and $G_T$ are the relevant low-energy symmetries of the neutrino and the charged-lepton sectors at the leading order, respectively. The tribimaximal mixing is then a direct consequence of this special symmetry breaking pattern, which is achieved via the vacuum misalignment of triplet scalar fields. If $\Phi=(\Phi_1,\Phi_2,\Phi_3)$ denotes the generic scalar triplet, the VEV
\begin{equation}
\mean{\Phi}\propto (1,1,1)
\end{equation}
breaks $A_4$ down to $G_S$, while
\begin{equation}
\mean{\Phi}\propto (1,0,0)
\end{equation}
breaks $A_4$ down to $G_T$. The flavour symmetry breaking sector of the model includes the scalar fields $\varphi_T$, $\varphi_S$, $\xi$ and $\theta$. In table \ref{table:AFtransformations}, we can see the fermion and the scalar content of the model and their transformation properties under $G_f$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c||cccc||ccccc|}
\hline
&&&&&&&&&\\[-4mm]
& $\ell$ & $e^c$ & $\mu^c$ & $\tau^c$ &$H$ & $\theta$ & $\varphi_T$ & $\varphi_S$ & $\xi$ \\[2mm]
\hline
&&&&&&&&&\\[-4mm]
$A_4$ & $\bf3$ & $\bf1$ & $\bf1''$ & $\bf1'$ & $\bf1$ & $\bf1$ & $\bf3$ & $\bf3$ & $\bf1$ \\[2mm]
$Z_3$ & $\omega$ & $\omega^2$ & $\omega^2$ & $\omega^2$ & 1 & 1 & 1 & $\omega$ & $\omega$ \\[2mm]
$U(1)_{FN}$ & 0 & 2 & 1 & 0 & 0 & -1 & 0 & 0 & 0 \\[2mm]
\hline
\end{tabular}
\end{center}
\caption{\it The transformation properties of the fields under $A_4$, $Z_3$ and $U(1)_{FN}$.}
\label{table:AFtransformations}
\end{table}
As anticipated above, the specific breaking pattern of the symmetry which leads to the tribimaximal scheme and to hierarchical masses for leptons requires that $\xi$ and $\theta$ develop non-vanishing VEVs and that the following specific vacuum misalignment of the triplets occurs:
\begin{equation}
\langle\varphi_T\rangle=(v_T,\,0,\,0)\;,\qquad
\langle\varphi_S\rangle=(v_S,\,v_S,\,v_S)\;.
\end{equation}
A natural explanation of this misalignment has been shown in \cite{AF_Extra,AF_Modular}. These VEVs can be very large, much larger than the electroweak scale. From the analysis in \cite{AF_Extra,AF_Modular}, it is reasonable to choose:
\begin{equation}
\dfrac{VEV}{\Lambda_f}\approx \lambda^2\;,
\label{AFTBM:vevratio}
\end{equation}
where VEV stands for the generic non-vanishing VEV of the flavons, $\Lambda_f$ for the cutoff of the theory and $\lambda$ for the Cabibbo angle.
Since the ratio in eq. (\ref{AFTBM:vevratio}) represents the typical expansion parameter when including higher dimensional operators, it keeps all the leading order results stable, up to corrections of relative order $\lambda^2$.
A very useful parametrisation of $VEV/\Lambda_f$ is the following:
\begin{equation}
\dfrac{\langle\varphi_T\rangle}{\Lambda_f}= (u,\,0,\,0)\;,\quad
\dfrac{\langle\varphi_S\rangle}{\Lambda_f}=c_b(u,\,u,\,u)\;,\quad
\dfrac{\mean{\xi}}{\Lambda_f}=c_a\,u\;,\quad
\dfrac{\mean{\theta}}{\Lambda_f}=t\;,
\label{AFTBM:vevs}
\end{equation}
where $c_{a,b}$ are complex numbers with absolute value of order one, while $u$ and $t$ are the small symmetry breaking parameters of the theory (they can be taken real through field redefinitions).
Once the transformations of all the fields under $G_f$ have been defined, it is possible to write down the Yukawa interactions: at the leading order they read
\begin{eqnarray}
\mscr{L}_e&=&\dfrac{y_e}{\Lambda_f^3} \theta^2e^c H^\dagger \left(\varphi_T \ell\right)
+\dfrac{y_\mu}{\Lambda_f^2} \theta\mu^c H^\dagger \left(\varphi_T \ell\right)'
+\dfrac{y_\tau}{\Lambda_f} \tau^c H^\dagger \left(\varphi_T \ell\right)''+h.c.
\label{AFTBM:Ll}\\
\nn\\
{\mscr{L}}_\nu&=& \dfrac{x_a}{\Lambda_f\Lambda_L} \xi ({\tilde H}^\dagger \ell {\tilde H}^\dagger \ell) + \dfrac{x_b}{\Lambda_f\Lambda_L} (\varphi_S {\tilde H}^\dagger \ell {\tilde H}^\dagger \ell)+h.c.\;,
\label{AFTBM:Lnu}
\end{eqnarray}
where $y_i$ and $x_i$ are complex numbers with absolute value of order one. The contractions under $SU(2)_L$ are understood and the notation $(\ldots)$, $(\ldots)'$ and $(\ldots)''$ refers to the contractions in $\bf1$, $\bf1'$ and $\bf1''$, respectively. We distinguish two different energy scales: $\Lambda_f$ refers to the energy scale of the flavour dynamics while $\Lambda_L$ to the scale at which the lepton number is violated. We assume here that $\Lambda_f\sim\Lambda_L$.
When the flavons develop VEVs in agreement with eq. (\ref{AFTBM:vevs}) and after the electroweak symmetry breaking, the leading order mass matrix of the charged leptons, in the basis of canonical kinetic terms\footnote{It has been shown in a series of papers \cite{Kahler} that the corrections from the transformations needed to move to the basis of canonical kinetic terms appear at most as NLO deviations.}, takes the following form:
\begin{equation}
M_e=\left(
\begin{array}{ccc}
y_e t^2 & 0& 0\\
0& y_\mu t& 0\\
0& 0& y_\tau
\end{array}
\right)\dfrac{v\, u}{\sqrt2} \;.
\label{AFTBM:ChargedMass}
\end{equation}
Once in the physical basis, the entries on the diagonal are identified with the masses of the charged leptons and the relative hierarchy among them is given by the parameter $t$: when
\begin{equation}
t\approx 0.05
\end{equation}
then the mass hierarchy is in agreement with the experimental measurements. As we will see in the following sections, the model admits a well defined range for the parameter $u$, which can approximately be set to
\begin{equation}
0.003\lesssim u \lesssim 0.05\;.
\label{AFTBM:RangeuSM}
\end{equation}
In the neutrino sector, the leading order Majorana mass matrix is given by
\begin{equation}
m_\nu=\left(
\begin{array}{ccc}
a+2 b/3 & -b/3 & -b/3 \\
-b/3 & 2b/3 & a-b/3 \\
-b/3 & a-b/3 & 2 b/3 \\
\end{array}
\right)\dfrac{v^2}{\Lambda_L}\;,
\label{AFTBM:LONuMasses}
\end{equation}
where $a\equiv x_a\,c_a\,u$ and $b\equiv x_b\,c_b\,u$. At this order the mass matrix is diagonalised by
\begin{equation}
U_\nu^T m_\nu U_\nu =\dfrac{v^2}{\Lambda_L}\diag(|a+b|,\,|a|,\,|-a+b|)\;,
\end{equation}
where $U_\nu=U_{TB}P$. The matrix $U_{TB}$ is the tribimaximal transformation of eq. (\ref{FS:TB:TBmixing}), while $P$ is the matrix of the Majorana phases,
\begin{equation}
P=\diag(e^{i\alpha_1/2},\,e^{i\alpha_2/2},\,e^{i\alpha_3/2})\;,
\label{AFTBM:Pmatrix}
\end{equation}
with $\alpha_1=-\arg(a+b)$, $\alpha_2=-\arg(a)$ and $\alpha_3=-\arg(-a+b)$.
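As a quick numerical cross-check of this diagonalisation (an illustrative sketch, with the overall factor $v^2/\Lambda_L$ set to one and the standard form of $U_{TB}$ assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)  # random O(1) complex parameters

# Leading order neutrino mass matrix (in units of v^2/Lambda_L)
m_nu = np.array([[a + 2*b/3, -b/3,     -b/3    ],
                 [-b/3,       2*b/3,    a - b/3],
                 [-b/3,       a - b/3,  2*b/3  ]])

# Tribimaximal mixing matrix
U_TB = np.array([[ 2/np.sqrt(6), 1/np.sqrt(3),  0           ],
                 [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
                 [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)]])

d = U_TB.T @ m_nu @ U_TB
assert np.allclose(d, np.diag([a + b, a, -a + b]))   # complex eigenvalues a+b, a, -a+b
```

The Majorana phases in $P$ then simply remove the phases of $a+b$, $a$ and $-a+b$.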
It is possible to generalise this description to the supersymmetric context. In this case $G_f$ contains an additional factor, a continuous $R$-symmetry $U(1)_R$, which contains the usual $R$-parity as a subgroup and simplifies the construction of the scalar potential: under this symmetry, the matter superfields transform with $U(1)_R=1$, while the scalar ones are neutral.
It is easy to extend eqs. (\ref{AFTBM:Ll}, \ref{AFTBM:Lnu}) to the supersymmetric case: two Higgs doublets $H_{(d,u)}$, invariant under $A_4$, substitute $H$ and $\widetilde H$, respectively; the Lagrangian $\mscr{L}_e$ is identified with the leading order charged lepton superpotential $w_e$ and $\mscr{L}_\nu$ with the leading order neutrino superpotential $w_\nu$. Moreover, it is necessary to introduce a further flavon $\tilde{\xi}$, which transforms exactly as $\xi$ but does not acquire any VEV. As a result it does not have any impact on the previous discussion and its relevance is only linked to the way in which the VEV misalignment is recovered (see \cite{AF_Modular} for further details).
While $t$ is still equal to $0.05$ in order to have a correct charged lepton mass hierarchy, the range for $u$ slightly changes:
\begin{equation}
0.007\lesssim u \lesssim 0.05\;.
\label{AFTBM:RangeuMSSM}
\end{equation}
\subsection{The Neutrino Mass Spectrum}
\label{Sec:AFTBM:NuSpectrum}
\setcounter{footnote}{3}
We now summarise the results for the neutrino mass spectrum. Notice that the following analysis is valid in the Standard Model as well as in its supersymmetric extension, by substituting $v$ with $v_u$ when necessary.
The neutrino masses are given by
\begin{equation}
m_1=|a+b|\dfrac{v^2}{\Lambda_L}\;,\qquad\qquad m_2=|a|\dfrac{v^2}{\Lambda_L}\;,\qquad\qquad m_3=|-a+b|\dfrac{v^2}{\Lambda_L}\;.
\end{equation}
They can be expressed in terms of only three independent parameters: a possible choice that simplifies the analysis consists in taking $|a|$, $\rho$ and $\Delta$, where $\rho$ and $\Delta$ are defined as
\begin{equation}
\dfrac{b}{a}=\rho\,e^{i\Delta}\;,
\label{AFTBM:RhoDeltaDef}
\end{equation}
with $\Delta$ in the range $[0,\,2\pi]$.
From the experimental side only the squared mass differences have been measured; as a result the spectrum is not fully determined and $\Delta$ is still a free parameter: we can, however, bound $\Delta$ by requiring that $|\cos\Delta|\leq1$. Before proceeding it is useful to express $\rho$ and $\cos\Delta$ as functions of some physical observables. To this purpose we calculate the following mass ratios: for both the hierarchies we have
\begin{equation}
\dfrac{m_{1(3)}^2}{m_2^2}=1\pm2 \rho\cos\Delta+\rho^2\;.
\end{equation}
It is then easy to express $\rho$ and $\cos\Delta$ as a function of the neutrino masses:
\begin{equation}
\rho=\sqrt{\dfrac{m_1^2-2m_2^2+m_3^2}{2m_2^2}}\;,\qquad\qquad
\cos\Delta=\dfrac{m_1^2-m_3^2}{2\sqrt2m_2\sqrt{m_1^2-2m_2^2+m_3^2}}\;.
\label{AFTBM:RoDelta}
\end{equation}
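These inversion formulas can be verified by generating the masses from a test point $(\rho,\Delta)$ and recovering the input (the overall mass scale drops out of the ratios):

```python
import numpy as np

rho_in, Delta_in = 1.7, 2.1                 # arbitrary test point
a = 1.0
b = rho_in * np.exp(1j * Delta_in)          # b/a = rho e^{i Delta}

m1, m2, m3 = abs(a + b), abs(a), abs(-a + b)

rho  = np.sqrt((m1**2 - 2*m2**2 + m3**2) / (2 * m2**2))
cosD = (m1**2 - m3**2) / (2 * np.sqrt(2) * m2 * np.sqrt(m1**2 - 2*m2**2 + m3**2))

assert np.isclose(rho, rho_in)
assert np.isclose(cosD, np.cos(Delta_in))
```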
Using now the definitions of the mass squared differences,
\begin{equation}
\Delta m^2_{sol} \equiv m_2^2-m_1^2\;,\qquad\qquad
\Delta m^2_{atm} \equiv |m_3^2-m_1^2(m_2^2)|\;,
\label{AFTBM:DeltaMassesNu}
\end{equation}
it is possible to express $\cos\Delta$ as a function of only the lightest neutrino mass. Imposing the constraint $|\cos\Delta|\leq1$, it turns out that only the normal hierarchy is allowed and, taking the most conservative case (the $3\sigma$ upper value for $\Delta m^2_\mathrm{sol}$ and the $3\sigma$ lower value for $\Delta m^2_\mathrm{atm}$ as in \cite{Fogli:Indication}), we have
\beq\ba{rcl}
m_1>14.1\;\;\mathrm{meV}\;.
\label{AFTBM:Boundm1}
\ea\eeq
This value corresponds to $\cos\Delta=-1$ and it is the value for which the spectrum presents the strongest hierarchy: the values of the masses of the other two neutrinos are given by
\begin{equation}
m_2=16.7\;\;\mathrm{meV}\qquad\text{and}\qquad m_3=47.5\;\;\mathrm{meV}\;.
\end{equation}
Furthermore the sum of the neutrino masses in this case is about $78.3$ meV. When $\cos\Delta$ approaches zero, the neutrino spectrum becomes quasi degenerate.
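As a consistency check, inserting the quoted spectrum back into eq. (\ref{AFTBM:RoDelta}) indeed reproduces $\cos\Delta\simeq-1$ and the quoted sum (a quick numerical sketch, masses in meV):

```python
import numpy as np

m1, m2, m3 = 14.1, 16.7, 47.5   # quoted spectrum at the bound, in meV

cosD = (m1**2 - m3**2) / (2 * np.sqrt(2) * m2 * np.sqrt(m1**2 - 2*m2**2 + m3**2))

assert abs(cosD + 1) < 1e-2                       # the bound corresponds to cos(Delta) = -1
assert np.isclose(m1 + m2 + m3, 78.3, atol=0.05)  # sum of the neutrino masses
```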
Not only the neutrino masses, but also the phases can be written as functions of the lightest neutrino mass: since in the tribimaximal mixing the reactor angle is vanishing, the Dirac CP phase is undetermined at the leading order; on the contrary the Majorana phases are well defined and they can be expressed through $\rho$ and $\Delta$. Since we are interested in physical observables, we report only the phase differences $\alpha_{ij}\equiv(\alpha_i-\alpha_j)/2$, expressed in terms of $\rho$ and $\Delta$ in order to keep the expressions compact:
\begin{equation}
\sin(2\alpha_{13})=\dfrac{2\rho\sin\Delta}{\sqrt{(\rho^2-1)^2+4\rho^2\sin^2\Delta}}\;,\qquad\quad
\sin(2\alpha_{23})=\dfrac{\rho\sin\Delta}{\sqrt{1-2\rho\cos\Delta+\rho^2}}\;.
\label{AFTBM:Majorana1}
\end{equation}
It will be useful to report also $\sin(2\alpha_{12})$:
\begin{equation}
\sin(2\alpha_{12})=-\dfrac{\rho\sin\Delta}{\sqrt{1+2\rho\cos\Delta+\rho^2}}\;.
\label{AFTBM:Majorana2}
\end{equation}
These results are valid only at the leading order and some deviations are expected with the introduction of the higher-order terms, as illustrated in the following section. The corrections are expected to be of relative order $u$, whose allowed range is defined in eqs. (\ref{AFTBM:RangeuSM}, \ref{AFTBM:RangeuMSSM}). However, close to $\cos\Delta=-1$, where the bounds are saturated, the corrections to both the numerator and the denominator of eq. (\ref{AFTBM:RoDelta}) remain of relative order $u$ and as a result the lower bound on $m_1$ of eq. (\ref{AFTBM:Boundm1}) is not significantly affected. Major effects could appear when the spectrum is quasi degenerate, \mbox{$\cos\Delta\approx0$}.
\begin{figure}[ht!]
\centering
\includegraphics[width=7.8cm]{AF0Nu2Be.pdf}
\caption{\it $|m_{ee}|$ as a function of the lightest neutrino mass $m_1$ for the NH. The light coloured regions show the allowed range for the best-fit values of the parameters from \cite{Fogli:Indication}. The dashed lines refer to the allowed region when the $3\sigma$ errors are considered for the mixing angles as well as for the mass squared differences. The dark red area refers to the model in consideration when the $3\sigma$-error ranges have been implemented for $\Delta m^2_{sol}$ and $\Delta m^2_{atm}$. The black continuous lines represent future experimental sensitivities as described in the text.}
\label{fig:AF_0nu2beta}
\end{figure}
\subsubsection{Neutrinoless-Double-Beta Decay}
Still working in the leading order approximation, we can study the value of $|m_{ee}|$, the parameter which characterises the violation of the total lepton number in the $0\nu2\beta$-decay. By using eqs. (\ref{AFTBM:RoDelta}, \ref{AFTBM:DeltaMassesNu}), $|m_{ee}|$ can be written in terms of the mass squared differences and of the lightest neutrino mass:
\begin{equation}
|m_{ee}|^2=\dfrac{1}{9}\left(9 m_1^2+5 \Delta m^2_{sol}- \Delta m^2_{atm}\right)\;.
\end{equation}
This expression constitutes a prediction of the model. We display the dependence of $|m_{ee}|$ on the lightest neutrino mass, but similar relations can be constructed with $m_2$ or $m_3$ \cite{AF_Modular,AF_Extra}.
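This expression can be verified against the direct definition $|m_{ee}|=|(m_\nu)_{11}|$; a sketch in units of $v^2/\Lambda_L$, with $a$ taken real and $\cos\Delta<0$ so that the NH definition $\Delta m^2_{atm}=m_3^2-m_1^2$ applies:

```python
import numpy as np

rng = np.random.default_rng(1)
a = 1.0
b = rng.uniform(0.5, 2.0) * np.exp(1j * rng.uniform(np.pi/2, 3*np.pi/2))  # cos(Delta) < 0

m1, m2, m3 = abs(a + b), abs(a), abs(-a + b)
dm2_sol, dm2_atm = m2**2 - m1**2, m3**2 - m1**2   # NH definitions

mee_direct  = abs(a + 2*b/3)                      # |(m_nu)_{11}|, units of v^2/Lambda_L
mee_formula = np.sqrt((9*m1**2 + 5*dm2_sol - dm2_atm) / 9)

assert np.isclose(mee_direct, mee_formula)
```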
For $\cos\Delta=-1$ and taking the best-fit values for $\Delta m^2_{sol}$ and $\Delta m^2_{atm}$, one finds $|m_{ee}|=3.8$ meV, which is at the lower edge of the range allowed for the NH considering the best-fit values, as shown in figure \ref{fig:0nu2betaGeneral}. This value is close to, but unfortunately just below, the sensitivities of the future experiments, such as CUORE ($15$ meV) and Majorana ($20$ meV). In figure \ref{fig:AF_0nu2beta}, we show $|m_{ee}|$ as a function of the lightest neutrino mass $m_1$, the present upper bound from the Heidelberg-Moscow collaboration and future experimental sensitivities from GERDA, Majorana and CUORE. In the plot, the $3\sigma$-error ranges have been implemented for $\Delta m^2_{sol}$ and $\Delta m^2_{atm}$. From the figure we can conclude that the model at the leading order is in agreement with the experimental data within the $1\sigma$ level for all the allowed range of $m_1$, except when $m_1$ is close to its minimum, where the model slightly exceeds the $1\sigma$ edge.
When considering the introduction of the NLO terms, we expect that the dark red area in figure \ref{fig:AF_0nu2beta} will enlarge, but the deviations are still expected to be at most of $\cO(u)$ level.
\subsection{The Next-To-Leading Order Contributions}
\label{Sec:AFTBM:NLO}
\setcounter{footnote}{3}
Another important implication of the spontaneously broken flavour symmetry is that the leading order predictions are always subjected to corrections due to higher-dimensional operators. The latter are suppressed by additional powers of the cutoff $\Lambda_f$ and can be organised in a suitable double power expansion in $u$ and $t$.
At the NLO there are many additional terms which can be added to the Lagrangian. Since $\varphi_T$ is the only scalar field which
is neutral under the Abelian part of the flavour symmetry, all the NLO terms are obtained from the terms already present in the leading order Lagrangian
by an additional insertion of $\varphi_T /\Lambda_f$.
In addition to these terms, there are also corrections to the leading vacuum alignment in eq. (\ref{AFTBM:vevs}):
\begin{equation}
\begin{array}{ccl}
\dfrac{\langle\varphi_T\rangle}{\Lambda_f}&=&(u,0,0)+(c_1 u^2,\,c_2 u^2,\,c_3 u^2)\\[3mm]
\dfrac{\langle\varphi_S\rangle}{\Lambda_f}&=& c_b(u,u,u)+(c_4 u^2,\,c_5 u^2,\,c_6 u^2)\\[3mm]
\dfrac{\langle\xi\rangle}{\Lambda_f}&=&c_a u+c_7 u^2\;,
\end{array}
\label{AFTBM:vevsplus}
\end{equation}
where $c_i$ are complex numbers with absolute value of order one. Note that in the supersymmetric version, the model predicts $c_2=c_3$.
Here we will not perform a detailed analysis of the NLO operators and of the origin of eq. (\ref{AFTBM:vevsplus}) (see \cite{AF_Extra,AF_Modular} for a detailed study). As a result of these NLO contributions, the quantities generally deviate from their initial values by terms of relative order $u$:
\begin{equation}
Y_e + \delta Y_e\;, \qquad m_\nu + \delta m_\nu \;.
\end{equation}
These corrections affect also the mixing angles and it is not difficult to see that the deviations from the tribimaximal values are also of relative order $u$ \cite{AF_Extra,AF_Modular}:
\begin{equation}
\sin ^2 \theta_{23} = \dfrac12 + \mathcal{O}(u), \qquad \sin^2 \theta_{12}=\dfrac 13 + \mathcal{O}(u),
\qquad \sin \theta_{13} = \mathcal{O}(u).
\end{equation}
Since the solar mixing angle is, at present, the most precisely known, we require that its value remains inside the $3\sigma$ range \cite{NeutrinoData}. This requirement results in an upper bound on $u$ of about $0.05$. On the other hand, from eq. (\ref{AFTBM:ChargedMass}), we have the following relations:
\begin{equation}
\begin{array}{rll}
u&=\, \dfrac{1}{|y_\tau|} \dfrac{\sqrt{2} m_\tau}{v}\approx 0.01 \dfrac{1}{|y_\tau|}&\qquad\text{in the SM}\\[5mm]
u&\simeq\,\dfrac{\tan\beta}{|y_\tau|} \dfrac{\sqrt{2} m_\tau}{v} \approx 0.01 \dfrac{\tan\beta}{|y_\tau|}&\qquad\text{in the MSSM}
\end{array}
\label{AFTBM:tanb&u&yt}
\end{equation}
where for the $\tau$ lepton we have used its pole mass $m_\tau=(1776.84 \pm 0.17) \;\rm{MeV}$ \cite{PDG08}. Requiring $|y_\tau|<3$ we find a lower limit for $u$ of about $0.003$ in the Standard Model case; in the supersymmetric context, the same requirement provides a lower bound close to the upper bound $0.05$ for $\tan\beta=15$, whereas for $\tan\beta=2$ it is $u>0.007$. From now on, we will choose the maximal range of $u$ as
\begin{equation}
0.003\lesssim u \lesssim 0.05
\end{equation}
for the Standard Model context, while for the supersymmetric case we take
\begin{equation}
0.007\lesssim u \lesssim 0.05\;,
\end{equation}
which shrinks when $\tan\beta$ is increased from 2 to 15.
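The numerical estimates above follow from simple arithmetic; a minimal check, using $m_\tau\simeq1.777$ GeV and $v\simeq246$ GeV:

```python
import numpy as np

m_tau, v = 1.77684, 246.0                # GeV

u_SM = np.sqrt(2) * m_tau / v            # SM relation with |y_tau| = 1
assert 0.010 < u_SM < 0.011              # u ~ 0.01 / |y_tau|

u_min = u_SM / 3.0                       # lower limit from requiring |y_tau| < 3
assert np.isclose(u_min, 0.003, atol=5e-4)
```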
The NLO terms affect also the previous results for the neutrino phases. All the new parameters which perturb the leading order results are complex and therefore they introduce corrections to the phases of the PMNS matrix. Due to the large number of such parameters, we expect non-negligible deviations from the leading order values.
\subsection{Type I See-Saw Realisation}
\label{Sec:AFTBM:SeeSaw}
\setcounter{footnote}{3}
It is possible to easily modify the previous model to accommodate the (type I) See-Saw mechanism. In this section we perform such an extension and analyse the differences with respect to the effective model. Notice that this part is written considering an underlying Standard Model context, but the extension to the supersymmetric one is trivial, following the same strategy as in the effective model.
We introduce conjugate right-handed neutrino fields $\nu^c$ transforming as a triplet of $A_4$ and we modify the transformation properties of some other fields according to table \ref{table:AF+SeeSawtransformations}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c||c|ccc|}
\hline
&&&&\\[-4mm]
& $\nu^c$ & $\varphi_S$ & $\xi$ & $\tilde{\xi}$ \\[2mm]
\hline
&&&&\\[-4mm]
$A_4$ & $\bf3$ & $\bf3$ & $\bf1$ & $\bf1$ \\[2mm]
$Z_3$ & $\omega^2$ & $\omega^2$ & $\omega^2$ & $\omega^2$ \\[2mm]
$U(1)_{FN}$ & 0 & 0 & 0 & 0 \\[2mm]
\hline
\end{tabular}
\end{center}
\caption{\it The transformation properties of $\nu^c$, $\varphi_S$, $\xi$ and $\tilde{\xi}$ under $A_4\times Z_3\times U(1)_{FN}$. The rest of the fields still transform as in table \ref{table:AFtransformations}. Notice that $\tilde{\xi}$ is present only in the supersymmetric context.}
\label{table:AF+SeeSawtransformations}
\end{table}
The Lagrangian changes only in the neutrino sector and it is given by
\begin{equation}
\mscr{L}_\nu = y(\nu^c \widetilde H^\dag \ell)+x_a\xi(\nu^c\nu^c)+x_b(\varphi_S\nu^c\nu^c)+h.c.+\ldots\;,
\label{AFTBM:LnuSeeSaw}
\end{equation}
where dots stand for higher-order contributions.
The vacuum alignment of the flavons is exactly the one described in eqs. (\ref{AFTBM:vevs}, \ref{AFTBM:vevsplus}). When the flavons develop VEVs in agreement with eq. (\ref{AFTBM:vevs}) and after the electroweak symmetry breaking, the Dirac and the Majorana mass matrices, at the leading order, are given by
\begin{equation}
m_D = \dfrac{y\,v}{\sqrt2} \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}
\right)\;,\qquad
M_R = \left(
\begin{array}{ccc}
a+2 b/3 & -b/3 & -b/3 \\
-b/3 & 2b/3 & a-b/3 \\
-b/3 & a-b/3 & 2 b/3 \\
\end{array}
\right)\;,
\label{AFTBM:SSMassMatrices}
\end{equation}
where $a\equiv 2x_a c_a u$ and $b\equiv 2x_b c_b u$. The complex symmetric matrix $M_R$ is diagonalised by the transformation
\begin{equation}
\hat{M}_R=U_R^T M_R U_R\;,
\label{AFTBM:Mhat}
\end{equation}
where $\hat M_R$ is a diagonal matrix with real and positive entries, given by
\begin{equation}
\hat M_R\equiv \diag(M_1,\,M_2,\,M_3)=\diag(|a+b|,|a|,|-a+b|)\;,
\label{AFTBM:RHEigenvalues}
\end{equation}
while the unitary matrix $U_R$ can be written as $U_R=U_{TB}P$, where $P$ is the diagonal matrix of the Majorana phases already defined in eq. (\ref{AFTBM:Pmatrix}). After the electroweak symmetry breaking, the mass matrix for the light neutrinos is recovered from the well known type I See-Saw formula
\begin{equation}
m_\nu=-m_D^T M_R^{-1}m_D=-\dfrac{y^2\,v^2}{2}M_R^{-1}\;,
\end{equation}
where the last step follows from the relation $M_R^{-1} m_D=m_D M_R^{-1}$. From \eq{AFTBM:Mhat}, $U_R^\dag M_R^{-1} U_R^*=\diag(M_1^{-1},\,M_2^{-1},\,M_3^{-1})$ and as a result the light neutrino mass matrix can be diagonalised by
\begin{equation}
\hat{m}_\nu=U_\nu^Tm_\nu U_\nu\;,
\end{equation}
where $U_\nu=iU_R^*=iU_{TB}P^*$. The diagonal matrix $\hat{m}_\nu$ has real and positive entries written as
\begin{equation}
m_i=\dfrac{v^2}{2}\dfrac{y^2}{M_i}\;,
\label{AFTBM:LightMasses}
\end{equation}
which explicitly give the following values
\begin{equation}
m_1=\dfrac{v^2}{2}\dfrac{y^2}{|a+b|}\;,\qquad\qquad
m_2=\dfrac{v^2}{2}\dfrac{y^2}{|a|}\;,\qquad\qquad
m_3=\dfrac{v^2}{2}\dfrac{y^2}{|-a+b|}\;.
\end{equation}
In these expressions we have absorbed the possible phase of $y$ into the matrix $P$: this phase, however, is not observable and thus we could have assumed a positive $y$ from the beginning without loss of generality.
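The step $m_D^T M_R^{-1} m_D = (y^2 v^2/2)\, M_R^{-1}$ relies on $M_R^{-1}$ commuting with the permutation structure of $m_D$; a quick numerical check (overall factors set to one):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)

A = np.array([[1.0, 0.0, 0.0],           # structure of m_D, up to y v / sqrt(2)
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
M_R = np.array([[a + 2*b/3, -b/3,     -b/3    ],
                [-b/3,       2*b/3,    a - b/3],
                [-b/3,       a - b/3,  2*b/3  ]])

M_R_inv = np.linalg.inv(M_R)
# M_R is invariant under the 2-3 exchange encoded in A, hence so is its inverse
assert np.allclose(A.T @ M_R_inv @ A, M_R_inv)
```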
We can repeat the analysis presented in section \ref{Sec:AFTBM:NuSpectrum} to study the light neutrino spectrum in this case. Taking $|a|=M_2=v^2y^2/(2m_2)$, we find that both the orderings can be described and that the lightest neutrino masses span the following ranges: for the most conservative case,
\beq\ba{rcl}
\text{normal hierarchy:}\qquad&4.3\;\mathrm{meV}<m_1<6.2\;\mathrm{meV}&\\[3mm]
\text{inverse hierarchy:}\qquad&m_3>15.8\;\mathrm{meV\;.}&
\label{AFTBM:RangeMasses}
\ea\eeq
For the normal hierarchy, $m_1$ spans a narrow range of values, which corresponds to values of $\Delta$ close to zero. This completely determines the neutrino masses inside a very small range and represents a prediction of the model. On the other hand, for the inverse hierarchy, $m_3$ is bounded only from below and the minimum is achieved when $\Delta$ is close to $\pm\pi$. In figure \ref{fig:AFSeeSaw} we can read off the light neutrino spectrum and its dependence on the lightest neutrino mass.
\begin{figure}[ht!]
\centering
\includegraphics[width=6cm]{AFSSNuNH.pdf}\qquad
\includegraphics[width=6cm]{AFSSNuIH.pdf}
\includegraphics[width=6cm]{AFSSNuRHNH.pdf}\qquad
\includegraphics[width=6cm]{AFSSNuRHIH.pdf}
\caption{\it Plots of the light (above) and heavy (below) neutrino masses, as a function of the lightest left-handed neutrino mass. On the left the normal hierarchy and on the right the inverse hierarchy. The yellow areas refer to the allowed range for $m_{1(3)}$ as in eq. (\ref{AFTBM:RangeMasses}). The vertical black lines correspond to the future sensitivity of the KATRIN experiment.}
\label{fig:AFSeeSaw}
\end{figure}
From eq. (\ref{AFTBM:LightMasses}) it is possible to describe the leading order spectrum of the right-handed neutrinos as a function of a unique parameter, the lightest left-handed neutrino mass. In all the allowed range for $m_{1,3}$, the order of magnitude of the right-handed neutrino masses falls between $10^{14}$ GeV and $10^{15}$ GeV. In figure \ref{fig:AFSeeSaw} we show explicitly the right-handed neutrino masses for normal hierarchy and inverse hierarchy, on the left and on the right respectively. The ratios among the masses are well defined for the NH, thanks to the narrow allowed range for $m_1$: $M_1/M_3\sim11$ and $M_2/M_3\sim5$. In the case of the IH, the ratio $M_1/M_2$ is fixed at $1$ while $M_3/M_2$ varies from about $3$ to $1$, going from the lower bound of $m_3$ up to the KATRIN sensitivity.
The analysis done for the Majorana phases in eqs. (\ref{AFTBM:Majorana1}, \ref{AFTBM:Majorana2}) is still valid here. \\
It is interesting to comment also in this context about the results for the neutrinoless-double-beta decay. The parameter $|m_{ee}|$ can be written as
\beq\ba{rcl}
\mathrm{NH:}&&|m_{ee}|=\dfrac{1}{3}\sqrt{\dfrac{9 m_1^4+2\Delta m^2_{atm}\Delta m^2_{sol}+m_1^2(10\Delta m^2_{atm}+\Delta m^2_{sol})}{m_1^2+\Delta m^2_{atm}}}\\[5mm]
\mathrm{IH:}&&|m_{ee}|=\dfrac{1}{3m_3}\sqrt{9 m_3^4+m_3^2(8\Delta m^2_{atm}-\Delta m^2_{sol})+\Delta m^2_{atm}(-\Delta m^2_{atm}+\Delta m^2_{sol})}\;.\nn
\ea\eeq
In figure \ref{fig:AF_0nu2betaSS}, we show the behaviour of $|m_{ee}|$ as a function of the lightest neutrino mass for both the mass hierarchies: in red the NH and in blue the IH. The red profile for the NH case is restricted to a very narrow range of values and we can conclude that $|m_{ee}|$ remains just below the future experimental sensitivity. On the contrary, the blue line, which represents the IH case, remains well above the future experimental sensitivity, except in the most pessimistic situation when the lower bound is saturated. As a concluding comment, we can say that the Altarelli-Feruglio model predicts that, if the neutrino masses are explained by the type I See-Saw and the neutrino ordering is inverse, then a $0\nu2\beta$-decay signal will very likely be observed in the near future.
\begin{figure}[ht!]
\centering
\includegraphics[width=7.8cm]{AFSS0Nu2Be.pdf}
\caption{\it $|m_{ee}|$ as a function of the lightest neutrino mass $m_{1(3)}$ for the NH (IH). See figure \ref{fig:AF_0nu2beta} for the details of the plots.}
\label{fig:AF_0nu2betaSS}
\end{figure}
These results are valid only at the leading order and some deviations are expected with the introduction of the higher-order terms. The result of a direct computation shows that for the NH spectrum the corrections leave eq. (\ref{AFTBM:RangeMasses}) approximately unaffected; this is true for the IH case too, except when the neutrino masses reach values of about $0.1$ eV, for which the deviations become significant.
\subsection{Extension to Quarks}
\label{Sec:AFTBM:Quarks}
\setcounter{footnote}{3}
In this section we address the question of looking for a realistic description of quarks through the flavour group $A_4\times Z_3\times U(1)_{FN}(\times U(1)_R)$ in the context of the Standard Model (MSSM). An attractive possibility is to adopt for quarks the same representations under $A_4$ that have been used for leptons: the left-handed quark doublets $q$ transform as a triplet $\bf3$, while the right-handed quarks $(u^c,\,d^c)$, $(c^c,\,s^c)$ and $(t^c,\,b^c)$ transform as $\bf1$, $\bf1''$ and $\bf1'$, respectively. We can similarly extend to quarks the transformations of $Z_3$ (and $U(1)_R$) given for leptons. As a result the Lagrangian for the quark sector reads:
\begin{equation}
\begin{split}
\mscr{L}_q=&\phantom{+}\dfrac{y_d}{\Lambda_f}\, d^c H^\dagger \left(\varphi_T q\right)
+\dfrac{y_s}{\Lambda_f}\, s^c H^\dagger \left(\varphi_T q\right)'
+\dfrac{y_b}{\Lambda_f}\, b^c H^\dagger \left(\varphi_T q\right)''+\\
&+\dfrac{y_u}{\Lambda_f}\, u^c H^\dagger \left(\varphi_T q\right)
+\dfrac{y_c}{\Lambda_f}\, c^c H^\dagger \left(\varphi_T q\right)'
+\dfrac{y_t}{\Lambda_f}\, t^c H^\dagger \left(\varphi_T q\right)''+\text{h.c.}+\ldots
\end{split}
\end{equation}
where dots stand for higher-order terms. When the flavour and the electroweak symmetry breakings occur, this Lagrangian provides the mass matrices for the quarks: it is straightforward to verify that the mass matrices are diagonal, leading to a diagonal CKM mixing matrix, which represents a good first order approximation. In order to explain the mass hierarchies, a suitable charge assignment under $U(1)_{FN}$ can be implemented.
Looking at the Lagrangian we find a disadvantage of adopting for quarks the same representations used for leptons: the top Yukawa coupling does not arise at the renormalisable level. Furthermore, we expect the NLO corrections to introduce additional terms in the CKM matrix, switching on the mixing angles and in particular the Cabibbo angle. However, we have already seen that the NLO corrections are of relative order $u\approx \lambda^2$ with respect to the leading order values and therefore they are too small to describe the Cabibbo angle. On closer inspection, the NLO corrections do not come from higher-order operators in the Lagrangian, but from the new vacuum as described in eq. (\ref{AFTBM:vevsplus}). As a result the corrections are the same in the up and down sectors, apart from negligible differences, and therefore they almost exactly cancel in the CKM matrix.
The conclusion is that new symmetry breaking sources are needed in order to describe quarks with a description similar to the one used for leptons. Alternatively, we can try to enlarge the flavour symmetry group in such a way as to reproduce in the lepton sector results similar to those of the Altarelli-Feruglio model and to find a new method to correctly describe quarks. In the next two sections we follow this second approach, adopting as flavour symmetry the $T'$ and the $S_4$ discrete groups.
\mathversion{bold}
\section{$T'$-Based Model}
\label{Sec:TpTBM}
\setcounter{footnote}{3}
\mathversion{normal}
In this section we recall the main features of the flavour model based on $T'$ \cite{FHLM_Tp}, the double-valued group of the tetrahedral symmetry, the latter being isomorphic to $A_4$. Further synonyms of $T^{\prime}$ are Type $24/13$ and $SL_{2} (F_{3})$ \cite{CF_Tp}. The key role in our construction is played by the fact that $T'$ is the double covering of the tetrahedral group $A_4$. The relation between $T'$ and $A_4$ can be understood by thinking of $A_4$, the group of proper rotations in the three-dimensional space leaving a regular tetrahedron invariant, as a subgroup of $SO(3)$. Thus the $12$ elements of $A_4$ are in a one-to-one correspondence with $12$ sets of Euler angles. Now consider $SU(2)$, the double covering of $SO(3)$, possessing ``twice'' as many elements as $SO(3)$. There is a correspondence from $SU(2)$ to $SO(3)$ that maps two distinct elements of $SU(2)$ into the same set of Euler angles of $SO(3)$. The group $T'$ can be defined as the inverse image under this map of the group $A_4$.
The group $T'$ has $24$ elements and has two kinds of representations. It contains the representations of $A_4$: one triplet $\bf3$ and three singlets $\bf1$, $\bf1'$ and $\bf1''$. When working with these representations there is no way to distinguish the group $T'$ from the group $A_4$. In particular, in these representations, the elements of $T'$ coincide two by two and can be described by the same matrices that represent the elements in $A_4$. The other representations are three doublets $\bf2$, $\bf2'$ and $\bf2''$. Note that $A_{4}$ is not a subgroup of $T'$, since the two-dimensional representations cannot be decomposed into representations of $A_{4}$.
The group $T'$ is generated by two elements $S$ and $T$ fulfilling the relations:
\begin{equation}
S^{2}=\mathbb{R}\;, \;\; T^{3}=\mathbb{1}\;, \;\; (S T)^{3}=\mathbb{1}\;, \;\; \mathbb{R}^{2}=\mathbb{1}\;,
\end{equation}
where $\mathbb{R}=\mathbb{1}$ for the odd-dimensional representations and $\mathbb{R}=-\mathbb{1}$ for $\bf2$, $\bf2'$ and $\bf2''$, so that $\mathbb{R}$ commutes with all the elements of the group. Beyond the center of the group, generated by the elements $\mathbb{1}$ and $\mathbb{R}$, there are other Abelian subgroups: $Z_3$, $Z_4$ and $Z_6$. In particular, there is a $Z_4$ subgroup, here denoted by $G_S$, generated by the element $TST^2$, and a $Z_3$ subgroup, here called $G_T$, generated by the element $T$.
As we will see, $G_S$ and $G_T$ are of great importance for the structure of our model. Realisations of $S$ and $T$ for all the representations can be found in appendix \ref{AppA:Tp}.
The multiplication rules of the representations are as follows:
\begin{equation}
\begin{array}{l}
{\bf1}^a \times {\bf r}^b = {\bf r}^b\times {\bf1}^a={\bf r}^{a+b}\qquad\qquad\text{for}\;{\bf r}={\bf1},\,{\bf2}\\
{\bf1}^a \times {\bf3} = {\bf3}\times {\bf1}^a = {\bf3}\\
{\bf2}^a \times {\bf2}^b = {\bf3} + {\bf1}^{a+b}\\
{\bf2}^a \times {\bf3} = {\bf3}\times {\bf2}^a = {\bf2} + {\bf2'} + {\bf2''}\\
{\bf3} \times {\bf3} = {\bf3} + {\bf3} + {\bf1} + {\bf1'} + {\bf1''}
\end{array}
\label{TpTBM:mult}
\end{equation}
where $a,\,b=0,\pm1$ and we have denoted ${\bf1}^0\equiv{\bf1}$, ${\bf1}^{1}\equiv{\bf1'}$, ${\bf1}^{-1}\equiv{\bf1''}$, and similarly for the doublet representations. On the right-hand side the sum $a+b$ is understood modulo 3. The Clebsch-Gordan coefficients for the decomposition of the product representations are given in appendix \ref{AppA:Tp}.
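As a trivial cross-check of the rules in eq. (\ref{TpTBM:mult}), one can verify that each product is dimensionally consistent. The following Python sketch (our own bookkeeping illustration, not part of the original analysis, with ad hoc labels for the primed representations) does this:

```python
# Bookkeeping check: for each T' tensor product quoted in the text, the
# dimension of the product equals the sum of the dimensions of its parts.
dim = {"1": 1, "1p": 1, "1pp": 1, "2": 2, "2p": 2, "2pp": 2, "3": 3}

# each rule: (factor, factor, components on the right-hand side)
rules = [
    ("1p", "2pp", ["2"]),               # 1^a x 2^b = 2^{a+b}, a+b mod 3
    ("1p", "3",   ["3"]),               # 1^a x 3 = 3
    ("2p", "2pp", ["3", "1"]),          # 2^a x 2^b = 3 + 1^{a+b}
    ("2",  "3",   ["2", "2p", "2pp"]),  # 2^a x 3 = 2 + 2' + 2''
    ("3",  "3",   ["3", "3", "1", "1p", "1pp"]),
]

for a, b, parts in rules:
    assert dim[a] * dim[b] == sum(dim[p] for p in parts)
print("all T' product dimensions are consistent")
```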
\subsection{Outline of the Model}
\label{Sec:TpTBM:Outline}
\setcounter{footnote}{3}
In this section we introduce our model and illustrate its main features. We choose the model to be supersymmetric, which helps us when discussing the vacuum selection and the symmetry breaking pattern of $T'$. The model is required to be invariant under a flavour symmetry group $G_f=T' \times Z_3 \times U(1)_{FN} \times U(1)_R$. The factor $T'$ is responsible for the tribimaximal lepton mixing. The group $T'$ alone is unable to produce the necessary mass suppressions for all the fermions. These suppressions originate in part from a spontaneously broken $U(1)_{FN}$, according to the original Froggatt-Nielsen mechanism. The $Z_3$ factor helps in keeping separate the contributions to neutrino masses and to charged fermion masses, and it is an important ingredient in the vacuum alignment analysis. Finally, $U(1)_R$ contains the usual $R$-parity as a subgroup and simplifies the construction of the scalar potential. The fields of the model, together with their transformation properties under the flavour group, are listed in table \ref{table:Tptransformations}.
\begin{table}[!ht]
\hspace{-.5cm}
\begin{tabular}{|c||c|c|c|c||c|c|c|c|c|c||c|c|c|c|c|c|c|}
\hline
&&&&&&&&&&&&&&&&&\\[-4mm]
& $\ell$ & $e^c$ & $\mu^c$ & $\tau^c$ & $D_q$ & $D_u^c$ & $D_d^c$ & $q_3$ & $t^c$ & $b^c$ & $H_{u,d}$ & $\theta$ & $\varphi_T$ & $\varphi_S$ & $\xi$, $\tilde{\xi}$ & $\eta$ & $\xi^{\prime\prime}$ \\[2mm]
\hline
&&&&&&&&&&&&&&&&&\\[-4mm]
$T'$ & {\bf3} & {\bf1} & $\bf1''$ & $\bf1'$ & $\bf2''$ & $\bf2''$ & $\bf2''$ & {\bf1} & {\bf1} & {\bf1} & {\bf1} & {\bf1} & {\bf3} & {\bf3} & {\bf1} & $\bf2'$ & $\bf1''$ \\[2mm]
$Z_3$ & $\omega$ & $\omega^2$ & $\omega^2$ & $\omega^2$ & $\omega$ & $\omega^2$ & $\omega^2$ & $\omega$ & $\omega^2$ & $\omega^2$ & $1$ & $1$ & $1$ & $\omega$ & $\omega$ & 1 & 1 \\[2mm]
$U(1)_{FN}$ & 0 & 2 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & $-1$ & 0 & 0 & 0 & 0 & 0 \\[2mm]
$U(1)_R$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[2mm]
\hline
\end{tabular}
\caption{\it The transformation rules of the fields under the symmetries associated to the groups $T'$, $Z_3$, $U(1)_{FN}$ and $U(1)_R$. We denote $D_q=(q_1,q_2)^t$ where $q_1=(u,d)^t$ and $q_2=(c,s)^t$ are the electroweak $SU(2)_L$-doublets of the first two generations, $D_u^c=(u^c,c^c)^t$ and $D_d^c=(d^c,s^c)^t$. $D_q$, $D_u^c$ and $D_d^c$ are doublets of $T'$. $q_3=(t,b)^t$ is the electroweak $SU(2)_L$-doublet of the third generation. $q_3$, $t^c$ and $b^c$ are all singlets under $T'$.}
\label{table:Tptransformations}
\end{table}
The most important feature of our model is the pattern of symmetry breaking of the flavour group $T'$. We will see that, at the leading order, $T'$ is broken down to the subgroup $G_S$, generated by the element $TST^2$, in the neutrino sector and to the subgroup $G_T$, generated by $T$, in the charged fermion sector. This pattern of symmetry breaking is achieved dynamically and corresponds to a local minimum of the scalar potential of the model. This result is already sufficient to understand the predicted pattern of fermion mixing angles.
Indeed, given the $T'$ assignment of the matter fields displayed in table \ref{table:Tptransformations} and the explicit expressions
of the generators $S$ and $T$ for the various representations (see appendix \ref{AppA:Tp}), specific mass textures are obtained from the requirement of invariance under $TST^2$ or $T$. For instance, neutrinos are in a triplet of $T'$ and the element $TST^2$ in the triplet representation is given by:
\begin{equation}
TST^2=\dfrac{1}{3}\left(\begin{array}{ccc}
-1 & 2 & 2 \\
2 & -1 & 2 \\
2 & 2 & -1 \\
\end{array}\right)\;.
\end{equation}
The most general mass matrix for neutrinos invariant under $G_S$, in arbitrary units, is given by:
\begin{equation}
m_\nu=\left(
\begin{array}{ccc}
a+c & -b/3-c+d & -b/3 \\
-b/3-c+d & c & a-b/3 \\
-b/3 & a-b/3 & d
\end{array}
\right)
\label{TpTBM:t1}
\end{equation}
where $a$, $b$, $c$ and $d$ are arbitrary parameters. Similarly, the most general mass matrices for charged fermions invariant under $G_T$ have the following structure:
\begin{equation}
M_e=\left(
\begin{array}{ccc}
\times & 0 & 0 \\
0 & \times & 0 \\
0 & 0 & \times \\
\end{array}
\right)\;,\qquad
M_{u,d}=\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & \times & \times \\
0 & \times & \times \\
\end{array}
\right)
\label{TpTBM:t2}
\end{equation}
where a cross denotes a non-vanishing entry.
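These statements are easy to check numerically. The following Python sketch (our own illustration, not taken from \cite{FHLM_Tp}) verifies that the matrix $TST^2$ above squares to the identity, as appropriate in the triplet representation where $\mathbb{R}=\mathbb{1}$, and that the texture in eq. (\ref{TpTBM:t1}) is $G_S$-invariant for arbitrary $a$, $b$, $c$, $d$:

```python
import numpy as np

# TST^2 in the triplet representation of T'
M = np.array([[-1, 2, 2],
              [ 2, -1, 2],
              [ 2, 2, -1]]) / 3.0

# order 2 in the triplet (R = identity here)
assert np.allclose(M @ M, np.eye(3))

# eq. (TpTBM:t1) with random real parameters a, b, c, d
rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=4)
m = np.array([[a + c,         -b/3 - c + d, -b/3    ],
              [-b/3 - c + d,   c,            a - b/3],
              [-b/3,           a - b/3,      d      ]])

# invariance of a Majorana mass matrix under G_S: M^T m M = m
assert np.allclose(M.T @ m @ M, m)
print("m_nu of eq. (TpTBM:t1) is G_S-invariant")
```

The check exploits the fact that all column sums of $m_\nu$ are equal, so that $m_\nu$ commutes with $TST^2$.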
The lepton mixing originates completely from $m_\nu$ and, with an additional requirement, reproduces the tribimaximal scheme. This additional requirement is the condition $c=d$, which is not generically implied by the invariance under $G_S$. In our model the fields that break $T'$ along the $G_S$ direction are a triplet $\varphi_S$ and an invariant singlet $\xi$. There are no further scalar singlets transforming as $\bf1'$ or $\bf1''$ that couple to the neutrino sector. We will see in a moment that, due to this restriction, our model gives rise to a particular version of the neutrino mass matrix in eq. (\ref{TpTBM:t1}), with $c=d=2b/3$, which directly implies tribimaximal mixing. The same feature holds, and works in exactly the same way, in the Altarelli-Feruglio model.
It is interesting to note that, while the requirement of $G_T$ invariance implies a diagonal mass matrix in the charged lepton sector, this is not the case for the quark sector, due to the different $T'$ assignment. At the leading order, in both the up and down sectors, we get mass matrices with vanishing first row and column, eq. (\ref{TpTBM:t2}). Moreover, the element $(33)$ of both mass matrices is larger than the other elements, since it is invariant under the full $T'$ group, not only under the $G_T$ subgroup. The other non-vanishing elements carry a suppression factor originating from the breaking of $T'$ down to $G_T$. This pattern of quark mass matrices, while not yet fully realistic, is however encouraging, since it correctly reproduces the masses and the mixing angle of the second and third generations. As we will see, the textures in eqs. (\ref{TpTBM:t1}, \ref{TpTBM:t2}) are modified by subleading effects. These effects are sufficiently small to preserve the good features of the leading order approximation, and large enough to provide a realistic description of the quark sector.
Fermion masses are generated by the superpotential $w$:
\begin{equation}
w=w_\ell+w_q+w_d
\label{TpTBM:fullw}
\end{equation}
where $w_\ell$ is the term responsible for the Yukawa interactions in the lepton sector, $w_q$ is the analogous term for quarks and $w_d$ is the term responsible for the vacuum alignment. We will consider the expansion of $w$ in inverse powers of the cutoff scale $\Lambda_f$ and we will write down only the first non-trivial terms of this expansion. This provides the leading order approximation, here analysed in detail. Corrections to this approximation are produced by higher dimensional operators contributing to $w$. As described in appendix \ref{AppB:Tp}, at the leading order,
the scalar components of the supermultiplets $\varphi_T$, $\varphi_S$, $\xi$, $\tilde{\xi}$, $\eta$ and $\xi''$ develop VEVs according to
\begin{eqnarray}
\mean{\varphi_S}=(v_S,\,v_S,\,v_S)\;,&\qquad
\mean{\xi}=v_\xi\;,&\qquad
\mean{\tilde{\xi}}=0\;,
\label{TpTBM:love1}\\[3mm]
\mean{\varphi_T}=(v_T,\,0,\,0)\;,&\qquad
\mean{\eta}=(v_1,0)\;,&\qquad
\mean{\xi''}=0\;.
\label{TpTBM:love2}
\end{eqnarray}
Notice that the VEVs of $\varphi_T$, $\varphi_S$, $\xi$ and $\tilde{\xi}$ correspond to those in the Altarelli-Feruglio model and, as in that realisation, it is reasonable to choose:
\begin{equation}
\dfrac{VEV}{\Lambda_f}\approx \lambda^2\;.
\label{TpTBM:vevratio}
\end{equation}
Furthermore, there is a neat misalignment in flavour space between $\langle\varphi_T\rangle$, $\langle\eta\rangle$ and $\langle\varphi_S\rangle$:
$\langle\varphi_T\rangle=(v_T,0,0)$, $\langle\eta\rangle=(v_1,0)$ and $\langle\xi''\rangle=0$ break $T'$ down to the subgroup $G_T$, while $\langle\varphi_S\rangle=(v_S,v_S,v_S)$ breaks $T'$ down to the subgroup $G_S$. This misalignment is precisely the origin of the mass textures in eqs. (\ref{TpTBM:t1}, \ref{TpTBM:t2}).
A certain freedom is present in our formalism and this can lead to models that are physically equivalent though different at a superficial level, when comparing VEVs or mass matrices. One source of freedom is the possibility of working with different bases for the generators $S$ and $T$. Another is the fact that the vacua that break $T'$ are degenerate and lie in orbits of the flavour group: for instance, while the set of VEVs in eq. (\ref{TpTBM:love2}) breaks $T'$ leaving invariant the $Z_3$ subgroup generated by $T$, the VEVs obtained from this set by acting with elements of $T'$ are degenerate with it and preserve other $Z_3$ subgroups of $T'$. Both these sources of freedom can lead to mass matrices different from those explicitly shown in eqs. (\ref{TpTBM:t1}, \ref{TpTBM:t2}). It is however easy to show that the different ``pictures'' are related by field redefinitions and that the physical properties of the system, such as the mass eigenvalues and the physical mixing angles, are always the same. Thus it is not restrictive to work in a particular basis and to choose a single representative VEV configuration, as we will do in the following.
\subsection{Fermion Masses and Mixings at the Leading Order}
\label{Sec:TpTBM:MassesMixingsLO}
\setcounter{footnote}{3}
Lepton masses are described by $w_\ell$, given by:
\begin{equation}
\begin{split}
w_\ell=&\phantom{+}\dfrac{y_e}{\Lambda_f^3} \theta^2e^c H_d \left(\varphi_T \ell\right) +\dfrac{y_\mu}{\Lambda_f^2} \theta\mu^c H_d \left(\varphi_T \ell\right)' +\dfrac{y_\tau}{\Lambda_f} \tau^c H_d \left(\varphi_T \ell\right)''+\\[3mm]
&+\dfrac{x_a}{\Lambda_f\Lambda_L} \xi (\ell\ell)H_uH_u + \dfrac{x_b}{\Lambda_f\Lambda_L} (\varphi_S\ell\ell)H_uH_u+\ldots\;,
\end{split}
\label{TpTBM:wlplus}
\end{equation}
where dots here and in the following formulae stand for higher dimensional operators. The contractions under $SU(2)_L$ are understood and the notation $(\ldots)$, $(\ldots)'$ and $(\ldots)''$ refers to the contractions in $\bf1$, $\bf1'$ and $\bf1''$, respectively. This superpotential corresponds to the Lagrangians in eqs. (\ref{AFTBM:Ll}, \ref{AFTBM:Lnu}) when the supersymmetric context is considered, and therefore all the results listed for the Altarelli-Feruglio model hold exactly in this model based on $T'$. For simplicity, we recall here the mass matrices for the charged leptons and for neutrinos; here and in the following we make the notation more compact by indicating with $t$ the ratio of the VEV of the flavon $\theta$ over the cutoff of the theory, as in the Altarelli-Feruglio model, and we get
\begin{equation}
M_e=\left(
\begin{array}{ccc}
y_e t^2 & 0 & 0 \\
0 & y_\mu t & 0 \\
0 & 0 & y_\tau \\
\end{array}
\right)\dfrac{v_d\,v_T}{\sqrt2\Lambda_f}\;,\qquad
m_\nu=\left(
\begin{array}{ccc}
a+2 b/3 & -b/3 & -b/3 \\
-b/3 & 2b/3 & a-b/3 \\
-b/3 & a-b/3 & 2 b/3 \\
\end{array}
\right)\dfrac{v_u^2}{\Lambda_L}\;,
\label{TpTBM:LOMasses}
\end{equation}
where $a\equiv x_a\,v_\xi/\Lambda_f$ and $b\equiv x_b\,v_S/\Lambda_f$.\\
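As a numerical cross-check (our own sketch, not part of the original analysis), one can verify that $m_\nu$ above is diagonalised by the tribimaximal mixing matrix, with eigenvalues $(a+b,\,a,\,b-a)$ in units of $v_u^2/\Lambda_L$:

```python
import numpy as np

# m_nu of eq. (TpTBM:LOMasses) for random real a, b (overall scale dropped)
rng = np.random.default_rng(1)
a, b = rng.normal(size=2)
m = np.array([[a + 2*b/3, -b/3,      -b/3     ],
              [-b/3,       2*b/3,     a - b/3 ],
              [-b/3,       a - b/3,   2*b/3   ]])

# tribimaximal mixing matrix
U = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0            ],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

D = U.T @ m @ U
assert np.allclose(D, np.diag([a + b, a, b - a]))
print("tribimaximal eigenvalues:", np.round(np.diag(D), 6))
```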
The contribution to the superpotential in the quark sector is given by
\begin{equation}
\begin{split}
w_q =&\phantom{+}y_t\left(t^c q_3\right)H_u+\dfrac{y_b}{\Lambda_f}\theta\left(b^c q_3\right)H_d+\\[3mm]
&+\dfrac{y_1}{\Lambda_f^2}\theta(\varphi_T D_u^c D_q) H_u + \dfrac{y_5}{\Lambda_f^2}\theta(\varphi_T D_d^c D_q) H_d+\\[3mm]
&+\dfrac{y_2}{\Lambda_f^2}\theta\xi^{\prime\prime}(D_u^c D_q)' H_u + \dfrac{y_6}{\Lambda_f^2}\theta\xi^{\prime\prime}(D_d^c D_q)' H_d+\\[3mm]
&+\dfrac{1}{\Lambda_f}\left[y_3\, t^c (\eta D_q) + \dfrac{y_4}{\Lambda_f}\theta(D_u^c \eta)q_3\right] H_u+\\[3mm]
&+\dfrac{1}{\Lambda_f^2}\theta\Big[y_7\, b^c (\eta D_q) + y_8\, (D_d^c \eta) q_3 \Big] H_d+\ldots
\end{split}
\label{TpTBM:wmq}
\end{equation}
Observe that the supermultiplets $\varphi_S$, $\xi$ and $\tilde{\xi}$, which control the neutrino mass matrix, do not couple to the quark sector at the leading order. Conversely, the supermultiplets $\varphi_T$, $\eta$ and $\xi''$, which give masses to the charged fermions, do not couple to neutrinos at the leading order. This separation is partly due to the discrete $Z_3$ symmetry, described in table \ref{table:Tptransformations}. By recalling the VEVs of eqs. (\ref{TpTBM:love1}, \ref{TpTBM:love2}), we can write down the mass matrices for the up and down quarks: at the leading order we have
\begin{equation}
M_u=\left(\begin{array}{ccc}
0 & 0 & 0 \\[2mm]
0 & y_1\,t\, \dfrac{v_T}{\Lambda_f} & y_4\,t\, \dfrac{v_1}{\Lambda_f} \\[2mm]
0 & y_3\, \dfrac{v_1}{\Lambda_f} & y_t \\[2mm]
\end{array}\right)\dfrac{v_u}{\sqrt2}\;,\qquad
M_d=\left(\begin{array}{ccc}
0 & 0 & 0 \\[2mm]
0 & y_5\, \dfrac{v_T}{\Lambda_f} & y_8\, \dfrac{v_1}{\Lambda_f} \\[2mm]
0 & y_7\, \dfrac{v_1}{\Lambda_f} & y_b \\[2mm]
\end{array}\right)\dfrac{v_d\,t}{\sqrt2}\;.
\label{TpTBM:qmassm}
\end{equation}
These mass matrices are the most general ones that are invariant under $G_T$, see eq. (\ref{TpTBM:t2}). The following absolute values for quark masses and mixing angles are predicted, at the leading order:
\begin{equation}
\begin{array}{ccc}
m_u=0\;, & \qquad m_c\approx y_1 \dfrac{v_u\, v_T\, t}{\sqrt2\,\Lambda_f}\;, &\qquad m_t\approx y_t\, \dfrac{v_u}{\sqrt2}\\[3mm]
m_d=0\;, & \qquad m_s\approx y_5 \dfrac{v_d\, v_T\, t}{\sqrt2\,\Lambda_f}\;, &\qquad m_b\approx y_b\, \dfrac{v_d\, t}{\sqrt2}\\[3mm]
V_{us}=0\;, & \qquad V_{ub}=0\;, & \qquad V_{cb}\approx \left(\dfrac{y_7}{y_b}-\dfrac{y_3}{y_t}\right)\dfrac{v_1}{\Lambda_f}\;.
\end{array}
\end{equation}
The mass of the top quark is expected to be of the order of the VEV \mbox{$v_u\approx\cO(100\;\mathrm{GeV})$}. The mass of the bottom quark is suppressed with respect to $m_t$ by the Froggatt-Nielsen mechanism, so that it is of the same order as $m_\tau$. For values of order one of the dimensionless coefficients $y_b$ and $y_5$, the ratio $m_s/m_b$ is correctly reproduced, since it is approximately given by $VEV/\Lambda_f$, which we already chose of order $\lambda^2$, see eq. (\ref{TpTBM:vevratio}). The mass of the charm quark is $m_c\approx \lambda^4v_u$ and therefore the ratio $m_c/m_t\approx\lambda^4$ holds. Finally, the element $V_{cb}$ is of order $v_1/\Lambda_f\approx\lambda^2$, in agreement with experiment. Masses and mixing angles are still unrealistic, since $m_u/m_c$, $m_d/m_s$, $V_{ub}$ and $V_{us}$ all vanish at this level.
We will see that all these parameters can be generated by higher-order corrections, in particular those affecting the VEVs in eqs. (\ref{TpTBM:love1}, \ref{TpTBM:love2}).
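The leading order predictions above can be illustrated numerically. The following sketch is our own illustration, with couplings of our own choosing (e.g. $y_7=2$, so that $V_{cb}$ does not vanish) and assuming the mass term convention $f^c M f$, so that left-handed rotations act on the columns; it confirms that $m_u=m_d=0$, $V_{us}=V_{ub}=0$ and that $|V_{cb}|$ follows the quoted formula:

```python
import numpy as np

lam = 0.22
r = lam**2        # v_T/Lambda_f = v_1/Lambda_f, cf. eq. (TpTBM:vevratio)
t = lam**2        # <theta>/Lambda_f
# illustrative O(1) couplings (y7 != y3 so that V_cb does not vanish)
y1 = y4 = y5 = y8 = 1.0
y3, y7, yt, yb = 1.0, 2.0, 1.0, 1.0

# eq. (TpTBM:qmassm), dropping the overall factors v_u/sqrt(2), v_d t/sqrt(2)
Mu = np.array([[0.0, 0.0,    0.0   ],
               [0.0, y1*t*r, y4*t*r],
               [0.0, y3*r,   yt    ]])
Md = np.array([[0.0, 0.0,  0.0 ],
               [0.0, y5*r, y8*r],
               [0.0, y7*r, yb  ]])

def left_rotation(M):
    """Columns of M multiply the left-handed fields (mass term f^c M f);
    generations are reordered from light to heavy."""
    _, s, Vh = np.linalg.svd(M)
    return Vh[::-1].conj().T, s[::-1]

VuL, su = left_rotation(Mu)
VdL, sd = left_rotation(Md)
Vckm = VuL.conj().T @ VdL

assert su[0] < 1e-12 and sd[0] < 1e-12                       # m_u = m_d = 0
assert abs(Vckm[0, 1]) < 1e-10 and abs(Vckm[0, 2]) < 1e-10   # V_us = V_ub = 0
expected_Vcb = abs((y7/yb - y3/yt) * r)   # leading order formula in the text
assert abs(abs(Vckm[1, 2]) / expected_Vcb - 1) < 0.2
print("LO check passed, |V_cb| =", round(abs(Vckm[1, 2]), 4))
```

The small deviation of $|V_{cb}|$ from the analytic formula is the expected higher order effect of the diagonalisation.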
\subsection{Higher-Order Corrections}
\label{Sec:TpTBM:NLO}
\setcounter{footnote}{3}
The inclusion of higher-order corrections is essential in our model. First of all, from these corrections we hope to achieve a realistic mass spectrum in the quark sector. The leading order result is encouraging, but the quarks of the first generation are still massless at this level and there is no mixing between the first generation and the other two. Moreover, we should check that the higher-order corrections do not spoil the leading order results. At the leading order there is a neat separation between the scalar fields giving masses to the neutrino sector and those giving masses to the charged fermion sector. As a result, the $T'$ flavour symmetry is broken down in two different directions in the two sectors: neutrino mass terms are invariant under the subgroup $G_S$, while the charged fermion mass terms are invariant under the subgroup $G_T$. This misalignment is precisely the source of the tribimaximal lepton mixing. Such a sharp separation is not expected to survive when higher dimensional operators are included, and this will cause the breaking of the subgroup $G_S$ ($G_T$) in the neutrino (charged fermion) sector. It is important to check that this further breaking does not modify too much the misalignment achieved at the leading order and that the tribimaximal mixing remains stable.
The corrections are induced by higher dimensional operators, compatible with all the symmetries of our model, that can be included in the superpotential $w$, thus providing the next terms in a $1/\Lambda_f$ expansion. Here we do not enter into the details of the analysis already presented in \cite{FHLM_Tp} and we simply give the results.
When looking at the corrections to the flavon sector (see appendix \ref{AppB:Tp}), we can parametrise the new VEVs as
\begin{equation}
\begin{array}{c}
\mean{\varphi_T}=(v_T+\delta v_{T1},\delta v_{T2},\delta v_{T3})\;,\qquad
\mean{\varphi_S}=(v_S+\delta v_{S1},v_S+\delta v_{S2},v_S+\delta v_{S3})\;\\[3mm]
\mean{\xi}=v_\xi\;,\qquad
\mean{\tilde{\xi}}=\delta v_{\tilde{\xi}} \;,\qquad
\mean{\eta}=(v_1+\delta v_1,\delta v_2)\;,\qquad\langle\xi''\rangle=\delta v_{\xi''}\;.
\end{array}
\end{equation}
This modification will affect also the fermion mass matrices, since new flavour structures are expected to appear when these new VEVs are introduced in the superpotential $w$ of eqs. (\ref{TpTBM:wlplus}, \ref{TpTBM:wmq}).
Lepton masses and mixing angles are modified by terms of relative order $\lambda^2$, exactly in the same way as in the Altarelli-Feruglio model. This correction is close to the $3\sigma$ experimental error for $\theta_{12}$ and largely within the current uncertainties of $\theta_{23}$ and $\theta_{13}$. From the experimental viewpoint, a small non-vanishing value $\theta_{13}\approx \lambda^2$ and a deviation from $\pi/4$ of order $\lambda^2$ in $\theta_{23}$ are both close to the reach of the next generation of neutrino experiments and will provide a valuable test of this model.
In the quark sector, all the subleading effects combine to give the new quark mass matrices:
\begin{gather}
M_u=\left(\begin{array}{ccc}
i\, y_1\,t\,\dfrac{\delta v_{T2}}{\Lambda_f} +...& (1-i)\,y_1\,t\,\dfrac{\delta v_{T3}}{2\Lambda_f}\,+\,y_2\,t\,\dfrac{\delta v_{\xi''}}{\Lambda_f}& -\,y_4\,t\,\dfrac{\delta v_2}{\Lambda_f} \\[2mm]
(1-i) \,y_1\,t\,\dfrac{\delta v_{T3}}{2\Lambda_f}-\,y_2\,t\,\dfrac{\delta v_{\xi''}}{\Lambda_f} & \,y_1\,t\, \dfrac{v_T}{\Lambda_f} & \,y_4\,t\, \dfrac{v_1}{\Lambda_f} \\[2mm]
-\,y_3\dfrac{\delta v_2}{\Lambda_f} & \,y_3 \dfrac{v_1}{\Lambda_f} & \,y_t \\[2mm]
\end{array}\right)\frac{v_u}{\sqrt2}\\[3mm]
M_d=\left(\begin{array}{ccc}
i \,y_5\dfrac{\delta v_{T2}}{\Lambda_f}+... & (1-i) \,y_5\dfrac{\delta v_{T3}}{2\Lambda_f}+\,y_6\dfrac{\delta v_{\xi''}}{\Lambda_f}& -\,y_8 \dfrac{\delta v_2}{\Lambda_f} \\[2mm]
(1-i) \,y_5\dfrac{\delta v_{T3}}{2\Lambda_f}-\,y_6\dfrac{\delta v_{\xi''}}{\Lambda_f} & \,y_5 \dfrac{v_T}{\Lambda_f} & \,y_8 \dfrac{v_1}{\Lambda_f} \\[2mm]
- \,y_7 \dfrac{\delta v_2}{\Lambda_f} & \,y_7 \dfrac{v_1}{\Lambda_f} & \,y_b \\[2mm]
\end{array}\right)\dfrac{v_d\,t}{\sqrt2}\;,
\label{hoqm}
\end{gather}
where the dots in the $(11)$ entries of $M_u$ and $M_d$ stand for additional contributions from higher dimensional operators.
Not all the available parameter space is suitable to correctly reproduce the masses and the mixing angles of the first generation quarks. To explain this point we rewrite the quark mass matrices indicating only the orders of magnitude in $\lambda$ of each entry:
\begin{equation}
M_u\sim\left(
\begin{array}{ccc}
\lambda^6 & \lambda^6 & \lambda^6 \\
\lambda^6 & \lambda^4 & \lambda^4 \\
\lambda^4 & \lambda^2 & 1 \\
\end{array}
\right)
\frac{v_u}{\sqrt2}\;,\qquad
M_d\sim\left(
\begin{array}{ccc}
\lambda^4 & \lambda^4 & \lambda^4 \\
\lambda^4 & \lambda^2 & \lambda^2 \\
\lambda^4 & \lambda^2 & 1 \\
\end{array}
\right)
\frac{v_d\,\lambda^2}{\sqrt2}\;.
\end{equation}
It is now easy to see that, up to small corrections,
\begin{equation}
\dfrac{m_u}{m_c}= \dfrac{m_d}{m_s}= \dfrac{\delta v_{T2}}{v_T}\approx \lambda^2\;,
\end{equation}
which is not correct in the up sector. To overcome this difficulty we assume that the correction $\delta v_{T2}$ is somewhat smaller than its natural value:
\begin{equation}
\dfrac{\delta v_{T2}}{v_T}\approx \lambda^4\;.
\label{TpTBM:ass1}
\end{equation}
This brings the up quark mass into the correct range but suppresses the down quark mass too much. To get the appropriate mass for the down quark
we assume that the dimensionless coefficient $y_6$ is larger than one by a factor $1/\lambda$:
\begin{equation}
y_6\approx\dfrac{1}{\lambda}\;.
\label{TpTBM:ass2}
\end{equation}
We cannot justify the two assumptions in eqs. (\ref{TpTBM:ass1}, \ref{TpTBM:ass2}) within our approach, where, in the absence of a theory for the higher-order terms, we have allowed for the most general higher-order corrections. From our effective Lagrangian approach, they should be seen as two moderate tunings needed in order to reproduce the up and down quark masses. To summarise, in our parameter space all dimensionless parameters are naturally of order one, with the exception of $y_6$. Concerning the VEVs, we can naturally accommodate $VEV/\Lambda_f\approx\delta VEV/VEV\approx \lambda^2$, with the exception of $\delta v_{T2}$. Looking at the equations in appendix \ref{AppB:Tp}, it is easy to see that this assumption has no consequences on the other shifts of the VEVs. Within the restricted region of the parameter space where the two relations in eqs. (\ref{TpTBM:ass1}, \ref{TpTBM:ass2}) are approximately valid, the quark mass matrices have the following structures:
\begin{equation}
M_u=\left(\begin{array}{ccc}
\lambda^8 & \lambda^6 & \lambda^6 \\
\lambda^6 & \lambda^4 & \lambda^4 \\
\lambda^4 & \lambda^2 & 1 \\
\end{array}\right)\dfrac{v_u}{\sqrt2}\;,\qquad
M_d=\left(\begin{array}{ccc}
\lambda^6 & \lambda^3 & \lambda^4 \\
\lambda^3 & \lambda^2 & \lambda^2 \\
\lambda^4 & \lambda^2 & 1 \\
\end{array}\right)\dfrac{v_d\lambda^2}{\sqrt2}\;.
\label{qmtextures}
\end{equation}
By diagonalising the matrices in eq. (\ref{hoqm}) with standard perturbative techniques we obtain:
\begin{equation}
\begin{array}{ll}
m_u\approx \left\vert y_1\dfrac{v_u\,t}{\sqrt2} \left\{i\dd\frac{\delta v_{T2}}{\Lambda_f}-\left[\left(\dd\frac{1-i}{2}\right)^2 \dd\frac{\delta v_{T3}^2}{v_T\Lambda_f}-\dd\frac{y_2^2}{y_1^2} \dd\frac{\delta v_{\xi''}^2}{v_T\Lambda_f}\right]\right\}+...\right\vert\;,&
m_d\approx \left\vert \dfrac{v_d\,t}{\sqrt2} \dd\frac{y_6^2}{y_5}\dd\frac{\delta v_{\xi''}^2}{v_T\Lambda_f}\right\vert\;,\\[5mm]
m_c\approx \left\vert y_1 \dfrac{v_u\,t}{\sqrt2} \dd\frac{v_T}{\Lambda_f}\right\vert+O(\lambda^6)\;,&
m_s\approx \left\vert y_5 \dfrac{v_d\,t}{\sqrt2} \dd\frac{v_T}{\Lambda_f}\right\vert+O(\lambda^6)\;,\\[5mm]
m_t\approx \left\vert y_t \dfrac{v_u}{\sqrt2}\right\vert+O(\lambda^4)\;, &
m_b\approx \left\vert y_b \dfrac{v_d\,t}{\sqrt2}\right\vert+O(\lambda^6)\;.
\end{array}
\end{equation}
For the mixing angles, we get:
\begin{equation}
\begin{array}{l}
V_{ud}\approx V_{cs}\approx 1+O(\lambda^2)\;,\qquad\qquad V_{tb}\approx 1\;,\\[5mm]
V_{us}^*\approx -V_{cd}\approx-\dd\frac{y_6}{y_5}\dd\frac{\delta v_{\xi''}}{v_T}-
\left[\left(\dd\frac{1-i}{2}\right)\dd\frac{\delta v_{T3}}{v_T}-\dd\frac{y_2}{y_1}\dd\frac{\delta v_{\xi''}}{v_T}\right]+O(\lambda^3)\;,\\[5mm]
V_{ub}^*\approx-\left(\dd\frac{y_7}{y_b}-\dd\frac{y_3}{y_t}\right) \left\{\dd\frac{\delta v_2}{\Lambda_f}+\dd\frac{v_1}{v_T}
\left[\left(\dd\frac{1-i}{2}\right)\dd\frac{\delta v_{T3}}{\Lambda_f}-\dd\frac{y_2}{y_1}\dd\frac{\delta v_{\xi''}}{\Lambda_f}\right] \right\}\;
\end{array}
\end{equation}
\begin{equation}
\begin{array}{l}
V_{cb}^*\approx-V_{ts}\approx\left(\dd\frac{y_7}{y_b}-\dd\frac{y_3}{y_t}\right) \dd\frac{v_1}{\Lambda_f}\;,\\[5mm]
V_{td}\approx-\dd\frac{y_6}{y_5} \left(\dd\frac{y_7}{y_b}-\dd\frac{y_3}{y_t}\right) \dd\frac{v_1\delta v_{\xi''}}{v_T \Lambda_f}+
\left(\dd\frac{y_7}{y_b}-\dd\frac{y_3}{y_t}\right) \dd\frac{\delta v_2}{\Lambda_f}\;,
\end{array}
\end{equation}
where, when not explicitly indicated, the relations include all terms up to $O(\lambda^4)$. In the previous expressions, where all the quantities are generically complex, it is possible to remove all phases except the one carried by the combination $(y_7/y_b-y_3/y_t)\delta v_2/\Lambda_f$, which enters $V_{ub}$ and $V_{td}$ at the order $\lambda^4$. Notice that in our model $V_{ub}$ is of order $\lambda^4$ whereas $V_{td}$ is of order $\lambda^3$. In the Wolfenstein parametrisation of the mixing matrix, this corresponds to a combination $\rho+i\eta$ of order $\lambda$, which is phenomenologically viable. Notice that quark masses and mixing angles are all reproduced with their correct orders of magnitude and enough parameters are present to fit the data. Moreover, despite the large number of parameters controlling the quark sector, our model contains a well-known \cite{GST_Relation} non-trivial relation between masses and mixing angles:
\begin{equation}
\sqrt{\dfrac{m_d}{m_s}}=\left\vert V_{us}\right\vert+O(\lambda^2)\;.
\label{TpTBM:p1}
\end{equation}
Due to the approximate unitarity relation $V_{td}+V_{us}^* V_{ts}+V_{ub}^*=0$ and due to the fact that $V_{ub}$ is of order $\lambda^4$
in our model, from the relation (\ref{TpTBM:p1}) we also get:
\begin{equation}
\sqrt{\dfrac{m_d}{m_s}}=\left\vert\dfrac{V_{td}}{V_{ts}}\right\vert+O(\lambda^2)\;.
\label{TpTBM:p2}
\end{equation}
These relations compare well with the data: from \cite{PDG08} we have $\sqrt{m_d/m_s}=0.213\div 0.243$, $\vert V_{us}\vert=0.2257\pm0.0010$
and $\vert V_{td}/V_{ts}\vert=0.209\pm0.001\pm0.006$. Unfortunately, the theoretical errors affecting eqs. (\ref{TpTBM:p1}) and (\ref{TpTBM:p2}), dominated respectively by the unknown $O(\lambda^2)$ term in $V_{us}$ and by the unknown $O(\lambda^4)$ term in
$V_{td}$, are of order $20\%$. For this reason, and for the large uncertainty on the ratio $m_d/m_s$, it is not possible to turn these predictions into precise tests of the model.
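As an illustration, one can build the matrices of eq. (\ref{hoqm}) with the $\lambda$-scalings of the VEV shifts and the assumptions of eqs. (\ref{TpTBM:ass1}, \ref{TpTBM:ass2}), with all remaining couplings set to one except an illustrative $y_7=2$ (our own choice, not a fit), and check numerically the orders of magnitude and the relation (\ref{TpTBM:p1}) within its stated accuracy:

```python
import numpy as np

lam = 0.22
r, t = lam**2, lam**2                 # v_T/Lf = v_1/Lf and <theta>/Lf
# VEV shifts delta v / Lambda_f, with delta v_T2/v_T ~ lam^4 (TpTBM:ass1)
dT2, dT3, dxi, d2 = lam**6, lam**4, lam**4, lam**4
y6 = 1 / lam                          # enhanced coupling, eq. (TpTBM:ass2)
y7 = 2.0                              # one O(1) coupling != 1, so V_cb != 0
c = (1 - 1j) / 2

# eq. (hoqm), all other couplings set to 1, overall factors dropped
Mu = np.array([[1j*t*dT2,        c*t*dT3 + t*dxi, -t*d2],
               [c*t*dT3 - t*dxi, t*r,             t*r  ],
               [-d2,             r,               1.0  ]])
Md = np.array([[1j*dT2,         c*dT3 + y6*dxi, -d2],
               [c*dT3 - y6*dxi, r,              r  ],
               [-y7*d2,         y7*r,           1.0]])

def left_rotation(M):
    # mass term f^c M f: left-handed rotations act on the columns
    _, s, Vh = np.linalg.svd(M)
    return Vh[::-1].conj().T, s[::-1]

VuL, su = left_rotation(Mu)
VdL, sd = left_rotation(Md)
V = VuL.conj().T @ VdL

md_over_ms = sd[0] / sd[1]
Vus, Vcb, Vub = abs(V[0, 1]), abs(V[1, 2]), abs(V[0, 2])

assert su[0] > 1e-8 and sd[0] > 1e-8           # first generation now massive
assert lam**2/3 < md_over_ms < 3*lam**2        # m_d/m_s = O(lam^2)
assert lam/2 < Vus < 2*lam                     # V_us = O(lam)
assert lam**2/2 < Vcb < 2*lam**2               # V_cb = O(lam^2)
assert Vub < Vcb < Vus                         # CKM hierarchy
# Gatto-Sartori-Tonin relation, eq. (TpTBM:p1), within a factor of two,
# consistent with the quoted O(lam^2) theoretical uncertainty
assert 0.5 < np.sqrt(md_over_ms) / Vus < 2.0
print("sqrt(md/ms) =", round(float(np.sqrt(md_over_ms)), 3),
      " |V_us| =", round(Vus, 3))
```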
It is interesting to compare our predictions with those of early models of quark masses based on $U(2)$ or $T'$ flavour symmetries \cite{4ZerosQuarks,56ZerosQuarks,QuarkRelations}.
They also predict eq. (\ref{TpTBM:p2}), with a smaller theoretical error of order $\lambda^3$.
Moreover, due to the characteristic two zero textures, in their early versions they predict $\sqrt{m_u/m_c}=\vert V_{ub}/V_{cb}\vert$,
which is off by approximately a factor of two. In our model the mass of the up quark depends on additional free parameters, which modify this wrong relation by a relative factor of order one.
\mathversion{bold}
\section{$S_4$-Based Model}
\label{Sec:S4TBM}
\setcounter{footnote}{3}
\mathversion{normal}
In the previous section we used the symmetry group $T'$ in order to describe the quark sector, keeping almost unchanged the lepton sector description of the Altarelli-Feruglio model. In this section we illustrate an alternative way to describe both leptons and quarks in a unified context based on the discrete group $S_4$. This is the group of permutations of four objects and is composed of $24$ elements. It can be defined by two generators $S$ and $T$ that satisfy
\begin{equation}
S^4 = T^3 = (ST^2)^2 = 1 \;.
\label{S4TBM:rel}
\end{equation}
The three relations reported above directly indicate the discrete Abelian subgroups of $S_4$: $Z_4$, $Z_3$ and $Z_2$, respectively. Furthermore, $S_4$ has five irreducible representations\footnote{It has the same number of elements as $T'$, but the representations are different.}: two singlets, $\bf1$, $\bf1'$, one doublet, $\bf2$, and two triplets, $\bf3$ and $\bf3'$. All the technical details are reported in appendix \ref{AppA:S4}.
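The presentation above can be realised concretely with permutation matrices. In the following sketch (our own illustration, with a hypothetical choice of generators) we take $S=(1\,2\,3\,4)$ and $T=(1\,2\,3)$, verify the three relations of eq. (\ref{S4TBM:rel}) and check that the two generators close on all $24$ elements:

```python
import numpy as np

def perm_matrix(p):
    """Permutation matrix P with P @ e_i = e_{p[i]} (0-indexed images)."""
    P = np.zeros((len(p), len(p)), dtype=int)
    for i, j in enumerate(p):
        P[j, i] = 1
    return P

S = perm_matrix([1, 2, 3, 0])   # the 4-cycle (1 2 3 4)
T = perm_matrix([1, 2, 0, 3])   # the 3-cycle (1 2 3)
I = np.eye(4, dtype=int)

# the three defining relations, eq. (S4TBM:rel)
assert np.array_equal(np.linalg.matrix_power(S, 4), I)
assert np.array_equal(np.linalg.matrix_power(T, 3), I)
ST2 = S @ T @ T                  # here a transposition, hence order 2
assert np.array_equal(ST2 @ ST2, I)

# S and T generate the full group of 24 permutation matrices
elems, frontier = {I.tobytes()}, [I]
while frontier:
    X = frontier.pop()
    for G in (S, T):
        Y = G @ X
        if Y.tobytes() not in elems:
            elems.add(Y.tobytes())
            frontier.append(Y)
assert len(elems) == 24
print("S_4 presentation relations verified; group order:", len(elems))
```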
The presence of the two-dimensional representation of $S_4$ allows for new patterns of the neutrino mass matrix, possibly different from the one of the Altarelli-Feruglio model. Indeed, the most general neutrino mass matrix which can be diagonalised by the tribimaximal mixing can be written as
\begin{equation}
m_\nu\sim\left(
\begin{array}{ccc}
a+2c& b-c& b-c\\
b-c& b+2c& a-c\\
b-c& a-c& b+2c
\end{array}
\right)\;,
\label{S4TBM:massaNuTB}
\end{equation}
in arbitrary units. It is easy to recognise in eq. \eqref{S4TBM:massaNuTB} the $\mu-\tau$ and the magic symmetries; furthermore, this description is equivalent to eq. \eqref{FS:TB:GeneralMassMatrix}. Usually this pattern is obtained by constructing the Lagrangian in such a way that the usual Weinberg operator, $\ell\ell$ (with the Higgs fields understood), is forbidden at the leading order and appears only at higher orders with additional flavons. The previous models based on $A_4$ and on $T'$ are characterised by $b=0$. The terms proportional to $a$ in eq. (\ref{S4TBM:massaNuTB}) come from $\ell\ell F_1$ and those proportional to $c$ from $\ell\ell F_3$, where $F_1$ and $F_3$ are flavons transforming respectively as a singlet $\bf1$ and as a triplet $\bf3$ of $A_4$.
The presence of the doublet representation of $S_4$ introduces a new feature in the neutrino mass matrix: the terms which contribute to $m_\nu$ are $\ell\ell F_1$, $\ell\ell F_3$ and the new $\ell\ell F_2$, where $F_2$ is a flavon transforming as a doublet $\bf2$. In eq. (\ref{S4TBM:massaNuTB}), this last contribution is represented by the term $b$. However, with three parameters describing three masses, no prediction on the neutrino masses is possible. For this reason the $S_4$-based model in which a singlet $F_1$, a doublet $F_2$ and also a triplet $F_3$ couple to $\ell\ell$ is not phenomenologically interesting. One could construct a model in which only a singlet and a doublet contribute to the neutrino mass matrix, but in this case $m_1=m_3$, which is ruled out by the experimental observations. It is also possible to consider a model in which only a singlet and a triplet contribute to the neutrino mass matrix: we have verified that such a model can be built, with a natural vacuum alignment. This model provides exactly the neutrino mass matrix with $b=0$ and therefore it has the same predictions in the lepton sector as the $A_4$-based models. For this reason, in this section we study the case in which only a doublet and a triplet couple to the term $\ell\ell$; as a result we get the unusual neutrino mass matrix
\begin{equation}
m_\nu\sim\left(
\begin{array}{ccc}
2c & b-c & b-c \\
b-c & b+2c & -c \\
b-c & -c & b+2c \\
\end{array}
\right)
\label{S4TBM:massaNuS4General}
\end{equation}
which can still be diagonalised by the tribimaximal mixing. This new pattern provides different predictions for the $0\nu2\beta$-decay and thus we expect to be able to distinguish this realisation from the other two predicting the tribimaximal mixing simply by looking at some observables related to the neutrino oscillations.
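As a quick numerical cross-check (an illustrative sketch, not part of the model construction itself), one can verify that a matrix of this form is diagonalised by the tribimaximal mixing for any values of $b$ and $c$, with eigenvalues $3c-b$, $2b$ and $3c+b$:

```python
import numpy as np

def m_nu(b, c):
    """Neutrino mass matrix generated by a doublet (b) and a triplet (c) flavon."""
    return np.array([[2*c,   b - c,   b - c  ],
                     [b - c, b + 2*c, -c     ],
                     [b - c, -c,      b + 2*c]])

# Tribimaximal mixing matrix (columns are the mass eigenvectors)
U_TB = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0           ],
                 [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
                 [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

def tb_eigenvalues(b, c):
    """Rotate m_nu to the TB basis and return its diagonal (3c-b, 2b, 3c+b)."""
    D = U_TB.T @ m_nu(b, c) @ U_TB
    assert np.allclose(D - np.diag(np.diag(D)), 0), "off-diagonal entries must vanish"
    return np.diag(D)
```

For instance, `tb_eigenvalues(0.3, 1.1)` returns `(3.0, 0.6, 3.6)`, i.e. $(3c-b,\,2b,\,3c+b)$, for any choice of the two parameters.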
Moving to the quark sector, the idea is to use the doublet representations in order to describe masses and mixings. Since the Clebsch-Gordan coefficients are different with respect to those of the $T'$ model, we expect different predictions.
\subsection{The Lepton Sector}
\label{Sec:S4TBM:Leptons}
\setcounter{footnote}{3}
In this part we illustrate the model in the lepton sector, predicting an exact tribimaximal mixing at the leading order and a realistic charged lepton mass hierarchy, by the use of a flavour group $G_f$ in addition to the gauge group of the Standard Model. The complete flavour group is $G_f=S_4\times Z_5\times U(1)_{FN}$, where the three factors play different roles: the spontaneous breaking of $S_4$ down to its subgroup $Z_2\times Z_2$ in the neutrino sector is directly responsible for the tribimaximal mixing\footnote{This breaking is quite unusual; indeed the commonly preserved subgroup is $Z_2$. Here $Z_2\times Z_2$ provides the same flavour structure for the neutrino mass matrix as $Z_2$ does in the $A_4$-based models and it is associated with one element of the class $\mcal{C}_2$ and one of the class $\mcal{C}_4$.}; the $Z_5$ factor keeps the different sectors of the theory separated, quarks from leptons and, furthermore, neutrinos from charged leptons; moreover $Z_5$ plays a role similar to that of the baryon and total lepton numbers, forbidding some dangerous terms, and, together with the $U(1)_{FN}$, is responsible for the hierarchy among the charged fermion masses. In table \ref{table:S4lepton_transformation}, we show the lepton sector fields of the model and their transformation properties under $G_f$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c||c||c|c||c|c||c|}
\hline
&&&&&&&&&&& \\[-0,3cm]
& $\ell$ & $e^c$ & $\mu^c$ & $\tau^c$ & $H_{u,d}$ & $\theta$ & $\psi$ & $\eta$ & $\Upsilon$ & $\varphi$ & $\xi'$ \\
&&&&&&&&&&& \\[-0,3cm]
\hline
&&&&&&&&&&& \\[-0,3cm]
$S_4$ & $\bf3$ & $\bf1'$ & $\bf1'$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf3$ & $\bf2$ & $\bf3$ & $\bf2$ & $\bf1'$ \\
&&&&&&&&&&& \\[-0,3cm]
$Z_5$ & $\omega$ & $\omega^3$ & 1 & $\omega^2$ & 1 & 1 & $\omega^2$ & $\omega^2$ & $\omega^3$ & $\omega^3$ & 1 \\
&&&&&&&&&&& \\[-0,3cm]
$U(1)_{FN}$ & 0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{\it Transformation properties of the matter fields in the lepton sector and of all the flavons of the model. We distinguish the flavon fields by their role: $\psi$ and $\eta$ are mainly connected to the charged lepton sector, while $\Upsilon$ and $\varphi$ to the neutrino sector. All these fields, together with $\xi'$, are present in the quark sector.}
\label{table:S4lepton_transformation}
\end{table}
We treat the model in a supersymmetric scenario, because the minimisation of the scalar potential is simplified, but supersymmetry is not compulsory for the construction of the model itself. For this purpose an additional continuous $R$-symmetry is introduced, under which the fermion fields are singly charged while the scalars are uncharged.
The superpotential for leptons can be written as an expansion in inverse powers of the cutoff of the theory $\Lambda_f$:
\begin{eqnarray}
w_e&=&\sum_{i=1}^{4}\dfrac{\theta}{\Lambda_f}\dfrac{y_{e,i}}{\Lambda_f^3}e^c(\ell X_i)'H_d+\dfrac{y_\mu}{\Lambda_f^2}\mu^c(\ell\psi\eta)'H_d+\dfrac{y_\tau}{\Lambda_f}\tau^c(\ell\psi)H_d
\label{S4TBM:eq:wd:leptons}\\[0.3cm]
w_\nu&=&\dfrac{x_d}{\Lambda_f\Lambda_L}(\ell H_u\ell H_u\varphi)+\dfrac{x_t}{\Lambda_f\Lambda_L}(\ell H_u\ell H_u\Upsilon)
\label{S4TBM:eq:wd:neutrinos}
\end{eqnarray}
where
\begin{equation}
X=\left\{\psi\psi\eta,\;\psi\eta\eta,\;\Upsilon\Upsilon\xi',\;\Upsilon\varphi\xi'\right\}
\end{equation}
using $(\ldots)$ to refer to the contraction into $\bf1$ and $(\ldots)'$ to the contraction into $\bf1'$. We indicate as usual the scale of lepton number violation by the symbol $\Lambda_L$, which we assume to be of the same order as $\Lambda_f$. It is interesting to underline that the first contributions containing $e^c$ should be
\begin{equation}
\dfrac{\theta}{\Lambda_f}\dfrac{y'_{e,1}}{\Lambda_f^2}e^c(\ell\Upsilon\Upsilon)'H_d+\dfrac{\theta}{\Lambda_f}\dfrac{y'_{e,2}}{\Lambda_f^2}e^c(\ell\Upsilon\varphi)'H_d\;,
\end{equation}
which would dominate over the terms in eq. (\ref{S4TBM:eq:wd:leptons}). However, an explicit computation shows that these two terms vanish, once we assume that the flavons get the following specific VEV alignment:
\begin{equation}\begin{array}{rclrcl}
\mean{\psi}&=&(0,\,v_\psi,\,0)\;,&\qquad
\mean{\eta}&=&(0,\,v_\eta)\;,\\[3mm]
\mean{\Upsilon}&=&(v_\Upsilon,\,v_\Upsilon,\,v_\Upsilon)\;,&\qquad
\mean{\varphi}&=&(v_\varphi,\,v_\varphi)\;,\\[3mm]
\mean{\xi'}&=&v_{\xi'}\;,&\qquad
\mean{\theta}&=&v_\theta\;.
\label{S4TBM:vev:allleptons}
\end{array}\end{equation}
We will demonstrate in appendix \ref{AppB:S4} that this particular VEV alignment is a natural solution of the scalar potential; moreover we will see that all the VEVs are of the same order of magnitude and for this reason we parametrise the ratio $\mathrm{VEV}/\Lambda_f$ by the parameter $u$. The only VEV which originates through a different mechanism with respect to the others is $v_\theta$, and we indicate the ratio $v_\theta/\Lambda_f$ by the parameter $t$.\\
With this setting, the mass matrix for the charged leptons is
\begin{equation}
M_e=\left(
\begin{array}{ccc}
y_e^{(1)} u^2t & y_e^{(2)} u^2t & y_e^{(3)} u^2t \\
0 & y_\mu u & 0 \\
0 & 0 & y_\tau \\
\end{array}
\right)\dfrac{u\,v_d}{\sqrt2}
\end{equation}
where the $y_e^{(i)}$ contain all the different contributions $y_{e,i}$. This matrix is already in an almost diagonal form and therefore $U_e$ and $U_{e^c}$, its diagonalising unitary matrices, are the identity matrix, apart from negligible corrections of the order of $u\,t$:
\begin{equation}
U_{e^c}^\dag M_e U_e=\diag\left(y_e\, u^2\,t,\;y_\mu\, u,\;y_\tau\right)u\,v_d/\sqrt2\;.
\end{equation}
For the neutrinos we get the following mass matrix
\begin{equation}
m_\nu=\left(
\begin{array}{ccc}
2c & b-c & b-c \\
b-c & b+2c & -c \\
b-c & -c & b+2c \\
\end{array}
\right)\dfrac{v_u^2}{\Lambda_f}\;,
\label{S4TBM:massaNuS4}
\end{equation}
where $b=x_d\,v_\varphi/\Lambda_f$ and $c=x_t\,v_\Upsilon/\Lambda_f$. Notice that it is $\mu-\tau$ symmetric and presents the magic symmetry, which together assure that the matrix is of the TB type.
At this level of approximation, the PMNS matrix is given by
\begin{equation}
U\equiv U_e^\dag\, U_{TB} = U_{TB}\;.
\end{equation}
When we introduce the higher-order terms in the Lagrangian, we expect corrections to the tribimaximal mixing of relative order $u$ \cite{BMM_S4}. Comparing with the experimental values, the maximal deviation from the tribimaximal pattern is $0.05$ and therefore we can put an upper bound on $u$ of the same order of magnitude. For $u>0.05$ the model provides a $\theta_{12}$ angle which is not in agreement within $3\sigma$ with the experimental data in table \ref{table:OscillationData}. The mass hierarchy of the charged leptons is a consequence of the symmetry of the model and it is possible to further constrain the expansion parameters $u$ and $t$: indeed, looking at the mass of the $\tau$ we have
\begin{equation}
u \simeq\,\dfrac{\tan\beta}{|y_\tau|} \dfrac{\sqrt{2} m_\tau}{v} \approx 0.01 \dfrac{\tan\beta}{|y_\tau|}\;,
\end{equation}
where for the $\tau$ lepton we have used its pole mass $m_\tau=(1776.84 \pm 0.17) \;\rm{MeV}$ \cite{PDG08}. Requiring $|y_\tau|<3$ we find a lower bound for $u$ close to the upper bound $0.05$ for $\tan\beta=15$, whereas for $\tan\beta=2$ it is $u>0.007$. From the requirement that also $y_\mu$ remains in the perturbative regime, the lower bound on $u$ of $0.007$ is slightly raised and we fix it at $0.01$. From now on, we will take the maximal range of $u$ to be $0.01 \lesssim u \lesssim 0.05$, which shrinks when $\tan\beta$ is increased from $2$ to $15$. In order to explain the ratio $m_e/m_\mu$, we get a range of values for the parameter $t$ which is similar to that for $u$. Finally we can write
\begin{equation}
0.01 \lesssim u,\,t \lesssim 0.05\,.
\label{S4TBM:vev:uet}
\end{equation}
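The numerical bounds above follow from simple arithmetic; a minimal sketch (assuming $v\simeq246$ GeV for the electroweak VEV, and the approximate relation quoted in the text) reproduces them:

```python
import math

M_TAU = 1.77684  # tau pole mass in GeV
V_EW  = 246.0    # electroweak VEV in GeV (assumed)

# sqrt(2) m_tau / v ~ 0.010, the prefactor in u ~ 0.01 tan(beta)/|y_tau|
U_UNIT = math.sqrt(2) * M_TAU / V_EW

def u_from_tau(tan_beta, y_tau):
    """Estimate the expansion parameter u from the tau mass relation
       u ~ (tan beta / |y_tau|) * sqrt(2) m_tau / v."""
    return (tan_beta / abs(y_tau)) * U_UNIT

# tan(beta)=2,  |y_tau|=3  ->  u ~ 0.007 (lower edge quoted in the text)
# tan(beta)=15, |y_tau|=3  ->  u ~ 0.05  (upper edge)
```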
\subsubsection{Phenomenological Analysis}
In this section we perform a phenomenological analysis of the neutrino spectrum, along the same lines as section \ref{Sec:AFTBM:NuSpectrum}.
The neutrino mass matrix in eq. (\ref{S4TBM:massaNuS4}) is diagonalised by the tribimaximal mixing and the diagonal neutrino mass matrix is given by
\begin{equation}
U_\nu^T m_\nu U_\nu =\dfrac{v^2}{\Lambda_L}\diag(|3c-b|,\,|2b|,\,|3c+b|)\;,
\end{equation}
where $U_\nu=U_{TB}P$ and $P$ contains the Majorana phases $\alpha_1=-\arg(3c-b)$, $\alpha_2=-\arg(2b)$, $\alpha_3=-\arg(3c+b)$. We parametrise the mass eigenvalues in terms of $|b|=m_2/2$, $\rho$ and $\Delta$, where $\rho$ and $\Delta$ are defined as
\begin{equation}
\dfrac{c}{b}=\rho\,e^{i\Delta}\;,
\label{S4TBM:RhoDeltaDef}
\end{equation}
with $\Delta$ in the range $[0,\,2\pi]$. Imposing the constraint $|\cos\Delta|\leq1$, we find that the model can accommodate both mass orderings: taking the most conservative case, we have for the normal and the inverse hierarchies, respectively,
\begin{equation}
m_1>25.2\;\;\mathrm{meV}\;,\qquad\qquad m_3>0.68\;\;\mathrm{meV}\;.
\label{S4TBM:Boundm1}
\end{equation}
These values correspond to $\cos\Delta=\pm1$ and are the values for which the spectrum presents the strongest hierarchy; the masses of the other two neutrinos are then given by
\begin{equation}
\begin{array}{rclcrcl}
\text{NH:}\qquad\quad m_2&=&26.7\;\;\mathrm{meV}\qquad&\text{and}\qquad m_3&=&51.9\;\;\mathrm{meV}\;,\\[3mm]
\text{IH:}\qquad\quad m_1&=&52.3\;\;\mathrm{meV}\qquad&\text{and}\qquad m_2&=&53.0\;\;\mathrm{meV}\;.
\end{array}
\end{equation}
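These numbers follow from the lightest mass and the oscillation splittings. The sketch below assumes the illustrative fit values $\Delta m^2_{sol}=7.65\times10^{-5}\;\mathrm{eV}^2$ and $\Delta m^2_{atm}=2.06\times10^{-3}\;\mathrm{eV}^2$ (roughly the conservative edge of the range used in the text; a different fit shifts the numbers slightly):

```python
import math

DM2_SOL = 7.65e-5   # eV^2, illustrative solar splitting
DM2_ATM = 2.06e-3   # eV^2, illustrative atmospheric splitting (conservative edge)

def nh_spectrum(m1):
    """Normal hierarchy: (m1, m2, m3) in eV from the lightest mass m1."""
    m2 = math.sqrt(m1**2 + DM2_SOL)
    m3 = math.sqrt(m1**2 + DM2_ATM)
    return m1, m2, m3

# At the lower bound m1 = 25.2 meV:
m1, m2, m3 = nh_spectrum(25.2e-3)
# m2 ~ 26.7 meV, m3 ~ 51.9 meV, sum of the masses ~ 103.8 meV
```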
Furthermore the sum of the neutrino masses is about $103.8$ meV for the normal hierarchy and $106.0$ meV for the inverse hierarchy. When $\cos\Delta$ approaches zero, the neutrino spectrum becomes quasi degenerate.
It is also interesting to study the $0\nu2\beta$ parameter $|m_{ee}|$ as a function of the lightest neutrino mass:
\begin{equation}
|m_{ee}|=m_2\,\rho\;,
\end{equation}
and specifying the type of mass hierarchy we get
\beq\ba{rcl}
\text{NH:}\qquad\quad |m_{ee}| &=& \dfrac{1}{3}\sqrt{3m_1^2+2\Delta m^2_{atm}-\Delta m^2_{sol}}\;,\\[3mm]
\text{IH:}\qquad\quad |m_{ee}| &=& \dfrac{1}{3}\sqrt{3m_3^2+\Delta m^2_{atm}-2\Delta m^2_{sol}}\;.
\ea\eeq
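These expressions are easy to evaluate numerically. The following sketch (with the illustrative splittings $\Delta m^2_{sol}=7.65\times10^{-5}\;\mathrm{eV}^2$ and $\Delta m^2_{atm}=2.06\times10^{-3}\;\mathrm{eV}^2$; it is not a substitute for the full scan shown in the figure) reproduces the NH lower bound $|m_{ee}|\simeq 25.7$ meV at $m_1=25.2$ meV:

```python
import math

DM2_SOL = 7.65e-5   # eV^2, illustrative
DM2_ATM = 2.06e-3   # eV^2, illustrative

def mee_nh(m1):
    """|m_ee| in the normal hierarchy as a function of the lightest mass m1 (eV)."""
    return math.sqrt(3*m1**2 + 2*DM2_ATM - DM2_SOL) / 3

def mee_ih(m3):
    """|m_ee| in the inverse hierarchy as a function of the lightest mass m3 (eV)."""
    return math.sqrt(3*m3**2 + DM2_ATM - 2*DM2_SOL) / 3
```

With these inputs `mee_nh(25.2e-3)` gives about $25.7$ meV, matching the quoted NH bound; the IH value at $m_3=0.68$ meV depends more sensitively on the atmospheric splitting used in the fit.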
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7.8cm]{S40Nu2Be.pdf}
\includegraphics[width=7.8cm]{S4SS0Nu2Be.pdf}
\caption{\it $|m_{ee}|$ as a function of the lightest neutrino mass, $m_1$ in the case of the normal hierarchy and $m_3$ in the case of the inverse hierarchy. On the left the effective model, while on the right the type I See-Saw version. See figure \ref{fig:AF_0nu2beta} for the description of the plot.}
\label{fig:S4first}
\end{center}
\end{figure}
In fig. \ref{fig:S4first}, we plot $|m_{ee}|$ as a function of the lightest neutrino mass, $m_1$ in the normal hierarchy case and $m_3$ in the inverse hierarchy one. Looking at the dark red area, we can see that it falls in the quasi degenerate spectrum band; for this reason we speak about normal ordering rather than normal hierarchy. When $|\cos\Delta|=1$, $|m_{ee}|$ reaches the minima which are given by:
\begin{equation}
|m_{ee}|\geq25.7\;\mathrm{meV}\qquad\text{and}\qquad|m_{ee}|\geq15.5\;\mathrm{meV}\;,
\end{equation}
in the normal ordering and inverse hierarchy, respectively. We observe that, considering only the leading order approximation, the present model can be distinguished from the Altarelli-Feruglio model of section \ref{Sec:AFTBM} and the model based on $T'$ of section \ref{Sec:TpTBM} simply by looking at the lower bound on $|m_{ee}|$: those models predict a lower bound for $|m_{ee}|$ of about $5$ meV, while in the present proposal it is larger. The upcoming experiments should be able, in principle, to test the present model, since the lower bound is close to the future experimental sensitivities, which are expected to reach the values of $90$ meV \cite{gerda} (GERDA), $20$ meV \cite{majorana} (Majorana), $50$ meV \cite{supernemo} (SuperNEMO), $15$ meV \cite{cuore} (CUORE) and $24$ meV \cite{exo} (EXO).\\
These results are valid only at the leading order and some deviations are expected with the introduction of the higher-order terms, as illustrated in the following sections. The corrections are expected to be of relative order $u$ or $t$, whose allowed range is defined in eq. (\ref{S4TBM:vev:uet}). However, close to $\cos\Delta=-1$, where the bounds are saturated, the corrections remain of relative order $u$ or $t$ and as a result the lower bounds on $m_1$ and $m_{3}$ of eq. (\ref{S4TBM:Boundm1}) are not significantly affected. Major effects could appear when the spectrum is quasi degenerate, \mbox{$\cos\Delta\approx0$}.
\subsection{The Quark Sector}
\label{Sec:S4TBM:Quark}
\setcounter{footnote}{3}
In this part we illustrate the model in the quark sector, obtaining a good approximation of the experimental quark mixing matrix. In table \ref{table:S4quark_transformation}, we show the quark sector fields of the model and their transformation properties under $S_4\times Z_5\times U(1)_{FN}$.
The superpotential in the quark sector can be written as
\begin{equation}\begin{array}{rl}
w_q\;=&y_tt^cq_3H_u+\dfrac{y_b}{\Lambda_f}b^cq_3\xi'H_d+\\[0.2cm]
&+\sum_{i=1}^{2}\dfrac{y_{tc,i}}{\Lambda_f^2}t^c\left(D_qX^{(1)}_i\right)H_u+ \sum_{i=1}^{2}\dfrac{y_{bs,i}}{\Lambda_f^2}b^c\left(D_qX^{(1)}_i\right)'H_d+\\[0.3cm]
&+\sum_{i=1}^{6}\dfrac{y_{tu,i}}{\Lambda_f^3}t^c\left(D_qX^{(2)}_i\right)H_u+ \sum_{i=1}^{6}\dfrac{y_{bd,i}}{\Lambda_f^3}b^c\left(D_qX^{(2)}_i\right)'H_d+\\[0.3cm]
&+\sum_{i=1}^{2}\dfrac{y_{c,i}}{\Lambda_f^2}c^c\left(D_qX^{(1)}_i\right)'H_u+
\sum_{i=1}^{2}\dfrac{y_{s,i}}{\Lambda_f^2}s^c\left(D_qX^{(1)}_i\right)'H_d+\\[0.3cm]
&+\dfrac{y_{ct}}{\Lambda_f}c^cq_3\xi'H_u+
\sum_{i=1}^{3}\dfrac{y_{sb,i}}{\Lambda_f^2}s^cq_3\left(X^{(3)}_i\right)'H_d+\\[0.3cm]
&+\sum_{i=1}^{6}\dfrac{y_{cu,i}}{\Lambda_f^3}c^c\left(D_qX^{(2)}_i\right)'H_u+
\sum_{i=1}^{6}\dfrac{y_{sd,i}}{\Lambda_f^3}s^c\left(D_qX^{(2)}_i\right)'H_d+\\[0.3cm]
&+\sum_{i=1}^{2}\dfrac{y_{u,i}}{\Lambda_f^2}\dfrac{\theta^2}{\Lambda_f^2}u^c\left(D_qX^{(4)}_i\right)H_u+ \sum_{i=1}^{2}\dfrac{y_{d,i}}{\Lambda_f^2}\dfrac{\theta}{\Lambda_f}d^c\left(D_qX^{(4)}_i\right)H_d\\[0.3cm]
&+\sum_{i=1}^{4}\dfrac{y_{ut,i}}{\Lambda_f^3}\dfrac{\theta^2}{\Lambda_f^2}u^c\left(q_3X^{(5)}_i\right)H_u+ \sum_{i=1}^{4}\dfrac{y_{db,i}}{\Lambda_f^3}\dfrac{\theta}{\Lambda_f}d^c\left(q_3X^{(5)}_i\right)H_d
\end{array}\end{equation}
where
\beq\ba{rcl}
&&X^{(1)}=\left\{\eta\eta,\;\psi\psi\right\}\nn\\[0.3cm]
&&X^{(2)}=\left\{\eta\eta\xi',\;\psi\psi\xi',\;\Upsilon\Upsilon\Upsilon,\; \Upsilon\Upsilon\varphi,\;\Upsilon\varphi\varphi,\;\varphi\varphi\varphi\right\}\nn\\[0.3cm]
&&X^{(3)}=\left\{\psi\Upsilon,\;\eta\varphi,\;\xi'\xi'\right\}\nn\\[0.3cm]
&&X^{(4)}=\left\{\varphi\varphi,\;\Upsilon\Upsilon\right\}\nn\\[0.3cm]
&&X^{(5)}=\left\{\psi\psi\Upsilon,\;\psi\psi\varphi,\;\psi\eta\Upsilon,\;\eta\eta\varphi\right\}\nn\;.
\ea\eeq
Since the quantum numbers of $b^c$ and $s^c$ are exactly the same, there is no fundamental distinction between $b^c$ and $s^c$. We can therefore define $b^c$ as the field which couples to $q_3\xi'H_d$ in the superpotential $w_q$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c||c||c|c||c|c||c|}
\hline
&&&&&&&&&&&&&& \\[-0,3cm]
& $D_q$ & $q_3$ & $u^c$ & $d^c$ & $c^c$ & $s^c$ & $t^c$ & $b^c$ & $\theta$ & $\psi$ & $\eta$ & $\Upsilon$ & $\varphi$ & $\xi'$ \\
&&&&&&&&&&&&&& \\[-0,3cm]
\hline
&&&&&&&&&&&&&& \\[-0,3cm]
$S_4$ & $\bf2$ & $\bf1$ & $\bf1'$ & $\bf1'$ & $\bf1'$ & $\bf1'$ & $\bf1$ & $\bf1'$ & $\bf1$ & $\bf3$ & $\bf2$ & $\bf3$ & $\bf2$ & $\bf1'$ \\
&&&&&&&&&&&&&& \\[-0,3cm]
$Z_5$ & $\omega^4$ & $\omega^3$ & $1$ & $1$ & $\omega^2$ & $\omega^2$ & $\omega^2$ & $\omega^2$ & 1 & $\omega^2$ & $\omega^2$ & $\omega^3$ & $\omega^3$ & 1 \\
&&&&&&&&&&&&&& \\[-0,3cm]
$U(1)_{FN}$ & 0 & 0 & 2 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{\it Transformation properties of all the fields in the quark sector.}
\label{table:S4quark_transformation}
\end{table}
Notice that all the flavons are present in the quark sector and as a result the $S_4$ symmetry is completely broken there. When the flavour and the electroweak symmetries are broken, the mass matrices for the up- and down-quarks are
\begin{equation}
m_u=\left(
\begin{array}{ccc}
y_u u^2t^2 & y_u u^2t^2 & y_{ut}u^3t^2 \\
y_{cu} u^3 & y_cu^2 & y_{ct}u \\
y_{tu} u^3 & y_{tc}u^2 & y_t \\
\end{array}
\right)\dfrac{v_u}{\sqrt2}\;,\qquad
m_d=\left(
\begin{array}{ccc}
y_dut & y_dut & y_{db}u^2t \\
y_{sd}u^2 & y_su & y_{sb}u \\
y_{bd}u^2 & y_{bs}u & y_b \\
\end{array}
\right)\dfrac{u\,v_d}{\sqrt2}\;,
\end{equation}
where the Yukawas are the sums of all the different terms which appear in the superpotential.\\
These mass matrices can be diagonalised by the following transformations:
\begin{equation}
V_{u^c}^\dag m_u V_u=\diag(y_uu^2t^2,\;y_cu^2,\;y_t)\dfrac{v_u}{\sqrt2}\;,\qquad\qquad
V_{d^c}^\dag m_d V_d=\diag(y_dut,\;y_su,\;y_b)\dfrac{u\,v_d}{\sqrt2}
\end{equation}
where the unitary matrices can be written, in terms of orders of magnitude in $u$ and $t$, as
\begin{equation}\begin{array}{rlrl}
V_u=&\left(
\begin{array}{ccc}
1 & O(u) & O(u^3) \\
-O(u) & 1 & O(u^2) \\
-O(u^3) & -O(u^2) & 1 \\
\end{array}
\right)\;,&
V_d=&\left(
\begin{array}{ccc}
1 & O(u) & O(u^2) \\
-O(u) & 1 & O(u) \\
-O(u^2) & -O(u) & 1 \\
\end{array}
\right)\;,\\[0.8cm]
V_{u^c}=&\left(
\begin{array}{ccc}
1 & O(t^2) & O(u^3t^2) \\
-O(t^2) & 1 & O(u) \\
-O(ut^2) & -O(u) & 1 \\
\end{array}
\right)\;,\quad&
V_{d^c}=&\left(
\begin{array}{ccc}
1 & O(t) & O(u^2t) \\
-O(t) & 1 & O(u) \\
-O(ut) & -O(u) & 1 \\
\end{array}
\right)\;.
\end{array}\end{equation}
While the mass of the top quark is expected to be of the order of $v_u\approx\cO(100\;\mathrm{GeV})$, the mass of the bottom quark is suppressed with respect to $m_t$ by the factor $u\,v_d/v_u$, so that it is of the same order as $m_\tau$. The other measured quark masses can be accommodated thanks to the $Z_5$ and the $U(1)_{FN}$ suppressions. The resulting quark mixing matrix is
\begin{equation}
V\equiv V_u^\dag V_d\simeq\left(
\begin{array}{ccc}
1 & \left(\dfrac{y_{sd}}{y_s}-\dfrac{y_{cu}}{y_c}\right)u & \left(\dfrac{y_{bd}y_{c}-y_{bs}y_{cu}}{y_by_c}\right)u^2 \\[0.3cm]
-\left(\dfrac{y_{cu}}{y_c}-\dfrac{y_{sd}}{y_s}\right)u & 1 & \dfrac{y_{bs}}{y_b}u \\[0.3cm]
\left(\dfrac{y_{bs}y_{sd}-y_{bd}y_{s}}{y_by_s}\right)u^2 & -\dfrac{y_{bs}}{y_b}u & 1 \\
\end{array}
\right)\;.
\end{equation}
In order to fit the experimental values of the mixing angles we need to invoke a moderate fine-tuning of some parameters. The (23) entry of $V$ has to be of order $\lambda^2\simeq0.05$ and therefore suggests for $u$ a value close to its upper bound. However this is not a strict constraint, because this value can be accommodated for the entire range of $u$ once the Yukawas are taken into account. On the other hand, the (12) entry requires an accidental enhancement of the combination $\left(\dfrac{y_{sd}}{y_s}-\dfrac{y_{cu}}{y_c}\right)$ of order $1/\lambda\sim4$ in order to reproduce the correct Cabibbo angle. It is possible to explain such an enhancement considering particular values of the relative phase $\Delta_q$ between $\dfrac{y_{sd}}{y_s}$ and $\dfrac{y_{cu}}{y_c}$, which is connected to the CP violating phase: if $\Delta_q=\pi$, the two factors sum up and the required value is easily explained.
\subsection{Higher Order Corrections}
\label{Sec:S4TBM:NLO}
\setcounter{footnote}{3}
We now discuss the deviations from the leading order results. A detailed analysis is presented in the original paper \cite{BMM_S4}; here we summarise only the results.
Looking at the flavon sector, we find that the VEV alignment in eq. (\ref{S4TBM:vev:allleptons}) is stable under the NLO corrections and the deviations are of relative order $u$ with respect to the leading order results (see appendix \ref{AppB:S4}). This affects the fermion mass matrices, when these new VEVs are introduced in the superpotential $w$.
In the lepton sector, all the corrections from the higher-order terms introduce deviations to the lepton mixing matrix of relative order $u$ with respect to the leading order. This is in line with what happens in the Altarelli-Feruglio model, where the tribimaximal values of the atmospheric and the solar angles are corrected by $\cO(u)$ terms and the reactor angle deviates from zero by $\cO(u)$ contributions.
The analysis of the up and down quark mass matrices points out that the corrections coming from the NLO operators of the superpotential and from the deviations to the VEVs introduce new terms of relative order $u$ in each entry of the mass matrices. As a result the quark mixing angles receive deviations of relative order $u$ with respect to the initial values, which do not spoil the leading order results.
\subsection{See-Saw Extensions}
\label{Sec:S4TBM:SeeSaws}
\setcounter{footnote}{3}
In this section we study a possible origin of the Weinberg operator used in the previous sections to describe the smallness of neutrino masses. We focus only on the neutrino sector; indeed the quark and the charged lepton sectors are described by the same superpotential as in the effective model and the flavon content is not modified at all; in particular the VEV misalignment mechanism works exactly in the same way.
The simplest approach is the type I See-Saw mechanism, extending the matter content of the model by adding three right-handed neutrinos $\nu^c$ which transform as a triplet of $S_4$. Here we summarise the phenomenological results of the analysis presented in \cite{BMM_SS} and we discuss the differences with the effective model.
In the See-Saw model, it is possible to describe both neutrino mass hierarchies and it is interesting to find the allowed ranges for the lightest neutrino mass in both cases. For the most conservative case and for the normal and inverse hierarchies, respectively, we get
\begin{equation}
m_1>10.4\;\;\mathrm{meV}\;,\qquad\qquad m_3>25.9\;\;\mathrm{meV}\;.
\label{S4TBM:Boundm1SS}
\end{equation}
These values correspond to the condition for which the spectrum presents the strongest hierarchy; the masses of the other two neutrinos are then given by
\begin{equation}
\begin{array}{rclcrcl}
\text{NH:}\qquad\quad m_2&=&13.4\;\;\mathrm{meV}\qquad&\text{and}\qquad m_3&=&46.6\;\;\mathrm{meV}\;,\\[3mm]
\text{IH:}\qquad\quad m_1&=&51.5\;\;\mathrm{meV}\qquad&\text{and}\qquad m_2&=&52.3\;\;\mathrm{meV}\;.
\end{array}
\end{equation}
Furthermore the sum of the neutrino masses in this case is about $70.4$ meV for the normal hierarchy and $129.7$ meV for the inverse one. When $m_1$ and $m_3$ increase, the spectrum becomes degenerate.
The $0\nu2\beta$ parameter $|m_{ee}|$ is different with respect to the effective model:
\beq\ba{rcl}
\text{NH:}\qquad\quad |m_{ee}| &=& \dfrac{1}{3}\sqrt{-\left(m_1^2+\Delta m^2_{sol}\right)+2m_1^2\left(\dfrac{m_1^2+\Delta m^2_{sol}}{m_1^2+\Delta m^2_{atm}}+1\right)}\;,\\[5mm]
\text{IH:}\qquad\quad |m_{ee}| &=& \dfrac{1}{3m_3}\sqrt{3m_3^4+m_3^2\left(5\Delta m^2_{atm}-4\Delta m^2_{sol}\right)+2\Delta m^2_{atm}\left(\Delta m^2_{atm}-\Delta m^2_{sol}\right)}\;.\nn
\ea\eeq
In fig. \ref{fig:S4first}, we plot $|m_{ee}|$ as a function of the lightest neutrino mass, $m_1$ for the normal hierarchy and $m_3$ for the inverse hierarchy. It is interesting to note that, while the dark blue area referring to the inverse hierarchy falls well inside the $1\sigma$ error band, the dark red area goes out of it. This small discrepancy can easily be accounted for by a $1\sigma$ shift in the lepton mixing angles. Unfortunately, however, the future experimental sensitivities will not reach such small values and therefore the next experiments will not be able to test these deviations in the mixing angles as predicted by the model.
When $m_1$ and $m_3$ reach the values in eq. (\ref{S4TBM:Boundm1SS}), $|m_{ee}|$ reaches the minima which are given by:
\begin{equation}
|m_{ee}|\geq2.47\;\mathrm{meV}\qquad\text{and}\qquad|m_{ee}|\geq51.8\;\mathrm{meV}\;,
\end{equation}
in the normal and inverse hierarchies, respectively.
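The See-Saw expressions for $|m_{ee}|$ given above can also be evaluated directly. The sketch below uses the illustrative splittings $\Delta m^2_{sol}=7.65\times10^{-5}\;\mathrm{eV}^2$ and $\Delta m^2_{atm}=2.06\times10^{-3}\;\mathrm{eV}^2$, so the resulting minima agree with the quoted ones only approximately (the exact values depend on the fit used):

```python
import math

DM2_SOL = 7.65e-5   # eV^2, illustrative
DM2_ATM = 2.06e-3   # eV^2, illustrative

def mee_ss_nh(m1):
    """See-Saw |m_ee| for the normal hierarchy, lightest mass m1 in eV."""
    r = (m1**2 + DM2_SOL) / (m1**2 + DM2_ATM)
    return math.sqrt(-(m1**2 + DM2_SOL) + 2*m1**2*(r + 1)) / 3

def mee_ss_ih(m3):
    """See-Saw |m_ee| for the inverse hierarchy, lightest mass m3 in eV."""
    rad = (3*m3**4 + m3**2*(5*DM2_ATM - 4*DM2_SOL)
           + 2*DM2_ATM*(DM2_ATM - DM2_SOL))
    return math.sqrt(rad) / (3*m3)
```

At the boundary values $m_1=10.4$ meV and $m_3=25.9$ meV these give roughly $2.4$ meV and $52$ meV, to be compared with the quoted minima of $2.47$ meV and $51.8$ meV.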
The See-Saw model presents different features with respect to the effective model, as is easy to see from figure \ref{fig:S4first}. While in the See-Saw model the inverse hierarchy is almost entirely confined to the quasi degenerate spectrum area, in the effective model this happens to the normal hierarchy. Furthermore, the absence of any signal linked to lepton number violating processes in the next experiments would fix an experimental upper bound on $|m_{ee}|$ which would completely rule out the inverse hierarchy, allowing only the normal one.
Considering now the comparison with the Altarelli-Feruglio model or the model based on $T'$, and only the leading order terms, it seems possible, although difficult, to distinguish among the different realisations. However, the introduction of the higher-order corrections makes the predictions overlap in a large part of the parameter space, and it will be hard to test these small differences in the future.
It is interesting to note that the Altarelli-Feruglio and the $T'$ models present exactly the same behaviour in the quasi degenerate region, which is different from that of all the $S_4$ proposals: in the first case the profile of $|m_{ee}|$ is at the upper edge of the allowed region, while in the second case the profile follows approximately the central line. This behaviour is due to a constraint on the Majorana phases, which are determined as functions of the neutrino masses: in the $A_4$- and $T'$-based models the constraint forces the different terms in $|m_{ee}|$ to sum up, while in the $S_4$-based models it requires a partial cancellation (see \cite{BMM_SS} for details).
In \cite{BMM_SS} we have also analysed the other possibilities, namely the type II and type III See-Saw mechanisms. Here we report only the phenomenological conclusions of these approaches, referring to the original paper for the details.
Introducing a scalar triplet or three fermion triplets, as explained in section \ref{Sec:SM:SeeSaw}, it is possible to choose a suitable charge assignment in order to obtain the tribimaximal pattern as the lepton mixing matrix. Moreover, the phenomenology of these two alternatives is identical to that of the effective model and of the type I See-Saw: indeed the light neutrino mass matrix in the type II See-Saw coincides with that of the effective model, once the VEV of the triplet is identified with the VEV of $H_u$; the mass matrices in the type III See-Saw correspond to those of the type I, identifying $M_R$ with the mass matrix of the fermion triplets.
As a result, it is clear that it is not possible to discriminate among type I, II and III See-Saw models based on the $S_4$ symmetry group relying only on the neutrino phenomenology. New observables are needed, such as the mixing of the fermion triplets with the charged leptons or the decays of the scalar triplet.
\section{Conclusions of the Chapter}
\label{Sec:TBM:Conclusions}
\setcounter{footnote}{3}
In this chapter we focussed on three flavour models which reproduce a lepton mixing matrix of the tribimaximal type. The model based on the $A_4$ discrete symmetry is the simplest realisation in which the tribimaximal pattern naturally arises: its simplicity relies on the small dimension of the flavour group and on the small number of flavons which break the symmetry; its naturalness refers to the presence of only a moderate fine-tuning needed to accommodate the parameter $r\equiv \Delta m_{sol}^2/\Delta m_{atm}^2$, which translates into a small range of values for $\Delta$, the phase between the two parameters which describe the neutrino mass matrix. In the model, neutrino masses belong to a limited range and in particular there is always a lower bound on the lightest neutrino mass. Furthermore there is also a prediction for the neutrinoless-double-beta decay as a function of the neutrino masses.
The $A_4$-based model applies only to the lepton sector; indeed it fails when the description is extended to the quark sector. Two possible solutions are the larger groups $T'$ and $S_4$: the first is the double covering of $A_4$, while the latter contains $A_4$ as a subgroup. The $T'$ model reproduces exactly the same results as the $A_4$ model and describes quarks using the doublet representations, similarly to the $U(2)$-based models developed long ago. As a result, realistic mass hierarchies and mixings are achieved, introducing only a small fine-tuning of the parameters, of the order of $\lambda$, in order to reproduce the measured ratio $m_u/m_c$.
The $S_4$ discrete group has the same number of elements as $T'$, but different representations. This makes it possible to describe neutrinos with a different mass matrix, still diagonalised by the tribimaximal mixing, and leads to a completely new neutrino phenomenology. Considering only the leading order terms, it seems possible, even if difficult, to distinguish among the different realisations; unfortunately, the introduction of the higher-order corrections makes the predictions overlap in all the parameter space, apart from very small areas which will be difficult to test in the near future. In the quark sector, the mass hierarchies are well explained through the Froggatt-Nielsen mechanism, while the mixings require a small fine-tuning of the order of $\lambda$: the Cabibbo angle is described as the difference of two complex terms of order $\lambda^2$ and the fine-tuning corresponds to the requirement of having a coherent sum of the two terms.
In all these models, a fundamental ingredient is the vacuum misalignment of the flavons, which break the discrete group down to certain subgroups representing the effective low-energy flavour structures: it is this mechanism that assures the tribimaximal mixing for the neutrinos in the basis of diagonal charged leptons. Furthermore, the main discrete group is often extended by additional factors, typically $Z_n$, which help to keep each sector separated from the others, at least at leading order. In the quark sector the symmetry breaking can be complete. The introduction of the higher-order corrections has a deep impact on the neutrino phenomenology, introducing non-negligible deviations from the tribimaximal values of the mixing angles: as a result, the solar angle can reach its measured best-fit value, the atmospheric one can slightly deviate from the maximal value, and finally the reactor angle could be non-vanishing, although still very close to zero. We expect $\theta_{13}$ to be less than or approximately equal to $0.05$, not far from the reach of the future high-intensity neutrino beam facilities.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{A Flavour Model with the Bimaximal Mixing}
\label{Sec:FlavourModelsBM}
\setcounter{equation}{0}
\setcounter{footnote}{3}
In the previous sections we investigated a series of models which reproduce at leading order the tribimaximal pattern for the lepton mixing. When the next-to-leading order corrections are considered, all three mixing angles receive corrections of the same order of magnitude. This is a very general feature of tribimaximal models, in the absence of specific dynamical tricks (see \cite{Lin_LargeReactor} for a model in which such a trick is implemented). Since the experimentally allowed departures of $\theta_{12}$ from the tribimaximal value are small, at most of $\mathcal{O}(\lambda^2)$, it follows that both $\theta_{13}$ and the deviation of $\theta_{23}$ from the maximal value are expected in these models to also be at most of $\mathcal{O}(\lambda^2)$. A value of $\theta_{13} \sim \mathcal{O}(\lambda^2)$ is within the sensitivity of the experiments which are now in preparation and will take data in the near future. However, the present data do not exclude a larger value $\theta_{13} \sim \mathcal{O}(\lambda)$, as suggested in \cite{Fogli:Indication}. If it is found experimentally that $\theta_{13}$ is close to its present upper bound, this could be interpreted as an indication that the agreement with the tribimaximal mixing is accidental. Then a scheme where instead the bimaximal (BM) mixing is the correct first approximation, modified by terms of $\mathcal{O}(\lambda)$, could be relevant. This is in line with the well known empirical observation that $\theta_{12}+\lambda\sim \pi/4$, a relation known as ``quark-lepton complementarity'' \cite{SymmSO3King,Complementarity}, or similarly $\theta_{12}+\sqrt{m_\mu/m_\tau} \sim \pi/4$, since $\sqrt{m_\mu/m_\tau}\sim\lambda$. No compelling model leading, without parameter fixing, to the exact complementarity relation has been produced so far. 
Probably the exact complementarity relation is to be replaced with something like $\theta_{12}+\mathcal{O}(\lambda)\sim \pi/4$, which we could call ``weak'' complementarity.
In the following we construct a model based on the permutation group $S_4$ which naturally leads to the bimaximal mixing at the leading order. We adopt a supersymmetric formulation of the model in 4 space-time dimensions. The complete flavour group is $S_4\times Z_4 \times U(1)_{FN}$. In the lowest approximation, the charged leptons are diagonal and hierarchical and the light neutrino mass matrix, after See-Saw, leads to the exact bimaximal mixing. The model is built in such a way that the dominant corrections to the bimaximal mixing, from higher dimensional operators in the superpotential, only arise from the charged lepton sector at the NLO and naturally inherit $\lambda$ as the relevant expansion parameter. As a result the mixing angles deviate from the bimaximal values by terms of $\mathcal{O}(\lambda)$ (at most), and weak complementarity holds. A crucial feature of the model is that only $\theta_{12}$ and $\theta_{13}$ are corrected by terms of $\mathcal{O}(\lambda)$ while $\theta_{23}$ is unchanged at this order (which is essential to make the model agree with the present data).
\section{The Structure of the Model}
\label{Sec:AFM:Structure}
\setcounter{footnote}{3}
We discuss here the general properties of our model. We already introduced the $S_4$ discrete group in the previous chapter in order to build a tribimaximal model. That description, however, does not fit our purpose and we need to define a new set of generators in order to achieve the bimaximal mixing in a natural way: the two new generators $S$ and $T$ satisfy
\begin{equation}
T^4=S^2=(ST)^3=(TS)^3=1
\end{equation}
and their explicit forms in each of the irreducible representations and the Clebsch-Gordan coefficients in our basis are collected in the appendix \ref{AppA:S4BM}. Notice that the multiplication rules are the same as in the previous chapter.
This description is suitable in our case, because, by requiring invariance under the action of $S$
\begin{equation}
m_\nu= S m_\nu S\;,
\label{AFM:invS}
\end{equation}
the resulting effective neutrino mass matrix is given by
\begin{equation}
m_\nu=\left(
\begin{array}{ccc}
x & y & y \\
y & z & x-z \\
y & x-z & z \\
\end{array}
\right)\;,
\label{AFM:GeneralMassMatrix}
\end{equation}
which can be diagonalised by the bimaximal mixing, as stated in section \ref{Sec:FS:BM}.
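It is instructive to display this diagonalisation explicitly. In our convention for the bimaximal matrix (signs and overall phases may be redefined without affecting physical quantities), one has

```latex
U_{BM}=\left(
\begin{array}{ccc}
\frac{1}{\sqrt2} & -\frac{1}{\sqrt2} & 0 \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{\sqrt2} \\
\frac{1}{2} & \frac{1}{2} & \frac{1}{\sqrt2} \\
\end{array}
\right)\;,\qquad
U_{BM}^T\, m_\nu\, U_{BM}=\diag\left(x+\sqrt2\,y,\;x-\sqrt2\,y,\;2z-x\right)\;,
```

as can be verified by acting with $m_\nu$ on the columns of $U_{BM}$: for instance $m_\nu\,(\sqrt2,1,1)^T=(x+\sqrt2\,y)\,(\sqrt2,1,1)^T$. The corresponding mixing angles are $\theta_{12}=\theta_{23}=\pi/4$ and $\theta_{13}=0$.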
We formulate our model in the framework of the See-Saw mechanism (even though it would also be possible to build a version where light neutrino masses are directly described by the Weinberg operator). To this end we assign the three generations of left-handed lepton doublets $\ell$ and of right-handed neutrinos $\nu^c$ to two triplets $\bf3$, while the right-handed charged leptons $e^c$, $\mu^c$ and $\tau^c$ transform as $\bf1$, $\bf1'$ and $\bf1$, respectively. The $S_4$ symmetry is then broken by suitable triplet flavons. Additional symmetries are needed, in general, to prevent unwanted couplings and to obtain a natural hierarchy among $m_e$, $m_\mu$ and $m_\tau$. In our model, the complete flavour symmetry is $S_4\times Z_4\times U(1)_{FN}$. A flavon $\theta$, carrying a negative unit of the $U(1)_{FN}$ charge, acquires a VEV and breaks $U(1)_{FN}$. In view of a possible GUT extension of the model at a later stage, we adopt a supersymmetric context, so that two Higgs doublets $H_{u,d}$, invariant under $S_4$, are present in the model. The usual continuous $U(1)_R$ symmetry, related to $R$-parity and the presence of driving fields in the flavon superpotential, is implemented in the model. Supersymmetry also helps in producing and maintaining the hierarchy $\langle H_{u,d}\rangle=v_{u,d}\ll \Lambda_f$ where $\Lambda_f$ is the cutoff scale of the theory.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c||c||c|c|c|c||c|c|c|c|}
\hline
&&&&&&&&&&&&&&&\\[-0,3cm]
& $\ell$ & $e^c$ & $\mu^c$ & $\tau^c$ & $\nu^c$ & $H_{u,d}$ & $\theta$ & $\varphi_\ell$ & $\chi_\ell$ & $\psi_\ell^0$ & $\chi_\ell^0$ & $\xi_\nu$ &$\varphi_\nu$ & $\xi_\nu^0$ & $\varphi_\nu^0$ \\
&&&&&&&&&&&&&&&\\[-0,3cm]
\hline
&&&&&&&&&&&&&&&\\[-0,3cm]
$S_4$ & $\bf3$ & $\bf1$ & $\bf1^\prime$ & $\bf1$ & $\bf3$ & $\bf1$ & $\bf1$ & $\bf3$ & $\bf3^\prime$ & $\bf2$ & $\bf3'$ & $\bf1$ & $\bf3$ & $\bf1$ & $\bf3$ \\
&&&&&&&&&&&&&&&\\[-0,3cm]
$Z_4$ & 1 & -1 & -i & -i & 1 & 1 & 1 & i & i & -1 & -1 & 1 & 1 & 1 & 1 \\
&&&&&&&&&&&&&&&\\[-0,3cm]
$U(1)_{FN}$ & 0 & 2 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
&&&&&&&&&&&&&&&\\[-0,3cm]
$U(1)_R$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 2 & 2 & 0 & 0 & 2 & 2 \\
\hline
\end{tabular}
\end{center}
\caption{\it Transformation properties of all the fields.}
\label{table:TransformationsS4BM}
\end{table}
The fields in the model and their classification under the symmetry are summarised in Table \ref{table:TransformationsS4BM}. The complete superpotential can be written as
\begin{equation}
w=w_e+w_\nu+w_d\;,
\end{equation}
where $w_d$ is responsible for the flavon VEV alignment as discussed in appendix \ref{AppB:S42}, while $w_e$ and $w_\nu$ refer to the charged lepton and neutrino sectors and can be written as
\begin{equation}
\begin{array}{l}
\begin{split}
w_e\;=&\;\frac{y_e^{(1)}}{\Lambda_f^2}\frac{\theta^2}{\Lambda_f^2}e^c(\ell\varphi_\ell\varphi_\ell)H_d+ \frac{y_e^{(2)}}{\Lambda_f^2}\frac{\theta^2}{\Lambda_f^2}e^c(\ell\chi_\ell\chi_\ell)H_d+ \frac{y_e^{(3)}}{\Lambda_f^2}\frac{\theta^2}{\Lambda_f^2}e^c(\ell\varphi_\ell\chi_\ell)H_d+\\
&+\frac{y_\mu}{\Lambda_f}\frac{\theta}{\Lambda_f}\mu^c(\ell\chi_\ell)^\prime H_d+\frac{y_\tau}{\Lambda_f}\tau^c(\ell\varphi_\ell)H_d+\dots
\end{split}
\label{AFM:wl}\\[1mm]
w_\nu\;=\;y(\nu^c\ell)H_u+M \Lambda_f (\nu^c\nu^c)+a(\nu^c\nu^c\xi_\nu)+b(\nu^c\nu^c\varphi_\nu)+\dots
\end{array}
\end{equation}
where $(\ldots)$ denotes the singlet $\bf{1}$, $(\ldots)^\prime$ the singlet ${\bf1^\prime}$ and $(\ldots)_R$ the representation R ($R={\bf2},\,{\bf3},\,{\bf3'}$). Note that the parameter $M$ defined above is dimensionless. In the above expression for the superpotential $w$, only the lowest order operators in an expansion in powers of $1/\Lambda_f$ are explicitly shown. Dots stand for higher
dimensional operators that will be discussed later on. The stated symmetries ensure that, for the leading terms, the flavons that appear in $w_e$ cannot contribute to $w_\nu$ and vice versa.
We will show in appendix \ref{AppB:S42} that the potential corresponding to $w_d$ possesses an isolated minimum for the following VEV configuration:
\begin{eqnarray}
&\dfrac{\mean{\varphi_\ell}}{\Lambda_f}=\left(
\begin{array}{c}
0 \\
1 \\
0 \\
\end{array}
\right)A\;,\qquad\qquad
&\dfrac{\mean{\chi_\ell}}{\Lambda_f}=\left(
\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right)B\;,
\label{AFM:vev:charged:best}\\
&\dfrac{\mean{\varphi_\nu}}{\Lambda_f}=\left(
\begin{array}{c}
0 \\
1 \\
-1 \\
\end{array}
\right)C\;,\qquad\quad
&\dfrac{\mean{\xi_\nu}}{\Lambda_f}=D\;,
\label{AFM:vev:neutrinos}
\end{eqnarray}
where the factors $A$, $B$, $C$, $D$ must obey the relations:
\begin{eqnarray}
&\sqrt{3}f_1A^2+\sqrt{3}f_2B^2+f_3AB=0\;,
\label{AFM:AB}\\[1mm]
&D=-\dfrac{M_\varphi}{g_2}\;,\qquad\qquad
C^2=\dfrac{g_2^2M_\xi^2+g_3M_\varphi^2-g_2M_\varphi M'_\xi}{2 g_2^2g_4}
\label{AFM:CD}\;.
\end{eqnarray}
In the discrete component $S_4\times Z_4$ of the full flavour group we can choose generators $(S,T,i)$,
where the imaginary unit $i$ denotes the generator of the $Z_4$ factor.
The flavons $\xi_\nu$ and $\varphi_\nu$ are invariant under $Z_4$ and their VEVs are eigenvectors of the generator $S$ corresponding to the eigenvalue 1,
so that the corresponding breaking of $S_4\times Z_4$ preserves the subgroup $G_{\nu}$ generated by $(S,i)$.
In the charged lepton sector $S_4\times Z_4$ is broken down to the subgroup $G_\ell$, generated by the product $i T$.
Indeed the generator $iT$ is given by $\pm\diag(-i ,1 ,-1)$, with the plus (minus) sign for the $\bf3$ $({\bf3'})$ $S_4$ representation.
Such a generator, acting in the appropriate representation on the VEVs of eq. (\ref{AFM:vev:charged:best}), leaves them invariant.
It is precisely the mismatch, present at the leading order, between the subgroups $G_{\nu}$ and $G_\ell$
preserved in the neutrino and charged lepton sectors, respectively, that produces the bimaximal lepton mixing, as we will explicitly see in this section.
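The invariance of the charged lepton VEVs under $G_\ell$ can be checked at a glance (using the explicit form of $iT$ quoted above, in the basis of appendix \ref{AppA:S4BM}):

```latex
(iT)_{\bf3}\,\mean{\varphi_\ell}=\diag(-i,1,-1)\left(
\begin{array}{c}
0 \\
A\Lambda_f \\
0 \\
\end{array}
\right)=\mean{\varphi_\ell}\;,\qquad
(iT)_{\bf3'}\,\mean{\chi_\ell}=-\diag(-i,1,-1)\left(
\begin{array}{c}
0 \\
0 \\
B\Lambda_f \\
\end{array}
\right)=\mean{\chi_\ell}\;,
```

which realises explicitly the claimed residual invariance in the charged lepton sector.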
Similarly, the Froggatt-Nielsen flavon $\theta$ gets a VEV, determined by the $D$-term associated to the local $U(1)_{FN}$ symmetry (as in the previous models), and it is denoted by $\mean{\theta}/\Lambda_f= t$.
With this VEV configuration, the charged lepton mass matrix is diagonal
\begin{equation}
M_e=\left(
\begin{array}{ccc}
(y_e^{(1)}A^2-y_e^{(2)}B^2+y_e^{(3)}AB)t^2 & 0 & 0 \\
0 & y_\mu Bt & 0 \\
0 & 0 & y_\tau A \\
\end{array}
\right)\dfrac{v_d}{\sqrt2}
\end{equation}
so that at the leading order there is no contribution to the lepton mixing matrix from the diagonalisation of charged lepton masses.
In the neutrino sector for the Dirac and right-handed Majorana matrices we have
\begin{equation}
m_D=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}
\right)\dfrac{y\,v_u}{\sqrt2}\;,\qquad\quad
M_R=\left(
\begin{array}{ccc}
2M+2aD & -2bC & -2bC \\
-2bC & 0 & 2M+2aD \\
-2bC & 2M+2aD & 0 \\
\end{array}
\right)\Lambda_f\;.
\label{AFM:Feq:RHnu:masses}
\end{equation}
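As a consistency check, $M_R$ has exactly the $S$-invariant structure of eq. (\ref{AFM:GeneralMassMatrix}), with the identification (in units of $\Lambda_f$)

```latex
x=2(M+aD)\;,\qquad y=-2bC\;,\qquad z=0\;,
```

so that $S\,M_R\,S=M_R$, in complete analogy with eq. (\ref{AFM:invS}).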
The matrix $M_R$ can be diagonalised by the bimaximal mixing matrix $U_{BM}$, which represents the full lepton mixing at the leading order, while the Majorana phases are absorbed by the introduction of the diagonal matrix $P$, defined as
\begin{equation}
P=\diag(e^{i\alpha_1/2},\,e^{i\alpha_2/2},\,e^{i\alpha_3/2})\;,
\label{AFM:Pmatrix}
\end{equation}
with $\alpha_1=-\arg(M+aD-\sqrt{2}bC)$, $\alpha_2=-\arg(M+aD+\sqrt{2}bC)$, $\alpha_3=-\arg(M+aD)$. As a result, defining $U_R=U_{BM}P$, the eigenvalues are given by
\begin{equation}
U_R^T M_R U_R\,=\,\diag(M_1,\,M_2,\,M_3)\qquad\text{with}\quad
\left\{\begin{array}{rcl}
M_1 &=& 2|M+aD-\sqrt{2}bC|\Lambda_f\;, \\[3mm]
M_2 &=& 2|M+aD+\sqrt{2}bC|\Lambda_f\;, \\[3mm]
M_3 &=& 2|M+aD|\Lambda_f\;.
\end{array}\right.
\end{equation}
After See-Saw, since the Dirac neutrino mass matrix commutes with $M_R$ and its square is a matrix proportional to unity,
the light neutrino Majorana mass matrix, given by the See-Saw relation \mbox{$m_\nu=-m_D^TM_R^{-1}m_D$}, is also diagonalised by the bimaximal mixing matrix and, defining $U_\nu=U_{BM}P^*$, the eigenvalues are
\begin{equation}
U_\nu^T m_\nu U_\nu\,=\,\diag(m_1,\,m_2,\,m_3)\qquad\text{with}\quad m_i\equiv\dfrac{|y^2|v_u^2}{2M_i}\;.
\label{AFM:spec}
\end{equation}
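The commutation argument used above can be made explicit in one line. Writing $m_D=(y\,v_u/\sqrt2)\,P_{23}$, where $P_{23}$ denotes the permutation matrix exchanging the second and third components, we have

```latex
m_\nu=-m_D^T M_R^{-1} m_D=-\dfrac{y^2 v_u^2}{2}\,P_{23}\,M_R^{-1}\,P_{23}=-\dfrac{y^2 v_u^2}{2}\,M_R^{-1}\;,
```

since $P_{23}\,M_R\,P_{23}=M_R$, as is evident from the texture in eq. (\ref{AFM:Feq:RHnu:masses}), and the same then holds for $M_R^{-1}$; hence $m_\nu$ and $M_R$ share the same bimaximal diagonalisation, with $m_i\propto 1/M_i$.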
Notice that since the neutrino sector is not charged under the $Z_4$ symmetry, we have operators of dimension 5 which contribute to the neutrino masses and may correspond to some heavy exchange other than the right-handed neutrinos $\nu^c$. The contribution from these terms is
\begin{equation}
m_\nu^{eff}=\left(
\begin{array}{ccc}
2M'+2a'D & -2b'C & -2b'C \\
-2b'C & 0 & 2M'+2a'D \\
-2b'C & 2M'+2a'D & 0 \\
\end{array}
\right)\dfrac{v_u^2}{2\Lambda_f}\;,
\label{AFM:meff}
\end{equation}
where the choice of the labels $M'$, $a'$ and $b'$ is not accidental but reflects the fact that $m_\nu^{eff}$ is similar to $M_R$. When considering the interesting domain of parameters, we find that this effective contribution is subdominant and for this reason we will discuss its effects when dealing with the NLO terms.
At the leading order, the light neutrino mass matrix depends on only two effective parameters; indeed, the terms $M$ and $aD$ enter the mass matrix only through the combination $F\equiv M+a D$. The coefficients $y_e^{(i)}$, $y_\mu$, $y_\tau$, $y$, $a$ and $b$ are all expected to be of $\mathcal{O}(1)$. A priori $M$ could be of $\mathcal{O}(1)$, corresponding to a right-handed neutrino Majorana mass of $\mathcal{O}(\Lambda_f)$, but, actually, we will see that it must be of the same order as $C$ and $D$.
We expect a common order of magnitude for the VEVs (scaled by the cutoff $\Lambda_f$):
\begin{equation}
A \sim B \sim v\;,\qquad \qquad C \sim D \sim v'\;.
\end{equation}
However, due to the different minimisation conditions that determine $(A,B)$ and $(C,D)$, we may tolerate a moderate hierarchy
between $v$ and $v'$. Similarly the order of magnitude of $t$ is in principle unrelated to those of $v$ and $v'$.
It is possible to estimate the values of $v$ and $t$ by looking at the mass ratios of charged leptons:
\begin{equation}
\dfrac{m_\mu}{m_\tau} \sim t\;, \qquad\qquad \dfrac{m_e}{m_\mu} \sim vt\;.
\end{equation}
In order to fit these relations with the data, we must have approximately $t \sim 0.06$ and $v \sim 0.08$ (modulo coefficients of $\mathcal{O}(1)$).\\
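Numerically, using $m_e\simeq 0.511$ MeV, $m_\mu\simeq 105.7$ MeV and $m_\tau\simeq 1776.8$ MeV (running effects only affect the $\mathcal{O}(1)$ coefficients), the estimate reads

```latex
t\sim\dfrac{m_\mu}{m_\tau}\simeq 0.06\;,\qquad\qquad
v\sim\dfrac{1}{t}\,\dfrac{m_e}{m_\mu}\simeq\dfrac{0.005}{0.06}\simeq 0.08\;.
```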
To summarise, at the leading order we have diagonal and hierarchical charged leptons together with the exact bimaximal mixing for neutrinos. It is clear that substantial NLO corrections are needed to bring the model to agree with the data on $\theta_{12}$. A crucial feature of our model is that the neutrino sector flavons $\varphi_\nu$ and $\xi_\nu$ are invariant under $Z_4$, which is not the case for the charged lepton sector flavons $\varphi_\ell$ and $\chi_\ell$. The consequence is that $\varphi_\nu$ and $\xi_\nu$ can contribute at the NLO to the corrections in the charged lepton sector, while at the NLO $\varphi_\ell$ and $\chi_\ell$ cannot modify the neutrino sector couplings. As a result the dominant genuine corrections to the bimaximal mixing only occur at the NLO through the diagonalisation of the charged leptons. We will discuss the NLO corrections in section \ref{Sec:AFM:NLO} after having proven that the necessary VEV alignment is in fact realised at the leading order.
\subsection{The Light Neutrino Spectrum and the Value of $r$}
\label{Sec:AFM:LightSpectrum}
\setcounter{footnote}{3}
We now discuss the constraints on the parameters of the neutrino mass matrix in order to get the correct value for the ratio $r$. As for the previous models, also in this case some fine-tuning is needed to accommodate the value of $r$. In fact the triplet assignment for left-handed lepton doublets and for right-handed neutrinos tends to lead to $r\sim1$. We find it useful to begin the presentation by analysing the leading order terms, even though a more complete phenomenological discussion with the inclusion of the NLO contributions will be illustrated in section \ref{Sec:AFM:Phem}.
We redefine the parameters in eqs. (\ref{AFM:spec}) as $F=M+aD$ and $Y=-\sqrt{2}bC$ so that $Y\sim v'$, while $F$, like $M$, a priori could be larger, of $\mathcal{O}(1)$. We make the phases of $F$ and $Y$ explicit by setting
\begin{equation}
F\rightarrow Fe^{i\phi_F}\qquad\qquad Y\rightarrow Ye^{i\phi_Y}\;,
\end{equation}
where now $F$ and $Y$ are real and positive parameters. Defining the phase difference $\Delta\equiv\phi_Y-\phi_F$ we can explicitly write the absolute values of the neutrino masses as
\begin{eqnarray}
m_1&=&\frac{1}{\sqrt{F^2+Y^2+2FY\cos{\Delta}}}\frac{|y^2|v_u^2}{4\Lambda_f}\nn \\[3mm]
m_2&=&\frac{1}{\sqrt{F^2+Y^2-2FY\cos{\Delta}}}\frac{|y^2|v_u^2}{4\Lambda_f}
\label{AFM:nuMasses}\\[3mm]
m_3&=&\frac{1}{F}\frac{|y^2|v_u^2}{4\Lambda_f}\nn\;.
\end{eqnarray}
Note that the phase $\Delta$ is related by a non-trivial relation to the Majorana CP phase $2 \alpha_{12}$, which, by definition, is
the phase difference between the complex eigenvalues $m_1$ and $m_2$ of the neutrino mass matrix.
Furthermore $\cos\Delta$ must be positive in order to guarantee $m_2>m_1$. By defining $F/Y\equiv\rho$, we can write the expression of the ratio $r$:
\begin{equation}
r=\dfrac{4\rho^3\cos{\Delta}}{(\rho^2+1+2\rho\cos{\Delta})(1-2\rho\cos{\Delta})}\;.
\label{AFM:eq:rf}
\end{equation}
In order to have $r$ small we must take either $\rho$ small or $\cos{\Delta}$ small (or both). If $F\sim \mathcal{O}(1)$ then $\rho\sim \mathcal{O}(1/v')$, $r\sim 4\rho\cos{\Delta}$ and $\cos{\Delta}$ must be extremely small: $\cos{\Delta}\sim v' r/4 \sim 10^{-3}$. We prefer to take $\rho$ small, such that
\begin{equation}
r\sim4\rho^3\cos\Delta\;.
\end{equation}
If so, in order to accommodate the value of $r$, we only need, for example, $4\cos\Delta\sim1$ and $\rho\sim 1/3$. In conclusion, we have to take $F\sim M\sim \mathcal{O}(v')$ and $\rho=F/Y$ moderately small.
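For instance, with the representative choice $\rho=1/3$ and $\cos\Delta=1/4$, the exact expression in eq. (\ref{AFM:eq:rf}) gives

```latex
r=\dfrac{4\,(1/3)^3\,(1/4)}{\left(\dfrac19+1+\dfrac16\right)\left(1-\dfrac16\right)}\simeq\dfrac{0.037}{1.06}\simeq 0.035\;,
```

in the ballpark of the measured value $r\approx 0.03$.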
We interpret the relation $M\sim F \sim v'$, necessary to reproduce the value of $r$, as related to the fact that the right-handed neutrino Majorana mass $M$ must empirically be smaller than the cutoff $\Lambda_f$ of the theory. In the context of a grand unified theory this corresponds to the requirement that $M$ is of $\mathcal{O}(M_{GUT})$ rather than of $\mathcal{O}(M_{Planck})$.
With these choices, the neutrino spectrum shows a moderate normal hierarchy, with
\begin{equation}
m_{1,2}\sim\dfrac{1}{Y}\;\sim\;\mathcal{O}\left(\frac{1}{v'}\right)\;,\qquad\qquad
m_3\sim\mathcal{O}\left(\dfrac{1}{\rho Y}\right)\;,
\end{equation}
in units of $|y^2|v_u^2/4\Lambda_f$. At the leading order an inverse ordering of the neutrino masses is forbidden, as
we can see from eq. (\ref{AFM:nuMasses}), which, for $m_2>m_1$, always implies $m_3>m_1$.\\
At the leading order the lightest neutrino mass $m_1$ has a lower bound. Indeed, the only possible way to decrease $m_1$
is to take $Y$ as large as possible. By expanding eqs. (\ref{AFM:nuMasses}) in powers of $\rho$, we have
\begin{equation}
m_1\approx \rho m_3\ge \rho \sqrt{\Delta m^2_{atm}}\ge 10\, {\rm meV}\;.
\end{equation}
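This estimate can be made quantitative (a sketch using only the leading order relations). From eqs. (\ref{AFM:nuMasses}),

```latex
\dfrac{m_1}{m_3}=\dfrac{\rho}{\sqrt{1+\rho^2+2\rho\cos\Delta}}=\rho\left[1-\rho\cos\Delta+\mathcal{O}(\rho^2)\right]\;,
```

while, since $\cos\Delta\le 1$, eq. (\ref{AFM:eq:rf}) implies $\rho\gtrsim (r/4)^{1/3}\approx 0.2$; combined with $m_3\ge\sqrt{\Delta m^2_{atm}}\approx 50$ meV, this reproduces the quoted lower bound $m_1\gtrsim 10$ meV.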
As we will see in section \ref{Sec:AFM:Phem}, this lower bound can be evaded by including NLO corrections, but values of $m_1$ much smaller than $10$ meV would require an additional tuning of the parameters.
\section{The Next-To-Leading Order Corrections}
\label{Sec:AFM:NLO}
\setcounter{footnote}{3}
We now summarise the set of subleading corrections to the superpotential that are essential to bring the model in agreement with the data. The detailed analysis can be found in the original paper \cite{AFM_BimaxS4}.
Classifying the corrections according to an expansion in inverse powers of $\Lambda_f$, the new flavon VEV configuration can be written as
\begin{equation}
\langle \Phi \rangle=\langle \Phi \rangle_{LO}+\delta \Phi
\end{equation}
where $\Phi=(\xi_\nu,~\varphi_\nu, ~\varphi_\ell, ~\chi_\ell)$ and $\langle \Phi \rangle_{LO}$ are given by eqs. (\ref{AFM:vev:charged:best}) and (\ref{AFM:vev:neutrinos}). As illustrated in the appendix \ref{AppB:S42}, in the neutrino sector the shifts $\delta \xi_\nu,~\delta \varphi_\nu$ turn out to be proportional to the leading order VEVs $\langle \Phi \rangle_{LO}$ and can be absorbed in a redefinition of the parameters $C$ and $D$. Instead, in the charged lepton sector, the shifts $\delta \varphi_\ell, ~\delta \chi_\ell$ have a non trivial structure, so that the leading order texture is modified:
\begin{equation}
\mean{\varphi_\ell}=\left(
\begin{array}{c}
{\delta \varphi_\ell}_1\\
A' \Lambda_f \\
0\\
\end{array}
\right)\qquad
\qquad\mean{\chi_\ell}=\left(
\begin{array}{c}
{\delta \chi_\ell}_1 \\
0 \\
B' \Lambda_f \\
\end{array}
\right)
\label{AFM:vev:charged:nlo}
\end{equation}
where $A'$ and $B'$ satisfy a relation similar to that in eq. (\ref{AFM:AB}) and the shifts ${\delta \varphi_\ell}_1$ and ${\delta \chi_\ell}_1$ are proportional to $v'v\Lambda_f$; in other words, they are suppressed by a factor $v'$ with respect to the leading order entries $A\Lambda_f$ and $B\Lambda_f$, respectively.
The fermion mass matrices receive corrections due to higher-order operators in the respective superpotential and due to the new VEV configurations.
The NLO operators contributing to the lepton masses can be obtained by inserting in all possible ways $\xi_\nu$ or $\varphi_\nu$ in the leading order operators and by extracting the $S_4\times Z_4\times U(1)_{FN}$ invariants. Insertions of one power of the flavons $\varphi_\ell$ or $\chi_\ell$
are forbidden by the invariance under the $Z_4$ component of the flavour symmetry group. We find that the corrected mass matrix for the charged leptons at this order has the (23) and (32) elements still vanishing. By omitting all order one coefficients, the charged lepton mass matrix and the unitary matrix that realises the transformation to the physical basis, where the product $(M_e^\dag M_e)$ is diagonal at the NLO, are given by
\begin{equation}
M_e=\left(
\begin{array}{ccc}
vt^2 & vv't^2 & vv't^2 \\
v't & t & 0 \\
v' & 0 & 1 \\
\end{array}
\right)\dfrac{v\,v_d}{\sqrt2}\;,\qquad
U_e=\left(
\begin{array}{ccc}
1 & V_{12} v' & V_{13} v' \\
-V_{12} v' & 1 & 0 \\
-V_{13} v' & 0 & 1 \\
\end{array}
\right)\;,
\end{equation}
where the coefficients $V_{ij}$ are of $\mathcal{O}(1)$.
Moving to the neutrino sector, we have already stated that the structure of the leading order VEVs of the neutrino flavons is unchanged and therefore the only corrections come from the higher-order terms in $w_\nu$. However, it is straightforward to verify that, even after the inclusion of the NLO corrections, both $M_R$ and $m_\nu$ can be exactly diagonalised by the bimaximal mixing, which therefore represents the total contribution to lepton mixing of the neutrino sector. These corrections also introduce terms in the mass eigenvalues of relative order $v'$ with respect to the leading order results, but they do not affect the type of spectrum.
Since the neutrino mass matrix is diagonalised by $U_{BM}$, the PMNS matrix can be written as
\begin{equation}
U=U_e^\dag U_{BM}P^{'*}\;,
\end{equation}
where $P'$ is the diagonal matrix of the Majorana phases which differs from the original $P$ only due to the NLO contributions. The corrections from $U_e$ affect the neutrino mixing angles at the NLO according to
\begin{equation}
\sin^2\theta_{12}=\dfrac{1}{2}-\frac{1}{\sqrt{2}}(V_{12}+V_{13})v'\;,\qquad
\sin^2\theta_{23}=\dfrac{1}{2}\;,\qquad
\sin\theta_{13}=\dfrac{1}{\sqrt{2}}(V_{12}-V_{13})v'\;.
\label{AFM:sinNLO}
\end{equation}
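These results follow directly from the first row of $U=U_e^\dag U_{BM}$. For instance, in the convention in which $(U_{BM})_{13}=0$ and $(U_{BM})_{23}=-(U_{BM})_{33}=-1/\sqrt2$, and taking the $V_{ij}$ real for simplicity, one finds to first order in $v'$

```latex
U_{13}=-V_{12}\,v'\,(U_{BM})_{23}-V_{13}\,v'\,(U_{BM})_{33}=\dfrac{V_{12}-V_{13}}{\sqrt2}\,v'\;,
```

while the second and third rows of $U_e^\dag$ do not mix the $(23)$ block at $\mathcal{O}(v')$, so that $\theta_{23}$ stays maximal; the signs of the $\mathcal{O}(v')$ shifts depend on the phase conventions adopted for $U_{BM}$ and on the unknown order-one $V_{ij}$.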
By comparing these expressions with the current experimental values of the mixing angles in table \ref{table:OscillationData}, we see that, to correctly reproduce $\theta_{12}$, we need a parameter $v'$ of the order of the Cabibbo angle $\lambda$. Moreover, barring cancellations among the $V_{ij}$ coefficients, the reactor angle is also corrected by a similar amount.
\begin{figure}[h!]
\centering
{\includegraphics[width=11cm]{S12S13.jpg}}
\caption{\it $\sin^2\theta_{13}$ as a function of $\sin^2\theta_{12}$, following eqs. (\ref{AFM:sinNLO}). The plot is symmetric with respect to $\sin^2\theta_{12}=0.5$ and we report here only the left part. The parameters $V_{ij}$ are treated as random complex numbers of absolute value between 0 and 2, while $|v'|$ has been fixed at the indicative value of 0.15. The gray bands represent the regions excluded by the experimental data \cite{Fogli:Indication}: the horizontal one corresponds to the $3\sigma$ upper bound $\sin^2\theta_{13}<0.046$ and the vertical ones to the region outside the $3\sigma$ error range $[0.26 - 0.37]$ for $\sin^2\theta_{12}$.}
\label{fig:AFMfigure1213}
\end{figure}
Any quantitative estimate is clearly affected by large uncertainties due to the presence of unknown parameters of order one, as we can see in figure \ref{fig:AFMfigure1213}, but in our model a value of $\theta_{13}$ much smaller than the present upper bound would be unnatural.
This discussion holds at the NLO; at the NNLO we expect the value of $\theta_{23}$ to be modified as well, with deviations of at most $\mathcal{O}(\lambda^2)$. The next generation of experiments, in particular those exploiting a high intensity neutrino beam, will probably reduce the experimental error on $\theta_{23}$ and push the sensitivity on $\theta_{13}$ down to a few degrees. If no significant deviation of $\theta_{13}$ from zero is detected, our construction will be ruled out.
A salient feature of our model is that, at the NLO accuracy, the large corrections of $\mathcal{O}(\lambda)$ only apply to $\theta_{12}$ and $\theta_{13}$, while $\theta_{23}$ is unchanged at this order. Since a correction of $\mathcal{O}(\lambda)$ to $\theta_{23}$ is hardly compatible with the present data, this feature is crucial for the phenomenological success of our model. It is easy to see that this essential property depends on the selection in the neutrino sector of flavons $\xi_\nu$ and $\varphi_\nu$ that transform as $\bf1$ and $\bf3$ of $S_4$, respectively. We have checked that if, for example, the singlet $\xi_\nu$ is replaced by a doublet $\psi_\nu$ (and correspondingly the singlet driving field $\xi_\nu^0$ is replaced by a doublet $\psi_\nu^0$), all other quantum numbers being the same, one can construct a variant of the model along similar lines, but in this case all three mixing angles are corrected by terms of the same order. This confirms that a particular set of $S_4$ breaking flavons is needed in order to prevent $\theta_{23}$ from receiving corrections as large as those of the other two mixing angles.
\section{Phenomenological Implications}
\label{Sec:AFM:Phem}
\setcounter{footnote}{3}
We now develop a number of important physical consequences of our model and derive some predictions. We consider the predicted spectrum and the effective mass $m_{ee}$ for neutrinoless-double-beta decay. The light neutrino mass matrix, including the NLO corrections, is given by:
\begin{equation}
m_\nu=-m_D^TM_R^{-1}m_D+m_\nu^{eff}\;,
\end{equation}
where $m_\nu$ and $M_R$ are the mass matrices at the NLO and $m_\nu^{eff}$ is given in eq. (\ref{AFM:meff}). It is diagonalised by the bimaximal unitary transformation and its complex eigenvalues are given by:
\begin{eqnarray}
m_1&=&\left[(F'+Y')-\dfrac{(y'+Y_2)^2}{4(F+Y+a_2 C^2)}\right]\dfrac{v_u^2}{\Lambda_f}\nonumber \\
m_2&=&\left[(F'-Y')-\dfrac{(y'-Y_2)^2}{4(F-Y+a_2 C^2)}\right]\dfrac{v_u^2}{\Lambda_f}
\label{AFM:numNLO}\\
m_3&=&\left[-F'+\dfrac{y'^2}{4(F-2 a_2 C^2)}\right]\dfrac{v_u^2}{\Lambda_f}\;,\nonumber
\end{eqnarray}
where
\begin{equation}
F'=M'+a' D\;,\qquad Y'=-\sqrt{2} b' C\;,\qquad Y_2=-\sqrt{2} y_2 C\;,\qquad y'=y+y_1 D\;,
\end{equation}
with $y_{1,2}$ and $a_2$ NLO parameters (see \cite{AFM_BimaxS4} for details). We see that the leading order expressions are recovered in the limit $F'=Y'=y_{1,2}=a_2=0$. By exploiting eqs. (\ref{AFM:numNLO}) we can study some observables like the effective $0\nu2\beta$-decay mass, $|m_{ee}|$, the lightest neutrino mass, $m_1$, and the sum of the neutrino masses directly from the experimental data. We perform a numerical analysis, by treating all the leading order, NLO and effective parameters as random complex numbers with absolute value between 0 and 3.
\begin{figure}[h!]
\centering
\subfigure[$|m_{ee}|$ vs. $m_1$]
{\includegraphics[width=7cm]{p1.jpg}}
\subfigure[$\sum m_i$ vs. $m_1$]
{\includegraphics[width=7cm]{p2.jpg}}
\caption{\it In figure (a), $|m_{ee}|$ as a function of the lightest neutrino mass, $m_1$, is plotted. The constraints which have been imposed in the plot are the experimental values at $3\sigma$ for $\Delta m^2_{atm}$, $\Delta m^2_{sol}$ and the mixing angles. All the parameters of the model are treated as random complex parameters. The experimental bounds are the same as in figure \ref{fig:0nu2betaGeneral}. In figure (b) we show the sum of the neutrino masses as a function of the lightest neutrino mass.}
\label{fig:AFMfigure2}
\end{figure}
In figure \ref{fig:AFMfigure2}(a), we plot $|m_{ee}|$ as a function of the lightest neutrino mass. The points correspond to the case of normal ordering of the neutrino masses, with a moderate hierarchy or a quasi degenerate spectrum.
However, at variance with the results of the leading order, some solutions of our numerical simulation also reproduce an inverse hierarchical spectrum. The plot displays only the points corresponding to choices of the parameters reproducing
$\Delta m^2_{atm}$, $\Delta m^2_{sol}$ and the mixing angles within a $3\sigma$ interval. The figure suggests that a lower bound of about $0.1$ meV holds for the lightest neutrino mass. Indeed, with the inclusion of the NLO corrections, from eq. (\ref{AFM:numNLO}) we see that $m_1$ can vanish if a cancellation between the NLO and leading order contributions takes place. This however requires an additional fine-tuning of the parameters, which occurs only for very few points in our numerical analysis. Similarly the scatter plot indicates a lower bound for $|m_{ee}|$ of about $0.1$ meV.
In figure \ref{fig:AFMfigure2}(b), we plot the sum of the neutrino masses as a function of the lightest neutrino mass, $m_1$. The vertical band refers to the future sensitivity of KATRIN experiment, while the horizontal ones to the cosmological bounds \cite{CosmoNu}. There are typically five representative combinations of the cosmological data, which lead to increasingly stronger upper bounds on the sum of the neutrino masses: we are showing the two strongest ones, at $0.60$ eV and $0.19$ eV. This plot is typical for normal hierarchy or quasi degenerate spectrum. The only special feature is the lower bound on $m_1$, which, as explained above, relies on a naturalness assumption.
\section{Extension to Quarks: a GUT Realisation}
\label{Sec:AFM:PS}
\setcounter{footnote}{3}
In this section we are interested in the extension to the quark sector. A first attempt is to adopt for quarks the same representations under $S_4$ that have been used for leptons: the left-handed quark doublets $q$ transform as a triplet $\bf3$, while the right-handed quarks $(u^c,\,d^c)$, $(c^c,\,s^c)$ and $(t^c,\,b^c)$ transform as $\bf1$, $\bf1'$ and $\bf1$, respectively. We can similarly extend to quarks the transformations of $Z_4$ (and $U(1)_R$) given for leptons. As a result, it is easy to see that the quark mass matrices are diagonal, at the leading order in the expansion parameters, exactly as for the charged leptons, and to account for the correct mass hierarchies the $U(1)_{FN}$ has to be suitably implemented. At this level the CKM matrix is the identity matrix and, to get realistic mixings, the higher-order corrections should switch on off-diagonal entries with a well-defined pattern: $(12)\sim\lambda$, $(23)\sim\lambda^2$ and $(13)\sim\lambda^3$. By an explicit computation we find the following result for the quark mass matrices, in terms of orders of magnitude:
\begin{equation}
M_d=\left(
\begin{array}{ccc}
v\,t^2 & v\,v'\,t^2 & v\,v'\,t^2 \\
v'\,t & t & v^{\prime\,2}\,t \\
v' & v^{\prime\,2} & 1 \\
\end{array}
\right)\dfrac{v\,v_d}{\sqrt2}\;,\qquad
M_u=\left(
\begin{array}{ccc}
v\,t^3 & v\,v'\,t^3 & v\,v'\,t^3 \\
v'\,t^2 & t^2 & v^{\prime\,2}\,t^2 \\
v' & v^{\prime\,2} & 1 \\
\end{array}
\right)\dfrac{v\,v_u}{\sqrt2}\;.
\end{equation}
Calculating now the unitary matrices which diagonalise $M_d^\dag M_d$ and $M_u^\dag M_u$ we find
\begin{equation}
V_d\sim V_u=\left(
\begin{array}{ccc}
1 & v' & v' \\
-v' & 1 & v^{\prime\,2} \\
-v' & -v^{\prime\,2} & 1 \\
\end{array}
\right)\;.
\end{equation}
At first sight, barring cancellations among the single entries, we can see that the CKM matrix $V=V_u^\dag V_d$ should be similar to $V_u$ and $V_d$ and as a consequence it cannot correctly describe the quark mixings: while the entries $(12)$ and $(23)$ well reproduce the measured values, the large values in the $(13)$ and $(31)$ entries would require a large fine-tuning of order $\lambda^2$. Taking a closer look at the superpotential which generates such a result, we see that the higher-order corrections have two independent sources: new higher-order operators calculated with the VEVs at the leading order and the original superpotential calculated with the NLO VEVs. As a result we do not expect any cancellation between $V_u$ and $V_d$ when multiplied to give the CKM matrix, contrary to what happens in the Altarelli-Feruglio model in section \ref{Sec:AFTBM:Quarks}.\\
An alternative possibility is to further investigate the complementarity relations:
\beq\ba{rcl}
\theta_{12}+\lambda &\simeq& \pi/4\;,\\[3mm]
\theta_{23}+\lambda^2 &\simeq& - \pi/4\;.
\label{ABM:anglescomplement}
\ea\eeq
These equations suggest that the angles in the CKM and PMNS matrices may have a common origin which can be motivated for example in Pati-Salam models, where the following relation holds,
\begin{equation}
U_e\sim V_d\;.
\label{ABM:Comple}
\end{equation}
We can use this to write the CKM and PMNS matrices as
\begin{equation}
\begin{array}{l}
U\;=\;R_{23}\left(-\dfrac{\pi}{4}\right) R_{13}(\lambda) R_{12}\left(\dfrac{\pi}{4} - \lambda\right)
\;=\;\Big(\underbrace{R_{23}\left(\dfrac{\pi}{4}\right) R_{13}(\lambda) R_{12}(\lambda)}_{U_e}\Big)^\dag \underbrace{R_{12}\left(\dfrac{\pi}{4}\right)}_{U_\nu}\;\\[3mm]
V\;=\; R_{12}(\lambda)
\;=\;\Big(\underbrace{R_{23}\left(\dfrac{\pi}{4}\right) R_{13}(\lambda) R_{12}(\lambda)}_{V_u}\Big)^\dagger \underbrace{R_{23}\left(\dfrac{\pi}{4}\right) R_{13}(\lambda) R_{12}(\lambda)}_{V_d}.
\end{array}
\label{ABM:CKMrots}
\end{equation}
Here $R_{ij}(\alpha)$ stands for a rotation in the $(ij)$ plane by the angle $\alpha$ (apart from coefficients of $\cO(1)$ in front of each angle). The coefficients of the angles in the rotations in $V_u^\dag$ and $V_d$ should be such that the rotations cancel each other in the $(13)$ sector, but not in the $(12)$ sector. Thus we should introduce terms which distinguish between the up- and down-quark sectors, which is indeed possible within the Pati-Salam context.
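The cancellation pattern just described can be checked numerically. The following sketch (our own illustrative construction, not part of the model itself) builds $V_u$ and $V_d$ as products of rotations that share the $(23)$ and $(13)$ angles but differ in the $(12)$ angle; their product then reduces exactly to a pure $(12)$ rotation:

```python
import numpy as np

def R12(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def R13(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def R23(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

lam = 0.22  # Cabibbo-sized angle (illustrative value)

# V_u and V_d share the same (23) and (13) rotations but differ
# in the (12) angle, mimicking different O(1) coefficients there.
Vu = R23(np.pi / 4) @ R13(lam) @ R12(lam)
Vd = R23(np.pi / 4) @ R13(lam) @ R12(2 * lam)

V = Vu.T @ Vd  # CKM matrix (real orthogonal case)

# The common (23) and (13) rotations cancel exactly, leaving R12(lam)
assert np.allclose(V, R12(lam))
```

If instead the $(13)$ angles in $V_u$ and $V_d$ come with different $\mathcal{O}(1)$ coefficients, the same product develops a $(13)$ entry of order $\lambda$, which is precisely the problem encountered in the naive extension discussed above.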
Moving to the explicit form of the mass matrices, a generic Majorana neutrino mass matrix $m_\nu$ diagonalised by $U_\nu=R_{12}\left(\dfrac{\pi}{4}\right)$,
\begin{equation}
m_\nu^{diag}\;=\;R_{12}\left(\dfrac{\pi}{4}\right)^T\,m_\nu \,R_{12}\left(\dfrac{\pi}{4}\right)\,,
\end{equation}
is given by
\begin{equation}
m_\nu \sim\left(
\begin{array}{ccc}
a & b & 0 \\
b & a & 0 \\
0 & 0 & c \\
\end{array}
\right)\,.
\label{ABM:Mnu1}
\end{equation}
Considering the charged lepton mass matrix $M_e$, the product $M_e^\dag\,M_e$ should be diagonalised by the action of $V_d$ as in eq. \eqref{ABM:CKMrots},
\begin{equation}
R_{12}(-\lambda) R_{13}(-\lambda) R_{23}\left(-\dfrac{\pi}{4}\right) \, M_e^\dag \, M_e \, R_{23}\left(\dfrac{\pi}{4}\right) R_{13}(\lambda) R_{12}(\lambda)\,.
\end{equation}
In the limit $m_e\to 0$ we find the generic structure for the product $M_e^\dag\,M_e$:
\begin{equation}
M_e^\dag\,M_e \sim \dfrac{m_\tau^2}{2} \left(
\begin{array}{ccc}
0 & \lambda & \lambda \\
\lambda & 1 & 1 \\
\lambda & 1 & 1 \\
\end{array}
\right) +
\dfrac{m_\mu^2}{2} \left(
\begin{array}{ccc}
0 & \lambda & -\lambda \\
\lambda & 1 & -1 \\
-\lambda & -1 & 1 \\
\end{array}
\right) +\mathcal{O}(\lambda^2) \;,
\label{ABM:Mlsq}
\end{equation}
that can be obtained if $M_e$ is given by
\begin{equation}
M_e \sim \dfrac{m_\tau}{\sqrt{2}} \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
\lambda & 1 & 1 \\
\end{array}
\right) +
\dfrac{m_\mu}{\sqrt{2}} \left(
\begin{array}{ccc}
0 & 0 & 0 \\
\lambda & 1 & -1 \\
0 & 0 & 0
\end{array}
\right) +\mathcal{O}(\lambda^2) \;.
\label{ABM:Ml1}
\end{equation}
It is interesting to note that, moving to the basis of diagonal charged leptons and considering only the leading order terms, the neutrino mass matrix turns out to be of the classical bimaximal type.
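As a cross-check of eq. \eqref{ABM:Mlsq}, one can verify that the two rank-one pieces of $M_e$ in eq. \eqref{ABM:Ml1} do not interfere in $M_e^\dag M_e$, since the $\tau$ and $\mu$ pieces occupy different rows. A minimal numerical sketch, with illustrative values for $\lambda$ and the lepton masses:

```python
import numpy as np

lam = 0.22                    # Cabibbo angle (illustrative)
m_tau, m_mu = 1.777, 0.1057   # GeV (approximate values)

# Leading-order M_e of eq. (ABM:Ml1), in the m_e -> 0 limit
Me = (m_tau / np.sqrt(2)) * np.array([[0, 0, 0],
                                      [0, 0, 0],
                                      [lam, 1, 1]]) \
   + (m_mu / np.sqrt(2)) * np.array([[0, 0, 0],
                                     [lam, 1, -1],
                                     [0, 0, 0]])

MM = Me.conj().T @ Me

# The tau and mu pieces live in different rows, so the cross terms
# vanish and M_e^dag M_e splits into the two rank-one matrices
# of eq. (ABM:Mlsq) exactly (up to O(lambda^2) in the (11) entry).
u = np.array([lam, 1, 1])
w = np.array([lam, 1, -1])
expected = 0.5 * m_tau**2 * np.outer(u, u) + 0.5 * m_mu**2 * np.outer(w, w)
assert np.allclose(MM, expected)

# Eigenvalues: ~m_tau^2, ~m_mu^2 and 0 (massless electron limit)
ev = np.sort(np.linalg.eigvalsh(MM))
```

The vanishing eigenvalue reflects the $m_e\to0$ limit in which eq. \eqref{ABM:Ml1} was written.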
From eq. \eqref{ABM:Comple} the relation $M_e\sim M_d$ follows and therefore the down-quark matrix has a structure similar to that in eq. \eqref{ABM:Ml1}. Looking at eq. (\ref{ABM:CKMrots}) we find that $M_u$ should also have a similar structure. We find that we can satisfy the constraints on the $(12)$ and $(13)$ rotations if the third columns of $M_u$ and $M_d$ are proportional to each other, but the second columns are not.\\
So far we have not yet explained the origin of these mass matrices and mixings and we leave this analysis to \cite{ABM_PSS4}. Here we only mention that such a construction is possible in a Pati-Salam realisation where the flavour symmetry is $S_4\times Z_4\times U(1)_{FN}$: as for the non-GUT model described in the rest of the chapter, the key points are a suitable choice of the group representations for the flavons and a particular VEV misalignment, whose effects are a reactor angle and a deviation from $\pi/4$ of the solar angle of order $\lambda$, together with small deviations of order $\lambda^2$ from the maximal value of the atmospheric angle.
A difficulty with respect to the non-GUT model concerns the study of the gauge coupling running and of the Higgs potential: while in a general Pati-Salam model, in particular without any flavour symmetry, it is possible to reproduce a realistic sequential symmetry breaking chain, with the usual Standard Model or MSSM Higgs fields at the electroweak scale, the introduction of a flavour symmetry puts strong constraints. Indeed we need additional scalars which transform under the gauge group, and the effect is to squeeze together the energy scales of the different symmetry breakings, lowering the scale of (almost) unification to $10^{14}$ GeV.
In \cite{ABM_PSS4} we stress that a combined study of the flavour and Higgs sectors is compulsory, due to the strong interplay between the two. This is in open contrast with the common attitude of focusing on a single aspect only, either the flavour sector or the gauge/Higgs one.
\section{Conclusions of the Chapter}
\label{Sec:AFM:Conclusions}
\setcounter{footnote}{3}
In this part we have illustrated a See-Saw model based on the flavour symmetry $S_4\times Z_4 \times U(1)_{FN}$ where the bimaximal mixing is realised at the leading order in a natural way. The hierarchy of charged lepton masses is obtained as a combined effect of the $U(1)_{FN}$ symmetry breaking measured by the parameter $t$ and of the $S_4\times Z_4$ breaking induced by $v$, proportional to the VEVs of $\varphi_\ell$ and $\chi_\ell$. We have $m_\mu/m_\tau =t \sim 0.06$ and $m_e m_\tau/m_\mu^2 =v \sim 0.08$.
Since exact bimaximal mixing implies a value of $\tan{\theta_{12}}$ which is excluded by the data, large corrections are needed. The dominant corrections to the bimaximal mixing arise at the NLO only through the diagonalisation of the charged lepton mass matrix. The shifts of the quantities $\sin^2{\theta_{12}}$ and $\sin{\theta_{13}}$ from the bimaximal values are linear in the parameter $v'$, proportional to the VEVs of $\varphi_\nu$ and $\xi_\nu$, which is expected to be of the same order as $v$, though not necessarily very close to it, as $v$ and $v'$ are determined by two different sets of minimisation equations. From the experimental value $\tan^2{\theta_{12}}= 0.45\pm 0.04$, which is sizeably different from the bimaximal value $\tan^2{\theta_{12}}= 1$, we need $v'\sim \mathcal{O}(\lambda)$.
As in most models where the bimaximal mixing is only corrected by the effect of charged lepton diagonalisation, one also expects $\theta_{13}\sim \mathcal{O}(\lambda)$. A value of $\theta_{13}$ near the present bound would be a strong indication in favour of this mechanism and a hint that the closeness of the measured values of the mixing angles to the tribimaximal values may be purely an accident. In addition, a very important feature of our model is that the shift of $\sin^2{\theta_{23}}$ from the maximal mixing value of 1/2 vanishes at the NLO and is expected to be of $\mathcal{O}(\lambda^2)$ at most. In our $S_4$ model, this property is obtained by only allowing the breaking of $S_4$ in the neutrino sector via flavons transforming as $\bf1$ and $\bf3$ (in particular with no doublets).
In order to reproduce the experimental value of the small parameter $r=\Delta m^2_{sol}/\Delta m^2_{atm}$ we need some amount of fine-tuning. For instance, the right-handed neutrino Majorana mass $M$ should be below the cutoff $\Lambda_f$ (this is reminiscent of the fact that empirically $M \sim M_{GUT}$ rather than $M \sim M_{Planck}$). The neutrino spectrum is mainly of the normal hierarchy type (or moderately degenerate); the smallest light neutrino mass and the $0\nu \beta \beta$-parameter $|m_{ee}|$ are expected to be larger than about $0.1$ meV.
When extending such a flavour treatment to the quark sector we face several problems: simply adopting the same representations used for leptons, the model does not account for a realistic CKM matrix, due to large contributions to the $(13)$ and $(31)$ entries of $V$. An improvement is possible moving to a grand unified context, where a Pati-Salam model has been studied: the flavour symmetry is still $S_4\times Z_4\times U(1)_{FN}$ and realistic mass matrices and mixings are found. The disadvantage of this choice manifests itself in the presence of strong constraints on the scalar content of the model, which deeply affect the gauge coupling running.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{Running Effects on Flavour Models}
\label{Sec:Running}
\setcounter{equation}{0}
\setcounter{footnote}{3}
In all the flavour models described in the previous sections, mass matrices and mixings are evaluated at a very high energy scale. On the other hand, for a comparison with the experimental results, it is necessary to evolve the observables to low energies through the renormalisation group (RG) running. In general the deviations from the high-energy values due to the running consist of minor corrections which cannot be measured in future neutrino experiments, apart from some special cases in which these deviations undergo a large enhancement.
In the present section we will discuss the effects of the renormalisation group running on the lepton sector when masses and mixings are the result of an underlying flavour symmetry, dealing with both the Standard Model and the MSSM contexts.
When the lightness of the neutrino masses is explained through the five-dimensional Weinberg operator, it is a general result \cite{RunningNoSeeSaw} that the running corrections become relevant only when the neutrino spectrum is almost degenerate or inversely hierarchical (and only for particular values of the Majorana phases) or when in the supersymmetric context $\tan\beta$ is large. Similar results have been found when particular flavour structures for the neutrino masses are invoked, such as the tribimaximal \cite{Running+TB} and the bimaximal \cite{Running+BM} patterns.
When we consider models in which the type I See-Saw mechanism is implemented, few studies have been proposed in the literature \cite{RunningSeeSaw} and only regarding general, in particular non-flavour, models. For this reason, in \cite{LMP_RGE} we focus our attention only on flavour models in which the type I See-Saw is responsible for the light neutrino masses.
We first describe, in a very general context, two kinds of interesting constraints on the Dirac neutrino Yukawa $Y_\nu$ from flavour symmetries and then analyse their impact on running effects. We start by considering flavour models in which $Y_\nu$ is proportional to a unitary matrix, as is the case, for example, when the right-handed singlet neutrinos or the charged leptons are in an irreducible representation of the flavour group $G_f$. Then we extend this constraint to a more general class of flavour models in which the mixing textures are independent of the mass eigenvalues: as a general result, we find that in this class of models the effect of the running through the See-Saw thresholds can always be absorbed by a small shift of the neutrino mass eigenvalues, while the mixing angles remain unchanged. This conclusion is, in particular, independent both of the specific mixing pattern implied by the flavour symmetry and of the basis in which we are working.
Mass-independent mixing textures usually exhibit an underlying discrete symmetry nature: the tribimaximal and the bimaximal patterns, the golden-ratio mixing \cite{golden_ratio} and some (but not all) cases of the trimaximal mixing \cite{Trimaximal} belong to the category of mass-independent mixing textures.
As an explicit example, we then describe in detail the running effects on the tribimaximal mixing texture in the Altarelli-Feruglio model described in section \ref{Sec:AFTBM}.
\mathversion{bold}
\section{Running Effects on Neutrino Mass Operator $m_\nu$}
\label{Sec:LMP:Running}
\setcounter{footnote}{3}
\mathversion{normal}
In this section we begin to analyse, in a general context, the renormalisation group equations (RGEs) for neutrino masses below and above the See-Saw threshold, both in the Standard Model and in the MSSM extended with three right-handed neutrinos. We consider the Lagrangian in the lepton sector of the type I See-Saw already defined in eqs. (\ref{SM:LagrangianY}, \ref{SM:LagrangianTypeI}):
\begin{equation}
\mscr{L}=e^c Y_e H^\dag \ell+ \nu^c Y_\nu \widetilde H^\dag \ell + \nu^c M_R \nu^c +h.c.
\end{equation}
where the supersymmetric case is easily derived by considering two Higgs doublets, promoting all the fields to supermultiplets and identifying the holomorphic part of $\mscr{L}$ with a superpotential. In what follows we concentrate only on the Standard Model particles and for this reason in our notation a chiral superfield and its $R$-parity even component are denoted by the same letter.
Given the heavy Majorana and the Dirac neutrino mass matrices, $M_R$ and $m_D=Y_\nu v/\sqrt2$ respectively, the light neutrino mass matrix $m_\nu$ is obtained by block-diagonalising the complete $6\times6$ neutrino mass matrix,
\begin{equation}
m_\nu=-\dfrac{v^2}{2}Y_\nu^TM_R^{-1}Y_\nu\;.
\label{LMP:EqSee-Saw}
\end{equation}
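For concreteness, the See-Saw formula above can be evaluated directly. The following sketch (with arbitrary illustrative inputs of our own choosing) also makes explicit that $m_\nu$ is automatically symmetric, as required for a Majorana mass matrix:

```python
import numpy as np

v = 246.0                             # electroweak VEV in GeV (illustrative)
M_R = np.diag([1e14, 5e14, 1e15])     # heavy Majorana masses in GeV (illustrative)
Y_nu = np.array([[0.5, 0.1, 0.0],     # arbitrary O(1) Dirac Yukawa
                 [0.1, 0.6, 0.2],
                 [0.0, 0.2, 0.7]])

# Type I See-Saw: m_nu = -(v^2/2) Y^T M_R^{-1} Y
m_nu = -(v**2 / 2) * Y_nu.T @ np.linalg.inv(M_R) @ Y_nu

# m_nu is symmetric because M_R (and hence M_R^{-1}) is symmetric
assert np.allclose(m_nu, m_nu.T)
```

With $\mathcal{O}(1)$ Yukawas and $M_R\sim10^{14}$--$10^{15}$ GeV, the light eigenvalues come out of order $v^2 y^2/(2M_R)$, i.e. in the $10^{-2}$--$10^{-1}$ eV range, as expected from the See-Saw mechanism.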
The matrix $m_\nu$ is modified by quantum corrections according to the RGEs widely studied in the literature \cite{RunningSeeSaw}. For completeness, in appendix \ref{AppendixC} we report the full RGEs for all the quantities involved in the running.
In order to analytically study the change of $m_\nu(\mu)$ from high to low energies, it is useful to work in the basis in which the Majorana neutrino mass is diagonal and real, $\hat{M}_R = \diag(M_S, M_M, M_L)$, with the mass eigenvalues ordered as $M_S < M_M < M_L$. Furthermore, we can divide the running effects into three distinct energy ranges: from the cutoff $\Lambda$ of the theory down to $M_L$, the mass of the heaviest right-handed neutrino; from $M_L$ down to $M_S$, the mass of the lightest right-handed neutrino; and from $M_S$ down to $\varrho$, which can be either $m_Z$, considered as the electroweak scale, or $m_{SUSY}$, the average energy scale of the supersymmetric particles.
\begin{description}
\item[$\mathbf{\Lambda_f\longrightarrow M_L}$.]
Above the highest See-Saw scale the three right-handed neutrinos are all active and the dependence of the effective light neutrino mass matrix on the renormalisation scale $\mu$ is given by means of the $\mu$-dependence of $Y_\nu$ and $M_R$:
\begin{equation}
m_\nu(\mu)\, =\, -\dfrac{v^2}{2} \, Y_\nu^T(\mu)\, M_R^{-1}(\mu) \,Y_\nu(\mu) \;.
\label{LMP:effnumass1}
\end{equation}
Then from the RGEs in eqs. (\ref{LMP:EqRGEMSSM}, \ref{LMP:EqRGESM}), it is not difficult to see that the evolution of the effective mass matrix $m_\nu$ is given by:
\begin{equation}
16\pi^2 \, \frac{\mathrm{d} m_\nu}{\mathrm{d} t} =\Big(C_eY_e^\dagger Y_e + C_\nu Y_\nu^\dagger Y_\nu\Big)^T \, m_\nu +m_\nu \, \Big(C_e Y_e^\dagger Y_e + C_\nu Y_\nu^\dagger Y_\nu\Big) + \bar\alpha \, m_\nu
\label{LMP:betanu1}
\end{equation}
with
\begin{equation}
\begin{array}{ll}
C_e=\,-\dfrac{3}{2}\;,\qquad C_\nu=\dfrac{1}{2}&\qquad\text{in the SM}\\[5mm]
C_e=\,C_\nu=1&\qquad\text{in the MSSM}
\end{array}
\end{equation}
and
\begin{equation}
\begin{array}{ccl}
\bar\alpha_{SM}&=&2\Tr\left[ 3Y_u^\dagger Y_u + 3Y_d^\dagger Y_d + Y_\nu^\dagger Y_\nu + Y_e^\dagger Y_e \right] - \dfrac{9}{10} g_1^2 - \dfrac{9}{2} g_2^2\\[5mm]
\bar\alpha_{MSSM}&=&2\Tr\Big[ 3Y_u^\dagger Y_u + Y^{\dagger}_\nu Y_\nu \Big] - \dfrac{6}{5} g_1^2 - 6 g_2^2\;.
\end{array}
\end{equation}
\item[$\mathbf{M_L\longrightarrow M_S}$.]
The effective neutrino mass matrix $m_\nu$ below the highest See-Saw scale
can be obtained by sequentially integrating out $\nu^c_n$ with $n=L,M,S$:
\begin{equation}\label{LMP:effnumass2}
m_\nu \,=\,-\dfrac{v^2}{4} \left(\,\accentset{(n)}{\kappa}+ 2 \accentset{(n)}{Y}_\nu^T\accentset{(n)}{M_R}^{-1}\accentset{(n)}{Y}_\nu\right)
\end{equation}
where $\accentset{(n)}{\kappa}$ is the coefficient of the effective neutrino mass operator $ (\widetilde H^\dag \ell)^T(\widetilde H^\dag\ell)$. From the (tree-level) matching condition, it is given by
\begin{equation}
\accentset{(n)}{\kappa}_{ij}\,=\,2(Y_\nu^T)_{in}\,M^{-1}_n\,(Y_\nu)_{nj}\;,
\label{LMP:kn}
\end{equation}
which is imposed at $\mu=M_n$.
At $M_L$, the $2\times3$ Yukawa matrix $\accentset{(L)}{Y}_\nu$ is obtained by simply removing the
$L$-th row of $Y_\nu$ and the $2\times2$ mass matrix $\accentset{(L)}{M_R}$ is found from $M_R$ by removing the $L$-th row and $L$-th column. Further decreasing the energy scale down to $M_M$, $\accentset{(M)}{Y}_\nu$ is a single-row matrix, obtained by removing the $M$-th row from $\accentset{(L)}{Y}_\nu$, and $\accentset{(M)}{M_R}$ consists of a single parameter, found by removing the $M$-th row and $M$-th column from $\accentset{(L)}{M}_R$. Finally at $M_S$, $\accentset{(S)}{Y}_\nu$ and $\accentset{(S)}{M}_R$ are vanishing.
In the Standard Model, the two parts which define $m_\nu$ in eq. (\ref{LMP:effnumass2}) evolve in different ways. We can summarise the corresponding RGEs as follows:
\begin{equation}
16\pi^2 \, \frac{\mathrm{d} \accentset{(n)}{X}}{\mathrm{d} t} = \left( \dfrac{1}{2}\accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu - \dfrac{3}{2}Y_e^\dagger Y_e \right)^T \accentset{(n)}{X} + \accentset{(n)}{X} \left( \dfrac{1}{2}\accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu - \dfrac{3}{2}Y_e^\dagger Y_e \right) + \accentset{(n)}{\bar\alpha}_X \accentset{(n)}{X}
\end{equation}
where
\begin{equation}
\begin{array}{ccl}
\accentset{(n)}{\bar\alpha}_\kappa &=& 2\Tr\left[ 3Y_u^\dagger Y_u + 3Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e \right] -3 g_2^2 + \lambda_H\\[5mm]
\accentset{(n)}{\bar\alpha}_{Y_\nu^TM_R^{-1}Y_\nu} &=& 2\Tr\left[ 3Y_u^\dagger Y_u + 3Y_d^\dagger Y_d + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu + Y_e^\dagger Y_e \right]-\dfrac{9}{10} g_1^2 - \dfrac{9}{2} g_2^2\;,
\end{array}
\end{equation}
with $\lambda_H$ the Higgs self-coupling.\footnote{We use the convention that the Higgs self-interaction term in the
Lagrangian is $-\lambda_H (H^\dagger H)^2/4$.}
In the MSSM the running of $\accentset{(n)}{\kappa}$ and of $\accentset{(n)}{Y}_\nu^T\accentset{(n)}{M_R}^{-1}\accentset{(n)}{Y}_\nu$ is the same and therefore we can write
\begin{equation}
16 \pi^2 \,
\frac{\mathrm{d} m_\nu} {\mathrm{d} t}\,=\,\left(Y_e^\dagger Y_e +\accentset{(n)}{Y}_\nu^\dagger\accentset{(n)}{Y}_\nu\right)^T m_\nu + m_\nu \left( Y_e^\dagger Y_e + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu\right) + \accentset{(n)}{\bar\alpha}m_\nu \; ,
\label{LMP:betanu2}
\end{equation}
where
\begin{equation}
\accentset{(n)}{\bar\alpha} = 2\Tr\left[ 3Y_u^\dagger Y_u + \accentset{(n)}{Y}_\nu^\dagger \accentset{(n)}{Y}_\nu \right] -\dfrac{6}{5} g_1^2 - 6 g_2^2\;.
\end{equation}
\item[$\mathbf{M_S\longrightarrow\varrho}$.] For energies below the mass scale of the lightest right-handed neutrino, all the $\nu^c_n$ are integrated out and $\accentset{(S)}{Y}_\nu$ and $\accentset{(S)}{M}_R$ vanish. In the right-hand side of eq. (\ref{LMP:effnumass2}) only the term $\accentset{(S)}{\kappa}$ is non-vanishing and in this case the effective mass matrix $m_\nu$ evolves as:
\begin{equation}
16\pi^2 \, \dfrac{\mathrm{d} m_\nu}{\mathrm{d} t} = \Big(C_eY_e^\dagger Y_e\Big)^T \, m_\nu + m_\nu \, \Big(C_eY_e^\dagger Y_e\Big) + \accentset{(S)}{\bar\alpha} \, m_\nu
\label{LMP:betanu3}
\end{equation}
with
\begin{equation}
\begin{array}{ccl}
\accentset{(S)}{\bar\alpha}_{SM}&=& 2\Tr\left[ 3Y_u^\dagger Y_u + 3Y_d^\dagger Y_d + Y_e^\dagger Y_e \right] -3 g_2^2 + \lambda_H\\[5mm]
\accentset{(S)}{\bar\alpha}_{MSSM}&=& 6\Tr\Big[Y_u^\dagger Y_u \Big] - \dfrac{6}{5} g_1^2 - 6 g_2^2\;.
\end{array}
\end{equation}
\end{description}
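At tree level, and switching off the running between thresholds, the sequential matching described above must reassemble the full See-Saw result: accumulating $\accentset{(n)}{\kappa}$ through eq. (\ref{LMP:kn}) as each heavy state is integrated out reproduces $-\frac{v^2}{2}Y_\nu^T M_R^{-1}Y_\nu$. A sketch of this consistency check (our own toy implementation, with illustrative inputs):

```python
import numpy as np

v = 246.0
Y = np.array([[0.5, 0.1, 0.0],
              [0.1, 0.6, 0.2],
              [0.0, 0.2, 0.7]])
M = np.array([1e12, 5e13, 1e15])   # M_S < M_M < M_L (illustrative)

# Integrate out nu^c_n from the heaviest down, accumulating kappa
# through the tree-level matching of eq. (LMP:kn); the running
# between thresholds is switched off in this check.
kappa = np.zeros((3, 3))
for n in [2, 1, 0]:                # n = L, M, S
    kappa += 2 * np.outer(Y[n, :], Y[n, :]) / M[n]

# After the last threshold only kappa survives in eq. (LMP:effnumass2)
m_seq = -(v**2 / 4) * kappa
m_full = -(v**2 / 2) * Y.T @ np.diag(1 / M) @ Y
assert np.allclose(m_seq, m_full)
```

The agreement is exact in the basis of diagonal $M_R$; the genuine running effects then arise from evolving the two pieces of eq. (\ref{LMP:effnumass2}) with their different RGEs between the thresholds.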
\mathversion{bold}
\subsection{Analytical Approximation to the Running Evolution of $m_\nu$}
\label{Sec:LMP:RunningApprox}
\setcounter{footnote}{3}
\mathversion{normal}
Now we analytically solve the RGEs for $m_\nu$ in the leading-log approximation. All the Yukawa couplings $Y_i^\dagger Y_i$ for $i=\nu,e,u,d$ are evaluated at their initial values at the cutoff $\Lambda$. Furthermore we keep only the leading contribution in each $Y_i^\dagger Y_i$ term, for $i=e,u,d$, i.e. $|y_\tau|^2$, $|y_t|^2$ and $|y_b|^2$ respectively. The corrections to these leading terms come from their running evolution as well as from the subleading entries; they contribute to the final result only at subleading order and we can safely neglect them in our analytical estimate.
In the MSSM context, the general solutions to eqs. (\ref{LMP:betanu1}, \ref{LMP:betanu2}, \ref{LMP:betanu3}) all have the same structure, which is approximately given by
\begin{equation}
m_{\nu\,\text{(lower Energy)}} \approx I_U J_e^T J_\nu^Tm_{\nu\,\text{(higher Energy)}} J_\nu J_e
\label{LMP:generalsol}
\end{equation}
where $I_U$, $J_e$ and $J_\nu$ are all exponentials of integrals containing loop-suppression factors and as a result they are close to $\mathbb{1}$. Note that $I_U$ is a universal contribution defined as
\begin{equation}
I_U=\exp\left[-\dfrac{1} {16\pi^2}\int\accentset{(n)}{\bar\alpha}~\mathrm{d} t \right]
\label{LMP:I}
\end{equation}
where the integral runs between two subsequent energy scales and we have extended the definition of $\accentset{(n)}{\bar\alpha}$ by identifying $\accentset{(\Lambda)}{\bar\alpha} \equiv \bar\alpha$ in order to include the range from $\Lambda$ down to $M_L$. $J_e$ is the contribution from charged lepton Yukawa couplings which is always flavour-dependent, while $J_\nu$ is the contribution from the neutrino Yukawa coupling: they are given by
\begin{equation}
J_{e}=\exp\left[-\dfrac{1}{16\pi^2}\int Y_e^\dagger Y_e\,\mathrm{d} t \right]\;,\qquad
J_{\nu}=\exp\left[-\dfrac{1}{16\pi^2}\int\accentset{(n)}{Y}_{\nu}^\dagger \accentset{(n)}{Y}_{\nu}\,\mathrm{d} t \right]\;,
\label{LMP:EqJeGeneral}
\end{equation}
where also here we have extended the definition of $\accentset{(n)}{Y}_{\nu}$ by identifying $\accentset{(\Lambda)}{Y}_{\nu}$ with $Y_\nu$ in order to include the range between $\Lambda$ and $M_L$. \footnote{In eq. (\ref{LMP:EqJeGeneral}), the combination $Y_e^\dagger Y_e$ should enter with $\accentset{(n)}{Y}_e$ instead of $Y_e$, as one can see from the RGEs in appendix \ref{AppendixC}. In our approximation, however, they coincide.}
Differently from $J_e$, $J_\nu$ may or may not be flavour-dependent.
In the Standard Model context, the running effects do not factorise, due to the different evolution of $\accentset{(n)}{\kappa}$ and $\accentset{(n)}{Y}_\nu^T \accentset{(n)}{M}_R^{-1} \accentset{(n)}{Y}_\nu$ between the See-Saw mass thresholds. However, eq. (\ref{LMP:generalsol}) applies also to the Standard Model context when $m_\nu$ is the result of a flavour symmetry: in this case, by a suitable redefinition of the parameters which define the mass eigenvalues, the sum $\accentset{(n)}{\kappa}+\accentset{(n)}{Y}_\nu^T \accentset{(n)}{M}_R^{-1} \accentset{(n)}{Y}_\nu$ after the running evolution has exactly the same flavour structure as $m_{\nu\,\text{(higher Energy)}}$. For the purposes of the present discussion we simply assume that eq. (\ref{LMP:generalsol}) is valid also in the Standard Model context; an explicit example will be proposed in section \ref{Sec:LMP:RGcoefficients}.
Expanding $J_e$ and $J_\nu$ in a Taylor series and combining eq. (\ref{LMP:generalsol}) over the successive energy ranges, one can approximately calculate the neutrino mass at low energy as
\begin{equation}
m_{\nu(\varrho)}\simeq I_U\left(m_{\nu(\Lambda)}+
\Delta m^{(J_e)}_\nu + \Delta m^{(J_\nu)}_\nu\right)\;,
\label{LMP:generalsol2}
\end{equation}
where the low-energy scale $\varrho$ is $m_Z$ in the case of Standard Model and $m_{\rm SUSY}$ for MSSM. The explicit form of the universal part $I_U$ is given by:
\begin{equation}
\begin{split}
\hspace{-7mm}
I_U^{\rm SM}\;=&\;\mathbb{1}\;\times\;\exp\Bigg[-\dfrac{1}{16\pi^2}\Bigg[\left(-\dfrac{9}{10}g_1^2-\dfrac{9}{2}g_2^2+ 6|y_t|^2\right)\ln\dfrac{\Lambda_f}{m_Z}+ \left(\dfrac{9}{10}g_1^2+\dfrac{3}{2}g_2^2+\lambda_H\right)\ln\dfrac{M_S}{m_Z}+\\[3mm]
&\hspace{3cm}+y^2\left(2\ln\dfrac{M_M}{M_S}+ 4\ln\dfrac{M_L}{M_M}+7\ln\dfrac{\Lambda_f}{M_L}\right)\Bigg]\Bigg]\;,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\hspace{-7mm}
I_U^{\rm MSSM}\;=&\;\mathbb{1}\;\times\;\exp\Bigg[-\dfrac{1}{16\pi^2}\Bigg[\left(-\dfrac{6}{5}g_1^2-6g_2^2+ 6|y_t|^2\right)\ln\dfrac{\Lambda_f}{m_{SUSY}}+\\[3mm]
&\hspace{3cm}+y^2\left(2\ln\dfrac{M_M}{M_S}+ 4\ln\dfrac{M_L}{M_M}+8\ln\dfrac{\Lambda_f}{M_L}\right)\Bigg]\Bigg]\;.
\end{split}
\end{equation}
$\Delta m^{(J_e)}_\nu$ is the contribution from $J_e$ and can easily be calculated as:
\begin{equation}
\Delta m^{(J_e)}_\nu = m_{\nu (\Lambda)} ~ \diag (0, 0, \Delta_\tau)+\diag (0, 0, \Delta_\tau) m_{\nu (\Lambda)}
\label{LMP:Je}
\end{equation}
where the small parameter $\Delta_\tau$ is given by
\begin{equation}
\begin{array}{ccll}
\Delta_\tau&\equiv&-\dfrac{3m^2_\tau}{16\pi^2 v^2}\ln\dfrac{\Lambda}{m_Z}&\quad\text{in the SM}\\[3mm]
\Delta_\tau&\equiv&\dfrac{m^2_\tau}{8\pi^2 v^2} (1+ \tan^2 \beta) \ln\dfrac{\Lambda}{m_{SUSY}}&\quad\text{in the MSSM}\;
\end{array}
\label{LMP:DeltaTau}
\end{equation}
with $\tan\beta$ the usual ratio between the VEVs of the neutral spin-zero components of $H_u$ and $H_d$, the two doublets responsible for electroweak symmetry breaking in the MSSM. On the other hand, the contribution from $J_\nu$, $\Delta m^{(J_\nu)}_\nu$, depends non-trivially on the neutrino Yukawa coupling $Y_\nu$, which cannot be determined by low-energy observables without additional ingredients. In section \ref{Sec:LMP:FlavourSym_RGE} we will analyse the strong impact of flavour symmetries on $J_\nu$, but before proceeding we comment on the hierarchy among the various running contributions to the neutrino mass. Indeed, assuming that the flavour symmetries have no effect on $Y_\nu$, we expect that
\begin{equation}
Y_\nu^\dagger Y_\nu \sim \accentset{(n)}{Y}_\nu^\dagger\accentset{(n)}{Y}_\nu = \mathcal{O}(\mathbb{1})
\end{equation}
and therefore we conclude that the contribution from $J_\nu$ always dominates. In \cite{LMP_RGE} we explicitly show that this conclusion holds both in the Standard Model and in the MSSM, even for large $\tan\beta$ (we consider $\tan\beta=60$ as the maximal value). One could expect a similar observation to hold also for the lepton mixing angles, but quite frequently flavour symmetries imply a $J_\nu$ which is flavour-independent or has no effect on the mixing angles, as we will see in a moment.
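To get a feeling for the size of the flavour-dependent correction $\Delta_\tau$ of eq. (\ref{LMP:DeltaTau}), we can plug in representative numbers; the cutoff, SUSY scale and $\tan\beta$ value below are our own illustrative choices:

```python
import numpy as np

m_tau, v = 1.777, 246.0        # GeV (approximate values)
m_Z, m_SUSY = 91.19, 1e3       # GeV
Lam = 1e15                     # flavour-symmetry cutoff (illustrative)

# SM: Delta_tau = -3 m_tau^2/(16 pi^2 v^2) ln(Lam/m_Z)
dtau_SM = -3 * m_tau**2 / (16 * np.pi**2 * v**2) * np.log(Lam / m_Z)

# MSSM: Delta_tau = m_tau^2/(8 pi^2 v^2) (1 + tan^2 beta) ln(Lam/m_SUSY)
tan_beta = 15.0                # illustrative
dtau_MSSM = m_tau**2 / (8 * np.pi**2 * v**2) * (1 + tan_beta**2) \
            * np.log(Lam / m_SUSY)
```

With these inputs $|\Delta_\tau|$ is of order $10^{-5}$ in the SM and of order $10^{-3}$ in the MSSM at $\tan\beta=15$, confirming that the $J_e$ contribution is tiny unless $\tan\beta$ is large.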
\section{Flavour Symmetries and Running Effects}
\label{Sec:LMP:FlavourSym_RGE}
\setcounter{footnote}{3}
In the present section, we will apply the general results of the running evolution of the neutrino mass operator $m_\nu$ to models beyond the Standard Model, where a flavour symmetry is added to the gauge group of the Standard Model. The main task is to track some interesting connections between the running effects and the realisation of the flavour symmetry.
In a given basis, $Y_e^\dagger Y_e$ and $m_\nu$ can be diagonalised by unitary matrices, $U_e$ and $U_\nu$, respectively. The lepton mixing matrix is given by $U = U_e^\dagger U_\nu$. In a flavour model, the charged lepton Yukawa, the neutrino mass matrix and therefore the PMNS matrix are dictated by the flavour symmetry $G_f$. We have already discussed in section \ref{Sec:FS:SSB} that $G_f$ must be spontaneously broken in order to naturally describe fermion masses and mixings: here, we simply assume that $G_f$ is spontaneously broken by a set of flavon fields $\Phi$ at a very high scale. Suppose that, at the leading order, the neutrino mixing matrix is given by $U_0$, which differs from $U$ by subleading contributions $\sim \langle \Phi \rangle / \Lambda_f$, where $\Lambda_f$ is the cutoff scale of the flavour symmetry $G_f$. We will begin with some general assumptions on $U_0$ without, however, specifying its form. Then we will specialise to a concrete case in which $U_0$ is given by the tribimaximal mixing pattern. Similar studies can be performed considering other mass-independent textures, such as the bimaximal, the golden-ratio and (sometimes) the trimaximal schemes.
\subsection{Running Effects on Neutrino Mixing Patterns}
\label{Sec:LMP:RunningPatterns}
\setcounter{footnote}{3}
As described in section \ref{Sec:LMP:Running}, the relevant running effects on $m_\nu$ are encoded in the combinations $Y^\dagger_e Y_e$ and $Y^\dagger_\nu Y_\nu$. Furthermore, we observe that a relevant contribution to the running of $Y^\dagger_e Y_e$ is itself encoded in $Y^\dagger_\nu Y_\nu$.
We perform the analysis in the basis in which the charged leptons are diagonal, then at high energy we have
\begin{equation}
Y^\dagger_e Y_e = \diag(m^2_e, m^2_\mu, m^2_\tau)\dfrac{2}{v^2}\;.
\end{equation}
From now on we will use the Standard Model notation $v$; in order to convert similar expressions to the MSSM it is sufficient to substitute $v$ with $v_{u}$ or $v_{d}$, when dealing with neutrinos or charged leptons, respectively. This simple form changes when evolving down to low energies. The running effect of $Y^\dagger_e Y_e$ on $m_\nu$ is of second order and we can safely neglect it. However, it can generate a non-trivial $U_e$ and consequently introduce additional corrections to the PMNS matrix $U$. We will return to this effect in section \ref{Sec:LMP:Chargedsector}.
Since flavour symmetries impose constraints on $Y_\nu$, they should have some impact also on the running effects. In this section we are interested in two classes of constraints. The first class is characterised by $Y_\nu$ proportional to a unitary matrix: $Y_\nu^\dagger Y_\nu \sim \mathbb{1}$ or $Y_\nu Y_\nu ^\dagger \sim \mathbb{1}$ is frequent in the presence of a flavour symmetry, since it is, for example, a consequence of Schur's first lemma when $\ell$ or $\nu^c$ transforms in an irreducible representation of the group $G_f$ \cite{BBFN_Lepto}. In the second class, we assume that $m_\nu$ can be exactly diagonalised by $U_0$ according to
\begin{equation}
\hat{m}_\nu = U^T_0 m_\nu U_0
\label{LMP:Diag_nu}
\end{equation}
where $\hat{m}_\nu = \diag (m_1, m_2, m_3)$, with $m_i$ positive, and $U_0$ is a mass-independent mixing pattern enforced by the flavour symmetry $G_f$. Independently of the way in which $G_f$ is broken, it is straightforward to see that the neutrino Yukawa coupling in the basis of diagonal right-handed Majorana neutrinos, which we indicate as $\hat{Y}_\nu$, has the following simple form
\begin{equation}
\hat{Y}_\nu = i D\,U^\dagger_0
\label{LMP:General_hatY}
\end{equation}
where $D=\diag (\pm \sqrt{2m_1M_1},\pm \sqrt{2m_2M_2},\pm \sqrt{2m_3M_3})/v$. Notice that $\hat{Y}_\nu$ becomes unitary if $D\sim\mathbb{1}$. However, the present case is not strictly a generalisation of the previous one since a unitary $Y_\nu$ does not necessarily imply a mass-independent mixing pattern.
In \cite{LMP_RGE} we show that $m_\nu$ does not change its flavour structure under $J_\nu$ if $Y_\nu$ belongs to one of these classes: the running effects from $J_\nu$ correct only the neutrino mass eigenvalues but not the mixing angles. Therefore, the only flavour-dependent running contribution to $m_\nu$ is encoded in $J_e$.
\mathversion{bold}
\subsubsection{A Special Case $U_0 = i U_{\rm TB} P^*$ and $D \propto \mathrm{\bf diag} (1,1,-1)$}
\setcounter{footnote}{3}
\mathversion{normal}
In this part we consider a special case of $\hat{Y}_\nu =i D\, U^\dagger_0$ in which the expression of $U_0$ is enforced by the flavour symmetry group $A_4$ in the context of the Altarelli-Feruglio model described in section \ref{Sec:AFTBM}. A more detailed analysis of the running effects will be discussed in the next section. Here we only comment on the constraints on the mixing matrix $U_0 = i U_{\rm TB} P^*$ and the neutrino Yukawa coupling in the hatted basis:
\begin{equation}
\hat{Y}_\nu\equiv yPU_{TB}^TO_{23}=y P\left(\begin{array}{ccc}
\sqrt{2/3}& -1/\sqrt{6} & -1/\sqrt{6}\\
1/\sqrt{3} & +1/\sqrt{3}& +1/\sqrt{3} \\
0 & +1/\sqrt{2} & -1/\sqrt{2}\\
\end{array}\right)
\label{LMP:Ynu}
\end{equation}
where $y$ is a positive parameter of $\mathcal{O}(1)$, $P$ is the usual diagonal matrix of the Majorana phases and $O_{23}$ is defined as
\begin{equation}
O_{23}=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}
\right)\;.
\end{equation}
In order to compare eq. (\ref{LMP:Ynu}) with the general expression $\hat{Y}_\nu =i D\, U^\dagger_0$
we observe that
\begin{equation}
\hat{Y}_\nu = yPU_{TB}^TO_{23} U_{TB} U_{TB}^T = \diag (y,y,-y) P U^T_{TB}\;.
\end{equation}
Then we conclude that (\ref{LMP:Ynu}) corresponds to the special case in which $D= \diag (y,y,-y)$. Furthermore, in the Altarelli-Feruglio model considered in this section, there is a very simple relation between $m_i$ and $M_i$ given by $m_i= v_u^2 y^2 /2M_i$.
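The identity $U_{TB}^TO_{23}U_{TB}=\diag(1,1,-1)$ underlying this step can be checked numerically; the sketch below reads off the columns of $U_{TB}$ from eq. (\ref{LMP:Ynu}):

```python
import numpy as np

# Check of U_TB^T O_23 U_TB = diag(1, 1, -1), so that
# y P U_TB^T O_23 matches the general form D P U_TB^T with D = diag(y, y, -y).
s = np.sqrt
U_TB = np.array([[ s(2/3), 1/s(3),     0.0],
                 [-1/s(6), 1/s(3), -1/s(2)],
                 [-1/s(6), 1/s(3),  1/s(2)]])
O_23 = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0]])
assert np.allclose(U_TB.T @ U_TB, np.eye(3))            # U_TB is orthogonal
assert np.allclose(U_TB.T @ O_23 @ U_TB, np.diag([1.0, 1.0, -1.0]))
```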
Now we explicitly calculate the renormalisation group running from $\Lambda_f$ down to $\varrho$ for this special case using the approximate analytical expressions given in section \ref{Sec:LMP:RunningApprox}. In the physical basis, it is useful to define the light neutrino mass matrix of eq. (\ref{LMP:EqSee-Saw}) at the initial energy scale $\Lambda_f$: by imposing the condition $m_{\nu(\Lambda_f)}=U_0^*\hat{m}_\nu U_0^\dag$, we have
\begin{equation}
\begin{split}
m_\nu^{TB}&=-U_{TB}\,P\,\hat{m}_\nu\,P\,U_{TB}^T\\[3mm]
&=-\left[\dfrac{\tilde m_3}{2}\left(\begin{array}{ccc}
0&0&0\\
0&1&-1\\
0&-1&1\end{array}\right)
+\dfrac{\tilde m_2}{3}\left(\begin{array}{ccc}
1&1&1\\
1&1&1\\
1&1&1\end{array}\right)
+\dfrac{\tilde m_1}{6}\left(\begin{array}{ccc}
4&-2&-2\\
-2&1&1\\
-2&1&1\end{array}\right)\right]\;,
\end{split}
\label{LMP:EqMnuTBMmasses}
\end{equation}
where $\tilde m_i=m_ie^{i\alpha_i}$. It is necessary to specify the kind of neutrino mass spectrum: in the normal hierarchy the light neutrinos are ordered as $m_1<m_2<m_3$ and the heavy ones as $M_3<M_2<M_1$; while in the inverse hierarchy they are arranged as $m_3<m_1\lesssim m_2$ and $M_2\lesssim M_1<M_3$.
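The decomposition in eq. (\ref{LMP:EqMnuTBMmasses}) is easily verified numerically; the sketch below uses illustrative (hypothetical) masses and phases and checks the three flavour structures, apart from the overall sign:

```python
import numpy as np

# Check of the flavour-structure decomposition of U_TB diag(mt_i) U_TB^T
# (illustrative complex masses mt_i = m_i exp(i alpha_i); the overall minus
# sign of m_nu^TB is omitted since it is common to all three terms).
s = np.sqrt
U_TB = np.array([[ s(2/3), 1/s(3),     0.0],
                 [-1/s(6), 1/s(3), -1/s(2)],
                 [-1/s(6), 1/s(3),  1/s(2)]])
mt = np.array([0.02, 0.022, 0.05]) * np.exp(1j * np.array([0.3, 1.1, 0.0]))

A1 = np.array([[4, -2, -2], [-2, 1, 1], [-2, 1, 1]]) / 6.0
A2 = np.ones((3, 3)) / 3.0
A3 = np.array([[0, 0, 0], [0, 1, -1], [0, -1, 1]]) / 2.0

lhs = U_TB @ np.diag(mt) @ U_TB.T
assert np.allclose(lhs, mt[0] * A1 + mt[1] * A2 + mt[2] * A3)
```

Each $A_i$ is the outer product of the $i$-th column of $U_{TB}$ with itself, which is why the running corrections with the same structures can be reabsorbed into the $\tilde m_i$.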
The general result of the running effects on $m_\nu$ is given by eq.~(\ref{LMP:generalsol2}) which in our case becomes
\begin{equation}
m_{\nu(\varrho)}=I_U\left(m_\nu^{TB}+\Delta m^{(J_e)}_\nu + \Delta m^{(J_\nu)}_\nu \right)\;.
\label{LMP:mTBM}
\end{equation}
The analytical result for both $I_U$ and $\Delta m^{(J_e)}_\nu$ (see section \ref{Sec:LMP:Running}) does not depend on the type of the neutrino spectrum; it is sufficient to identify $M_S,M_M,M_L$ with the correct hierarchy among $M_1, M_2, M_3$\;. In particular, for the tribimaximal mixing pattern, the contribution from $J_e$ is given by
\begin{equation}
\begin{split}
\Delta m^{(J_e)}_\nu &=m_\nu^{TB} ~ \diag (0,\, 0,\, \Delta_\tau)+ \diag (0,\, 0,\, \Delta_\tau) m_\nu^{TB}\\
&=-\left(\begin{array}{ccc}
0 & 0 & \dfrac{\tilde m_1}{3}-\dfrac{\tilde m_2}{3} \\[3mm]
0 & 0 & -\dfrac{\tilde m_1}{6}-\dfrac{\tilde m_2}{3}+\dfrac{\tilde m_3}{2}\\[3mm]
\dfrac{\tilde m_1}{3}-\dfrac{\tilde m_2}{3} & -\dfrac{\tilde m_1}{6}-\dfrac{\tilde m_2}{3}+\dfrac{\tilde m_3}{2} & -\dfrac{\tilde m_1}{3}-\dfrac{2\tilde m_2}{3}-\tilde m_3 \\
\end{array}
\right)\Delta_\tau\;.
\end{split}
\end{equation}
Naturally, the contribution from $J_\nu$ depends on the type of the neutrino spectrum;
however, it can be written in the same form for both spectra:
\begin{equation}
\Delta m^{(J_\nu)}_\nu=-\left[\dfrac{\tilde m'_1}{6}\left(
\begin{array}{ccc}
4 & -2 & -2 \\
-2 & 1 & 1 \\
-2 & 1 & 1 \\
\end{array}
\right)
+\dfrac{2 \tilde m'_2}{3}\left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1 \\
\end{array}
\right)
+\tilde m'_3\left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & -1 \\
0 & -1 & 1 \\
\end{array}
\right)\right]
\label{LMP:Deltam_Jnu}
\end{equation}
where $\tilde m'_i$ are redefinitions of the light neutrino masses:
\begin{description}
\item[\textbf{Normal Hierarchy:}]
\begin{equation}
\begin{array}{llll}
\tilde m'_1=\tilde m_1(p+q)\;,& \tilde m'_2=\tilde m_2(x+q)\;,& \tilde m'_3=\tilde m_3(x+z)&\text{in the SM}\\[3mm]
\tilde m'_1=0\;,& \tilde m'_2=2 \tilde m_2x\;,& \tilde m'_3=2 \tilde m_3(x+z)&\text{in the MSSM}
\end{array}
\label{LMP:mpNH}
\end{equation}
with
\begin{equation}
\begin{array}{rcl}
p&=&-\dfrac{1}{16\pi^2}(-3g_2^2+\lambda+\dfrac{9}{10}g_1^2+\dfrac{9}{2}g_2^2)\ln\dfrac{M_1}{M_2}\\[3mm]
q&=&-\dfrac{1}{16\pi^2}(-3g_2^2+\lambda+\dfrac{9}{10}g_1^2+\dfrac{9}{2}g_2^2)\ln\dfrac{M_2}{M_3}\\[3mm]
x&=&-\dfrac{y^2}{32\pi^2}\ln\dfrac{M_1}{M_2}\\[3mm]
z&=&-\dfrac{y^2}{32\pi^2}\ln\dfrac{M_2}{M_3}\;;
\end{array}
\end{equation}
\item[\textbf{Inverse Hierarchy:}]
\begin{equation}
\begin{array}{llll}
\tilde m'_1=\tilde m_1(x+q)\;,& \tilde m'_2=\tilde m_2 (x+z)\;,& \tilde m'_3=\tilde m_3(p+q)&\text{in the SM}\\[3mm]
\tilde m'_1=2\tilde m_1 x\;,& \tilde m'_2= 2\tilde m_2 (x+z)\;,& \tilde m'_3=0&\text{in the MSSM}
\end{array}
\label{LMP:mpIH}
\end{equation}
with
\begin{equation}
\begin{array}{rcl}
p&=&-\dfrac{1}{16\pi^2}(-3g_2^2+\lambda+\dfrac{9}{10}g_1^2+\dfrac{9}{2}g_2^2)\ln\dfrac{M_3}{M_1}\\[3mm]
q&=&-\dfrac{1}{16\pi^2}(-3g_2^2+\lambda+\dfrac{9}{10}g_1^2+\dfrac{9}{2}g_2^2)\ln\dfrac{M_1}{M_2}\\[3mm]
x&=&-\dfrac{y^2}{32\pi^2}\ln\dfrac{M_3}{M_1}\\[3mm]
z&=&-\dfrac{y^2}{32\pi^2}\ln\dfrac{M_1}{M_2}\;.
\end{array}
\end{equation}
\end{description}
Comparing $m_\nu^{TB}$ of eq. (\ref{LMP:EqMnuTBMmasses}) with the perturbation $\Delta m^{(J_\nu)}_\nu$ of eq. (\ref{LMP:Deltam_Jnu}), we note that several matrices share the same flavour structure; in particular, by redefining $\tilde m_i$ to absorb the terms $\tilde m'_i$, it is possible to reabsorb the See-Saw contributions from the renormalisation group running into $m_\nu^{TB}$. As a consequence, the leading order predictions for the tribimaximal angles receive corrections only from the terms proportional to $\Delta_\tau$. This result explicitly confirms what we outlined in the previous section.
\subsection{Running Effects in the Charged Lepton Sector}
\label{Sec:LMP:Chargedsector}
The presence of a term proportional to $\hat{Y}^\dagger_\nu \hat{Y}_\nu$ in the RG equation for $Y_e$ can generate off-diagonal entries in the charged lepton Yukawa matrix $Y_e$. When rotated away, this additional contribution introduces a non-trivial $U_e$ and consequently corrects the lepton mixing matrix $U$. For a unitary $\hat Y_\nu$, this correction arises only between the See-Saw mass scales, while in the general case it is present already from the cutoff $\Lambda_f$.
In close analogy with the running effects on the neutrino mass matrix in eq. (\ref{LMP:mTBM}), the full result of the running for the charged lepton mass matrix can conveniently
be written as
\begin{equation}
(Y^\dagger_e Y_e)_ {(\varrho)}=I_e\left[ (Y^\dagger_e Y_e)_{(\Lambda_f)}
+\Delta (Y^\dagger_e Y_e) \right]\;,
\label{LMP:CorrectedYeYe}
\end{equation}
where $I_e$ is an irrelevant global coefficient which can be absorbed, for example, by $y_\tau$. We now specialise to the tribimaximal mixing pattern, for which the flavour-dependent corrections can be explicitly calculated:
\begin{description}
\item[\textbf{NH case:}]
\begin{equation}
\Delta (Y^\dagger_e Y_e)\simeq y_\tau^2 \left[ a_e \left(
\begin{array}{ccc}
0 & 0 & 1 \\
0 & 0 & -1/2 \\
1 & -1/2 & 5 \\
\end{array}
\right)
+b_e \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & -1 \\
0 & -1 & 2 \\
\end{array}
\right)
+c_e \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2 \\
\end{array}
\right)\right]\;,
\label{LMP:EqDeltaYeDagYeNH}
\end{equation}
\item[\textbf{IH case:}]
\begin{equation}
\Delta (Y^\dagger_e Y_e)\simeq y_\tau^2 \left[a'_e \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 2 \\
\end{array}
\right)
+b'_e \left(
\begin{array}{ccc}
0 & 0 & 1\\
0 & 0 & 1 \\
1 & 1 & 2 \\
\end{array}
\right)
+c'_e \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 2 \\
\end{array}
\right)\right]\;,
\label{LMP:EqDeltaYeDagYeIH}
\end{equation}
\end{description}
where the coefficients are
\begin{equation}
\begin{array}{lr}
a_e=b'_e=-\dfrac{C'_\nu}{16 \pi^2} \dfrac{y^2}{3} \ln\dfrac{M_1}{M_2},&\qquad b_e=-\dfrac{C'_\nu}{16 \pi^2} \dfrac{y^2}{2} \ln\dfrac{M_2}{M_3}\;, \\[5mm]
c_e=c'_e=-\dfrac{3 C'_e y_\tau^2}{16 \pi^2}\ln\dfrac{\Lambda_f}{m_{SUSY}(m_Z)}\;,&\qquad a'_e= -\dfrac{C'_\nu}{16 \pi^2} \dfrac{y^2}{2} \ln\dfrac{M_3}{M_1}\;,
\label{LMP:coeff}
\end{array}
\end{equation}
and $C'_\nu=-3/2 \;(1)$, $C'_e=3/2 \;(3)$ in the Standard Model (MSSM). Here we observe that the off-diagonal contributions to $Y^\dagger_e Y_e$ are encoded in $a_e$, $b_e$, $a'_e$ and $b'_e$ which depend only on the See-Saw scales $M_i$. As a result, as we will show in the
next section, $c_e$ and $c'_e$ do not affect the lepton mixing angles.
\subsection{Full Running Effects on the Tribimaximal Mixing Pattern}
\label{Sec:LMP:RGcoefficients}
In this section, we combine various contributions discussed in previous sections into the observable matrix $U$ from which we extract angles and phases at low-energy. Since we are interested in physical quantities, we eliminate one of the phases of $P$ and in particular we express each result as a function of $\alpha_{ij}\equiv(\alpha_i-\alpha_j)/2$, removing $\alpha_3$. The corrected mixing angles can be written as
\begin{equation}
\theta_{ij(\varrho)} = \theta^{TB}_{ij}+k_{ij}+\ldots
\end{equation}
where $\theta^{TB}_{13} = 0$, $\theta^{TB}_{12} = \arcsin \sqrt{1/3}$, $\theta^{TB}_{23} = -\pi/4$, dots stand for subleading corrections and $k_{ij}$ are defined by
\begin{eqnarray}
k_{12}&=&\dfrac{1}{3\sqrt2}\left(\dfrac{|\tilde m_1+\tilde m_2|^2}{m_2^2-m_1^2} \Delta_\tau - 3 a_e\right)\nn\\[3mm]
k_{23}&=&\left\{
\begin{array}{ll}
\dfrac{1}{6}\left[\left(\dfrac{|\tilde m_1+\tilde m_3|^2}{m_3^2-m_1^2}+2\dfrac{|\tilde m_2+\tilde m_3|^2}{m_3^2-m_2^2}\right) \Delta_\tau -3 a_e -6 b_e \right] &\qquad\hbox{for NH} \\[3mm]
\dfrac{1}{6}\left[\left(\dfrac{|\tilde m_1+\tilde m_3|^2}{m_3^2-m_1^2}+2\dfrac{|\tilde m_2+\tilde m_3|^2}{m_3^2-m_2^2}\right) \Delta_\tau +3 a_e +3 a'_e \right] &\qquad \hbox{for IH}
\end{array}
\right.
\end{eqnarray}
\begin{equation}
k_{13}=\dfrac{1}{3\sqrt2}\sqrt{4m_3^2 \Delta_\tau^2\left(\dfrac{m_1\sin\alpha_{13}}{m_1^2-m_3^2}-\dfrac{m_2\sin\alpha_{23}}{m_2^2-m_3^2}\right)^2+ \left[\left(\dfrac{|\tilde m_1+\tilde m_3|^2}{m_1^2-m_3^2}-\dfrac{|\tilde m_2+\tilde m_3|^2}{m_2^2-m_3^2}\right)\Delta_\tau-3 a_e\right]^2}\;.\nn
\end{equation}
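To get a rough feeling for the size of the solar-angle shift, $k_{12}$ can be evaluated directly; the sketch below uses hypothetical input values (the masses, phases, $\Delta_\tau$ and $a_e$ are illustrative, not predictions of the model) and makes explicit the enhancement by the solar mass splitting in the denominator:

```python
import numpy as np

# Illustrative evaluation of k_12 (all inputs hypothetical): the factor
# |mt_1 + mt_2|^2 / (m_2^2 - m_1^2) is enhanced for quasi-degenerate light
# neutrinos, since the denominator is the solar mass splitting.
m1 = 0.02                                 # eV, normal ordering
m2 = np.sqrt(m1**2 + 7.6e-5)
a13, a23 = 0.4, 1.0                       # Majorana phases alpha_13, alpha_23
delta_tau = 1e-5                          # SM-like size of Delta_tau
a_e = 1e-6                                # representative charged-lepton coefficient

# |mt_1 + mt_2|^2 with alpha_1 - alpha_2 = 2 (alpha_13 - alpha_23)
mt_sum_sq = m1**2 + m2**2 + 2 * m1 * m2 * np.cos(2 * (a13 - a23))
k12 = (mt_sum_sq / (m2**2 - m1**2) * delta_tau - 3 * a_e) / (3 * np.sqrt(2))
assert 0 < k12 < 1e-3
```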
In the previous expressions we can clearly distinguish the contributions coming from the diagonalisation of the corrected tribimaximal neutrino mass matrix (\ref{LMP:mTBM}) from those coming from the diagonalisation of (\ref{LMP:CorrectedYeYe}). As is clear from (\ref{LMP:coeff}), the corrections to the tribimaximal mixing from the charged lepton sector are important only for hierarchical right-handed neutrinos and approach zero as soon as the spectrum becomes degenerate. On the other hand, the corrections from the neutrino sector are enhanced if the light neutrinos are quasi-degenerate and, in the MSSM case, if $\tan \beta$ is large.\\
The physical Majorana phases are also corrected due to the running and we found the following results:
\begin{equation}
\alpha_{ij(\varrho)}\simeq\alpha_{ij}+\delta\alpha_{ij}\Delta_\tau+\ldots
\end{equation}
where $\alpha_{ij}$ are the starting values at $\Lambda_f$ and
\begin{equation}
\delta\alpha_{13}=\dfrac{2}{3}\dfrac{m_1m_2\sin(\alpha_{13}-\alpha_{23})}{m_2^2-m_1^2}\;,\qquad\qquad
\delta\alpha_{23}=\dfrac{4}{3}\dfrac{m_1m_2\sin(\alpha_{13}-\alpha_{23})}{m_2^2-m_1^2}\;.
\label{LMP:Deltaalpha}
\end{equation}
At $\Lambda_f$, $\sin{\theta^{TB}_{13}}$ vanishes and as a result the Dirac CP-violating phase is undetermined. An alternative is to study the Jarlskog invariant, which is well-defined at each energy scale. At $\Lambda_f$, $J_{CP}$ vanishes, while after the renormalisation group running it is given by
\begin{equation}
J_{CP}=\dfrac{1}{18}\left|m_3\left(\dfrac{m_1\sin\alpha_{13}}{m_1^2-m_3^2}- \dfrac{m_2\sin\alpha_{23}}{m_2^2-m_3^2}\right)\right|\Delta_\tau\;.
\label{LMP:Jarlskog}
\end{equation}
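As an illustration of the smallness of this radiatively generated CP violation, eq. (\ref{LMP:Jarlskog}) can be evaluated for hypothetical normal-ordering masses and phases (all inputs below are illustrative):

```python
import numpy as np

# Illustrative size of the radiatively generated J_CP (all inputs
# hypothetical, normal ordering): it stays tiny, being proportional
# to Delta_tau.
m1 = 0.02
m2 = np.sqrt(m1**2 + 7.6e-5)
m3 = np.sqrt(m1**2 + 2.5e-3)
a13, a23 = 0.4, 1.0
delta_tau = 1e-5

J_CP = abs(m3 * (m1 * np.sin(a13) / (m1**2 - m3**2)
                 - m2 * np.sin(a23) / (m2**2 - m3**2))) * delta_tau / 18.0
assert 0 < J_CP < 1e-4
```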
Two comments are in order. First of all, in the expression for $k_{13}$, it is easy to recognise the expression for $J_{CP}$ in the first term under the square root, apart from global coefficients. This means that the running mixes the expressions of the reactor angle and of the Dirac CP-phase. Moreover, we can recover the value of the Dirac CP-phase directly from eq. (\ref{LMP:Jarlskog}) and we get the following expression:
\begin{equation}
\begin{split}
\cot\delta=&-\dfrac{m_1(m_2^2-m_3^2)\cos\alpha_{13}-m_2(m_1^2-m_3^2) \cos\alpha_{23}-m_3(m_1^2-m_2^2)}{m_1(m_2^2-m_3^2)\sin\alpha_{13}-m_2(m_1^2-m_3^2)\sin\alpha_{23}}+\\[3mm]
&-\dfrac{3 a_e (m_2^2-m_3^2)(m_1^2-m_3^2)}{2 m_3 \left[m_1(m_2^2-m_3^2)\sin\alpha_{13}-m_2(m_1^2-m_3^2)\sin\alpha_{23}\right] \Delta_\tau} \;.
\end{split}
\end{equation}
In the neutrino sector, the running contributions from the See-Saw terms are present only in the resulting mass eigenvalues:
\begin{equation}
m_{i(\varrho)}\simeq m_i(1+\delta m_i)+\ldots
\end{equation}
where $m_i$ are the starting values at $\Lambda_f$ and $\delta m_i$, in both the Standard Model and the MSSM and in both the normally and inversely hierarchical spectra, are given by
\begin{equation}
\delta m_1=\dfrac{m'_1}{m_1}-\dfrac{\Delta_\tau}{3}\;,\qquad
\delta m_2=2\dfrac{m'_2}{m_2}-\dfrac{2\Delta_\tau}{3}\;,\qquad
\delta m_3=2\dfrac{m'_3}{m_3}-\Delta_\tau\;,
\end{equation}
with $m'_i\equiv|\tilde m'_i|$, given as in eqs. (\ref{LMP:mpNH}, \ref{LMP:mpIH}).
\section{Running Effects in the Altarelli-Feruglio Model}
\label{Sec:LMP:AFmodel_RG}
In this section we will apply the analysis of renormalisation group running effects on the lepton mixing angles to the Altarelli-Feruglio model, already introduced in section \ref{Sec:AFTBM}. In order to perform such a study, it is important to verify the initial assumptions made in section \ref{Sec:LMP:RGcoefficients}; in particular, we see that eq. (\ref{LMP:Ynu}) exactly corresponds to the one implied by the Altarelli-Feruglio model, when moving to the physical basis (the phase of $y$ can be absorbed in the definition of $P$). On the other hand, the presence of flavon fields has a relevant impact on the results of the analysis. In the unbroken phase, flavons are active fields and should modify the RGEs. Since the only source of the $A_4$ breaking is the VEVs of the flavons, any flavour structure is preserved above the corresponding energy scale, whatever interactions are present. In particular, the lagrangian (\ref{AFTBM:LnuSeeSaw}) contains all possible leading order terms, given the group assignments, and its invariance under $A_4$ is maintained moving downward to the scale $\mean{\varphi}$, where significant changes in the flavour structure can appear. From eqs. (\ref{AFTBM:SSMassMatrices}) and (\ref{AFTBM:RHEigenvalues}), we deduce that $\mean{\varphi}\sim M_i$ and as a result in the Altarelli-Feruglio model $\Delta_\tau$ must be proportional to $\ln(\mean{\varphi}/\varrho)$ and not to $\ln(\Lambda_f/\varrho)$.
Furthermore, it is relevant for the subsequent discussion to recall the level of degeneracy of the neutrino masses in the allowed space of parameters. The ratios between the right-handed neutrino masses are well defined for the normal hierarchy, $M_1/M_3\sim11$ and $M_2/M_3\sim5$, while in the case of the inverse hierarchy, the ratio $M_1/M_2$ is fixed at $1$ while $M_3/M_2$ varies from about $3$ to $1$, going from the lower bound of $m_3$ up to the KATRIN sensitivity.
We will separately discuss the evolution of angles and phases for both types of hierarchy. In the following, the results are shown for the Standard Model and for the MSSM with $\tan\beta =15$, unless otherwise stated. Without loss of generality, we choose $y=1$ for our numerical analysis. We also set $\mean{\varphi}=10^{15}$ GeV. The spectrum spans the range obtained in (\ref{AFTBM:RangeMasses}).
\subsection{Running of the Angles}
\label{Sec:LMP:RGAngles}
Since we are interested in deviations of the corrected mixing angles from the tribimaximal predictions and in comparing them with experimental values, it is convenient to relate the coefficients $k_{ij}$ defined in section \ref{Sec:LMP:RGcoefficients} to physical observables. Keeping in mind that $\vert k_{ij}\vert \ll 1$ and that we start from a tribimaximal mixing matrix, it follows that
\begin{equation}
\sin\theta_{13}\simeq k_{13}\;,\qquad\cos2 \theta_{23}\simeq 2 k_{23}\;,\qquad \sin^2\theta_{12}-\dfrac{1}{3} \simeq \dfrac{2 \sqrt{2}}{3} k_{12}\;.
\end{equation}
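These linearised relations follow from expanding around the tribimaximal values; a quick numerical sketch confirms the first-order coefficients:

```python
import numpy as np

# First-order check of the relations above, expanding around the
# tribimaximal values theta_12 = arcsin(sqrt(1/3)), theta_23 = -pi/4,
# theta_13 = 0 for a small common shift k.
k = 1e-4
th12_TB = np.arcsin(np.sqrt(1/3))
th23_TB = -np.pi / 4

assert abs(np.sin(th12_TB + k)**2 - 1/3 - 2*np.sqrt(2)/3 * k) < 1e-7
assert abs(np.cos(2 * (th23_TB + k)) - 2 * k) < 1e-7
assert abs(np.sin(0 + k) - k) < 1e-7
```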
The corrections to the tribimaximal mixing angles as functions of $m_{1,3}$ in the normal and inverse hierarchies are shown in figure \ref{fig:LPM:ANG}.
\begin{figure}[ht!]
\centering
\includegraphics[width=7.8cm]{NH12.pdf}
\includegraphics[width=7.8cm]{IH12.pdf}
\includegraphics[width=7.8cm]{NH13.pdf}
\includegraphics[width=7.8cm]{IH13.pdf}
\includegraphics[width=7.8cm]{NH23.pdf}
\includegraphics[width=7.8cm]{IH23.pdf}
\vspace{-0.2cm}
\caption{\it Corrections to the tribimaximal mixing angles as functions of the lightest neutrino masses, for the normal hierarchy on the left and for the inverse hierarchy on the right. The plots show the MSSM case with $\tan\beta =15$ (solid blue) and the Standard Model case (black dashed), compared to the current $1\sigma$ and $3\sigma$ limits (dashed red). $m_{1,3}$ are restricted in a range which is given by eq. (\ref{AFTBM:RangeMasses}) or by the KATRIN bound.}
\vspace{-1cm}
\label{fig:LPM:ANG}
\end{figure}
We begin with the case of the normal hierarchy. Since the dependence of the corrected mixing angles on $\Delta_\tau$ is the same, Standard Model corrections are generally expected to be smaller than those in the MSSM. However, from figure \ref{fig:LPM:ANG} we see that, in the normal hierarchy, there is no large split between the Standard Model and MSSM curves. This fact suggests a dominant contribution coming from the charged lepton sector, as discussed in section \ref{Sec:LMP:RGcoefficients}. For the atmospheric and reactor angles, the deviations from the tribimaximal predictions lie roughly one order of magnitude below the $1\sigma$ limit. In particular, running effects on $\sin\theta_{13}$ are even smaller than the NLO contributions analysed in section \ref{Sec:AFTBM:NLO}, which are of $\mathcal{O}(u)$, without cancellations. On the other hand, since the solar angle is measured more precisely than the other two, the running effects become more important in this case. Indeed, the running corrections to the tribimaximal solar angle exceed the $1\sigma$ limit, as can be clearly seen in figure \ref{fig:LPM:ANG}. In any case, we observe that for both the atmospheric and solar angles, the running contribution is of the same order as the contribution from NLO operators.
We now move to the case of the inverse hierarchy. Since the neutrino spectrum predicted by the Altarelli-Feruglio model is in this case almost degenerate and in particular $m_2/m_1\sim1$, the contribution from the charged lepton sector in eq. (\ref{LMP:EqDeltaYeDagYeIH}) is subdominant. As a consequence, the difference between the Standard Model case and the MSSM one is mainly dictated by $\Delta_\tau$ defined in eq. (\ref{LMP:DeltaTau}). As a result, the running effects in the MSSM are always larger than in the Standard Model and for large $\tan \beta$ they are potentially dangerous. The curves corresponding to the atmospheric and reactor angles do not go above the $3\sigma$ and $1\sigma$ windows respectively. However, the deviation from $\theta^{TB}_{12}$ presents a more interesting situation. For example, for $\tan\beta \gtrsim 10$, the running effects push the value of the solar angle beyond the $3\sigma$ limit for the entire spectrum. For lower values of $\tan\beta$, the model is within the $3\sigma$ limit only for a (small) part of the spectrum where the neutrinos are less degenerate. Compared with the running effects, the contribution from NLO operators in the Altarelli-Feruglio model is under control in the inverse hierarchy.
\subsection{Running of the Phases}
\label{Sec:LMP:RGPhases}
Majorana phases are affected by renormalisation group running effects too. Since no experimental information on Majorana phases is available at present, we simply show their values at low-energy, comparing them with the predictions of the Altarelli-Feruglio model. We stress again that they are completely determined by only one parameter, the mass of the lightest neutrino, $m_1$ for the normal hierarchy and $m_3$ for the inverse hierarchy.
\begin{figure}[ht!]
\centering
\includegraphics[width=7.6cm]{NHMJ.pdf}\quad
\includegraphics[width=7.6cm]{IHMJ.pdf}
\caption{\it Majorana phases $\alpha_{13}$ and $\alpha_{23}$ as functions of the lightest left-handed neutrino masses. For the normal hierarchy (left panel) the corresponding curves at low and high energies are indistinguishable. For the inverse hierarchy (right panel) the curves refer to low-energy values in the MSSM with $\tan\beta =15$ (solid blue or red) and the Altarelli-Feruglio predictions at $\Lambda_f$ (dashed blue or red).}
\label{fig:LPM:MAJ}
\end{figure}
In the case of normal hierarchy, Majorana phases are essentially not corrected by running effects. This feature is due to the fact that $\delta \alpha_{13}$ and $\delta \alpha_{23}$ of eqs. (\ref{LMP:Deltaalpha}) are proportional to $\sin(\alpha_{13}-\alpha_{23})$, which is close to zero, as we can see in the left panel of figure \ref{fig:LPM:MAJ}. In the case of inverse hierarchy, MSSM running effects always increase the values of the phases when moving from high energy to low energy; for $\tan\beta=15$ they are sizeable, especially when the neutrino spectrum becomes degenerate. In the Standard Model context, by contrast, the low-energy curves cannot be distinguished from the high energy ones.
As described in section \ref{Sec:LMP:RGcoefficients}, a definite Dirac CP violating phase $\delta$ arises from running effects even though, in the presence of an exact tribimaximal mixing pattern, it is initially undetermined. Although the final Dirac phase can be large, the Jarlskog invariant, which measures observable CP violation, remains small because of the smallness of $\theta_{13}$ (see \cite{LMP_RGE} for details). These results are valid both for the Standard Model and for the MSSM.
\section{Conclusions of the Chapter}
\label{Sec:LMP:Conclusions}
In this chapter we have studied the running effects on neutrino mixing patterns. In See-Saw models, the running contribution from the neutrino Yukawa coupling $Y_\nu$, encoded in $J_\nu$, is generally dominant at energies above the See-Saw threshold. However, this effect, which in general introduces appreciable deviations from the leading order mixing patterns, does not affect the mixing angles under specific conditions: in the first part of the chapter, we have analysed two classes of models in which this indeed happens.
The first class is characterised by a $Y_\nu$ proportional to a unitary matrix. It is the case, for example, when the right-handed singlet neutrinos or the charged leptons belong to an irreducible representation of the flavour group. The second class is the mass-independent mixing pattern, in which, in particular, the effects of $J_\nu$ can be absorbed by a small shift of neutrino mass eigenvalues leaving mixing angles unchanged.
The widely studied tribimaximal mixing pattern belongs, for example, to this second class of models.
Subsequently, we focused on the Altarelli-Feruglio model. The aim was to analyse the running effects on the tribimaximal mixing pattern in addition to the NLO corrections already present in this model due to the spontaneous breaking of the symmetry, and to confront them with experimental values. The analysis has been performed both in the Standard Model and in the MSSM. We found that for normally hierarchical light neutrinos, the dominant running contribution comes from the charged lepton sector, which depends weakly on both $\tan\beta$ and the mass degeneracy. As a result, for this type of spectrum, the tribimaximal prediction is stable under running evolution. Moreover, the running contribution is of the same order as or smaller than the contribution from NLO operators. On the other hand, in the case of the inverse hierarchy, the deviation of the solar angle from its tribimaximal value can be larger than the NLO contribution and, in particular in the MSSM, for $\tan\beta \gtrsim 10$ an inversely hierarchical spectrum is strongly disfavoured. In the end, we observe that for both spectra, the reactor angle $\theta_{13}$ does not receive appreciable deviations from zero.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{Flavour Violation}
\label{Sec:FlavourViolation}
\setcounter{equation}{0}
In the previous sections we illustrated a series of flavour models in which the fermion masses and mixings can be explained by the introduction of an additional symmetry group. All these models can accommodate the presently measured fermion mass hierarchies and manifest particular mixing patterns for leptons and for quarks, which can all fit the experimental data (thanks also to the large number of parameters present in the models). Beyond these features, the study of neutrinoless double beta decay, of the type of hierarchy, and of the bounds on the smallest neutrino mass and on the sum of the neutrino masses could represent viable ways to confirm or disfavour each of these realisations. However, comparing the phenomenological predictions of the various models, we cannot see a clear distinction. In order to find new ways to characterise each model, it would be highly desirable to investigate other types of observables, not directly related to neutrino properties.
In this chapter we analyse the predictions of a class of flavour models concerning processes which violate the individual lepton number, such as $\mu\to e \gamma$, $\tau\to e\gamma$ and $\tau\to\mu\gamma$. We will focus on a flavour symmetry that contains the discrete group $A_4$, which is particularly successful in reproducing the lepton mixing angles observed in neutrino oscillations. In particular we consider the flavour group $G_f=A_4\times Z_3\times U(1)_{FN}$ and the flavon content which characterise the Altarelli-Feruglio model already discussed in section \ref{Sec:AFTBM}. The total lepton number is generally broken at a large scale $\Lambda_L$ (possibly related to the VEV of the flavons, $\langle\varphi\rangle$) and light neutrinos are assumed to be of Majorana type. Depending on the specific way the total lepton number is violated (either by higher dimensional operators or via the see-saw mechanism), the neutrino mass spectrum can have different properties that can be tested in future experiments. Other types of observables, not directly related to neutrino properties, naturally arise if there is new physics at a much lower energy scale $M$, around $1\div 10$ TeV. Indeed we have several indications, both from the experimental and from the theory side, that this can be the case. For instance, the observed discrepancy in the anomalous magnetic moment of the muon, the overwhelming evidence of dark matter, the evolution of the gauge coupling constants towards a common high-energy value and the solution of the hierarchy problem can all benefit from the existence of new particles around the TeV scale. In this chapter we assume that such a new scale exists and that the associated degrees of freedom do not provide new sources of baryon and/or lepton number violation.
At least four different scales are present in our approach: the scale of flavour dynamics $\Lambda_f$, the lepton number breaking scale $\Lambda_L$, the scale introduced by the VEVs of the flavon fields $\langle\varphi\rangle$ and the new physics scale $M$. A generic hierarchy among the scales is $M \ll \langle \varphi \rangle \ll \Lambda_f$ with $\Lambda_L$ expected to be comparable to or smaller than $\Lambda_f$.
We will first adopt an effective field theory approach, where the dominant physical effects of the new particles at low energies can be described by local six-dimensional operators, suppressed by two powers of the new mass scale $M$ and explicitly conserving baryon and lepton numbers. They will contribute to physical effects like the magnetic dipole moments (MDMs) of the charged leptons, their electric dipole moments (EDMs) and lepton flavour violating (LFV) transitions like $\mu\to e \gamma$, $\tau\to e \gamma$ and $\tau\to\mu\gamma$. We will treat separately the general case, where no further requirement is enforced, and the supersymmetric case, where additional constraints are present. In this second context a cancellation is expected to take place, considerably changing the conclusions that can be derived from the existing bound on $\mu\to e \gamma$. It is then important to perform a direct computation of the branching ratios within an explicit supersymmetric model incorporating the flavour symmetry $A_4\times Z_3\times U(1)_{FN}$.
In the second part of the chapter we consider the Altarelli-Feruglio model by adding a full set of Supersymmetry breaking terms consistent with the flavour symmetry. We assume that the breaking of Supersymmetry occurs at a scale higher than or comparable to the flavour scale, simulated in our effective Lagrangian by a cutoff, so that at energies close to the cutoff scale we have non-universal boundary conditions for the soft Supersymmetry breaking terms, dictated by the flavour symmetry. Depending on the assumed mechanism of Supersymmetry breaking we may have boundary conditions different from these, possibly enforced at a smaller energy scale. For this reason, our approach maximises the possible effects on LFV processes.
We perform a detailed calculation of the slepton mass matrices in the physical basis and evaluate the branching ratios for the mentioned LFV decays in the mass insertion (MI) approximation. We find a behaviour for these processes which is different from what is expected in the supersymmetric variant of the effective Lagrangian approach: in particular we identify an additional contribution to the right-left (RL) block of the slepton mass matrix which survives the cancellation. We then enumerate the conditions under which such a contribution is absent and the original behaviour is recovered: we could not find a dynamical explanation to justify the realisation of these conditions in our model, even if some of them are naturally realised in the context of supergravity (SUGRA) models.
Finally we comment on the agreement of our results with the experimental measurements and bounds. In particular, assuming a SUGRA framework with a common mass scale $m_{SUSY}$ for soft sfermion and Higgs masses and a common mass $m_{1/2}$ for gauginos at high energies, we numerically study the normalised branching ratios of $\ell_i\to \ell_j\gamma$ using full one-loop expressions and explore the parameter space of the model. We find that the branching ratios for $\mu\to e \gamma$, $\tau\to \mu\gamma$ and $\tau\to e \gamma$ are all of the same order of magnitude. Therefore, applying the present MEGA bound on $BR(\mu\to e \gamma)$, this implies that $\tau\to \mu\gamma$ and $\tau\to e \gamma$ have rates much smaller than the present (and near future) sensitivity. Moreover, still considering the MEGA limit, we find that small values of the symmetry breaking parameter and of $\tan\beta$ are favoured for $m_{SUSY}$ and $m_{1/2}$ below $1000$ GeV, i.e. in the range of a possible detection of sparticles at the LHC. Furthermore, it turns out to be rather unnatural to reconcile the values of superparticle masses necessary to account for the measured deviation in the muon anomalous magnetic moment from the Standard Model value with the present bound on the branching ratio of $\mu\to e\gamma$. In our model, values of this deviation smaller than $100 \times 10^{-11}$ are favoured.
\section{Low-Energy Effective Approach}
\label{Sec:LFV:Effective}
\setcounter{footnote}{3}
In this section we adopt an effective field theory approach, where the dominant physical effects of the new particles at low energies
can be described by local dimension six operators, suppressed by two powers of the new mass scale $M$ and explicitly conserving baryon and lepton numbers. We can account for the flavour breaking effects by requiring invariance of these operators under the flavour symmetry and by encoding the symmetry breaking effects in the flavon fields. This approach is closely related to, and indeed inspired by, that of minimal flavour violation (MFV)\cite{MFV,MLFV1,MLFVother} already described in section \ref{Sec:MFV}. While such a minimal symmetry choice has the advantage that it can accommodate any pattern of lepton masses and mixing angles, it does not provide any clue about the origin of the approximate tribimaximal pattern observed in the lepton mixing matrix. For this reason, here we choose a flavour group that includes the discrete factor $A_4$: $G_f=A_4\times Z_3\times U(1)_{FN}$, a sort of minimal choice to properly account for both the neutrino and the charged lepton mass spectra.
Following section \ref{Sec:AFTBM}, neutrinos can get their small masses through the low-energy Weinberg operator and the tribimaximal pattern, in the basis of diagonal charged leptons, is achieved via a specific vacuum misalignment of the flavons. The Lagrangian can be written as an expansion in powers of $\varphi/\Lambda_f$. When flavons develop a VEV, $\mean{\varphi}$, all the flavour information is encoded in the Yukawa couplings: $Y_f=Y_f\left(\mean{\varphi}\right)$.
Under the assumption that $\vert\langle\varphi\rangle\vert\ll 1$, the functions $Y_f$ can be expanded in powers of $\langle\varphi\rangle$ and only a limited number of terms gives a non-negligible contribution. Finally, in the language of effective field theories, the leading terms of the relevant effective Lagrangian are given by
\begin{equation}
\mscr{L}_{eff}=\mscr{L}_K+{e^c}^T H^\dagger Y_e \ell +\mscr{L}_\nu+ i\dfrac{e}{M^2}{e^c}^T H^\dagger\sigma^{\mu\nu}F_{\mu\nu}{\cal M}\ell +\text{h.c.}+[\text{4-fermion operators}]
\label{LFV:Eff:leff}
\end{equation}
where $\mscr{L}_K$ stands for the kinetic terms, $\mscr{L}_\nu$ contains the terms describing the neutrino masses and $e$ is the electric charge. The complex $3\times3$ matrix $\cM$, with indices in the flavour space, is a function of $\langle\varphi\rangle$: $\cM=\cM\left(\langle\varphi\rangle\right)$. Invariance under $SU(2)_L\times U(1)_Y$ gauge transformations is guaranteed if $F_{\mu\nu}$ is any arbitrary combination of $B_{\mu\nu}$, the field strength of $U(1)_Y$, and $\sigma^a W^a_{\mu\nu}$, the non-Abelian field strength of $SU(2)_L$. Here we are interested in the component of this combination along the direction of the unbroken $U(1)_{em}$ and we identify $F_{\mu\nu}$ with the electromagnetic field strength.
We can imagine that we derive such an effective Lagrangian from a fundamental theory by integrating out two different sets of modes. In a first step we can integrate out the flavon fields and the possible degrees of freedom associated with the violation of $B-L$, thus obtaining a complete set of mass terms for all the light particles including neutrinos, charged fermions and the quanta with masses around the TeV scale. Subsequently, around the scale $M=1\div 10$ TeV, we integrate out these additional quanta and we generate the other operators listed in eq. (\ref{LFV:Eff:leff}). The latter are still invariant under $G_f$, once we treat the symmetry breaking parameters as spurions.
In this way we can fully consider all flavour symmetry breaking effects and keep a high degree of predictability since the expansion in the small symmetry breaking parameters can be truncated after few terms. Thereby, the same symmetry breaking parameters that control lepton masses and mixing angles also control the flavour pattern of the other operators in $\mscr{L}_{eff}$. Moreover the effects described by these operators are suppressed by $1/M^2$ and not by inverse powers of the larger scales $\Lambda_f$ and $\Lambda_L$ and this opens up the possibility that they might be observable in the future.
In a field basis where the kinetic terms are canonical and the charged lepton mass matrix is diagonal (we denote vectors and matrices in this basis with a hat), the real and imaginary parts of the matrix elements $\hat{\cM}_{ii}$ are proportional to the MDMs $a_i$ and to the EDMs $d_i$ of the charged leptons, respectively \cite{Raidal,Brignole,Ciuchini,MLFV1,MLFVother}:
\begin{equation}
a_i=2 m_i \dfrac{v}{\sqrt{2} M^2}\Re\hat{\cM}_{ii}\;,\qquad\qquad d_i=e \dfrac{v}{\sqrt{2} M^2} \Im \hat{\cM}_{ii}\qquad\qquad(i=e,\mu,\tau)\;.
\end{equation}
\vskip 0.2cm
The off-diagonal elements $\hat{\cM}_{ij}$ describe the amplitudes for the LFV transitions $\mu\to e \gamma$, $\tau\to\mu\gamma$ and $\tau\to e \gamma$ \cite{ArgandaHerrero,Raidal,Ciuchini,MLFV1,MLFVother,Brignole,BorzumatiMasiero,HisanoFukuyama}:
\begin{equation}
R_{ij}\equiv\dfrac{BR(\ell_i\to \ell_j\gamma)}{BR(\ell_i\to \ell_j\nu_i{\overline{\nu}_j})}=\dfrac{12\sqrt{2}\pi^3 \alpha}{G_F^3 m_i^2 M^4}\left(\vert\hat{\cM}_{ij}\vert^2+\vert\hat{\cM}_{ji}\vert^2\right)\;,
\end{equation}
where $\alpha$ is the fine structure constant, $G_F$ is the Fermi constant and $m_i$ is the mass of the lepton $\ell_i$. Finally the four-fermion operators describe other flavour violating processes like $\tau^-\to \mu^+ e^- e^-$ and $\tau^-\to e^+ \mu^- \mu^-$. In this section our focus will be mainly on the processes $\mu\to e \gamma$, $\tau\to\mu\gamma$ and $\tau\to e\gamma$.
\subsection{The Dipole Matrix}
\label{Sec:LFV:EFF:DipoleMatrix}
\setcounter{footnote}{3}
In this section we present the analytical results for the dipole matrix, which is the main ingredient needed to evaluate MDMs, EDMs and LFV transitions. Before proceeding, a few comments are in order. We assume that the small symmetry breaking parameters of the Altarelli-Feruglio model, $u$ and $t$, are real; this can be done without loss of generality by absorbing all complex phases into the couplings of the Lagrangian. We further take the allowed range of values for $u$ to be $0.001<u<0.05$, while $t$ should be around $0.05$, both in the general and in the supersymmetric contexts. The general Lagrangian which describes charged leptons and neutrinos for the class of models with the flavour symmetry $G_f$ and with the same flavon content as the Altarelli-Feruglio model is illustrated in eqs. (\ref{AFTBM:Ll}, \ref{AFTBM:Lnu}) and here we only recall the elements which are useful for the present analysis: in particular we concentrate only on the charged lepton sector. When calculating the kinetic terms, we find that the non-canonical contributions are small (at most of order $\cO(u)$ and $\cO(t^2)$). They should be taken into account when calculating lepton masses and the dipole transitions induced by the matrix $\hat{\cM}$, but an explicit calculation shows that at the leading order in our expansion parameters the charged lepton masses and the elements of $\hat{\cM}$ are not affected by these non-canonical kinetic terms \cite{Kahler}. For this reason we can safely assume canonical kinetic terms from the beginning (for a detailed discussion see the original paper \cite{FHLM_Efficace}).
Looking at eq. (\ref{LFV:Eff:leff}), we observe that the Lagrangian of the dipole moments has the same structure in flavour space as that of the charged lepton masses. As a consequence, the entries of the charged lepton mass matrix and of the dipole matrix are of exactly the same order of magnitude and only their coefficients differ. After the breaking of the flavour and the electroweak symmetries, we find the following matrices:
\begin{equation}
Y_e \sim \cM = \left(
\begin{array}{ccc}
\cO(t^2 u) & \cO(t^2 u^2) & \cO(t^2 u^2) \\
\cO(t u^2) & \cO(t u) & \cO(t u^2) \\
\cO(u^2) & \cO(u^2) & \cO(u) \\
\end{array}
\right)\;.
\label{LFV:Eff:yl&M_subleading}
\end{equation}
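As an illustration only, the magnitudes of the entries of $Y_e \sim \cM$ can be evaluated for benchmark values of the symmetry breaking parameters (chosen here inside the allowed ranges; the unknown $\cO(1)$ coefficients multiplying each entry are dropped):

```python
# Illustrative sketch, not from the original analysis: magnitudes of the
# entries of Y_e ~ M for assumed benchmark values u = 0.01, t = 0.05,
# dropping the unknown O(1) coefficients in front of each entry.
u, t = 0.01, 0.05

magnitudes = [
    [t**2 * u, t**2 * u**2, t**2 * u**2],
    [t * u**2, t * u,       t * u**2],
    [u**2,     u**2,        u],
]

for row in magnitudes:
    print(["%.1e" % x for x in row])

# The diagonal entries scale as t^2 u : t u : u, i.e. the charged lepton
# mass hierarchy m_e : m_mu : m_tau ~ t^2 : t : 1 up to O(1) factors.
```

The diagonal entries reproduce the familiar hierarchy $m_e:m_\mu:m_\tau \sim t^2:t:1$ up to order one factors.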
Notice that the off-diagonal elements of $Y_e$ and of $\cM$ originate either from the subleading contributions to the VEV of the $\varphi_T$ multiplet, or from a double flavon insertion of the type $\xi^\dagger \varphi_S$ or $\xi \varphi_S^\dagger$. In a generic case we expect that both these contributions are equally important and contribute at the same order to a given off-diagonal dipole transition. There is however a special case in which the double flavon insertion $\xi^\dagger \varphi_S$ and its conjugate are suppressed compared to the subleading corrections to $\varphi_T$. This happens when the underlying theory is supersymmetric and Supersymmetry is softly broken, under the general assumption that the only sources of chirality flip are either fermion masses or sfermion masses of left-right type. Both of them, up to the order $u^2$, are described by the insertion of $\varphi_T$ or $\varphi_T^2$ in the relevant operators. The origin of the suppression of $\xi^\dagger \varphi_S$ and $\xi \varphi_S^\dagger$ is the holomorphy of the superpotential in supersymmetric theories and we refer to the original paper \cite{FHLM_Efficace} for a complete discussion.
The impact of such a suppression is strong: indeed, once we move to the physical basis in which the charged lepton mass matrix is diagonal, some elements of the dipole matrix are depleted with respect to what happens in the general case.
\subsection{Dipole Transitions}
\label{Sec:LFV:EFF:DipoleTransitions}
\setcounter{footnote}{3}
Here we give the results for the dipole moments once all the transformations needed to move into the physical basis have been carried out, and we compare them with the experimental values. To move to the physical basis it is only necessary to rotate the charged lepton mass matrix into diagonal form; at leading order the dipole matrix, which in this basis will be denoted by $\mathcal{\hat M}$, is given by
\begin{equation}
\mathcal{\hat M} = \left(
\begin{array}{ccc}
\cO(t^2 u) & \cO(t^2 u^2) & \cO(t^2 u^2) \\
\cO(t u^2) & \cO(t u) & \cO(t u^2) \\
\cO(u^2) & \cO(u^2) & \cO(u) \\
\end{array}
\right)\;,\qquad
\mathcal{\hat M} = \left(
\begin{array}{ccc}
\cO(t^2\,u) & \cO(t^2\,u^2) & \cO(t^2\,u^2) \\
\cO(t\,u^3) & \cO(t\,u) & \cO(t\,u^2) \\
\cO(u^3) & \cO(u^3) & \cO(u) \\
\end{array}
\right)\;,
\label{LFV:Eff:Mhat}
\end{equation}
in the general and in the supersymmetric cases, respectively. The main difference between the general and the supersymmetric approaches is the suppression of the elements below the diagonal: $\sim\cO(u^2)$ in the first case and $\sim\cO(u^3)$ in the second. This additional suppression factor has a relevant consequence for the analysis of the LFV transitions. Furthermore, as reported in section \ref{Sec:AFTBM:NLO}, in the supersymmetric case the VEV of $\varphi_T$ at the NLO has equal second and third components ($c_2=c_3$), and as a result the elements ${\cal \hat{M}}_{\tau e}$ and ${\cal \hat{M}}_{\tau \mu}$ become exactly equal and not only of the same order.
The MDMs and EDMs for the general and the supersymmetric cases are of the same order of magnitude: we display only the leading terms, which arise at first order in the parameter $u$:
\begin{equation}
\begin{array}{rcl}
a_e &=& 2 m_e\dfrac{v}{\sqrt{2} M^2}\, \cO(t^2 \, u) \\[3mm]
a_\mu &=& 2 m_\mu\dfrac{v}{\sqrt{2} M^2}\,\cO(t \, u) \\[3mm]
a_\tau &=& 2 m_\tau\dfrac{v}{\sqrt{2} M^2}\,\cO(u)
\end{array}\;,\qquad\qquad
\begin{array}{rcl}
d_e&=&e \dfrac{v}{\sqrt{2} M^2}\, \cO(t^2 \, u) \\[3mm]
d_\mu&=&e \dfrac{v}{\sqrt{2} M^2}\, \cO(t \, u) \\[3mm]
d_\tau&=&e \dfrac{v}{\sqrt{2} M^2}\, \cO(u)\;.
\end{array}
\label{LFV:Eff:MDMEDM}
\end{equation}
Considering the explicit form of the charged lepton masses, we can write these expressions as follows:
\begin{equation}
a_i=\cO\left(2\dfrac{m_i^2}{M^2}\right)\;,\qquad\qquad d_i=\cO\left(e\dfrac{m_i}{M^2}\right)\;.
\label{LFV:Eff:oom}
\end{equation}
We can derive a bound on the scale $M$, by considering the existing limits on MDMs and EDMs and by using eqs. (\ref{LFV:Eff:oom}) as exact equalities to fix the ambiguity of the unknown coefficients. We find the results shown in table \ref{table:LFV:Eff:BoundsMDMEDM}. We see that, in order to accept values of $M$ in the range $1\div 10$ TeV, we should invoke a cancellation in the imaginary part of $\hat{\cal{M}}_{ee}$, which can be either accidental or due to CP-conservation in the considered sector of the theory.
\begin{table}[!ht]
\centering
\begin{math}
\begin{array}{|c|c|}
\hline
& \\[-9pt]
d_e<1.6\times 10^{-27}\;e\,\;\mathrm{cm}&M>80\,\;\mathrm{TeV}\\[3pt]
\hline
&\\[-9pt]
d_\mu<2.8\times 10^{-19}\;e\,\;\mathrm{cm}&M>80\,\;\mathrm{GeV}\\[3pt]
\hline
&\\[-9pt]
\delta a_e<3.8\times 10^{-12}&M>350\,\;\mathrm{GeV}\\[3pt]
\hline
&\\[-9pt]
\delta a_\mu\approx 30\times 10^{-10}&M\approx 2.7\,\;\mathrm{TeV}\\[3pt]
\hline
\end{array}
\end{math}
\caption{\it Experimental limits on lepton MDMs and EDMs\cite{ExperimentalBoundsMDMmu,ExperimentalBoundsMDMe,PDG08} and corresponding bounds on the scale $M$, derived from eqs. (\ref{LFV:Eff:oom}). The data on the $\tau$ lepton have not been reported since they are much less constraining. For the anomalous magnetic moment of the muon, $\delta a_\mu$ stands for the deviation of the experimental central value from the Standard Model expectation.}
\vspace{-0.5cm}
\label{table:LFV:Eff:BoundsMDMEDM}
\end{table}
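The bounds on $M$ quoted in table \ref{table:LFV:Eff:BoundsMDMEDM} can be cross-checked numerically, treating eqs. (\ref{LFV:Eff:oom}) as exact equalities, $\delta a_i = 2 m_i^2/M^2$ and $d_i = e\, m_i/M^2$ (the EDM limits are quoted in units of $e\,$cm, so the conversion constant $\hbar c \simeq 1.97\times 10^{-14}$ GeV$\,$cm is needed); this is only an order-of-magnitude sketch:

```python
# Illustrative cross-check of the bounds on M, treating the
# order-of-magnitude relations delta a_i = 2 m_i^2/M^2 and d_i = e m_i/M^2
# as exact equalities.
from math import sqrt

hbar_c = 1.9733e-14            # GeV * cm, converts 1/GeV to cm
m_e, m_mu = 0.000511, 0.10566  # charged lepton masses in GeV

M_de  = sqrt(m_e  * hbar_c / 1.6e-27)  # from d_e  < 1.6e-27 e cm
M_dmu = sqrt(m_mu * hbar_c / 2.8e-19)  # from d_mu < 2.8e-19 e cm
M_ae  = m_e  * sqrt(2 / 3.8e-12)       # from delta a_e < 3.8e-12
M_amu = m_mu * sqrt(2 / 30e-10)        # from delta a_mu ~ 30e-10

print(M_de, M_dmu, M_ae, M_amu)  # in GeV: ~8e4, ~86, ~3.7e2, ~2.7e3
```

The outputs reproduce the table entries of roughly $80$ TeV, $80$ GeV, $350$ GeV and $2.7$ TeV up to the unknown order one coefficients.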
Concerning the flavour violating dipole transitions in the general case, using eq. (\ref{LFV:Eff:Mhat}) we see that the rate for $\ell_i\to \ell_j\gamma$ is dominated by the contribution ${\cal\hat M}_{ij}$, since ${\cal\hat M}_{ji}$ is suppressed by a relative factor of $\cO(t)$ for $\mu\to e\gamma $ and $\tau\to\mu\gamma$ and of $\cO(t^2)$ for $\tau\to e\gamma$. We get:
\begin{equation}
R_{ij}=\dfrac{48\pi^3 \alpha}{G_F^2 M^4}\vert w_{ij} ~u\vert^2
\label{LFV:Eff:LFV}
\end{equation}
where $w_{ij}$ are coefficients of $\cO(1)$ which contain two contributions, one coming from the off-diagonal element $(ij)$ of the original dipole matrix ${\cal M}$ and one coming from the effect of diagonalising the charged lepton mass matrix. Both contribute at the same order in $u$. Since the quantities $w_{ij}$ are all expected to be of order one, we can conclude that in the class of models considered here the branching ratios for the three transitions $\mu\to e\gamma $, $\tau\to\mu\gamma$ and $\tau\to e\gamma$ should all be of the same order:
\begin{equation}
BR(\mu\to e \gamma)\approx BR(\tau\to\mu\gamma)\approx BR(\tau\to e \gamma)\;.
\end{equation}
This is a distinctive feature of our class of models, since in most of the other existing models there is a substantial difference between the branching ratios\cite{MFV,MLFV1,MLFVother,SUSYLFV+symmetries,SUSYFP+GUTs}. In particular it often occurs that $BR(\mu\to e \gamma)<BR(\tau\to\mu\gamma)$. Given the present experimental bound $BR(\mu\to e \gamma)<1.2\times 10^{-11}$\cite{MEGA}, our result implies that $\tau\to\mu\gamma$ and $\tau\to e \gamma$ have rates much below the present and expected future sensitivity \cite{ExperimentalBoundsDecaysTau}. Moreover, from the current (future) experimental limit on $BR(\mu\to e \gamma)$\cite{MEGA,MEG} and assuming $\vert w_{\mu e}\vert=1$, we derive the following bounds
\begin{equation}
\begin{array}{ll}
u = 0.001 &\longrightarrow\quad M>10~(30)\,\;\mathrm{TeV}\\[3mm]
u = 0.05 &\longrightarrow\quad M>70~(200)\,\;\mathrm{TeV}\;.
\end{array}
\end{equation}
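These bounds follow directly from inverting eq. (\ref{LFV:Eff:LFV}) for $M$; a minimal numerical sketch, under the stated assumptions $|w_{ij}|=1$ and $BR(\mu\to e\nu\bar\nu)\simeq 1$ (so that $R_{e\mu}\simeq BR(\mu\to e\gamma)$), reads:

```python
# Sketch: lower bound on M in the general case, inverting
# R = 48 pi^3 alpha u^2 / (G_F^2 M^4) < BR limit, with |w_ij| = 1.
from math import pi

alpha = 1 / 137.036
G_F = 1.1664e-5  # Fermi constant in GeV^-2

def M_min(u, br_limit):
    """Smallest M (in GeV) compatible with BR(mu -> e gamma) < br_limit."""
    return (48 * pi**3 * alpha * u**2 / (G_F**2 * br_limit)) ** 0.25

for u in (0.001, 0.05):
    # present MEGA limit 1.2e-11 and future MEG sensitivity ~1e-13
    print(u, M_min(u, 1.2e-11), M_min(u, 1e-13))
```

The numbers reproduce the $10~(30)$ TeV and $70~(200)$ TeV bounds quoted above; note $M_{min}\propto \sqrt{u}$ and $M_{min}\propto BR^{-1/4}$.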
This pushes the scale $M$ considerably above the range we were initially interested in. In particular $M$ is shifted above the region of
interest for $(g-2)_\mu$ and probably for LHC.
When considering the supersymmetric case we get
\begin{equation}
R_{ij}= \dfrac{48\pi^3 \alpha}{G_F^2 M^4}\left[\vert w^{(1)}_{ij} u^2\vert^2+\dfrac{m_j^2}{m_i^2} \vert w^{(2)}_{ij} u\vert^2\right]
\label{LFV:Eff:LFVsusy}
\end{equation}
where $w^{(1,2)}_{ij}$ are coefficients of $\cO(1)$ depending on the coefficients coming from the original dipole matrix and from the charged lepton mass matrix.
\begin{figure}[ht!]
\centering
\includegraphics[width=12cm]{PlotBRxU.jpg}
\vspace{-0.4cm}
\caption{\it The branching ratio of $\mu\to e \gamma$ as a function of $u$, eq. (\ref{LFV:Eff:muegamma}). The deviation of the anomalous magnetic moment of the muon from the Standard Model expectation is kept fixed to its experimental range\cite{ExperimentalBoundsMDMmu}. The unknown coefficients $\tilde{w}^{(1,2)}_{\mu e}$ are equal to 1 (black region) or are random complex numbers with absolute values between zero and two (blue points). The continuous (dashed) horizontal line corresponds to the present (future expected) experimental bound on $BR(\mu\to e\gamma)$\cite{MEGA,MEG}.}
\label{fig:LFV:Eff:BR}
\vspace{-0.5cm}
\end{figure}
Notice that now the first contribution on the right-hand side of eq. (\ref{LFV:Eff:LFVsusy}) is suppressed by a factor of $u$ compared to the non-supersymmetric case. In most of the allowed range of $u$, the branching ratios of $\mu\to e \gamma$ and $\tau \to\mu \gamma$ are similar and larger than the branching ratio of $\tau\to e \gamma$. Assuming $\vert w^{(1,2)}_{\mu e}\vert=1$, the present (future) experimental limit on $BR(\mu\to e \gamma)$ implies the following bounds
\begin{equation}
\begin{array}{ll}
u = 0.001 &\longrightarrow\quad M>0.7~(2)\,\;\mathrm{TeV}\\[3mm]
u = 0.05 &\longrightarrow\quad M>14~(48)\,\;\mathrm{TeV}\;.
\end{array}
\end{equation}
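The corresponding sketch for the supersymmetric case inverts eq. (\ref{LFV:Eff:LFVsusy}), again under the assumptions $|w^{(1,2)}_{\mu e}|=1$ and $BR(\mu\to e\nu\bar\nu)\simeq 1$:

```python
# Sketch: lower bound on M in the supersymmetric case, from
# R = 48 pi^3 alpha (u^4 + (m_e/m_mu)^2 u^2) / (G_F^2 M^4) < BR limit.
from math import pi

alpha, G_F = 1 / 137.036, 1.1664e-5   # G_F in GeV^-2
m_e, m_mu = 0.000511, 0.10566         # GeV

def M_min_susy(u, br_limit):
    combo = u**4 + (m_e / m_mu)**2 * u**2
    return (48 * pi**3 * alpha * combo / (G_F**2 * br_limit)) ** 0.25

print(M_min_susy(0.001, 1.2e-11))  # ~0.6 TeV (present limit)
print(M_min_susy(0.05, 1.2e-11))   # ~14 TeV
```

This reproduces the $0.7~(2)$ TeV and $14~(48)$ TeV bounds quoted below up to the unknown order one factors: for small $u$ the chirality-flip term $\propto (m_e/m_\mu)^2 u^2$ dominates, while for $u$ near $0.05$ the $u^4$ term takes over.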
We see that at variance with the non-supersymmetric case there is a range of permitted values of the parameter $u$ for which the scale $M$ can be sufficiently small to allow an explanation of the observed discrepancy in $a_\mu$, without conflicting with the present bound on $BR(\mu\to e \gamma)$. We can eliminate the dependence on the unknown scale $M$ by combining eqs. (\ref{LFV:Eff:oom}) and (\ref{LFV:Eff:LFVsusy}). For $\mu\to e \gamma$ we get
\begin{equation}
R_{e\mu}= \dfrac{12\pi^3 \alpha}{G_F^2 m_\mu^4}\left(\delta a_\mu\right)^2
\left[\vert \tilde{w}^{(1)}_{\mu e}\vert^2 \vert u\vert^4+\dfrac{m_e^2}{m_\mu^2} \vert \tilde{w}^{(2)}_{\mu e}\vert^2\vert u\vert^2\right]
\label{LFV:Eff:muegamma}
\end{equation}
where $\tilde{w}^{(1,2)}_{\mu e}$ are unknown, order one coefficients. We plot $BR(\mu\to e\gamma)$ versus $u$ in figure \ref{fig:LFV:Eff:BR}, where the coefficients $\tilde{w}^{(1,2)}_{\mu e}$ are kept fixed to 1 (black region) or are random complex numbers with absolute values between zero and two (blue points). The deviation of the anomalous magnetic moment of the muon from the Standard Model prediction is kept within the interval of experimentally allowed values, about three sigma away from zero. Even if our ignorance of the coefficients $\tilde{w}^{(1,2)}_{\mu e}$ does not allow us to derive a sharp limit on $u$, we see that the present limit on $BR(\mu\to e \gamma)$ disfavours values of $u$ larger than a few percent. We recall that in this model the magnitudes of $u$ and $\theta_{13}$ are comparable.
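The qualitative behaviour of the black region in figure \ref{fig:LFV:Eff:BR} can be sketched by evaluating eq. (\ref{LFV:Eff:muegamma}) with $|\tilde{w}^{(1,2)}_{\mu e}|=1$ and $\delta a_\mu$ fixed to a representative value of $30\times 10^{-10}$ (an assumption made here for illustration):

```python
# Sketch of BR(mu -> e gamma) vs u from the formula eliminating M,
# with |w~^{(1,2)}| = 1 and delta a_mu = 30e-10 (assumed benchmark).
from math import pi

alpha, G_F = 1 / 137.036, 1.1664e-5  # G_F in GeV^-2
m_e, m_mu = 0.000511, 0.10566        # GeV
delta_a_mu = 30e-10

def br_mu_e_gamma(u):
    return (12 * pi**3 * alpha / (G_F**2 * m_mu**4) * delta_a_mu**2
            * (u**4 + (m_e / m_mu)**2 * u**2))

for u in (0.001, 0.01, 0.05):
    print(u, br_mu_e_gamma(u))
# For u around one percent the rate reaches the MEGA bound 1.2e-11,
# which is why larger values of u are disfavoured.
```

Under these assumptions the predicted rate crosses the MEGA bound for $u$ of order one percent, consistently with the statement in the text.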
\subsection{Comparison with Minimal Flavour Violation }
\label{Sec:LFV:EFF:MFV}
\setcounter{footnote}{3}
It is instructive to compare the previous results with those of MFV \cite{MFV,MLFV1,MLFVother}. We have already introduced the main features of MFV in section \ref{Sec:MFV} and we recall here only that, by going to a basis where the charged leptons are diagonal, the Yukawa matrices for charged leptons and neutrinos can be written as
\begin{equation}
\hat{Y}_e=\dfrac{\sqrt{2}}{v} \diag(m_e,\,m_\mu,\,m_\tau)\;,\qquad\qquad \hat{Y}_\nu=\dfrac{\Lambda_L}{4v^2} U^* \diag(m_1,\,m_2,\,m_3) U^\dagger\;,
\end{equation}
where $U$ is the lepton mixing matrix. The diagonal elements of the matrix $\mathcal{\hat{M}}$ evaluated in MFV are analogous to those of the previous class of models and similar bounds on the scale $M$ are derived from the existing data on MDMs and EDMs. The off-diagonal elements are given by:
\begin{equation}
\mathcal{\hat{M}}_{ij}\;=\;\beta (\hat{Y}_e \hat{Y}_\nu^\dagger \hat{Y}_\nu)_{ij}
\;=\;\sqrt{2}\beta\dfrac{m_i}{v}\dfrac{\Lambda_L^2}{16v^4}\left[\Delta m^2_{sol} U_{i2} U^*_{j2}\pm\Delta m^2_{atm} U_{i3} U^*_{j3}\right]
\end{equation}
where $\beta$ is an overall coefficient of order one and the plus (minus) sign refers to the case of normal (inverted) hierarchy.
We see that, due to the presence of the ratio $\Lambda_L^2/v^2$, the overall scale of these matrix elements is much less constrained than in the previous case. This is due to the fact that MFV does not restrict the overall strength of the coupling constants $Y_\nu$, apart from the requirement
that they remain in the perturbative regime. Very small or relatively large (but smaller than one) $Y_\nu$ can be accommodated by adjusting the scale $\Lambda_L$. On the contrary this is not allowed in the case previously discussed where the size of the symmetry breaking effects is restricted to the small window ($0.001<u<0.05$) and the scale $\Lambda_L$ is determined within a factor of about fifty. The conclusion is that in MFV the non-observation of $\ell_i\to \ell_j\gamma$ could be justified by choosing a small $\Lambda_L$, while a positive signal in $\mu\to e \gamma$ with a branching ratio in the range $1.2\times 10^{-11}\div 10^{-13}$ could also be fitted by an appropriate $\Lambda_L$, apart from a small region of the $\theta_{13}$ angle, around $\theta_{13}\approx0.02$ where a cancellation can take place.
The dependence on the scale $\Lambda_L$ is eliminated by considering ratios of branching ratios:
\begin{equation}
\dfrac{BR(\mu\to e\gamma)}{BR(\mu\to e\nu_\mu{\bar \nu_e})}\frac{BR(\tau\to \mu\nu_\tau{\bar \nu_\mu})}{BR(\tau\to \mu\gamma)}=
\left\vert\frac{2\Delta m^2_{sol}}{3\Delta m^2_{atm}}\pm \sqrt{2}\sin\theta_{13} e^{i\delta}\right\vert^2<1\;,
\label{mfv}
\end{equation}
where we took the tribimaximal ansatz to fix $\theta_{12}$ and $\theta_{23}$. We see that $BR(\mu\to e\gamma)<BR(\tau\to \mu\gamma)$ always in MFV. Moreover, for $\theta_{13}$ above approximately $0.07$, $BR(\mu\to e\gamma)<1.2\times 10^{-11}$ implies $BR(\tau\to \mu\gamma)<10^{-9}$. For $\theta_{13}$ below $0.07$, apart possibly from a small region around $\theta_{13}\approx0.02$, both the transitions $\mu\to e \gamma$ and $\tau\to\mu\gamma$ might be above the sensitivity of the future experiments.
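The right-hand side of eq. (\ref{mfv}) can be checked numerically; a minimal sketch, using representative oscillation parameters assumed here ($\Delta m^2_{sol}=7.6\times 10^{-5}$ eV$^2$, $\Delta m^2_{atm}=2.4\times 10^{-3}$ eV$^2$), reads:

```python
# Numerical check of the MFV ratio |2 dm2_sol/(3 dm2_atm)
# + sqrt(2) sin(theta13) e^{i delta}|^2, with assumed representative
# oscillation parameters.
from cmath import exp
from math import sin, pi

dm2_sol, dm2_atm = 7.6e-5, 2.4e-3  # eV^2, assumed benchmark values

def ratio(theta13, delta):
    return abs(2 * dm2_sol / (3 * dm2_atm)
               + 2**0.5 * sin(theta13) * exp(1j * delta)) ** 2

print(ratio(0.0, 0.0))   # ~4.5e-4: BR(mu->e gamma) << BR(tau->mu gamma)
print(ratio(0.015, pi))  # near-complete cancellation for theta13 ~ 0.02
```

The first term alone gives $2\Delta m^2_{sol}/(3\Delta m^2_{atm})\approx 0.02$, so the cancellation indeed occurs for $\sqrt{2}\sin\theta_{13}\approx 0.02$, i.e. around $\theta_{13}\approx 0.015$, matching the small region mentioned in the text.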
We also observe that in MFV the only difference between the general case and the supersymmetric one is the presence of two Higgs doublets in the low-energy Lagrangian. In MFV a chirality flip in leptonic operators necessarily requires the insertion of the matrix $\hat{Y}_e$, both in the general and in the supersymmetric case and, apart from the possibility of $\tan\beta$ enhanced contributions, similar predictions for the LFV processes are expected in the two cases.
\mathversion{bold}
\section{Explicit Supersymmetric $A_4$ Model}
\label{Sec:LFV:FullSUSYModel}
\setcounter{footnote}{3}
\mathversion{normal}
In this section we move to consider an explicit supersymmetric model incorporating the flavour symmetry $A_4\times Z_3\times U(1)_{FN}$. The Lagrangian of the model accounts for three distinct terms:
\begin{equation}
\begin{split}
\mscr{L}=&\;\int \mathrm{d}^2\theta_{SUSY} \mathrm{d}^2\overline{\theta}_{SUSY} \cK (\overline{z}, e^{2 V} z)+\left[\int \mathrm{d}^2 \theta_{SUSY} w(z)+\text{h.c.}\right]\\[3mm]
&+\frac{1}{4}\left[\int \mathrm{d}^2\theta_{SUSY} f(z) {\cal W W}+\text{h.c.}\right]\;,
\end{split}
\label{LFV:leel}
\end{equation}
where ${\cal K}(\overline{z},z)$ is the K\"ahler potential, $w(z)$ is the superpotential, $f(z)$ is the gauge kinetic function, $V$ is the Lie-algebra valued vector supermultiplet, describing the gauge fields and their superpartners. \footnote{In our notation a chiral superfield and its $R$-parity even component are denoted by the same letter. The $R$-parity odd component is indicated by a tilde in the following and the conjugate (anti-chiral) superfield is denoted by a bar.} Finally ${\cal W}$ is the chiral superfield describing, together with the function $f(z)$, the kinetic terms of gauge bosons and their superpartners. Each of the terms on the right-hand side can be written in an expansion in powers of the flavon fields. The flavour symmetry of the Lagrangian $\mscr{L}$ is spontaneously broken by the VEVs of the flavons, which in the supersymmetric context are aligned as
\begin{equation}
\begin{array}{ccl}
\dfrac{\langle\varphi_T\rangle}{\Lambda_f}&=&(u,0,0)+(c' u^2,c u^2,c u^2)+\cO(u^3)\\[3mm]
\dfrac{\langle\varphi_S\rangle}{\Lambda_f}&=& c_b(u,u,u)+\cO(u^2)\\[3mm]
\dfrac{\langle\xi\rangle}{\Lambda_f}&=&c_a u+\cO(u^2)\;,
\end{array}
\label{LFV:vevs}
\end{equation}
where $c$, $c'$, $c_{a,b}$ are complex numbers with absolute value of order one and $u$ is one of the two small symmetry breaking parameters in the theory. With respect to eq. (\ref{AFTBM:vevsplus}), we specify the subleading corrections only for $\mean{\varphi_T}$, since those of $\mean{\varphi_S}$ and $\mean{\xi}$ do not affect the following analysis. Moreover we simplify the notation using $c'$ instead of $c_1$ and $c$ instead of $c_2$ and $c_3$, which are equal at this level of approximation. The second symmetry breaking parameter is the VEV of the Froggatt-Nielsen $U(1)_{FN}$ symmetry, $t$. It is useful to briefly recall the mechanism which ensures this specific vacuum misalignment. A set of driving fields, two $A_4$-triplets $\varphi_T^0$ and $\varphi_S^0$ plus an $A_4$-singlet $\xi^0$, is introduced into the model and the following driving superpotential can be written down:
\begin{equation}
\begin{split}
w_d\;=&\;M_T (\varphi_T^0 \varphi_T)+ g (\varphi_T^0 \varphi_T\varphi_T)+\\
&+g_1 (\varphi_S^0 \varphi_S\varphi_S)+ g_2 \tilde{\xi} (\varphi_S^0 \varphi_S)+ g_3 \xi^0 (\varphi_S\varphi_S)+ g_4 \xi^0 \xi^2+ g_5 \xi^0 \xi \tilde{\xi}+ g_6 \xi^0 \tilde{\xi}^2+\ldots\;,
\end{split}
\label{LFV:wd}
\end{equation}
where dots denote subleading non-renormalisable corrections. In the limit of unbroken Supersymmetry all $F$-terms vanish when the vacuum in eq. (\ref{LFV:vevs}) is considered. In \cite{AF_Modular} it is shown that this setting is an isolated minimum of the scalar potential, achieved in a completely non-tuned way. We remark that in the supersymmetric limit the VEVs of the driving fields $\varphi_T^0$, $\varphi_S^0$ and $\xi^0$ are zero. This is however in general no longer true, if we include soft Supersymmetry breaking terms into the flavon potential, as has been discussed in \cite{FHM_VEV}. The origin of the VEV for $\theta$ can be found in appendix \ref{AppB:Tp}. We recall the range of values in which $u$ and $t$ can run in the supersymmetric context:
\begin{equation}
0.007\lesssim u\lesssim0.05\;,\qquad\qquad t\approx0.05\;.
\label{LFV:Bounds}
\end{equation}
Since we have two independent symmetry breaking parameters, we consider a double expansion of $\mscr{L}$ in powers of $u$ and $t$. In this expansion we keep terms up to the second order in $u$, i.e. terms quadratic in the fields $\varphi_{S,T}$ and $\xi$. The expansion in the parameter $t$ is stopped at the first non-trivial order, that is by allowing as many powers of the field $\theta$ as necessary in order to obtain non-vanishing values for all entries of the matrices describing lepton masses as well as for the entries of the matrices describing kinetic terms and slepton masses. \footnote{Concerning the K\"ahler potential we observe that we can additionally write down operators involving the total invariant $\overline{\theta}\theta=|\theta|^{2}$. These contribute to the diagonal elements of the kinetic terms and the slepton masses. In the K\"ahler potential for the left-handed fields they can be safely neglected, since the leading order correction is of $O(u)$. In the right-handed sector, they contribute at the same order as the terms arising through a double flavon insertion.} Finally, second order corrections in $u$ also arise from the subleading terms of the VEV $\langle\varphi_T\rangle$ and are included in our estimates.
The soft Supersymmetry breaking terms are generated from the supersymmetric Lagrangian by promoting all coupling constants, such as Yukawa couplings, couplings in the flavon superpotential and couplings in the K\"ahler potential, to superfields with constant $\theta_{SUSY}^2$ and $\theta_{SUSY}^2\overline{\theta}_{SUSY}^2$ components \cite{LutyReview}. Through this we derive subsequently the soft masses $(m_{(e,\nu)LL}^2)_K$ and $(m_{e RR}^2)_K$ from the K\"ahler potential. One contribution to $m_{eRL}^2$, which we call $(m_{e RL}^2)_1$ in the following, arises from the Yukawa couplings present in the superpotential $w$.
Important contributions to slepton masses originate from the modification of the VEVs of flavons and driving fields due to Supersymmetry breaking effects. A detailed study of the VEVs of these fields and their dependence on the soft Supersymmetry breaking parameters is presented in \cite{FHM_VEV} and we summarise the main results here. When soft Supersymmetry breaking terms are included into the flavon potential, the VEVs in eq. (\ref{LFV:vevs}) receive additional contributions of order $m_{SUSY}$, completely negligible compared to $\Lambda_f\,u$. At the same time,
the driving fields $\varphi_T^0$, $\varphi_S^0$ and $\xi^0$ develop a VEV of the size of the soft Supersymmetry breaking scale $m_{SUSY}$.
An equivalent statement is that the auxiliary components of the flavons acquire a VEV at the leading order of the size of $m_{SUSY} \times u \, \Lambda_f$.
In particular, for the auxiliary component of the flavon supermultiplet $\varphi_T$ we have \cite{FHM_VEV}:
\begin{equation}
\dfrac{1}{\Lambda_f} \left\langle \dfrac{\partial w}{\partial \varphi_T} \right\rangle = \zeta \, m_{SUSY} \, \left\{ (u,0,0)
+ (c_F^\prime u^2, c_F u^2, c_F u^2) \right\}
\label{LFV:VEVsauxphiT}
\end{equation}
where $\zeta$, $c_F^\prime$ and $c_F$ are in general complex numbers with absolute value of order one. The parameter $\zeta$ vanishes in the special case of universal soft mass terms in the flavon potential. When different from zero, the VEVs of the auxiliary components of the flavon supermultiplet $\varphi_T$ generate another contribution to the soft masses of RL-type, which we denote as $(m_{e RL}^2)_2$. This contribution is analogous to the one which has been found before in the supergravity context and which can have a considerable effect on the size of the branching ratio of radiative leptonic decays, as shown in \cite{FtermSUGRA}. Indeed, as we shall see below, in the global supersymmetric model under consideration the leading dependence of the normalised branching ratios $R_{ij}$ on $u$ is dominated by $(m_{e RL}^2)_2$. We remark that the VEVs in eq. (\ref{LFV:VEVsauxphiT}) and those of the corresponding flavon field $\varphi_T$ in eq. (\ref{LFV:vevs}) have a similar structure but they are not proportional, in general. This is due to the different coefficients $c$, $c^\prime$ and $c_F$, $c_F^\prime$, which can be qualitatively understood as follows: the coefficients $c$, $c^\prime$ mainly depend on a set of parameters that remain in the supersymmetric limit and receive completely negligible corrections from the Supersymmetry breaking terms. On the contrary $\langle \partial w/\partial \varphi_T\rangle$ vanishes in the supersymmetric limit, to all orders in $u$, and $c_F$, $c_F^\prime$ crucially depend on the set of parameters describing the Supersymmetry breaking. We will see that, if $c$ and $c_F$ accidentally coincide (up to complex conjugation), a cancellation in the leading behaviour of $R_{ij}$ takes place.
Similarly, the VEV of the Froggatt-Nielsen field $\theta$ is shifted when soft Supersymmetry breaking terms are included in the potential, so that:
\begin{equation}
\frac{M_{FI}^2}{g_{FN}} - |\langle\theta\rangle|^2 = c_{\theta} \, m_{SUSY}^2 \; ,
\end{equation}
with $c_{\theta}$ an order one number. This leads to a contribution $(m_{e RR}^2)_{D,FN}$ to the soft masses of RR-type,
since only the right-handed charged leptons $e^c$ and $\mu^c$ are charged under $U(1)_{FN}$. Apart from these there are supersymmetric contributions to $m_{(e,\nu) LL}^2$ and $m_{e RR}^2$ from $F$ and $D$-terms, $(m_{(e,\nu) LL}^2)_{F (D)}$ and $(m_{e RR}^2)_{F (D)}$,
as well as a contribution to $m_{e RL}^2$ coming from the $F$-term of $H_d$, called $(m_{e RL}^2)_{3}$ in the following.
The detailed derivation of the kinetic terms as well as of the mass matrices for fermion and sfermions can be found in the original paper \cite{FHLM_LFV}.
\section{Slepton Masses in the Physical Basis}
\label{Sec:LFV:Phys}
\setcounter{footnote}{3}
In this section we discuss the results for the slepton masses in the physical basis and comment on results found in the literature. To derive the physical masses and the unitary transformations that enter our computation, we have to go into a basis in which the kinetic terms are canonical for both slepton and lepton fields. Subsequently, we diagonalise the mass matrix of the charged leptons via a biunitary transformation. To avoid flavour-violating gaugino-lepton-slepton vertices in this intermediate step, we perform the same transformation on both fermion and scalar components of the involved chiral superfields. This procedure gives us the physical slepton mass matrices ${\hat m}^2_{(e,\nu)LL}$, ${\hat m}^2_{eRR}$ and ${\hat m}^2_{eRL}$. The results shown here are obtained under the assumption that all the parameters of the model are real. The analytical expressions for the slepton mass matrices in the physical basis contain the first non-vanishing order in each of the matrix elements. We start with the left-left (LL) block. The contribution from the soft breaking terms is common to charged sleptons and sneutrinos and reads:
\begin{equation}
\begin{array}{lcl}
(\hat{m}_{eLL}^2)_K &=& (\hat{m}_{\nu LL}^2)_K\\[3mm]
&&\hspace{-2cm}=\left(
\begin{array}{ccc}
n_0 + 2 \, \hat{n}_1 \, u
& (\hat{n}_4 + (3 \, \hat{n}_1 + \hat{n}_2) \, c) \, u^2
& (\hat{n}_5 + (3 \, \hat{n}_1 - \hat{n}_2) \, c) \, u^2 \\
(\hat{n}_4 + (3 \, \hat{n}_1 + \hat{n}_2) \, c) \, u^2
& n_0 - (\hat{n}_1 + \hat{n}_2) \, u
& (\hat{n}_6 - 2 \, \hat{n}_2 \, c) \, u^2 \\
(\hat{n}_5 + (3 \, \hat{n}_1 - \hat{n}_2) \, c) \, u^2
& (\hat{n}_6 - 2 \, \hat{n}_2 \, c) \, u^2
& n_0 - (\hat{n}_1 - \hat{n}_2) \, u
\end{array}
\right) \, m_{SUSY}^2
\end{array}
\label{LFV:eq:m_LL_hat}
\end{equation}
where $\hat{n}_i$ are complex parameters with modulus of order 1. The supersymmetric $F$ and $D$-term contributions are given by:
\begin{equation}
(\hat{m}_{eLL}^2)_F=\hat{M}_e^T \hat{M}_e\;,\qquad\qquad(\hat{m}_{\nu LL}^2)_F\simeq0
\end{equation}
and
\begin{equation}
(\hat{m}^2_{eLL})_D=\left(-\dfrac{1}{2}+\sin^2\theta_W \right) \cos 2\beta ~m_Z^2 \times \mathbb{1}\;,\qquad
(\hat{m}^2_{\nu LL})_D=\left(+\dfrac{1}{2} \right) \cos 2\beta~m_Z^2 \times \mathbb{1}\;,
\end{equation}
with $\hat{M}_e$ being the mass matrix for the charged leptons in the same basis, i.e. diagonal and with canonically normalised kinetic terms. The supersymmetric $D$-term contributions are proportional to the unity matrix. Notice that in the physical basis all supersymmetric contributions are diagonal in flavour space. Both the $F$ and the $D$-term contributions are small compared to that coming from the K\"ahler potential. The relative suppression is of order $\left(\hat{M}_e^T \hat{M}_e\right)/m_{SUSY}^2$ and $m_Z^2/m_{SUSY}^2$, respectively, which do not exceed the per cent level for typical values of $m_{SUSY}$ around 1 TeV. Note also that the supersymmetric part is the only one that distinguishes between charged sleptons and sneutrinos.
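The quoted suppression of the supersymmetric $F$ and $D$-term contributions can be confirmed with a back-of-the-envelope estimate. The following sketch (standard mass values; $m_{SUSY}=1$ TeV is the illustrative scale mentioned in the text) is not part of the original analysis:

```python
# Back-of-the-envelope check of the quoted suppression factors (standard
# masses; m_SUSY = 1 TeV is an illustrative value, not a model prediction).
m_Z = 91.19      # GeV
m_tau = 1.777    # GeV; the heaviest charged lepton gives the largest F-term entry
m_SUSY = 1000.0  # GeV

d_term_ratio = (m_Z / m_SUSY) ** 2     # D-term vs soft-term contribution
f_term_ratio = (m_tau / m_SUSY) ** 2   # F-term vs soft-term contribution

print(f"D-term / soft ~ {d_term_ratio:.2%}")   # below the per cent level
print(f"F-term / soft ~ {f_term_ratio:.1e}")
```

Both ratios indeed stay below the per cent level, in agreement with the statement above.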
For $\hat{m}_{eRR}^2$ we find that $(\hat{m}_{eRR}^2)_K$ is given by:
\begin{equation}
\label{LFV:eq:m_RR_hat}
(\hat{m}_{eRR}^2)_K = \left( \begin{array}{ccc}
n_1^c
& 2 \, c \, (n_1^c - n_2^c) \, \dfrac{m_e}{m_\mu} u
& 2 \, c \, (n_1^c - n_3^c) \, \dfrac{m_e}{m_\tau} u\\[0.1in]
2 \, c \, (n_1^c - n_2^c) \, \dfrac{m_e}{m_\mu} u
& n_2^c
& 2 \, c \, (n_2^c - n_3^c) \, \dfrac{m_\mu}{m_\tau} u\\[0.1in]
2 \, c \, (n_1^c - n_3^c) \, \dfrac{m_e}{m_\tau} u
& 2 \, c \, (n_2^c - n_3^c) \, \dfrac{m_\mu}{m_\tau} u
& n_3^c
\end{array}
\right) \, m_{SUSY}^2\;.
\end{equation}
The supersymmetric terms are:
\begin{equation}
(\hat{m}_{eRR}^2)_F=\hat{M}_e \hat{M}_e^T\qquad\mbox{and}\qquad
(\hat{m}^2_{eRR})_D=-\sin^2\theta_W \cos 2\beta\,m_Z^2 \times \mathbb{1}\;.
\end{equation}
Also in this case the supersymmetric contributions are diagonal and numerically negligible in most of our parameter space. The dominant contribution is thus $(\hat{m}_{eRR}^2)_K$.
Finally, coming to the RL block of the mass matrix for charged sleptons, we find:
\begin{equation}
\label{LFV:eq:m_RL_hat}
(\hat{m}_{eRL}^2)_{1} = \left( \begin{array}{ccc}
\dfrac{z_e}{y_e} \, m_e
& 2 c \, \dfrac{(z_e y_\mu - z_\mu y_e)} {y_e y_\mu}\, m_e u
& 2 c\, \dfrac{(z_e y_\tau - z_\tau y_e)}{y_e y_\tau} \, m_e u\\[0.1in]
c \, \dfrac{(z_\mu y_\mu^\prime - z_\mu^\prime y_\mu)}{y_\mu^2} \, m_\mu u^2
& \dfrac{z_\mu}{y_\mu} m_\mu
& 2 c \,\dfrac{(z_\mu y_\tau - z_\tau y_\mu)}{y_\mu y_\tau} m_\mu u\\[0.1in]
c \, \dfrac{(z_\tau y_\tau ^\prime - z_\tau ^\prime y_\tau)}{y_\tau^2} \, m_\tau u^2
& c \, \dfrac{(z_\tau y_\tau ^\prime - z_\tau ^\prime y_\tau)}{y_\tau^2} \, m_\tau u^2
& \dfrac{z_\tau}{y_\tau} \, m_\tau
\end{array}
\right) \, m_{SUSY} \; ,
\end{equation}
\begin{equation}
\label{LFV:eq:m_RL_hat_2}
(\hat{m}_{eRL}^2)_{2} = \zeta \left( \begin{array}{ccc}
m_e
& (c_F -c) \, m_e u
& (c_F -c) \, m_e u\\[0.1in]
(c_F -c) \, m_\mu u
& m_\mu
& (c_F -c) \, m_\mu u\\[0.1in]
(c_F -c) \, m_\tau u
& (c_F -c) \, m_\tau u
& m_\tau
\end{array}
\right) \, m_{SUSY} \; ,
\end{equation}
and
\begin{equation} \label{LFV:eq:m_RL_hat_3}
(\hat{m}^2_{eRL})_3=-\mu \tan\beta\,\hat{M}_e\;.
\end{equation}
The matrix $\hat{m}_{eRL}^2$ is the sum of these three contributions. An important feature of $(\hat{m}_{eRL}^2)_1$ is that the elements below the diagonal are suppressed by a factor $u$ compared to the corresponding elements of $(\hat{m}_{eRL}^2)_2$. Nevertheless, there are cases in which this second contribution is suppressed. In the first case the VEVs of the auxiliary fields contained in the supermultiplet $\varphi_T$ vanish, i.e. the parameter $\zeta$ is zero, because the soft Supersymmetry breaking terms in the flavon potential are (assumed to be) universal, i.e. equal to the terms of the superpotential $w_d$ up to an overall proportionality constant \cite{FHM_VEV}. The second possibility arises if the VEVs of the auxiliary fields are completely aligned with those of the flavon $\varphi_T$ at the leading order as well as at NLO, such that $c_F$ becomes equal to $c$. In both cases the off-diagonal elements of $(\hat{m}_{eRL}^2)_{2}$ are more strongly suppressed than shown in eq. (\ref{LFV:eq:m_RL_hat_2}). We emphasise this fact here, since the suppression of the off-diagonal elements below the diagonal, as it occurs for $(\hat{m}_{eRL}^2)_{1}$, is relevant for the leading behaviour of the normalised branching ratios $R_{ij}$ in the expansion in $u$. As we shall see in section \ref{Sec:LFV:MI_AnaliticResults}, in the general case $R_{ij} \propto u^2$ holds, whereas, if the contribution in eq. (\ref{LFV:eq:m_RL_hat_2}) vanishes or is also suppressed, $R_{ij}$ is proportional to $u^4$. The contribution $(\hat{m}^2_{eRL})_3$ is diagonal in flavour space. Concerning the possible size of this contribution, note that $|\mu| \tan \beta/m_{SUSY}$ is the relative magnitude of the non-vanishing elements of $(\hat{m}_{eRL}^2)_3$ with respect to the corresponding ones in $(\hat{m}_{eRL}^2)_{1,2}$. Notice finally that the (31) and (32) elements of $\hat{m}_{eRL}^2$ coincide.
In \cite{FHLM_LFV} we present an analysis of the renormalisation group effects on the slepton masses in the leading Log approximation and we see that these effects can be neglected or absorbed into our parametrisation of the soft mass terms.
\section{Results in the Mass Insertion Approximation}
\label{Sec:LFV:MI}
We can now evaluate the normalised branching ratios $R_{ij}$ for the LFV transitions $\mu\to e \gamma$, $\tau\to\mu\gamma$ and $\tau\to e \gamma$. In this section we establish the leading dependence of the quantities $R_{ij}$ on the symmetry breaking parameter $u$. We then compare the results with the conclusions of the effective approach already illustrated in section \ref{Sec:LFV:Effective}. Performing this comparison, it is important to keep in mind that when $R_{\mu e}$ is dominated by a one-loop amplitude with virtual particles of mass $m_{SUSY}$, $M$ and $m_{SUSY}$ are roughly related by $M=(4\pi/g) m_{SUSY}$ and a given lower bound on $M$ corresponds to a lower bound on $m_{SUSY}$ one order of magnitude smaller.
\subsection{Analytic results}
\label{Sec:LFV:MI_AnaliticResults}
It is useful to first analyse the predictions in the so-called mass insertion (MI) approximation, where we have a full control of the results in its analytic form. A more complete discussion based on one-loop results can be found in section \ref{Sec:LFV:Numerical}. For the case at hand, the MI approximation consists in expanding the amplitudes in powers of the off-diagonal elements of the slepton mass matrices, normalised to their average mass. From the expression of the mass matrices of the previous section we see that in our case such an expansion amounts to an expansion in the parameters $u$ and $t$, which we can directly compare with eq. (\ref{LFV:Eff:LFVsusy}). A common value in the diagonal entries of both LL and RR blocks is assumed and we consequently set $n_0=n_1^c=n_2^c=n_3^c=1$ and also $\hat{n}_1=\hat{n}_2=0$ in this section, so that the average mass becomes $m_{SUSY}$. On the contrary, no assumptions have been made for the trilinear soft terms, which keep the expression as in eqs. (\ref{LFV:eq:m_RL_hat}-\ref{LFV:eq:m_RL_hat_3}). Concerning chargino and neutralino mass matrices, they carry a dependence on the vector boson masses $m_{W,Z}$ through off-diagonal matrix elements. Such a dependence is not neglected in this approximation, but only the leading order term of an expansion in $m_{W,Z}$ over the relevant supersymmetric mass combination is kept. At the same time, to be consistent, we have to neglect the supersymmetric contributions of $\hat{m}^2_{\nu LL}$ and $\hat{m}^2_{e LL}$ and therefore $\hat{m}^2_{\nu LL}$ and $\hat{m}^2_{e LL}$ coincide. Using these simplifications, the ratios $R_{ij}$ can be expressed as:
\begin{equation}
R_{ij}= \frac{48\pi^3 \alpha}{G_F^2 m_{SUSY}^4}\left(\vert A_L^{ij} \vert^2+\vert A_R^{ij} \vert^2 \right)\;.
\label{LFV:rij}
\end{equation}
At the leading order, the amplitudes $A_L^{ij}$ and $A_R^{ij}$ are given by:
\begin{eqnarray}
A_L^{ij}&=&a_{LL} (\delta_{ij})_{LL} + a_{RL} \frac{m_{SUSY}}{m_i} (\delta_{ij})_{RL}\nn\\
A_R^{ij}&=&a_{RR} (\delta_{ij})_{RR} + a_{LR} \frac{m_{SUSY}}{m_i} (\delta_{ij})_{LR}
\label{LFV:ALAR}
\end{eqnarray}
where $a_{CC'}$ $(C,C'=L,R)$ are dimensionless functions of the ratios $M_{1,2}/m_{SUSY}$, $\mu/m_{SUSY}$ and of $\tan\theta_W$ and can be found in appendix \ref{AppE:MI}. Their typical size is one tenth of $g^2/(16\pi^2)$, $g$ being the $SU(2)_L$ gauge coupling constant.
\begin{table}[!ht]
\centering
\begin{math}
\begin{array}{|c|c|c|c|c|}
\hline
&&&& \\[-9pt]
ij &w^{LL}_{ij} &w^{RL}_{ij} &w^{RR}_{ij} &w^{LR}_{ij}\\[10pt]
\hline
&&&&\\[-9pt]
\mu e &\hat{n}_4 &\zeta (c_F-c)& 0 &2 \, \dfrac{(z_e y_\mu - z_\mu y_e)} {y_e y_\mu}\,~c +\zeta (c_F-c)\\[3pt]
\hline
&&&&\\[-9pt]
\tau e &\hat{n}_5 &\zeta (c_F-c)& 0 &2 \, \dfrac{(z_e y_\tau - z_\tau y_e)}{y_e y_\tau} ~c+\zeta (c_F-c)\\[3pt]
\hline
&&&&\\[-9pt]
\tau \mu &\hat{n}_6 &\zeta (c_F-c)& 0 &2 \,\dfrac{(z_\mu y_\tau - z_\tau y_\mu)}{y_\mu y_\tau}~c+\zeta (c_F-c) \\[3pt]
\hline
\end{array}
\end{math}
\caption{\it Coefficients $w^{CC'}_{ij}$ characterising the transition amplitudes for $\mu\to e \gamma$, $\tau\to e \gamma$ and $\tau\to \mu\gamma$, in the MI approximation in which $n_0$ and $n_i^c$ are set to one and $\hat{n}_{1,2}$ to zero so that $w^{RR}_{ij}$ vanish.}
\label{table:coefficientsLFV}
\end{table}
Notice that $a_{CC'}$ depend neither on $u$ nor on the fermion masses $m_{i,j}$. Finally, $(\delta_{ij})_{CC'}$ parametrise the MIs and are defined as:
\begin{equation}
(\delta_{ij})_{CC'}=\frac{(\hat{m}^2_{eCC'})_{ij}}{m^2_{SUSY}}\;.
\end{equation}
From the mass matrices of the previous section, we find ($j<i$):
\begin{equation}
\begin{array}{ll}
(\delta_{ij})_{LL}=w^{LL}_{ij} u^2\;,&\quad
(\delta_{ij})_{RL}=\dfrac{m_i}{m_{SUSY}} \left( w^{RL}_{ij} u +w^{'RL}_{ij} u^2\right)\\[3mm]
(\delta_{ij})_{RR}=w^{RR}_{ij} \dfrac{m_j}{m_i} u\;,&\quad
(\delta_{ij})_{LR}=w^{LR}_{ij} \dfrac{m_j}{m_{SUSY}} u\;.
\end{array}
\label{LFV:deltas}
\end{equation}
where for the mass insertion $(\delta_{ij})_{RL}$ we have also displayed the NLO contributions, in order to better compare our results with those of the effective Lagrangian approach. The explicit expressions for the leading order coefficients $w^{CC'}_{ij}$ are listed in table \ref{table:coefficientsLFV}. Also the NLO coefficients $w^{'RL}_{ij}$ are dimensionless combinations of order one parameters. By substituting the mass insertions of eq. (\ref{LFV:deltas}) into the amplitudes $A_{L,R}^{ij}$ of eq. (\ref{LFV:ALAR}) and by using eq. (\ref{LFV:rij}), we get:
\begin{equation}
R^{SUSY}_{ij}= \frac{48\pi^3 \alpha}{G_F^2 M^4}\left[\vert w^{(0)}_{ij} u\vert^2+ 2 w^{(0)}_{ij} w^{(1)}_{ij} u^3+
\vert w^{(1)}_{ij} u^2\vert^2+\frac{m_j^2}{m_i^2} \vert w^{(2)}_{ij} u\vert^2\right]
\label{LFV:RSUSY}
\end{equation}
with $M= (4 \pi/g) m_{SUSY}$ and
\beq\ba{rcl}
w^{(0)}_{ij}&=&\dfrac{16 \pi^2}{g^2} a_{RL} w^{RL}_{ij}\;,\\[3mm]
w^{(1)}_{ij}&=&\dfrac{16 \pi^2}{g^2} \left(a_{LL} w^{LL}_{ij}+a_{RL} w^{'RL}_{ij}\right)\;,\\[3mm]
w^{(2)}_{ij}&=&\dfrac{16 \pi^2}{g^2} \left(a_{RR} w^{RR}_{ij}+a_{LR} w^{LR}_{ij}\right)\;.
\label{LFV:wcoeff}
\ea\eeq
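To get a feeling for the typical size of the mass insertions of eq. (\ref{LFV:deltas}), one can evaluate them for illustrative benchmark values, setting all $w^{CC'}_{ij}$ coefficients to one (a rough sketch; the benchmark $m_{SUSY}$ and the restriction to the $\mu$-$e$ sector are choices made here, not the model's spectrum):

```python
# Illustrative sizes of the mass insertions for the mu-e sector, with all
# w coefficients set to one (hypothetical benchmark, not the model's output).
m_e, m_mu = 0.000511, 0.1057   # GeV
m_SUSY = 1000.0                # GeV
u = 0.05                       # upper end of the allowed range for u

delta_LL = u ** 2                    # w_LL u^2
delta_RL = (m_mu / m_SUSY) * u       # w_RL (m_i/m_SUSY) u, with i = mu
delta_RR = (m_e / m_mu) * u          # w_RR (m_j/m_i) u
delta_LR = (m_e / m_SUSY) * u        # w_LR (m_j/m_SUSY) u

for name, val in [("LL", delta_LL), ("RR", delta_RR),
                  ("RL", delta_RL), ("LR", delta_LR)]:
    print(f"(delta_mue)_{name} ~ {val:.1e}")
```

The hierarchy $(\delta_{\mu e})_{LL} > (\delta_{\mu e})_{RR} > (\delta_{\mu e})_{RL} > (\delta_{\mu e})_{LR}$ emerging from this sketch simply reflects the chirality and mass-ratio suppressions displayed in eq. (\ref{LFV:deltas}).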
The behaviour displayed in eq. (\ref{LFV:RSUSY}) differs from the one expected on the basis of the effective Lagrangian approach in the SUSY case, eq. (\ref{LFV:Eff:LFVsusy}).
This is due to the presence of the term $w^{(0)}_{ij} \propto w^{RL}_{ij}$. Assuming $w^{RL}_{ij}=0$ we recover what is expected from the effective Lagrangian approach
in the supersymmetric case, whereas when $w^{RL}_{ij}$ does not vanish, the leading order behaviour matches the prediction of the effective Lagrangian
approach in the generic, non-supersymmetric case, eq. (\ref{LFV:Eff:LFV}). As shown in table \ref{table:coefficientsLFV}, the coefficient $w^{RL}_{ij}$ is universal, namely independent of the flavour indices, and it vanishes in two cases:
\begin{itemize}
\item[i)]
$c_F=c$, which reflects the alignment of the VEVs of the scalar
and auxiliary components of the flavon supermultiplet $\varphi_T$, see eqs. (\ref{LFV:vevs}) and (\ref{LFV:VEVsauxphiT}).
\item[ii)]
$\zeta=0$ which can be realised by special choices of the soft Supersymmetry breaking terms
in the flavon sector, i.e. the assumption of universal soft Supersymmetry breaking terms in the flavon potential.
\end{itemize}
In our model none of these possibilities is natural, see \cite{FHM_VEV}, and both require a tuning of the underlying parameters.
If $w^{RL}_{ij}=0$, the result expected from the effective Lagrangian approach in the supersymmetric case is obtained in a non-trivial way. Indeed, it is a consequence of a cancellation taking place
when going from the Lagrangian to the physical basis. In particular, for $w^{RL}_{ij}=0$, $R^{SUSY}_{ij}$ scales as $u^4$ and not as $u^2$
for $m_j=0$.
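The two scaling regimes can be verified numerically from eq. (\ref{LFV:RSUSY}); in this sketch the overall prefactor is dropped, the $w$ coefficients are set to illustrative order-one values and $m_j=0$ is taken, so the numbers only track the $u$ dependence:

```python
# Numerical check of the u-scaling of R_ij in eq. (R^SUSY): prefactor dropped,
# w coefficients set to illustrative order-one values, m_j = 0 assumed.
def R(u, w0, w1=1.0):
    return abs(w0 * u) ** 2 + 2 * w0 * w1 * u ** 3 + abs(w1 * u ** 2) ** 2

u = 0.01
# generic case (w0 = w_RL != 0): doubling u multiplies R by ~4  =>  R ~ u^2
ratio_generic = R(2 * u, w0=1.0) / R(u, w0=1.0)
# aligned/universal case (w0 = 0): doubling u multiplies R by 16  =>  R ~ u^4
ratio_aligned = R(2 * u, w0=0.0) / R(u, w0=0.0)
print(round(ratio_generic, 2), round(ratio_aligned, 2))
```

Doubling $u$ multiplies $R$ by about 4 in the generic case and by exactly 16 when $w^{RL}_{ij}=0$, reproducing the $u^2$ versus $u^4$ behaviour discussed above.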
In the general case when $w^{RL}_{ij}$ is non-vanishing, the dominant contribution to $R_{ij}^{SUSY}$ regarding the expansion in $u$
is flavour independent and, at the leading order in the $u$ expansion,
we predict $R_{\mu e}=R_{\tau\mu}=R_{\tau e}$, at variance with the predictions
of most of the other models, where, for instance, $R_{\mu e}/R_{\tau \mu}$ can be much smaller than one \cite{Raidal,MLFV1,MLFVother,SUSYLFV+symmetries}.
If $w^{RL}_{ij}$ is non-vanishing, it is interesting to analyse the relative weight of the leading and subleading contributions to $R_{ij}$.
For this purpose we calculate the numerical values of the functions $a_{CC'}$ in the limit $\mu=M_{1,2}=m_{SUSY}$:
\begin{equation}
a_{LL}\sim+(2.0\div16.3)\;,\qquad\quad
a_{RL}=a_{LR}\sim+0.30\;,\qquad\quad
a_{RR}\sim-(0.5\div1.3)\;.
\end{equation}
As one can see, in this limit the dominant coefficient is ${a}_{LL}$, which is larger than $a_{RL}=a_{LR}$ by a factor $7\div 54$,
and larger than $a_{RR}$ by a factor $-(4\div 13)$, depending on $\tan\beta=2\div 15$. Assuming coefficients $w_{ij}^{CC'}$ of order one
in eqs. (\ref{LFV:wcoeff}), we see that the most important contributions in the amplitudes for the considered processes are $a_{RL} u$ and $a_{LL} u^2$. The ratio between the subleading and the leading one is $(a_{LL}/a_{RL}) u\approx (7\div 54) u$. When $u$ is close to its lower bound, which in our model requires a small value of $\tan\beta$, the leading contribution clearly dominates over the subleading one. However, for $u$ close to $0.05$, which allows for larger values of $\tan\beta\approx 15$, the non-leading contribution can be as large as the leading one and can even dominate over it. The transition between the two regimes occurs towards larger values of $u$.
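The competition between the two contributions can be made explicit with the quoted values of $a_{LL}$ and $a_{RL}$; the two sample points below (small $u$ with small $a_{LL}$, and $u=0.05$ with the upper value of $a_{LL}$) are chosen here for illustration:

```python
# Ratio of the subleading (a_LL u^2) to the leading (a_RL u) contribution,
# i.e. (a_LL/a_RL) u, using the quoted values of a_LL and a_RL.
a_RL = 0.30
ratio_low = (2.0 / a_RL) * 0.007    # small u, small tan(beta): a_LL ~ 2.0
ratio_high = (16.3 / a_RL) * 0.05   # u = 0.05, tan(beta) ~ 15: a_LL ~ 16.3
print(f"{ratio_low:.2f} {ratio_high:.2f}")
```

For small $u$ the ratio is well below one (leading term dominates), while at $u=0.05$ with large $\tan\beta$ it exceeds one, as stated in the text.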
The numerical dominance of the coefficient $a_{LL}$ has also another consequence: for vanishing $w^{RL}_{ij}$, $R_{ij}$ is dominated by the contributions of $a_{LL} w^{LL}_{ij}$, whose values are not universal, but expected to be of the same order of magnitude for all channels. Thus even when $w^{RL}_{ij}=0$, we predict $R_{\mu e}\approx R_{\tau\mu}\approx R_{\tau e}$.
\section{Numerical analysis}
\label{Sec:LFV:Numerical}
In this section we perform a numerical study of the normalised branching ratios $R_{ij}$ and of the deviation $\delta a_\mu$ of the anomalous magnetic moment of the muon from the Standard Model value. We use the full one-loop results for the branching ratios of the radiative decays
as well as for $\delta a_\mu$. These can be found in \cite{deltaamu_susy,HisanoFukuyama,Arganda,g2BR} and are displayed in appendix \ref{AppE:one-loop} for convenience.
\subsection{Framework}
\label{Sec:LFV:Framework}
As discussed in the preceding sections, in our model the flavour symmetry $A_4 \times Z_3 \times U(1)_{FN} \times U(1)_R$ constrains not only the mass matrices of leptons, but also those of sfermions. These are given at the high energy scale $\Lambda_f \approx \Lambda_L$, which we assume to be close to $10^{16}$ GeV, the supersymmetric grand unification scale. The flavour symmetry does not fix the soft supersymmetric mass scale $m_{SUSY}$. It also does not constrain the parameters involved in the gaugino as well as the Higgs(ino) sector. These are fixed by
our choice of a SUGRA framework in which $m_{SUSY}$ is the common soft mass scale for all scalar particles and $m_{1/2}$ the common mass scale of the gauginos. Thus, at the scale $\Lambda_f \approx \Lambda_L$ we have
\begin{equation}
M_1 (\Lambda_L) = M_2 (\Lambda_L) = m_{1/2} \; .
\end{equation}
Effects of RG running lead at low energies (at the scale $m_Z$ of the $Z$ mass) to the following masses for gauginos
\begin{equation}
M_1(m_Z)\simeq\dfrac{\alpha_1(m_Z)}{\alpha_1(\Lambda_L)}M_1(\Lambda_L)\qquad\qquad
M_2(m_Z)\simeq\dfrac{\alpha_2(m_Z)}{\alpha_2(\Lambda_L)}M_2(\Lambda_L)\;,
\end{equation}
where $\alpha_i=g_i^2/4\pi$ ($i=1,2$) and according to gauge coupling unification at $\Lambda_f\approx \Lambda_L$, $\alpha_1(\Lambda_L)=\alpha_2(\Lambda_L)\simeq 1/25$. Concerning the effects of the RG running on the soft mass terms, as we have seen in section \ref{Sec:LFV:Phys} these are small or can be absorbed into our parametrisation of the soft mass terms. Thus, in the contributions $(\hat{m}_{eRL}^2)_{1,2}$ to the RL block we take $m_{SUSY}$ as input parameter. Nevertheless, we explicitly take into account the RG effect on the average mass scale of the LL block, $m_L^2$, and in the RR block, $m_R^2$,
\begin{equation}
\begin{array}{rcl}
m_L^2(m_Z)&\simeq& m_L^2(\Lambda_L)+0.5M_2^2(\Lambda_L)+0.04M_1^2(\Lambda_L) \simeq m_{SUSY}^2 +0.54 m_{1/2}^2 \; ,\\[3mm]
m_R^2(m_Z)&\simeq& m_R^2(\Lambda_L)+0.15M_1^2(\Lambda_L) \simeq m_{SUSY}^2 +0.15 m_{1/2}^2 \; .
\end{array}
\end{equation}
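As a cross-check of the quoted running effects, the following sketch evaluates the gaugino mass ratios and the low-energy soft scalar masses for an illustrative benchmark point; the values of $\alpha_{1,2}(m_Z)$ are standard ones (GUT-normalised $\alpha_1$), not taken from the original analysis:

```python
# Cross-check of the quoted RG effects. alpha_1,2(m_Z) are standard values
# (GUT-normalised alpha_1); the benchmark point is illustrative.
alpha_GUT = 1.0 / 25     # alpha_1 = alpha_2 at Lambda_L
alpha1_mZ = 0.017
alpha2_mZ = 0.034

M1_over_mhalf = alpha1_mZ / alpha_GUT   # ~0.4, cf. lightest neutralino ~ 0.4 m_1/2
M2_over_mhalf = alpha2_mZ / alpha_GUT   # ~0.8, cf. lightest chargino  ~ 0.8 m_1/2

# low-energy average soft scalar masses for an illustrative benchmark point
m_SUSY, m_half = 100.0, 450.0           # GeV
m_L = (m_SUSY**2 + 0.54 * m_half**2) ** 0.5
m_R = (m_SUSY**2 + 0.15 * m_half**2) ** 0.5
print(M1_over_mhalf, M2_over_mhalf, round(m_L), round(m_R))
```

The gaugino mass ratios obtained this way reproduce the approximate chargino and neutralino masses quoted later in this section, and the benchmark point shows the typical splitting between the LL and RR average masses induced by the running.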
The parameter $\mu$ is fixed through the requirement of correct electroweak symmetry breaking:
\begin{equation}
|\mu|^2\simeq-\dfrac{m_Z^2}{2}+m_{SUSY}^2\dfrac{1+0.5\tan^2\beta}{\tan^2\beta-1}+m_{1/2}^2\dfrac{0.5+3.5 \tan^2\beta}{\tan^2\beta-1}\;,
\label{LFV:defmuSUGRA}
\end{equation}
so that $\mu$ is determined by $m_{SUSY}$, $m_{1/2}$ and $\tan\beta$ up to its sign. We recall that in our model the low-energy parameter $\tan\beta$ is related to the size of the expansion parameter $u$, the mass of the $\tau$ lepton and the $\tau$ Yukawa coupling $y_\tau$ as follows
\begin{equation}
u=\dfrac{\tan\beta}{|y_\tau|}\dfrac{\sqrt2 m_\tau}{v}\approx0.01\dfrac{\tan\beta}{|y_\tau|}\;,
\label{LFV:tanb&u&yt}
\end{equation}
as we have already pointed out in eq. (\ref{AFTBM:tanb&u&yt}). For this reason, requiring $1/3 \lesssim |y_\tau| \lesssim 3$
constrains $\tan\beta$ to lie in the range $2 \lesssim \tan\beta \lesssim 15$. As already commented, the lower bound $\tan\beta =2$
is almost excluded experimentally, since such low values of $\tan\beta$ usually lead to a mass for the lightest Higgs below the LEP2 bound
of $114.4$ GeV \cite{mhbound}.\footnote{This bound assumes that the Higgs is SM-like. For a generic MSSM Higgs the bound is much lower, $91.0$ GeV \cite{MSSMmhbound}.}
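The interplay between $\tan\beta$, $u$ and $\mu$ described above can be condensed into a small numerical sketch based on eqs. (\ref{LFV:defmuSUGRA}) and (\ref{LFV:tanb&u&yt}); the benchmark point is illustrative:

```python
# Numerical sketch of the parameter relations (benchmark values illustrative).
def u_of(tanb, y_tau):
    # relation between u, tan(beta) and y_tau: u ~ 0.01 tan(beta)/|y_tau|
    return 0.01 * tanb / y_tau

def mu_abs(m_SUSY, m_half, tanb, m_Z=91.19):
    # |mu| from the electroweak symmetry breaking condition
    t2 = tanb ** 2
    mu2 = (-m_Z ** 2 / 2
           + m_SUSY ** 2 * (1 + 0.5 * t2) / (t2 - 1)
           + m_half ** 2 * (0.5 + 3.5 * t2) / (t2 - 1))
    return mu2 ** 0.5

# tan(beta) = 15 together with y_tau = 3 saturates the upper value u = 0.05
print(u_of(15, 3))
# |mu| for the illustrative point (m_SUSY, m_1/2) = (100, 450) GeV, tan(beta) = 2
print(round(mu_abs(100.0, 450.0, 2.0)))
```

The first line confirms that the upper ends of the $\tan\beta$ and $y_\tau$ ranges saturate $u=0.05$; the second shows that, for light $m_{SUSY}$ and moderate $m_{1/2}$, $|\mu|$ comes out close to 1 TeV, consistent with the heavier neutralino and chargino masses quoted below.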
In our numerical analysis the parameters are the following: the two independent mass scales $m_{SUSY}$ and $m_{1/2}$, the sign of the parameter $\mu$ and the parameters of the slepton mass matrices shown in section \ref{Sec:LFV:Phys} in the physical basis. We recall that the results of section \ref{Sec:LFV:Phys} have been obtained under the assumption that the parameters are real and we keep working under the same assumption here. We also assume that the parameters on the diagonal of the slepton mass matrices $(\hat{m}_{(e,\nu)LL}^2)_K$ and $(\hat{m}_{eRR}^2)_K$, $n_0$ and $n^c_{1,2,3}$, are positive in order to favour positive definite square-masses, to avoid electric-charge breaking minima and further sources of electroweak symmetry breaking. The absolute value of the ${\cal O}(1)$ parameters is varied between $1/2$ and $2$. We will choose some representative values for $u$ in the allowed range $0.007 \lesssim u \lesssim 0.05$. The other expansion parameter $t$ is fixed to be $0.05$. In the analysis of the normalised branching ratios $R_{ij}$ we fix $\tan\beta$ and $u$ and then we derive the Yukawa couplings $y_e$, $y_\mu$ and $y_\tau$. When discussing the anomalous magnetic moment of the muon instead we vary $y_\tau$ between $1/3$ and $3$ and calculate $\tan\beta$ by using eq. (\ref{LFV:tanb&u&yt}). Having determined $\tan\beta$, the Yukawa couplings $y_e$ and $y_\mu$ can be computed.
The allowed region of the parameter space is determined by performing several tests. We check whether the mass of the lightest chargino is above 100 GeV \cite{PDG08}, whether the lightest neutralino is lighter than the lightest charged slepton, whether the lower bounds for the charged slepton masses are obeyed \cite{PDG08} and whether the masses of all sleptons are positive. The constraint on the mass of the lightest chargino implies a lower bound on $m_{1/2}$ which slightly depends on the sign of $\mu$. In our plots for $R_{ij}$ we also show the results for points of the parameter space that do not respect the chargino mass bound. For low values of $m_{SUSY}$, e.g. $m_{SUSY}=100$ GeV, the requirement that the lightest neutralino is lighter than the lightest charged slepton is equivalent to the requirement that the parameters in the diagonal entries of the slepton mass matrices $(\hat{m}_{(e,\nu)LL}^2)_K$ and $(\hat{m}_{eRR}^2)_K$ are larger than one.
For larger values of $m_{SUSY}$, e.g. $m_{SUSY}=1000$ GeV, this requirement does not affect our analysis anymore. We note that masses of charginos and neutralinos are essentially independent from the ${\cal O}(1)$ parameters of the slepton mass matrices and thus their masses fulfill with very good accuracy (better for larger $m_{1/2}$)
\begin{equation}
M_{ \widetilde{\chi}^0_1} \approx 0.4 m_{1/2} \;,\qquad
M_{ \widetilde{\chi}^0_2} \approx M_{ \widetilde{\chi}^-_1} \approx 0.8 m_{1/2} \;,\qquad
M_{ \widetilde{\chi}^0_3} \approx M_{ \widetilde{\chi}^0_4} \approx M_{ \widetilde{\chi}^-_2} \approx |\mu| \;.
\end{equation}
For the slepton masses we find certain ranges which depend on our choice of the ${\cal O}(1)$ parameters.
\subsection{Results for Radiative Leptonic Decays}
\label{Sec:LFV:ResultsDecays}
We first discuss the results for the branching ratio of the decay $\mu\to e\gamma$. This branching ratio is severely constrained
by the result of the MEGA experiment \cite{MEGA}
\begin{equation}
R_{\mu e} \approx BR(\mu\to e\gamma) < 1.2 \times 10^{-11}
\end{equation}
and will be even more constrained by the on-going MEG experiment \cite{MEG} which will probe the regime
\begin{equation}
R_{\mu e}\approx BR(\mu\to e\gamma) \gtrsim 10^{-13} \; .
\end{equation}
We explore the parameter space of the model by considering two different values of the expansion parameter $u$, $u=0.01$ and $u=0.05$, two different values of $\tan\beta$, $\tan\beta=2$ and $\tan\beta=15$, as well as two different values of the mass scale $m_{SUSY}$, $m_{SUSY}=100$ GeV and $m_{SUSY}=1000$ GeV. We show our results in scatter plots in figure \ref{LFV:ScatterBR} choosing $m_{1/2}$ to be $m_{1/2} \lesssim 1000$ GeV. All plots shown in figure \ref{LFV:ScatterBR} are generated for $\mu>0$.
\begin{figure}
\centering
\subfigure[$\tan\beta=2$, $u=0.01$ and $m_{SUSY}=100$ GeV.]
{\includegraphics[width=7.8cm]{MEG_Scatter_tanB2_u1_M100.pdf}}
\subfigure[$\tan\beta=2$, $u=0.01$ and $m_{SUSY}=1000$ GeV.]
{\includegraphics[width=7.8cm]{MEG_Scatter_tanB2_u1_M1000.pdf}}
\subfigure[$\tan\beta=2$, $u=0.05$ and $m_{SUSY}=100$ GeV.]
{\includegraphics[width=7.8cm]{MEG_Scatter_tanB2_u5_M100.pdf}}
\subfigure[$\tan\beta=2$, $u=0.05$ and $m_{SUSY}=1000$ GeV.]
{\includegraphics[width=7.8cm]{MEG_Scatter_tanB2_u5_M1000.pdf}}
\subfigure[$\tan\beta=15$, $u=0.05$ and $m_{SUSY}=100$ GeV.]
{\includegraphics[width=7.8cm]{MEG_Scatter_tanB15_u5_M100.pdf}}
\subfigure[$\tan\beta=15$, $u=0.05$ and $m_{SUSY}=1000$ GeV.]
{\includegraphics[width=7.8cm]{MEG_Scatter_tanB15_u5_M1000.pdf}}
\vspace{-0.3cm}
\caption{\it Scatter plots of $BR(\mu\to e \gamma)$ as a function of $m_{1/2}$, for different values of $\tan\beta$, $u$ and $m_{SUSY}$.
The red (dark gray) points correspond to points in which the mass of the lightest chargino is below the limit coming from direct searches.
The horizontal lines show the current MEGA bound (continuous line) and the prospective MEG bound (dashed line).}
\label{LFV:ScatterBR}
\end{figure}
As one can see from figure \ref{LFV:ScatterBR}(a), for very low $\tan\beta=2$, small $u=0.01$ and small $m_{SUSY}=100$ GeV, the experimental upper limit from the MEGA experiment on $BR(\mu\to e\gamma)$ can be passed in almost all parameter space of our model for values of $m_{1/2}$ as small as $450$ GeV. For $m_{SUSY}=100$ GeV and $m_{1/2}=450$ GeV the sparticle masses are rather light: the lightest neutralino has a mass of $175$ GeV, the lightest chargino of $350$ GeV, the masses of the right-handed (charged) sleptons vary between $175$ and $285$ GeV and the masses of the left-handed sleptons are in the range $(250\div 500)$ GeV. Thus, especially the right-handed sleptons are expected to be detected at LHC. In a model also including quarks (and hence squarks) we find that the squarks can have masses $\gtrsim 700$ GeV and the gluinos masses of about $1000$ GeV, all accessible at LHC. To pass the prospective bound coming from the MEG experiment in a sizable portion of parameter space of our model $m_{1/2}$ has to be chosen larger, $m_{1/2} \gtrsim 600$ GeV. Then, however, among the sleptons only the right-handed ones might still be detectable at LHC. As one can see, values of $m_{1/2} \lesssim 155$ GeV are excluded due to the constraint on the lightest chargino mass. Studying the same values of $\tan\beta$ and $u$, but taking $m_{SUSY}$ to be as large as $1000$ GeV, we can see from figure \ref{LFV:ScatterBR}(b) that now the bound set by the MEGA experiment on $BR(\mu\to e\gamma)$ is respected in the whole parameter space of our model for all values of $m_{1/2}$. Also the foreseen limit of the MEG experiment can only exclude a smaller part of the parameter space of the model for all values of $m_{1/2}$. In this setup, the prospects for detecting supersymmetric particles at LHC are the best for gauginos due to the possible low value of $m_{1/2}$. The slepton masses are expected to be roughly $m_{SUSY}$ and thus too large to allow for a detection at LHC.
Increasing the value of the expansion parameter $u$ from $u=0.01$ to $u=0.05$, as done in figure \ref{LFV:ScatterBR}(c) and \ref{LFV:ScatterBR}(d), also increases the branching ratio of the decay $\mu\to e\gamma$ by approximately two orders of magnitude, since the different contributions to the branching ratio scale at least with $u^2$, as analysed in section \ref{Sec:LFV:MI}. For this reason, for a low value of $m_{SUSY}=100$ GeV, $m_{1/2}$ has to take values $m_{1/2} \gtrsim 600$ GeV in order for the result of $BR(\mu\to e\gamma)$ to be compatible with the MEGA bound at least in some portion of the parameter space of our model. For the point $(m_{SUSY},m_{1/2})=(100 \, \rm GeV, 600 \, GeV)$ the sparticle spectrum is characterised as follows: the lightest neutralino has a mass of $240$ GeV, the lightest chargino of $470$ GeV, the right-handed sleptons have masses between $250$ and $350$ GeV and the left-handed sleptons lie generally above $300$ GeV. For this reason
there still exists the possibility to detect right-handed sleptons at LHC. Concerning gluinos and squarks these are expected to have masses between $1000$ and $1500$ GeV so that they also can be detected at LHC. As one can see from figure \ref{LFV:ScatterBR}(c) the on-going
MEG experiment can probe nearly the whole parameter space of the model for $\tan\beta=2$, $u=0.05$ and $m_{SUSY}=100$ GeV for values
of $m_{1/2} \lesssim 1000$ GeV. Increasing the parameter $m_{SUSY}$ to $1000$ GeV shows that the existing bound on $BR(\mu\to e\gamma)$ of $1.2 \times 10^{-11}$ cannot exclude small values of $m_{1/2}$. The situation changes if the expected bound from the MEG experiment is employed, because then values of $m_{1/2}$ smaller than $1000$ GeV become disfavoured.
Finally, we show in figure \ref{LFV:ScatterBR}(e) and \ref{LFV:ScatterBR}(f) the results obtained for $\tan\beta=15$. We recall that this is the largest value of $\tan\beta$ possible in our model. Requiring that the $\tau$ Yukawa coupling does not become too large entails that $\tan\beta=15$ fixes the expansion parameter $u$ to take a value close to its upper limit, $u=0.05$. The value of $BR(\mu\to e \gamma)$ is thus enhanced through $\tan\beta$ as well as $u$. This is clearly shown in figure \ref{LFV:ScatterBR}(e) and \ref{LFV:ScatterBR}(f), because for a low value of $m_{SUSY}=100$ GeV already the MEGA bound practically excludes almost the whole parameter space of our model
for all values of $m_{1/2} \lesssim 1000$ GeV. Increasing the mass parameter $m_{SUSY}$ to $1000$ GeV slightly improves the situation, because now a marginal part of the parameter space passes the MEGA bound. Again, however, the MEG experiment can probe the whole parameter space of our model for $m_{1/2} \lesssim 1000$ GeV. Thus, for $m_{SUSY} \lesssim 1000$ GeV and $m_{1/2} \lesssim 1000$ GeV and for moderate values of $\tan\beta$, which entail large $u \approx 0.05$, the parameter space of our model is already severely constrained by the bound from the MEGA collaboration and will surely be conclusively probed by the MEG experiment. Choosing $\mu<0$ hardly affects the results presented here apart from slightly decreasing the lower bound on $m_{1/2}$ coming from the chargino mass bound. Thus, all statements made also apply for $\mu<0$.
In summary, the current bound on $BR(\mu\to e\gamma)$ prefers regions in the parameter space of our model with small $u$ or small $\tan\beta$, as long as the SUGRA mass parameters are required to be smaller than $1000$ GeV. The foreseen MEG bound strongly favours regions in which $u$ is small, for not too large $m_{SUSY}$ and $m_{1/2}$. The fact that smaller values of $u$ are preferred has consequences also for the expectations of the detection prospects for the reactor mixing angle $\theta_{13}$, because this angle scales with $u$: it might thus not be possible to detect $\theta_{13}$ with the reactor and neutrino beam experiments under preparation \cite{NeutrinoData,Theta13future}.
Concerning the radiative $\tau$ decays, $\tau\to \mu\gamma$ and $\tau\to e\gamma$, the result found in the MI approximation that
the branching ratios of these decays are of the same order of magnitude as $BR(\mu\to e\gamma)$ is essentially confirmed in a numerical
analysis. Due to the random parameters, differences of up to two orders of magnitude are expected and found, especially for larger
$\tan\beta$: looking at table \ref{table:coefficientsLFV}, when the parameters $c_F$ and $c$ are of opposite sign the leading contributions are suppressed and the non-universal NLO terms become relevant. However, it is still highly improbable that the decays $\tau\to \mu\gamma$ and $\tau\to e\gamma$ could be detected at a SuperB factory, assuming a prospective sensitivity of $BR(\tau\to \mu \gamma)$, $BR(\tau\to e \gamma) \gtrsim 10^{-9}$ \cite{SuperB}.
\subsection{Results for Anomalous Magnetic Moment of the Muon}
\label{Sec:LFV:ResultsAmu}
\setcounter{footnote}{3}
As is well known, the value found for the anomalous magnetic moment of the muon \cite{ExperimentalBoundsMDMmu}
\begin{equation} a_\mu^{EXP}=116592080(63)\times 10^{-11}
\end{equation}
shows a 3.4 $\sigma$ deviation
\begin{equation} \delta a_\mu=a_\mu^{EXP}-a_\mu^{SM}=+302(88)\times 10^{-11}
\label{LFV:Da}
\end{equation}
from the value expected in the Standard Model
\begin{equation} a_\mu^{SM}=116591778(61)\times 10^{-11} \; .
\end{equation}
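As a quick numerical cross-check of these numbers (a minimal sketch; all values in units of $10^{-11}$), the quoted deviation and its significance follow from subtracting the central values and adding the experimental and theoretical uncertainties in quadrature:

```python
import math

# Experimental and SM values of a_mu in units of 10^-11 (central value, uncertainty)
a_exp, s_exp = 116592080, 63
a_sm,  s_sm  = 116591778, 61

delta = a_exp - a_sm             # central value of the deviation
sigma = math.hypot(s_exp, s_sm)  # uncertainties added in quadrature

print(delta)                     # 302
print(round(sigma))              # 88
print(round(delta / sigma, 1))   # 3.4 (the quoted significance in sigma)
```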
Thus, it might be interesting to consider the case in which this deviation is attributed to the presence of supersymmetric particles with masses of a few hundred GeV. The one-loop contribution to the anomalous magnetic moment of the muon in supersymmetric extensions of the Standard Model has been studied by several authors \cite{deltaamu_susy}.
\begin{figure}[h!]
\centering
\subfigure[$u=0.01$.]
{\includegraphics[width=7.8cm]{BRvsG2_u1_300.jpg}}
\subfigure[$u=0.05$.]
{\includegraphics[width=7.8cm]{BRvsG2_u5_1000.jpg}}
\vspace{-0.3cm}
\caption{\it Scatter plots in the plane $BR(\mu\to e\gamma)-\delta a_\mu$, for values of $u=0.01,\,0.05$. The value of $\tan\beta$ is fixed through the relation with the $\tau$ Yukawa coupling, which lies in the interval $[1/3,3]$. The values of $m_{SUSY}$ and of $m_{1/2}$ are chosen between $10$ and $300$ GeV for the left panel and between $10$ and $1000$ GeV in the right one. The horizontal lines correspond to the MEGA (continuous line) and the MEG bounds (dashed line); the vertical lines correspond to the measurement of $\delta a_\mu$: the continuous one indicates the best fit value and the dashed ones the $3\sigma$ boundaries.}
\label{LFV:ScatterBrG2}
\end{figure}
We study the compatibility between the requirement that $\delta a_\mu$ is explained by the exchange of relatively light supersymmetric particles
and the experimental upper limit on $BR(\mu\to e\gamma)$ coming from the MEGA experiment. We choose again two different values of $u$, $u=0.01$ and $u=0.05$. To better explore the parameter space we vary the $\tau$ Yukawa coupling between $1/3$ and $3$ and fix the value of $\tan\beta$ through the relation given in eq. (\ref{LFV:tanb&u&yt}). As a consequence, in the plot for $u=0.01$, see figure \ref{LFV:ScatterBrG2}(a), $2 \lesssim \tan\beta \lesssim 3$ holds, while for $u=0.05$ $\tan\beta$ takes values in the range $2\lesssim \tan\beta \lesssim 15$. Similarly, $m_{SUSY}$ and $m_{1/2}$ are chosen to lie in the intervals $[10,300]$ GeV and $[10,1000]$ GeV for $u=0.01$ and $u=0.05$, respectively. The different choice of intervals is due to the fact that values of a few hundred GeV for $m_{SUSY}$ and $m_{1/2}$ are disfavoured by the existing limit on $BR(\mu\to e\gamma)$ when $u=0.05$. As one can clearly see from figure \ref{LFV:ScatterBrG2}, in almost the whole parameter space of our model it is not natural to reproduce the observed deviation of the muon anomalous magnetic moment and at the same time to respect the existing bound on the branching ratio of $\mu\to e \gamma$.
This kind of incompatibility is well known in constrained supersymmetric theories, because the explanation of the $3.4 \sigma$ discrepancy necessitates small values of $m_{SUSY}$ and $m_{1/2}$ and larger values of $\tan\beta$, which in turn enhance the branching ratio of the radiative LFV decays. Thus, we have to conclude that either there exist further sources of contributions to the anomalous magnetic moment of the muon beyond those present in our model, or - as is also discussed in the literature - the theoretical value $a_\mu^{SM}$ found in the Standard Model is closer to $a_\mu^{EXP}$ so that the possible discrepancy becomes less than $100 \times 10^{-11}$, a value which could well be explained in our model.
\section{Conclusions of the Chapter}
\label{Sec:LFV:Conclusion}
\setcounter{footnote}{3}
In this chapter we studied a series of phenomena involving flavour transitions in order to find new observables, not related to neutrino oscillations, which can be useful to test the numerous flavour models present in literature. The introduction of new physics, supersymmetric or not, corresponding to an energy scale at about $M=1\div10$ TeV is an interesting possibility: a solution to the hierarchy problem, a successful gauge coupling unification, the existence of a Dark Matter candidate and an explanation of the observed discrepancy in the anomalous magnetic moment of the muon might be found if this intermediate energy scale is introduced.
We pursued the analysis considering a set of flavour models, based on the symmetry group $A_4\times Z_3\times U(1)_{FN}$, originally proposed in order to describe lepton masses and mixing angles through the introduction of the Weinberg operator (for alternative studies with the type I See-Saw see \cite{LFVA4+SeeSaw}). The introduction of new physics in this class of models allows us to study a set of new low-energy observables, such as leptonic MDMs, EDMs and LFV transitions like $\mu\to e \gamma$, $\tau\to\mu\gamma$ and $\tau\to e \gamma$. We have constructed the effective low-energy Lagrangian that describes these observables. Such an effective Lagrangian is invariant under the flavour symmetry, and all flavour breaking effects are encoded in the dependence on a set of flavons $\varphi$ which develop VEVs. These also control lepton masses and mixing angles. The dominant operators are obtained by expanding the Lagrangian in powers of $\varphi/\Lambda_f$ and by keeping the first few terms in the expansion. The leading contributions have dimension six and are suppressed by two powers of a new scale $M$. Apart from this energy scale, all the relevant information needed to predict MDMs, EDMs and LFV transitions is contained in a dimensionless matrix $\mathcal{\hat M}$, whose elements can be computed up to unknown order-one coefficients from our Lagrangian.
The strongest bound comes from the EDM of the electron: $M>80$ TeV. A lower value for $M$ can be tolerated in the presence of a cancellation in the imaginary part of $\mathcal{\hat{M}}_{ee}$, perhaps related to CP-conservation in the considered sector of the theory.
This problem is also present in MFV \cite{MFV,MLFV1,MLFVother}, where one simply assumes that $\mathcal{\hat{M}}_{ee}$ is real.
Coming to LFV dipole transitions, we have found that in the general case the branching ratios for $\mu\to e \gamma$, $\tau\to \mu\gamma$ and $\tau\to e \gamma$ are all expected to be of the same order, at variance with MFV. Given the present limit on $BR(\mu\to e \gamma)$, this implies that $\tau\to \mu\gamma$ and $\tau\to e \gamma$ have rates much smaller than the present (and near future) sensitivity. The absolute values of these branching ratios depend on the flavon VEVs that in our class of models are determined by a parameter $u$ in the range $0.001<u<0.05$. In the general case, for $BR(\mu\to e \gamma)<1.2\times 10^{-11}~(10^{-13})$ we get $M>10~(30)$ TeV if $u=0.001$ and $M>70~(200)$ TeV for $u=0.05$. The anomalous MDM of the muon $a_\mu$ and its deviation from the SM expectation provide an indication for a lower scale $M$,
of the order of a few TeV, which would also be of great interest for the LHC. In order to reconcile this possibility with the results derived from the LFV dipole transitions, we have reconsidered the matrix $\mathcal{\hat M}$ in a supersymmetric context, where additional constraints have to be applied. The operators describing $\mu\to e \gamma$, $\tau\to \mu\gamma$ and $\tau\to e \gamma$ flip the lepton chirality. By assuming that in a supersymmetric theory the only sources of chirality flips are the fermion masses and the sfermion mass terms of left-right type, we find that a cancellation takes place in the elements of $\mathcal{\hat M}$ below the diagonal. As a result the limits on the scale $M$ become less severe.
For $BR(\mu\to e \gamma)<1.2\times 10^{-11}~(10^{-13})$ we get $M>0.7~(2)$ TeV if $u=0.001$ and $M>14~(48)$ TeV for $u=0.05$.
At variance with the non-supersymmetric case there is a range of values of the parameter $u$ for which the scale $M$ can be sufficiently small to allow for an explanation of the observed discrepancy in $a_\mu$, without conflicting with the present bound on $\mu\to e \gamma$. Since in our framework $\theta_{13}$ is comparable to $u$, the present limit on $BR(\mu\to e \gamma)$ together with the existing discrepancy
in $a_\mu$ point to a rather small value for $\theta_{13}$, of the order of a few per cent in radians, close to but probably just below the sensitivity expected in future experiments at reactors or with high intensity neutrino beams.
Subsequently, we have extended the original model by including Supersymmetry breaking terms consistent with all symmetry requirements. Our model is an effective theory, valid at energy scales below a cutoff $\Lambda_f \approx \Lambda_L$, where we have derived the spectrum of supersymmetric particles, in the slepton sector, under the assumption that the Supersymmetry breaking scale is larger than $\Lambda_f$. It provides an example of a model in which the slepton mass matrices at the scale $\Lambda_f$ are not universal. Left-handed sleptons are approximately universal, with a small departure from universality controlled by $u$, the flavour symmetry breaking parameter which lies in a small range around a few per cent and has a size similar to the reactor mixing angle $\theta_{13}$. Right-handed sleptons have soft masses of the same order, but the relative difference among them is expected to be of order one. Off-diagonal elements in both sectors, as well as in the RL block, are small and have a characteristic pattern in terms of powers of $u$.
This structure is maintained by the effects coming from the RG running from $\Lambda_f \approx \Lambda_L$ down to the electroweak scale.
We used the slepton mass matrices to compute the normalised branching ratios $R_{ij}$ for the transitions $\mu\to e \gamma$, $\tau\to\mu\gamma$ and $\tau\to e \gamma$. At variance with other models based on flavour symmetries we found $R_{\mu e}\approx R_{\tau\mu}\approx R_{\tau e}$ and, given the present limit on $R_{\mu e}$, these rare $\tau$ decays are practically unobservable in our model. On a more theoretical side, the scaling $R_{ij}\propto u^2$, found in the MI approximation, violates an expectation based on an effective Lagrangian approach, which suggested $R_{ij}\propto u^4$ in the limit of massless final charged leptons. We have identified the source of this violation in a single, flavour-independent contribution of the RL block of the slepton mass matrix. Such a contribution originates from the VEVs of the auxiliary components of the flavon supermultiplets. We have classified the conditions under which this universal contribution is absent.
In a numerical analysis of $R_{\mu e}$ we found that already the current bound from the MEGA experiment requires the parameter $u$ or $\tan\beta$ to be small for supersymmetric mass parameters $m_{SUSY}$ and $m_{1/2}$ below $1000$ GeV, as needed to guarantee the detection of sparticles at the LHC. Applying the prospective MEG bound tightens the parameter space of our model even more to small $u$ and $\tan\beta$ or requires mass parameters $m_{SUSY}$ and $m_{1/2}$ above $1000$ GeV. Furthermore, we showed that the deviation of the experimentally observed value of the magnetic moment of the muon from the Standard Model one cannot be naturally explained in our framework for $BR(\mu\to e\gamma)$ below the current bound. The maximal value of $\delta a_{\mu}$ in our model is around $100 \times 10^{-11}$ for $BR(\mu\to e\gamma) \lesssim 10^{-11}$.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter{Leptogenesis}
\label{Sec:Leptogenesis}
\setcounter{equation}{0}
\setcounter{footnote}{3}
From a theoretical perspective the smallness of neutrino masses can be well understood within the See-Saw mechanism, in which the Standard Model is extended by adding new heavy states. As we have already seen in the previous chapters, a flavour symmetry can act on the flavour structure of the Majorana and Dirac mass matrices, explaining the observed lepton mixing pattern. Introducing right-handed neutrinos with large Majorana masses, the type I See-Saw naturally contains all the necessary ingredients for a dynamical generation of a cosmic lepton asymmetry through the decays of these heavy singlet states (leptogenesis): ($a$) Lepton number violation arising from the Majorana mass terms of the new fermionic fields; ($b$) CP-violating sources from complex Yukawa couplings; ($c$) departure from thermal equilibrium in the hot primeval plasma at the time the singlet neutrinos start decaying. This lepton asymmetry is then reprocessed into a baryon asymmetry through $B+L$ violating anomalous electroweak processes \cite{Kuzmin}, thus providing an explanation of the origin of the baryon asymmetry of the Universe \cite{Hinshaw}, i.e. baryogenesis through leptogenesis (for a recent review see \cite{DavidsonNardiNir}).
In the lepton sector a source of CP violation, e.g. in neutrino oscillations, is already present, but it has been generically shown that the baryon asymmetry is insensitive to the low-energy CP-violating phases \cite{Branco,Davidson}. However, it is possible to identify a class of models in which a connection between CP violation responsible for leptogenesis and CP violation observable at low energies can be established thanks to some flavour effects. This is linked to the additional constraints on the parameter space coming from the flavour sector, when a flavour symmetry is implemented in a model. As pointed out in \cite{JM_A4Lepto}, in the context of the Altarelli-Feruglio model with type I See-Saw the CP-violating asymmetry ($\epsilon_{\nu^c}$) vanishes in the limit of exact tribimaximal mixing, with leptogenesis becoming viable only when deviations from this pattern are taken into account. The explicit structure of the corrections responsible for these deviations is model-dependent and therefore whether a connection between $\epsilon_{\nu^c}$ and low-energy parameters can be established will depend on the particular realisation.
In this chapter we extend the work in \cite{JM_A4Lepto}. In particular, we study the viability of leptogenesis in the context of
models based on an arbitrary flavour symmetry leading to mass-independent mixing textures. For a pure type I See-Saw with three right-handed neutrinos we conclude that, independently of the nature of the underlying symmetry, $\epsilon_{\nu^c}=0$ in the limit of exact symmetry. Under these conditions, only deviations coming from the breaking of the flavour symmetry yield $\epsilon_{\nu^c}\neq 0$. It is important to note that this result is not in general valid in the presence of other types of See-Saw.
In the same context, a different analysis with very similar conclusions has been proposed in \cite{BBFN_Lepto}. In particular, it is interesting to report one of the results therein: if the three right-handed neutrinos transform under the flavour symmetry as an irreducible representation then, in the limit of exact symmetry, the neutrino Yukawa coupling is proportional to a unitary matrix and as a result the CP asymmetry vanishes. It is possible to relate this statement to ours: indeed, when the three right-handed neutrinos transform as an irreducible representation of the flavour symmetry it is possible that the lepton mixing matrix is a mass-independent texture in the limit of exact symmetry; as a result, our conclusion can be seen as an alternative formulation of the statement in \cite{BBFN_Lepto}.
\section{The Basic Framework}
\label{Sec:Lep:basis}
\setcounter{footnote}{3}
In this section we establish a more suitable notation, commonly used to deal with leptogenesis. The type I See-Saw Lagrangian is given by
\begin{equation}
\mscr{L}= (Y_e)_{ij}\overline{\ell}_i\,H\,e^c_j + (Y_\nu)_{i \alpha}\overline{\ell}_i \widetilde{H} \nu^c_\alpha + \dfrac{1}{2}(M_R)_{\alpha \beta}\nu^c_\alpha\nu^c_\beta+\text{h.c.}\;,
\label{Lep:eq:Lagrangian}
\end{equation}
where as usual $\ell$ are the lepton $SU(2)_L$ doublets, $e^c$ are the complex conjugate charged lepton $SU(2)_L$ singlets and $H$ is the
Higgs $SU(2)_L$ doublet. Latin indices $i,j\dots$ label the lepton flavour of the left-handed species, whereas Greek indices $\alpha,\beta\dots$ refer to the right-handed species. $Y_e$, $Y_\nu$ and $M_R$ are $3\times3$ matrices in flavour space.
The effective light neutrino masses originate from the usual type I See-Saw and, after electroweak symmetry breaking, the mass matrix is given by
\begin{equation}
m_\nu=-m_D \, M_R^{-1}\, m_D^T\;,
\label{Lep:eq:lightNeu-MM}
\end{equation}
where $m_D=Y_\nu\,v/\sqrt2$. From now on we assume that, in the physical basis, $m_\nu$ is exactly diagonalised by a mass-independent mixing matrix $U_0$, and therefore
\begin{equation}
\hat{m}_\nu=P\,U_0^T\, m_\nu\,U_0\,P\;,
\label{Lep:eq:diagonalisation}
\end{equation}
where $P$ accounts for the low-energy Majorana phases. In general, $m_D$ as well as $M_R$ ($M_R=M_R^T$) are complex matrices which can be diagonalised as follows
\begin{equation}
\hat{m}_D=U_L^\dag \, m_D\, U_R \;,\qquad\qquad
\hat{M}_R=V_R^T \, M_R\, V_R \;,
\label{Lep:def}
\end{equation}
with $U_L,U_R,V_R$ $3\times 3$ unitary matrices, characterised in general by 3 rotation angles and 6 phases.
According to eq. (\ref{Lep:def}) the effective neutrino mass matrix in (\ref{Lep:eq:lightNeu-MM}) can be written as
\begin{equation}
m_\nu= - U_L\,\hat{m}_D \, (U_R^\dag\,V_R)\, \hat{M}_R^{-1}\, (V_R^T U_R^*)\,\hat{m}_D\,U_L^T\,.
\label{Lep:ss}
\end{equation}
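Eq. (\ref{Lep:ss}) is a purely algebraic consequence of the definitions in eq. (\ref{Lep:def}); as a sanity check it can be verified numerically with randomly generated unitary matrices (a sketch; the eigenvalue ranges are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

U_L, U_R, V_R = (random_unitary(3) for _ in range(3))
mD_hat = np.diag(rng.uniform(0.1, 1.0, 3))   # diagonal, real, non-degenerate
MR_hat = np.diag(rng.uniform(1.0, 10.0, 3))

# Reconstruct m_D and M_R from their diagonalisations, eq. (Lep:def):
#   mD_hat = U_L^dag m_D U_R  =>  m_D = U_L mD_hat U_R^dag
#   MR_hat = V_R^T  M_R V_R   =>  M_R = V_R^* MR_hat V_R^dag  (complex symmetric)
m_D = U_L @ mD_hat @ U_R.conj().T
M_R = V_R.conj() @ MR_hat @ V_R.conj().T

# Left side: the See-Saw formula, eq. (Lep:eq:lightNeu-MM)
lhs = -m_D @ np.linalg.inv(M_R) @ m_D.T
# Right side: eq. (Lep:ss)
rhs = -U_L @ mD_hat @ (U_R.conj().T @ V_R) @ np.linalg.inv(MR_hat) \
      @ (V_R.T @ U_R.conj()) @ mD_hat @ U_L.T

print(np.allclose(lhs, rhs))  # True
```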
The requirement of exact diagonalisation by $U_0$ can be expressed, according to eqs. (\ref{Lep:eq:diagonalisation}) and (\ref{Lep:ss}), by demanding that
\begin{equation}
\hat{m}_\nu= - P\,(U_0^T U_L)\,\hat{m}_D \, (U_R^\dag\,V_R)\,\hat{M}_R^{-1}\, (V_R^T U_R^*)\,\hat{m}_D\,(U_L^T U_0)\,P
\label{Lep:ssdiag}
\end{equation}
is diagonal and real. It is useful to introduce the notation of the Dirac neutrino mass matrix in the basis in which the right-handed neutrino mass matrix $\hat{M}_R$ is real and diagonal:
\begin{equation}
m_D^R\equiv m_DV_R\,.
\label{Lep:eq:mDR_def}
\end{equation}
\subsection{General Remarks on Leptogenesis}
\label{Sec:Lep:gen-rem-lepto}
\setcounter{footnote}{3}
Our discussion will be entirely devoted to ``unflavoured'' leptogenesis scenarios: in the framework of flavour symmetry models the heavy singlet neutrinos typically have masses above $10^{13}$~GeV and for $T\gtrsim 10^{12}$ GeV lepton flavours are indistinguishable \cite{Nardi,Abada}. In the standard thermal leptogenesis scenario singlet neutrinos $\nu^c$ are produced by scattering processes after inflation. Subsequent out-of-equilibrium decays of these heavy states generate CP-violating asymmetries given by \cite{DavidsonNardiNir,Covi}
\begin{equation}
\epsilon_{\nu^c_\alpha} \;=\;\dfrac{\Gamma_\alpha-\overline{\Gamma}_\alpha}{\Gamma_\alpha+\overline{\Gamma}_\alpha}\;
=\; \dfrac{1}{4v^2 \pi (m_D^{R\,\dagger} \;m_D^R)_{\alpha\alpha}} \sum_{\beta\neq \alpha}\mbb{I}\mrm{m}\left[\left((m_D^{R\,\dagger} \;m_D^R)_{\beta\alpha}\right)^2\right] f(z_\beta)\;,
\label{Lep:eq:cp-asymm}
\end{equation}
where $\Gamma_\alpha$ and $\overline{\Gamma}_\alpha$ are the total $\nu^c_\alpha$-decay widths into leptons and anti-leptons, respectively, $z_\beta=M_\beta^2/M_\alpha^2$ and the loop function can be expressed as
\begin{equation}
f(z_\beta) = \sqrt{z_\beta} \left[ \dfrac{2 - z_\beta}{1 - z_\beta} - (1 + z_\beta)\; \log\left(\dfrac{1 + z_\beta}{z_\beta}\right) \right]\;.
\label{Lep:eq:loop-func}
\end{equation}
Depending on the singlet neutrino mass spectrum the loop function can be further simplified. In the hierarchical limit ($M_\alpha\ll M_\beta$) and in the case of an almost degenerate heavy neutrino spectrum ($z_\beta=1+\delta_\beta$, $\delta_\beta\ll 1$), this function becomes respectively
\begin{equation}
f(z_\beta) \to -\dfrac{3}{2\sqrt{z_\beta}}\;,\qquad\qquad f(1+\delta_\beta)\simeq -\dfrac{1}{\delta_\beta}\;.
\label{Lep:eq:loop-function}
\end{equation}
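The two limits can be checked numerically against the exact expression of $f(z_\beta)$ (a minimal sketch; the chosen values of $z$ and $\delta$ are illustrative):

```python
import math

def f(z):
    # Loop function of eq. (Lep:eq:loop-func)
    return math.sqrt(z) * ((2 - z) / (1 - z) - (1 + z) * math.log((1 + z) / z))

# Hierarchical limit M_alpha << M_beta (z >> 1): f(z) -> -3/(2 sqrt(z))
z = 1e6
print(f(z), -1.5 / math.sqrt(z))  # the two values agree closely

# Quasi-degenerate limit z = 1 + delta: f -> -1/delta (resonant enhancement)
delta = 1e-6
print(f(1 + delta) * delta)       # close to -1
```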
In any case, as can be seen from eq. (\ref{Lep:eq:cp-asymm}), whether the CP-violating asymmetry vanishes will be determined by the Yukawa coupling combination $m_D^{R\,\dagger} m_D^R$.
\mathversion{bold}
\section[CP asymmetry and Exact $U_0$ Mixing Without any FS]{CP asymmetry and Exact $U_0$ Mixing Without any Flavour Symmetry}
\label{Sec:Lep:unfamiliar}
\setcounter{footnote}{3}
\mathversion{normal}
While the $U_0$ mixing pattern can be well understood as a consequence of an underlying flavour symmetry, in principle it might be that it arises from a random set of parameters (though quite unlikely). For completeness, in this section we consider this possibility and study the consequences for the CP-violating asymmetry. We focus on the tribimaximal mixing pattern, but a similar analysis can be carried out, with the same conclusions, for other mass-independent textures.
When the neutrino mixing angles are fixed to satisfy the tribimaximal mixing pattern and, in addition, the measured mass squared differences are imposed, we have a set of eight constraints on the parameter space:
\begin{equation}
m_{\nu_{12}}=m_{\nu_{13}}\;,\qquad
m_{\nu_{22}}=m_{\nu_{33}}\;,\qquad
m_{\nu_{11}}=m_{\nu_{22}}+m_{\nu_{23}}-m_{\nu_{12}}\;,
\label{Lep:cond1}
\end{equation}
yielding six constraints (from the real and imaginary parts of the mass matrix entries); the atmospheric and solar mass scales provide the remaining two.
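These relations can be verified directly: any $m_\nu$ of the form $U_{TB}\,\hat{m}\,U_{TB}^T$, even for complex masses, satisfies the three conditions of eq. (\ref{Lep:cond1}) (a sketch; one common sign convention for $U_{TB}$ is assumed and the mass values are illustrative):

```python
import numpy as np

s = np.sqrt
# Tribimaximal mixing matrix (one common sign convention)
U_TB = np.array([[ s(2/3), 1/s(3),       0],
                 [-1/s(6), 1/s(3),  1/s(2)],
                 [-1/s(6), 1/s(3), -1/s(2)]])

# Light neutrino masses, allowed to be complex (Majorana phases included)
m = np.diag([0.1 + 0.05j, 0.3 - 0.2j, 1.0 + 0.4j])

m_nu = U_TB @ m @ U_TB.T  # a matrix exactly diagonalised by U_TB

# The three relations of eq. (Lep:cond1), in 0-based indices
assert np.isclose(m_nu[0, 1], m_nu[0, 2])
assert np.isclose(m_nu[1, 1], m_nu[2, 2])
assert np.isclose(m_nu[0, 0], m_nu[1, 1] + m_nu[1, 2] - m_nu[0, 1])
print("all three conditions hold")
```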
To determine the effect of such constraints on $\epsilon_{\nu^c}$ it is practical to use a parametrisation of $m_D$ which ensures that the tribimaximal mixing and the correct neutrino masses occur. In the basis in which the right-handed neutrino mass matrix is diagonal and real it is convenient to introduce the orthogonal complex matrix $R$ defined by the so-called Casas-Ibarra parametrisation \cite{CasasIbarra}, namely
\begin{equation}
R^*= (\hat{m}_\nu)^{-1/2} \,U_0^T\,m_D^R\, (\hat{M}_R)^{-1/2}\;.
\label{Lep:eq:casas-ibarra}
\end{equation}
All the low-energy observables are contained in the leptonic mixing matrix $U_0$ and in the diagonal and real light neutrino mass matrix $\hat{m}_\nu$. The matrix $R$ turns out to be very useful in expressing the CP-violating asymmetry parameter. Considering for simplicity the case of hierarchical right-handed neutrinos ($M_1 \ll M_2 \ll M_3$, thus validating the first approximation in \eq{Lep:eq:loop-function}), \eq{Lep:eq:cp-asymm} can be rewritten as
\begin{equation}
\epsilon_{\nu^c_\alpha} = -\dfrac{3 M_\alpha}{8 \pi v^2} \dfrac{\Im \left[\sum_j m_j^2 R_{j\alpha}^2\right]} {\sum_j m_j |R_{j\alpha}|^2}\;,
\label{Lep:eq:cp-asymm-CI}
\end{equation}
where $m_j\equiv(\hat m_\nu)_{jj}$. Once the right-handed neutrino mass spectrum and low-energy observables are fixed, random values of $m^R_D$ correspond to random values of $R$. Eq. (\ref{Lep:eq:cp-asymm-CI}) shows that leptogenesis is completely insensitive to low-energy lepton mixing and CP-violating phases \cite{Branco} \footnote{This statement is in general also true in flavoured leptogenesis \cite{Davidson}.} and therefore the viability of leptogenesis is not at all related to the tribimaximal mixing or, in general, to any accidental mixing pattern considered. The CP-violating asymmetry is determined by the values of the entries of $R$, which are arbitrary in the absence of any flavour symmetry; consequently $\epsilon_{\nu^c}\neq 0$ in general and its absolute value depends upon the heavy fermionic singlet masses, the light neutrino masses and $R$.
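The role of $R$ can be illustrated with a toy implementation of eq. (\ref{Lep:eq:cp-asymm-CI}) (a sketch; the masses and rotation angles are illustrative, and a complex orthogonal $R$ is built from rotations with complex angles):

```python
import numpy as np

v = 246.0                                   # Higgs VEV in GeV
m = np.array([0.001, 0.009, 0.05]) * 1e-9   # light neutrino masses in GeV
M = np.array([1e12, 1e13, 1e14])            # heavy singlet masses in GeV

def rot(i, j, theta):
    # 3x3 rotation in the (i, j) plane; complex theta still gives R R^T = 1
    R = np.eye(3, dtype=complex)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j], R[j, i] = np.sin(theta), -np.sin(theta)
    return R

def epsilon(R, alpha):
    # eq. (Lep:eq:cp-asymm-CI), hierarchical approximation
    num = np.imag(np.sum(m**2 * R[:, alpha]**2))
    den = np.sum(m * np.abs(R[:, alpha])**2)
    return -3 * M[alpha] / (8 * np.pi * v**2) * num / den

R_real    = rot(0, 1, 0.3) @ rot(0, 2, 1.1) @ rot(1, 2, 0.7)
R_complex = rot(0, 1, 0.3 + 0.2j) @ rot(0, 2, 1.1) @ rot(1, 2, 0.7)

print(epsilon(R_real, 0) == 0.0)     # True: a real R yields no asymmetry
print(epsilon(R_complex, 0) != 0.0)  # True: complex angles switch it on
```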
\mathversion{bold}
\section{Implications of Flavour Symmetries on $\epsilon_{\nu^c_\alpha}$}
\label{Sec:Lep:exTBFS}
\setcounter{footnote}{3}
\mathversion{normal}
We consider now the case in which an underlying flavour symmetry enforces an exact mixing pattern. It will be evident throughout the proof that it holds for any mixing pattern where the mixing matrix consists purely of numbers, but we will assume tribimaximal mixing for definiteness.
Within the case considered the transformation properties of $\ell$ and $\nu^c$ under the flavour symmetry group $G_f$ determine the structure of $m_D$ and $M_R$ (which are no longer arbitrary). Indeed, these matrices can be regarded as form-diagonalisable matrices \cite{LV_Theorem}, i.e. the parameters which determine their eigenvalues are completely independent of the parameters that define their diagonalising
matrices. Accordingly, vanishing off-diagonal elements of $\hat{m}_\nu$ in eq. (\ref{Lep:ssdiag}) can arise only if
\begin{equation}
U_{TB}^T U_L= P_L \, O_{D_i} \quad \mbox{and}\quad U_R^\dag\,V_R=O^\dag_{D_i}\,P_R\,O_{R_{m}}\;,
\label{Lep:eq:rot-mat-relations}
\end{equation}
where $P_{L,R}=\text{diag}(e^{i\alpha^{L,R}_1},e^{i\alpha^{L,R}_2}, e^{i \alpha^{L,R}_3})$ whereas $O_{D_i}$ and $O_{R_m}$ are respectively unitary and orthogonal matrices that arbitrarily rotate the $i$ and $m$ degenerate eigenvalues of $m_D$ and $M_R$, such that if $m_D$ ($M_R$) has no degenerate eigenvalues $O_{D_i}=\mathbb{1}$ ($O_{R_m}=\mathbb{1}$). Note that the requirement of having canonical kinetic terms, in addition to preserving the $m$-fold degeneracy of the right-handed neutrino mass matrix, enforces $O_{R_m}$ to be real. It is easy to understand the conditions given in \eq{Lep:eq:rot-mat-relations} by the use of a \emph{reductio ad absurdum}. Let us consider for simplicity the case without any degeneracy in the eigenvalues of $\hat{m}_D$ and $\hat{M}_R$: $O_{D_i}=\mathbb{1}$ and $O_{R_m}=\mathbb{1}$. If the products $U_{TB}^TU_L$ and $U_R^\dag V_R$ are not diagonal, but simply unitary matrices with non-vanishing off-diagonal entries, then the right-hand side of eq. \eqref{Lep:ssdiag} is in general a matrix whose entries are linear combinations of the mass eigenvalues of $\hat{m}_D$ and of $\hat{M}_R$. In order to have $\hat{m}_\nu$ diagonal, the off-diagonal entries must vanish and this is possible only if the respective linear combinations cancel out. However, there are no a priori reasons for such cancellations, since they would correspond to well-defined relationships between the eigenvalues of $\hat{m}_D$ and of $\hat{M}_R$, in other words a fine-tuning. Barring this possibility, the only solution is given by \eq{Lep:eq:rot-mat-relations}.
As shown in \cite{ABMMM_Lepto}, under the conditions in \eq{Lep:eq:rot-mat-relations} the matrix $m_D^R$ can be written as
\begin{equation}
m_D^R= U_{TB}\,D\, \hat{v}\, O_{R_m}\;,
\label{Lep:vCI1}
\end{equation}
with $\hat{v}$ a diagonal real matrix and $D$ a diagonal unitary matrix which contains all the phases $\alpha^{R,L}_i$. It is straightforward to recover from eq. (\ref{Lep:vCI1}) the following $R^*$ matrix:
\begin{equation}
R^*=\hat{m}_\nu^{-1/2}\, \hat{v}\, \hat{M}_R^{-1/2} \;.
\label{Lep:ourR}
\end{equation}
By comparing \eq{Lep:ourR} with the Casas-Ibarra parametrisation given in \eq{Lep:eq:casas-ibarra} we deduce that in the case of exact tribimaximal mixing the matrix $R$ is real and according to \eq{Lep:eq:cp-asymm-CI} the CP-violating asymmetry vanishes.
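The vanishing of the asymmetry can also be checked numerically without tracking the phase bookkeeping: for any $m_D^R$ of the form of eq. (\ref{Lep:vCI1}) the combination $m_D^{R\,\dagger} m_D^R$ entering eq. (\ref{Lep:eq:cp-asymm}) is real, so every $\epsilon_{\nu^c_\alpha}$ vanishes (a sketch with illustrative values; the sign convention of $U_{TB}$ is one common choice):

```python
import numpy as np

s = np.sqrt
U_TB = np.array([[ s(2/3), 1/s(3),       0],
                 [-1/s(6), 1/s(3),  1/s(2)],
                 [-1/s(6), 1/s(3), -1/s(2)]])

D = np.diag(np.exp(1j * np.array([0.4, 1.3, 2.1])))  # arbitrary phases
v_hat = np.diag([5.0, 20.0, 70.0])                   # real diagonal matrix
c, t = np.cos(0.6), np.sin(0.6)
O = np.array([[c, t, 0], [-t, c, 0], [0, 0, 1.0]])   # real orthogonal

m_D_R = U_TB @ D @ v_hat @ O                         # eq. (Lep:vCI1)

# The combination entering eq. (Lep:eq:cp-asymm) reduces to O^T v_hat^2 O,
# which is real: every Im[(...)_{beta alpha}^2] vanishes, hence epsilon = 0.
H = m_D_R.conj().T @ m_D_R
print(np.allclose(H.imag, 0))  # True
```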
Note that so far we have not referred to any specific model realisation: we have assumed only the exact tribimaximal diagonalisation of $m_\nu$ within the context of the type I See-Saw. We not only confirm the result in \cite{JM_A4Lepto}, but also extend it to any possible flavour symmetry responsible for the exact tribimaximal scheme. It is also straightforward to check (by replacing $U_{TB}$ with a mass-independent mixing matrix $U_0$) that the matrix $R$ still turns out to be real for other exact mixing schemes, as long as they are mass-independent. Note also that, although we have only considered three right-handed neutrinos, our result is readily generalisable to models with either two right-handed neutrinos or more than three, such as \cite{moreNR}. On the other hand, the proof does not hold in the presence of additional degrees of freedom, e.g. in models involving both type I and type II See-Saw.
An important consequence of our proof is that if the $U_0$ mixing pattern is due to any underlying flavour symmetry in a type I See-Saw
scenario, the viability of leptogenesis depends upon possible departures from the exact pattern. In the context of models based on
discrete flavour symmetries that predict $U_0$ mixing at the leading order this is achieved through NLO corrections. Since the size of the
deviations from $U_0$ mixing is not arbitrary, in principle one might expect the CP-violating asymmetry to be constrained by low-energy
observables such as $\theta_{13}$ and/or the CP-violating phases. However we have shown that in the general case the combination of NLO corrections that produce $\epsilon_{\nu^c}\neq0$ is not directly related with any low-energy observable. Consequently, while we conclude that general model-independent NLO corrections guarantee a non-vanishing CP-violating asymmetry, correlations among low-energy observables in the leptonic sector and $\epsilon_{\nu^c}$ cannot be established unless the nature of the corrections is well known, i.e. once the flavour model realisation has
been specified \cite{FelipeSerodio_Lepto,HMP_Lepto,BBFN_Lepto,JM_A4Lepto}.
\section{Conclusions of the Chapter}
\label{Sec:Lep:Conclusions}
\setcounter{footnote}{3}
In this chapter we considered under rather general conditions the possibility of links between low-energy observables and high-energy
parameters that are relevant for leptogenesis: in the most general case no such connections can be recovered. The situation can improve once flavour constraints are considered. When one simply assumes a mass-independent texture for the lepton mixing matrix, such as the well-known tribimaximal or bimaximal patterns, without introducing an underlying flavour symmetry, we conclude that leptogenesis is in general viable, but such an assumption is not sufficient to provide a link between the two types of parameters.
On the contrary, the situation improves in the more natural case where mass-independent mixing patterns originate from an underlying flavour symmetry. We confirmed the results of \cite{JM_A4Lepto}, according to which the CP-violating asymmetry vanishes in the limit of exact tribimaximal mixing for unflavoured leptogenesis and a pure type I See-Saw. We generalised this conclusion into a model-independent proof that is valid for any flavour symmetry imposing mass-independent mixing textures. On the other hand, in order to have viable leptogenesis, the model has to include NLO corrections lifting the exact mixing, or alternatively independent contributions to the CP asymmetries, such as those that naturally arise from an interplay between different See-Saws.
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Summary}
\subsection{The Problem: High DRAM Latency} \label{sec:problem}
Primarily due to its low cost-per-bit, DRAM has long been the substrate of
choice for architecting main memory subsystems. In fact, DRAM's cost-per-bit has been
decreasing at a rapid rate as DRAM process technology scales to integrate ever
more DRAM cells into the same die area. As a result, each successive generation
of DRAM has enabled increasingly large-capacity main memory subsystems at low
cost.
In stark contrast to the continued scaling of cost-per-bit, the {\em latency}
of DRAM has remained almost constant. During the same 11-year interval in which
DRAM's cost-per-bit decreased by a factor of 16, DRAM latency (as measured by
the \trcd and \trc timing constraints) decreased by only 30.5\% and
26.3\%~\cite{future_cpu, samsung_roadmap}, as shown in Figure 1 of our
paper~\cite{tldram}. From the perspective of the processor, an access to DRAM
takes hundreds of cycles --- time during which the processor may be stalled,
waiting for DRAM. Such wasted time leads to large performance degradations.
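To put these two trends on a common footing, the figures above can be converted into compound annual rates. The following back-of-the-envelope calculation uses only the totals quoted above; the per-year rates are derived for illustration, not separately measured:

```python
# Convert the 11-year totals quoted above into compound annual rates:
# cost-per-bit fell by 16x, while tRCD and tRC fell by only 30.5% and 26.3%.

def annual_decline(remaining_fraction, years=11):
    """Constant per-year decline rate that compounds to the given total."""
    return 1 - remaining_fraction ** (1 / years)

cost_rate = annual_decline(1 / 16)       # cost-per-bit: 16x total reduction
trcd_rate = annual_decline(1 - 0.305)    # tRCD: 30.5% total reduction
trc_rate = annual_decline(1 - 0.263)     # tRC: 26.3% total reduction

print(f"cost-per-bit: {cost_rate:.1%}/year")
print(f"tRCD:         {trcd_rate:.1%}/year")
print(f"tRC:          {trc_rate:.1%}/year")
```

The contrast is stark: roughly a 22\% per-year decline in cost-per-bit versus only about 3\% per year for latency.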
\subsection{Key Observations and Our Goal} \label{sec:observation}
{\bf Bitline: Dominant Source of Latency.} In DRAM, each bit is represented as
electrical charge in a capacitor-based {\em cell}. The small size of this
capacitor necessitates the use of an auxiliary structure, called a {\em
sense-amplifier}, to detect the small amount of charge held by the cell and
amplify it to a full digital logic value. However, a sense-amplifier is
approximately one hundred times larger than a cell~\cite{rambus-power}. To
amortize this large size, each sense-amplifier is connected to many DRAM cells
through a wire called a {\em bitline}.
Every bitline has an associated parasitic capacitance whose value is
proportional to the length of the bitline. Unfortunately, such parasitic
capacitance slows down DRAM operation for two reasons. First, it increases the
latency of the sense-amplifiers. When the parasitic capacitance is large, a
cell cannot quickly create a voltage perturbation on the bitline that could be
easily detected by the sense-amplifier. Second, it increases the latency of
charging and precharging the bitlines. The cell and the bitline must
be restored to their quiescent voltages during and after an access to a cell,
and this procedure takes much longer when the parasitic capacitance is large. Based
on these reasons and a detailed latency breakdown (see our HPCA-19
paper~\cite{tldram}), we conclude that long bitlines are the dominant source of
DRAM latency~\cite{jedec-ddr, dram_latency, mutlu-imw13, mutlu-book15}.
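The first effect can be illustrated with a standard charge-sharing model of DRAM sensing. All element values below are illustrative assumptions, not figures from our paper:

```python
# Toy charge-sharing model of DRAM sensing. When the access transistor turns
# on, the cell capacitance shares charge with the bitline's parasitic
# capacitance; the resulting voltage perturbation is what the sense-amplifier
# must detect. All values are illustrative assumptions.

C_CELL = 25e-15           # cell capacitance (F), assumed
C_BL_PER_CELL = 0.4e-15   # parasitic bitline capacitance per attached cell (F), assumed
VDD = 1.2                 # supply voltage (V), assumed

def sense_margin(cells_per_bitline):
    """Perturbation (V) on a half-VDD precharged bitline from a full cell."""
    c_bitline = C_BL_PER_CELL * cells_per_bitline
    return (VDD / 2) * C_CELL / (C_CELL + c_bitline)

for n in (32, 128, 512):
    print(f"{n:4d} cells/bitline -> sense margin {sense_margin(n) * 1e3:.0f} mV")
```

A longer bitline both shrinks the perturbation the sense-amplifier must detect and, through its larger $RC$ product, slows charging and precharging, matching the two effects described above.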
{\bf Latency vs.~Cost Trade-Off.} The bitline length is a key design parameter
that exposes the important trade-off between latency and die-size (cost). Short
bitlines (few cells per bitline) constitute a small electrical load (parasitic
capacitance), which leads to low latency. However, they require more
sense-amplifiers for a given DRAM capacity
(Figure~\ref{fig:intro_specialized_dram}), which leads to a large die-size. In
contrast, long bitlines have high latency and a small die-size
(Figure~\ref{fig:intro_commodity_dram}). As a result, neither of these two
approaches can optimize for both latency and cost-per-bit.
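This trade-off can be sketched with a first-order model. The $100\times$ sense-amplifier-to-cell area ratio is quoted above; the linear scaling of latency with bitline length is a simplifying assumption for illustration:

```python
# First-order model of the cells-per-bitline trade-off: amortized
# sense-amplifier area per bit vs. bitline latency. The 100x area ratio is
# from the text; linear latency scaling with bitline length is assumed.

SENSE_AMP_AREA = 100.0   # sense-amplifier area, in units of one cell's area

def area_per_bit(cells_per_bitline):
    """One cell plus its amortized share of a sense-amplifier."""
    return 1.0 + SENSE_AMP_AREA / cells_per_bitline

def relative_latency(cells_per_bitline, baseline=512):
    """Latency relative to a long-bitline (baseline) design."""
    return cells_per_bitline / baseline

for n in (32, 512):   # short-bitline (RLDRAM-like) vs. commodity-like
    print(f"{n:3d} cells/bitline: area/bit {area_per_bit(n):.2f}x, "
          f"latency {relative_latency(n):.2f}x")
```

Neither endpoint wins on both axes: in this model the short-bitline design pays over $3\times$ more area per bit, while the long-bitline design pays the full latency.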
\input{fig/intro_commodity_specialized}
Figure~\ref{fig:cell-per-bitline-trade-off} shows the trade-off between DRAM
latency and die-size by plotting the latency (\trcd and \trc) and the die-size
for different values of cells-per-bitline. Existing DRAM architectures are
either optimized for die-size (commodity DDR3~\cite{samsung_spec, ddr3-4gb})
and are thus low cost but high latency, or optimized for latency
(RLDRAM~\cite{rldram}, FCRAM~\cite{fcram}) and are thus low latency but high
cost.
\input{fig/motivation_latency_area}
{\bf The goal} of our paper~\cite{tldram} is to design a new DRAM architecture
that approximates the best of both worlds (i.e., low latency and low cost), based
on the key observation that long bitlines are the dominant source of DRAM
latency.
\subsection{Tiered-Latency DRAM} \label{sec:tldram}
To achieve the latency advantage of short bitlines and the cost advantage of
long bitlines, we propose the {\em Tiered-Latency DRAM} (TL-DRAM) architecture,
which is shown in Figures~\ref{fig:intro_tldram} and~\ref{fig:substrate_tld}. The
key idea of TL-DRAM is to divide the long bitline into two shorter segments
using an {\em isolation transistor}: the {\em near segment} (connected directly
to the sense-amplifier) and the {\em far segment} (connected through the
isolation transistor).
\input{fig/substrate_tldram}
The primary role of the isolation transistor is to electrically decouple the
two segments from each other. This changes the effective bitline length (and
also the effective bitline capacitance) as seen by the cell and
sense-amplifier. Correspondingly, the latency to access a cell is also changed
--- albeit differently depending on whether the cell is in the near or the far
segment.
When accessing a cell in the near segment, the isolation transistor is turned
off, disconnecting the far segment (Figure~\ref{fig:substrate_tld_near}). Since
the cell and the sense-amplifier see only the reduced bitline capacitance of
the shortened near segment, they can drive the bitline voltage more easily. As
a result, the bitline voltage is restored more quickly, so that the latency
(\trc) for the near segment is significantly reduced. On the other hand, when
accessing a cell in the far segment, the isolation transistor is turned on to
connect the entire length of the bitline to the sense-amplifier. In this case,
the isolation transistor acts like a resistor inserted between the two segments
(Figure~\ref{fig:substrate_tld_far}) and limits how quickly charge flows to the
far segment. Because the far segment capacitance is charged more slowly, it
takes longer for the far segment voltage to be restored, so that the latency
(\trc) is increased for cells in the far segment.
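A first-order $RC$ picture captures both effects. All element values below are assumed for illustration; the latencies reported in our paper come from circuit-level SPICE simulation:

```python
# First-order RC view of tiered bitline latency. With the isolation
# transistor off, only the near-segment capacitance loads the
# sense-amplifier; with it on, the full capacitance is driven through the
# transistor's extra series resistance. All element values are assumptions.

R_DRIVE = 10e3    # effective sense-amplifier drive resistance (ohms), assumed
R_ISO = 5e3       # isolation-transistor on-resistance (ohms), assumed
C_NEAR = 20e-15   # near-segment bitline capacitance (F), assumed
C_FAR = 180e-15   # far-segment bitline capacitance (F), assumed

tau_conventional = R_DRIVE * (C_NEAR + C_FAR)    # undivided long bitline
tau_near = R_DRIVE * C_NEAR                      # isolation transistor off
tau_far = (R_DRIVE + R_ISO) * (C_NEAR + C_FAR)   # isolation transistor on

print(f"near segment: {tau_near / tau_conventional:.2f}x conventional RC")
print(f"far segment:  {tau_far / tau_conventional:.2f}x conventional RC")
```

The near segment sees a much smaller $RC$ product than a conventional bitline, while the far segment sees a slightly larger one, mirroring the latency asymmetry described above.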
{\bf Latency, Power, and Die-Area.} Table~\ref{tbl:latency_comparison}
summarizes the latency, power, and die-area characteristics of TL-DRAM to other
DRAMs, estimated using circuit-level SPICE simulation~\cite{ibm_55nm} and
power/area models from Rambus~\cite{rambus-power}. Compared to commodity DRAM
(long bitlines) which incurs high latency (\trc) for all cells, TL-DRAM offers
significantly reduced latency (\trc) for cells in the near segment, while
increasing the latency for cells in the far segment due to the additional
resistance of the isolation transistor. In DRAM, a large fraction of the power
is consumed by the bitlines. Since the near segment in TL-DRAM has a lower
capacitance, it also consumes less power. On the other hand, accessing the far
segment requires toggling the isolation transistors, leading to increased power
consumption. Mainly due to additional isolation transistors, TL-DRAM increases
die-area by 3\%. Our paper includes detailed circuit-level analyses of TL-DRAM
(Section 4 of~\cite{tldram}).
\input{tbl/intro-latency-die-size}
\subsection{Leveraging TL-DRAM} \label{sec:mechanism}
TL-DRAM enables the design of many new memory management policies that exploit
the asymmetric latency characteristics of the near and the far segments. Our
HPCA-19 paper (in Section 5) describes four ways of taking advantage of
TL-DRAM. Here, we describe two approaches in particular.
In the first approach, the memory controller uses the near segment as a {\em
hardware-managed cache} for the far segment. In our HPCA-19
paper~\cite{tldram}, we discuss three policies for managing the near segment
cache. (The three policies differ in deciding when a row in the far segment is
cached into the near segment and when it is evicted.) In addition, we propose a
new data transfer mechanism ({\em Inter-Segment Data Transfer}) that
efficiently migrates data between the segments by taking advantage of the fact
that the bitline is a bus connected to the cells in both segments. By using
this technique, the data from the source row can be transferred to the
destination row over the bitlines at very low latency (additional 4ns over
\trc). Furthermore, this Inter-Segment Data Transfer happens exclusively within
a DRAM bank without utilizing the DRAM channel, allowing concurrent accesses to
other banks.
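The caching idea can be sketched as follows. This sketch implements a plain LRU fill/eviction policy for illustration only; the policies evaluated in our paper, including Benefit-Based Caching, are more sophisticated:

```python
# Minimal sketch of the near segment as a hardware-managed cache for
# far-segment rows, with a plain LRU fill/eviction policy (illustrative
# only; not one of the policies evaluated in the paper).
from collections import OrderedDict

class NearSegmentCache:
    def __init__(self, num_near_rows):
        self.capacity = num_near_rows
        self.rows = OrderedDict()            # far-row id -> present in near segment

    def access(self, row):
        """Return True on a near-segment hit; fill the row on a miss."""
        if row in self.rows:
            self.rows.move_to_end(row)       # refresh LRU position
            return True
        if len(self.rows) >= self.capacity:
            self.rows.popitem(last=False)    # evict the least-recently-used row
        # An Inter-Segment Data Transfer would copy the row here over the bitlines.
        self.rows[row] = True
        return False

cache = NearSegmentCache(num_near_rows=4)
trace = [1, 2, 1, 3, 1, 2, 5, 1, 2]          # toy row-access trace
hits = sum(cache.access(r) for r in trace)
print(f"near-segment hits: {hits}/{len(trace)}")
```

Rows with reuse quickly end up served from the fast near segment, while the decision of when to fill and evict is exactly where the three policies in our paper differ.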
In the second approach, the near segment capacity is exposed to the OS,
enabling the OS to use the full DRAM capacity. We propose two concrete
mechanisms, one where the memory controller uses an additional layer of
indirection to map frequently accessed pages to the near segment, and another
where the OS uses static/dynamic profiling to directly map frequently accessed
pages to the near segment. In both approaches, the accesses to pages that are
mapped to the near segment are served faster and with lower power than in
conventional DRAM, resulting in improved system performance and energy
efficiency.
\subsection{Results: Performance and Power}
Our HPCA-19 paper~\cite{tldram} provides extensive detail about both of the
above approaches. However, due to space constraints, we present the evaluation
results of only the first approach, in which the near segment is used as a
hardware-managed cache operated under our best policy ({\em Benefit-Based
Caching}), to show the advantage of our TL-DRAM substrate.
{\bf Performance \& Power Analysis.} Figure~\ref{fig:result_cores} shows the
average performance improvement and power-efficiency of our proposed mechanism
over the baseline with conventional DRAM, on 1-, 2- and 4-core systems. As
described in Section~\ref{sec:tldram}, access latency and power consumption are
significantly lower for near segment accesses, but higher for far segment
accesses, compared to accesses in a conventional DRAM. We observe that a large
fraction (over 90\% on average) of requests hit in the rows cached in the near
segment, thereby accessing the near segment with low latency and low power
consumption. As a result, TL-DRAM achieves significant performance improvements
of 12.8\%/12.3\%/11.0\% and power savings of 23.6\%/26.4\%/28.6\% in
1-/2-/4-core systems, respectively.
\input{fig/result_cores}
{\bf Sensitivity to Near Segment Capacity.} The number of rows in the near
segment presents a trade-off, since increasing the near segment's size
increases its capacity but also increases its access latency.
Figure~\ref{fig:result_single_sensitive} shows the performance improvement of our
proposed mechanisms over the baseline as we vary the near segment size.
Initially, performance improves as the number of rows in the near segment
increases, since more data can be cached. However, increasing the number of rows
in the near segment beyond 32 reduces the performance benefit due to the
increased bitline capacitance, and hence latency, of the near segment.
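The shape of this trade-off can be reproduced with a toy model. Both functional forms and all constants below are assumptions chosen only to show why an interior optimum can exist; the actual curve in our paper comes from simulation:

```python
# Toy model of the near-segment capacity trade-off: more near rows raise
# the caching hit rate but lengthen the near segment and hence its access
# latency. All functional forms and constants are illustrative assumptions.

T_FAR = 1.3   # far-segment latency, relative to conventional DRAM (assumed)

def hit_rate(near_rows):
    """Assumed saturating hit rate as near-segment capacity grows."""
    return near_rows / (near_rows + 8)

def near_latency(near_rows):
    """Assumed near-segment latency, growing with bitline length."""
    return 0.5 + 0.005 * near_rows

def avg_latency(near_rows):
    h = hit_rate(near_rows)
    return h * near_latency(near_rows) + (1 - h) * T_FAR

for n in (8, 16, 32, 64, 128):
    print(f"{n:3d} near rows -> average latency {avg_latency(n):.2f}")
```

In this toy model the average latency is minimized at an intermediate capacity: past that point, the rising near-segment latency outweighs the diminishing hit-rate gains.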
\input{fig/result_capacity_sensitive}
{\bf Other Results.} In our HPCA-19 paper, we provide a detailed analysis of
how timing parameters and power consumption vary when varying the near segment
length, in Section 4 and 6.3, respectively. We also provide a comprehensive
evaluation of the mechanisms we build on top of the TL-DRAM substrate for
single- and multi-core systems in Section 8.
All of our results are gathered using an in-house version of
Ramulator~\cite{kim-cal2015}, an open-source DRAM simulator~\cite{ramulator},
which is integrated into an in-house processor simulator.
\section{Significance}
\subsection{Novelty}
To our knowledge, our HPCA-19 paper is the first to enable latency
heterogeneity in DRAM without significantly increasing cost-per-bit and to
propose hardware/software mechanisms that leverage this latency heterogeneity
to improve system performance. We make the following major contributions.
{\bf A Cost-Efficient Low-Latency DRAM.} Based on the key observation that long
internal wires (bitlines) are the dominant source of DRAM latency, we propose a
new DRAM architecture called Tiered-Latency DRAM (TL-DRAM). To our knowledge
this is the first work to enable low-latency DRAM without significantly
increasing the cost-per-bit. By adding a single isolation transistor to each
bitline, we carve out a region within a DRAM chip, called the near segment,
that is fast and energy-efficient. This comes at a modest overhead of 3\%
increase in DRAM die-area. While there are two prior approaches to reduce DRAM
latency (using short bitlines~\cite{rldram, fcram}, adding an SRAM cache in
DRAM~\cite{hidaka-ieeemicro1990, cdram, esdram, cached-dram}), both of these
approaches significantly increase die-area due to additional sense-amplifiers
or additional area for SRAM cache, as we evaluate in our paper~\cite{tldram}.
Compared to these prior approaches, TL-DRAM is a much more cost-effective
architecture for achieving low latency.
\mycolor{There are many works that reduce {\em overall memory access latency}
by modifying DRAM, the DRAM-controller interface, and DRAM controllers.} These
works enable more parallelism and bandwidth~\cite{salp, dsarp, rowclone,
lee-taco2016}, reduce refresh counts~\cite{liu12, liu13, khan14,
venkatesan-hpca2006, avatar}, accelerate bulk operations~\cite{rowclone,
seshadri-cal2015, seshadri-micro2015, chang-hpca2016}, accelerate computation
in the logic layer of 3D-stacked DRAM~\cite{ahn-isca2015a, ahn-isca2015b,
zhang-hpca2014, msa3d}, enable better communication between CPU and other
devices through DRAM~\cite{lee-pact2015}, leverage process variation and
temperature dependency in DRAM~\cite{aldram}, leverage DRAM access
patterns~\cite{hassan-hpca2016}, \mycolor{reduce write-related latencies by
better designing DRAM and DRAM control policies~\cite{chatterjee-hpca2012,
lee-techreport2010, seshadri-isca2014}, and reduce overall queuing latencies in
DRAM by better scheduling memory requests~\cite{stfm-micro07, parbs, atlas,
tcm, subramanian-tpds2016, subramanian-iccd2014, rlmc, usui-taco2016}. Our
proposal is orthogonal to all of these approaches and can be applied in
conjunction with them to achieve even greater latency and energy benefits.}
{\bf Inter-Segment Data Transfer.} By implementing latency heterogeneity within
a DRAM subarray, TL-DRAM enables efficient data transfer between the fast and
slow segments by utilizing the bitlines as a wide bus. This mechanism takes
advantage of the fact that both the source and destination cells share the same
bitlines. Furthermore, this inter-segment migration happens only within a DRAM
bank and does not utilize the DRAM channel, thereby allowing concurrent
accesses to other banks over the channel. This inter-segment data transfer
enables fast and efficient movement of data within DRAM, which in turn enables
efficient ways of taking advantage of latency heterogeneity.
Son et al.~propose a low-latency DRAM architecture~\cite{ahn13} that has fast
(short-bitline) and slow (long-bitline) subarrays in DRAM. This approach
provides the largest benefit when latency-critical data is allocated to the
low-latency regions (the short-bitline subarrays). Therefore, overall memory
system performance is sensitive to the page placement policy. In contrast, our
inter-segment data transfer enables efficient relocation of pages, allowing
dynamic page placement based on the latency criticality of each page.
\subsection{Potential Long-Term Impact}
{\bf Tolerating High DRAM Latency by Enabling New Layers in the Memory
Hierarchy.} Today, there is a large latency cliff between the on-chip last
level cache and off-chip DRAM, leading to a large performance fall-off when
applications start missing in the last level cache. By introducing an
additional fast layer (the near segment) within the DRAM itself, TL-DRAM
smooths this latency cliff.
Note that many recent works added a DRAM cache or created heterogeneous main
memories~\cite{lee-isca2009, lee-ieeemicro2010, qureshi-isca2009, timber, rbla,
ramos-ics11, satish-date11, meza-weed13, hrm-dsn2014, nil-micro2012,
ren-micro2015, li-corr2015, pdram-dac09} to smooth the latency cliff between
the last level cache and a longer-latency non-volatile main memory, e.g., Phase
Change Memory~\cite{lee-isca2009, lee-ieeemicro2010,
qureshi-isca2009}, or to exploit the strengths of multiple
different types of memories to optimize for multiple metrics. Our approach is
similar at the high-level (i.e., to reduce the latency cliff at low cost by
taking advantage of heterogeneity) yet we introduce the new low-latency layer
within DRAM itself instead of adding a completely separate device.
{\bf Applicability to Future Memory Devices.} We show the benefits of TL-DRAM's
asymmetric latencies. Considering that most memory devices adopt a similar cell
organization (i.e., a 2-dimensional cell array and row/column bus connections),
our approach of reducing the electrical load of the bus (bitline) connection
to achieve low access latency can be applied to other memory devices as well.
Furthermore, the idea of performing inter-segment data transfer can also
potentially be applied to other memory devices, regardless of the memory
technology. For example, we believe it is promising to examine similar
approaches for emerging memory technologies like Phase Change
Memory~\cite{lee-isca2009, qureshi-isca2009, qureshi-micro2009, meza-iccd2012,
yoon-taco2014, lee-cacm2010} or STT-MRAM~\cite{kultursay-ispass2013,
wang-islped2014}, as well as the NAND flash memory
technology~\cite{luo-msst2015, yu-hpca2015, yu-dsn2015, cai-sigmetrics2014,
cai-iccd2013}.
{\bf New Research Opportunities.} The TL-DRAM substrate creates new
opportunities by enabling mechanisms that can leverage the latency
heterogeneity offered by the substrate. We briefly describe three
directions, but we believe many new possibilities abound.
{\setlength{\leftmargini}{0.15in}
\begin{itemize}\itemsep0pt\parskip0pt\vspace{-0.1in}
\item {\em New ways of leveraging TL-DRAM.} TL-DRAM is a substrate that can
be utilized for many applications. Although we describe two major ways of
leveraging TL-DRAM in our HPCA-19 paper, we believe there are several more
ways to leverage the TL-DRAM substrate both in hardware and software. For
instance, new mechanisms could be devised to detect data that is latency
critical (e.g., data that causes many threads to become
serialized~\cite{ebrahimi-micro2011, dm-isca10, bis, acs, uba} or data that
belongs to threads that are more latency-sensitive~\cite{atlas,
tcm, mise, usui-taco2016, sms, medic-pact, subramanian-iccd2014,
subramanian-tpds2016, subramanian-micro2015}) or could become latency
critical in the near future and allocate/prefetch such data into the near
segment.
\item {\em Opening up new design spaces with multiple tiers.} TL-DRAM can be
easily extended to have multiple latency tiers by adding more isolation
transistors to the bitlines, providing more latency asymmetry. (Our HPCA-19
paper provides an analysis of the latency of a TL-DRAM design with three
tiers, showing the spread in latency for three tiers.) This enables new
mechanisms both in hardware and software that can allocate data appropriately
to different tiers based on their access characteristics such as locality,
criticality, etc.
\item {\em Inspiring new ways of architecting latency heterogeneity within
DRAM.} To our knowledge, TL-DRAM is the first to enable latency heterogeneity
within DRAM by significantly modifying the existing DRAM architecture. We
believe that this could inspire research on other possible ways of
architecting latency heterogeneity within DRAM or other memory devices.
\end{itemize}
}
\section{Introduction}
A central result in the theory of equilibrium black holes in four and higher dimensions is the rigidity theorem~\cite{Hawking:1971vc, Hollands:2006rj, Moncrief:2008mr}. This states that the event horizon of a stationary, rotating, black hole is a Killing horizon with respect to the Killing field
\begin{equation}
K = \frac{\partial}{\partial t} + \Omega \frac{\partial}{\partial {\phi}} \; , \label{K}
\end{equation}
where $\partial /\partial t$ is the stationary Killing field, $\partial /\partial \phi$ is a Killing field generating the rotational symmetry, and $\Omega$ is the angular velocity of the black hole with respect to the static asymptotic frame.\footnote{In greater than four spacetime dimensions a black hole may have multiple rotational Killing fields with corresponding angular velocities which must be included in (\ref{K}).} For asymptotically flat spacetimes, it is clear that if the angular velocity is non-zero, the Killing field $K$ must become spacelike outside a large enough ball. Therefore, there is no possibility of having matter in equilibrium and co-rotating with the black hole (since it would have to exceed the speed of light).
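These statements can be made quantitative with a short check using only the leading asymptotic form of the metric (with $g_{tt} \simeq -1$, $g_{t\phi} \simeq 0$ and $g_{\phi\phi} \simeq r^2 \sin^2\theta$ in the flat case, and $g_{tt} \simeq -(1+r^2/\ell^2)$ in the globally AdS case): the norm of (\ref{K}) behaves as
\begin{equation}
K^2 = g_{tt} + 2 \Omega g_{t\phi} + \Omega^2 g_{\phi\phi} \simeq \begin{cases} -1 + \Omega^2 r^2 \sin^2\theta & \textrm{(flat)} \\ -1 - \frac{r^2}{\ell^2}\left( 1 - \ell^2 \Omega^2 \sin^2\theta \right) & \textrm{(AdS)} \end{cases}
\end{equation}
so in the flat case $K$ is spacelike wherever $r \sin\theta > 1/|\Omega|$, whereas in the AdS case $K^2 < 0$ everywhere provided $| \ell \Omega | \leq 1$.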
Black holes in anti de Sitter (AdS) spacetimes are of central importance in the context of the AdS/CFT duality~\cite{Witten:1998zw}. For such black holes the situation is quite different. In particular, for $D \geq 4$ asymptotically globally AdS black holes, such as the Kerr-AdS black hole and its higher dimensional generalisation, the Killing field $K$ is timelike everywhere outside the horizon if $| \ell \Omega | \leq 1$, where $\ell$ is the radius of AdS. In this case $K$ defines a frame in which matter can co-rotate in equilibrium with the black hole. This raises the interesting possibility of having black holes which are invariant under a single Killing field $K$. Matter here refers also to gravitons, hence this argument suggests the possibility of new vacuum solutions.\footnote{Such solutions would not violate the rigidity theorem since the stationary Killing field would be normal to the horizon.} Although Kerr-AdS black holes are thought to be stable if $| \ell \Omega | \leq 1$~\cite{Hawking:1999dp}, it has been proposed that new solutions invariant under just the co-rotating Killing field may arise as the endpoint of a superradiant instability which occurs for rapidly rotating Kerr-AdS black holes (i.e. $| \ell \Omega |>1$)~\cite{Kunduri:2006qa, Cardoso:2006wa}.
Finding new vacuum black hole solutions invariant under a single Killing field is a daunting task. By coupling matter fields which are invariant under just the co-rotating Killing field, one may avoid the complication of dealing with metrics possessing only a single Killing field. Indeed, examples with a complex scalar field have been found numerically~\cite{Dias:2011at} (see also~\cite{Stotyn:2011ns, Stotyn:2013yka}). In this note we follow a different strategy by examining this problem in pure gravity in lower dimensions.
Although there are no local degrees of freedom, three-dimensional Einstein gravity with a negative cosmological constant provides a valuable toy model for examining certain higher dimensional questions~\cite{Deser:1983nh}. Brown and Henneaux demonstrated that there exist boundary conditions such that the asymptotic symmetry algebra is the infinite dimensional conformal symmetry of a cylinder~\cite{Brown:1986nw}. Furthermore, Banados-Teitelboim-Zanelli (BTZ) found explicit black hole solutions to the $D=3$ Einstein equations~\cite{Banados:1992wn, Banados:1992gq}. Although locally AdS$_3$, globally this is a family of stationary and axisymmetric black holes which are asymptotically AdS$_3$ with a cylinder conformal boundary and possess mass $M$ and angular momentum $J$.
The BTZ black holes always satisfy $| \ell \Omega | \leq 1$. For the non-extreme black hole ($|\ell \Omega |< 1$) the Killing field $K$ is everywhere timelike outside the horizon, whereas for the extreme black hole ($|\ell \Omega |=1$) the Killing field $K$ is everywhere null. By the above arguments, this raises the possibility of black hole solutions invariant under a single Killing field. However, the BTZ black hole does not suffer from a superradiant instability (since it never rotates faster than the speed of light, the stability argument used in higher dimensions can be applied~\cite{Hawking:1999dp}). Therefore, such putative solutions may not arise from the evolution of some perturbation of the BTZ black hole. Indeed, stationary and axisymmetric black holes which are coupled to a complex scalar field invariant under a co-rotating Killing field, have been argued not to exist~\cite{Stotyn:2012ap, Stotyn:2013spa}.
In this note we show that in fact black holes with a single Killing field do {\it not} exist in three dimensional Einstein gravity, by explicitly determining the most general Einstein metric with a (non-singular) Killing horizon. It turns out the general solution with a spatially compact horizon always possesses a second commuting Killing field and hence must be related to the BTZ black hole (or its near-horizon geometry) by a diffeomorphism. Interestingly, in the case of a degenerate horizon the general solution is related to the extreme BTZ black hole by a {\it large} diffeomorphism. Our results establish a new type of uniqueness theorem for three-dimensional AdS black holes.\footnote{See e.g.~\cite{Rooman:2000ei, Fischetti:2012ps} for other types of uniqueness results.}
In fact, the general solution may have an interesting interpretation in the dual CFT. One expects that by acting on the BTZ black hole with a general element of the asymptotic-symmetry diffeomorphism group, one would obtain AdS$_3$ solutions with arbitrary Virasoro charges. We refer to these as {\it descendants} of the BTZ black hole.\footnote{Note that these are not descendants of pure states in the dual CFT.} These new solutions should also have two commuting Killing fields, corresponding to the push-forward of the Killing fields of the BTZ black hole.\footnote{We thank Harvey Reall for this observation.} If these geometries still contain a Killing horizon, they must be within our general class of Einstein metrics. Indeed, we identify a general class of extreme black holes that are asymptotically AdS$_3$ with cylinder boundary, which carry arbitrary charges with respect to one of the Virasoro algebras and vanishing charges with respect to the other. Hence these geometries are descendants (in the above sense) of the extreme BTZ black hole.
Before moving on, we mention a technical motivation which led us to investigating this problem in the extreme case. An important inverse problem is to understand how, given a near-horizon geometry, one determines the possible corresponding extreme black holes. As we will show, three dimensional gravity provides a simple setup which allows one to examine this question explicitly.
\section{General solution}
\subsection{Derivation}
Consider a general $2+1$ dimensional spacetime containing a smooth\footnote{In fact, rather than smooth, we will only need to assume the functions $f,h$ are $C^1$ and $\gamma$ is $C^2$. } Killing horizon $\mathcal{N}$ of a future-pointing, complete, Killing field $K$ with a one-dimensional spacelike cross-section $H$. In the neighbourhood of $\mathcal{N}$ the metric in Gaussian null coordinates reads, see e.g.~\cite{Kunduri:2013gce},
\begin{equation}
\textrm{d} s^2 = 2 \textrm{d} v \left( \textrm{d} \lambda + \lambda h (\lambda ,x) \textrm{d} x + \tfrac{1}{2} \lambda f(\lambda ,x) \textrm{d} v\right) + \gamma(\lambda, x)^2 \textrm{d} x^2 \; , \label{horizon}
\end{equation}
where $K= \partial/ \partial v$ is the Killing field which is null on $\mathcal{N}$ and $\partial / \partial \lambda$ is tangent to null geodesics which are transverse to the horizon $\mathcal{N}$, such that $\lambda>0$ is the exterior region and $\mathcal{N}= \left\lbrace \lambda =0 \right\rbrace $. The coordinate $x$ is defined on the one-dimensional spacelike cross-section $H$, which by assumption has a non-degenerate induced metric, so $\gamma>0$ in the neighbourhood of $\mathcal{N}$.
We wish to find the general vacuum solution of this form with a cosmological constant $R_{\mu\nu} = \Lambda g_{\mu\nu}$. Of course any Einstein metric in three dimensions is {\it locally} isometric to one of the maximally symmetric spaces: we are concerned with spacetimes with a {\it global} Killing horizon as above.
To compute the Ricci tensor it is convenient to use the null-orthonormal basis $(e^{+}, e^-, e^x)$ defined by
\begin{equation}
e^+ = \textrm{d} v \; , \qquad e^- = \textrm{d} \lambda + \lambda h \textrm{d} x + \tfrac{1}{2} \lambda f \textrm{d} v \; , \qquad e^x = \gamma \textrm{d} x \; , \label{basis}
\end{equation}
so that the metric reads $ \textrm{d} s^2 = 2 e^+ e^- + e^x e^x$. It turns out that the function defined by
\begin{equation}
b \equiv
\partial_x f+\lambda f \partial_\lambda h - \lambda h \partial_\lambda f\; , \label{bdef}
\end{equation}
appears naturally in the curvature calculations. We find that with respect to the above basis the Ricci tensor is
\begin{eqnarray*}
&&R_{++} = \frac{1}{2 \gamma} \left[ \lambda h \partial_\lambda \left( \frac{1}{\gamma} \lambda b \right) - \partial_x \left( \frac{1}{\gamma} \lambda b \right) - \tfrac{1}{2} \lambda^2 f^2 \partial^2_\lambda \gamma \right] \; ,\\
&&R_{+-} = \tfrac{1}{2} \partial_\lambda^2 (\lambda f) +\frac{1}{2 \gamma} \left[ \partial_\lambda(\lambda f \partial_\lambda \gamma) -\frac{1}{ \gamma} ( \partial_\lambda(\lambda h))^2 + \partial_x \left( \frac{1}{\gamma} \partial_\lambda(\lambda h)\right) - \lambda h \partial_\lambda \left( \frac{1}{\gamma} \partial_\lambda(\lambda h)\right) \right] \; , \\
&&R_{+x} = \tfrac{1}{2} \partial_\lambda \left( \frac{1}{\gamma} \lambda b \right) -\tfrac{1}{4} \lambda f \partial_\lambda \left( \frac{1}{\gamma} \partial_\lambda (\lambda h) \right) \; ,\\
&& R_{--} = - \frac{1}{\gamma} \partial^2_\lambda \gamma \; , \qquad R_{-x} = \tfrac{1}{2} \partial_\lambda \left( \frac{1}{\gamma} \partial_\lambda (\lambda h) \right) \; , \\
&&R_{xx} = \frac{1}{ \gamma} \left[ \partial_\lambda(\lambda f \partial_\lambda \gamma) -\frac{1}{2 \gamma} ( \partial_\lambda(\lambda h))^2 + \partial_x \left( \frac{1}{\gamma} \partial_\lambda(\lambda h)\right) - \lambda h \partial_\lambda \left( \frac{1}{\gamma} \partial_\lambda(\lambda h)\right) \right] \enskip .
\end{eqnarray*}
The $--$ component of the Einstein equations immediately implies $\gamma = \gamma_0(x)+ \lambda \gamma_1(x)$,
where $\gamma_0(x), \gamma_1(x)$ are arbitrary functions. We may use the coordinate freedom on $H$ to set $\gamma_0=1$, which we will assume henceforth. The $-x$ component can be easily integrated for $h$ and the most general solution which is regular at $\lambda=0$ is
\begin{equation}
h = h_0(x) \left( 1+ \tfrac{1}{2} \lambda \gamma_1(x) \right),
\end{equation} where $h_0$ is an arbitrary function. Now consider the ${+-}$ and ${xx}$ components. This is facilitated by noting that the Einstein equation implies $\tfrac{1}{2}R_{xx} - R_{+-}=- \tfrac{1}{2}\Lambda$, which explicitly reads
\begin{equation}
\partial_\lambda^2 (\lambda f) -\tfrac{1}{2} \left( \frac{ \partial_\lambda (\lambda h)}{\gamma} \right)^2 = \Lambda \; .
\end{equation}
This can now be integrated for $f$ and the most general solution regular at $\lambda=0$ is
\begin{equation}
f = f_0(x) + \tfrac{1}{2}\left( \Lambda + \tfrac{1}{2} h_0(x)^2 \right) \lambda \label{f}
\end{equation}
where $f_0$ is an arbitrary function. Now, the $xx$ equation is satisfied iff
\begin{equation}
\partial_x h_0 - \tfrac{1}{2} h_0^2 + f_0 \gamma_1 = \Lambda \; .
\end{equation}
It remains to consider the $++$ and $+x$ components. It is easy to see these are satisfied if and only if $\lambda b / \gamma$ is a constant. Hence by regularity at $\lambda=0$ we deduce $b=0$. Finally, by substituting into (\ref{bdef}) one finds $b = \partial_x f_0$ and so we deduce that $f_0(x) = - 2\kappa$, where $\kappa$ is a constant. We have now satisfied all components of the Einstein equation.
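For completeness, the substitution can be spelled out: inserting (\ref{f}) and the solution for $h$ into (\ref{bdef}), the terms quadratic in $\lambda$ cancel identically, leaving
\begin{equation}
b = \partial_x f_0 + \tfrac{1}{2} \lambda h_0 \left( \partial_x h_0 - \tfrac{1}{2} h_0^2 + f_0 \gamma_1 - \Lambda \right) = \partial_x f_0 \; ,
\end{equation}
where the bracket vanishes by virtue of the $xx$ equation above.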
To summarise, we have found that the most general solution with a non-singular Killing horizon is given by:
\begin{eqnarray}
\gamma(\lambda, x) &=& 1 + \lambda \gamma_1(x) \nonumber \\
h(\lambda, x) &=& h_0(x) \left( 1+ \tfrac{1}{2} \lambda \gamma_1(x) \right) \nonumber \\
f(\lambda, x) &=& -2\kappa + \tfrac{1}{2}\lambda \left( \Lambda + \tfrac{1}{2} h_0(x)^2 \right) \; , \label{soln}
\end{eqnarray}
where $\kappa$ is a constant and $h_0, \gamma_1$ are arbitrary functions subject to the constraint
\begin{equation}
\partial_x h_0 - \tfrac{1}{2} h_0^2 -2\kappa \gamma_1 = \Lambda \; . \label{constr}
\end{equation}
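The claim that (\ref{soln}) together with (\ref{constr}) solves the full Einstein equation $R_{\mu\nu} = \Lambda g_{\mu\nu}$ can be spot-checked numerically. The SymPy sketch below assumes that the Gaussian null form (\ref{horizon}) reads $\textrm{d} s^2 = \lambda f\, \textrm{d} v^2 + 2\textrm{d} v\textrm{d}\lambda + 2\lambda h\, \textrm{d} v \textrm{d} x + \gamma^2 \textrm{d} x^2$ (our reading of the ansatz, consistent with (\ref{deg}) below) and uses a sample profile $h_0(x) = \tfrac{1}{2}\sin x$ with $\ell = \kappa = 1$, both our choices:

```python
import sympy as sp

v, lam, x = sp.symbols('v lam x')
coords = (v, lam, x)
Lam, kappa = sp.Integer(-2), sp.Integer(1)        # l = 1, so Lambda = -2/l^2

h0 = sp.sin(x)/2                                  # sample horizon data (our choice)
g1 = (sp.diff(h0, x) - h0**2/2 - Lam)/(2*kappa)   # constraint solved for gamma_1
gamma = 1 + lam*g1
h = h0*(1 + lam*g1/2)
f = -2*kappa + (Lam + h0**2/2)*lam/2

# assumed Gaussian null form:
# ds^2 = lam f dv^2 + 2 dv dlam + 2 lam h dv dx + gamma^2 dx^2
g = sp.Matrix([[lam*f, 1, lam*h], [1, 0, 0], [lam*h, 0, gamma**2]])
ginv = g.inv()

def christoffel(a, b, c):
    return sum(ginv[a, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d])) for d in range(3))/2

Gam = [[[christoffel(a, b, c) for c in range(3)] for b in range(3)] for a in range(3)]

def ricci(b, c):
    expr = sp.Integer(0)
    for a in range(3):
        expr += sp.diff(Gam[a][b][c], coords[a]) - sp.diff(Gam[a][b][a], coords[c])
        for d in range(3):
            expr += Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][a][b]
    return expr

# Einstein equation R_mn = Lambda g_mn, evaluated at a generic point
pt = {lam: sp.Rational(3, 10), x: sp.Rational(7, 10)}
max_resid = max(abs(sp.N((ricci(b, c) - Lam*g[b, c]).subs(pt)))
                for b in range(3) for c in range(3))
```

All algebra is exact; only the final evaluation at the sample point is numerical, so the residual should vanish to machine precision.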
The various quantities which appear in the solution all have a direct geometrical meaning. The 1-form $h_0 \textrm{d} x$ is the connection of the normal bundle on $H$, viewed as a submanifold of the spacetime. The function $\gamma_1= \theta|_{\lambda=0}$ where $\theta$ is the expansion of the null geodesic congruence tangent to $\partial / \partial \lambda$, i.e. $\theta=\gamma^{-1}\partial_\lambda \gamma$. The constant $\kappa$ is the surface gravity on the Killing horizon, i.e. $ \textrm{d} K^2 |_{\lambda=0} = - 2\kappa K|_{\lambda=0}$.
In the non-degenerate case, $\kappa \neq 0$, the constraint equation (\ref{constr}) can be solved to determine the extrinsic data $\gamma_1$ in terms of the intrinsic data $h_0$, so the solution depends on the constant $\kappa$ and one freely specifiable function $h_0(x)$ on the horizon. On the other hand, in the degenerate case, $\kappa=0$, the constraint equation (\ref{constr}) reduces to the Einstein equation for the near-horizon geometry~\cite{Kunduri:2013gce} and $\gamma_1(x)$ is an {\it arbitrary} function on $H$. Hence, once the near-horizon solution has been fixed, the degenerate solution depends only on one freely specifiable function $\gamma_1(x)$. This explicitly shows that decoupling of intrinsic and extrinsic data occurs if and only if the horizon is degenerate.
In general, Gaussian null coordinates are only defined in a neighbourhood of the horizon, in particular, as long as the transverse null geodesic congruence $\partial /\partial \lambda$ does not develop caustics. For our solution, observe that if $\gamma_1(x_0)<0$ for some $x_0$ the transverse null geodesics converge initially, i.e. $\theta(\lambda, x_0)<0$ for small $\lambda$, and furthermore $\theta \to -\infty$ as $\lambda \to 1/|\gamma_1(x_0)|$. On the other hand, if $\gamma_1(x)\geq 0$ it is clear the coordinate system can be extended to all positive values of $\lambda$.
We emphasise that our general solution is valid for any cosmological constant. Motivated by the discussion in the introduction, in this note we will focus on AdS solutions with compact cross-sections of the horizon. Therefore, henceforth we set $\Lambda = - 2/ \ell^2$ and $H \cong S^1$. We thus identify $x\sim x +2\pi R$, where $R>0$ is the radius of the horizon, and assume the functions $h_0(x), \gamma_1(x)$ are $2\pi R$-periodic. Thus, if $\gamma_1(x)>0$, then $\lambda \to \infty$ is a conformal boundary with boundary metric
\begin{equation}
\lambda^{-2} \textrm{d} s^2 \to - \frac{ \textrm{d} v^2}{\ell^2} + (\gamma_1 \textrm{d} x+ \tfrac{1}{2}h_0 \textrm{d} v)^2 \; . \label{bdy}
\end{equation}
If $h_0$ is constant we may define coordinates $t=v$ and $ \textrm{d} \phi =\gamma_1(x) \textrm{d} x + \tfrac{1}{2}h_0 \textrm{d} v$ which explicitly show the boundary is a flat cylinder. This will be relevant below.
\subsection{Extra Killing field}
We will now show that under the assumptions $H \cong S^1$ and $\Lambda<0$, our general solution in fact always possesses a second Killing field which commutes with $K$ and is globally defined (i.e. it is compatible with the periodic identification $x \sim x+2\pi R$).
A tedious calculation shows that for the general non-degenerate case $\kappa \neq 0$, the most general Killing field which commutes with $K$ is (a multiple of)
\begin{equation}
X = \left( c+ \frac{h_0}{2\kappa} \right) \partial_ v + \frac{\lambda^2 h_0 h_0'}{4\kappa \gamma} \partial_\lambda + \left( 1 - \frac{\lambda h_0'}{2\kappa \gamma} \right) \partial_x \; ,
\end{equation}
where $c$ is a constant and we have used (\ref{constr}). Observe that this Killing field is globally defined, tangent to the horizon $\mathcal{N}$ and has closed orbits.
For the general degenerate case $\kappa=0$, equation (\ref{constr}) shows $h_0$ is determined by the near-horizon equation
\begin{equation}
\partial_x h_0 - \tfrac{1}{2} h_0^2 = - \frac{2}{\ell^2} \; . \label{nh}
\end{equation}
It has been shown that the most general solution on $H \cong S^1$ is $h_0 = 2 / \ell$ (choosing a sign), which corresponds to the near-horizon geometry of the extreme BTZ black hole~\cite{Kunduri:2013gce}. In this case, it can be shown that the most general globally defined Killing field which commutes with $K$ is (a multiple of)
\begin{equation}
X =(c+ y) \partial_ v +\frac{\lambda^2 y'}{\ell \gamma} \partial_\lambda + \left(1 - \frac{\lambda y'}{\gamma} \right) \partial_x \; , \label{Xex}
\end{equation}
where $c$ is a constant and $y(x)$ is the unique periodic solution to
\begin{equation}
y' - \frac{2}{\ell} y = \gamma_1(x) \; . \label{yeq}
\end{equation}
Again, note that this Killing field has closed orbits and is also tangent to the horizon.
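The Killing property of (\ref{Xex}) can also be spot-checked directly. The SymPy sketch below uses our reading of the Gaussian null metric in the degenerate case ($h_0 = 2/\ell$, $f = 0$), the sample profile $\gamma_1 = 2 + \cos x$ with $\ell = 1$ (our choice), and the corresponding unique periodic solution $y = -1 - \tfrac{2}{5}\cos x + \tfrac{1}{5}\sin x$ of (\ref{yeq}), and verifies $\mathcal{L}_X g = 0$ at a generic point:

```python
import sympy as sp

v, lam, x = sp.symbols('v lam x')
coords = (v, lam, x)
c = sp.Rational(1, 3)                         # arbitrary constant in X (our choice)

g1 = 2 + sp.cos(x)                            # sample gamma_1 > 0, 2*pi-periodic
y = -1 - sp.Rational(2, 5)*sp.cos(x) + sp.Rational(1, 5)*sp.sin(x)
assert sp.simplify(sp.diff(y, x) - 2*y - g1) == 0   # y solves (yeq) with l = 1

# degenerate metric with l = 1: ds^2 = 2 dv dlam + 2 lam h dv dx + gamma^2 dx^2
gamma = 1 + lam*g1
h = 2*(1 + lam*g1/2)
g = sp.Matrix([[0, 1, lam*h], [1, 0, 0], [lam*h, 0, gamma**2]])

yp = sp.diff(y, x)
X = [c + y, lam**2*yp/gamma, 1 - lam*yp/gamma]

def lie_g(m, n):
    # (L_X g)_{mn} = X^a d_a g_{mn} + g_{an} d_m X^a + g_{ma} d_n X^a
    expr = sum(X[a]*sp.diff(g[m, n], coords[a]) for a in range(3))
    expr += sum(g[a, n]*sp.diff(X[a], coords[m]) + g[m, a]*sp.diff(X[a], coords[n])
                for a in range(3))
    return expr

pt = {lam: sp.Rational(2, 5), x: sp.Rational(11, 10)}
max_lie = max(abs(sp.N(lie_g(m, n).subs(pt))) for m in range(3) for n in range(3))
```

The same computation with other periodic choices of $\gamma_1$ (and the matching $y$) behaves identically.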
Thus in either case we see that a general spacetime containing a Killing horizon with compact cross-sections always possesses a second Killing field $X$ with closed orbits which commutes with $K$, i.e. it is {\it axisymmetric}.\footnote{We emphasise that, although related, this does not follow from the usual rigidity theorem for stationary rotating black holes.} Since $X$ is tangent to the horizon, we could always choose a different cross-section $\tilde{H} \cong S^1$ such that $X$ is tangent to $\tilde{H}$ for some constant $c$. In this case, the solution written in Gaussian null coordinates $(\tilde{v}, \tilde{\lambda}, \tilde{x}$) adapted to this new cross-section $\tilde{H}$, must take our general form (\ref{soln}) but with $\tilde{h}_0, \tilde{\gamma}_1$ constant functions. It is then easy to see the solution is given by the BTZ black hole or its near-horizon geometry, as we show next.
Thus suppose that $\partial /\partial x$ is a Killing field with closed orbits so $h_0$ and $\gamma_1$ are constant functions. Using the discrete transformations $x \to -x$ and $(v,\lambda, \kappa) \to -(v, \lambda, \kappa)$ we may always arrange $h_0\geq 0$ and $\gamma_1\geq 0$, respectively.
If $\gamma_1 > 0$ define two positive parameters $(r_+, r_-)$ by $\gamma_1=1/r_+$ and $h_0 = 2r_-/(\ell r_+)$. Solving the constraint (\ref{constr}) implies $\kappa = (r_+^2- r_-^2)/(\ell^2 r_+)$. Now, performing the coordinate change
\begin{eqnarray}
\lambda &=&r-r_+\;, \nonumber \\ \textrm{d} v &=& \textrm{d} t + \frac{ \textrm{d} r}{N^2} \nonumber \\ \textrm{d} x &=& r_+ \textrm{d} \phi - \frac{r_-}{\ell} \textrm{d} t+r_+ \left( N^\phi - \frac{r_-}{r_+\ell} \right) \frac{ \textrm{d} r}{N^2} \; , \label{btzcoords}
\end{eqnarray}
where we have defined the functions
\begin{equation}
N^2 = \frac{(r^2-r_+^2)(r^2-r_-^2)}{\ell^2 r^2} \; , \qquad \qquad {N}^\phi = \frac{r_- r_+}{\ell r^2} \; ,
\end{equation}
gives
\begin{equation}
\textrm{d} s_{\text{BTZ}}^2 = - N^2 \textrm{d} t^2 + \frac{ \textrm{d} r^2}{N^2} + r^2 \left( \textrm{d} {\phi} + N^\phi \textrm{d} t \right)^2 \; ,
\end{equation}
which is the BTZ black hole solution. If $\kappa\geq 0$ the horizon $\lambda=0$ corresponds to the outer horizon, whereas if $\kappa<0$ the horizon $\lambda=0$ corresponds to the inner horizon.
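The coordinate change (\ref{btzcoords}) can be checked by pulling back the Gaussian null metric through its Jacobian. In the SymPy sketch below the sign conventions for the ansatz are ours (consistent with (\ref{deg})); they may differ from the text's by the orientation $\phi \to -\phi$, which flips the sign of the $t\phi$ cross term but yields the same geometry, so we compare the square of that component:

```python
import sympy as sp

r, rp, rm, l = sp.symbols('r r_p r_m l', positive=True)

N2 = (r**2 - rp**2)*(r**2 - rm**2)/(l**2*r**2)
Nphi = rm*rp/(l*r**2)

# Gaussian null data with constant gamma_1 = 1/r_+, h_0 = 2 r_-/(l r_+)
lam = r - rp
g1, h0 = 1/rp, 2*rm/(l*rp)
kappa = (rp**2 - rm**2)/(l**2*rp)
gamma = 1 + lam*g1
h = h0*(1 + lam*g1/2)
f = -2*kappa + (-2/l**2 + h0**2/2)*lam/2

g_old = sp.Matrix([[lam*f, 1, lam*h], [1, 0, 0], [lam*h, 0, gamma**2]])

# Jacobian d(v, lam, x)/d(t, r, phi) read off from the differentials above
J = sp.Matrix([[1, 1/N2, 0],
               [0, 1, 0],
               [-rm/l, rp*(Nphi - rm/(rp*l))/N2, rp]])

g_new = J.T*g_old*J

checks = {
    'tt': sp.simplify(g_new[0, 0] - (-N2 + r**2*Nphi**2)),
    'rr': sp.simplify(g_new[1, 1] - 1/N2),
    'phiphi': sp.simplify(g_new[2, 2] - r**2),
    'tr': sp.simplify(g_new[0, 1]),
    'rphi': sp.simplify(g_new[1, 2]),
    'tphi_sq': sp.simplify(g_new[0, 2]**2 - (r**2*Nphi)**2),
}
```

All entries of `checks` vanish identically, confirming the pullback is the BTZ metric.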
If $\gamma_1=0$, the constraint (\ref{constr}) can be immediately solved to get $h_0 = 2/\ell$ and hence the solution in this case simply reads
\begin{equation}
\textrm{d} s^2 = \left( -2 \kappa \lambda - \frac{4\lambda^2}{\ell^2} \right) \textrm{d} v^2 + 2 \textrm{d} v \textrm{d} \lambda + \left( \textrm{d} x + \frac{2\lambda \textrm{d} v}{\ell} \right)^2 \; .
\end{equation}
If $\kappa=0$ this is the near-horizon limit of the extreme BTZ black hole. If $\kappa \neq 0$ this is the decoupling limit of the near-extreme BTZ black hole.
\section{General solution with a degenerate horizon}
In this section we will study the general solution containing a degenerate horizon ($\kappa=0$) with compact cross-sections $H \cong S^1$. As shown above, the general spacetime in this case is given by
\begin{equation}
\textrm{d} s^2 = 2 \textrm{d} v \left[ \textrm{d} \lambda + \frac{2}{\ell} \lambda(1+\tfrac{1}{2} \lambda \gamma_1(x)) \textrm{d} x \right] + (1+\lambda \gamma_1(x))^2 \textrm{d} x^2 \; , \label{deg}
\end{equation}
where $\gamma_1(x)$ is an arbitrary periodic function $\gamma_1(x+2\pi R) =\gamma_1(x)$.
\subsection{Large diffeomorphism}
We now explicitly show that this solution is globally isometric to the BTZ black hole, or its near-horizon geometry, by introducing coordinates adapted to the two commuting Killing fields $K=\partial/\partial v$ and $X$ given by (\ref{Xex}).
The inner products of the Killing fields are thus:
\begin{eqnarray}
K^2=0, \qquad K \cdot X = \frac{2\lambda}{\ell} \left( 1- \frac{\lambda y}{\ell} \right), \qquad X^2 = 1+ \frac{4c\lambda}{\ell} \left( 1- \frac{\lambda y}{\ell} \right) \; .
\end{eqnarray}
Define a third vector field $U$ by: $U^2=0, U \cdot X=0, U \cdot K=1$. It is easy to show that
\begin{equation}
U = -\frac{1}{2} C^2 \partial_v+ \partial_\lambda +C e_x \label{Uex}
\end{equation}
where $e_x= \frac{1}{\gamma}( \partial_x - \lambda h \partial_\lambda)$ is a dual vector to the basis (\ref{basis}) and the function $C$ satisfies
\begin{equation}
y+c + \left( 1- \frac{2 \lambda y}{\ell} \right) C - \frac{\lambda}{\ell} \left(1-\frac{\lambda y}{\ell} \right) C^2=0 \; .
\end{equation}
The discriminant of this quadratic is simply $X^2$. Hence the unique solution which is regular on the horizon is
\begin{equation}
C = \frac{ 1- \frac{2\lambda y}{\ell} - \sqrt{1+ \frac{4c\lambda}{\ell} \left( 1- \frac{\lambda y}{\ell} \right)}}{\frac{2}{\ell} \lambda (1 - \frac{\lambda y}{\ell})} \; , \label{Ceq}
\end{equation}
where we must have $X^2>0$.
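Both claims, that (\ref{Ceq}) is a root of the quadratic and that the discriminant equals $X^2$, are simple to confirm symbolically. A SymPy sketch, treating the $x$-dependent quantities pointwise as symbols (our notation):

```python
import sympy as sp

lam, ell, c, y = sp.symbols('lam ell c y')
X2 = 1 + (4*c*lam/ell)*(1 - lam*y/ell)
C = (1 - 2*lam*y/ell - sp.sqrt(X2)) / ((2*lam/ell)*(1 - lam*y/ell))

# the quadratic satisfied by C, and its discriminant b^2 - 4 a c_0
quad = y + c + (1 - 2*lam*y/ell)*C - (lam/ell)*(1 - lam*y/ell)*C**2
a, b = -(lam/ell)*(1 - lam*y/ell), 1 - 2*lam*y/ell
disc = b**2 - 4*a*(y + c)

quad_res = sp.simplify(quad)
disc_res = sp.simplify(disc - X2)
```

Both residuals vanish identically.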
Now a tedious calculation shows that $[X, U]=0$ if and only if
\begin{equation}
\frac{\lambda^2 y'}{\ell} \partial_\lambda C + \left(1 - \frac{2\lambda y}{\ell} \right) \partial_x C + y'=0 \; . \label{XU}
\end{equation}
Remarkably, it can be shown that (\ref{Ceq}) automatically satisfies (\ref{XU}). This allows us to deduce that a new coordinate system $(\tilde{v}, \tilde{\lambda}, \tilde{x})$ exists such that
\begin{equation}
K = \frac{\partial}{ \partial \tilde{v}}, \qquad U = \frac{\partial}{\partial \tilde{\lambda}}, \qquad X= \frac{\partial}{\partial \tilde{x}} \; .
\end{equation}
From (\ref{Uex}) we may read off $ \frac{\partial \lambda}{\partial \tilde{\lambda}} = 1- \frac{\lambda h C}{\gamma} $ and $\frac{\partial x}{\partial \tilde{\lambda}} = \frac{C}{\gamma}$ which imply
\begin{eqnarray}
\partial_{\tilde{\lambda}} \sqrt{X^2} &=&\frac{2c}{\ell} \frac{\left[ 1- \frac{2\lambda y}{\ell} - \frac{2\lambda}{\ell} \left( 1- \frac{\lambda y}{\ell} \right) C \right] }{\sqrt{1+ \frac{4c\lambda}{\ell} \left( 1- \frac{\lambda y}{\ell} \right) }} = \frac{2c}{\ell} \; ,
\end{eqnarray}
where in the second equality we used (\ref{Ceq}). Hence, integrating and fixing the horizon to be at $\tilde{\lambda}=0$ we get
\begin{equation}
\sqrt{X^2}= 1+\frac{2c \tilde{\lambda}}{\ell} \;, \qquad K \cdot X = \frac{2\tilde{\lambda}}{\ell} \left( 1+ \frac{c \tilde{\lambda}}{\ell} \right) \; .
\end{equation}
Therefore, the metric in the new coordinates is
\begin{equation}
\textrm{d} s^2 = 2 \textrm{d} \tilde{v} \left[ \textrm{d} \tilde{\lambda} + \frac{2}{\ell} \tilde{\lambda}\left(1+\frac{c \tilde{\lambda}}{\ell} \right) \textrm{d} \tilde{x} \right] + \left(1+\frac{2 c \tilde{\lambda}}{\ell} \right)^2 \textrm{d} \tilde{x}^2 \; .
\end{equation}
This expresses the solution in Gaussian null coordinates adapted to a cross-section $\tilde{H} \cong S^1$ which is tangent to $X$. It thus takes our general form (\ref{deg}) with $\tilde{\gamma}_1 = 2c /\ell$ constant. As we showed above, this is the extreme BTZ black hole ($c \neq 0$) or its near-horizon geometry ($c=0$).
The results of the next section will show that the diffeomorphism constructed above must be a large diffeomorphism.
\subsection{Asymptotic charges}
We now consider the extreme solution in a chart adapted to a general cross-section, i.e. the spacetime (\ref{deg}). We will assume that the transverse null geodesics $\partial /\partial \lambda$ are strictly expanding, i.e. $\gamma_1(x)>0$. This ensures it is asymptotically AdS$_3$ with a cylinder conformal boundary. In fact since $h_0$ is constant and $x$ is periodically identified, this immediately follows from (\ref{bdy}).
To see this in more detail, consider the coordinate change defined by
\begin{eqnarray}
r &=& \lambda + \frac{1}{\gamma_1(x)} \nonumber \\
t &=& v + \frac{\ell^2}{r} \left(1+ \frac{\beta(x)}{3 r^2} \right) \nonumber \\
\phi &=& \int \gamma_1(x) \textrm{d} x + \frac{v}{\ell} - \frac{\beta(x)}{3r^3} \label{ads3coords}
\end{eqnarray}
where the function
\begin{equation}
\beta \equiv \frac{1}{\gamma_1^2} \left( 1 - \frac{\ell \gamma_1'}{\gamma_1} \right) \; . \label{beta}
\end{equation}
To derive this coordinate change, we expanded the one for extreme BTZ (\ref{btzcoords}) for large $r$ and then allowed the subleading terms to depend on $x$.
Observe that the coordinate change (\ref{ads3coords}) forces $\phi$ to be a periodic coordinate with period $\int_0^{2\pi R} \gamma_1(x) \textrm{d} x$. By scaling $(v, \lambda, \gamma_1) \to ( c v, c^{-1} \lambda, c\gamma_1)$, we may always fix the period of $\phi$ to be $2\pi$. In these coordinates our general metric (\ref{deg}) has the following asymptotics
\begin{eqnarray}
g_{tt} &=& - \frac{r^2}{\ell^2} + \frac{2 \beta(x)}{\ell^2}+ \mathcal{O} (r^{-1}) \;, \qquad g_{t \phi} = -\frac{\beta(x)}{\ell} + \mathcal{O}(r^{-1}) \; , \qquad g_{\phi \phi} = r^2 + \mathcal{O}(r^{-1}) \; ,\nonumber \\
g_{tr} &=& - \frac{2 \ell \beta'(x)}{3 \gamma_1(x) r^3} + \mathcal{O}(r^{-4}) \; , \qquad g_{r \phi} = \frac{\ell^2 \beta'(x)}{3 \gamma_1(x) r^3} +\mathcal{O}(r^{-4}) \; , \nonumber \\ g_{rr} &=& \frac{\ell^2}{r^2} + \frac{2\ell^2 \beta(x)}{r^4} +\mathcal{O}(r^{-5}) \; , \label{asymptoticmetric}
\end{eqnarray}
for $r \to \infty$, which explicitly shows that our spacetime is asymptotically AdS$_3$ in the sense of Brown and Henneaux~\cite{Brown:1986nw}.
Observe that for large $r$
\begin{equation}
\phi - \frac{t}{\ell} = \int \gamma_1(x) \textrm{d} x + \mathcal{O}(r^{-1})
\end{equation}
and hence asymptotically $x$ is purely a function of $\phi - \frac{t}{\ell}$ (note the coordinate change is invertible due to our assumption $\gamma_1>0$).
From the subleading terms in (\ref{asymptoticmetric}) we may compute the asymptotic charges of this solution. The asymptotic-symmetry generators are~\cite{Brown:1986nw}
\begin{equation}
L^\pm_n = \frac{1}{2} e^{ i n ( \frac{t}{\ell} \mp \phi ) } \left(\ell \frac{\partial}{\partial t} \mp \frac{\partial}{\partial \phi} \right) + \dots
\end{equation}
where $\dots$ denotes subleading terms and also terms proportional to $\partial_r$ which will not be needed. The conserved charge $Q[\xi]$ associated to an asymptotic symmetry generated by a vector field $\xi$ is an integral at fixed time $t$ over the boundary circle at spacelike infinity $r\to \infty$. We find that, relative to the zero mass BTZ solution, the Virasoro charges are
\begin{eqnarray}
Q[L^+_n] &=& \frac{1}{\ell \pi} \int_0^{2\pi} \textrm{d} \phi \; e^{ i n ( \frac{t}{\ell} - \phi ) } \beta(x) \\
Q[L^-_n] &=& 0 \; ,
\end{eqnarray}
where as noted above asymptotically $x$ is only a function of $\phi - \frac{t}{\ell}$. Thus our general solution generically carries non-zero charges only in one of the Virasoro algebras.
In particular, the mass $\ell M = Q[L_0^+]+Q[L_0^-]$ and angular momentum $J = Q[L_0^+]-Q[L_0^-]$ are given by
\begin{equation}
\ell M = J = \frac{1}{\ell \pi} \int_0^{2\pi} \textrm{d} \phi \; \beta(x) = \frac{1}{\ell \pi} \int_0^{2\pi R} \frac{ \textrm{d} x }{\gamma_1(x)} \; ,
\end{equation}
where in the final equality we converted to the $x$ coordinate (at constant $t$) and used the explicit form of $\beta$ together with periodicity.
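The final equality rests on the total-derivative term $\ell\gamma_1'/\gamma_1^2$ in $\gamma_1\beta$ integrating to zero by periodicity. A quick numerical sketch (sample profile $\gamma_1 = 2+\cos x$ with $\ell = R = 1$, our choices; $\textrm{d}\phi = \gamma_1\textrm{d} x$ at constant $t$):

```python
import math

g1 = lambda x: 2.0 + math.cos(x)      # sample gamma_1 > 0, period 2*pi (R = 1)
dg1 = lambda x: -math.sin(x)
beta = lambda x: (1.0 - dg1(x)/g1(x))/g1(x)**2    # (beta) with l = 1

n = 20000
step = 2*math.pi/n
xs = [(k + 0.5)*step for k in range(n)]          # midpoint rule
I_phi = sum(beta(x)*g1(x) for x in xs)*step      # int beta dphi, dphi = gamma_1 dx
I_x = sum(1.0/g1(x) for x in xs)*step            # int dx / gamma_1
gap = abs(I_phi - I_x)
```

For smooth periodic integrands the midpoint rule converges rapidly, so the two integrals agree to machine precision.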
Thus we see that the mass/angular-momentum relation satisfied by the extreme BTZ black hole persists for this class of spacetimes. However, unlike the BTZ black hole, these carry arbitrary non-zero charges with respect to all the Virasoro generators $L^+_n$ and vanishing ones with respect to $L^-_n$. In particular, the general solution is characterised by the Virasoro charges $Q[L^+_n]$ with $n \neq 0$. It is worth noting that if $Q[L^+_n]=0$ for all $n \neq 0$, then the function $\beta$ must be a constant and hence (\ref{beta}) implies $\gamma_1$ must be a constant (using periodicity) and we recover the BTZ black hole. Therefore, these geometries may be interpreted as descendants of the extreme BTZ black hole.
\section{Non-degenerate horizon}
In this section we study the general solution containing a non-degenerate horizon ($\kappa \neq 0$) with compact cross-sections $H \cong S^1$. As shown above, the general solution is given by (\ref{horizon}), (\ref{soln}) and is determined by the constant $\kappa$ and an arbitrary function $h_0(x)$, with $\gamma_1(x)$ then determined by (\ref{constr}). These functions must be periodic with $x\sim x+2\pi R$.
We will also assume that $\gamma_1(x)>0$ so the transverse null geodesics $\partial /\partial \lambda$ are strictly expanding and that $\kappa>0$ to ensure the null generators are future complete. Under these conditions, it can be shown that (\ref{constr}) implies
\begin{equation}
h_0(x)^2 < \frac{4}{\ell^2} \; , \label{h0ineq}
\end{equation}
for {\it all} $x$ (otherwise $h_0$ is monotonic, contradicting periodicity). In fact, this condition implies the Killing field $K$ is timelike {\it everywhere} outside the horizon.
We will first analyse the conformal boundary of this Einstein spacetime. The main complication arises due to the fact that the conformal boundary metric in the frame defined by equation (\ref{bdy}) is not flat for non-constant $h_0(x)$. Remarkably, we find there is a simple Weyl transformation on the boundary which makes (\ref{bdy}) a flat cylinder and is consistent with the global identifications we already have (i.e. $x$ periodic and $v$ not). It may be verified that
\begin{equation}
\textrm{d} s^2_b= \frac{- \frac{ \textrm{d} v^2}{\ell^2} + (\gamma_1 \textrm{d} x+ \tfrac{1}{2}h_0 \textrm{d} v)^2}{\Omega(x)^2} \; , \label{bdy2}
\end{equation}
where
\begin{equation}
\Omega(x) \equiv \sqrt{1- \frac{\ell^2 h_0^2}{4} } \; ,
\end{equation}
is a flat metric for any $h_0(x), \gamma_1(x)$. Indeed, this can be seen by performing the coordinate change
\begin{equation}
\textrm{d} t = c_\beta \textrm{d} v + \frac{\ell \gamma_1}{\Omega^2} \left( s_\beta - \frac{c_\beta \ell h_0}{2} \right) \textrm{d} x \; ,\qquad \textrm{d} \phi = s_\beta \frac{ \textrm{d} v}{\ell}+ \frac{\gamma_1}{\Omega^2}\left( c_\beta - \frac{s_\beta \ell h_0}{2} \right) \textrm{d} x \; ,
\end{equation}
where $c_\beta = \cosh \beta, s_\beta = \sinh \beta$ and $\beta$ is a constant ``boost" parameter,
which gives
\begin{equation}
\textrm{d} s_b^2 = -\frac{ \textrm{d} t^2}{\ell^2} + \textrm{d} \phi^2 \; .
\end{equation}
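Since the coefficient functions of $\textrm{d} t$ and $\textrm{d}\phi$ depend on $x$ only, both one-forms are closed, and the flatness claim reduces to a pointwise algebraic identity. A SymPy sketch (treating the $x$-dependent quantities as symbols; our notation):

```python
import sympy as sp

dv, dx, ell, h0, g1, b = sp.symbols('dv dx ell h0 gamma1 beta')
Om2 = 1 - ell**2*h0**2/4
cb, sb = sp.cosh(b), sp.sinh(b)

dt = cb*dv + (ell*g1/Om2)*(sb - cb*ell*h0/2)*dx
dphi = sb*dv/ell + (g1/Om2)*(cb - sb*ell*h0/2)*dx

lhs = -dt**2/ell**2 + dphi**2                       # flat cylinder metric
rhs = (-dv**2/ell**2 + (g1*dx + h0*dv/2)**2)/Om2    # rescaled boundary metric
res = sp.simplify((lhs - rhs).rewrite(sp.exp))      # use cosh^2 - sinh^2 = 1
```

The residual vanishes for any boost $\beta$ and any values of $h_0, \gamma_1$.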
Observe that we have to include the boost $\beta$ (which is a large diffeomorphism) in order to ensure that the coordinate $t$ is not periodically identified. The condition to avoid identifications of $t$ corresponds to the following specific choice of boost:
\begin{equation}
\tanh \beta =\frac{ \int_{0}^{2\pi R} \frac{\ell h_0 \gamma_1}{2\Omega^2} \textrm{d} x}{ \int_0^{2 \pi R} \frac{\gamma_1}{\Omega^2} \textrm{d} x} \; ,
\end{equation}
which we assume henceforth. Observe that due to $\gamma_1>0$ and (\ref{h0ineq}),
\begin{equation}
\left| \frac{ \int_{0}^{2\pi R} \frac{\ell h_0 \gamma_1}{2\Omega^2} \textrm{d} x}{ \int_{0}^{2\pi R} \frac{\gamma_1}{\Omega^2} \textrm{d} x} \right| \leq \frac{ \int_{0}^{2\pi R} \frac{\ell |h_0| \gamma_1}{2\Omega^2} \textrm{d} x}{\int_{0}^{2\pi R} \frac{\gamma_1}{\Omega^2} \textrm{d} x} <1 ,
\end{equation}
so a unique value for this special boost always exists. Furthermore, by a discrete transformation $x \to -x$ we may always arrange $\beta \geq 0$, which we will assume below. Also note that $\partial \phi/ \partial x >0$ so the coordinate $\phi$ inherits a periodicity from $x$. By scaling $(v, \lambda, \gamma_1) \to ( c v, c^{-1} \lambda, c\gamma_1)$, we may always fix the period of $\phi$ to be $2\pi$. Hence, in the above conformal frame the boundary is indeed globally a flat cylinder, as claimed.
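The choice of boost can be sanity-checked numerically: with $\tanh\beta$ as above, the holonomy of $\textrm{d} t$ around the $x$-circle vanishes, while that of $\textrm{d}\phi$ is positive. A sketch with sample profiles $h_0 = 0.3 + 0.6\sin x$ and $\gamma_1 = 1.5+0.5\cos x$, and $\ell = R = 1$ (our choices, consistent with $\gamma_1>0$ and (\ref{h0ineq})):

```python
import math

ell = 1.0
h0 = lambda x: 0.3 + 0.6*math.sin(x)     # sample data obeying h0^2 < 4/ell^2
g1 = lambda x: 1.5 + 0.5*math.cos(x)     # gamma_1 > 0
Om2 = lambda x: 1.0 - (ell*h0(x))**2/4

n = 20000
step = 2*math.pi/n                        # R = 1
xs = [(k + 0.5)*step for k in range(n)]
A = sum(ell*h0(x)*g1(x)/(2*Om2(x)) for x in xs)*step
B = sum(g1(x)/Om2(x) for x in xs)*step
tb = A/B                                  # tanh(beta) from the displayed formula
beta = math.atanh(tb)
cb, sb = math.cosh(beta), math.sinh(beta)

# with this boost, dt has no holonomy around the x-circle, while dphi does
dt_loop = sum(ell*g1(x)/Om2(x)*(sb - cb*ell*h0(x)/2) for x in xs)*step
dphi_loop = sum(g1(x)/Om2(x)*(cb - sb*ell*h0(x)/2) for x in xs)*step
```

The loop integral of $\textrm{d} t$ vanishes by construction of $\beta$, so $t$ is not periodically identified, while the positive loop integral of $\textrm{d}\phi$ fixes the period of $\phi$.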
We will now compute the asymptotic charges by working in the cylinder conformal frame. Consider the coordinate change defined by
\begin{eqnarray}
r &=& \Omega \left( \lambda + \frac{\ell^2 \kappa}{\Omega^2} \right) \; ,\nonumber \\
t &=& c_\beta v + \int \frac{\ell \gamma_1}{\Omega^2}\left( s_\beta - \frac{c_\beta \ell h_0}{2} \right) \textrm{d} x + \frac{\ell^2}{\Omega r}\left[ c_\beta - \frac{s_\beta \ell h_0}{2}+ \frac{\kappa^2\ell^4}{3\Omega^2 r^2}\left( c_\beta - \frac{s_\beta \ell^3 h_0^3}{8} \right) \right] \; ,\nonumber \\
\phi &=& s_\beta \frac{v}{\ell}+ \int \frac{\gamma_1}{\Omega^2}\left( c_\beta - \frac{s_\beta \ell h_0}{2} \right) \textrm{d} x + \frac{\ell}{\Omega r} \left[ s_\beta - \frac{c_\beta \ell h_0}{2} +\frac{\kappa^2 \ell^4}{3\Omega^2 r^2} \left( s_\beta - \frac{c_\beta \ell^3 h_0^3}{8} \right) \right] \; . \label{phigen}
\end{eqnarray}
In these coordinates we find that the metric for $r \to \infty$ has the following behaviour:
\begin{eqnarray}
&&g_{tt} = -\frac{r^2}{\ell^2} + \frac{\kappa^2\ell^2}{\Omega^2}\left( c_\beta^2 - \frac{s_\beta^2 \ell^2 h_0^2}{4} \right) +\mathcal{O}(r^{-3}) \; , \qquad \quad g_{t\phi} = -s_\beta c_\beta \kappa^2 \ell^3 + \mathcal{O}(r^{-3}) \; , \nonumber \\ && g_{\phi\phi} = r^2 + \frac{\kappa^2 \ell^4}{\Omega^2}\left( s_\beta^2 - \frac{c_\beta^2 \ell^2 h_0^2}{4} \right)+ \mathcal{O}
(r^{-3}) \nonumber \; , \qquad \qquad g_{tr}= \mathcal{O}(r^{-3}) \; , \\ && g_{\phi r} = \mathcal{O}(r^{-3}) \; ,\qquad \qquad g_{rr} = \frac{\ell^2}{r^2}+ \frac{\kappa^2 \ell^6}{\Omega^2 r^4} \left(1+ \frac{\ell^2 h_0^2}{4} \right) +\mathcal{O}(r^{-5}) \; ,
\end{eqnarray}
which is of Brown-Henneaux form and hence suitable for reading off the conserved charges. Observe that asymptotically $x$ is a function of {\it both} $\phi \pm t/\ell$. Hence a priori, one would expect the Virasoro charges $Q[L^\pm_n] \neq 0$ for all integers $n$. In fact, performing the computation, we actually find that
\begin{equation}
Q[L_n^\pm] = \ell^3 \kappa^2 \left( \frac{1}{2} +s_\beta^2 \pm s_\beta c_\beta \right) \delta_{n,0} \; .
\end{equation}
Therefore, we see that all higher Virasoro charges vanish, whereas the zero mode charges give the following mass and angular momentum
\begin{equation}
M = \ell^2 \kappa^2 ( c_\beta^2+ s_\beta^2), \qquad \qquad J = 2\ell^3 \kappa^2 s_\beta c_\beta \; .
\end{equation}
Observe that
\begin{equation}
\ell M - J = \ell^3 \kappa^2 (c_\beta - s_\beta )^2>0.
\end{equation}
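These relations follow directly from the zero-mode charges; a short symbolic check (SymPy):

```python
import sympy as sp

ell, kappa, b = sp.symbols('ell kappa beta', positive=True)
cb, sb = sp.cosh(b), sp.sinh(b)

# zero-mode Virasoro charges from the displayed formula
Qp = ell**3*kappa**2*(sp.Rational(1, 2) + sb**2 + sb*cb)
Qm = ell**3*kappa**2*(sp.Rational(1, 2) + sb**2 - sb*cb)
M = (Qp + Qm)/ell          # from l M = Q[L_0^+] + Q[L_0^-]
J = Qp - Qm                # from J = Q[L_0^+] - Q[L_0^-]

mass_res = sp.simplify(M - ell**2*kappa**2*(cb**2 + sb**2))
spin_res = sp.simplify(J - 2*ell**3*kappa**2*sb*cb)
bound_res = sp.simplify(ell*M - J - ell**3*kappa**2*(cb - sb)**2)
```

All three residuals vanish via $\cosh^2\beta - \sinh^2\beta = 1$.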
Therefore, our spacetime has precisely the same Virasoro charges as the non-extreme BTZ black hole. It follows that it must be diffeomorphic to the non-extreme BTZ black hole.\footnote{This follows from the fact that the Fefferman-Graham expansion (\ref{FG}) terminates in three-dimensions and that $Q[L_n^\pm] \sim \int_0^{2\pi} \textrm{d} \phi \; e^{in x^\pm} T^\pm(x^\pm) $.} Hence, in contrast to the extreme case, we do not obtain descendants of the non-extreme BTZ black hole (which would possess arbitrary charges with respect to all $L^\pm_n$ and thus be related by a large diffeomorphism).
\section{Discussion}
Another way of understanding our results may be as follows. The Fefferman-Graham expansion for three-dimensional Einstein spacetimes terminates and hence the conformal boundary metric and stress tensor determine the full spacetime~\cite{Skenderis:1999nb}. For asymptotically globally AdS$_3$ spacetimes, i.e.\ with cylinder conformal boundary metric $-\frac{ \textrm{d} t^2}{\ell^2}+ \textrm{d} \phi^2$ where $\phi \sim \phi +2\pi$, it is easy to determine the general Einstein spacetime~\cite{Banados:1998gg}, which is
\begin{equation}
\textrm{d} s^2 = \frac{1}{z^2} \left[\ell^2 \textrm{d} z^2 + 2 ( \textrm{d} x^+ + \tfrac{1}{2}z^2{T^-}(x^-) \textrm{d} x^- ) ( \textrm{d} x^- +\tfrac{1}{2} z^2{T^+}(x^+) \textrm{d} x^+ ) \right] \; , \label{FG}
\end{equation}
where $z=0$ is the conformal boundary, $\sqrt{2} x^\pm =\phi \pm \frac{t}{\ell}$ are lightcone coordinates on the cylinder, and the two arbitrary functions $T^\pm(x^\pm)$ are the components of the boundary stress tensor.
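That (\ref{FG}) is Einstein for arbitrary $T^\pm$ is readily spot-checked. Reading off the components $g_{zz} = \ell^2/z^2$, $g_{\pm\pm} = T^\pm$, $g_{+-} = z^{-2}(1 + \tfrac{1}{4}z^4 T^+ T^-)$, the following SymPy sketch (sample profiles and $\ell = 1$ are our choices) evaluates $R_{\mu\nu} + \tfrac{2}{\ell^2}g_{\mu\nu}$ at a generic point:

```python
import sympy as sp

z, xp, xm = sp.symbols('z x_p x_m', positive=True)
coords = (z, xp, xm)

Tp = 1 + sp.sin(xp)/2          # sample stress tensor profiles (our choice)
Tm = 2 + sp.cos(xm)/3
gpm = (1 + z**4*Tp*Tm/4)/z**2  # l = 1

g = sp.Matrix([[1/z**2, 0, 0], [0, Tp, gpm], [0, gpm, Tm]])
ginv = g.inv()

def christoffel(a, b, c):
    return sum(ginv[a, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d])) for d in range(3))/2

Gam = [[[christoffel(a, b, c) for c in range(3)] for b in range(3)] for a in range(3)]

def ricci(b, c):
    expr = sp.Integer(0)
    for a in range(3):
        expr += sp.diff(Gam[a][b][c], coords[a]) - sp.diff(Gam[a][b][a], coords[c])
        for d in range(3):
            expr += Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][a][b]
    return expr

# AdS_3 Einstein equation with l = 1: R_mn = -2 g_mn
pt = {z: sp.Rational(1, 2), xp: sp.Rational(2, 5), xm: sp.Rational(9, 10)}
max_resid = max(abs(sp.N((ricci(b, c) + 2*g[b, c]).subs(pt)))
                for b in range(3) for c in range(3))
```

Since the expansion terminates exactly, the residual vanishes for any choice of $T^\pm$, not just these samples.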
Now, suppose (\ref{FG}) describes a black hole with a horizon invariant under a Killing field of the form (\ref{K}). If $|\ell \Omega | \neq 1$, then it is straightforward to show that {\it both} $T^\pm(x^\pm)$ must be constant functions and hence the spacetime is stationary and axisymmetric (if $T^\pm >0$ this is the BTZ black hole). On the other hand, if $|\ell \Omega| =1$, then only one of $\partial_+$ or $\partial_-$ is a Killing field; without loss of generality suppose $\Omega \ell=1$ so $K\propto \partial_+$. Then $T^+(x^+)$ is again a constant, although now $T^-(x^-)$ can be an arbitrary function. It then follows that $|K|^2 = T^+$ is a constant and therefore such a Killing horizon exists if and only if $T^+=0$. In this case, $K$ is a globally null Killing field and since by assumption it is tangent to the black hole horizon, the horizon must be {\it degenerate}.
This simple argument allows for extreme black holes more general than extreme BTZ, which are related by a large diffeomorphism to the extreme BTZ. Furthermore, it also does not appear to allow for more general non-extreme black holes with a {\it Killing} horizon. This picture is consistent with the results derived in this paper, which were obtained by determining the most general three-dimensional Einstein metric containing a Killing horizon.
The asymptotic Virasoro charges of our general extreme black hole show that these geometries can be interpreted as descendants of the extreme BTZ (as defined in the introduction). It would be interesting to better understand their CFT interpretation.
On the other hand, we did not find descendants of the non-extreme BTZ black hole within our general solution with a non-extreme horizon (these would be related by a large diffeomorphism).\footnote{Our only assumption was that the transverse null geodesics are strictly expanding, so $\gamma_1(x)>0$ for all $x$. It would be interesting to analyse the other cases to confirm the generality of this statement.} This indicates that the descendants of the non-extreme BTZ black hole do {\it not} have a Killing horizon. It would be interesting to understand this by directly analysing under what conditions the general Einstein metric (\ref{FG}) contains a non-singular horizon. \\
\noindent {\bf Acknowledgements}. We would like to thank Mukund Rangamani, Harvey Reall and Joan Simon for useful comments. We would especially like to thank Don Marolf for pointing out the possible existence of a second Killing field, as well as Harvey Reall and Joan Simon for further insights. CL is supported by a Principal Career Development Scholarship at the University of Edinburgh. JL is supported by an EPSRC Career Acceleration Fellowship.
In three groundbreaking articles \cite{FreedHopkinsTelemanI:2011, FreedHopkinsTelemanII:2013, FreedHopkinsTelemanIII:2011} Freed, Hopkins and Teleman proved a close connection between the Verlinde algebra of a compact Lie group $G$ and its twisted equivariant $K$-theory, where $G$ acts on itself by conjugation. In the case that $G$ is simply connected, their theorem boils down to the following statement: Let $R_k(G)$ be the Verlinde ring of positive energy representations of the loop group $LG$ at level~$k \in \mathbb{Z}$. Then the following $R(G)$-modules are naturally isomorphic
\begin{equation} \label{eqn:FHT}
R_k(G) \cong \,^{\tau(k)}K^{\dim(G)}_G(G)\ .
\end{equation}
This identification turns into an isomorphism of rings if the left-hand side is equipped with the fusion product and the right-hand side with the product induced by Poincar\'{e} duality and the group multiplication. The representation theory of loop groups also dictates the fusion rules of sectors in conformal field theories associated to these groups. In joint work with Gannon, the first-named author proved that it is in fact possible to recover the full system of modular invariant partition functions of these CFTs from the twisted $K$-theory picture \cite{EvansGannon-Modular:2009, EvansGannon-ModularII:2013}. This approach has been particularly successful in the case of the loop groups of tori, where the modular invariants can be expressed as $KK$-elements. Even exotic fusion categories, like the ones constructed by Tambara-Yamagami, have elegant descriptions in terms of $K$-theory, as shown in the upcoming paper \cite{EvansGannon-tori:2019}.
For a simple and simply connected Lie group $G$ the classical equivariant twists of $K$-theory over $G$ are classified up to isomorphism by the equivariant cohomology group $H^3_G(G;\mathbb{Z}) \cong \mathbb{Z}$. The twist $\tau(k)$ in the FHT theorem corresponds to $(k+\check{h}(G))$ times the generator, where $\check{h}(G)$ is the dual Coxeter number of $G$. There are several ways to represent the generator of $H^3_G(G;\mathbb{Z})$ geometrically: As a Dixmier-Douady bundle (a locally trivial bundle of compact operators) \cite{Meinrenken-conjugacy:2009}, as a bundle of projective spaces \cite{AtiyahSegal-TwistedK:2004}, in terms of (graded) central extensions of groupoids \cite{FreedHopkinsTelemanI:2011} or as a (bundle) gerbe \cite{Meinrenken-basicgerbe:2003, MurrayStevenson-basic_gerbe:2008}.
From a homotopy theoretic viewpoint (and neglecting the group action for a moment) twisted $K$-theory is an example of a twisted cohomology theory. If $R$ denotes an $A_{\infty}$ ring spectrum, then it comes with a space of units $GL_1(R)$ and has a classifying space of $R$-lines $BGL_1(R)$, which turns out to be an infinite loop space for $E_{\infty}$ ring spectra \cite{AndoBlumbergGepner-RLines:2014, AndoBlumbergGepner-Twists:2010}. In this situation the twists of $R$-theory are classified by $[X, BGL_1(R)]$. If $KU$ denotes a ring spectrum representing $K$-theory, then the group $[X,BGL_1(KU)]$ splits off
\[
H^1(X; \mathbb{Z}/2\mathbb{Z}) \times H^3(X;\mathbb{Z})
\]
equipped with the multiplication
\[
(\omega_1, \delta_1) \cdot (\omega_2, \delta_2) = (\omega_1 + \omega_2, \delta_1 + \delta_2 + \beta(\omega_1 \cup \omega_2))\ ,
\]
where $\beta$ denotes the Bockstein homomorphism. The twists classified by $H^1(X;\mathbb{Z}/2\mathbb{Z})$ can easily be included in the classical picture for example by using graded central extensions as in \cite{FreedHopkinsTelemanI:2011} or graded projective bundles \cite{AtiyahSegal-TwistedK:2004}.
However, it was already pointed out by Atiyah and Segal in \cite{AtiyahSegal-TwistedK:2004} that the group $[X,BGL_1(KU)]$ is in general more subtle than ordinary cohomology. In joint work with Dadarlat, the second-named author found an operator-algebraic description of the twists of $K$-theory which covers the full group $[X,BGL_1(KU)]$ and is based on locally trivial bundles of stabilised strongly self-absorbing $C^*$-algebras \cite{DadarlatP-DixmierDouady:2016}. This picture is also easily adapted to include groups of the form $[X, BGL_1(KU[\tfrac{1}{d}])]$, i.e.\ the twists of the localisation of $K$-theory away from an integer $d$.
Motivated by the isomorphism \eqref{eqn:FHT} in the FHT theorem, the operator algebraic model in the non-equivariant case \cite{DadarlatP-DixmierDouady:2016} and the bundle gerbe description of the basic gerbe developed by Murray and Stevenson \cite{MurrayStevenson-basic_gerbe:2008} we introduce higher (i.e.\ non-classical) equivariant twists over $G = SU(n)$ in this paper. Our construction takes an exponential functor $F \colon \mathcal{V}^{{\mathrm{iso}}}_{\C} \to \mathcal{V}^{{\mathrm{iso}}}_{\C}$ on the category of complex finite-dimensional inner product spaces and isomorphisms as input and produces an equivariant Fell bundle $\mathcal{E} \to \mathcal{G}$. The groupoid $\mathcal{G}$ comes with an action of $G$ and an augmentation map $\mathcal{G} \to G$, which is an equivariant equivalence, if $G$ is equipped with the adjoint action.
Our main examples of exponential functors are the top exterior power functor $\@ifnextchar^\@extp{\@extp^{\,}}^{\mathrm{top}}$ and the full exterior algebra $\@ifnextchar^\@extp{\@extp^{\,}}^*$. For $F = \left(\@ifnextchar^\@extp{\@extp^{\,}}^{\mathrm{top}}\right)^{\otimes m}$ with $m \in \mathbb{N}$ our construction reproduces the basic gerbe from \cite{MurrayStevenson-basic_gerbe:2008}. A family of non-classical equivariant twists we will sometimes focus on arises from $F = \left(\@ifnextchar^\@extp{\@extp^{\,}}^*\right)^{\otimes m}$. Further examples of exponential functors and a classification result in terms of involutive $R$-matrices are discussed in \cite{Pennig:2018}.
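To indicate the dimensions involved: the exponential property gives $\dim F(\mathbb{C}^n) = \dim F(\mathbb{C})^n$, so for $F = \left(\@ifnextchar^\@extp{\@extp^{\,}}^{\mathrm{top}}\right)^{\otimes m}$ every $F(V)$ is one-dimensional, while for $F = \left(\@ifnextchar^\@extp{\@extp^{\,}}^{*}\right)^{\otimes m}$ we have $\dim F(\mathbb{C}^n) = 2^{mn}$.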
Each exponential functor $F$ gives rise to a strongly self-absorbing $C^*$-algebra $\mathsf{M}_\mathnormal{F}^{\infty}$, which carries a canonical $G$-action and is isomorphic to $\mathbb{C}$ (with the trivial action) in the classical case and to an infinite UHF-algebra for higher twists. The $C^*$-algebra $C^*(\mathcal{E})$ associated to the Fell bundle $\mathcal{E}$ is a $C(G)$-algebra that is stably $C(G)$-isomorphic to the section algebra of a locally trivial bundle $\mathcal{A} \to G$ with fibre $\mathsf{M}_\mathnormal{F}^{\infty} \otimes \mathbb{K}$. Neglecting the $G$-action, this bundle is classified by a continuous map
\(
G \to BGL_1\left(KU[\tfrac{1}{d}]\right)
\),
where $d = \dim(F(\mathbb{C}^n))$; since $\dim(F(\mathbb{C}^n)) = \dim(F(\mathbb{C}))^n$, this localisation agrees with the one away from $\dim(F(\mathbb{C}))$. At this point our work makes contact with \cite{DadarlatP-DixmierDouady:2016}. We conjecture that the classifying map agrees up to homotopy with the map
\[
\tau^n_F \colon SU(n) \to SU \simeq BBU_{\oplus} \to BBU_{\otimes}[\tfrac{1}{d}] \to BGL_1\left(KU[\tfrac{1}{d}]\right)
\]
considered in \cite{Pennig:2018}, but defer the proof to future work. We expect an analogous statement to be true in an equivariant setting, but since the units of genuine $G$-equivariant ring spectra are a matter of current research in equivariant stable homotopy theory (see for example \cite[Ex.~5.1.17]{book:Schwede}) we will come back to this question in future work as well.
The $G$-equivariant $K$-theory of $C^*(\mathcal{E})$ is a module over the localisation $K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R_F(G) = R(G)[F(\rho)^{-1}]$ of the representation ring $R(G)$ at $F(\rho)$, where $\rho$ denotes the standard representation of $SU(n)$. In general these $R_F(G)$-modules are computable via a spectral sequence similar to the one used in \cite{Meinrenken-conjugacy:2009}. Our computations for $SU(2)$ are summarised in Thm.~\ref{thm:twisted_eq_K_of_SU2}. In this case the spectral sequence reduces to the Mayer-Vietoris sequence.
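To give a concrete example of the coefficient ring: for $G = SU(2)$ and $F = \@ifnextchar^\@extp{\@extp^{\,}}^*$ we have $F(\rho) = [\@ifnextchar^\@extp{\@extp^{\,}}^0\mathbb{C}^2] + [\@ifnextchar^\@extp{\@extp^{\,}}^1\mathbb{C}^2] + [\@ifnextchar^\@extp{\@extp^{\,}}^2\mathbb{C}^2] = 2 + \rho$ in $R(SU(2)) \cong \mathbb{Z}[\rho]$, since the determinant representation is trivial on $SU(2)$, and therefore $R_F(SU(2)) \cong \mathbb{Z}[\rho][(2+\rho)^{-1}]$.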
We also compute the rationalised higher equivariant twisted $K$-theory of $SU(3)$ for general exponential functor twists, see Thm.~\ref{thm:main_theorem_SU(3)}. Here we adapt the approach developed in \cite{AdemCantareroGomez-TwistedK:2018} to our situation. In particular, we identify the rationalised chain complex on the $E^1$-page of the spectral sequence as the one computing Bredon cohomology of the maximal torus $\mathbb{T}^2$ of $SU(3)$ with respect to a certain local coefficient system.
Even though to our knowledge equivariant exponential functor twists of the form studied here have not been considered in the literature before, similar exponential morphisms played a crucial role in \cite{Teleman-ModuliOfSurfaces:2004}. Instead of a localisation of $KU$, the ring spectrum considered by Teleman is $KU[[t]]$, the power series completion of $K$-theory, and he shows that
\[
^\tau\!K^{\dim(G)}_G(G)\otimes \mathbb{C}[[t]]
\]
is a Frobenius algebra and therefore extends to a 2D topological field theory for admissible higher twists $\tau$.
We expect the same to be true for the equivariant higher twisted $K$-groups $K^G_{\dim(G)}(C^*(\mathcal{E}))$ and find evidence for this conjecture in the following remarkable properties of these groups (which are to be understood after rationalisation for $n = 3$):
\begin{itemize}
\item The spectral sequence collapses on the $E^2$-page for all exponential functor twists.
\item The $R_F(G)$-module $K_{\dim(G)}^G(C^*(\mathcal{E}))$ is a quotient of $R_F(G)$ by an ideal. In particular, it carries a ring structure.
\item The local coefficient system in the $SU(3)$-case over the Lie algebra~$\mathfrak{t}$ of $\mathbb{T}^2$ is determined by a homomorphism
\[
\pi_1(\mathbb{T}^2) \to GL_1(R_F(\mathbb{T}^2))
\]
similar to the one constructed in \cite[Prop.~3.4]{AdemCantareroGomez-TwistedK:2018}, which is reminiscent of the appearance of the flat line bundles in \cite[(3.4)]{FreedHopkinsTeleman-Complex:2008}.
\end{itemize}
The paper is structured as follows: Section 2 contains preliminary material about Morita-Rieffel equivalences between $C^*$-algebras. Exponential functors (Def.~\ref{def:exp_functor}) are revisited as well.
In Section~3 we describe the construction of the equivariant Fell bundle $\mathcal{E} \to \mathcal{G}$ from an exponential functor $F$ in several steps: The groupoid $\mathcal{G}$ decomposes into three disjoint connected components $\mathcal{G}_-$, $\mathcal{G}_0$ and $\mathcal{G}_+$ that are compatible with composition. A saturated half-bundle is essentially a saturated Fell bundle over the subcategory $\mathcal{G}_{0} \cup \mathcal{G}_+$. In Section~3.1 we prove the technical result that any saturated half-bundle (Def.~\ref{def:sat_half_bundle}) extends uniquely up to isomorphism to a saturated Fell bundle (Thm.~\ref{thm:extension_of_Fell_bdls}). Moreover, if the half-bundle carries a $G$-action in an appropriate sense, then this action extends uniquely to one on the Fell bundle (Cor.~\ref{cor:G-action_on_ext}). We then focus on the construction of the half-bundle $\mathcal{E}_{0,+}$ associated to an exponential functor $F$ in Sec.~3.2 (see Lemma~\ref{lem:cE_is_a_half_bundle}) and discuss the group action on it in Sec.~3.3 (see in particular Cor.~\ref{cor:Fell_bundle}). Combining the results from all three sections we obtain a saturated $G$-equivariant Fell bundle $\mathcal{E} \to \mathcal{G}$ describing the exponential functor twist over $SU(n)$ associated to $F$.
In Sec.~4 we look at the $C^*$-algebra $C^*(\mathcal{E})$ associated to $\mathcal{E}$ and prove that it is a continuous $C(G)$-algebra (see Lemma~\ref{lem:C(G)-algebra}) with fibre $C^*(\mathcal{E}_g)$ over $g \in G$, which is Morita equivalent to $\mathsf{M}_\mathnormal{F}^{\infty}$ (Lem.~\ref{lem:Fell_condition}). Since $C^*(\mathcal{E})$ satisfies the generalised Fell condition, it is stably isomorphic to a locally trivial bundle classified by a continuous map to $BGL_1(KU[d^{-1}])$, where $d =\dim(F(\mathbb{C}))$, by Cor.~\ref{cor:CStarBundle}. The spectral sequence that allows us to compute the equivariant higher twisted $K$-groups is introduced in Cor.~\ref{cor:spectral_seq} in Sec.~4.1. Using results from strongly self-absorbing $C^*$-dynamical systems we then prove in Prop.~\ref{prop:module_structure} in Sec.~4.2 that its terms are in fact modules over $R_F(G) \cong R(G)[F(\rho)^{-1}]$, as is $K_*^G(C^*(\mathcal{E}))$.
The final section contains the computations of the equivariant higher twisted $K$-groups in the cases $SU(2)$ (Sec.~5.1) and $SU(3)$ (Sec.~5.2). In the first case the result is summarised in Thm.~\ref{thm:twisted_eq_K_of_SU2}. For $SU(3)$ we first determine the differentials in Lemma~\ref{lem:diff_d0_d1} and compare the resulting chain complex with the one computing the Bredon cohomology of $\mathfrak{t}$ in Sec.~5.2.1. This allows us to compute the rational equivariant higher twisted $K$-theory in Thm.~\ref{thm:main_theorem_SU(3)}.
\subsection*{Acknowledgements}
The second author would like to thank Dan Freed and Steffen Sagave for helpful discussions. Part of this work was completed while the authors were staying at the Newton Institute during the programme ``Operator algebras: subfactors and their applications''. Both authors would like to thank the institute for its hospitality. Their research was supported in part by the EPSRC grants EP/K032208/1 and EP/N022432/1.
\section{Preliminaries}
\subsection{Bimodules and Morita-Rieffel equivalences}
In this section we collect some well-known facts about Hilbert bimodules and Morita-Rieffel equivalences. This is mainly to fix notation. A detailed introduction to Hilbert $C^*$-modules can be found in \cite{Lance-Toolkit:1995}. Let $A,B$ be separable unital $C^*$-algebras.
\begin{definition} \label{def:AB-bimod}
An \emph{$A$-$B$-bimodule} is a right Hilbert $B$-module $V$ together with a $*$-homomorphism
\[
\psi_A \colon A \to \rcpt{B}{V}\ ,
\]
where $\rcpt{B}{V}$ denotes the compact adjointable right $B$-linear operators on~$V$. An $A$-$B$-bimodule is called a \emph{(Morita-Rieffel) equivalence bimodule} if $V$ is full and $\psi_A$ is an isomorphism.
\end{definition}
Given a right Hilbert $B$-module $V$ with inner product $\rscal{\cdot}{\cdot}{B}$ we can associate a left Hilbert $B$-module $V^\text{op}$ to it in a natural way. The vector space underlying $V^\text{op}$ is $\overline{V}$, i.e.\ $V$ equipped with the conjugate linear structure. For a given element $v \in V$ we denote the corresponding element in $V^\text{op}$ by $v^*$. The left multiplication by $b \in B$ is defined by
\[
b\,v^* = (v\,b^*)^*
\]
and the left $B$-linear inner product is $\lscal{B}{v_1^*}{v_2^*} = \left(\rscal{v_2}{v_1}{B}\right)^*$.
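As a quick consistency check, note that this is indeed a left action:
\[
b_1\,(b_2\,v^*) = b_1\,(v\,b_2^*)^* = (v\,b_2^*\,b_1^*)^* = (v\,(b_1 b_2)^*)^* = (b_1 b_2)\,v^*\ ,
\]
and the inner product is positive, since $\lscal{B}{v^*}{v^*} = \left(\rscal{v}{v}{B}\right)^* = \rscal{v}{v}{B} \geq 0$.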
The space $\hom_B(V,B)$ of right $B$-linear adjointable morphisms is a left Hilbert $B$-module via the left multiplication $(b \cdot \varphi)(v) = b\varphi(v)$ and the inner product $\lscal{B}{\varphi_1}{\varphi_2} = \varphi_1 \circ \varphi_2^* \in \hom_B(B,B) \cong B$. The map
\[
V^\text{op} \to \hom_B(V,B) \qquad , \qquad v^* \mapsto \rscal{v}{\,\cdot\,}{B}
\]
provides a canonical isomorphism of left Hilbert $B$-modules and we will sometimes identify the two. Note that there is a conjugate linear bijection
\begin{equation} \label{eqn:star_on_mod}
V \to V^\text{op} \quad , \quad v \mapsto v^*
\end{equation}
which satisfies $(v\,b)^* = b^*\,v^*$.
The definition of $A$-$B$ equivalence bimodules may seem asymmetric in $A$ and $B$. It is actually not: Let $V$ be an $A$-$B$ equivalence bimodule. It carries a left multiplication by $a \in A$ defined by
\(
a\,v = \psi_A(a)v
\)
and a left $A$-linear inner product given by
\[
\lscal{A}{v_1}{v_2} = \psi_A^{-1}\left(v_1\rscal{v_2}{\,\cdot\,}{B}\right)\ .
\]
With respect to this multiplication and inner product $V$ is a full left Hilbert $A$-module. The rank $1$ operator $\lscal{A}{\,\cdot\,}{v_1}\,v_2$ agrees with the right multiplication by $\rscal{v_1}{v_2}{B}$. Since $V$ is full, the compact left $A$-linear operators therefore agree with $B$, but the multiplication is reversed, i.e.\ we obtain an isomorphism
\[
\psi_B \colon B^\text{op} \to \lcpt{A}{V}
\]
that sends $b$ to right multiplication by $b$. Thus, we could alternatively define an $A$-$B$ equivalence bimodule as a full left Hilbert $A$-module together with an isomorphism $\psi_B$ as above.
If $V$ is an $A$-$B$ equivalence bimodule, then $V^\text{op}$ is a full left Hilbert $B$-module. Let
\[
\psi_A^\text{op} \colon A^\text{op} \to \lcpt{B}{V^\text{op}}
\]
be the $*$-homomorphism given by $\psi_A^\text{op}(a)(v^*) = (\psi_A(a^*)v)^*$. Conjugation by $v \mapsto v^*$ induces a conjugate linear isomorphism $\rcpt{B}{V} \cong \lcpt{B}{V^\text{op}}$. From this we deduce that $\psi_A^\text{op}$ is a (linear) $*$-isomorphism. Thus, $V^\text{op}$ is a $B$-$A$ equivalence bimodule.
Note that the left $A$-linear and right $B$-linear inner product on an $A$-$B$-equivalence bimodule $V$ satisfy the compatibility condition
\begin{equation} \label{eqn:comp_inner_prod}
v_1\rscal{v_2}{v_3}{B} = \lscal{A}{v_1}{v_2}v_3
\end{equation}
for all $v_1,v_2,v_3 \in V$.
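In the prototypical case $V = A = B$ with $\rscal{v_1}{v_2}{A} = v_1^* v_2$ and $\lscal{A}{v_1}{v_2} = v_1 v_2^*$ the condition \eqref{eqn:comp_inner_prod} reduces to associativity in $A$: $v_1\,(v_2^* v_3) = (v_1 v_2^*)\,v_3$.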
Let $A$, $B$ and $C$ be separable unital $C^*$-algebras and let $V$ be an $A$-$B$ equivalence bimodule and $W$ be a $B$-$C$ equivalence bimodule. The tensor product over $B$ gives an $A$-$C$ equivalence bimodule that we will denote by
\[
V \otimes_B W\ .
\]
For details about this construction we refer the reader to \cite[Chap.~4]{Lance-Toolkit:1995} or \cite{Rieffel-MoritaEquiv:1982}. The left $A$-linear inner product provides an $A$-$A$ bimodule isomorphism
\[
\lscal{A}{\cdot}{\cdot} \colon V \otimes_B V^\text{op} \to A \ .
\]
Similarly, $\rscal{\cdot}{\cdot}{B} \colon V^\text{op} \otimes_A V \to B$ is a bimodule isomorphism as well. Concerning the opposite bimodule of a tensor product, there is a canonical isomorphism
\begin{equation} \label{eqn:op_of_tensor}
(V \otimes_B W)^\text{op} \cong W^\text{op} \otimes_B V^\text{op}
\end{equation}
given on elementary tensors by $(v \otimes w)^* \mapsto w^* \otimes v^*$.
Let $G$ be a compact group and let $\alpha \colon G \to \Aut{B}$ be a continuous action of $G$ on $B$, where $\Aut{B}$ is equipped with the pointwise-norm topology. We will call $B$ a $G$-algebra for short. A \emph{$G$-equivariant right Hilbert $B$-module} \cite[Def.~1 and Def.~2]{Kasparov-ThmStinespring:1980} is defined to be a right Hilbert $B$-module $V$ together with an action of $G$ (denoted by $g \cdot v$ for $g \in G, v \in V$) that satisfies
\begin{enumerate}[a)]
\item $g \cdot (vb) = (g \cdot v)\alpha_g(b)$,
\item $\rscal{g\cdot v}{g \cdot w}{B} = \alpha_g(\rscal{v}{w}{B})$,
\item $(g, v) \mapsto g \cdot v$ is continuous.
\end{enumerate}
If $V$ is a $G$-equivariant right Hilbert $B$-module, then $V^\text{op}$ equipped with the action $g \cdot v^* = (g \cdot v)^*$ is a $G$-equivariant left Hilbert $B$-module. The group $G$ acts continuously on the $C^*$-algebra $\rcpt{B}{V}$ by conjugation. If $A$ denotes another $G$-algebra, then a $G$-equivariant $A$-$B$-bimodule is an $A$-$B$-bimodule~$V$ whose structure map $\psi_A \colon A \to \rcpt{B}{V}$ is $G$-equivariant.
\subsection{Exponential functors}
Let $\mathcal{V}^{{\mathrm{fin}}}_{\C}$ be the category of finite-dimensional complex inner product spaces and linear maps and denote by $\mathcal{V}^{{\mathrm{iso}}}_{\C} \subset \mathcal{V}^{{\mathrm{fin}}}_{\C}$ the subgroupoid with the same objects but unitary isomorphisms as its morphisms. The higher twists we are going to construct will depend on the choice of an exponential functor on $\mathcal{V}^{{\mathrm{iso}}}_{\C}$. In the context of higher twists these were first considered in \cite{Pennig:2018}, which also contains a classification of those exponential functors that arise from restrictions of polynomial exponential functors on $\mathcal{V}^{{\mathrm{fin}}}_{\C}$ in terms of involutive solutions to the Yang-Baxter equation (involutive $R$-matrices). The following definition is taken from \cite[Def.~2.1]{Pennig:2018} and we refer the reader to that paper for a detailed description of the three conditions a), b) and c) stated below.
\begin{definition} \label{def:exp_functor}
An \emph{exponential functor on } $\mathcal{V}^{{\mathrm{fin}}}_{\C}$ (resp.\ $\mathcal{V}^{{\mathrm{iso}}}_{\C}$) is a triple consisting of a functor $F \colon \mathcal{V}^{{\mathrm{fin}}}_{\C} \to \mathcal{V}^{{\mathrm{fin}}}_{\C}$ (resp.\ $F \colon \mathcal{V}^{{\mathrm{iso}}}_{\C} \to \mathcal{V}^{{\mathrm{iso}}}_{\C}$) together with natural unitary isomorphisms
\[
\tau_{V,W} \colon F(V \oplus W) \to F(V) \otimes F(W)
\]
and $\iota \colon F(0) \to \mathbb{C}$ that satisfy the following conditions
\begin{enumerate}[a)]
\item $F$ preserves adjoints,
\item $\tau$ is associative,
\item $\tau$ is unital with respect to $\iota$.
\end{enumerate}
For an exponential functor $F$ (on $\mathcal{V}^{{\mathrm{fin}}}_{\C}$ or $\mathcal{V}^{{\mathrm{iso}}}_{\C}$) let $d(F)=\dim(F(\mathbb{C}))$. We define the \emph{dimension spectrum of $F$} to be
\[
\text{Dim}(F) := \{ \dim(F(V))\ |\ V \in \obj{\mathcal{V}^{{\mathrm{iso}}}_{\C}}\} = \{ d(F)^n \ | \ n \in \mathbb{N}_0\}\ .
\]
\end{definition}
The exterior algebra functor $F(V) = \@ifnextchar^\@extp{\@extp^{\,}}^*V$ provides a natural example of an exponential functor. The symmetric algebra $\text{Sym}^*(V)$ of a vector space $V$ comes with natural transformations $\tau$ and $\iota$ as above. It is, however, ruled out by the fact that $\text{Sym}^*(V)$ is infinite-dimensional. The exterior algebra functor can be modified as follows: Let $W$ be a finite-dimensional inner product space and consider
\[
F^W(V) = \bigoplus_{k=0}^\infty W^{\otimes k} \otimes \@ifnextchar^\@extp{\@extp^{\,}}^k V\ .
\]
As outlined in \cite[Sec.~2.2]{Pennig:2018} this provides a polynomial exponential functor $F^W \colon \mathcal{V}^{{\mathrm{fin}}}_{\C} \to \mathcal{V}^{{\mathrm{fin}}}_{\C}$.
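Counting dimensions, the exponential property of $F^W$ is reflected in
\[
\dim F^W(\mathbb{C}^n) = \sum_{k=0}^{n} (\dim W)^k \binom{n}{k} = (1 + \dim W)^n\ ,
\]
where the sum is finite because $\@ifnextchar^\@extp{\@extp^{\,}}^k V = 0$ for $k > \dim V$. In particular $d(F^W) = 1 + \dim W$, and $F^{\mathbb{C}}$ recovers the exterior algebra functor $\@ifnextchar^\@extp{\@extp^{\,}}^*$.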
\section{Higher twists via Fell bundles}
In this section we will consider a groupoid $\mathcal{G}$ that carries an action of $G = SU(n)$ and comes with a surjection $\mathcal{G} \to G$ that is equivariant with respect to the conjugation action of $G$ on itself. In fact, $\mathcal{G}$ will be Morita equivalent to $G$. We will then construct a Fell bundle $\pi \colon \mathcal{E} \to \mathcal{G}$ whose total space $\mathcal{E}$ carries an action of $G$ making $\pi$ equivariant. The groupoid $\mathcal{G}$ contains a subcategory $\mathcal{G}_{0,+}$, and we will first construct the analogue of a saturated Fell bundle over this subcategory before extending it to all of $\mathcal{G}$. To achieve this we will need the extension theorem proven in the next section.
\subsection{An extension theorem for saturated Fell bundles}
In this section we consider the following situation: Let $\mathcal{G}$ be a topological groupoid with object space $\mathcal{G}^{(0)}$. Suppose that we have a decomposition
\begin{equation} \label{eqn:decomp}
\mathcal{G} = \mathcal{G}_{-} \cup \mathcal{G}_0 \cup \mathcal{G}_{+}
\end{equation}
into disjoint open and closed subspaces. Let $\mathcal{G}^{(2)}$ be the space of composable arrows. For $\mathcal{U}, \mathcal{V} \subset \mathcal{G}$ define
\[
\mathcal{U} \cdot \mathcal{V} = \left\{ \mathsf{g}_1 \cdot \mathsf{g}_2 \in \mathcal{G} \ |\ (\mathsf{g}_1,\mathsf{g}_2) \in \mathcal{G}^{(2)} \text{ and } \mathsf{g}_1 \in \mathcal{U}, \mathsf{g}_2 \in \mathcal{V} \right\}
\]
and $\mathcal{U}^{-1} = \left\{ \mathsf{g}^{-1} \in \mathcal{G} \ | \ \mathsf{g} \in \mathcal{U} \right\}$. We will assume that the decomposition (\ref{eqn:decomp}) satisfies the following conditions
\begin{align}
\left(\mathcal{G}_+\right)^{-1} &= \mathcal{G}_- \label{eqn:inv_pm} \\
\left(\mathcal{G}_0\right)^{-1} &= \mathcal{G}_0 \label{eqn:inv_zero} \\
\mathcal{G}_+ \cdot \mathcal{G}_+ &\subseteq \mathcal{G}_+ \label{eqn:m_pp}\\
\mathcal{G}_0 \cdot \mathcal{G}_+ &= \mathcal{G}_+ \label{eqn:m_zp} \\
\mathcal{G}_+ \cdot \mathcal{G}_0 &= \mathcal{G}_+ \label{eqn:m_pz} \\
\mathcal{G}_0 \cdot \mathcal{G}_0 &= \mathcal{G}_0 \label{eqn:m_zz}
\end{align}
Since the identities on the objects of $\mathcal{G}$ are fixed points of the inversion and the decomposition is disjoint, we obtain from (\ref{eqn:inv_pm}) and (\ref{eqn:inv_zero}) that they must be contained in $\mathcal{G}_0$. Therefore (\ref{eqn:m_pz}) is actually equivalent to $\mathcal{G}_+ \cdot \mathcal{G}_0 \subseteq \mathcal{G}_+$ and likewise for (\ref{eqn:m_zp}). Taking inverses we also obtain
\begin{align*}
\mathcal{G}_- \cdot \mathcal{G}_- &\subseteq \mathcal{G}_-\\
\mathcal{G}_0 \cdot \mathcal{G}_- &= \mathcal{G}_- \\
\mathcal{G}_- \cdot \mathcal{G}_0 &= \mathcal{G}_-
\end{align*}
Indeed, since $(\mathcal{U} \cdot \mathcal{V})^{-1} = \mathcal{V}^{-1} \cdot \mathcal{U}^{-1}$, the first of these follows from \eqref{eqn:m_pp} via $\mathcal{G}_- \cdot \mathcal{G}_- = \left(\mathcal{G}_+ \cdot \mathcal{G}_+\right)^{-1} \subseteq \left(\mathcal{G}_+\right)^{-1} = \mathcal{G}_-$, and the remaining two follow in the same way from \eqref{eqn:m_pz} and \eqref{eqn:m_zp}.
\begin{definition} \label{def:sat_half_bundle}
Let $A$ be a separable unital $C^*$-algebra and write $\mathcal{G}_{0,+} = \mathcal{G}_0 \cup \mathcal{G}_+$. A \emph{saturated (Fell) half-bundle} is given by the following data: A Banach bundle $\mathcal{E}_{0,+} \to \mathcal{G}_{0,+}$ with the property that
\[
\left.\mathcal{E}_{0,+}\right|_{\mathcal{G}_0} = \mathcal{G}_0 \times A
\]
and a continuous multiplication map $\mu \colon \mathcal{E}_{0,+}^{(2)} \to \mathcal{E}_{0,+}$ where
\[
\mathcal{E}_{0,+}^{(2)} = \left\{ (e_1,e_2) \in \mathcal{E}_{0,+}^2 \ |\ (\pi(e_1),\pi(e_2)) \in \mathcal{G}^{(2)} \right\} \subset \left(\mathcal{E}_{0,+}\right)^2
\]
is equipped with the subspace topology. This data has to satisfy the following conditions:
\begin{enumerate}[a)]
\item The multiplication $\mu$ is bilinear and associative. It extends the canonical one on $\left.\mathcal{E}_{0,+}\right|_{\mathcal{G}_0} = \mathcal{G}_0 \times A$ and fits into a commutative diagram
\[
\begin{tikzcd}
\mathcal{E}_{0,+}^{(2)} \ar[d,"\pi \times \pi" left] \ar[r,"\mu"] & \mathcal{E}_{0,+} \ar[d,"\pi"] \\
\mathcal{G}_{0,+}^{(2)} \ar[r] & \mathcal{G}_{0,+}
\end{tikzcd}
\]
in which the lower horizontal map is the groupoid multiplication. We will use the abbreviated notation $e_1 \cdot e_2 := \mu(e_1,e_2)$.
\item There is a continuous inner product $\rscal{\,\cdot\,}{\,\cdot\,}{A} \colon \mathcal{E}_{0,+} \times_{\mathcal{G}_{0,+}} \mathcal{E}_{0,+} \to A \times \mathcal{G}^{(0)}$ that is right $A$-linear with respect to the multiplication $(e,a) \mapsto e \cdot a$ induced by $\mu$ with $\pi(e) = \mathsf{g} \in \mathcal{G}_{0,+}$ and $\pi(a) = \id{s(\mathsf{g})} \in \mathcal{G}_0$. It fits into the commutative diagram
\[
\begin{tikzcd}[column sep=2cm]
\mathcal{E}_{0,+} \times_{\mathcal{G}_{0,+}} \mathcal{E}_{0,+} \ar[r,"\rscal{\,\cdot\,}{\,\cdot\,}{A}"] \ar[d,"\pi" left] & A \times \mathcal{G}^{(0)} \ar[d,"\pi"] \\
\mathcal{G}_{0,+} \ar[r,"s" below] & \mathcal{G}^{(0)}
\end{tikzcd}
\]
and restricts to $\rscal{(a_1,\mathsf{g})}{(a_2,\mathsf{g})}{A} = (a_1^*a_2, s(\mathsf{g}))$ for $(a_i,\mathsf{g}) \in \left.\mathcal{E}_{0,+}\right|_{\mathcal{G}_0}$. It is compatible with the norm in the sense that
\begin{equation} \label{eqn:inner_prod_and_norm}
\lVert \rscal{e}{e}{A} \rVert = \lVert e \rVert^2
\end{equation}
and turns each fibre $\left(\mathcal{E}_{0,+}\right)_{\mathsf{g}}$ into a right Hilbert $A$-module. The left multiplication $(a,e) \mapsto a \cdot e$ with $\pi(e) = \mathsf{g}$ and $\pi(a) = \id{r(\mathsf{g})}$ is compact adjointable with respect to this inner product with adjoint given by $a^*$. Moreover, this left multiplication induces a $*$-isomorphism
\[
\psi_{A,\mathsf{g}} \colon A \to \rcpt{A}{\left(\mathcal{E}_{0,+}\right)_{\mathsf{g}}}
\]
between $A$ and the compact $A$-linear operators on each fibre. In other words, each fibre $\left(\mathcal{E}_{0,+}\right)_\mathsf{g}$ is an $A$-$A$ equivalence bimodule.
\item The Hilbert $A$-bimodule structure on the fibres is compatible with the multiplication in the sense that $\mu$ induces an $A$-$A$ bimodule isomorphism
\[
\begin{tikzcd}[column sep=2cm]
\left( \mathcal{E}_{0,+} \right)_{\mathsf{g}_1} \otimes_A \left( \mathcal{E}_{0,+} \right)_{\mathsf{g}_2} \ar[r,"\mu" above, "\cong" below] & \left( \mathcal{E}_{0,+} \right)_{\mathsf{g}_1\mathsf{g}_2}
\end{tikzcd}
\]
for each composable pair $(\mathsf{g}_1,\mathsf{g}_2) \in \mathcal{G}^{(2)}_{0,+}$.
\end{enumerate}
\end{definition}
\begin{remark}
Note that Def.~\ref{def:sat_half_bundle} c) and \eqref{eqn:inner_prod_and_norm} imply that the norm on $\mathcal{E}_{0,+}$ is submultiplicative in the sense that $\lVert e_1 \cdot e_2 \rVert \leq \lVert e_1 \rVert \cdot \lVert e_2 \rVert$: identifying $e_1 \cdot e_2$ with $e_1 \otimes e_2$ we have $\rscal{e_1 \cdot e_2}{e_1 \cdot e_2}{A} = \rscal{e_2}{\rscal{e_1}{e_1}{A} \cdot e_2}{A}$, whose norm is bounded by $\lVert e_1 \rVert^2 \, \lVert e_2 \rVert^2$.
\end{remark}
\begin{theorem} \label{thm:extension_of_Fell_bdls}
Let $(\mathcal{E}_{0,+}, \mu, \rscal{\,\cdot\,}{\,\cdot\,}{A})$ be a saturated half-bundle. There is a saturated Fell bundle $\pi \colon \mathcal{E} \to \mathcal{G}$ with the property that $\left.\mathcal{E}\right|_{\mathcal{G}_{0,+}} = \mathcal{E}_{0,+}$, the multiplication of $\mathcal{E}$ restricts to the one of $\mathcal{E}_{0,+}$ and for $e_1,e_2 \in \mathcal{E}_g$ we have
\(
e_1^* \cdot e_2 = \rscal{e_1}{e_2}{A}\ .
\) Moreover, $\pi \colon \mathcal{E} \to \mathcal{G}$ is unique up to (a canonical) isomorphism of Fell bundles.
\end{theorem}
\begin{proof}
Let $\text{inv} \colon \mathcal{G} \to \mathcal{G}$ be given by $\text{inv}(\mathsf{g}) = \mathsf{g}^{-1}$. We first extend $\mathcal{E}_{0,+}$ over $\mathcal{G}_-$ by defining $\pi \colon \mathcal{E}_- \to \mathcal{G}_-$ as $\mathcal{E}_- = \left(\text{inv}^*\mathcal{E}_+\right)^\text{op}$. This means that we have $\left(\mathcal{E}_-\right)_\mathsf{g} = \left(\mathcal{E}_+\right)_{\mathsf{g}^{-1}}^\text{op}$ fibrewise for all $\mathsf{g} \in \mathcal{G}_-$. Let $\mathcal{E} = \mathcal{E}_- \cup \mathcal{E}_{0,+}$. Together with the canonical quotient map to $\mathcal{G}$ this is a Banach bundle. The conjugate linear bijection $e \mapsto e^*$ from \eqref{eqn:star_on_mod} yields a well-defined $*$-operation on $\mathcal{E}$. It fits into the commutative diagram
\[
\begin{tikzcd}
\mathcal{E} \ar[r,"*"] \ar[d,"\pi" left] & \mathcal{E} \ar[d,"\pi"] \\
\mathcal{G} \ar[r,"\text{inv}" below] & \mathcal{G}
\end{tikzcd}
\]
Next we have to extend the multiplication map $\mu$ to all of $\mathcal{E}$, i.e.\ we have to construct a bimodule isomorphism $\mu \colon \mathcal{E}_{\mathsf{g}_1} \otimes_A \mathcal{E}_{\mathsf{g}_2} \to \mathcal{E}_{\mathsf{g}_1\mathsf{g}_2}$ for $(\mathsf{g}_1,\mathsf{g}_2) \in \mathcal{G}^{(2)}$. Depending on which of the subsets $\mathsf{g}_1$, $\mathsf{g}_2$ and $\mathsf{g}_1\mathsf{g}_2$ are contained in, there are six cases to consider:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& $\mathsf{g}_1$ & $\mathsf{g}_2$ & $\mathsf{g}_1\cdot \mathsf{g}_2$\\
\hline
1 & $+$ & $+$ & $+$ \\
2 &$+$ & $-$ & $+$ \\
3 &$+$ & $-$ & $-$ \\
4 &$-$ & $+$ & $+$ \\
5 &$-$ & $+$ & $-$ \\
6 &$-$ & $-$ & $-$ \\
\hline
\end{tabular}
\end{center}
A $+$, respectively $-$, refers to the case that the groupoid element is in $\mathcal{G}_{0,+}$, respectively $\mathcal{G}_-$. We need the relation
\[
(e_1 \cdot e_2)^* = e_2^* \cdot e_1^*
\]
to hold in a Fell bundle. This implies that if we have defined the multiplication in case $k$, then we have fixed it in case $(7-k)$ as well for $k \in \{1,\dots,6\}$. This reduces the number of cases to consider to the first three.
We will use the following graphical representation of groupoid elements: A morphism in $\mathcal{G}_{0,+}$ will be drawn as an arrow pointing right, a morphism in $\mathcal{G}_-$ corresponds to an arrow pointing left.\footnote{Contrary to the usual notation of morphisms we will draw the arrows from the codomain to the domain. This way the order of composition agrees with the composition of the arrows.} Let $\mathsf{g}_1,\mathsf{g}_2 \in \mathcal{G}^{(2)}$. If $\mathsf{g}_1\cdot \mathsf{g}_2$ ends up in $\mathcal{G}_{0,+}$, then the concatenation of the two corresponding arrows will end in a point to the right of the base of $\mathsf{g}_1$. Similarly, a composition ending up in $\mathcal{G}_-$ will end up in a point left of the base of $\mathsf{g}_1$. Cases $1,2$ and $3$ are drawn as follows:
\begin{center}
\begin{tikzpicture}[->,>=stealth',node distance=1.4cm,scale=1]
\node (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\node (13) [right of=12] {$\bullet$};
\path (11) edge node [above] {$\mathsf{g}_{1}$} (12);
\path (12) edge node [above] {$\mathsf{g}_{2}$} (13);
\node at (4.5,0) (21) {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\node (23) [right of=22] {$\bullet$};
\path (21) edge[bend left] node [above] {$\mathsf{g}_{1}$} (23);
\path (23) edge node [below] {$\mathsf{g}_{2}$} (22);
\node at (9,0) (31) {$\bullet$};
\node (32) [right of=31] {$\bullet$};
\node (33) [right of=32] {$\bullet$};
\path (32) edge node [above] {$\mathsf{g}_{1}$} (33);
\path (33) edge[bend left] node [below] {$\mathsf{g}_{2}$} (31);
\end{tikzpicture}
\end{center}
The multiplication is already defined in case $1$. For case $2$, let $(\mathsf{g}_1, \mathsf{g}_2) \in \mathcal{G}^{(2)}$ with $\mathsf{g}_1 \in \mathcal{G}_{0,+}$, $\mathsf{g}_2 \in \mathcal{G}_-$, $\mathsf{g}_1 \cdot \mathsf{g}_2 \in \mathcal{G}_{0,+}$. Observe that $\mathsf{g}_1 = \mathsf{g}_1\mathsf{g}_2 \cdot \mathsf{g}_2^{-1}$ is a decomposition of $\mathsf{g}_1$ into elements that are contained in $\mathcal{G}_{0,+}$. This can be easily read off from the above graphical representation. Since
\[
\mu \colon (\mathcal{E}_{0,+})_{\mathsf{g}_1\mathsf{g}_2} \otimes_A (\mathcal{E}_{0,+})_{\mathsf{g}_2^{-1}} \to (\mathcal{E}_{0,+})_{\mathsf{g}_1}
\]
is an isomorphism, we can extend $\mu$ by defining it to be the upper horizontal arrow in the diagram below, in which the vertical arrow on the right hand side is given by right multiplication:
\[
\begin{tikzcd}
(\mathcal{E}_{0,+})_{\mathsf{g}_1} \otimes_A (\mathcal{E}_{-})_{\mathsf{g}_2} \ar[rr,"\mu"] & & (\mathcal{E}_{0,+})_{\mathsf{g}_1\mathsf{g}_2} \\
(\mathcal{E}_{0,+})_{\mathsf{g}_1\mathsf{g}_2} \otimes_A (\mathcal{E}_{0,+})_{\mathsf{g}_2^{-1}} \otimes_A (\mathcal{E}_{0,+})^\text{op}_{\mathsf{g}_2^{-1}} \ar[u,"\mu \otimes \id{}", "\cong" right] \ar[rr,"\id{} \otimes \lscal{A}{\cdot}{\cdot}" below, "\cong" above ] & & (\mathcal{E}_{0,+})_{\mathsf{g}_1\mathsf{g}_2} \otimes_A A \ar[u,"\cong"]
\end{tikzcd}
\]
If we label the arrows by Fell bundle elements instead of groupoid morphisms, this definition will graphically be represented as follows:
\begin{center}
\begin{tikzpicture}[->,>=stealth',node distance=1.4cm,scale=1]
\node at (4.5,0) (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\node (13) [right of=12] {$\bullet$};
\path (11) edge[bend left] node [above] {$e_{1}$} (13);
\path (13) edge node [below] {$e^*_{2}$} (12);
\node (eq) [right of=13] {$\leadsto$};
\node (21) [right of=eq] {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\node (23) [right of=22] {$\bullet$};
\path (21) edge node [above] {$e'_{12}$} (22);
\path (22) edge[bend left] node [above] {$e'_{2}$} (23);
\path (23) edge[bend left] node [below] {$e^*_{2}$} (22);
\end{tikzpicture}
\end{center}
where $e_{12}' \cdot e_2' = e_1$ and the inner product is used to replace the loop on the right hand side with an element in $A$, i.e.\ $e_1 \cdot e_2^* = e_{12}' \cdot \lscal{A}{e_2'}{e_2}$.
Consider case $3$, i.e.\ $(\mathsf{g}_1, \mathsf{g}_2) \in \mathcal{G}^{(2)}$ with $\mathsf{g}_1 \in \mathcal{G}_{0,+}$, $\mathsf{g}_2 \in \mathcal{G}_-$, $\mathsf{g}_1 \cdot \mathsf{g}_2 \in \mathcal{G}_{-}$. Using \eqref{eqn:op_of_tensor} and the multiplication $\mu$ we obtain an isomorphism
\[
\mu^\text{op} \colon (\mathcal{E}_{0,+})_{\mathsf{g}_1}^\text{op} \otimes_A (\mathcal{E}_{0,+})^\text{op}_{(\mathsf{g}_1\mathsf{g}_2)^{-1}} \to (\mathcal{E}_{0,+})^\text{op}_{\mathsf{g}_2^{-1}}
\]
and we can extend the multiplication to this case using the upper horizontal arrow in the diagram below:
\[
\begin{tikzcd}
(\mathcal{E}_{0,+})_{\mathsf{g}_1} \otimes_A (\mathcal{E}_{-})_{\mathsf{g}_2} \ar[rr,"\mu"] & & (\mathcal{E}_{-})_{\mathsf{g}_1\mathsf{g}_2} \\
(\mathcal{E}_{0,+})_{\mathsf{g}_1} \otimes_A (\mathcal{E}_{0,+})_{\mathsf{g}_1}^\text{op} \otimes_A (\mathcal{E}_{0,+})^\text{op}_{(\mathsf{g}_1\mathsf{g}_2)^{-1}} \ar[u,"\id{} \otimes \mu{}^\text{op}", "\cong" right] \ar[rr,"\lscal{A}{\cdot}{\cdot} \otimes \id{}" below, "\cong" above ] & & A \otimes (\mathcal{E}_{0,+})^\text{op}_{(\mathsf{g}_1\mathsf{g}_2)^{-1}} \ar[u,"\cong"]
\end{tikzcd}
\]
Graphically this definition is represented as follows:
\begin{center}
\begin{tikzpicture}[->,>=stealth',node distance=1.4cm,scale=1]
\node at (9,0) (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\node (13) [right of=12] {$\bullet$};
\path (12) edge node [above] {$e_{1}$} (13);
\path (13) edge[bend left] node [below] {$e^*_{2}$} (11);
\node (eq) [right of=13] {$\leadsto$};
\node (21) [right of=eq] {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\node (23) [right of=22] {$\bullet$};
\path (22) edge[bend left] node [above] {$e_1$} (23);
\path (23) edge[bend left] node [below] {$(e'_{1})^*$} (22);
\path (22) edge node [below] {$(e'_{12})^*$} (21);
\end{tikzpicture}
\end{center}
i.e.\ we decompose $e_2^* = (e_1')^* \cdot (e_{12}')^*$ for some $(e_1')^* \in (\mathcal{E}_{0,+})_{\mathsf{g}_1}^\text{op}$ and $(e_{12}')^* \in (\mathcal{E}_-)_{\mathsf{g}_1\mathsf{g}_2}$ and define
\[
e_1 \cdot e_2^* = \lscal{A}{e_1}{e_1'}\cdot (e_{12}')^*\ .
\]
This finishes the extension of the multiplication map $\mu$.
Next we have to prove that the extended multiplication is still associative. Let $\mathsf{g}_1, \mathsf{g}_2, \mathsf{g}_3 \in \mathcal{G}$ such that $(\mathsf{g}_1,\mathsf{g}_2) \in \mathcal{G}^{(2)}$ and $(\mathsf{g}_2,\mathsf{g}_3) \in \mathcal{G}^{(2)}$. Let $e_i \in \mathcal{E}_{\mathsf{g}_i}$ for $i \in \{1,2,3\}$. We have to show that
\[
(e_1 \cdot e_2) \cdot e_3 = e_1 \cdot (e_2 \cdot e_3)\ .
\]
Each $\mathsf{g}_i$ could lie in $\mathcal{G}_{0,+}$ or in $\mathcal{G}_-$. Thus, if we neglect the compositions for a moment, this leaves us with six cases to consider. However, taking adjoints shows that the above equality is equivalent to
\[
e_3^* \cdot (e_2^* \cdot e_1^*) = (e_3^* \cdot e_2^*) \cdot e_1^*\ .
\]
Therefore we can without loss of generality assume that $\mathsf{g}_2 \in \mathcal{G}_{0,+}$. The diagrams of all remaining cases are shown in Figure~\ref{fig:associativity}. To prove the associativity condition in each case we make the following observations:
The given multiplication $\mu$ on $\mathcal{G}_{0,+}$ is fibrewise a bimodule isomorphism. Thus, whenever we have to decompose an element $e \in \mathcal{E}$, we may without loss of generality assume that it is maximally decomposed with respect to the diagram. In terms of the graphical representation this means that we may make the following replacements:
\begin{center}
\begin{tikzpicture}[->,>=stealth',node distance=1.4cm,scale=1]
\node (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\node (13) [right of=12] {$\bullet$};
\path (11) edge[bend left] node[above] {$e$} (13);
\node (eq) [right of=13] {$\leadsto$};
\node (21) [right of=eq] {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\node (23) [right of=22] {$\bullet$};
\path (21) edge node[above] {$e_1$} (22);
\path (22) edge node[above] {$e_2$} (23);
\node at (-1.4,-1.6) (31) {$\bullet$};
\node (32) [right of=31] {$\bullet$};
\node (33) [right of=32] {$\bullet$};
\node (34) [right of=33] {$\bullet$};
\path (31) edge[bend left] node[above] {$e'$} (34);
\node (eq) [right of=34] {$\leadsto$};
\node (41) [right of=eq] {$\bullet$};
\node (42) [right of=41] {$\bullet$};
\node (43) [right of=42] {$\bullet$};
\node (44) [right of=43] {$\bullet$};
\path (41) edge node[above] {$e_1$} (42);
\path (42) edge node[above] {$e_2$} (43);
\path (43) edge node[above] {$e_3$} (44);
\end{tikzpicture}
\end{center}
where $e = e_1\cdot e_2$ in the first case and $e' = e_1\cdot e_2 \cdot e_3$ in the second. Note that associativity of $\mu$ over $\mathcal{G}_{0,+}$ ensures that we may drop the brackets in the second case. Likewise, we may assume the analogous decomposition for the mirror images of these diagrams with arrows pointing to the left.
By our definition of the extension of $\mu$ to $\mathcal{G}_-$, associativity is also satisfied in diagrams that have one of the following forms:
\begin{center}
\begin{tikzpicture}[->,>=stealth']
\node (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\node (13) [right of=12] {$\bullet$};
\path (11) edge node [above] {} (12);
\path (12) edge[bend left] node [below] {} (13);
\path (13) edge[bend left] node [below] {} (12);
\node (21) [right of=13] {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\node (23) [right of=22] {$\bullet$};
\path (22) edge[bend left] node [above] {} (23);
\path (23) edge[bend left] node [below] {} (22);
\path (22) edge node [below] {} (21);
\node (31) [right of=23] {$\bullet$};
\node (32) [right of=31] {$\bullet$};
\node (33) [right of=32] {$\bullet$};
\path (33) edge node [above] {} (32);
\path (32) edge[bend right] node [below] {} (31);
\path (31) edge[bend right] node [below] {} (32);
\node (41) [right of=33] {$\bullet$};
\node (42) [right of=41] {$\bullet$};
\node (43) [right of=42] {$\bullet$};
\path (42) edge[bend right] node [above] {} (41);
\path (41) edge[bend right] node [below] {} (42);
\path (42) edge node [below] {} (43);
\end{tikzpicture}
\end{center}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[->,>=stealth',node distance=1.4cm,scale=1]
\node[shape=circle,draw,inner sep=1pt] at (0,0.8) {1};
\node (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\node (13) [right of=12] {$\bullet$};
\node (14) [right of=13] {$\bullet$};
\path (11) edge node [above] {$\mathsf{g}_{1}$} (12);
\path (12) edge node [above] {$\mathsf{g}_{2}$} (13);
\path (13) edge node [above] {$\mathsf{g}_{3}$} (14);
\node[shape=circle,draw,inner sep=1pt] at (6,0.8) {2};
\node at (6,0) (21) {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\node (23) [right of=22] {$\bullet$};
\node (24) [right of=23] {$\bullet$};
\path (21) edge node [above] {$\mathsf{g}_{1}$} (22);
\path (22) edge[bend left] node [above] {$\mathsf{g}_{2}$} (24);
\path (24) edge node [below] {$\mathsf{g}_{3}$} (23);
\node[shape=circle,draw,inner sep=1pt] at (0,-1.2) {3};
\node at (0,-2) (31) {$\bullet$};
\node (32) [right of=31] {$\bullet$};
\node (33) [right of=32] {$\bullet$};
\node (34) [right of=33] {$\bullet$};
\path (31) edge[bend left] node [above] {$\mathsf{g}_{1}$} (33);
\path (33) edge node [above] {$\mathsf{g}_{2}$} (34);
\path (34) edge[bend left] node [below] {$\mathsf{g}_{3}$} (32);
\node[shape=circle,draw,inner sep=1pt] at (6,-1.2) {4};
\node at (6,-2) (41) {$\bullet$};
\node (42) [right of=41] {$\bullet$};
\node (43) [right of=42] {$\bullet$};
\node (44) [right of=43] {$\bullet$};
\path (42) edge node [above] {$\mathsf{g}_{1}$} (43);
\path (43) edge node [above] {$\mathsf{g}_{2}$} (44);
\path (44) edge[bend left] node [above] {$\mathsf{g}_{3}$} (41);
\node[shape=circle,draw,inner sep=1pt] at (0,-3.2) {5};
\node at (0,-4) (51) {$\bullet$};
\node (52) [right of=51] {$\bullet$};
\node (53) [right of=52] {$\bullet$};
\node (54) [right of=53] {$\bullet$};
\path (52) edge node [above] {$\mathsf{g}_{1}$} (51);
\path (51) edge[bend right] node [below] {$\mathsf{g}_{2}$} (53);
\path (53) edge node [below] {$\mathsf{g}_{3}$} (54);
\node[shape=circle,draw,inner sep=1pt] at (6,-3.2) {6};
\node at (6,-4) (61) {$\bullet$};
\node (62) [right of=61] {$\bullet$};
\node (63) [right of=62] {$\bullet$};
\node (64) [right of=63] {$\bullet$};
\path (63) edge[bend right] node [above] {$\mathsf{g}_{1}$} (61);
\path (61) edge node [below] {$\mathsf{g}_{2}$} (62);
\path (62) edge[bend right] node [below] {$\mathsf{g}_{3}$} (64);
\node[shape=circle,draw,inner sep=1pt] at (0,-5.2) {7};
\node at (0,-6) (71) {$\bullet$};
\node (72) [right of=71] {$\bullet$};
\node (73) [right of=72] {$\bullet$};
\node (74) [right of=73] {$\bullet$};
\path (74) edge[bend right] node [below] {$\mathsf{g}_{1}$} (71);
\path (71) edge node [below] {$\mathsf{g}_{2}$} (72);
\path (72) edge node [below] {$\mathsf{g}_{3}$} (73);
\node[shape=circle,draw,inner sep=1pt] at (6,-5.2) {8};
\node at (6,-6) (81) {$\bullet$};
\node (82) [right of=81] {$\bullet$};
\node (83) [right of=82] {$\bullet$};
\node (84) [right of=83] {$\bullet$};
\path (82) edge node [above] {$\mathsf{g}_{1}$} (81);
\path (81) edge[bend right] node [above] {$\mathsf{g}_{2}$} (84);
\path (84) edge node [above] {$\mathsf{g}_{3}$} (83);
\node[shape=circle,draw,inner sep=1pt] at (0,-7.7) {9};
\node at (0,-8.5) (91) {$\bullet$};
\node (92) [right of=91] {$\bullet$};
\node (93) [right of=92] {$\bullet$};
\node (94) [right of=93] {$\bullet$};
\path (93) edge[bend right] node [above] {$\mathsf{g}_{1}$} (91);
\path (91) edge[bend right=50] node [below] {$\mathsf{g}_{2}$} (94);
\path (94) edge[bend left] node [below] {$\mathsf{g}_{3}$} (92);
\node[shape=circle,draw,inner sep=1pt] at (6,-7.7) {10};
\node at (6,-8.5) (101) {$\bullet$};
\node (102) [right of=101] {$\bullet$};
\node (103) [right of=102] {$\bullet$};
\node (104) [right of=103] {$\bullet$};
\path (104) edge[bend right] node [above] {$\mathsf{g}_{1}$} (101);
\path (101) edge[bend right] node [below] {$\mathsf{g}_{2}$} (103);
\path (103) edge node [above] {$\mathsf{g}_{3}$} (102);
\node[shape=circle,draw,inner sep=1pt] at (0,-10.7) {11};
\node at (0,-11.5) (111) {$\bullet$};
\node (112) [right of=111] {$\bullet$};
\node (113) [right of=112] {$\bullet$};
\node (114) [right of=113] {$\bullet$};
\path (113) edge node [above] {$\mathsf{g}_{1}$} (112);
\path (112) edge[bend right] node [below] {$\mathsf{g}_{2}$} (114);
\path (114) edge[bend right] node [above] {$\mathsf{g}_{3}$} (111);
\node[shape=circle,draw,inner sep=1pt] at (6,-10.7) {12};
\node at (6,-11.5) (121) {$\bullet$};
\node (122) [right of=121] {$\bullet$};
\node (123) [right of=122] {$\bullet$};
\node (124) [right of=123] {$\bullet$};
\path (124) edge[bend right] node [above] {$\mathsf{g}_{1}$} (122);
\path (122) edge node [below] {$\mathsf{g}_{2}$} (123);
\path (123) edge[bend left=50] node [below] {$\mathsf{g}_{3}$} (121);
\end{tikzpicture}
\caption{\label{fig:associativity}The 12 compositions to consider in the proof of associativity.}
\end{figure}
Using this, it follows that associativity holds in the cases $1$, $2$, $5$ and $8$ in Fig.~\ref{fig:associativity}. The multiplication isomorphism $\mu \colon \left(\mathcal{E}_{0,+}\right)_{\mathsf{g}_1} \otimes_A \left(\mathcal{E}_{0,+}\right)_{\mathsf{g}_2} \to \left(\mathcal{E}_{0,+}\right)_{\mathsf{g}_1\mathsf{g}_2}$ preserves inner products; hence we obtain
\[
\lscal{A}{e_1 \cdot e_2}{e_1'\cdot e_2'} = \lscal{A}{e_1\cdot {\lscal{A}{e_2}{e_2'}} }{e_1'}\ ,
\]
which is diagrammatically represented by the following relation:
\begin{center}
\begin{tikzpicture}[->,>=stealth',node distance=1.3cm,scale=1]
\node (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\node (13) [right of=12] {$\bullet$};
\path (11) edge[bend left] node [above] {$e_1 \cdot e_2$} (13);
\path (13) edge[bend left] node [below] {$(e_1' \cdot e_2')^*$} (11);
\node (eq) [right of=13] {$\leadsto$};
\node (21) [right of=eq] {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\node (23) [right of=22] {$\bullet$};
\path (21) edge[bend left] node [above] {$e_1$} (22);
\path (22) edge[bend left] node [above] {$e_2$} (23);
\path (23) edge[bend left] node [below] {$(e_2')^*$} (22);
\path (22) edge[bend left] node [below] {$(e_1')^*$} (21);
\end{tikzpicture}
\end{center}
Again, an analogous relation is true for the mirror images of the above diagrams. Our observation shows that we can drop the brackets in expressions represented by diagrams of the form depicted on the right-hand side. This implies that associativity holds in the cases $3$, $4$, $6$ and $7$ in Fig.~\ref{fig:associativity}.
The last relation needed is the compatibility of the two inner products in each fibre. Let $e_1, e_2, e_3 \in (\mathcal{E}_{0,+})_{\mathsf{g}}$ for some $\mathsf{g} \in \mathcal{G}_{0,+}$. Then \eqref{eqn:comp_inner_prod} implies in our context that:
\begin{align*}
e_1\,\rscal{e_2}{e_3}{A} &= \lscal{A}{e_1}{e_2}\,e_3\ ,\\
e_1^*\,\lscal{A}{e_2}{e_3} &= \rscal{e_1}{e_2}{A}\,e_3^*\ .
\end{align*}
Expressed graphically this means that associativity holds for the diagrams:
\begin{center}
\begin{tikzpicture}[->,>=stealth',node distance=1.3cm,scale=1]
\node (11) {$\bullet$};
\node (12) [right of=11] {$\bullet$};
\path (11) edge[bend left=90] node [above] {$e_1$} (12);
\path (12) edge node [above] {$e_2^*$} (11);
\path (11) edge[bend right=80] node [below] {$e_3$} (12);
\node (21) [right of=12] {$\bullet$};
\node (22) [right of=21] {$\bullet$};
\path (22) edge[bend right=90] node [above] {$e_1^*$} (21);
\path (21) edge node [above] {$e_2$} (22);
\path (22) edge[bend left=80] node [below] {$e_3^*$} (21);
\end{tikzpicture}
\end{center}
Using this relation, it follows that associativity holds in the cases $9$, $10$, $11$ and $12$ as well. Since the computations are slightly more involved than in the previous cases, we give the details for diagram $10$. Let $e_1^* \in \left(\mathcal{E}_-\right)_{\mathsf{g}_1}$, $e_2 \in \left(\mathcal{E}_{0,+}\right)_{\mathsf{g}_2}$ and $e_3^* \in \left(\mathcal{E}_-\right)_{\mathsf{g}_3}$. Let $e_2 = e_{21} \cdot e_{22}$ and $e_3^*= e_{31}^* \cdot e_{32}^* \cdot e_{33}^*$ be the maximal decompositions of $e_2$ and $e_3^*$. We have
\begin{align*}
(e_1^* \cdot e_2) \cdot e_3^* &= \lscal{A}{{\rscal{e_1}{e_{21}}{A}}\,e_{22}}{e_{31}}\,e_{32}^* \cdot e_{33}^* \\
e_1^* \cdot (e_2 \cdot e_3^*) &= e_1^*\,\lscal{A}{e_{21} \cdot e_{22}}{e_{32} \cdot e_{31}}\,e_{33}^*
\end{align*}
and by our considerations above we obtain:
\begin{align*}
e_1^*\,\lscal{A}{e_{21} \cdot e_{22}}{e_{32} \cdot e_{31}}\,e_{33}^* &= e_1^*\,\lscal{A}{e_{21}\,{\lscal{A}{e_{22}}{e_{31}}}}{e_{32}}e_{33}^* \\
&= \rscal{e_1}{e_{21}\,{\lscal{A}{e_{22}}{e_{31}}}}{A}\,e_{32}^*\cdot e_{33}^* \\
&= \rscal{e_1}{e_{21}}{A}\,\lscal{A}{e_{22}}{e_{31}}\,e_{32}^*\cdot e_{33}^* \\
&= \lscal{A}{{\rscal{e_1}{e_{21}}{A}}\,e_{22}}{e_{31}}\,e_{32}^* \cdot e_{33}^*
\end{align*}
The other cases are similar. This finishes the proof of associativity.
Thus, we obtain a Banach bundle $\mathcal{E} \to \mathcal{G}$ with a continuous, bilinear, associative multiplication $\mu$ and a compatible continuous conjugate linear involution $\ast \colon \mathcal{E} \to \mathcal{E}$. Our definition implies that
\[
e^* \cdot e = \rscal{e}{e}{A}
\]
for all $e \in \mathcal{E}$. This yields the $C^*$-norm condition $\lVert e^* \cdot e \rVert = \lVert \rscal{e}{e}{A}\rVert = \lVert e \rVert^2$ for all $e \in \mathcal{E}$, which also ensures that the norm is submultiplicative. Therefore $\mathcal{E} \to \mathcal{G}$ defines a saturated Fell bundle.
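For completeness, here is a short fibrewise sketch of the submultiplicativity claim; it uses only the relation above together with the standard $C^*$-algebra inequality $b^* \cdot a \cdot b \leq \lVert a \rVert\, b^* \cdot b$ for positive $a$:

```latex
\begin{align*}
\lVert e_1 \cdot e_2 \rVert^2
  &= \lVert (e_1 \cdot e_2)^* \cdot (e_1 \cdot e_2) \rVert
   = \lVert e_2^* \cdot \rscal{e_1}{e_1}{A} \cdot e_2 \rVert \\
  &\leq \lVert \rscal{e_1}{e_1}{A} \rVert \, \lVert e_2^* \cdot e_2 \rVert
   = \lVert e_1 \rVert^2 \, \lVert e_2 \rVert^2\ .
\end{align*}
```

Taking square roots gives $\lVert e_1 \cdot e_2 \rVert \leq \lVert e_1 \rVert\,\lVert e_2 \rVert$.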
To address the question about uniqueness, let $\mathcal{F} \to \mathcal{G}$ be another saturated Fell bundle that satisfies the conditions in the theorem. In particular, we have $\mathcal{F}_{0,+} = \mathcal{E}_{0,+}$. Therefore the involution yields a (linear!) isomorphism of Banach bundles
\[
\Theta \colon \mathcal{F}_{-} \to \left(\mathcal{E}_+\right)^\text{op} = \mathcal{E}_-\ .
\]
The relation $(e_1 \cdot e_2)^* = e_2^* \cdot e_1^*$ shows that $\Theta$ has to intertwine the restrictions $\mu_{\mathcal{F}_-} \colon \mathcal{F}_- \otimes_A \mathcal{F}_- \to \mathcal{F}_-$ and $\mu_{\mathcal{E}_-}$. The relation $\rscal{f_1^*}{f_2^*}{A} = \lscal{A}{f_1}{f_2}$ for all $f_1^*,f_2^* \in (\mathcal{F}_-)_{\mathsf{g}}$ implies that $\Theta$ preserves the inner product as well. We have already seen above that our extension of the multiplication map was forced upon us by associativity considerations, the fact that $\mu$ induces fibrewise bimodule isomorphisms, and the relation $f_1^* \cdot f_2 = \rscal{f_1}{f_2}{A}$ for $f_1, f_2 \in \mathcal{F}_{\mathsf{g}}$. Consequently, if we extend $\Theta$ by the identity on $\mathcal{F}_{0,+} = \mathcal{E}_{0,+}$, it yields an isomorphism of Fell bundles $\mathcal{F} \to \mathcal{E}$ over $\mathcal{G}$.
\end{proof}
\begin{remark}
Let $A$ be a unital separable $C^*$-algebra and let $\mathcal{MR}(A)$ be the $2$-groupoid that has $A$ as its single object, the $A$-$A$ equivalence bimodules as $1$-morphisms and bimodule isomorphisms as $2$-morphisms\footnote{The notation $\mathcal{MR}$ stands for Morita--Rieffel.}. The groupoid $\mathcal{G}$ is a $2$-groupoid with just identity $2$-morphisms. If we forget about the topology, then a saturated Fell bundle is a functor
\[
\mathcal{E} \colon \mathcal{G} \to \mathcal{MR}(A)\ ,
\]
whereas saturated half-bundles correspond to functors
\[
\mathcal{E} \colon \mathcal{G}_{0,+} \to \mathcal{MR}(A)\ .
\]
Theorem~\ref{thm:extension_of_Fell_bdls} can be rephrased by saying that the restriction functor
\[
\text{res} \colon \mathcal{F}un(\mathcal{G},\mathcal{MR}(A)) \to \mathcal{F}un(\mathcal{G}_{0,+},\mathcal{MR}(A))
\]
induced by the inclusion $\mathcal{G}_{0,+} \to \mathcal{G}$ is an isomorphism of functor categories. This seems to suggest that there should be a proof of Theorem~\ref{thm:extension_of_Fell_bdls} based on category theory. However, a first step would require identifying the right topology on $\mathcal{MR}(A)$ to obtain a bijection between saturated Fell bundles and continuous functors.
\end{remark}
Since we want to construct an equivariant Fell bundle from an equivariant half-bundle, we need to understand group actions on both structures. Restricting to the following kind of actions is natural in this context:
\begin{definition} \label{def:admissible_action}
Let $G$ be a compact group and let $\mathcal{G}$ be a groupoid with a decomposition as in \eqref{eqn:decomp} that has the properties \eqref{eqn:inv_pm} -- \eqref{eqn:m_zz}. We call an action of $G$ on $\mathcal{G}$ by groupoid automorphisms \emph{admissible} if it preserves the decomposition from \eqref{eqn:decomp}, i.e.\ each group element yields a homeomorphism $\mathcal{G}_- \to \mathcal{G}_-$ and similarly for $\mathcal{G}_0$ and $\mathcal{G}_+$, respectively.
\end{definition}
\begin{definition} \label{def:eq_sat_half_bundle}
Let $G$ be a compact group that acts admissibly on $\mathcal{G}$ and let $A$ be a separable unital $G$-algebra. A \emph{$G$-equivariant saturated half-bundle} is a saturated half-bundle $\mathcal{E}_{0,+}$ carrying a continuous $G$-action such that the projection map $\mathcal{E}_{0,+} \to \mathcal{G}_{0,+}$ is equivariant and the following properties hold:
\begin{enumerate}[a)]
\item On $\mathcal{E}_0 = \left.\mathcal{E}_{0,+}\right|_{\mathcal{G}_0} = \mathcal{G}_0 \times A$ the action restricts to the diagonal action of $G$ on $\mathcal{G}_0$ and $A$.
\item The multiplication map $\mu \colon \mathcal{E}_{0,+}^{(2)} \to \mathcal{E}_{0,+}$ is $G$-equi\-variant (where the domain is equipped with the diagonal $G$-action) and the inner product satisfies
\[
\rscal{g \cdot e_1}{g \cdot e_2}{A} = \alpha_g(\rscal{e_1}{e_2}{A})
\]
for all $g \in G$.
\end{enumerate}
\end{definition}
\begin{corollary} \label{cor:G-action_on_ext}
Suppose that $\mathcal{G}$ carries an admissible action by a compact group $G$. Let $(\mathcal{E}_{0,+}, \mu, \rscal{\,\cdot\,}{\,\cdot\,}{A})$ be a $G$-equivariant saturated half-bundle. Let $\mathcal{E}$ be the extension of $\mathcal{E}_{0,+}$ to a saturated Fell bundle as in Thm.~\ref{thm:extension_of_Fell_bdls}.
Then the $G$-action on $\mathcal{E}_{0,+}$ extends to a continuous $G$-action on $\mathcal{E}$ in such a way that the multiplication map and the projection $\pi \colon \mathcal{E}\to \mathcal{G}$ are equivariant and $g\cdot e^* = (g \cdot e)^*$ for all $e \in \mathcal{E}$ and $g \in G$. This extension is unique.
\end{corollary}
\begin{proof}
The condition $g \cdot e^* = (g \cdot e)^*$ uniquely fixes the group action on $\mathcal{E}_-$, and the action obtained in this way has all the properties stated in the corollary.
\end{proof}
\subsection{Construction of the Fell bundle over $SU(n)$} \label{subsec:Fell_bundle}
The groupoid $\mathcal{G}$ alluded to in the introduction to this section is now constructed as follows: Let $G = SU(n)$, let $\mathbb{T} \subset \mathbb{C}$ be the unit circle and let $Z = \mathbb{T} \setminus \{1\} \cong (0,1)$. For $g \in G$ denote the set of eigenvalues of $g$ (in its standard representation on $\mathbb{C}^n$) by $\EV{g}$. Let $Y$ be the space
\[
Y = \left\{ (g,z) \in G \times Z \ | \ z \notin \EV{g} \right\}\ .
\]
There is a canonical quotient map $\pi \colon Y \to G$. The groupoid $\mathcal{G}$ is now given by the fibre product $Y^{[2]}$ of $Y$ with itself over $G$, i.e.\
\[
\mathcal{G} = Y^{[2]} = \{ (y_1,y_2) \in Y \times Y\ |\ \pi(y_1) = \pi(y_2) \}
\]
equipped with the subspace topology\footnote{We will view an element $(g,z_1,z_2) \in Y^{[2]}$ as a morphism from $(g,z_2)$ to $(g,z_1)$. Thus, the composition is $(g,z_1,z_2) \cdot (g,z_2,z_3) = (g,z_1,z_3)$.}. Note that we can identify this space with
\[
Y^{[2]} = \left\{ (g,z_1,z_2) \in G \times Z \times Z \ | \ z_i \notin \EV{g} \text{ for } i \in \{1,2\} \right\}\ .
\]
Since $Z$ is homeomorphic to an open interval via $e \colon (0,1) \to Z$ with $e(\varphi) = \exp(2\pi i\,\varphi)$ we can equip it with a total ordering by defining $z_2 = e(\varphi_2) \geq z_1 = e(\varphi_1)$ if and only if $\varphi_2 \geq \varphi_1$. Now we can decompose $Y^{[2]}$ into disjoint subspaces $Y^{[2]} = Y^{[2]}_+ \cup Y^{[2]}_0 \cup Y^{[2]}_-$ with
\begin{align*}
Y^{[2]}_+ & = \{ (g,z_1,z_2) \in Y^{[2]}\ | \ z_2 > z_1 \text{ and } \exists \lambda \in \EV{g},\ z_2 > \lambda > z_1 \}\ , \\
Y^{[2]}_0 & = \{ (g,z_1,z_2) \in Y^{[2]}\ | \ \nexists \lambda \in \EV{g}, \max(z_1,z_2) > \lambda > \min(z_1,z_2) \}\ , \\
Y^{[2]}_- & = \{ (g,z_1,z_2) \in Y^{[2]}\ | \ z_2 < z_1 \text{ and } \exists \lambda \in \EV{g},\ z_2 < \lambda < z_1 \}\ .
\end{align*}
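To illustrate the decomposition, consider the following configuration (a purely illustrative example, not used in the sequel): take $n = 2$ and $g \in SU(2)$ with eigenvalues $e(\tfrac{1}{3})$ and $e(\tfrac{2}{3})$, where $e(\varphi) = \exp(2\pi i\,\varphi)$ as above. Then

```latex
\[
(g, e(\tfrac{1}{4}), e(\tfrac{1}{2})) \in Y^{[2]}_+\ , \qquad
(g, e(\tfrac{2}{5}), e(\tfrac{3}{5})) \in Y^{[2]}_0\ , \qquad
(g, e(\tfrac{1}{2}), e(\tfrac{1}{4})) \in Y^{[2]}_-\ ,
\]
```

since the eigenvalue $e(\tfrac{1}{3})$ lies strictly between $e(\tfrac{1}{4})$ and $e(\tfrac{1}{2})$, while no eigenvalue of $g$ lies strictly between $e(\tfrac{2}{5})$ and $e(\tfrac{3}{5})$.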
Fix a (continuous) exponential functor $(F,\tau,\iota)$ on $\mathcal{V}^{{\mathrm{iso}}}_{\C}$ as in Def.~\ref{def:exp_functor}. Consider the standard representation of $G$ on $\mathbb{C}^n$ and let $\mathsf{M}_{\mathnormal{F}} = \Endo{F(\mathbb{C}^n)}$. Let $\mathsf{M}_\mathnormal{F}^{\infty}$ be the UHF-algebra given by the infinite tensor product
\[
\mathsf{M}_\mathnormal{F}^{\infty} = \bigotimes_{i=1}^{\infty} \mathsf{M}_{\mathnormal{F}}\ .
\]
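For orientation: if $F$ is, for example, the full exterior algebra functor $\Lambda^*$ (a standard example of an exponential functor, with $\tau$ the canonical isomorphism $\Lambda^*(V \oplus W) \cong \Lambda^* V \otimes \Lambda^* W$), then these algebras become

```latex
\[
\mathsf{M}_{\mathnormal{F}} = \Endo{\Lambda^* \mathbb{C}^n} \cong M_{2^n}(\mathbb{C})\ , \qquad
\mathsf{M}_\mathnormal{F}^{\infty} \cong M_{2^{\infty}}\ ,
\]
```

i.e.\ $\mathsf{M}_\mathnormal{F}^{\infty}$ is the UHF-algebra of type $2^\infty$, isomorphic to the CAR algebra.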
We will construct a saturated half-bundle $\mathcal{E}_{0,+}$ over $\mathcal{G}_{0,+} = Y^{[2]}_+ \cup Y^{[2]}_0$ such that over $\mathcal{G}_0 = Y^{[2]}_0$ it coincides with the trivial $C^*$-algebra bundle
\[
\mathcal{E}_0 = \mathcal{G}_0 \times \mathsf{M}_\mathnormal{F}^{\infty}\ .
\]
To understand the fibre of $\mathcal{E}_{0,+}$ over $\mathcal{G}_+$, fix $g \in G$ and let $(g,z_1,z_2) \in \mathcal{G}_+$. Consider the following subspaces of $\mathbb{C}^n$:
\begin{gather*}
E(g,z_1,z_2) = \bigoplus_{\overset{z_1 < \lambda < z_2}{\lambda \in \EV{g}}} \Eig{g}{\lambda} \\
E^{\prec}(g,z_1) = \bigoplus_{\overset{\lambda < z_1}{\lambda \in \EV{g}}} \Eig{g}{\lambda} \quad , \quad
E^{\succ}(g,z_2) = \bigoplus_{\overset{z_2 < \lambda}{\lambda \in \EV{g}}} \Eig{g}{\lambda} \ .
\end{gather*}
Note that the natural transformation $\tau$ from the exponential functor turns the direct sum decomposition of $\mathbb{C}^n$ in the first line of \eqref{eqn:FCn_decomp} into the tensor product decomposition in the second line:
\begin{align} \label{eqn:FCn_decomp}
\mathbb{C}^n &= \Eig{g}{1} \oplus E^{\prec}(g,z_1) \oplus E(g,z_1,z_2) \oplus E^{\succ}(g,z_2) \\
F(\mathbb{C}^n) &\cong F(\Eig{g}{1}) \otimes F(E^{\prec}(g,z_1)) \otimes F(E(g,z_1,z_2)) \otimes F(E^{\succ}(g,z_2))\notag
\end{align}
Denote the corresponding endomorphism algebras of the tensor factors by
\begin{gather*}
\mathsf{M}_{\mathnormal{F}}(g, z_1, z_2) = \Endo{F(E(g,z_1,z_2))} \quad , \quad \mathsf{M}_{\mathnormal{F}}(g,1) = \Endo{F(\Eig{g}{1})}\\
\mathsf{M}_{\mathnormal{F}}^{\prec}(g,z_1) = \Endo{F(E^{\prec}(g,z_1))} \quad , \quad
\mathsf{M}_{\mathnormal{F}}^{\succ}(g,z_2) = \Endo{F(E^{\succ}(g,z_2))}\ .
\end{gather*}
Just as in \cite[Sec.~3]{MurrayStevenson-basic_gerbe:2008} it follows that the bundle $E \to \mathcal{G}_+$ with fibre $E(g,z_1,z_2)$ over $(g,z_1,z_2) \in \mathcal{G}_+$ is a locally trivial vector bundle, and therefore so is $F(E)$. Observe that the endomorphism bundle of $F(E)$ has fibre $\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2)$ over $(g,z_1,z_2) \in \mathcal{G}_+$. Our half-bundle $\mathcal{E}_{+}$ will be the locally trivial bundle of right Hilbert $\mathsf{M}_\mathnormal{F}^{\infty}$-modules with fibres
\[
\mathcal{E}_{(g,z_1,z_2)} = F(E(g,z_1,z_2)) \otimes \mathsf{M}_\mathnormal{F}^{\infty}
\]
where the right module structure is given by right multiplication on $\mathsf{M}_\mathnormal{F}^{\infty}$. For $(g,z_1,z_2), (g,z_2,z_3) \in \mathcal{G}_+$ the transformation $\tau$ induces a $*$-isomorphism of the form
\begin{equation} \label{eqn:MF_tensor_prod}
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}(g,z_2,z_3) \to \mathsf{M}_{\mathnormal{F}}(g,z_1,z_3)
\end{equation}
To define the multiplication on the fibres of $\mathcal{E}_+$ we need the next lemma.
\begin{lemma} \label{lem:associative_isos_MF}
There is an isomorphism
\(
\varphi_{g,z_1,z_2} \colon \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \to \mathsf{M}_\mathnormal{F}^{\infty}
\)
(constructed in the proof) which is associative in the sense that for $(g,z_1,z_2),$ $(g,z_2,z_3) \in \mathcal{G}_+$ the following diagram commutes:
\[
\begin{tikzcd}[column sep=2.2cm]
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}(g,z_2,z_3) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[d] \ar[r,"\id{} \otimes \varphi_{g,z_2,z_3}"] & \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[d,"\varphi_{g,z_1,z_2}"] \\
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_3) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[r,"\varphi_{g,z_1,z_3}"] & \mathsf{M}_\mathnormal{F}^{\infty}
\end{tikzcd}
\]
where the vertical arrow on the left is the isomorphism from \eqref{eqn:MF_tensor_prod}.
\end{lemma}
\begin{proof}
To construct $\varphi_{g,z_1,z_2}$ first note that the decomposition \eqref{eqn:FCn_decomp} yields a corresponding decomposition of the algebra $\mathsf{M}_{\mathnormal{F}}$, which we will also denote by $\tau$ by a slight abuse of notation:
\begin{equation}
\begin{tikzcd} \label{eqn:tau}
\mathsf{M}_{\mathnormal{F}} \ar[r,"\tau" above, "\cong" below] & \mathsf{M}_{\mathnormal{F}}(g,1) \otimes \mathsf{M}_{\mathnormal{F}}^{\prec}(g,z_1) \otimes \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\succ}(g,z_2)\ .
\end{tikzcd}
\end{equation}
Let $\mathsf{M}_{\mathnormal{F}}^{1,\prec}(g,z_1) = \mathsf{M}_{\mathnormal{F}}(g,1) \otimes \mathsf{M}_{\mathnormal{F}}^{\prec}(g,z_1)$ and define
\[
\varphi_{g,z_1,z_2}^{(k)} \colon \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\otimes k} \to \mathsf{M}_{\mathnormal{F}}^{\otimes (k+1)}
\]
to be the following composition
\[
\begin{tikzcd}
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\otimes k} \ar[d,"\id{} \otimes \tau^{\otimes k}"] \\
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes (\mathsf{M}_{\mathnormal{F}}^{1,\prec}(g,z_1) \otimes \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\succ}(g,z_2))^{\otimes k} \ar[d,"\alpha^{(k)}_{g,z_1,z_2}"] \\
(\mathsf{M}_{\mathnormal{F}}^{1,\prec}(g,z_1) \otimes \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\succ}(g,z_2))^{\otimes (k+1)} \ar[d,"(\tau^{-1})^{\otimes (k+1)}"] \\
\mathsf{M}_{\mathnormal{F}}^{\otimes (k+1)}
\end{tikzcd}
\]
with the homomorphism $\alpha^{(k)}_{g,z_1,z_2}$ given by
\begin{align*}
& \alpha^{(k)}_{g,z_1,z_2}(T \otimes (A_1 \otimes B_1 \otimes C_1) \otimes \dots \otimes (A_k \otimes B_k \otimes C_k)) \\
=\ & (A_1 \otimes T \otimes C_1) \otimes (A_2 \otimes B_1 \otimes C_2) \otimes \dots \otimes (A_k \otimes B_{k-1} \otimes C_k) \otimes (1 \otimes B_k \otimes 1)
\end{align*}
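To unwind the index shift, here is the lowest case $k = 1$, read off directly from the formula above:

```latex
\[
\alpha^{(1)}_{g,z_1,z_2}\big(T \otimes (A_1 \otimes B_1 \otimes C_1)\big)
  = (A_1 \otimes T \otimes C_1) \otimes (1 \otimes B_1 \otimes 1)\ ,
\]
```

so $T$ is inserted into the middle slot of the first factor, and the displaced $B_1$ moves one factor to the right.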
The homomorphism $\varphi^{(k)}_{g,z_1,z_2}$ fits into the following commutative diagram
\[
\begin{tikzcd}[column sep=2.8cm, row sep=1.5cm]
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\otimes k} \ar[r,"A \otimes B \mapsto A \otimes B \otimes 1"] \ar[d,"\varphi^{(k)}_{g,z_1,z_2}"] & \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\otimes (k+1)} \ar[d,"\varphi^{(k+1)}_{g,z_1,z_2}"] \\
\mathsf{M}_{\mathnormal{F}}^{\otimes (k+1)} \ar[r,"B \mapsto B \otimes 1"] \ar[ur,"\psi^{(k+1)}_{g,z_1,z_2}"] & \mathsf{M}_{\mathnormal{F}}^{\otimes (k+2)}
\end{tikzcd}
\]
where $\psi^{(k)}_{g,z_1,z_2}$ is a homomorphism constructed analogously to $\varphi^{(k)}_{g,z_1,z_2}$ by conjugating the homomorphism
\[
\begin{tikzcd}
(\mathsf{M}_{\mathnormal{F}}^{1,\prec}(g,z_1) \otimes \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\succ}(g,z_2))^{\otimes k} \ar[d,"\beta^{(k)}_{g,z_1,z_2}"] \\
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes (\mathsf{M}_{\mathnormal{F}}^{1,\prec}(g,z_1) \otimes \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\succ}(g,z_2))^{\otimes k}
\end{tikzcd}
\]
given by
\begin{align*}
& \beta^{(k)}_{g,z_1,z_2}((A_1 \otimes B_1 \otimes C_1) \otimes \dots \otimes (A_k \otimes B_k \otimes C_k)) \\
=\ & B_1 \otimes (A_1 \otimes B_2 \otimes C_1) \otimes \dots \otimes (A_{k-1} \otimes B_k \otimes C_{k-1}) \otimes (A_k \otimes 1 \otimes C_k)
\end{align*}
with the corresponding tensor products of $\tau$. We define the homomorphisms
\begin{align*}
\varphi_{g,z_1,z_2} & \colon \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\infty} \to \mathsf{M}_{\mathnormal{F}}^{\infty} \\
\psi_{g,z_1,z_2} & \colon \mathsf{M}_{\mathnormal{F}}^{\infty} \to \mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \otimes \mathsf{M}_{\mathnormal{F}}^{\infty}
\end{align*}
as the ones induced by $\varphi^{(k)}_{g,z_1,z_2}$ and $\psi^{(k)}_{g,z_1,z_2}$ on the colimits. The diagram above shows that $\varphi_{g,z_1,z_2}$ and $\psi_{g,z_1,z_2}$ are inverse to each other. The associativity condition stated above can be seen from the colimit of the following commutative diagram
\[
\begin{tikzcd}[column sep=2.7cm]
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \!\otimes\! \mathsf{M}_{\mathnormal{F}}(g,z_2,z_3) \!\otimes\! \mathsf{M}_{\mathnormal{F}}^{\otimes k} \ar[r,"\id{}\!\otimes\varphi^{(k)}_{g,z_2,z_3}"] \ar[d] &
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) \!\otimes\! \mathsf{M}_{\mathnormal{F}}^{\otimes (k+1)} \ar[d,"\varphi^{(k+1)}_{g,z_1,z_2}"] \\
\mathsf{M}_{\mathnormal{F}}(g,z_1,z_3) \!\otimes\! \mathsf{M}_{\mathnormal{F}}^{\otimes (k+1)} \ar[r,"\varphi^{(k+1)}_{g,z_1,z_3}"] &
\mathsf{M}_{\mathnormal{F}}^{\otimes (k+2)}
\end{tikzcd}
\]
in which the vertical arrow on the left is the map from \eqref{eqn:MF_tensor_prod} tensored with the homomorphism $A \mapsto A \otimes 1$.
\end{proof}
\begin{corollary} \label{cor:phi_plus}
Let $E \to \mathcal{G}_+$ be the vector bundle with fibre $E(g,z_1,z_2)$ over $(g,z_1,z_2) \in \mathcal{G}_+$. The isomorphisms $\varphi_{g,z_1,z_2}$ constructed in Lemma~\ref{lem:associative_isos_MF} yield a continuous isomorphism of $C^*$-algebra bundles of the form
\[
\varphi_+ \colon \Endo{F(E)} \otimes \mathsf{M}_\mathnormal{F}^{\infty} \to \mathcal{G}_+ \times \mathsf{M}_\mathnormal{F}^{\infty} \ .
\]
\end{corollary}
\begin{proof}
Since $\mathsf{M}_{\mathnormal{F}}(g,z_1,z_2) = \Endo{F(E(g,z_1,z_2))}$, the isomorphisms from Lemma~\ref{lem:associative_isos_MF} indeed piece together to give a map $\varphi_+$ as described in the statement. Therefore the only point left to prove is the continuity of $\varphi_+$. First observe that $E$ is by definition a subbundle of the trivial bundle $\mathcal{G}_+ \times \mathbb{C}^n$. Its orthogonal complement $E^{\perp}$ is a locally trivial vector bundle as well. By continuity of $F$ we obtain locally trivial bundles $F(E)$ and $F(E^\perp)$. Let $\mathsf{M}_\mathnormal{F}^{\infty}(E)$, respectively $\mathsf{M}_\mathnormal{F}^{\infty}(E^\perp)$, be the $C^*$-algebra bundles with UHF-algebra fibres obtained as the fibrewise infinite tensor products of $\Endo{F(E)}$, respectively $\Endo{F(E^\perp)}$, and note that the $*$-homomorphism $\tau$ from \eqref{eqn:tau} translates into a continuous isomorphism of $C^*$-algebra bundles
\[
\begin{tikzcd}[column sep=2cm]
\mathcal{G}_+ \times \mathsf{M}_\mathnormal{F}^{\infty} \ar[r,"\tau"] & \mathsf{M}_\mathnormal{F}^{\infty}(E) \otimes \mathsf{M}_\mathnormal{F}^{\infty}(E^\perp)
\end{tikzcd}
\]
The maps $\alpha^{(k)}_{g,z_1,z_2}$ from Lemma~\ref{lem:associative_isos_MF} induce another continuous isomorphism of $C^*$-algebra bundles:
\[
\alpha^{\infty} \colon \Endo{F(E)} \otimes \mathsf{M}_\mathnormal{F}^{\infty}(E) \otimes \mathsf{M}_\mathnormal{F}^{\infty}(E^\perp) \to \mathsf{M}_\mathnormal{F}^{\infty}(E) \otimes \mathsf{M}_\mathnormal{F}^{\infty}(E^\perp)
\]
which shifts $\Endo{F(E)}$ into the tensor factor $\mathsf{M}_\mathnormal{F}^{\infty}(E)$. By definition $\varphi_+$ is obtained by conjugating $\alpha^{\infty}$ by $\tau$ and therefore is continuous.
\end{proof}
Let $E \to \mathcal{G}_+$ be the vector bundle from Cor.~\ref{cor:phi_plus}. Let $\mathcal{E}_0 = \mathcal{G}_0 \times \mathsf{M}_\mathnormal{F}^{\infty}$,
\[
\mathcal{E}_+ = F(E) \otimes \mathsf{M}_\mathnormal{F}^{\infty}
\]
and let $\mathcal{E}_{0,+} = \mathcal{E}_0 \cup \mathcal{E}_+$. This is a locally trivial bundle of full right Hilbert $\mathsf{M}_\mathnormal{F}^{\infty}$-modules, where $\mathsf{M}_\mathnormal{F}^{\infty}$ acts by right multiplication on itself. The bundle of compact adjointable right $\mathsf{M}_\mathnormal{F}^{\infty}$-linear operators on $\mathcal{E}_+$ agrees with
\[
\Endo{F(E)} \otimes \mathsf{M}_\mathnormal{F}^{\infty}\ ,
\]
which we can identify with $\mathsf{M}_\mathnormal{F}^{\infty}$ using $\varphi_+$ to define a left $\mathsf{M}_\mathnormal{F}^{\infty}$-module structure on the fibres of $\mathcal{E}_+$ given by $a \cdot (\xi \otimes b) := \varphi_+^{-1}(a)(\xi \otimes b)$. To turn $\mathcal{E}_{0,+}$ into a saturated half-bundle we need to equip it with a bilinear and associative multiplication $\mu$. On $\mathcal{E}_+$ we define $\mu$ by the following diagram:
\[
\begin{tikzcd}[column sep=0.1cm]
(F(E_{g,z_1,z_2}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}) \otimes_{\mathsf{M}_\mathnormal{F}^{\infty}} (F(E_{g,z_2,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}) \ar[r,"\mu"] \ar[d, "\kappa" left, "\cong" right] & F(E_{g,z_1,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \\
F(E_{g,z_1,z_2}) \otimes F(E_{g,z_2,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[r,"\tau \otimes \id{}" below, "\cong" above] & F(E_{g,z_1,z_2} \oplus E_{g,z_2,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[u,equal]
\end{tikzcd}
\]
where the map $\kappa$ is given by
\(
\kappa((\xi \otimes a) \otimes (\eta \otimes b)) = \xi \otimes a\cdot (\eta \otimes b)\ .
\)
This is an isomorphism with inverse $\xi \otimes \eta \otimes a \mapsto (\xi \otimes 1) \otimes (\eta \otimes a)$. Let
\[
\ell_{z_i,z_j} \colon \mathsf{M}_\mathnormal{F}^{\infty} \otimes F(E_{z_i,z_j}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \to F(E_{z_i,z_j}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}
\]
be defined by left multiplication. The associativity condition in Lemma~\ref{lem:associative_isos_MF} implies that the following diagram commutes:
\[
\begin{tikzcd}[column sep=1.5cm]
\mathsf{M}_\mathnormal{F}^{\infty} \otimes F(E_{z_1,z_2}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \otimes F(E_{z_2,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[r,"\circled{1}"] \ar[d,"\ell_{z_1,z_2} \otimes \id{}" left] & \mathsf{M}_\mathnormal{F}^{\infty} \otimes F(E_{z_1,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[d,"\ell_{z_1,z_3}"] \\
F(E_{z_1,z_2}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \otimes F(E_{z_2,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \ar[r,"(\tau \otimes \id{}) \circ (\id{}\! \otimes \ell_{z_2,z_3})" below] & F(E_{z_1,z_3}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}
\end{tikzcd}
\]
where we eliminated the group element $g$ from the notation for clarity, and where the map $\circled{1}$ is given by $\id{\mathsf{M}_\mathnormal{F}^{\infty}} \otimes ((\tau \otimes \id{\mathsf{M}_\mathnormal{F}^{\infty}}) \circ (\id{F(E_{z_1,z_2})} \otimes \,\ell_{z_2,z_3}))$. As a consequence we obtain that the multiplication $\mu$ is associative on $\mathcal{E}_+$.
On $\mathcal{E}_0$ we define the multiplication by the composition of the groupoid elements and the multiplication in $\mathsf{M}_\mathnormal{F}^{\infty}$, which is clearly bilinear and associative. If $(g,z_1,z_2) \in \mathcal{G}_+$ and $(g,z_2,z_3) \in \mathcal{G}_0$, then by definition $E(g,z_1,z_2)$ and $E(g,z_1,z_3)$ agree. This identification and the right multiplication by $\mathsf{M}_\mathnormal{F}^{\infty}$ defines the multiplication on elements from the set
\[
\{ (e_1,e_2) \in \mathcal{E}_{0,+}^{2}\ |\ \pi(e_1) \in \mathcal{G}_+, \pi(e_2) \in \mathcal{G}_0, (\pi(e_1),\pi(e_2)) \in \mathcal{G}^{(2)} \}\ .
\]
Using the left multiplication by $\mathsf{M}_\mathnormal{F}^{\infty}$ we can also extend $\mu$ over
\[
\{ (e_1,e_2) \in \mathcal{E}_{0,+}^{2}\ |\ \pi(e_1) \in \mathcal{G}_0, \pi(e_2) \in \mathcal{G}_+, (\pi(e_1),\pi(e_2)) \in \mathcal{G}^{(2)} \}
\]
in an analogous way. The resulting multiplication map is still associative. The fibrewise inner products on the right Hilbert $A$-modules yield a global continuous inner product, i.e.\ for $\xi \otimes a,\ \eta \otimes b \in F(E(g,z_1,z_2)) \otimes \mathsf{M}_\mathnormal{F}^{\infty}$ we define
\[
\rscal{\xi \otimes a}{\eta \otimes b}{A} = \left(\rscal{\xi}{\eta}{\mathbb{C}}\,a^*b, (g,z_2)\right)\ .
\]
This ensures that all of the properties in Def.~\ref{def:sat_half_bundle} b) hold. The multiplication also satisfies Def.~\ref{def:sat_half_bundle} c) by construction. Thus, we have proven:
\begin{lemma} \label{lem:cE_is_a_half_bundle}
The triple $(\mathcal{E}_{0,+}, \mu, \rscal{\,\cdot\,}{\,\cdot\,}{A})$ constructed above is a saturated half-bundle.
\end{lemma}
\subsection{The group action on $\mathcal{E}_{0,+}$} \label{subsec:groupaction}
The group $G = SU(n)$ acts on $\mathcal{G} = Y^{[2]}$ by conjugation, i.e.\ for $h \in G$ and $(g,z_1,z_2) \in \mathcal{G}$ we define
\[
h\cdot (g,z_1,z_2) = (hgh^{-1}, z_1, z_2)\ .
\]
Observe that conjugation is a group automorphism and does not change the set of eigenvalues. Therefore this action is admissible in the sense of Def.~\ref{def:admissible_action}. Let $(g,z_1,z_2) \in \mathcal{G}_{0,+}$. Any element $h \in G$ defines an isomorphism
\[
h \colon E(g,z_1,z_2) \to E(hgh^{-1},z_1,z_2) \quad , \quad \xi \mapsto h\xi\ ,
\]
where $h$ acts on $\Eig{g}{\lambda} \subset \mathbb{C}^n$ using the standard representation of $SU(n)$. The exponential functor $F$ turns this into a unitary isomorphism
\[
F(h) \colon F(E(g,z_1,z_2)) \to F(E(hgh^{-1},z_1,z_2))\ .
\]
The naturality of the structure isomorphism $\tau$ of $F$ ensures that the following diagram commutes:
\[
\begin{tikzcd}
F(E(g,z_1,z_2)) \otimes F(E(g,z_2,z_3)) \ar[d,"F(h) \otimes F(h)" left] \ar[r,"\tau"] & F(E(g,z_1,z_3)) \ar[d,"F(h)"] \\
F(E(hgh^{-1},z_1,z_2)) \otimes F(E(hgh^{-1},z_2,z_3)) \ar[r,"\tau" below] & F(E(hgh^{-1},z_1,z_3))
\end{tikzcd}
\]
Similarly, $G$ acts by conjugation on $\mathsf{M}_{\mathnormal{F}}$ and therefore also on the infinite tensor product $\mathsf{M}_\mathnormal{F}^{\infty}$. Denote this action by $\alpha \colon G \to
\Aut{\mathsf{M}_\mathnormal{F}^{\infty}}$. This turns $\mathsf{M}_\mathnormal{F}^{\infty}$ into a $G$-algebra. Combining $F(h)$ and $\alpha$ we obtain isomorphisms of $A$-$A$ bimodules
\begin{equation} \label{eqn:G-action_on_fibres}
F(E(g,z_1,z_2)) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \to F(E(hgh^{-1},z_1,z_2)) \otimes \mathsf{M}_\mathnormal{F}^{\infty}
\end{equation}
inducing a continuous action of $G$ on $\mathcal{E}_{0,+}$ covering the action of $G$ on $\mathcal{G}_{0,+}$.
\begin{lemma} \label{lem:associative_isos_equiv}
Let $E \to \mathcal{G}_{+}$ be the vector bundle with fibre $E(g,z_1,z_2)$ over $(g,z_1,z_2) \in \mathcal{G}_+$. The isomorphism $\varphi_+ \colon \Endo{F(E)} \otimes \mathsf{M}_\mathnormal{F}^{\infty} \to \mathcal{G}_+ \times \mathsf{M}_\mathnormal{F}^{\infty}$ constructed in Cor.~\ref{cor:phi_plus} is $G$-equivariant (where $h \in G$ acts on $\Endo{F(E)}$ via $\mathrm{Ad}_{F(h)}$ and on $\mathsf{M}_\mathnormal{F}^{\infty}$ via $\alpha_h$). In particular, \eqref{eqn:G-action_on_fibres} is an isomorphism of bimodules and $\mu$ from Lem.~\ref{lem:cE_is_a_half_bundle} is $G$-equivariant.
\end{lemma}
\begin{proof}
We use the notation introduced in Cor.~\ref{cor:phi_plus}. The action of $h \in SU(n)$ maps the eigenspace $\Eig{g}{\lambda}$ unitarily onto $\Eig{hgh^{-1}}{\lambda}$. This induces the given action of $G$ on $E$ and another unitary action of $G$ on $E^\perp$ in such a way that $\mathcal{G}_+ \times \mathbb{C}^n = E \oplus E^\perp$ is an equivariant direct sum decomposition. With respect to the induced actions on $\mathsf{M}_\mathnormal{F}^{\infty}(E)$ and $\mathsf{M}_\mathnormal{F}^{\infty}(E^\perp)$ the isomorphism $\tau \colon \mathcal{G}_+ \times \mathsf{M}_\mathnormal{F}^{\infty} \to \mathsf{M}_\mathnormal{F}^{\infty}(E) \otimes \mathsf{M}_\mathnormal{F}^{\infty}(E^\perp)$ from Cor.~\ref{cor:phi_plus} is $G$-equivariant. Since $G$ acts in the same way on each tensor factor of the infinite tensor product $\mathsf{M}_\mathnormal{F}^{\infty}(E)$ the shift isomorphism $\alpha^{\infty}$ from Cor.~\ref{cor:phi_plus} is equivariant as well. But these are the building blocks of $\varphi_+$. Thus, this implies the statement.
\end{proof}
Combining Thm.~\ref{thm:extension_of_Fell_bdls} and Cor.~\ref{cor:G-action_on_ext} we obtain the main result of this section:
\begin{corollary} \label{cor:Fell_bundle}
The triple $(\mathcal{E}_{0,+}, \mu, \rscal{\,\cdot\,}{\,\cdot\,}{A})$ together with the $G$-action defined above is a $G$-equivariant saturated half-bundle in the sense of Def.~\ref{def:eq_sat_half_bundle}. In particular, there is a saturated Fell bundle $\pi \colon \mathcal{E} \to \mathcal{G}$ with the properties
\begin{enumerate}[a)]
\item $\left.\mathcal{E}\right|_{\mathcal{G}_{0,+}} = \mathcal{E}_{0,+}$,
\item $e_1^* \cdot e_2 = \rscal{e_1}{e_2}{A}$ for all $e_1,e_2\in \mathcal{E}$ lying in the same fibre,
\item the group $G = SU(n)$ acts continuously on $\mathcal{E}$ such that $\pi \colon \mathcal{E} \to \mathcal{G}$ is equivariant and $g \cdot e^* = (g \cdot e)^*$.
\end{enumerate}
The Fell bundle $\mathcal{E}$ is unique up to isomorphism.
\end{corollary}
\begin{remark}
Note that for $F = \@ifnextchar^\@extp{\@extp^{\,}}^{\text{top}}$ the algebra $\mathsf{M}_\mathnormal{F}^{\infty}$ agrees with $\mathbb{C}$ and $F(E)$ is the determinant line bundle of $E$. Moreover, the fibre $(\mathcal{E}_{-})_{(g,z_1,z_2)}$ can be identified with $(\mathcal{E}_{+}^*)_{(g,z_2,z_1)}$. Thus, our definition generalises the equivariant basic gerbe as constructed in \cite{MurrayStevenson-basic_gerbe:2008}.
\end{remark}
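In this determinant case the structure isomorphism $\tau \colon F(V) \otimes F(W) \to F(V \oplus W)$ is nothing but the multiplicativity of the determinant on block sums, and $\mathsf{M}_{\mathnormal{F}} = \Endo{F(\mathbb{C}^d)} \cong \mathbb{C}$ because $F(\mathbb{C}^d)$ is one-dimensional. A toy numerical illustration (numpy; purely for orientation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Endomorphisms of V = C^3 and W = C^2.
a = rng.normal(size=(3, 3))
b = rng.normal(size=(2, 2))

# F = Lambda^top sends an endomorphism to multiplication by its
# determinant, and tau identifies F(V) (x) F(W) with F(V (+) W).
block = np.block([[a, np.zeros((3, 2))],
                  [np.zeros((2, 3)), b]])
lhs = np.linalg.det(a) * np.linalg.det(b)   # action on F(V) (x) F(W)
rhs = np.linalg.det(block)                  # action on F(V (+) W)
determinant_multiplicative = np.isclose(lhs, rhs)
```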
\section{The $C^*$-algebra associated to $\mathcal{E}$}
In this section we will review the construction of the $C^*$-algebra $C^*(\mathcal{E})$ associated to the Fell bundle $\mathcal{E}$. A priori there are several $C^*$-completions of the section algebra of $\mathcal{E}$, but an amenability argument shows that all of them have to agree. We will also see that $C^*(\mathcal{E})$ is a continuous $C(G)$-algebra, which is stably isomorphic to a section algebra of a locally trivial bundle of $C^*$-algebras. As a consequence we obtain a Mayer-Vietoris sequence in equivariant $K$-theory.
We start by reviewing the construction of the reduced $C^*$-algebra associated to the Fell bundle $\mathcal{E}$. Let $A = C_0(Y, \mathsf{M}_\mathnormal{F}^{\infty})$. We can equip the space of compactly supported sections $C_c(Y^{[2]},\mathcal{E})$ with an $A$-valued inner product as follows:
\[
\rscal{\sigma}{\tau}{A}(g,z) = \int_{\mathbb{T} \setminus \{1\}} \sigma(g,w,z)^* \cdot \tau(g,w,z)\,dw\ ,
\]
where $\sigma, \tau \in C_c(Y^{[2]},\mathcal{E})$, the dot denotes the Fell bundle multiplication and we used the Lebesgue measure on $\mathbb{T} \setminus \{1\}$ with respect to which the subset $\EV{g} \cap (\mathbb{T} \setminus \{1\})$ is of measure zero. The space $C_c(Y^{[2]},\mathcal{E})$ also carries a natural right $A$-action given for $a \in A$ and $\sigma \in C_c(Y^{[2]},\mathcal{E})$ by
\[
(\sigma \cdot a)(g,z_1,z_2) = \sigma(g,z_1,z_2) \cdot a(g,z_2)\ .
\]
Denote by $L^2(\mathcal{E})$ the completion of $C_c(Y^{[2]},\mathcal{E})$ to a right Hilbert $A$-module with respect to the norm
\[
\lVert \sigma \rVert^2 = \sup_{(g,z) \in Y} \lVert\rscal{\sigma}{\sigma}{A}(g,z)\rVert\ .
\]
The space $C_c(Y^{[2]},\mathcal{E})$ can also be equipped with a convolution product, which assigns to $\sigma, \tau \in C_c(Y^{[2]},\mathcal{E})$ the section
\[
(\sigma \ast \tau)(g,z_1,z_2) = \int_{\mathbb{T} \setminus \{1\}} \sigma(g,z_1,w) \cdot \tau(g,w,z_2)\,dw\ .
\]
Likewise, the $*$-operation on the Fell bundle induces an involution that maps $\sigma \in C_c(Y^{[2]},\mathcal{E})$ to
\(
\sigma^*(g,z_1,z_2) = \sigma(g,z_2,z_1)^*
\) and we have
\[
\rscal{\sigma \ast \tau_1}{\tau_2}{A} = \rscal{\tau_1}{\sigma^* \ast \tau_2}{A}\ ,
\]
i.e.\ convolution by $\sigma$ is an adjointable and hence bounded operator on $L^2(\mathcal{E})$. Let $\rbdd{A}{L^2(\mathcal{E})}$ be the adjointable right $A$-linear operators on the Hilbert $A$-module $L^2(\mathcal{E})$. By the above considerations we obtain a well-defined $*$-homomorphism
\[
C_c(Y^{[2]},\mathcal{E}) \to \rbdd{A}{L^2(\mathcal{E})}\ .
\]
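To get a feeling for this representation, note that after discretising the circle the convolution product becomes a Riemann-sum matrix product of operator-valued kernels. The following toy sketch (numpy; scalar fibres, with all bundle structure and the measure-zero subtleties suppressed) checks associativity of the convolution and the anti-multiplicativity of the involution:

```python
import numpy as np

rng = np.random.default_rng(2)

# A "section" over a discretised circle with N points is a kernel
# sigma[i, j]; the convolution (sigma * tau)(z1, z2) =
# int sigma(z1, w) tau(w, z2) dw becomes a matrix product scaled by
# the step size dw = 1 / N.
N = 50
dw = 1.0 / N

def conv(s, t):
    return s @ t * dw

def star(s):
    # (sigma^*)(z1, z2) = sigma(z2, z1)^*
    return s.conj().T

s = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
t = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
u = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

associative = np.allclose(conv(conv(s, t), u), conv(s, conv(t, u)))
antihomomorphism = np.allclose(star(conv(s, t)), conv(star(t), star(s)))
```

In this discrete picture the kernels just generate a matrix algebra, which already hints at the stabilised form of $C^*(\mathcal{E})$ obtained further below.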
\begin{definition} \label{def:red_CStar_of_E}
We define $C^*_r(\mathcal{E})$ to be the $C^*$-algebra obtained as the norm-closure of $C_c(Y^{[2]},\mathcal{E})$ in $\rbdd{A}{L^2(\mathcal{E})}$. It is called the \emph{reduced $C^*$-algebra associated to the Fell bundle $\mathcal{E}$}.
\end{definition}
Denote by $C_{\text{max}}^*(\mathcal{E})$ the maximal cross-sectional $C^*$-algebra of $\mathcal{E}$. The grou\-poid $\mathcal{G} = Y^{[2]}$ is equivalent in the sense of Renault to the trivial groupoid
\[
\begin{tikzpicture}
\node (G1) at (0,0) {$G$};
\node (G2) at (1.5,0) {$G$};
\draw[-latex] ([yshift=2]G1.east) -- ([yshift=2]G2.west) node[midway,above] {\scriptsize id};
\draw[-latex] ([yshift=-2]G1.east) -- ([yshift=-2]G2.west)node[midway,below] {\scriptsize id};
\end{tikzpicture}
\]
which is (topologically) amenable. Since amenability is preserved by equivalence, the same is true for $Y^{[2]}$. Therefore \cite[Thm.~1]{SimsWilliams-Amenability:2013} implies that the reduced and the universal norm agree on $C_c(Y^{[2]},\mathcal{E})$ and thus $C_{\text{max}}^*(\mathcal{E}) \cong C^*_r(\mathcal{E})$. Hence, we will drop the subscript from now on and write $C^*(\mathcal{E})$ for this $C^*$-algebra.
Observe that $C^*(\mathcal{E})$ carries a continuous $G$-action defined on sections $\sigma \in C_c(Y^{[2]},\mathcal{E})$ by
\[
(g \cdot \sigma)(h,z_1,z_2) = g \cdot \sigma(g^{-1}hg, z_1,z_2)\ .
\]
It is also a $C(G)$-algebra in a natural way via the action that is defined on sections $\sigma \in C_c(Y^{[2]},\mathcal{E})$ with $f \in C(G)$ as follows
\[
(f\cdot \sigma)(g,z_1,z_2) = f(g)\sigma(g,z_1,z_2)\ .
\]
Note that this is indeed central and therefore provides a $*$-homomorphism $C(G) \to Z(M(C^*(\mathcal{E})))$.
\begin{lemma} \label{lem:C(G)-algebra}
The multiplication by elements in $C(G)$ defined above turns $C^*(\mathcal{E})$ into a continuous $C(G)$-algebra. For $g \in G$ let $Y^{[2]}_g$ be the subgroupoid defined by
\[
Y^{[2]}_g = \left(\pi^{-1}(g)\right)^{[2]}
\]
and let $\mathcal{E}_g = \left. \mathcal{E} \right|_{Y^{[2]}_g}$. Then the fibre of $C^*(\mathcal{E})$ over $g$ is given by $C^*(\mathcal{E}_g)$.
\end{lemma}
\begin{proof}
We can identify $G$ with the orbit space of the action of $Y^{[2]}$ on $Y$. Thus, \cite[Cor.~10]{SimsWilliams-Amenability:2013} implies that $C^*(\mathcal{E})$ is a $C(G)$-algebra with fibres $C^*(\mathcal{E}_g)$. The only statement left to show is that $g \mapsto \lVert a_g \rVert$ is lower semi-continuous for every $a \in C^*(\mathcal{E})$, where $a_g$ denotes the image of $a$ in $C^*(\mathcal{E}_g)$. Without loss of generality we may assume that $a$ is a section $\sigma \in C_c(Y^{[2]},\mathcal{E})$. Let $g \in G$ and $\epsilon > 0$. Denote by $\tau_g \in L^2(\mathcal{E}_g)$ the restriction of $\tau \in L^2(\mathcal{E})$. Note that
\[
\lVert \tau_g \rVert_{L^2(\mathcal{E}_g)}^2 = \sup_{z \in \mathbb{T}}\,\lVert \rscal{\tau}{\tau}{A}(g,z) \rVert\ .
\]
Take $\tau \in L^2(\mathcal{E})$ with $\lVert \tau \rVert_{L^2(\mathcal{E})} = 1$, $\lVert \tau_g \rVert_{L^2(\mathcal{E}_g)} = 1$ and
\[
\lVert (\sigma \ast \tau)_g \rVert_{L^2(\mathcal{E}_g)} \geq \lVert \sigma_g \rVert_{C^*(\mathcal{E}_g)} - \frac{\epsilon}{2}\ .
\]
Since the inner product on $L^2(\mathcal{E})$ takes values in $A = C_0(Y, \mathsf{M}_\mathnormal{F}^{\infty})$, the function $f \colon Y \to \mathbb{R}$ given by
\(
f(h,z) = \lVert \rscal{\sigma \ast \tau}{\sigma \ast \tau}{A}(h,z) \rVert
\)
is continuous and extends to $G \times \mathbb{T}$. Since $\mathbb{T}$ is compact, there is $z_0 \in \mathbb{T}$ with $(g,z_0) \in Y$ and $f(g,z_0) = \sup_{z \in \mathbb{T}} f(g,z)$. By continuity of $h \mapsto f(h,z_0)$ there is an open neighbourhood $U$ of $g$ such that for all $h \in U$
\[
\lVert \rscal{\sigma \ast \tau}{\sigma \ast \tau}{A}(h,z_0) \rVert^{\frac{1}{2}} \geq \lVert \rscal{\sigma \ast \tau}{\sigma \ast \tau}{A}(g,z_0) \rVert^{\frac{1}{2}} - \frac{\epsilon}{2} \geq \lVert \sigma_g \rVert_{C^*(\mathcal{E}_g)} - \epsilon\ .
\]
But $\lVert \rscal{\sigma \ast \tau}{\sigma \ast \tau}{A}(h,z_0) \rVert^{\frac{1}{2}} \leq \lVert (\sigma \ast \tau)_h\rVert_{L^2(\mathcal{E}_h)}$ and since $\lVert \tau_h \rVert_{L^2(\mathcal{E}_h)} \leq 1$ we have
\[
\lVert \sigma_h \rVert_{C^*(\mathcal{E}_h)} \geq \lVert \sigma_g \rVert_{C^*(\mathcal{E}_g)} - \epsilon
\]
for all $h \in U$, which shows that the map is lower semi-continuous.
\end{proof}
We are going to prove that $C^*(\mathcal{E})$ is stably isomorphic to the section algebra of a locally trivial bundle with fibre $\mathsf{M}_\mathnormal{F}^{\infty} \otimes \mathbb{K}$. The following lemma will provide a first step and shows that local sections of $\pi \colon Y \to G$ give rise to trivialisations via Morita equivalences.
\begin{lemma} \label{lem:equiv_trivialisation}
Let $\sigma \colon V \to Y$ be a continuous section of $\pi \colon Y \to G$ over a closed subset $V \subset G$. Let $Y_V = \pi^{-1}(V)$. Denote the corresponding restriction of $\mathcal{G}$ (respectively $\mathcal{E}$) by $\mathcal{G}_V$ (respectively $\mathcal{E}_V$). Let $p_{\mathbb{T}} \colon Y \to \mathbb{T}$ be the restriction of the projection map to $Y$, let $t = p_{\mathbb{T}} \circ \sigma$ and let
\[
\iota \colon Y_V \to \mathcal{G}_V \qquad, \qquad (g,z) \mapsto (g,z,t(g))\ .
\]
The Banach bundle $\mathcal{C}_V = \iota^*\mathcal{E}_V$ gives rise to a Morita equivalence $\mathsf{X}_V$ between $C^*(\mathcal{E}_V)$ and $C(V, \mathsf{M}_\mathnormal{F}^{\infty})$. If $V$ is $G$-invariant, then $\mathsf{X}_V$ is a $G$-equivariant Morita equivalence.
\end{lemma}
\begin{proof}
We will prove the first part of the statement by showing that $\mathcal{C}_V$ provides an equivalence of Fell bundles in the sense of \cite[Sec.~6]{MuhlyWilliams-Disintegration:2008} between $\mathcal{E}_V$ and the $C^*$-algebra bundle $V \times \mathsf{M}_\mathnormal{F}^{\infty}$ over $V$.
First note that the space $Y_V$ is an equivalence between $\mathcal{G}_V$ and the trivial groupoid over $V$, which, by a slight abuse of notation, we will also denote by $V$. Let $\beta \colon V \to \mathcal{G}$ be given by $\beta(g) = (g, t(g), t(g))$. We can and will identify $V \times \mathsf{M}_\mathnormal{F}^{\infty}$ with $\beta^*\mathcal{E}_V$. The bundle $\kappa \colon \mathcal{C}_V \to Y_V$ carries a left action of $\mathcal{E}_V$ and a right action of $\beta^*\mathcal{E}_V = V \times \mathsf{M}_\mathnormal{F}^{\infty} \to V$ such that \cite[Def.~6.1~(a)]{MuhlyWilliams-Disintegration:2008} holds. The two sesquilinear forms
\begin{align*}
\mathcal{C}_V \times \mathcal{C}_V \to \mathcal{E}_V \quad &, \quad (c,d) \mapsto \lscal{\mathcal{E}_V}{c}{d} := c \cdot d^* \ ,\\
\mathcal{C}_V \times_{\kappa} \mathcal{C}_V \to V \times \mathsf{M}_\mathnormal{F}^{\infty} \quad &, \quad (c,d) \mapsto \rscal{c}{d}{\beta^*\mathcal{E}_V} := c^* \cdot d
\end{align*}
satisfy the conditions listed in \cite[Def.~6.1~(b)]{MuhlyWilliams-Disintegration:2008}. Since $\mathcal{E}_V \to \mathcal{G}_V$ is a saturated Fell bundle, \cite[Def.~6.1~(c)]{MuhlyWilliams-Disintegration:2008} is also true for the bundle $\mathcal{C}_V \to Y_V$. Therefore, by \cite[Thm.~6.4]{MuhlyWilliams-Disintegration:2008} the completion $\mathsf{X}_V$ of $C_c(Y_V,\mathcal{C}_V)$ with respect to the norms induced by the above inner products is an imprimitivity bimodule between the $C^*$-algebras $C^*(\mathcal{E}_V)$ and $C(V,\mathsf{M}_\mathnormal{F}^{\infty})$.
For the proof of the second part suppose that $V$ is $G$-invariant. Note that the adjoint action lifts to $Y_V$. Moreover, the action described in Sec.~\ref{subsec:groupaction} restricts to $C^*(\mathcal{E}_V)$. Let $\alpha \colon G \to \Aut{\mathsf{M}_\mathnormal{F}^{\infty}}$ be the action of $G$ on $\mathsf{M}_\mathnormal{F}^{\infty}$ induced by $\Ad_{F(\rho)}$ where $\rho \colon G \to U(n)$ is the standard representation. Then $G$ acts on $C(V, \mathsf{M}_\mathnormal{F}^{\infty})$ via the adjoint action on $V$ and by $\alpha$ on $\mathsf{M}_\mathnormal{F}^{\infty}$. It is straightforward to check on sections that these definitions turn $\mathsf{X}_V$ into an equivariant Morita equivalence.
\end{proof}
\begin{remark}
By \cite[Lem.~9]{SimsWilliams-Amenability:2013} the $C(G)$-algebra structure is compatible with the Fell bundle restriction in the sense that restricting the sections to~$\mathcal{G}_V$ induces a natural $*$-isomorphism $C^*(\mathcal{E})(V) \cong C^*(\mathcal{E}_V)$.
\end{remark}
\begin{definition} \label{def:Fell_condition}
Let $X$ be a locally compact metrisable space. A continuous $C_0(X)$-algebra $A$ whose fibres are stably isomorphic to strongly self-absorbing $C^*$-algebras is said to satisfy the \emph{(global) generalised Fell condition} if for each $x \in X$ there exists a closed neighbourhood $V$ of $x$ and a projection $p \in A(V)$ such that $[p(v)] \in GL_1(K_0(A(v)))$ for all $v \in V$.
\end{definition}
\begin{lemma} \label{lem:Fell_condition}
The fibre algebra $C^*(\mathcal{E}_g)$ is Morita equivalent to $\mathsf{M}_\mathnormal{F}^{\infty}$ and the continuous $C(G)$-algebra $C^*(\mathcal{E})$ satisfies the generalised Fell condition.
\end{lemma}
\begin{proof}
Choose $z_0 \in \mathbb{T}$ such that $(g,z_0) \in Y$. The first statement is now a consequence of Lemma~\ref{lem:equiv_trivialisation} applied to $V = \{g\} \subset G$ and the section $\sigma \colon \{g\} \to Y$ given by $\sigma(g) = (g,z_0)$. For any $g \in G$ let $\mathsf{X}_g$ be the resulting Morita equivalence between $C^*(\mathcal{E}_g)$ and $\mathsf{M}_\mathnormal{F}^{\infty}$.
It remains to be proven that $C^*(\mathcal{E})$ satisfies the generalised Fell condition. Let $g \in G$ and choose an open neighbourhood $U$ of $g$ with the property that
\[
S = \{ z \in \mathbb{T} \setminus \{1\}\ |\ z \notin \EV{h} \text{ for all } h \in U \}
\]
contains an open interval $J \subset S$. Note that $U \times J^2 \subset Y^{[2]}$. Since there are no eigenvalues in between any two points of~$J$, the restriction of $\mathcal{E}$ to this subspace is just the trivial bundle with fibre~$\mathsf{M}_\mathnormal{F}^{\infty}$. Thus, extension of a section by $0$ produces an inclusion of convolution algebras
\[
C_c(U \times J^2, \mathsf{M}_\mathnormal{F}^{\infty}) \to C_c(Y^{[2]},\mathcal{E})
\]
and the completion of the left hand side in the representation on $L^2(\mathcal{E})$ is isomorphic to $C_0(U, \mathbb{K}(L^2(J)) \otimes \mathsf{M}_\mathnormal{F}^{\infty})$. The resulting $*$-homomorphism
\[
C_0(U, \mathbb{K} \otimes \mathsf{M}_\mathnormal{F}^{\infty}) \to C^*(\mathcal{E})
\]
is an inclusion of $C(G)$-algebras. Pick a closed neighbourhood $V \subset U$ of~$g$. If we restrict both sides to $V$ we obtain $C(V, \mathbb{K} \otimes \mathsf{M}_\mathnormal{F}^{\infty}) \to C^*(\mathcal{E})(V)$. Let $e \in \mathbb{K}$ be a rank $1$-projection and define $p \in C^*(\mathcal{E})(V)$ to be the image of $1_{C(V,\mathsf{M}_\mathnormal{F}^{\infty})} \otimes e$ with respect to this inclusion. Fix $v \in V$. The isomorphism
\[
K_0(C^*(\mathcal{E})(v)) \cong K_0(C^*(\mathcal{E}_v)) \cong K_0(\mathsf{M}_\mathnormal{F}^{\infty})
\]
induced by the Morita equivalence $\mathsf{X}_v$ maps the $K$-theory element $[p(v)] \in K_0(C^*(\mathcal{E})(v))$ to the class of the right Hilbert $\mathsf{M}_\mathnormal{F}^{\infty}$-module $p(v)\mathsf{X}_v$ in $K_0(\mathsf{M}_\mathnormal{F}^{\infty})$. We can choose the value of $z_0 \in \mathbb{T}$ used to define $\mathsf{X}_v$ such that $z_0 \in J$. Moreover, we can without loss of generality assume that $e \in \mathbb{K}(L^2(J))$ is the projection onto the subspace spanned by a compactly supported function $f \in C_c(J) \subset L^2(J)$. Then we have
\[
p(v)\mathsf{X}_v \cong p(v) \overline{C_c(J, \mathsf{M}_\mathnormal{F}^{\infty})}^{\lVert\cdot\rVert_{L^2}} \cong eL^2(J) \otimes \mathsf{M}_\mathnormal{F}^{\infty} \cong \mathsf{M}_\mathnormal{F}^{\infty}\ ,
\]
which represents the unit in $K_0(\mathsf{M}_\mathnormal{F}^{\infty})$ and is therefore invertible.
\end{proof}
\begin{corollary} \label{cor:CStarBundle}
The continuous $C(G)$-algebra $C^*(\mathcal{E})$ is stably isomorphic to the section algebra of a locally trivial bundle $\mathcal{A} \to G$ of $C^*$-algebras with fibre $\mathsf{M}_\mathnormal{F}^{\infty} \otimes \mathbb{K}$. In particular, it is classified by a continuous map
\[
G \to B\Aut{\mathsf{M}_\mathnormal{F}^{\infty} \otimes \mathbb{K}} \simeq BGL_1\left(KU\left[d_F^{-1}\right]\right)
\]
where $d_F = \dim(F(\mathbb{C}))$.
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:Fell_condition} the algebra $C^*(\mathcal{E})$ satisfies the generalised Fell condition and its fibres are Morita equivalent to the infinite UHF-algebra $\mathsf{M}_\mathnormal{F}^{\infty}$. Therefore the statement follows from \cite[Cor.~4.9]{DadarlatP-DixmierDouady:2016}.
\end{proof}
\subsection{The spectral sequence}
For $G = SU(n)$ let $\ell = n-1$ be the rank of~$G$. Choose a maximal torus $\mathbb{T}^{\ell}$ of $G$ with Lie algebra $\mathfrak{t}$. Let $\Lambda \subset \mathfrak{t}$ be the integral lattice with dual lattice $\Lambda^*$. Denote by $\rscal{\,\cdot\,}{\,\cdot\,}{\mathfrak{g}}$ the basic inner product on $\mathfrak{g}$. Choose a collection $\alpha_1, \dots, \alpha_{\ell} \in \Lambda^*$ of simple roots and let
\[
\mathfrak{t}_+ = \left\{ \xi \in \mathfrak{t} \ |\ \rscal{\alpha_j}{\xi}{\mathfrak{g}} \geq 0 \ \forall j \in \{1, \dots, \ell\} \right\}
\]
be the corresponding positive Weyl chamber. Let $\Delta^{\ell}$ be the standard $\ell$-simplex defined as
\[
\Delta^{\ell} = \left\{ (t_0, \dots, t_{\ell}) \in \mathbb{R}^{\ell+1} \ |\ \sum_{i=0}^\ell t_i = 1 \text{ and } t_j \geq 0\ \forall j \in \{0,\dots, \ell\} \right\}\ .
\]
This simplex can be identified with the fundamental alcove of $G$, which is the subset cut out from $\mathfrak{t}_+$ by the additional inequality $\rscal{\alpha_0}{\xi}{\mathfrak{g}} \geq -1$, where $\alpha_0$ is the lowest root. The alcove parametrises conjugacy classes of $G$ in the sense that each such class contains a unique element $\exp(\xi)$ with $\xi \in \Delta^\ell$. Denote the corresponding continuous quotient map by
\[
q \colon G \to \Delta^\ell\ .
\]
A sketch of the situation in the case $n = 3$ is shown in Fig.~\ref{fig:alcoveSU(3)}.
\begin{figure}[htp]
\centering
\begin{tikzpicture}[scale=1.8]
\coordinate (0) at (0,0);
\coordinate (alpha2) at (0,{sqrt(2)});
\coordinate (alpha1) at ({sqrt(2)*sin(120)}, {sqrt(2)*cos(120)});
\coordinate (alpha3) at ($(alpha1)+(alpha2)$);
\coordinate (mu0) at ($1/3*(alpha1) + 2/3*(alpha2)$);
\coordinate (mu1) at ($2/3*(alpha1) + 1/3*(alpha2)$);
\draw[-latex] (0) -- (alpha2) node[midway,left] {$\alpha_2\!$};
\draw[-latex] (0) -- (alpha1) node[midway,right] {$\alpha_1$};
\draw[-latex] (0) -- ($-1*(alpha3)$) node[midway,right] {$\,\,\alpha_0$};
\foreach \n in {-2,...,0} {
\foreach \m in {0,...,2} {
\draw[black!20,fill=black!20] ($-\n*(mu0) - \m*(mu1)$) circle (0.8pt);
\draw[black!20,fill=black!20] ($\n*(mu0) + \m*(mu1)$) circle (0.8pt);
}
}
\draw[blue!10,fill=blue!10] (0,0) -- ($(mu0)$) -- ($(mu1)$) -- cycle;
\node at ($0.36*(mu0) + 0.36*(mu1)$) {$\Delta^2$};
\draw[dashed,blue] ($(mu0)$) -- ($(mu1)$);
\draw[dashed,blue] (0,0) -- ($2*(mu0)$);
\draw[dashed,blue] (0,0) -- ($2*(mu1)$);
\foreach \n in {0,...,2} {
\pgfmathsetmacro{\k}{2-\n}
\foreach \m in {0,...,\k} {
\draw[blue,fill=blue] ($\n*(mu0) + \m*(mu1)$) circle (0.9pt);
}
}
\end{tikzpicture}
\caption{\label{fig:alcoveSU(3)}The root system of $G = SU(3)$ with positive roots $\alpha_1$ and $\alpha_2$ and lowest root $\alpha_0$. The blue dots mark the weights inside the Weyl chamber and the fundamental alcove $\Delta^2$ is shown in blue.}
\end{figure}
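Concretely, the alcove point $q(g)$ can be computed from the eigenvalue phases of $g$. The sketch below (numpy) uses one possible normalisation of barycentric coordinates, chosen here for illustration only, and checks that the result lies in $\Delta^{\ell}$ and is invariant under conjugation:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_su(n):
    # Rough random special unitary (QR of a Gaussian, determinant 1).
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q / np.linalg.det(q) ** (1.0 / n)

def alcove_point(g):
    # Eigenvalue phases in [0, 1), sorted decreasingly; the successive
    # differences give one normalisation of barycentric coordinates.
    n = g.shape[0]
    theta = np.sort(np.angle(np.linalg.eigvals(g)) / (2 * np.pi) % 1.0)[::-1]
    t = np.empty(n)
    t[0] = 1.0 - (theta[0] - theta[-1])
    t[1:] = theta[:-1] - theta[1:]
    return t

n = 3
g, h = random_su(n), random_su(n)
t = alcove_point(g)

in_simplex = np.all(t >= -1e-12) and abs(t.sum() - 1.0) < 1e-9
conjugation_invariant = np.allclose(
    t, alcove_point(h @ g @ h.conj().T), atol=1e-8)
```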
For a non-empty subset $I \subset \{0, \dots, \ell\}$ we let $\Delta_I \subset \Delta^{\ell}$ be the closed subsimplex spanned by the vertices in $I$. Let $\xi_I \in \mathfrak{g}$ be the barycentre of $\Delta_I$ and let $G_I$ be the centraliser of $\exp(\xi_I)$. In fact, the isomorphism class of $G_I$ does not depend on our choice of $\xi_I$ as long as it is an element in the interior of~$\Delta_I$. For $J \subset I$ we have $G_I \subset G_J$, which induces a surjective $G$-equivariant map $G/G_I \to G/G_J$. Let
\[
\mathsf{G}_n = \coprod_{\lvert I \rvert = n+1} G/G_I\ .
\]
Denote the set $\{0, \dots, n\}$ by $[n]$. Let $f \colon [m] \to [n]$ be an order-preserving injective map. For each $I \subset \{0, \dots, \ell\}$ with $\lvert I \rvert = n+1$ there is a unique order-preserving identification $[n] \cong I$. Let $J \subset I$ be the subset corresponding to $f([m]) \subset [n]$ in this way. The above construction induces a continuous map
\(
f^*_I \colon G/G_I \to G/G_J
\)
and those maps combine to
\[
f^* \colon \mathsf{G}_n \to \mathsf{G}_m\ .
\]
This turns $[n] \mapsto \mathsf{G}_n$ with $f \mapsto f^*$ into a contravariant functor. Therefore $\mathsf{G}_{\bullet}$ is a semi-simplicial space. The group $G$ can be identified with its geometric realisation, i.e. \
\[
G \cong \lVert \mathsf{G}_{\bullet}\rVert = \left(\coprod_{I} G/G_I \times \Delta_I\right)/\!\!\sim
\]
where the equivalence relation identifies the faces of $\Delta_I$ using the maps $G/G_I \to G/G_J$ in the other component. The map $q \colon G \to \Delta^{\ell}$ is induced by the projection maps $G/G_I \times \Delta_I \to \Delta_I$ in this picture. Let
\[
A_i = \left\{ (t_0, \dots, t_{\ell}) \in \Delta^{\ell}\ \left|\ \sum_{k \neq i} t_k \leq \delta_n \right.\right\} \subset \Delta^{\ell}\ ,
\]
where $0 < \delta_n < 1$ is chosen such that the closed sets $(A_i)_{i \in \{0,\dots,\ell\}}$ cover $\Delta^{\ell}$. Then $(V_i)_{i \in \{0,\dots,\ell\}}$ with $V_k = q^{-1}(A_k)$ is a cover of $G$ by closed sets. Note that each $V_k$ is $G$-homotopy equivalent to the open star of the $k$th vertex. For each non-empty subset $I \subset \{0,\dots, \ell\}$ let
\[
A_I = \bigcap_{i \in I} A_i \quad \text{and} \quad V_I = q^{-1}(A_I) = \bigcap_{i \in I} V_i\ .
\]
Note that the barycentre $\xi_I$ of $\Delta_I$ is contained in $A_I$. Therefore there is a canonical embedding $\iota_I \colon G/G_I \to V_I$.
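Regarding the choice of $\delta_n$ above: since every point of $\Delta^{\ell}$ has some barycentric coordinate at least $1/(\ell+1)$, any $\delta_n \geq \ell/(\ell+1)$ makes the sets $A_i$ a cover. A quick random check for $\ell = 2$ (numpy; the specific value of $\delta$ is chosen here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# A_i = {t in Delta^l : sum_{k != i} t_k <= delta} = {t : t_i >= 1 - delta},
# so the A_i cover the simplex iff every barycentric tuple has some
# coordinate >= 1 - delta.  Since max_i t_i >= 1/(l+1) always holds,
# delta = l/(l+1) suffices.
ell = 2
delta = ell / (ell + 1)

# Random points of Delta^2 sampled from the Dirichlet distribution.
pts = rng.dirichlet(np.ones(ell + 1), size=1000)
covered = np.all(np.max(pts, axis=1) >= 1.0 - delta - 1e-12)
```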
\begin{lemma} \label{lem:GmodGI}
The embedding $\iota_I \colon G/G_I \to V_I$ defined above is a $G$-equi\-va\-riant deformation retract.
\end{lemma}
\begin{proof}
Observe that $\Delta_{\{0,\dots,\ell\} \setminus \{i\}} \cap A_i = \emptyset$, which implies that $\Delta_K \cap A_j = \emptyset$ if $j \notin K$. Hence, the intersection $\Delta_K \cap A_I$ can only be non-empty if $I \subset K$. Therefore
\[
V_I = \left(\coprod_{I \subset K} G/G_K \times (\Delta_K \cap A_I)\right)/\!\!\sim
\]
and the quotient maps $G/G_K \to G/G_I$ induce a well-defined $G$-equivariant continuous map $r_I \colon V_I \to G/G_I$ with the property that $r_I \circ \iota_I = \id{G/G_I}$. Note that the set $A_I$ is convex and consider the contraction $H^A$ given by
\[
H^A \colon A_I \times [0,1] \to A_I \qquad, \qquad (\eta, s) \mapsto \xi_I + (1-s)(\eta - \xi_I) \ .
\]
Since $\xi_I \in \Delta_I$, each $H^A_s$ maps $A_I \cap \Delta_K$ to itself for all sets $K$ with $I \subset K$. Thus, we can lift $H^A$ to a $G$-equivariant continuous map
\[
H \colon V_I \times [0,1] \to V_I
\]
which provides a homotopy between $\iota_I \circ r_I$ and $\id{V_I}$ that leaves $\iota_I(G/G_I)$ invariant.
\end{proof}
\begin{corollary} \label{cor:spectral_seq}
Let $\rho \colon G \to U(n)$ be the standard representation of $G$. For each non-empty subset $I \subset \{0,\dots, \ell\}$ let $\rho_I \colon G_I \to U(n)$ be the restriction of $\rho$ to $G_I$. There is a cohomological spectral sequence with $E_1$-page
\[
E_1^{p,q} = \bigoplus_{\lvert I \rvert = p+1} K^{G_I}_q(\mathsf{M}_\mathnormal{F}^{\infty}) \cong
\begin{cases}
\bigoplus_{\lvert I \rvert = p+1} R(G_I)\left[F(\rho_I)^{-1}\right] & \text{for $q$ even}\ ,\\
0 & \text{for $q$ odd}\ ,
\end{cases}
\]
where $R(H)$ denotes the representation ring of $H$. It converges to the associated graded of a filtration of $K^G_{*}(C^*(\mathcal{E}))$.
\end{corollary}
\begin{proof}
The cover of $G$ by closed sets gives rise to a semi-simplicial space $\mathsf{V}_{\bullet}$ with
\[
\mathsf{V}_n = \coprod_{\lvert I \rvert = n+1} V_I\ .
\]
Let $f\colon [m] \to [n]$ be an injective order-preserving map. Given $I$ with $\lvert I \rvert = n+1$ there is a unique order-preserving identification $[n] \cong I$ and an inclusion $V_I \subset V_J$, where $J \subset I$ is the subset corresponding to $f([m])$. This construction gives a map
\(
f^* \colon \mathsf{V}_n \to \mathsf{V}_m
\)
which turns $[n] \mapsto \mathsf{V}_n$ into a contravariant functor. Let $\mathcal{A} \to G$ be the $C^*$-algebra bundle found in Cor.~\ref{cor:CStarBundle} and denote by $\mathcal{A}_I \to V_I$ its restriction to $V_I$. The spaces $\mathcal{A}_I$ can be assembled to form a simplicial bundle of $C^*$-algebras $\mathsf{A}_{\bullet} \to \mathsf{V}_{\bullet}$ with
\[
\mathsf{A}_n = \coprod_{\lvert I \rvert = n+1} \mathcal{A}_I\ .
\]
Replacing each $V_I$ by $G$ and each $\mathcal{A}_I$ by $\mathcal{A}$ in the definitions of $\mathsf{V}_{\bullet}$ and $\mathsf{A}_{\bullet}$ we also obtain two `constant' semi-simplicial spaces $\mathsf{G}^c_{\bullet}$ and $\mathsf{A}^c_{\bullet}$, respectively. Their geometric realisations are $\lVert \mathsf{G}^c_{\bullet} \rVert \cong G \times \Delta^{\ell}$ and $\lVert \mathsf{A}^c_{\bullet} \rVert \cong \mathcal{A} \times \Delta^{\ell}$. The canonical morphisms $\mathsf{A}_{\bullet} \to \mathsf{A}^c_{\bullet}$ and $\mathsf{V}_{\bullet} \to \mathsf{G}^c_{\bullet}$ give rise to the following diagram:
\[
\begin{tikzcd}
\lVert \mathsf{A}_{\bullet} \rVert \ar[r] \ar[d] & \mathcal{A} \times \Delta^{\ell} \ar[r,"\text{pr}_{\mathcal{A}}"] \ar[d] & \mathcal{A} \ar[d]\\
\lVert \mathsf{V}_{\bullet} \rVert \ar[r] & G \times \Delta^{\ell} \ar[r,"\text{pr}_G" below] & G
\end{tikzcd}
\]
The second square is a pullback. It is not hard to check that the first square is a pullback as well (compare this with \cite[Rem.~2.23]{HenriquesGepner-Orbispaces:2007}). Moreover, the composition $q \colon \lVert \mathsf{V}_{\bullet}\rVert \to G \times \Delta^{\ell} \to G$ in the diagram is a $G$-homotopy equivalence.
For each pair $(X,f)$ where $X$ is a compact Hausdorff $G$-space and $f \colon X \to G$ is a $G$-equivariant continuous map, consider the contravariant functor
\[
(X,f) \mapsto K_*^G(C(X, f^*\mathcal{A}))
\]
from the category of compact Hausdorff $G$-spaces over $G$ to abelian groups. This functor satisfies the analogues of conditions (i) -- (iv) in \cite[\S 5]{Segal-SpecSeq:1968} in this category. Using the same argument as in the proof of \cite[Prop.~5.1]{Segal-SpecSeq:1968} we therefore end up with a spectral sequence with $E_1$-page
\[
E_1^{p,q} = \bigoplus_{\lvert I \rvert = p+1} K_q^G(C(V_I, \mathcal{A}_I)) \cong \bigoplus_{\lvert I \rvert = p+1} K_q^G(C^*(\mathcal{E})(V_I))\ ,
\]
which converges to $K^G_*(C^*(\mathcal{E})) \cong K^G_*(C(G,\mathcal{A}))$, since the $*$-homo\-mor\-phism
\(
C(G,\mathcal{A}) \to C(\lVert \mathsf{V}_{\bullet} \rVert, q^*\mathcal{A})
\)
induces an isomorphism in equivariant $K$-theory by $G$-homotopy invariance. What remains to be done is to identify these $K$-theory groups. By Lemma~\ref{lem:GmodGI} the map $G/G_I \to V_I$ induces an isomorphism $K_q^G(C^*(\mathcal{E})(V_I)) \to K_q^G(C^*(\mathcal{E})(G/G_I))$. By the same lemma each $V_k$ is $G$-equivariantly contractible. Therefore $\left.\mathcal{A}\right|_{G/G_I}$ is equivariantly trivialisable and
\[
K_q^G(C^*(\mathcal{E})(G/G_I)) \cong K^G_q(C(G/G_I,\mathsf{M}_\mathnormal{F}^{\infty})) \cong K^{G_I}_q(\mathsf{M}_\mathnormal{F}^{\infty})\ .
\]
The matrix algebra $\mathsf{M}_{\mathnormal{F}}^{\otimes k}$ is $G$-equivariantly Morita equivalent via the imprimitivity bimodule $F(\mathbb{C}^n)^{\otimes k}$ to $\mathbb{C}$ with the trivial $G$-action. Therefore $K^{G_I}_0(\mathsf{M}_{\mathnormal{F}}^{\otimes k}) \cong K^{G_I}_0(\mathbb{C}) \cong R(G_I)$ and $K_1^{G_I}(\mathsf{M}_{\mathnormal{F}}^{\otimes k}) = 0$. The $*$-homomorphism $\mathsf{M}_{\mathnormal{F}}^{\otimes k} \to \mathsf{M}_{\mathnormal{F}}^{\otimes (k+1)}$ given by $a \mapsto a \otimes 1$ induces the multiplication with the $G_I$-representation $F(\mathbb{C}^n)$ on $K_0^{G_I}$. This implies
\[
K_{2q}^{G_I}(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R(G_I)\left[F(\rho_I)^{-1}\right] \qquad \text{and} \qquad K_{2q+1}^{G_I}(\mathsf{M}_\mathnormal{F}^{\infty}) = 0
\]
for the $K$-theory of the direct limit.
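More explicitly, equivariant $K$-theory is continuous with respect to direct limits, so that
\[
K_{0}^{G_I}(\mathsf{M}_\mathnormal{F}^{\infty}) \cong \varinjlim \Bigl( R(G_I) \xrightarrow{\ \cdot F(\rho_I)\ } R(G_I) \xrightarrow{\ \cdot F(\rho_I)\ } \cdots \Bigr)\ ,
\]
which is precisely the localisation of $R(G_I)$ at the multiplicative set generated by $F(\rho_I)$.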
\end{proof}
\subsection{The module structure of $K_*^G(C^*(\mathcal{E}))$}
An important consequence of Cor.~\ref{cor:CStarBundle} and Cor.~\ref{cor:spectral_seq} is that $K_*^G(C^*(\mathcal{E}))$ has a canonical module structure over the ring $K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R(G)[F(\rho)^{-1}]$. To see this, we need the following observation about strongly self-absorbing $C^*$-dynamical systems.
\begin{lemma} \label{lem:absorption}
Let $G$ be a compact Lie group and let $\sigma \colon G \to U(n)$ be a unitary representation of $G$. Let $D = M_n(\mathbb{C})^{\otimes \infty}$ be the infinite UHF-algebra obtained from $M_n(\mathbb{C})$ and let $\alpha \colon G \to \Aut{D}$ be the action of $G$, which acts on each tensor factor of $D$ via $\Ad_{\sigma}$. Let $X$ be a compact Hausdorff $G$-space. Then the first tensor factor embedding
\[
\iota_X \colon C(X, D) \to C(X, D) \otimes D \quad , \quad f \mapsto f \otimes 1_D
\]
is strongly asymptotically $G$-unitarily equivalent to a $*$-isomorphism.
\end{lemma}
\begin{proof}
By \cite[Prop.~6.3]{Szabo-strselfabsdyn-II:2018}, $(D,\alpha)$ is strongly self-absorbing in the sense of \cite[Def.~3.1]{Szabo-strselfabsdyn:2018}. Using the notation introduced in \cite[Def.~2.4]{Szabo-strselfabsdyn-II:2018} we see that
\[
\left(D_{\infty,\alpha}\right)^{\alpha_{\infty}} = \left(D^{\alpha}\right)_{\infty}
\]
by integrating over $G$. The fixed-point algebra $D^{\alpha}$ is an AF-algebra and thus has a path-connected unitary group. By \cite[Prop.~2.19]{Szabo-strselfabsdyn-II:2018} the action $\alpha$ is unitarily regular. The result will therefore follow from \cite[Thm.~3.2]{Szabo-strselfabsdyn-III:2017} if we can construct a unital equivariant $*$-homomorphism
\(
\theta \colon D \to F_{\infty,\alpha}(C(X,D))
\). Let $s_k \colon D \to D$ be an approximately central sequence of unital equivariant $*$-homomorphisms and define
\[
\theta(d)_k = 1_{C(X)} \otimes s_k(d)\ .
\]
The map $\theta$ defined in this way is a unital equivariant $*$-homomorphism with the required properties.
\end{proof}
There is a canonical isomorphism $K^G_0(\mathbb{C}) \cong R(G)$. As we have seen above, we also have $K^G_0(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R(G)[F(\rho)^{-1}]$, where the isomorphism can be chosen in such a way that the unit map $\mathbb{C} \to \mathsf{M}_\mathnormal{F}^{\infty}$ induces the localisation homomorphism $R(G) \to R(G)[F(\rho)^{-1}]$. Note that the multiplication in $R(G)$ corresponds to the tensor product in $K^G_0(\mathbb{C})$. Likewise, the identification $K^G_0(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R(G)[F(\rho)^{-1}]$ is also an isomorphism of rings. The multiplication in $R(G)[F(\rho)^{-1}]$ corresponds to
\[
\begin{tikzcd}
K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \otimes K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \ar[r] & K_0^G(\mathsf{M}_\mathnormal{F}^{\infty} \otimes \mathsf{M}_\mathnormal{F}^{\infty}) & \ar[l,"\cong"] K_0^G(\mathsf{M}_\mathnormal{F}^{\infty})
\end{tikzcd}
\]
where the second map is induced by the first factor embedding. To see why this is true it suffices to note that the following diagram commutes
\[
\begin{tikzcd}
K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \otimes K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \ar[r] & K_0^G(\mathsf{M}_\mathnormal{F}^{\infty} \otimes \mathsf{M}_\mathnormal{F}^{\infty}) & \ar[l,"\cong" above] K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \\
K_0^G(\mathbb{C}) \otimes K_0^G(\mathbb{C}) \ar[r] \ar[u] & K_0^G(\mathbb{C}) \ar[u] \ar[ur]
\end{tikzcd}
\]
where the vertical homomorphisms are induced by unit maps and turn into isomorphisms after localisation.
\begin{prop} \label{prop:module_structure}
The first factor embedding $C^*(\mathcal{E}) \to C^*(\mathcal{E}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}$ given by $a \mapsto a \otimes 1_{\mathsf{M}_\mathnormal{F}^{\infty}}$ induces an isomorphism in equivariant $K$-theory and turns $K_*^G(C^*(\mathcal{E}))$ into a module over the ring $K_0^G(\mathsf{M}_\mathnormal{F}^{\infty})\cong R(G)[F(\rho)^{-1}]$ via
\[
\begin{tikzcd}
K_*^G(C^*(\mathcal{E})) \otimes K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \ar[r] & K_*^G(C^*(\mathcal{E}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}) & \ar[l,"\cong" midway] K_*^G(C^*(\mathcal{E}))
\end{tikzcd}
\]
where the first homomorphism is induced by the tensor product in $K$-theory. The sequence constructed in Cor.~\ref{cor:spectral_seq} is a spectral sequence of modules.
\end{prop}
\begin{proof}
Fix a non-empty subset $I \subset \{0,\dots, \ell\}$. Let $X_I = \iota_I(G/G_I) \subset G$, denote the barycentre of $\Delta_I$ by $\xi_I$ and note that $X_I = q^{-1}(\xi_I)$. We will first show that the embedding $C^*(\mathcal{E})(X_I) \to C^*(\mathcal{E})(X_I) \otimes \mathsf{M}_\mathnormal{F}^{\infty}$ induces an isomorphism in equivariant $K$-theory. The group elements $g \in X_I$ share the same eigenvalues. Thus, there exists $z_0 \in \mathbb{T}$ with the property that $(g,z_0) \in Y$ for one (and hence all) $g \in X_I$. Define $\sigma_I \colon X_I \to Y$ by $\sigma_I(g) = (g,z_0)$ and let $\mathsf{X}_I$ be the $G$-equivariant Morita equivalence resulting from Lemma~\ref{lem:equiv_trivialisation} using the section $\sigma_I$. The claimed isomorphism is then a consequence of the following commutative diagram
\[
\begin{tikzcd}
K^G_*(C^*(\mathcal{E})(X_I)) \ar[r] \ar[d,"\cong"] & K^G_*(C^*(\mathcal{E})(X_I) \otimes \mathsf{M}_\mathnormal{F}^{\infty}) \ar[d,"\cong"] \\
K^G_*(C(X_I,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[r,"\cong"] & K^G_*(C(X_I,\mathsf{M}_\mathnormal{F}^{\infty} \otimes \mathsf{M}_\mathnormal{F}^{\infty}))
\end{tikzcd}
\]
in which the vertical maps are induced by $\mathsf{X}_I$ and the horizontal isomorphism follows from Lemma~\ref{lem:absorption}.
The first factor embedding $C^*(\mathcal{E}) \to C^*(\mathcal{E}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}$ induces a natural transformation between the spectral sequences from Cor.~\ref{cor:spectral_seq} associated to the functors $(X,f) \mapsto K^G_*(C^*(f^*\mathcal{E}))$ and $(X,f) \mapsto K^G_*(C^*(f^*\mathcal{E}) \otimes \mathsf{M}_\mathnormal{F}^{\infty})$, which is an isomorphism on all pages by our previous observation. This implies that $K^G_*(C^*(\mathcal{E})) \to K^G_*(C^*(\mathcal{E}) \otimes \mathsf{M}_\mathnormal{F}^{\infty})$ is also an isomorphism, which gives rise to the module structure as described. A diagram chase shows that this structure is compatible with the $K$-theoretic description of the multiplication in $K^G_0(\mathsf{M}_\mathnormal{F}^{\infty})$ described above.
\end{proof}
\begin{remark}
It would be interesting to know whether the first factor embedding $C^*(\mathcal{E}) \to C^*(\mathcal{E}) \otimes \mathsf{M}_\mathnormal{F}^{\infty}$ itself is strongly asymptotically $G$-unitarily equivalent to a $*$-isomorphism. The analogous non-equivariant statement is true by \cite[Lemma~3.4]{DadarlatP-DixmierDouady:2016} and Cor.~\ref{cor:CStarBundle}.
\end{remark}
\section{The equivariant higher twisted $K$-theory of $SU(n)$}
In this section we will compute the equivariant higher twisted $K$-theory of $G = SU(n)$ for $n \in \{2,3\}$ with respect to the adjoint action of $G$ on itself and the equivariant twist described by the Fell bundle $\mathcal{E}$ constructed in Cor.~\ref{cor:Fell_bundle}. This is defined to be the $G$-equivariant operator algebraic $K$-theory of the $G$-$C^*$-algebra $C^*(\mathcal{E})$, i.e.
\(
K^G_{\ast}(C^*(\mathcal{E}))
\).
\subsection{The case $SU(2)$}
For $G = SU(2)$ we have $\ell =1$. The map $q \colon G \to \Delta^1$ can be described as follows: Since the eigenvalues of any $g \in SU(2)$ are complex conjugates of one another, each $g \neq \pm 1$ has a unique eigenvalue $\lambda_g$ with positive imaginary part. Note that $g \mapsto \arg(\lambda_g)$ extends continuously to all of $G$. Let $q \colon SU(2) \to [0,1]$ be given by
\[
q(g) = \frac{\arg(\lambda_g)}{\pi} \in [0,1]\ .
\]
If we pick $\delta_n = \frac{2}{3}$, the spectral sequence from Cor.~\ref{cor:spectral_seq} reduces to the Mayer-Vietoris sequence for the $G$-equivariant closed cover of $SU(2)$ by $V_0 = q^{-1}([0,\tfrac{2}{3}])$ and $V_1 = q^{-1}([\tfrac{1}{3},1])$. Since the $K_1$-terms on the $E_1$-page vanish, it collapses to the following six-term exact sequence:
\[
\begin{tikzcd}[column sep=0.8cm]
K_0^G(C^*(\mathcal{E})) \ar[r] & R_F(SU(2)) \oplus R_F(SU(2)) \ar[r,"d"] & R_{F}(\mathbb{T}) \ar[d] \\
0 \ar[u] & \ar[l] 0 & \ar[l] K_1^G(C^*(\mathcal{E}))
\end{tikzcd}
\]
where $R_F(\mathbb{T}) = R(\mathbb{T})[F(\rho_{\{0,1\}})^{-1}]$ and $R_F(SU(2)) = R(SU(2))[F(\rho)^{-1}]$. Let $\rho$ be the standard representation of $SU(2)$; then
\begin{align*}
R_F(SU(2)) & \cong \mathbb{Z}[\rho][F(\rho)^{-1}] \ ,\\
R_F(\mathbb{T}) & \cong \mathbb{Z}[t,t^{-1}][F(t + t^{-1})^{-1}]\ .
\end{align*}
\begin{lemma} \label{lem:basis}
As a module over $R_F(SU(2))$ the ring $R_F(\mathbb{T})$ is free of rank~$2$ and $\beta = \{1, t\}$ is a basis.
\end{lemma}
\begin{proof}
Let $q = t + t^{-1}$. It suffices to show that any $f \in \mathbb{Z}[t,t^{-1}]$ can be uniquely written as $f = g_1 + tg_2$ with $g_i \in \mathbb{Z}[q] \subset \mathbb{Z}[t,t^{-1}]$. Let $\alpha \in \Aut{\mathbb{Z}[t,t^{-1}]}$ be given by $\alpha(t) = t^{-1}$. Let
\[
g_1 = \frac{t^{-1}f - t\alpha(f)}{t^{-1} - t} \qquad, \qquad g_2 = \frac{\alpha(f) - f}{t^{-1} - t} \ .
\]
Note that $\alpha(g_i) = g_i$ for $i \in \{1,2\}$ and $f = g_1 + tg_2$. Let $m \in \mathbb{Z}$ and consider $f = t^m$. In this case the numerator is divisible by the denominator and $g_i \in \mathbb{Z}[q]$. Using the linearity of the expressions in $f$ we see that $g_i \in \mathbb{Z}[q]$ holds in general. Suppose that $g_1 + tg_2 = g_1' + tg_2'$ for another pair $g_i' \in \mathbb{Z}[q]$. Applying $\alpha$ to both sides we obtain
\[
\begin{pmatrix}
1 & t \\
1 & t^{-1}
\end{pmatrix}
\begin{pmatrix}
g_1 \\
g_2
\end{pmatrix} =
\begin{pmatrix}
1 & t \\
1 & t^{-1}
\end{pmatrix}
\begin{pmatrix}
g_1' \\
g_2'
\end{pmatrix} \ .
\]
Multiplication by the matrix $\left(\begin{smallmatrix}
t^{-1} & -t \\
-1 & 1
\end{smallmatrix}\right)$ yields $(t^{-1} - t)g_i = (t^{-1} - t)g_i'$ and a comparison of coefficients gives $g_i = g_i'$.
\end{proof}
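To illustrate the decomposition, take $f = t^2$: the formulas in the proof yield
\[
g_1 = \frac{t^{-1}\cdot t^{2} - t\cdot t^{-2}}{t^{-1} - t} = -1 \qquad \text{and} \qquad g_2 = \frac{t^{-2} - t^{2}}{t^{-1} - t} = t + t^{-1} = q\ ,
\]
so that $t^2 = -1 + t\,q$, in accordance with Lemma~\ref{lem:basis}.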
Observe that $F(\mathbb{C})$ is a representation of $\mathbb{T}$, which we will identify with its corresponding polynomial in $\mathbb{Z}[t,t^{-1}]$ and denote by $F(t)$. Let $\alpha \in \Aut{\mathbb{Z}[t,t^{-1}]}$ be the ring automorphism given by $\alpha(t) = t^{-1}$ and note that
\[
\alpha(F(t)) = F(t^{-1})\ .
\]
\begin{lemma} \label{lem:differential}
If we identify $R_F(\mathbb{T})$ with $R_F(SU(2)) \oplus R_F(SU(2))$ using the basis $\beta = \{1,t\}$ from Lemma~\ref{lem:basis}, then the homomorphism
\[
d \colon R_F(SU(2)) \oplus R_F(SU(2)) \to R_F(\mathbb{T})
\]
is represented by the matrix
\[
\begin{pmatrix}
1 & -g_1(F) \\
0 & -g_2(F)
\end{pmatrix}
\]
for polynomials $g_i(F) \in R_F(SU(2))$ given by
\begin{equation} \label{eqn:coeff_F}
g_1(F) = \frac{t^{-1}F(t) - tF(t^{-1})}{t^{-1} - t} \qquad \text{and} \qquad g_2(F) = \frac{F(t^{-1}) - F(t)}{t^{-1} - t}\ .
\end{equation}
(Note that $g_i(F)$ satisfies $\alpha(g_i(F)) = g_i(F)$ and therefore defines an element of $R_F(SU(2))$.)
\end{lemma}
\begin{proof}
We can identify $K^G_0(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R_F(SU(2))$ and $K^{\mathbb{T}}_0(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R_F(\mathbb{T})$. Using these isomorphisms, the homomorphism $d$ fits into the commutative diagram
\begin{equation} \label{eqn:diag_d}
\begin{tikzcd}
K^G_0(C^*(\mathcal{E})(V_0)) \oplus K^G_0(C^*(\mathcal{E})(V_1)) \ar[d,"\cong" left] \ar[r] & K^G_0(C^*(\mathcal{E})(V_0 \cap V_1)) \ar[d,"\cong"] \\
K^G_0(\mathsf{M}_\mathnormal{F}^{\infty}) \oplus K^G_0(\mathsf{M}_\mathnormal{F}^{\infty}) \ar[r,"d" below] & K^{\mathbb{T}}_0(\mathsf{M}_\mathnormal{F}^{\infty})
\end{tikzcd}
\end{equation}
where the upper horizontal map is induced by the inclusions $V_0 \cap V_1 \to V_i$ for $i \in \{0,1\}$. The two vertical isomorphisms are constructed as follows: Both $X_{i} \cong G/G_{i} = \ast$ are one-point spaces. By Lemma~\ref{lem:GmodGI} the inclusions $X_{i} \to V_i$ are $G$-equivariant homotopy equivalences. We will choose specific Morita equivalences between $C^*(\mathcal{E})(V_i)$ and $C(V_i, \mathsf{M}_\mathnormal{F}^{\infty})$, which give rise to the following isomorphisms
\[
\begin{tikzcd}
K^G_0(C^*(\mathcal{E})(V_i)) \ar[r,"\cong"] & K^G_0(C(V_i,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[r,"\cong"] & K^G_0(\mathsf{M}_\mathnormal{F}^{\infty})
\end{tikzcd}
\]
where the first map is induced by the Morita equivalence and the second by the inclusion $X_i \to V_i$. Let $\omega_0 = -1$ and $\omega_1 = \exp(\tfrac{\pi i}{6})$. Consider the continuous sections $\sigma_i \colon V_i \to Y$ given by $\sigma_i(g) = (g,\omega_i)$ for $i \in \{0,1\}$, which are well-defined by the definition of $V_i$. By Lem.~\ref{lem:equiv_trivialisation} they induce equivariant Morita equivalences $\mathsf{X}_{V_i}$ between $C^*(\mathcal{E})(V_i)$ and $C(V_i,\mathsf{M}_\mathnormal{F}^{\infty})$.
For the vertical isomorphism on the right hand side we restrict the Morita equivalence induced by $\mathsf{X}_{V_0}$ to $V_0 \cap V_1$ and use
\[
\begin{tikzcd}
K^G_0(C^*(\mathcal{E})(V_0 \cap V_1)) \ar[r,"\cong"] & K^G_0(C(V_0 \cap V_1,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[r,"\cong"] & K^G_0(C(G/\mathbb{T}, \mathsf{M}_\mathnormal{F}^{\infty}))
\end{tikzcd}
\]
with the first map induced by the equivalence and the second by
\[
G/\mathbb{T} \cong X_{\{0,1\}} \to V_0 \cap V_1 \qquad , \qquad [g] \mapsto g\begin{pmatrix}
i & 0 \\
0 & -i
\end{pmatrix}g^{-1}\ .
\]
We obtain the following description of $d$: If $d(H_0,H_1) = d^{(0)}(H_0) + d^{(1)}(H_1)$, then $d^{(i)}$ fits into the following commutative diagram:
\[
\begin{tikzcd}[column sep=2cm]
K_0^G(C^*(\mathcal{E})(V_i)) \ar[r,"\text{res}"] \ar[d,"\cong" left,"\mathsf{X}_{V_i}" right] & K_0^G(C^*(\mathcal{E})(V_0 \cap V_1)) \ar[d,"\cong" left, "\left.\mathsf{X}_{V_0}\right|_{V_0 \cap V_1}" right] \\
K_0^G(C(V_i,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[d,"\cong" left] \ar[r,"\mathsf{X}_{V_i}^{\text{op}} \otimes \mathsf{X}_{V_0}"] & K_0^G(C(V_0 \cap V_1,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[d,"\cong" left] \\
K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \ar[r, "d^{(i)}"] & K_0^{G}(C(G/\mathbb{T},\mathsf{M}_\mathnormal{F}^{\infty}))
\end{tikzcd}
\]
where the tensor product on the middle horizontal arrow is taken over $C^*(\mathcal{E})(V_0 \cap V_1)$ and the bimodules have to be restricted to $V_0 \cap V_1$. In the case $i = 0$ the bimodule $\mathsf{X}_{V_0}^{\text{op}} \otimes \mathsf{X}_{V_0}$ is trivial. Let $H_0 \in R(G) \subset K^G_0(\mathsf{M}_\mathnormal{F}^{\infty})$. Using the isomorphism $K_0^{G}(C(G/\mathbb{T},\mathsf{M}_\mathnormal{F}^{\infty})) \cong K_0^{\mathbb{T}}(\mathsf{M}_\mathnormal{F}^{\infty})$ the element $d^{(0)}(H_0)$ agrees with the restriction of $H_0$ to $\mathbb{T}$. This gives the first column of the matrix in the statement.
Let $I = \{0,1\}$ and define $\sigma_{I} \colon V_I \to \mathcal{G}$ by $\sigma_{I}(g) = (g,\omega_1,\omega_0)$. Note that this is well-defined and we have the following isomorphism of bimodules:
\[
\mathsf{X}_{V_1}^{\text{op}} \otimes \mathsf{X}_{V_0} \cong C(V_{I}, \sigma_{I}^*\mathcal{E})\ .
\]
All elements of $X_I$ have eigenvalues $\pm i$. Therefore over $X_I$ this bimodule restricts to continuous sections of the bundle with fibre $E_g \otimes \mathsf{M}_\mathnormal{F}^{\infty}$, where
\[
E_{g} = F(\Eig{g}{i})\ ,
\]
i.e.\ it corresponds to taking the tensor product with the vector bundle $E \to G/\mathbb{T}$, which is isomorphic to $F(L)$, where $L \to G/\mathbb{T}$ is the canonical line bundle associated to the principal $\mathbb{T}$-bundle $G \to G/\mathbb{T}$. The fibre of $F(L)$ over $[e] \in G/\mathbb{T}$ is the representation $F(\mathbb{C})$, where $\mathbb{T}$ acts on $\mathbb{C}$ by left multiplication. By Lemma~\ref{lem:basis} the decomposition of $F(t) \in R(\mathbb{T})$ with respect to the basis $\beta$ is given by $g_1(F)$ and $g_2(F)$. Together with the sign convention in the exact sequence this explains the second column.
\end{proof}
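To illustrate \eqref{eqn:coeff_F}, consider the exponential functor $F = \Lambda^*$, for which $F(t) = 1 + t$. In this case
\[
g_1(F) = \frac{t^{-1}(1+t) - t(1+t^{-1})}{t^{-1}-t} = 1 \qquad \text{and} \qquad g_2(F) = \frac{(1+t^{-1}) - (1+t)}{t^{-1}-t} = 1\ ,
\]
in agreement with the decomposition $F(t) = 1 + t \cdot 1$ with respect to the basis $\beta = \{1,t\}$.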
\begin{theorem} \label{thm:twisted_eq_K_of_SU2}
Let $F$ be an exponential functor with $F(\mathbb{C}) \ncong F(\mathbb{C}^*)$ as $\mathbb{T}$-representations and let $g_2(F) \in R_F(SU(2))$ be the polynomial from \eqref{eqn:coeff_F}. The equivariant higher twisted $K$-theory of $G = SU(2)$ with twist described by the Fell bundle $\mathcal{E}$ constructed from $F$ is given by
\begin{align*}
K^G_0(C^*(\mathcal{E})) & = 0\ , \\
K^G_1(C^*(\mathcal{E})) & = R_F(SU(2))/(g_2(F))\ .
\end{align*}
In particular, $K^G_1(C^*(\mathcal{E}))$ is always a quotient ring of $R_F(SU(2))$.
\end{theorem}
\begin{proof}
The localisation of an integral domain at a multiplicative subset not containing $0$ is again an integral domain; in particular, $R_F(\mathbb{T})$ has no zero divisors. By hypothesis we have $g_2(F) \neq 0$. Thus, the matrix representation of $d$ from Lemma~\ref{lem:differential} implies that $d$ is injective, which proves $K^G_0(C^*(\mathcal{E})) = 0$. The group $K^G_1(C^*(\mathcal{E}))$ is isomorphic to the cokernel of $d$, and the matrix representation of $d$ implies that it has the claimed form.
\end{proof}
We will conclude this section with explicit computations for some exponential functors.
\begin{example}
Let $b_1,\dots,b_k \in \mathbb{N}$ and let $W_j = \mathbb{C}^{b_j}$. Let
\[
F_j(V) = \bigoplus_{m \in \mathbb{N}_0} W_j^{\otimes m} \otimes \Lambda^m(V)
\]
and define $F = F_1 \otimes \dots \otimes F_k$. By \cite[Sec.~2.2]{Pennig:2018} each $F_j$ is an exponential functor and so is $F$. The character of the irreducible representation $\rho_m$ of $SU(2)$ with highest weight $m$ restricts to $t^{m} + t^{m-2} + \dots + t^{-m+2} + t^{-m}$ in $R(\mathbb{T})$. Since $F_i(t) = 1 + b_i\,t$, we have $F(t) = \prod_{i=1}^k (1 + b_i\,t)$ and compute $g_2(F)$ as follows:
\begin{align*}
g_2(F) & = \frac{\prod_{i=1}^k (1+b_i\,t) - \prod_{i=1}^k (1+b_i\,t^{-1})}{t - t^{-1}}
= \sum_{\ell=1}^k\left(\sum_{ \genfrac{}{}{0pt}{3}{I \subset \{1,\dots,k\}}{\lvert I \rvert = \ell}} b_I\right)\rho_{\ell-1}
\end{align*}
where $b_I$ is the product over all $b_i$ with $i \in I$. In the case $b_1 = \dots = b_k = 1$, i.e.\ for $F(V) = \Lambda^*(V)^{\otimes k}$, we obtain
\[
g_2(F) = \sum_{\ell=1}^k \binom{k}{\ell} \rho_{\ell-1}\ .
\]
Using the fusion rules for $SU(2)$ the corresponding rings can be computed explicitly by expressing the ideals in terms of $\rho = \rho_1$. For example,
\begin{align*}
k = 3:& \qquad g_2(F) = 3 + 3\rho_1 + \rho_2 = \rho^2 + 3\rho + 2 = (\rho + 2)(\rho+1)\ ,\\
k = 4:& \qquad g_2(F) = 4 + 6\rho_1 + 4\rho_2 + \rho_3 = \rho(\rho + 2)^2\ ,\\
k = 5:& \qquad g_2(F) = 5 + 10\rho_1 + 10\rho_2 + 5\rho_3 + \rho_4 = (\rho+2)^2(\rho^2 + \rho -1)\ ,\\
k = 6:& \qquad g_2(F) = 6 + 15\rho_1 + 20\rho_2 + 15\rho_3 + 6\rho_4 + \rho_5 = (\rho+2)^3(\rho^2-1)\ .
\end{align*}
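Here we used the expressions of the $\rho_m$ as polynomials in $\rho$, which follow recursively from the $SU(2)$ fusion rule $\rho \cdot \rho_m = \rho_{m+1} + \rho_{m-1}$:
\[
\rho_2 = \rho^2 - 1\ , \qquad \rho_3 = \rho^3 - 2\rho\ , \qquad \rho_4 = \rho^4 - 3\rho^2 + 1\ , \qquad \rho_5 = \rho^5 - 4\rho^3 + 3\rho\ .
\]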
Note that the element $\rho+2 = \Lambda^*(\rho)$ is a unit in $R_F(SU(2))$. The case $k=5$ seems to be particularly interesting as the following corollary shows.
\begin{corollary}
The equivariant higher twisted $K$-theory of $SU(2)$ with twist given by the exponential functor $F=\left(\Lambda^*\right)^{\otimes 5}$ satisfies
\[
K^G_1(C^*(\mathcal{E})) \cong \mathbb{Z} \oplus \mathbb{Z}
\]
with basis $\{1,x\}$, where $x$ is the class of $-\rho$. It carries a ring structure given by the Yang-Lee fusion rules
\(
x^2 = x + 1
\).
\end{corollary}
\begin{proof}
Note that $K_1^G(C^*(\mathcal{E})) \cong R_F(SU(2))/(g_2(F))$ carries a ring structure. Since $\rho+2$ is a unit in $R_F(SU(2))$, the ideal $(g_2(F))$ is generated by $\rho^2 + \rho - 1$, and in the quotient ring the relation $(\rho+2)(1-\rho) = 1$ holds. This shows that the localisation is not necessary, since $\rho+2$ is already invertible with
\[
(\rho + 2)^{-1} = (1-\rho)\ .
\]
Hence $K_1^G(C^*(\mathcal{E})) \cong \mathbb{Z}[\rho]/(\rho^2 + \rho - 1)$, which is free abelian with basis $\{1,-\rho\}$. The second statement follows directly from the relation $\rho^2 + \rho - 1 = 0$: setting $x = -\rho$ gives $x^2 = \rho^2 = 1 - \rho = 1 + x$.
\end{proof}
\end{example}
\begin{example}
The classical case at level $k \in \mathbb{N}$ corresponds to the choice $F = \left(\Lambda^{\textrm{top}}\right)^{\otimes k}$. In this situation we have $\mathsf{M}_\mathnormal{F}^{\infty} \cong \mathbb{C}$. This implies
\[
R_F(SU(2)) \cong R(SU(2))\ .
\]
Together with
\[
g_2(F) = \frac{t^k - t^{-k}}{t - t^{-1}} = \rho_{k-1}
\]
we obtain $K^G_1(C^*(\mathcal{E})) = R(SU(2))/(\rho_{k-1})$.
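For instance, for $k = 3$ this gives
\[
K^G_1(C^*(\mathcal{E})) \cong \mathbb{Z}[\rho]/(\rho^2 - 1)\ ,
\]
since $\rho_2 = \rho^2 - 1$ by the fusion rules.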
\end{example}
\subsection{The case $SU(3)$}
The group $G = SU(3)$ has rank $\ell = 2$. Let $F$ be an exponential functor and let $\rho$ be the standard representation of $G$ on $\mathbb{C}^3$. Consider the following localisations of representation rings:
\begin{align*}
R_F(G_I) &= R(G_I)\left[F(\left.\rho\right|_{G_I})^{-1}\right] \ .
\end{align*}
For $\lvert I \rvert = 1$ we have $G_I = SU(3)$ and we will denote $G_{\{i\}}$ by $G_i$. In case $\lvert I \rvert = 2$ the group $G_I$ is isomorphic to $U(2)$ and the choice of $I$ determines an embedding $U(2) \subset SU(3)$. If $I = \{0,1,2\}$, then $G_I$ is the subgroup of all diagonal matrices, which is our choice of maximal torus $\mathbb{T}^2 \subset SU(3)$. The $E_1$-page of the spectral sequence from Cor.~\ref{cor:spectral_seq} vanishes in odd rows and has the following chain complex in the even rows:
\[
\begin{tikzcd}[column sep=1.5cm]
R_F(SU(3))^3 \ar[r,"d_0" above] & R_F(U(2))^3 \ar[r,"d_1" above] & R_F(\mathbb{T}^2)
\end{tikzcd}
\]
The generators for the representation rings are chosen as follows:
\begin{align*}
R(SU(3)) &\cong \mathbb{Z}[s_1,s_2] \ ,\\
R(U(2)) &\cong \mathbb{Z}[s,d,d^{-1}] \ ,\\
R(\mathbb{T}^2) &\cong \mathbb{Z}[t_1^{\pm 1},t_2^{\pm 1},t_3^{\pm 1}]/(t_1t_2t_3 - 1)\ ,
\end{align*}
where $s_1 = \rho$ is the standard representation of $SU(3)$, $s_2 = \@ifnextchar^\@extp{\@extp^{\,}}^2s_1$, $s$ denotes the standard representation of $U(2)$ on $\mathbb{C}^2$ and $d$ is its determinant representation. The characters $t_i$ are obtained by restricting the standard representation of $SU(3)$ to the maximal torus $\mathbb{T}^2$ and projecting to the $i$th diagonal entry. Let $r \colon R_F(SU(3)) \to R_F(U(2))$ be the restriction homomorphism\footnote{The three inclusions $G_I \subset G$ for $\lvert I \rvert = 2$ induce the same map on representation rings.}. Then we have
\begin{align*}
r(s_1) &= s + d^{-1} \ ,\\
r(s_2) &= d^{-1}s + d\ .
\end{align*}
Let $\lambda_F = F(d^{-1})$ and $\mu_F = F(s)$. Note that $F(s + d^{-1})= F(s) \cdot F(d^{-1})$ is a unit in $R_F(U(2))$. Hence, the same is true for $\lambda_F,\mu_F \in R_F(U(2))$. To express the differential $d_0$ in terms of $r$, $\lambda_F$ and $\mu_F$, we first need to give an explicit description of the map $q \colon SU(3) \to \Delta^2$: The eigenvalues of each $g \in SU(3)$ can be uniquely written in the form
\[
\exp(2\pi i \kappa_0),\quad \exp(2\pi i \kappa_1),\quad \exp(2\pi i \kappa_2)\ ,
\]
where $\kappa_0, \kappa_1, \kappa_2 \in \mathbb{R}$ satisfy $\sum_{j=0}^2 \kappa_j = 0$ and
\(
\kappa_0 \geq \kappa_1 \geq \kappa_2 \geq \kappa_0 - 1
\).
Let $\mu_1 = \text{diag}\left(\tfrac{2}{3}, -\tfrac{1}{3}, -\tfrac{1}{3}\right) \in \mathfrak{t}$ and $\mu_2 = \text{diag}\left(\tfrac{1}{3}, \tfrac{1}{3}, -\tfrac{2}{3}\right) \in \mathfrak{t}$ be the (duals of the) fundamental weights. For any triple $\left(\kappa_0, \kappa_1, \kappa_2\right)$ as above, there are unique values $s,t \geq 0$ with $s + t \leq 1$ and
\begin{equation} \label{eqn:kappa}
\text{diag}\left(\kappa_0, \kappa_1, \kappa_2\right) = s\,\mu_1 + t\,\mu_2\ .
\end{equation}
The map $q \colon SU(3) \to \Delta^2$ sends $g \in SU(3)$ to the point $(1 - (s+t), s,t)$ in the simplex, i.e.\ if $\{e_0, e_1, e_2\}$ denotes the standard basis of $\mathbb{R}^3$, then $q(\exp(2\pi i \mu_j))$ agrees with $e_j$. We choose $\delta_n = \frac{17}{24}$ as the constant for the closed cover of $\Delta^2$ given by $A_0, A_1, A_2$. The result is shown in Fig.~\ref{fig:cover_of_Delta}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=3.8]
\coordinate (0) at (0,0);
\coordinate (alpha2) at (0,{sqrt(2)});
\coordinate (alpha1) at ({sqrt(2)*sin(120)}, {sqrt(2)*cos(120)});
\coordinate (alpha3) at ($(alpha1)+(alpha2)$);
\coordinate (mu0) at ($1/3*(alpha1) + 2/3*(alpha2)$);
\coordinate (mu1) at ($2/3*(alpha1) + 1/3*(alpha2)$);
\draw[blue!10,fill=blue!10] (0,0) -- ($(mu0)$) -- ($(mu1)$) -- cycle;
\draw[blue,fill=blue!50,opacity=0.3] (0,0) -- ($17/24*(mu0)$) -- ($17/24*(mu1)$) -- cycle;
\draw[blue,fill=red!50,opacity=0.3] ($(mu0)$) -- ($7/24*(mu0)$) -- ($17/24*(mu1) + 7/24*(mu0)$) -- cycle;
\draw[blue,fill=green!50,opacity=0.3] ($(mu1)$) -- ($7/24*(mu1)$) -- ($17/24*(mu0) + 7/24*(mu1)$) -- cycle;
\draw[blue] ($(mu0)$) -- ($(mu1)$);
\draw[blue] (0,0) -- ($(mu0)$);
\draw[blue] (0,0) -- ($(mu1)$);
\draw[fill=blue, blue, thick] (0,0) circle (0.3pt) node[below,black] {$0$};
\draw[fill=blue, blue, thick] ($(mu0)$) circle (0.3pt) node[above,black] {$2$};
\draw[fill=blue, blue, thick] ($(mu1)$) circle (0.3pt) node[below,black] {$1$};
\node[blue] at ($3/20*(mu0) + 3/20*(mu1)$) {$A_0$};
\node[red] at ($14/20*(mu0) + 3/20*(mu1)$) {$A_2$};
\node[green!70!blue!100] at ($3/20*(mu0) + 14/20*(mu1)$) {$A_1$};
\end{tikzpicture}
\caption{\label{fig:cover_of_Delta}The three closed sets $A_i$ covering $\Delta^2$.}
\end{figure}
To express the differential $d_1$ in terms of the representation rings, we first observe that we have three inclusions $\iota_{I} \colon G_{\{0,1,2\}} \to G_I$ for $I \subset \{0,1,2\}$ with $\lvert I \rvert = 2$. These induce three restriction maps
\[
r_I \colon R_F(G_I) \to R_F(G_{\{0,1,2\}}) \cong R_F(\mathbb{T}^2)\ .
\]
Let $\nu_F = F(t_1)$ for $t_1 \in R(\mathbb{T}^2) \subset R_F(\mathbb{T}^2)$ as defined above. In the next lemma we write $r_{ij}$ for $r_I$ with $I = \{i,j\}$.
\begin{lemma} \label{lem:diff_d0_d1}
The trivialisations $R_F(G_I) \cong K^G_0(C^*(\mathcal{E})(X_I))$ in the spectral sequence can be chosen in such a way that the differential $d_0$ is given by the following expression
\begin{align*}
& d_0(x_0,x_1,x_2) \\
=& (-r(x_0) + \lambda_F \cdot r(x_1), -r(x_1) + \mu_F^{-1} \cdot r(x_2), -r(x_0) + \lambda_F^{-1}\cdot r(x_2))
\end{align*}
where $x_i \in R_F(G_i) = R_F(SU(3))$ and the three components on the right hand side correspond to the subsets $I = \{0,1\}$, $\{1,2\}$ and $\{0,2\}$ respectively. Moreover, $d_1$ takes the following form
\[
d_1(y_{01},y_{12},y_{02}) = r_{01}(y_{01}) + \nu_F \cdot r_{12}(y_{12}) - r_{02}(y_{02}) \ ,
\]
where $y_{ij} \in R_F(G_{\{i,j\}})$.
\end{lemma}
\begin{proof}
As above we write $X_i$ for $X_{\{i\}}$ and similarly for $G_i$ and $V_i$. Observe that $G_{i} = G$ implies that $X_{i}$ is a one-point space for $i \in \{0,1,2\}$. We will first discuss the construction of the differential $d_0$. Restriction along the $G$-equivariant homotopy equivalence $X_{i} \to V_{i}$ induces an isomorphism
\[
\begin{tikzcd}
K_0^G(C^*(\mathcal{E})(V_{i})) \ar[r,"\cong" above] & K_0^G(C^*(\mathcal{E})(X_{i}))\ .
\end{tikzcd}
\]
The differential $d_0$ is an alternating sum of restriction homomorphisms along the inclusions of the form $V_{\{i,j\}} \to V_{k}$ with $k \in \{i,j\}$ composed with isomorphisms as shown in the following diagram:
\[
\begin{tikzcd}
R_F(G_k) \ar[r,"\cong"] & K_0^G(C^*(\mathcal{E})(X_{k})) & \ar[l,"\cong" above] K_0^G(C^*(\mathcal{E})(V_{k})) \ar[d] \\
R_F(G_{\{i,j\}}) \ar[r,"\cong"] & K_0^G(C^*(\mathcal{E})(X_{\{i,j\}})) & \ar[l,"\cong" above] K_0^G(C^*(\mathcal{E})(V_{\{i,j\}}))
\end{tikzcd}
\]
We will fix the isomorphisms on the left hand side by choosing an equivariant trivialisation of $C^*(\mathcal{E})(V_k)$ via Morita equivalences given by Lemma~\ref{lem:equiv_trivialisation}. Let
\begin{align*}
\omega_0 = -1 \ ,\qquad
\omega_1 = \exp\left(2\pi i \tfrac{1}{6}\right)\ ,\qquad
\omega_2 = \exp\left(2\pi i \tfrac{5}{6}\right)
\end{align*}
and define $\sigma_k \colon V_k \to Y$ by $\sigma_k(g) = (g,\omega_k)$. We claim that this is well-defined and will show this for $k = 0$. The other cases follow similarly. To see that $\omega_0 \notin \EV{g}$ for all $g \in V_0$ it suffices to prove that all coordinates $\kappa_i$ of $q(g)$ are different from $\pm \frac{1}{2}$ for all $g \in V_0$. By \eqref{eqn:kappa} we have
\[
\kappa_0 = \frac{2}{3}s + \frac{1}{3}t = \frac{1}{2} \qquad \Leftrightarrow \qquad s = \frac{3 - 2t}{4}
\]
and the condition $s+t \leq 1$ implies that $0 \leq t \leq \frac{1}{2}$. Likewise,
\[
\kappa_2 = -\frac{1}{3}s' - \frac{2}{3}t' = -\frac{1}{2} \qquad \Leftrightarrow \qquad s' = \frac{3 - 4t'}{2}
\]
and $s'+t' \leq 1$, $s' \geq 0$ yield the constraints $\frac{1}{2} \leq t' \leq \frac{3}{4}$. Note that the coordinate $\kappa_1 = \frac{t-s}{3}$ always lies in $[-\frac{1}{3},\frac{1}{3}]$ and is therefore never equal to $\pm \frac{1}{2}$. Hence, the matrices with at least one eigenvalue equal to $-1$ are parametrised by the two line segments described above and shown in Fig.~\ref{fig:triv_over_V0}. Our choice of $\delta_n$ was made in such a way that the resulting $A_0$ avoids this set, proving our claim for $V_0$. The situation for $V_1$ and $V_2$ is similar: Fig.~\ref{fig:triv_over_V0} just has to be rotated accordingly.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=3.2]
\coordinate (0) at (0,0);
\coordinate (alpha2) at (0,{sqrt(2)});
\coordinate (alpha1) at ({sqrt(2)*sin(120)}, {sqrt(2)*cos(120)});
\coordinate (alpha3) at ($(alpha1)+(alpha2)$);
\coordinate (mu0) at ($1/3*(alpha1) + 2/3*(alpha2)$);
\coordinate (mu1) at ($2/3*(alpha1) + 1/3*(alpha2)$);
\draw[blue!10,fill=blue!10] (0,0) -- ($(mu0)$) -- ($(mu1)$) -- cycle;
\draw[blue,fill=blue!40,opacity=0.3] (0,0) -- ($17/24*(mu0)$) -- ($17/24*(mu1)$) -- cycle;
\draw[blue] ($(mu0)$) -- ($(mu1)$);
\draw[blue] (0,0) -- ($(mu0)$);
\draw[blue] (0,0) -- ($(mu1)$);
\draw[fill=blue, blue, thick] (0,0) circle (0.3pt) node[below,black] {$0$};
\draw[fill=blue, blue, thick] ($(mu0)$) circle (0.3pt) node[above,black] {$2$};
\draw[fill=blue, blue, thick] ($(mu1)$) circle (0.3pt) node[below,black] {$1$};
\draw[red, very thick] ($3/4*(mu1)$) -- ($1/2*(mu0) + 1/2*(mu1)$);
\draw[red, very thick] ($3/4*(mu0)$) -- ($1/2*(mu0) + 1/2*(mu1)$);
\node[blue] at ($1/5*(mu0) + 1/5*(mu1)$) {$A_0$};
\end{tikzpicture}
\caption{\label{fig:triv_over_V0}The red lines correspond to $SU(3)$ elements with at least one eigenvalue equal to $-1$. As can be seen from this picture the set $A_0$ avoids those two lines.}
\end{figure}
By Lem.~\ref{lem:equiv_trivialisation} the section $\sigma_k$ constructed above gives an equivariant Morita equivalence $\mathsf{X}_{V_k}$ between $C^*(\mathcal{E})(V_k)$ and $C(V_k, \mathsf{M}_\mathnormal{F}^{\infty})$. Let $I \subset \{0,1,2\}$ and denote the minimal element of $I$ by $i_0$. The restriction of $\mathsf{X}_{V_{i_0}}$ to $V_I$ is a Morita equivalence between $C^*(\mathcal{E})(V_I)$ and $C(V_I,\mathsf{M}_\mathnormal{F}^{\infty})$ and the trivialisation $R_F(G_I) \cong K_0^G(C^*(\mathcal{E})(X_I))$ is induced by restricting further to $X_I \subset V_I$. The differential $d_0$ is a signed sum of components of the form
\[
d_k^I \colon K^G_0(\mathsf{M}_\mathnormal{F}^{\infty}) \to K_0^G(C(G/G_I,\mathsf{M}_\mathnormal{F}^{\infty}))
\]
with $k \in I$. Just as in Lem.~\ref{lem:differential}, the map $d_k^I$ fits into the following commutative diagram:
\[
\begin{tikzcd}[column sep=2cm]
K_0^G(C^*(\mathcal{E})(V_k)) \ar[r,"\text{res}"] \ar[d,"\cong" left,"\mathsf{X}_{V_k}" right] & K_0^G(C^*(\mathcal{E})(V_I)) \ar[d,"\cong" left, "\mathsf{X}_{V_{i_0}}" right] \\
K_0^G(C(V_k,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[d,"\cong" left] \ar[r,"\mathsf{X}_{V_k}^{\text{op}} \otimes \mathsf{X}_{V_{i_0}}"] & K_0^G(C(V_I,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[d,"\cong" left] \\
K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \ar[r, "d_k^{I}"] & K_0^{G}(C(G/G_I,\mathsf{M}_\mathnormal{F}^{\infty}))
\end{tikzcd}
\]
From this we see that if $I = \{i,j\}$ with $i < j$ and $k = i$, then after identifying the domain of $d_k^I$ with $R_F(G)$ and the codomain with $R_F(G_I)$ the map agrees with the restriction homomorphism. Let $I = \{i,j\}$ with $i < j$ and define $\sigma_I \colon V_{I} \to Y$ by $\sigma_I(g) = (g,\omega_j, \omega_i)$. In this situation we have
\[
\mathsf{X}_{V_j}^{\text{op}} \otimes \mathsf{X}_{V_{i}} \cong C(V_I, \sigma_I^*\mathcal{E})\ .
\]
Let $E \to V_I$ be the vector bundle with fibre over $g \in V_I$ given by
\[
E_g = F(\Eig{g}{\lambda})
\]
where $\lambda \in \EV{g}$ is the eigenvalue of $g$ between $\omega_j$ and $\omega_i$. Then $\sigma_I^*\mathcal{E} \to V_I$ is either of the form $E \otimes \mathsf{M}_\mathnormal{F}^{\infty}$ if $\omega_j < \omega_i$ or $(E \otimes \mathsf{M}_\mathnormal{F}^{\infty})^{\text{op}}$ if $\omega_i < \omega_j$. Note that $\left.E\right|_{X_I} \cong F(Q)$, where $Q \to X_I$ is the vector bundle associated to the principal $G_I$-bundle $G \to G/G_I$ either using the inverse determinant representation~$d^{-1}$ or the standard representation $s$ depending on whether $\dim(\Eig{g}{\lambda}) = 1$ or $2$ respectively. Using the identifications $K_0^G(\mathsf{M}_\mathnormal{F}^{\infty}) \cong R_F(G)$ and $K_0^G(C(X_I,\mathsf{M}_\mathnormal{F}^{\infty})) \cong R_F(G_I)$, the map $d_j^I$ therefore corresponds to a factor of the form $F(d^{-1})^{\pm 1}$ or $F(s)^{\pm 1}$ times the restriction homomorphism. The resulting factors are listed in Fig.~\ref{tab:factor_table}.
\begin{figure}[ht]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|c|c|c|c|}
\hline
$I$ & order & eigenvalues & representation & factor \\
\hline
$\{0,1\}$ & $\omega_1 < \omega_0$ & $\color{red} e(\frac{2}{6})$, $e(-\frac{1}{6})$, $e(-\frac{1}{6})$ & $d^{-1}$ & $F(d^{-1}) = \lambda_F$ \\
$\{1,2\}$ & $\omega_1 < \omega_2$ & $\color{red} e(\frac{1}{2})$, $1$, $\color{red} e(-\frac{1}{2})$ & $s$ & $F(s)^{-1} = \mu_F^{-1}$ \\
$\{0,2\}$ & $\omega_0 < \omega_2$ & $e(\frac{1}{6})$, $e(\frac{1}{6})$, $\color{red} e(-\frac{2}{6})$ & $d^{-1}$ & $F(d^{-1})^{-1} = \lambda_F^{-1}$ \\
\hline
\end{tabular}
\caption{\label{tab:factor_table}The table shows the eigenvalue $\lambda$ of $g \in X_I$ between $\omega_i$ and $\omega_j$ in red with $e(\varphi) = e^{2\pi i \varphi}$ and the resulting factor in the right hand column.}
\end{figure}
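The bookkeeping in Fig.~\ref{tab:factor_table} can be double-checked with exact rational arithmetic. The sketch below (an illustration, not part of the proof) encodes each row by the phases, in units of full turns, of $\omega_i$, $\omega_j$ and the three eigenvalues, and confirms that the phases sum to an integer (so that $\det(g) = 1$) and that each red entry lies strictly between the two $\omega$'s on the circle.

```python
from fractions import Fraction as Fr

# rows of the table: (phase of smaller omega, phase of larger omega,
# eigenvalue phases, indices of the red entries); e(phi) = exp(2*pi*i*phi)
rows = [
    (Fr(1, 6), Fr(1, 2), [Fr(2, 6), Fr(-1, 6), Fr(-1, 6)], [0]),  # I = {0,1}
    (Fr(1, 6), Fr(5, 6), [Fr(1, 2), Fr(0), Fr(-1, 2)], [0, 2]),   # I = {1,2}
    (Fr(1, 2), Fr(5, 6), [Fr(1, 6), Fr(1, 6), Fr(-2, 6)], [2]),   # I = {0,2}
]

for lo, hi, eigs, red in rows:
    # determinant 1: the eigenvalue phases sum to an integer
    assert sum(eigs) % 1 == 0
    # each red eigenvalue lies strictly between the two omegas
    for k in red:
        assert lo < eigs[k] % 1 < hi
```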
Together with the sign convention for the exact sequence this explains the form of $d_0$.
The same reasoning can be used for $d_1$. Let $I \subset \{0,1,2\}$ be a subset with $\lvert I \rvert = 2$ and let $J = \{0,1,2\}$. The differential $d_1$ decomposes into a sum
\[
d_1(x_{01}, x_{12}, x_{02}) = d^{\{0,1\}}(x_{01}) + d^{\{1,2\}}(x_{12}) - d^{\{0,2\}}(x_{02})
\]
with three maps $d^I \colon K_0^G(C(X_I,\mathsf{M}_\mathnormal{F}^{\infty})) \to K_0^G(C(X_J,\mathsf{M}_\mathnormal{F}^{\infty}))$ that fit into the following commutative diagram:
\[
\begin{tikzcd}[column sep=2cm]
K_0^G(C(V_I,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[d,"\cong" left] \ar[r,"\mathsf{X}_{V_I}^{\text{op}} \otimes \mathsf{X}_{V_{J}}"] & K_0^G(C(V_J,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[d,"\cong" left] \\
K_0^G(C(X_I,\mathsf{M}_\mathnormal{F}^{\infty})) \ar[r, "d^I"] & K_0^{G}(C(X_J,\mathsf{M}_\mathnormal{F}^{\infty}))
\end{tikzcd}
\]
Just as above we see that $d^I$ agrees with the restriction homomorphism $r_I$ in the cases $I = \{0,1\}$ and $I = \{0,2\}$, since $\mathsf{X}_{V_I}^{\text{op}} \otimes \mathsf{X}_{V_{J}}$ is trivial in these cases. The only remaining case is $I = \{1,2\}$, where we have
\[
\mathsf{X}_{V_I}^{\text{op}} \otimes \mathsf{X}_{V_{J}} \cong \mathsf{X}_{V_1}^{\text{op}} \otimes \mathsf{X}_{V_{0}} \cong C(V_J, \sigma_{\{0,1\}}^*\mathcal{E})
\]
The eigenvalues for all $g \in X_J$ are $e(\frac{1}{3})$, $1$, $e(-\frac{1}{3})$ and only the first one lies between $\omega_1$ and $\omega_0$. Let $\lambda = e(\frac{1}{3})$ and let $E \to X_J$ be the vector bundle with fibres given by
\[
E_g = F(\Eig{g}{\lambda})\ .
\]
It is isomorphic to $F(Q)$, where $Q \to X_J$ is the vector bundle associated to the principal $G_J$-bundle $G \to G/G_J$ via the representation $t_1$. Thus, by the same argument as before the map $d^{\{1,2\}}$ agrees with $F(t_1)$ times the restriction homomorphism $r_{12}$.
\end{proof}
\subsubsection{Restriction to maximal torus} As above, denote by $\mathfrak{t}$ the Lie algebra of the maximal torus $\mathbb{T}^2 \subset SU(3)$. In this section we will prove that the chain complex in Lemma~\ref{lem:diff_d0_d1} computes the equivariant (Bredon) cohomology of $\mathfrak{t}$ with respect to an extended Weyl group action and a twisted coefficient system. This approach is reminiscent of the method used in \cite{AdemCantareroGomez-TwistedK:2018} to compute the (rational) twisted equivariant $K$-theory of actions with isotropy of maximal rank and classical twist. We will focus here on the action of $G = SU(3)$ on itself by conjugation with a non-classical twist. An extension of this approach to $G = SU(n)$ will be part of upcoming work.
Let $W = S_3$ be the Weyl group of $SU(3)$. Our identification of $\mathbb{T}^2$ with the diagonal matrices induces a corresponding isomorphism
\[
\mathfrak{t} \cong \{ (h_1,h_2,h_3) \in \mathbb{R}^3 \ |\ h_1 + h_2 + h_3 = 0 \} \ .
\]
The fundamental group $\pi_1(\mathbb{T}^2,e)$ agrees with the lattice $\Lambda$ in $\mathfrak{t}$ obtained as the kernel of the exponential map. We will identify the two, which gives
\begin{equation} \label{eqn:lattice}
\pi_1(\mathbb{T}^2,e) = \Lambda = \{ (k_1,k_2,k_3) \in \mathbb{Z}^3 \ |\ k_1 + k_2 + k_3 = 0\}
\end{equation}
The Weyl group acts on $\mathfrak{t}$ and $\Lambda$ by permuting the coordinates and we define
\[
\widetilde{W} = \pi_1(\mathbb{T}^2) \rtimes W\ .
\]
Note that $W$ also acts on $\mathbb{Z}^3$ in the same way. Let
\(
\widehat{W} = \mathbb{Z}^3 \rtimes W
\)
and observe that $\widetilde{W} \subset \widehat{W}$ as a normal subgroup. Given an exponential functor $F$ we obtain a group homomorphism
\[
\varphi \colon \pi_1(\mathbb{T}^2,e) \to GL_1(R_F(\mathbb{T}^2)) \ , \ \varphi(k_1, k_2, k_3) = F(t_1)^{k_1} \cdot F(t_2)^{k_2} \cdot F(t_3)^{k_3}\ .
\]
If we define $F(-t_i) = F(t_i)^{-1}$ we can rewrite the right hand side as
\[
\varphi(k_1,k_2,k_3) = F(k_1t_1 + k_2t_2 + k_3t_3) \ .
\]
Combining $\varphi$ with the permutation action of $W$ on $R_F(\mathbb{T}^2)$ results in an action of $\widetilde{W}$ on $R_F(\mathbb{T}^2)$. Just as in \cite{AdemCantareroGomez-TwistedK:2018} this gives rise to local coefficient systems $\mathcal{R}$ and $\mathcal{R}_{\mathbb{Q}}$ as follows
\[
\mathcal{R}(\widetilde{W}/H) = R_F(\mathbb{T}^2)^H \qquad , \qquad \mathcal{R}_{\mathbb{Q}}(\widetilde{W}/H) = R_F(\mathbb{T}^2)^H \otimes \mathbb{Q}\ .
\]
The simplex $\Delta^2 \subset \mathfrak{t}$ is a fundamental domain for the action of $\widetilde{W}$ on $\mathfrak{t}$ and turns it into a $\widetilde{W}$-CW-complex, in which the $k$-cells are labelled by subsets $I \subset \{0,1,2\}$ with $\lvert I \rvert = k+1$. We have three $0$-cells, three $1$-cells and one $2$-cell. Let $\widetilde{W}_I$ be the stabiliser of $\xi_I$. Likewise let $W_I \subset W$ be the stabiliser of $\exp(2\pi i \xi_I)$. The restriction maps $R_F(G_I) \to R_F(\mathbb{T}^2)$ induce ring isomorphisms
\[
r_I \colon R_F(G_I) \to R_F(\mathbb{T}^2)^{W_I}\ .
\]
As above let $q \colon G \to \Delta^2$ be the quotient map that parametrises conjugacy classes. Let $\hat{q} = \left.q\right|_{\mathbb{T}^2} \circ q_{\mathfrak{t}}$, where $q_{\mathfrak{t}} \colon \mathfrak{t} \to \mathbb{T}^2$ is the universal covering. Let $B_I = \hat{q}^{-1}(A_I) \subset \mathfrak{t}$. Note that $\{B_0, B_1, B_2\}$ is a $\widetilde{W}$-equivariant cover of $\mathfrak{t}$ as shown in Fig.~\ref{fig:cover_of_t}. It has the property that the inclusion map $\widetilde{W} \cdot \xi_I \to B_I$ is an equivariant homotopy equivalence.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1.5]
\coordinate (0) at (0,0);
\coordinate (alpha2) at (0,{sqrt(2)});
\coordinate (alpha1) at ({sqrt(2)*sin(120)}, {sqrt(2)*cos(120)});
\coordinate (alpha3) at ($(alpha1)+(alpha2)$);
\coordinate (mu0) at ($1/3*(alpha1) + 2/3*(alpha2)$);
\coordinate (mu1) at ($2/3*(alpha1) + 1/3*(alpha2)$);
\clip (-1.6,-1.4) rectangle + (3.2,2.8);
\foreach \k in {-1,0,1}
{
\coordinate (origin) at ($+\k*(mu0) + \k*(mu1)$);
\coordinate (mu0) at ($1/3*(alpha1) + 2/3*(alpha2)$);
\coordinate (mu1) at ($2/3*(alpha1) + 1/3*(alpha2)$);
\foreach \i in {0,60,...,300}
{
\draw[blue!10,fill=blue!10] let \p0 = ($(origin)$), \p1 = ($(mu0)+(origin)$), \p2 = ($(mu1)+(origin)$) in
[rotate around={\i:(origin)}] (\p0) -- (\p1) -- (\p2) -- cycle;
\draw[blue,fill=blue!50,opacity=0.3] let \p0 = ($(origin)$), \p1 = ($17/24*(mu0)+(origin)$), \p2 = ($17/24*(mu1)+(origin)$) in
[rotate around={\i:(origin)}] (\p0) -- (\p1) -- (\p2) -- cycle;
}
\foreach \i in {0,120,240}
{
\draw[blue,fill=red!50,opacity=0.3] let \p1 = ($(mu0)+(origin)$), \p2 = ($7/24*(mu0)+(origin)$), \p3 = ($17/24*(mu1) + 7/24*(mu0)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
\draw[blue,fill=green!50,opacity=0.3] let \p1 = ($(mu1)+(origin)$), \p2 = ($7/24*(mu1)+(origin)$), \p3 = ($17/24*(mu0) + 7/24*(mu1)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
}
\foreach \i in {60,180,300}
{
\draw[blue,fill=green!50,opacity=0.3] let \p1 = ($(mu0)+(origin)$), \p2 = ($7/24*(mu0)+(origin)$), \p3 = ($17/24*(mu1) + 7/24*(mu0)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
\draw[blue,fill=red!50,opacity=0.3] let \p1 = ($(mu1)+(origin)$), \p2 = ($7/24*(mu1)+(origin)$), \p3 = ($17/24*(mu0) + 7/24*(mu1)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
}
}
\foreach \k in {-1,1}
{
\coordinate (origin) at ($-\k*(mu0) + 2*\k*(mu1)$);
\coordinate (mu0) at ($1/3*(alpha1) + 2/3*(alpha2)$);
\coordinate (mu1) at ($2/3*(alpha1) + 1/3*(alpha2)$);
\foreach \i in {0,60,...,300}
{
\draw[blue!10,fill=blue!10] let \p0 = ($(origin)$), \p1 = ($(mu0)+(origin)$), \p2 = ($(mu1)+(origin)$) in
[rotate around={\i:(origin)}] (\p0) -- (\p1) -- (\p2) -- cycle;
\draw[blue,fill=blue!50,opacity=0.3] let \p0 = ($(origin)$), \p1 = ($17/24*(mu0)+(origin)$), \p2 = ($17/24*(mu1)+(origin)$) in
[rotate around={\i:(origin)}] (\p0) -- (\p1) -- (\p2) -- cycle;
}
\foreach \i in {0,120,240}
{
\draw[blue,fill=red!50,opacity=0.3] let \p1 = ($(mu0)+(origin)$), \p2 = ($7/24*(mu0)+(origin)$), \p3 = ($17/24*(mu1) + 7/24*(mu0)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
\draw[blue,fill=green!50,opacity=0.3] let \p1 = ($(mu1)+(origin)$), \p2 = ($7/24*(mu1)+(origin)$), \p3 = ($17/24*(mu0) + 7/24*(mu1)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
}
\foreach \i in {60,180,300}
{
\draw[blue,fill=green!50,opacity=0.3] let \p1 = ($(mu0)+(origin)$), \p2 = ($7/24*(mu0)+(origin)$), \p3 = ($17/24*(mu1) + 7/24*(mu0)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
\draw[blue,fill=red!50,opacity=0.3] let \p1 = ($(mu1)+(origin)$), \p2 = ($7/24*(mu1)+(origin)$), \p3 = ($17/24*(mu0) + 7/24*(mu1)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
}
}
\foreach \k in {-1,1}
{
\coordinate (origin) at ($-2*\k*(mu0) + \k*(mu1)$);
\coordinate (mu0) at ($1/3*(alpha1) + 2/3*(alpha2)$);
\coordinate (mu1) at ($2/3*(alpha1) + 1/3*(alpha2)$);
\foreach \i in {0,60,...,300}
{
\draw[blue!10,fill=blue!10] let \p0 = ($(origin)$), \p1 = ($(mu0)+(origin)$), \p2 = ($(mu1)+(origin)$) in
[rotate around={\i:(origin)}] (\p0) -- (\p1) -- (\p2) -- cycle;
\draw[blue,fill=blue!50,opacity=0.3] let \p0 = ($(origin)$), \p1 = ($17/24*(mu0)+(origin)$), \p2 = ($17/24*(mu1)+(origin)$) in
[rotate around={\i:(origin)}] (\p0) -- (\p1) -- (\p2) -- cycle;
}
\foreach \i in {0,120,240}
{
\draw[blue,fill=red!50,opacity=0.3] let \p1 = ($(mu0)+(origin)$), \p2 = ($7/24*(mu0)+(origin)$), \p3 = ($17/24*(mu1) + 7/24*(mu0)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
\draw[blue,fill=green!50,opacity=0.3] let \p1 = ($(mu1)+(origin)$), \p2 = ($7/24*(mu1)+(origin)$), \p3 = ($17/24*(mu0) + 7/24*(mu1)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
}
\foreach \i in {60,180,300}
{
\draw[blue,fill=green!50,opacity=0.3] let \p1 = ($(mu0)+(origin)$), \p2 = ($7/24*(mu0)+(origin)$), \p3 = ($17/24*(mu1) + 7/24*(mu0)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
\draw[blue,fill=red!50,opacity=0.3] let \p1 = ($(mu1)+(origin)$), \p2 = ($7/24*(mu1)+(origin)$), \p3 = ($17/24*(mu0) + 7/24*(mu1)+(origin)$) in [rotate around={\i:(origin)}] (\p1) -- (\p2) -- (\p3) -- cycle;
}
}
\end{tikzpicture}
\caption{\label{fig:cover_of_t}The cover of $\mathfrak{t}$ induced by the cover of $\Delta^2$.}
\end{figure}
For any subset $I \subset \{0,1,2\}$ the Bredon cohomology $H^k_{\widetilde{W}}(B_I; \mathcal{R})$ is only non-zero in degree $k = 0$, where we have a natural isomorphism
\[
H^0_{\widetilde{W}}(B_I; \mathcal{R}) \cong R_F(\mathbb{T}^2)^{\widetilde{W}_I}\ .
\]
For $J \subset I$ the restriction homomorphism $H^0_{\widetilde{W}}(B_J; \mathcal{R}) \to H^0_{\widetilde{W}}(B_I; \mathcal{R})$ translates into the natural inclusion $R_F(\mathbb{T}^2)^{\widetilde{W}_J} \subset R_F(\mathbb{T}^2)^{\widetilde{W}_I}$. The sum over all $H^q_{\widetilde{W}}(B_I; \mathcal{R})$ with $\lvert I \rvert = p+1$ forms the $E^1$-page of a spectral sequence that converges to $H^{p+q}_{\widetilde{W}}(\mathfrak{t}; \mathcal{R})$. By our above considerations this $E^1$-page boils down to the chain complex
\[
C^k_{\widetilde{W}}(\mathfrak{t}; \mathcal{R}) = \bigoplus_{\lvert I \rvert = k+1} R_F(\mathbb{T}^2)^{\widetilde{W}_I}
\]
with the differentials $d_k^{\mathrm{cell}} \colon C^k_{\widetilde{W}}(\mathfrak{t}; \mathcal{R}) \to C^{k+1}_{\widetilde{W}}(\mathfrak{t}; \mathcal{R})$ given by alternating sums of inclusion homomorphisms. We can identify $W$ with the subgroup of $\widetilde{W}$ consisting of elements of the form $(0,w) \in \pi_1(\mathbb{T}^2,e) \rtimes W$. Observe that $W_i = W = \widetilde{W}_0$ for $i \in \{0,1,2\}$, $W_{\{0,2\}} = \widetilde{W}_{\{0,2\}}$, $W_{\{0,1\}} = \widetilde{W}_{\{0,1\}}$ and we have group isomorphisms
\begin{align*}
\widetilde{W}_1 \to W_1 \qquad &,& \qquad x \mapsto ((-1,0,0),e_W) \cdot x \cdot ((1,0,0),e_W) \ ,\\
\widetilde{W}_2 \to W_2 \qquad &,& \qquad x \mapsto ((0,0,1),e_W) \cdot x \cdot ((0,0,-1),e_W) \ .
\end{align*}
Here, $e_W$ denotes the neutral element of $W$ and we used the conjugation action of $\widehat{W}$ on $\widetilde{W}$. The first isomorphism restricts to $\widetilde{W}_{\{1,2\}} \to W_{\{1,2\}}$. These identifications induce corresponding isomorphisms of the fixed point rings
\begin{align*}
\widetilde{r}_1 &\colon R_F(\mathbb{T}^2)^{W_1} \to R_F(\mathbb{T}^2)^{\widetilde{W}_1} \quad , \quad p \mapsto F(t_1) \cdot p \\
\widetilde{r}_2 &\colon R_F(\mathbb{T}^2)^{W_2} \to R_F(\mathbb{T}^2)^{\widetilde{W}_2} \quad , \quad p \mapsto F(t_3)^{-1} \cdot p \\
\widetilde{r}_{\{1,2\}} &\colon R_F(\mathbb{T}^2)^{W_{\{1,2\}}} \to R_F(\mathbb{T}^2)^{\widetilde{W}_{\{1,2\}}} \quad , \quad p \mapsto F(t_1) \cdot p
\end{align*}
Define $\widetilde{r}_I \colon R_F(\mathbb{T}^2)^{W_I} \to R_F(\mathbb{T}^2)^{\widetilde{W}_I}$ for all other $I \subset \{0,1,2\}$ to be the identity. Let $\hat{r}_I = \widetilde{r}_I \circ r_I \colon R_F(G_I) \to R_F(\mathbb{T}^2)^{\widetilde{W}_I}$.
\begin{lemma} \label{lem:coh_with_coeff}
The isomorphisms $\hat{r}_I$ fit into the following commutative diagram:
\[
\begin{tikzcd}
\bigoplus_{\lvert I \rvert = 1} R_F(G_I) \ar[r,"d_0"] \ar[d,"\bigoplus_{\lvert I \rvert = 1} \hat{r}_I" right, "\cong" left] & \bigoplus_{\lvert I \rvert = 2} R_F(G_I) \ar[r,"d_1"] \ar[d,"\bigoplus_{\lvert I \rvert = 2} \hat{r}_I" right, "\cong" left] & R_F(\mathbb{T}^2) \ar[d,"\hat{r}_{\{0,1,2\}} = \id{}" right, "\cong" left] \\
C_{\widetilde{W}}^0(\mathfrak{t};\mathcal{R}) \ar[r,"d_0^{\mathrm{cell}}" below] & C_{\widetilde{W}}^1(\mathfrak{t};\mathcal{R}) \ar[r,"d_1^{\mathrm{cell}}" below] & C_{\widetilde{W}}^2(\mathfrak{t};\mathcal{R})
\end{tikzcd}
\]
In particular, the chain complex from Lemma~\ref{lem:diff_d0_d1} computes the $\widetilde{W}$-equivariant Bredon cohomology $H^*_{\widetilde{W}}(\mathfrak{t}; \mathcal{R})$ of $\mathfrak{t}$ with coefficients in $\mathcal{R}$.
\end{lemma}
\begin{proof}
Let $r^{(k)}$ (respectively $\widetilde{r}^{(k)}$) for $k \in \{1,2,3\}$ be the sum of the maps $r_I$ (respectively $\widetilde{r}_I$) over all $I \subset \{0,1,2\}$ with $\lvert I \rvert = k$. Consider
\begin{align*}
\widehat{d}_0 &\colon \bigoplus_{\lvert I \rvert = 1} R_F(\mathbb{T}^2)^{W_I} \to \bigoplus_{\lvert I \rvert = 2} R_F(\mathbb{T}^2)^{W_I} \ ,\\
\widehat{d}_1 &\colon \bigoplus_{\lvert I \rvert = 2} R_F(\mathbb{T}^2)^{W_I} \to R_F(\mathbb{T}^2)
\end{align*}
with $\widehat{d}_0(x_0,x_1,x_2) = (-x_0 + F(t_1)x_1, -x_1 + F(t_1 + t_3)^{-1}x_2, -x_0 + F(t_3)^{-1}x_2)$ and $\widehat{d}_1(y_{01}, y_{12}, y_{02}) = y_{01} + F(t_1)y_{12} - y_{02}$. Then we have $r^{(2)} \circ d_0 = \widehat{d}_0 \circ r^{(1)}$ and $\widehat{d}_1 \circ r^{(2)} = d_1$. The statement follows from the following two observations:
\begin{align*}
& (\widetilde{r}^{(2)} \circ \widehat{d}_0)(x_0,x_1,x_2) \\
=\ & (-x_0 + F(t_1)x_1, -F(t_1)x_1 + F(t_3)^{-1}x_2, -x_0 + F(t_3)^{-1}x_2) \\
=\ & (-\widetilde{r}_0(x_0) + \widetilde{r}_1(x_1), -\widetilde{r}_1(x_1) + \widetilde{r}_2(x_2), -\widetilde{r}_0(x_0) + \widetilde{r}_2(x_2)) \\
=\ & (d_0^{\mathrm{cell}} \circ \widetilde{r}^{(1)})(x_0,x_1,x_2)
\end{align*}
and
\[
(d_1^{\mathrm{cell}} \circ \widetilde{r}^{(2)})(y_{01},y_{12},y_{02}) = y_{01} + F(t_1)y_{12} - y_{02} = \widehat{d}_1(y_{01},y_{12},y_{02})\ . \hfill \qedhere
\]
\end{proof}
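As a quick consistency check (outside the proof), the maps $\widehat{d}_0$ and $\widehat{d}_1$ compose to zero, as they must for a chain complex. The sketch below treats $F(t_1)$ and $F(t_3)$ as formal units $f_1, f_3$ and assumes the exponential property $F(t_1+t_3) = F(t_1)F(t_3)$, so that $F(t_1+t_3)^{-1}$ becomes $1/(f_1 f_3)$.

```python
import sympy as sp

# f1, f3 stand for the units F(t_1), F(t_3); by exponentiality
# F(t_1 + t_3)^{-1} corresponds to 1/(f1*f3)
f1, f3 = sp.symbols('f1 f3', nonzero=True)
x0, x1, x2 = sp.symbols('x0 x1 x2')

# d0_hat(x0, x1, x2) as in the proof
d0 = (-x0 + f1*x1, -x1 + x2/(f1*f3), -x0 + x2/f3)
# d1_hat(y01, y12, y02) = y01 + F(t_1)*y12 - y02
d1_of_d0 = d0[0] + f1*d0[1] - d0[2]
assert sp.simplify(d1_of_d0) == 0  # the maps form a complex
```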
The above lemma reduces the problem of computing the equivariant higher twisted $K$-theory of $SU(3)$ to the computation of Bredon cohomology groups with local coefficients. We will determine these groups after rationalising the coefficients, i.e.\ we compute $H^*_{\widetilde{W}}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}})$. The inclusion $\mathcal{R} \to \mathcal{R}_{\mathbb{Q}}$ induces a homomorphism
\begin{equation} \label{eqn:rational_coh}
H^*_{\widetilde{W}}(\mathfrak{t}; \mathcal{R}) \to H^*_{\widetilde{W}}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}})\ .
\end{equation}
Even though \cite[Thm.~3.11]{AdemCantareroGomez-TwistedK:2018} is only stated for coefficient systems $\mathcal{R}_{\mathbb{Q}}$ where the module structure is induced by a homomorphism $\pi_1(\mathbb{T}^2) \to \hom(\mathbb{T}^2, S^1)$, the proof works verbatim in our situation, where $R_F(\mathbb{T}^2) \otimes \mathbb{Q}$ carries a $\pi_1(\mathbb{T}^2)$-action induced by $\pi_1(\mathbb{T}^2) \to GL_1(R_F(\mathbb{T}^2) \otimes \mathbb{Q})$. Hence, we obtain
\begin{equation}
H^*_{\widetilde{W}}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}}) \cong H^*_{\pi_1(\mathbb{T}^2)}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}})^W\ .
\end{equation}
\begin{lemma} \label{lem:reg_seq}
Let $F$ be an exponential functor with $\deg(F(t_1)) > 0$. Then $(F(t_2) - F(t_1), F(t_3) - F(t_2))$ is a regular sequence in $R_F(\mathbb{T}^2) \otimes \mathbb{Q}$.
\end{lemma}
\begin{proof}
Let $R_F = R_F(\mathbb{T}^2) \otimes \mathbb{Q}$, $R = R(\mathbb{T}^2) \otimes \mathbb{Q} \subset R_F$ and let $q_F = F(\rho) = F(t_1 + t_2 + t_3)$. Let $I_{jk} \subset R_F$ be the ideal generated by $F(t_k) - F(t_j)$. We have to show that multiplication by $F(t_3) - F(t_2)$ is injective on $R_F/I_{12}$. On this quotient $F(t_3) - F(t_2)$ agrees with $F(t_3) - F(t_1)$. Suppose we have elements $p,q \in R_F$ with the property that $p$ is not divisible by $F(t_2) - F(t_1)$ and
\begin{equation} \label{eqn:div}
p\cdot (F(t_3) - F(t_1)) = q \cdot (F(t_2) - F(t_1))\ .
\end{equation}
Multiplying both sides by an appropriate power of $q_F$ we may assume that $p, q \in R$. Now we can use the relation $t_3 = (t_1t_2)^{-1}$ to express both sides of~\eqref{eqn:div} in terms of $t_1, t_2, t_1^{-1}, t_2^{-1}$. Since we may multiply both sides by $t_1^kt_2^l$ for appropriate $k,l \in \mathbb{N}_0$, we can without loss of generality assume that $p, q \in \mathbb{Q}[t_1,t_2]$. Let
\[
F(t_1) = \sum_{k=0}^m a_kt_1^k
\]
with $a_m \neq 0$. We have $\deg(F(t_1)) = \deg(F(t_2)) = m$ and by our assumption $m > 0$. However, note that $\deg(F(t_3)) \leq 0$. The highest order term of $F(t_2)$ can be expressed as follows
\[
a_mt_2^m = a_mt_1^m - \sum_{k=1}^{m-1} a_k(t_2^k - t_1^k) + F(t_2) - F(t_1)\ .
\]
Since we are working over $\mathbb{Q}$, we can therefore assume that $p$ is a linear combination of terms $t_1^kt_2^l$ with $l < m$ by adapting $q$ accordingly. Suppose that $p$ has total degree $r$ and let $p_r$ be the corresponding homogeneous part. Comparing the terms of highest degree in \eqref{eqn:div} we obtain
\[
-p_r t_1^m = q_r (t_2^m - t_1^m)\ ,
\]
where $q_r$ is the homogeneous part of $q$ of degree $r$. Since the left hand side contains no summands $t_1^kt_2^l$ with $l \geq m$, this equation implies $q_r = 0$, therefore $p_r = 0$ and hence $p = 0$. This is a contradiction to our initial divisibility assumption. Hence $p$ must be divisible by $F(t_2) - F(t_1)$ proving that multiplication by $F(t_3) - F(t_2)$ is injective on $R_F/I_{12}$.
\end{proof}
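The rewriting of the highest order term used in the proof can be checked symbolically for a generic polynomial $F$ of small degree; a sympy sketch (illustration only, with generic coefficients $a_0, \dots, a_4$):

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
m = 4
a = sp.symbols('a0:5')  # generic coefficients a_0, ..., a_4

def F(t):
    return sum(a[k]*t**k for k in range(m + 1))

# a_m t_2^m = a_m t_1^m - sum_{k=1}^{m-1} a_k (t_2^k - t_1^k) + F(t_2) - F(t_1)
lhs = a[m]*t2**m
rhs = a[m]*t1**m - sum(a[k]*(t2**k - t1**k) for k in range(1, m)) + F(t2) - F(t1)
assert sp.expand(lhs - rhs) == 0
```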
\begin{theorem} \label{thm:Koszul}
Let $F$ be an exponential functor with $\deg(F(t_1)) > 0$. Then $H^k_{\pi_1(\mathbb{T}^2)}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}}) = 0$ for $k \neq 2$ and
\[
H^2_{\pi_1(\mathbb{T}^2)}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}}) \cong \left(R_F(\mathbb{T}^2) \otimes \mathbb{Q}\right)/(F(t_2) - F(t_1), F(t_3) - F(t_2)) \ .
\]
Moreover, $W$ acts on $H^2_{\pi_1(\mathbb{T}^2)}(\mathfrak{t};\mathcal{R}_{\mathbb{Q}})$ by signed permutations.
\end{theorem}
\begin{proof}
Let $\Lambda = \pi_1(\mathbb{T}^2)$ be as in \eqref{eqn:lattice}. The two vectors $\kappa_1 = (1,-1,0)$ and $\kappa_2 = (0,1,-1)$ form a basis of $\Lambda$. Note that
\[
\mathbb{Q}[\Lambda] \cong \mathbb{Q}[r_1,r_2,r_3]/(r_1r_2r_3 - 1)
\]
and $\kappa_1$ corresponds to $r_1r_2^{-1}$ under this isomorphism. Likewise $\kappa_2$ agrees with $r_2r_3^{-1}$. The action of $\Lambda$ on $R_F(\mathbb{T}^2) \otimes \mathbb{Q}$ extends to a ring homomorphism
\(
\varphi \colon \mathbb{Q}[\Lambda] \to R_F(\mathbb{T}^2) \otimes \mathbb{Q}
\)
given by
\[
\varphi\left(\sum_{k,l,m} a_{klm} r_1^k r_2^l r_3^m\right) = \sum_{k,l,m} a_{klm} F(t_1)^kF(t_2)^lF(t_3)^m\ .
\]
In particular, $\varphi(r_1r_2^{-1}) = F(t_1)F(t_2)^{-1}$ and similarly for $r_2r_3^{-1}$. The equivariant cohomology groups $H^*_{\Lambda}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}})$ are computed by the cochain complex
\begin{equation} \label{eqn:cochain_complex}
\hom_{\mathbb{Q}[\Lambda]}(C_*(\mathfrak{t}) \otimes \mathbb{Q}, R_F(\mathbb{T}^2) \otimes \mathbb{Q})
\end{equation}
(see \cite[I.9, (9.3)]{Bredon-Cohomology:1967}), where $C_*(\mathfrak{t})$ is the cellular chain complex of $\mathfrak{t}$ viewed as a $\Lambda$-CW-complex. Note that $\mathfrak{t}$ has a $\Lambda$-CW-structure with one $0$-cell given by the orbit of the origin, two $1$-cells corresponding to the orbits of the edges from $(0,0,0)$ to $(1,-1,0)$ and from $(0,0,0)$ to $(0,1,-1)$, respectively, and one $2$-cell. As pointed out in the proof of \cite[Thm.~4.2]{AdemCantareroGomez-TwistedK:2018} (see also \cite[p.~96]{Charlap-FlatMfds:1986}) the chain complex $C_*(\mathfrak{t})$ can be identified with the Koszul complex
\[
K_n = \bigwedge^n\mathbb{Z}^2 \otimes \mathbb{Z}[\Lambda]
\]
for the sequence $(r_1r_2^{-1} - 1, r_2r_3^{-1} - 1)$. If we identify $C_*(\mathfrak{t})$ and $K_*$ in this way, the cochain complex in \eqref{eqn:cochain_complex} turns into
\[
C^n_F = \bigwedge^n R_F^2 \quad \text{with} \quad d^n(y) = x \wedge y
\]
where $R_{F} = R_F(\mathbb{T}^2) \otimes \mathbb{Q}$ and $x = (F(t_1)F(t_2)^{-1}-1, F(t_2)F(t_3)^{-1}-1)$, which is a regular sequence in $R_F$ by Lemma~\ref{lem:reg_seq}. The first part of the statement now follows from \cite[Cor.~17.5]{Eisenbud-CommAlg:1995}. We can identify $\mathbb{Z}^2$ with $\Lambda$ using $\kappa_1$ and $\kappa_2$. This induces an action of $W$ on $\mathbb{Z}^2$. The group $W$ acts diagonally on $K_n$ using its natural action on $\mathbb{Z}[\Lambda]$. If we equip the cochain complex
\[
\hom_{\mathbb{Q}[\Lambda]}(K_n \otimes \mathbb{Q}, R_F(\mathbb{T}^2) \otimes \mathbb{Q})
\]
with the $W$-action given by $(g \cdot \varphi)(y) = g\varphi(g^{-1}y)$, then the isomorphism of this cochain complex with \eqref{eqn:cochain_complex} is equivariant. Likewise, $W$ acts diagonally on $C_F^n \cong \bigwedge^n\mathbb{Q}^2 \otimes R_F$ making the last isomorphism equivariant as well. In particular, $W$ acts on $\bigwedge^2\mathbb{Q}^2$ via the sign representation. This proves the second statement.
\end{proof}
To distinguish the signed permutation action of $W$ on $R_F(\mathbb{T}^2) \otimes \mathbb{Q}$ from the usual one, we denote the two modules by $R_F^{\mathrm{sgn}}$ and $R_F$ respectively as in~\cite{AdemCantareroGomez-TwistedK:2018}. We also define $I_F^{\mathrm{sgn}} = (F(t_2) - F(t_1), F(t_3) - F(t_2))$. Over the rational numbers taking invariants with respect to a finite group action is an exact functor. Hence, Thm.~\ref{thm:Koszul} gives isomorphisms of $R_F^W$-modules
\[
H^2_{\widetilde{W}}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}}) \cong H^2_{\Lambda}(\mathfrak{t}; \mathcal{R}_{\mathbb{Q}})^W \cong (R_F^{\mathrm{sgn}})^W/(I_F^{\mathrm{sgn}})^W\ .
\]
The ring $R_F^W$ is a localisation of the quotient of the ring of symmetric polynomials in the variables $t_1, t_2, t_3$ by the ideal generated by $(t_1t_2t_3 -1)$. The module $(R_F^{\mathrm{sgn}})^W$ is a similar quotient of the antisymmetric polynomials by the submodule $(t_1t_2t_3 -1) (R_F^{\mathrm{sgn}})^W$. Any antisymmetric polynomial is divisible by the Vandermonde determinant
\[
\Delta = \Delta(t_1,t_2,t_3) = (t_1 - t_2)(t_2 - t_3)(t_3 - t_1)\ .
\]
This induces an $R_F^W$-module isomorphism
\[
\Psi \colon (R_F^{\mathrm{sgn}})^W \to R_F^W \quad , \quad p \mapsto \frac{p}{\Delta}
\]
(compare with \cite[Sec.~5.1]{AdemCantareroGomez-TwistedK:2018}). Let $\theta_{jk} = F(t_j) - F(t_k)$.
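The divisibility underlying $\Psi$ can be illustrated computationally: antisymmetrising any monomial under the signed $W$-action yields a polynomial divisible by $\Delta$. A small sympy sketch (the sample monomial is arbitrary):

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
ts = (t1, t2, t3)
Delta = (t1 - t2)*(t2 - t3)*(t3 - t1)

# the six permutations of (t1, t2, t3) together with their signs
perms = [((0, 1, 2), 1), ((1, 0, 2), -1), ((0, 2, 1), -1),
         ((2, 1, 0), -1), ((1, 2, 0), 1), ((2, 0, 1), 1)]

def antisymmetrise(p):
    return sp.expand(sum(sign*p.subs(list(zip(ts, [ts[i] for i in perm])),
                                     simultaneous=True)
                         for perm, sign in perms))

p = antisymmetrise(t1**4 * t2**2)  # a sample antisymmetric polynomial
assert p != 0
# swapping two variables negates p ...
assert sp.expand(p.subs([(t1, t2), (t2, t1)], simultaneous=True) + p) == 0
# ... and p/Delta is again a polynomial
assert sp.cancel(p / Delta).is_polynomial()
```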
\begin{lemma} \label{lem:submod}
The submodule $(I_F^{\mathrm{sgn}})^W$ is generated by the two antisymmetric polynomials $q_+ = \theta_{12}t_3 + \theta_{23}t_1 + \theta_{31}t_2$ and $q_- = \theta_{12}t_3^{-1} + \theta_{23}t_1^{-1} + \theta_{31}t_2^{-1}$. The corresponding submodule $\Psi((I_F^{\mathrm{sgn}})^W)$ is generated by
\begin{equation} \label{eqn:determinant}
\Psi(q_{\pm}) = -\frac{1}{\Delta} \det \begin{pmatrix}
F(t_1) & F(t_2) & F(t_3) \\[2mm]
t_1^{\pm 1} & t_2^{\pm 1} & t_3^{\pm 1} \\[2mm]
1 & 1 & 1
\end{pmatrix}\ .
\end{equation}
\end{lemma}
\begin{proof}
First note that $\theta_{12} + \theta_{23} + \theta_{31} = 0$. Therefore $q_{\pm} \in (I_F^{\mathrm{sgn}})^W$. The module $R(\mathbb{T}^2)$ is free of rank $6$ over $R(\mathbb{T}^2)^W$ by \cite[Thm.~2.2]{Steinberg-Pittie:1975}. Thus, the same is true for $R_F$ as a module over $R_F^W$. An explicit basis is given by $\beta = \{e, t_2, t_3, t_2^{-1}, t_1^{-1}, t_1^{-1}t_3 \}$. Consider the averaging map
\[
\alpha \colon R_F^{\mathrm{sgn}} \to (R_F^{\mathrm{sgn}})^W \quad, \quad p \mapsto \frac{1}{6} \sum_{g \in W} g \cdot p\ .
\]
This is a surjective module homomorphism, which maps the submodule $I_F^{\mathrm{sgn}}$ onto $(I_F^{\mathrm{sgn}})^W$. Let $q \in (I_F^{\mathrm{sgn}})^W$ and choose $p \in I_F^{\mathrm{sgn}}$ such that $\alpha(p) =q$. Then we have
\[
p = \theta_{12}p_{1} + \theta_{23}p_2
\]
for suitable $p_i \in R_F$. We have to see that $q = \alpha(p)$ is in the submodule generated by $q_{\pm}$. After decomposing $p_1$ and $p_2$ with respect to $\beta$ we see that it suffices to check that $\alpha(\theta_{12}y)$ and $\alpha(\theta_{23}y)$ lie in this submodule for all $y \in \beta$. Since $\alpha(\theta_{12}) = \alpha(\theta_{23}) = 0$, this is true for $y = e$. We have
\begin{align*}
\alpha(\theta_{12}t_2) &= \frac{1}{6}\left( (\theta_{12} + \theta_{23})t_2 + (\theta_{12} + \theta_{31})t_1 + (\theta_{31} + \theta_{23})t_3 \right) \\
&= -\frac{1}{6}\left(\theta_{31}t_2 + \theta_{23}t_1 + \theta_{12}t_3\right) = - \frac{1}{6}q_+ \\
\alpha(\theta_{12}t_3) &= \frac{1}{3}(\theta_{12}t_3 + \theta_{31} t_2 + \theta_{23}t_1) = \frac{1}{3} q_+
\end{align*}
The cases $\alpha(\theta_{23}t_2)$ and $\alpha(\theta_{23}t_3)$ work in a similar way. The expressions $\alpha(\theta_{12}t_2^{-1})$, $\alpha(\theta_{23}t_2^{-1})$, $\alpha(\theta_{12}t_1^{-1})$ and $\alpha(\theta_{23}t_1^{-1})$ produce corresponding multiples of $q_-$. In the remaining two cases a short computation shows that
\begin{align*}
\alpha(\theta_{12}t_1^{-1}t_3) &= \frac{1}{6} q_+ \cdot (t_1^{-1} + t_2^{-1} + t_3^{-1}) \ ,\\
\alpha(\theta_{23}t_1^{-1}t_3) &= \frac{1}{6} q_- \cdot (t_1 + t_2 + t_3)\ .
\end{align*}
This shows that the submodule $(I_F^{\mathrm{sgn}})^W$ is generated by $q_+$ and $q_-$. The determinant formula follows from a straightforward computation.
\end{proof}
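The "straightforward computation" behind the determinant formula \eqref{eqn:determinant} can also be verified symbolically. Since $\Psi(q) = q/\Delta$, the formula amounts to $q_{\pm} = -\det(\cdot)$; the sketch below treats $F(t_1), F(t_2), F(t_3)$ as independent symbols:

```python
import sympy as sp

t1, t2, t3, F1, F2, F3 = sp.symbols('t1 t2 t3 F1 F2 F3')
th12, th23, th31 = F1 - F2, F2 - F3, F3 - F1

q_plus = th12*t3 + th23*t1 + th31*t2
q_minus = th12/t3 + th23/t1 + th31/t2

M_plus = sp.Matrix([[F1, F2, F3], [t1, t2, t3], [1, 1, 1]])
M_minus = sp.Matrix([[F1, F2, F3], [1/t1, 1/t2, 1/t3], [1, 1, 1]])

assert sp.simplify(q_plus + M_plus.det()) == 0   # q_+ = -det(...)
assert sp.simplify(q_minus + M_minus.det()) == 0  # q_- = -det(...)
```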
\begin{example}
In case of the classical twist, i.e.\ for $F = (\bigwedge^{\mathrm{top}})^{\otimes m}$ we have $F(t_i) = t_i^m$. For $\Psi(q_+)$ equation \eqref{eqn:determinant} boils down to the definition of the Schur polynomial for the partition with just one element \cite[I.3, p.~40]{Macdonald-symmetric:2015}. In this case the Schur polynomial agrees with the complete homogeneous symmetric polynomial $h_{m-2}$. Using the properties of the determinant we also have
\[
\Psi(q_-) = -\frac{1}{\Delta} \det \begin{pmatrix}
t_1^{m+1} & t_2^{m+1} & t_3^{m+1} \\
1 & 1 & 1 \\
t_1 & t_2 & t_3
\end{pmatrix} = \frac{1}{\Delta} \det \begin{pmatrix}
t_1^{m+1} & t_2^{m+1} & t_3^{m+1} \\
t_1 & t_2 & t_3 \\
1 & 1 & 1
\end{pmatrix}
\]
which produces $h_{m-1}$. Altogether we obtain
\[
\Psi(q_+) = -h_{m-2} \quad , \quad \Psi(q_-) = h_{m-1}\ .
\]
For $m = 0$ we have $q_- = q_+ = 0$, for $m=1$ we get $q_+ = 0$ and $q_- = 1$ and in the case $m = 2$ the submodule $(I^{\mathrm{sgn}}_F)^W$ is generated by $q_+ = 1$ and $q_- = h_1$. Thus, for $m \in \{1,2\}$ the quotient $R_F^W/I_F^W$ vanishes.
\end{example}
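The determinant identity above is easy to verify symbolically. The following sketch (assuming the `sympy` library is available; the tested values of $m$ are arbitrary choices) compares the bialternant ratio with the complete homogeneous symmetric polynomial:

```python
import itertools
import sympy as sp

t1, t2, t3 = ts = sp.symbols('t1 t2 t3')

def h(k):
    """Complete homogeneous symmetric polynomial h_k(t1, t2, t3)."""
    return sp.Add(*[sp.Mul(*c)
                    for c in itertools.combinations_with_replacement(ts, k)])

# The Vandermonde determinant Delta (rows t_i^2, t_i, 1).
Delta = sp.Matrix([[t**2 for t in ts], list(ts), [1, 1, 1]]).det()

def psi_q_minus(m):
    """The bialternant ratio claimed to equal h_{m-1}."""
    num = sp.Matrix([[t**(m + 1) for t in ts], list(ts), [1, 1, 1]]).det()
    return sp.cancel(num / Delta)

for m in (1, 2, 3, 5):
    assert sp.expand(psi_q_minus(m) - h(m - 1)) == 0
```

The division by $\Delta$ is exact since the numerator is alternating, so `sp.cancel` returns a polynomial.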
\begin{example} \label{ex:full_twist}
For the $m$th power of the exterior algebra twist $F = \left( \@ifnextchar^\@extp{\@extp^{\,}}^* \right)^{\otimes m}$ we have $F(t_i) = (1 + t_i)^m$ and since
\[
F(t_j) = \sum_{l=0}^m \binom{m}{l} t_j^l
\]
the determinant formula for $\Psi(q_{\pm})$ gives
\[
\Psi(q_+) = -\sum_{l=2}^m \binom{m}{l} h_{l-2} \quad , \quad \Psi(q_-) = \sum_{l=1}^m \binom{m}{l} h_{l-1}\ .
\]
\end{example}
We are now in a position to provide a complete computation of the equivariant higher twisted $K$-theory for $SU(3)$ after rationalisation and summarise our results in the following theorem.
\begin{theorem} \label{thm:main_theorem_SU(3)}
For the rational equivariant higher twisted $K$-theory of $SU(3)$ with twist given by an exponential functor $F$ with $\deg(F(t)) > 0$ we have the following isomorphism of $R_F(SU(3))$-modules:
\[
K_0^G(C^*(\mathcal{E})) \otimes \mathbb{Q} \cong (R_F(SU(3)) \otimes \mathbb{Q})/J_F \quad, \quad K_1^G(C^*(\mathcal{E})) \otimes \mathbb{Q} \cong 0
\]
where $J_F$ is the submodule generated by the two representations $\sigma_1^F$ and $\sigma_2^F$ whose characters $\chi_1, \chi_2$ are the symmetric polynomials
\[
\chi_1 = \frac{1}{\Delta} \det \left(\begin{smallmatrix}
F(t_1) & F(t_2) & F(t_3) \\[1mm]
t_1 & t_2 & t_3 \\[1mm]
1 & 1 & 1
\end{smallmatrix}\right) \quad , \quad
\chi_2 = \frac{1}{\Delta} \det \left(\begin{smallmatrix}
F(t_1)t_1 & F(t_2)t_2 & F(t_3)t_3 \\[1mm]
t_1 & t_2 & t_3 \\[1mm]
1 & 1 & 1
\end{smallmatrix}\right)\ .
\]
In the case $F = \left(\@ifnextchar^\@extp{\@extp^{\,}}^*\right)^{\otimes m}$ the submodule $J_F$ is generated by the representations
\[
\sigma_1^F = \sum_{l=2}^m \binom{m}{l} {\mathrm{Sym}}^{l-2}(\rho) \quad , \quad \sigma_2^F = \sum_{l=1}^m \binom{m}{l} {\mathrm{Sym}}^{l-1}(\rho)\ .
\]
\end{theorem}
\begin{proof}
Let $\mathcal{Q}$ be the universal UHF-algebra equipped with the trivial action. By continuity of the $K$-functor the $K$-theory of $C^*(\mathcal{E}) \otimes \mathcal{Q}$ is the rationalisation of the $K$-theory of $C^*(\mathcal{E})$. To compute $K_*^G(C^*(\mathcal{E}) \otimes \mathcal{Q})$ we can use the corresponding spectral sequence from Cor.~\ref{cor:spectral_seq}. The resulting cochain complex will have $R_F(G_I) \otimes \mathbb{Q}$ in place of $R_F(G_I)$ in each degree. Lemma~\ref{lem:coh_with_coeff} identifies it as the complex computing the $\widetilde{W}$-equivariant cohomology of $\mathfrak{t}$ with respect to the coefficient system~$\mathcal{R}_{\mathbb{Q}}$. Thus, the homomorphism \eqref{eqn:rational_coh} is now an isomorphism. Combining Thm.~\ref{thm:Koszul} with Lemma~\ref{lem:submod} we obtain the first part of the statement. The second part follows from Example~\ref{ex:full_twist} by identifying the characters given by the homogeneous symmetric polynomials with symmetric powers of the standard representation.
\end{proof}
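As a consistency check of the second part of the theorem, the characters $\chi_1, \chi_2$ for $F(t) = (1+t)^m$ can be compared symbolically with the binomial sums of complete homogeneous symmetric polynomials $h_k$, which are the characters of ${\mathrm{Sym}}^k(\rho)$. A sketch assuming `sympy`, with $m = 4$ an arbitrary choice:

```python
import itertools
import sympy as sp

t1, t2, t3 = ts = sp.symbols('t1 t2 t3')

def h(k):
    """h_k(t1, t2, t3), the character of Sym^k of the standard representation."""
    return sp.Add(*[sp.Mul(*c)
                    for c in itertools.combinations_with_replacement(ts, k)])

Delta = sp.Matrix([[t**2 for t in ts], list(ts), [1, 1, 1]]).det()

def chi(Fvals, with_t=False):
    """chi_1 (top row F(t_i)) or chi_2 (top row F(t_i) t_i) from the theorem."""
    top = [Fv * t if with_t else Fv for Fv, t in zip(Fvals, ts)]
    num = sp.Matrix([top, list(ts), [1, 1, 1]]).det()
    return sp.cancel(num / Delta)

m = 4
Fvals = [(1 + t)**m for t in ts]
chi1 = sum(sp.binomial(m, l) * h(l - 2) for l in range(2, m + 1))
chi2 = sum(sp.binomial(m, l) * h(l - 1) for l in range(1, m + 1))
assert sp.expand(chi(Fvals) - chi1) == 0
assert sp.expand(chi(Fvals, with_t=True) - chi2) == 0
```

The $l = 0$ and $l = 1$ terms of the binomial expansion drop out because they produce determinants with repeated rows.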
\begin{remark}
The choice of the orientation of $\mathbb{T}$ that went into the construction of $\mathcal{E}$ through the choice of order on $\mathbb{T} \setminus \{1\}$ features in the computations of the twisted $K$-groups as follows: Changing the orientation to its opposite corresponds to replacing all factors of the form $F(t)$ by $F(t)^{-1}$. Since the ideal $I_F^{\mathrm{sgn}}$ is invariant under this transformation, we obtain isomorphic higher twisted $K$-groups in both cases.
\end{remark}
We expect Thm.~\ref{thm:main_theorem_SU(3)} also to be true without the rationalisation. Since the modules $R_F(G_I)$ are free over $R_F(SU(3))$ the differentials from Lemma~\ref{lem:diff_d0_d1} in the cochain complex can be expressed in terms of matrices, which allowed us to perform an extensive computer analysis in the case of the full twist $F = \left( \@ifnextchar^\@extp{\@extp^{\,}}^* \right)^{\otimes m}$ for $m \in \{1,\dots, 8\}$. For these levels we thereby confirmed the above conjecture.
The approach presented above should also be seen as a blueprint for the computation of the rationalised equivariant higher twisted $K$-theory of $SU(n)$. We will return to this discussion in future work.
\section{Introduction}
The Casimir force was originally derived by estimating the zero-point
energy (vacuum fluctuations) of the electromagnetic field comprised
in-between two ideal, perfectly reflecting, semi-infinite metals (two
halves of space) separated by distance $d$.\cite{key-1} As it is
well-known, it goes like $d^{-4}$ for distances greater than the
characteristic electromagnetic wavelengths of the bodies (plasmon
\char`\"{}wavelengths\char`\"{}). Further on, the calculations have
been cast in a different form, by resorting to the fluctuations theory,\cite{key-2,key-3}
and a $d^{-3}$-force has been obtained for the non-retarded (Coulomb)
interaction, which corresponds to the van der Waals-London force.
The matter polarization is usually represented in this case by a dielectric
function. Recently, there is a renewed interest in this subject, motivated,
on one hand, by the role played by plasmons, polaritons and other
surface effects arising from the interaction between the electromagnetic
field and matter and, on the other hand, by the questions related to
the applicability of a dielectric function for discontinuous bodies.\cite{key-4}-\cite{key-20}
We report here on a different investigation of these forces, based
on the calculation of the eigenfrequencies of the electromagnetic
field interacting with matter.
We assume a simple model of matter, consisting of mobile particles
with charge $-e$ and mass $m$, moving in a rigid neutralizing background,
and subjected to certain forces. Such a model is reminiscent of the
well-known jellium model of electron plasma, though it is generalized
here to some extent. In the presence of the electromagnetic field
matter polarizes. We leave aside the magnetization (we consider only
non-magnetic matter) and relativistic effects. We represent the small
disturbance in the density of the mobile charges as $\delta n=-ndiv\mathbf{u}$,
where $n$ is the (constant) concentration of the charges and $\mathbf{u}$
is a displacement field in the positions of these charges. The charge
disturbance is therefore $\rho=endiv\mathbf{u}$. This representation
is valid for $\mathbf{K}\mathbf{u}(\mathbf{K})\ll1$, where $\mathbf{K}$
is the wavevector and $\mathbf{u}(\mathbf{K})$ is the Fourier transform
of the displacement field.
For homogeneous and isotropic matter the displacement field obeys
an equation of motion which can be taken of the form \begin{equation}
m\ddot{\mathbf{u}}=-e\mathbf{E}-e\mathbf{E}_{0}-m\omega_{0}^{2}\mathbf{u}-m\gamma\dot{\mathbf{u}}\,\,\,,\label{1}\end{equation}
where \textbf{$\mathbf{E}$ }is the (internal) electric field, $\mathbf{E}_{0}$
is an external electric field, $\omega_{0}$ is a frequency parameter
corresponding to an elastic force and $\gamma$ is a dissipation parameter.
Making use of the temporal Fourier transform we get \begin{equation}
\mathbf{u}(\omega)=\frac{e}{m}\frac{1}{\omega^{2}-\omega_{0}^{2}+i\omega\gamma}(\mathbf{E}+\mathbf{E}_{0})\label{2}\end{equation}
(where we dropped the argument $\omega$ of the electric fields).
On the other hand, from Maxwell's equation $div\mathbf{E}=4\pi endiv\mathbf{u}$,
we get the (internal) electric field $\mathbf{E}=4\pi ne\mathbf{u}$
(equal to $-4\pi\mathbf{P}$, where $\mathbf{P}$ is the polarization).
Making use of equation (\ref{2}) we get the dielectric function \begin{equation}
\varepsilon(\omega)=1-\frac{\omega_{p}^{2}}{\omega^{2}-\omega_{0}^{2}+i\omega\gamma}\label{3}\end{equation}
from its definition $\mathbf{E}_{0}=\varepsilon(\mathbf{E}+\mathbf{E}_{0})$,
where $\omega_{p}$, given by $\omega_{p}^{2}=4\pi ne^{2}/m$, is
the plasma frequency. The dielectric function given by equation (\ref{3})
is well known in the elementary theory of dispersion.\cite{key-21}
It proves to be a fairly adequate representation for matter polarization
in various bodies. We can view $\omega_{p}$, $\omega_{0}$ and $\gamma$
as free parameters, thus being able to simulate various models of
matter. For $\omega_{0}=\gamma=0$ we get the well-known dielectric
function of an ideal plasma; if $\omega_{0}=0$ we have the dielectric
function of the optical properties of simple metals for $\omega\gg\gamma$
(Drude model), and the dielectric function corresponding to the static
(or quasi-static) currents in metals for $\omega\ll\gamma$; for $\omega_{0}\gg\omega_{p}$
we have a dielectric function of dielectrics with loss; and so on.
In addition, making use of equation (\ref{2}), we can compute also
the electric conductivity $\sigma$, from its definition $\mathbf{j}=\sigma(\mathbf{E}+\mathbf{E}_{0})$,
where $\mathbf{j}=-en\dot{\mathbf{u}}$ is the current density. We
get the well-known conductivity \begin{equation}
\sigma(\omega)=\frac{\omega_{p}^{2}}{4\pi}\frac{i\omega}{\omega^{2}-\omega_{0}^{2}+i\omega\gamma}\,\,\,,\label{4}\end{equation}
whence, for instance, the static conductivity for metals $\sigma=\omega_{p}^{2}/4\pi\gamma$;
parameter $\gamma$ can be viewed as the reciprocal of a damping time
$\tau$ (or relaxation time, or lifetime), $\gamma=1/\tau$, and we
get the well-known static conductivity $\sigma=ne^{2}\tau/m$.
Therefore, the equation of motion (\ref{1}) turns out to be an adequate
starting point for representing the matter polarization. However,
we must note that for dielectrics, which may involve oscillations of localized atoms (modelled here through the frequency $\omega_{0}$),
the classical dynamics assumed here turns out to be inadequate in
the retarded regime, and a quantum treatment is then required.
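The limiting regimes of the dielectric function (3) and the conductivity (4) listed above can be probed numerically; a minimal sketch (assuming `numpy`; all parameter values are illustrative):

```python
import numpy as np

def epsilon(omega, omega_p, omega_0=0.0, gamma=0.0):
    """Dielectric function of equation (3)."""
    return 1.0 - omega_p**2 / (omega**2 - omega_0**2 + 1j * omega * gamma)

def sigma(omega, omega_p, omega_0=0.0, gamma=0.0):
    """Conductivity of equation (4)."""
    return (omega_p**2 / (4.0 * np.pi)) * 1j * omega / (
        omega**2 - omega_0**2 + 1j * omega * gamma)

wp = 1.0   # frequencies measured in units of omega_p

# Ideal plasma (omega_0 = gamma = 0): epsilon < 0 below omega_p, > 0 above.
assert epsilon(0.5 * wp, wp).real < 0.0 < epsilon(2.0 * wp, wp).real

# Drude regime, static limit: sigma -> omega_p^2 / (4 pi gamma) as omega -> 0.
g = 0.1
assert np.isclose(sigma(1e-8, wp, gamma=g).real, wp**2 / (4.0 * np.pi * g))
```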
In the non-retarded limit the electric field $\mathbf{E}$ in equation
(\ref{1}) is given by the Coulomb law, \emph{i.e.} $\mathbf{E}=-grad\Phi$,
where $\Phi$ is the static Coulomb potential arising in matter. The
latter depends on the charge disturbance $\rho=-e\delta n$, therefore
on $\mathbf{u}$. Then, it is easy to see that the equation of motion
(\ref{1}) leads to an integral equation for the displacement field
$\mathbf{u}$. Its eigenvalues give the plasmon modes. For retarded
interaction, the electric field $\mathbf{E}$ in equation (\ref{1})
is given by the vector potential $\mathbf{A}$ and the scalar potential
$\Phi$ through $\mathbf{E}=-\frac{1}{c}\frac{\partial\mathbf{A}}{\partial t}-grad\Phi$.
Making use of the radiation (Kirchhoff) formulae, these potentials
can be expressed as integrals containing the displacement field $\mathbf{u}$
(through the charge and current densities), and we get again an integral
equation for $\mathbf{u}$. Its eigenvalues give polariton-like modes.
The use of integral equations in treating the electromagnetic field
interacting with matter was previously indicated in connection with
the so-called Ewald-Oseen extinction theorem.\cite{key-22} We have
applied this approach to a semi-infinite (half-space) body, as well
as to a slab of finite thickness.\cite{key-23} In this case, beside
the bulk displacement field, there appears a surface displacement
field also, and the integral equations couple these degrees of freedom.
We have solved these coupled integral equations and computed bulk
and surface plasmons and polaritons, dielectric response, reflected,
refracted and transmitted fields, and derived generalized Fresnel
relations. We employ the same procedure here for two semi-infinite
bodies (two halves of the space) separated by distance $d$, in order
to get the electromagnetic eigenfrequencies and to derive van der
Waals-London and Casimir forces. We do it in two steps: first, for
static Coulomb (non-retarded) interaction (valid for wavelengths much
longer than the characteristic size of the bodies) and, second, for
retarded interaction.
\section{Surface plasmons. van der Waals-London forces}
We consider two semi-infinite bodies (two halves of space) with parallel
surfaces in the $(x,y)$-plane, separated by distance $d$. The bodies
occupy the regions $z<-d/2$ and, respectively, $z>d/2$. We take
two displacement fields $\mathbf{u}_{1,2}$, giving rise to two charge
disturbances $\delta n_{1,2}=-n_{1,2}div\mathbf{u}_{1,2}$. We consider
first the equation of motion for an ideal plasma. In general, we leave
aside the dissipation (parameter $\gamma$ in equation (\ref{1})),
which is irrelevant for our discussion. The equation of motion reads
\begin{equation}
m\ddot{\mathbf{u}}_{1}=grad\int d\mathbf{R}^{'}U(\left|\mathbf{R}-\mathbf{R}^{'}\right|)\left[n_{1}div\mathbf{u}_{1}(\mathbf{R}^{'})+n_{2}div\mathbf{u}_{2}(\mathbf{R}^{'})\right]\,\,\,,\label{5}\end{equation}
and a similar equation for $\mathbf{u}_{2}$, which can be obtained
from equation (\ref{5}) by interchanging the labels $1$ and $2$
($1\longleftrightarrow2$); $U(R)=e^{2}/R$ in equation (\ref{5})
is the Coulomb interaction. Since we are interested in the eigenmodes,
we leave aside the external field $\mathbf{E}_{0}$. We use $\mathbf{R}=(\mathbf{r},z)$
for the position vector $\mathbf{R}$, where $\mathbf{r}=(x,y),$
and the representation \begin{equation}
\mathbf{u}_{1,2}=(\mathbf{v}_{1,2},w_{1,2})\theta(\pm z-d/2)\label{6}\end{equation}
for the displacement fields, where $\theta(z)=1$ for $z>0$ and
$\theta(z)=0$ for $z<0$ is the step function; the $\pm$ sign is
associated with labels $1$ and $2$, respectively. The divergence
in equation (\ref{5}) can now be written as \begin{equation}
div\mathbf{u}_{1,2}=\left(div\mathbf{v}_{1,2}+\frac{\partial w_{1,2}}{\partial z}\right)\theta(\pm z-d/2)+w_{1,2}(\pm d/2)\delta(\pm z-d/2)\,\,\,,\label{7}\end{equation}
where $w_{1,2}(\pm d/2)$ means $w_{1,2}(\mathbf{r},z=\pm d/2)$.
We notice in equation (\ref{7}) the (de)polarization charge arising
at the surfaces $z=\pm d/2$. We employ Fourier representations of
the form \begin{equation}
\mathbf{v}_{1,2}(\mathbf{r},z;t)=\sum_{\mathbf{k}}\int d\omega\mathbf{v}_{1,2}(\mathbf{k},z;\omega)e^{i\mathbf{kr}}e^{-i\omega t}\label{8}\end{equation}
and similar ones for $w_{1,2}$, and use the Fourier transform \begin{equation}
\frac{1}{\sqrt{r^{2}+z^{2}}}=\sum_{\mathbf{k}}\frac{2\pi}{k}e^{-k\left|z\right|}e^{i\mathbf{kr}}\label{9}\end{equation}
for the Coulomb potential. Then, we notice that equation (\ref{5})
implies that $\mathbf{v}_{1,2}$ are parallel with the wavevector
$\mathbf{k}$ (in-plane \char`\"{}longitudinal\char`\"{} modes), and
$i\mathbf{k}w_{1,2}=\frac{\partial\mathbf{v}_{1,2}}{\partial z}$.
We use this latter relation to eliminate $w_{1,2}$ from the equations
of motion. In addition, we introduce the notation $v_{1,2}=\mathbf{k}\mathbf{v}_{1,2}/k$.
Then, it is easy to see that equation (\ref{5}) yields two coupled
integral equations \begin{equation}
\begin{array}{c}
\omega^{2}v_{1}=\frac{\omega_{1}^{2}k}{2}\int_{d/2}^{\infty}dz^{'}e^{-k\left|z-z'\right|}v_{1}+\frac{\omega_{1}^{2}}{2k}\int_{d/2}^{\infty}dz^{'}\frac{\partial}{\partial z^{'}}e^{-k\left|z-z'\right|}\frac{\partial v_{1}}{\partial z^{'}}+\\
\\+\frac{\omega_{2}^{2}k}{2}\int_{-\infty}^{-d/2}dz^{'}e^{-k(z-z')}v_{2}+\frac{\omega_{2}^{2}}{2}\int_{-\infty}^{-d/2}dz^{'}e^{-k(z-z')}\frac{\partial v_{2}}{\partial z^{'}}\,\,,\,\, z>d/2\,\,\,,\\
\\\omega^{2}v_{2}=\frac{\omega_{1}^{2}k}{2}\int_{d/2}^{\infty}dz^{'}e^{k(z-z')}v_{1}-\frac{\omega_{1}^{2}}{2}\int_{d/2}^{\infty}dz^{'}e^{k(z-z')}\frac{\partial v_{1}}{\partial z^{'}}+\\
\\+\frac{\omega_{2}^{2}k}{2}\int_{-\infty}^{-d/2}dz^{'}e^{-k\left|z-z'\right|}v_{2}+\frac{\omega_{2}^{2}}{2k}\int_{-\infty}^{-d/2}dz^{'}\frac{\partial}{\partial z^{'}}e^{-k\left|z-z'\right|}\frac{\partial v_{2}}{\partial z^{'}}\,\,,\,\, z<-d/2\,\,\,,\end{array}\label{10}\end{equation}
where $\omega_{1,2}^{2}=4\pi n_{1,2}e^{2}/m$ and we dropped
the arguments $\omega,\mathbf{k}$. Integrating by parts in equations
(\ref{10}) we obtain a system of two algebraic equations \begin{equation}
\begin{array}{c}
\left(\omega^{2}-\omega_{1}^{2}\right)v_{1}=-\frac{1}{2}e^{-kz}\left[\omega_{1}^{2}e^{kd/2}v_{1}(d/2)-\omega_{2}^{2}e^{-kd/2}v_{2}(-d/2)\right]\,\,,\,\, z>d/2\,\,\,,\\
\\\left(\omega^{2}-\omega_{2}^{2}\right)v_{2}=\frac{1}{2}e^{kz}\left[\omega_{1}^{2}e^{-kd/2}v_{1}(d/2)-\omega_{2}^{2}e^{kd/2}v_{2}(-d/2)\right]\,\,,\,\, z<-d/2\,\,.\end{array}\label{11}\end{equation}
We can see that in this non-retarded limit the two bodies are coupled
only through their surfaces.
For $v_{1}(d/2)=v_{2}(-d/2)=0$ in equations (\ref{11}) we get the
bulk plasmons $\omega=\omega_{1,2}$. Making $z=\pm d/2$ in equations
(\ref{11}) we get the system of equations for the surface modes.
The corresponding dispersion equation is given by \begin{equation}
\left(\omega^{2}-\frac{1}{2}\omega_{1}^{2}\right)\left(\omega^{2}-\frac{1}{2}\omega_{2}^{2}\right)-\frac{1}{4}\omega_{1}^{2}\omega_{2}^{2}e^{-2kd}=0\,\,.\label{12}\end{equation}
For $d=0$ we obtain the surface plasmon of a metallic interface given
by $\omega^{2}=\frac{1}{2}\left(\omega_{1}^{2}+\omega_{2}^{2}\right)$,
while for $d\rightarrow\infty$ we get the surface plasmons $\omega=\omega_{1,2}/\sqrt{2}$
for free (uncoupled) surfaces. If the body labelled by $2$ for instance
is a dielectric, then $\omega^{2}$ in the second equation (\ref{11})
is replaced by $\omega^{2}-\omega_{0}^{2}$. In the limit $\omega_{0}\gg\omega_{2}$
and for $d=0$ we get the surface plasmon $\omega=\omega_{1}/\sqrt{1+\varepsilon_{2}}$,
corresponding to a dielectric-metal interface, where $\varepsilon_{2}=1+\omega_{2}^{2}/\omega_{0}^{2}$.
For two identical metals $\omega_{1}=\omega_{2}=\omega_{p}$ we get
the surface plasmons given by\begin{equation}
\omega^{2}=\frac{1}{2}\omega_{p}^{2}\left(1\pm e^{-kd}\right)\,\,.\label{13}\end{equation}
They are identical with the surface plasmons of a plasma slab of
thickness $d$. These are well-known results.\cite{key-24}-\cite{key-31}
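The dispersion equation (12) is a quadratic in $\omega^2$, so its two surface branches are straightforward to evaluate; a short numerical sketch (assuming `numpy`; parameter values are arbitrary) recovering equation (13) and the uncoupled limit:

```python
import numpy as np

def surface_modes(w1, w2, kd):
    """Roots omega^2 of the dispersion equation (12), sorted ascending."""
    # (X - w1^2/2)(X - w2^2/2) - (1/4) w1^2 w2^2 exp(-2 k d) = 0,  X = omega^2
    coeffs = [1.0,
              -0.5 * (w1**2 + w2**2),
              0.25 * w1**2 * w2**2 * (1.0 - np.exp(-2.0 * kd))]
    return np.sort(np.roots(coeffs).real)

# Identical metals: equation (13), omega^2 = (wp^2/2)(1 -/+ exp(-k d)).
wp, kd = 1.0, 0.7
expected = 0.5 * wp**2 * np.array([1.0 - np.exp(-kd), 1.0 + np.exp(-kd)])
assert np.allclose(surface_modes(wp, wp, kd), expected)

# Uncoupled limit kd >> 1: free-surface plasmons omega_{1,2}/sqrt(2).
assert np.allclose(surface_modes(1.0, 0.6, 50.0),
                   np.sort([0.5 * 0.6**2, 0.5 * 1.0**2]))
```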
Let us label by $\alpha$ all the eigenvalues $\Omega_{\alpha}$ of
the system of equations (\ref{11}). We compute the force acting between
the two bodies by \begin{equation}
F=\frac{\partial}{\partial d}\sum_{\alpha}\frac{1}{2}\hbar\Omega_{\alpha}\,\,\,,\label{14}\end{equation}
where we recognize the zero-point energy of harmonic oscillators.
Although it can be included straightforwardly, it is easy to see that
the temperature plays no significant role, so we may neglect the temperature effects, as usual. We may also leave aside the bulk plasmons, since
they do not depend on the distance $d$. We are left with the two
surface modes $\Omega_{1,2}$ given by equation (\ref{12}), labeled
by wavevector $\mathbf{k}$. We can see that these eigenvalues are functions of $kd$, so the force depends on distance $d$ as $F\sim1/d^{3}$.
As it is well-known, such a force between two bodies implies an inter-atomic
interaction $\sim1/R^{6}$ , where $R$ is the distance between two
atoms. This is the well-known van der Waals-London interaction.\cite{key-32}
We compute here the force $F$ for the eigenvalues given by equation
(\ref{13}), \emph{i.e.} for two identical plasmas (metals). Equation
(\ref{14}) gives a force \begin{equation}
F=\frac{\hbar\omega_{p}}{8\pi\sqrt{2}d^{3}}\int_{0}^{\infty}dx\cdot x^{2}e^{-x}\left(\frac{1}{\sqrt{1-e^{-x}}}-\frac{1}{\sqrt{1+e^{-x}}}\right)\label{15}\end{equation}
per unit area. The integral in equation (\ref{15}) is $\simeq0.28$, so we get $F\simeq\hbar\omega_{p}/128d^{3}$. In like manner
we can compute the force between two (identical) dielectrics, by replacing
$\omega^{2}$ in equation (\ref{13}) by $\omega^{2}-\omega_{0}^{2}$
and taking the limit $\omega_{0}\gg\omega_{p}$. The result is a much
weaker force $F=\hbar\omega_{p}^{4}/128\pi\omega_{0}^{3}d^{3}$. It can
also be written as $F=\hbar\omega_{0}(\varepsilon-1)^{2}/128\pi d^{3}$,
where $\varepsilon\simeq1+\omega_{p}^{2}/\omega_{0}^{2}$ is the (static)
dielectric function in the limit $\omega\ll\omega_{0}$. The same
result is obtained by making use of the formulae given in Ref. \cite{key-32}
for non-retarded interaction within the framework of the fluctuations
theory (equation 82.3 p. 343 in Ref. \cite{key-32}). Making use of
the eigenvalues given by the roots of the dispersion equation (\ref{12}),
we can compute in the same manner the force acting between two distinct
bodies. For instance, we can consider a dielectric-metal pair and
get straightforwardly the force $F=\hbar\omega_{1}\omega_{2}^{2}/32\pi\sqrt{2}\omega_{0}^{2}d^{3}$,
where $\omega_{1}$ belongs to the metal and $\omega_{2},\,\omega_{0}$
represent the dielectric.
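The dimensionless integral in equation (\ref{15}) converges rapidly and can be estimated by elementary quadrature; a minimal sketch (assuming `numpy`; the grid is an ad hoc choice):

```python
import numpy as np

def integrand(x):
    """Integrand of equation (15); it behaves like x^(3/2) as x -> 0."""
    t = np.exp(-x)
    return x**2 * t * (1.0 / np.sqrt(1.0 - t) - 1.0 / np.sqrt(1.0 + t))

# Trapezoidal rule on a fine grid; the tail decays like x^2 exp(-2x).
x = np.linspace(1e-8, 40.0, 400001)
f = integrand(x)
I = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Force per unit area between two identical metals, equation (15):
# F = I * hbar * omega_p / (8 * pi * sqrt(2) * d^3).
print(f"integral = {I:.4f}")
```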
\section{Surface plasmon-polariton modes. Casimir force}
We pass now to the retarded interaction. The electric field in equation
(\ref{1}) is given by $E=-\frac{1}{c}\frac{\partial\mathbf{A}}{\partial t}-grad\Phi$,
where $\mathbf{A}$ is the vector potential and $\Phi$ is the scalar
potential. These potentials are given by \begin{equation}
\mathbf{A}(\mathbf{r},z;t)=\frac{1}{c}\int d\mathbf{r}'\int dz'\frac{\mathbf{j}(\mathbf{r}',z';t-R/c)}{R}\label{16}\end{equation}
and \begin{equation}
\Phi(\mathbf{r},z;t)=\int d\mathbf{r}'\int dz'\frac{\rho(\mathbf{r}',z';t-R/c)}{R}\,\,\,,\label{17}\end{equation}
where \begin{equation}
\mathbf{j}=-en_{1}(\dot{\mathbf{v}}_{1},\dot{w}_{1})\theta(z-d/2)-en_{2}(\dot{\mathbf{v}}_{2},\dot{w}_{2})\theta(-z-d/2)\label{18}\end{equation}
is the current density, \begin{equation}
\begin{array}{c}
\rho=en_{1}\left[\left(div\mathbf{v}_{1}+\frac{\partial w_{1}}{\partial z}\right)\theta(z-d/2)+w_{1}(d/2)\delta(z-d/2)\right]+\\
\\+en_{2}\left[\left(div\mathbf{v}_{2}+\frac{\partial w_{2}}{\partial z}\right)\theta(-z-d/2)+w_{2}(-d/2)\delta(z+d/2)\right]\end{array}\label{19}\end{equation}
is the charge density and $R=\sqrt{(\mathbf{r}-\mathbf{r}')^{2}+(z-z')^{2}}$.
We use the Fourier representations given by equation (\ref{8}) and
the Fourier transform\cite{key-33} \begin{equation}
\frac{e^{i\frac{\omega}{c}\sqrt{r^{2}+z^{2}}}}{\sqrt{r^{2}+z^{2}}}=\sum_{\mathbf{k}}\frac{2\pi i}{\kappa}e^{i\mathbf{kr}}e^{i\kappa\left|z\right|}\,\,\,,\label{20}\end{equation}
where $\kappa=\sqrt{\frac{\omega^{2}}{c^{2}}-k^{2}}$. Then we compute
the electric field from the potentials given by equations (\ref{16})
and (\ref{17}) and use equation (\ref{1}) for $\omega_{0}=0,\,\gamma=0,\,\mathbf{E}_{0}=0$
in order to get integral equations for $\mathbf{v}_{1,2},\, w_{1,2}$.
We define the wavevector $\mathbf{k}_{\perp}$ of magnitude $k$ and
perpendicular to the wavevector $\mathbf{k}$, and introduce the
notations $v_{1,2}=\mathbf{k}\mathbf{v}_{1,2}/k$, $v_{1,2}^{\perp}=\mathbf{k}_{\perp}\mathbf{v}_{1,2}/k$.
Doing so, we get the first set of integral equations\begin{equation}
\begin{array}{c}
v_{1}^{\perp}=-\frac{i\omega_{1}^{2}}{2c^{2}\kappa}\int_{d/2}^{\infty}dz'e^{i\kappa\left|z-z'\right|}v_{1}^{\perp}(z')-\frac{i\omega_{2}^{2}}{2c^{2}\kappa}\int_{-\infty}^{-d/2}dz'e^{i\kappa(z-z')}v_{2}^{\perp}(z')\,\,,\,\, z>d/2\,\,\,,\\
\\v_{2}^{\perp}=-\frac{i\omega_{1}^{2}}{2c^{2}\kappa}\int_{d/2}^{\infty}dz'e^{-i\kappa(z-z')}v_{1}^{\perp}(z')-\frac{i\omega_{2}^{2}}{2c^{2}\kappa}\int_{-\infty}^{-d/2}dz'e^{i\kappa\left|z-z'\right|}v_{2}^{\perp}(z')\,\,,\,\, z<-d/2\,\,\,,\end{array}\label{21}\end{equation}
where we dropped the arguments $\omega,\mathbf{k}$.
Then, from the integral equations for $v_{1,2}$ and $w_{1,2}$ we
notice the relationship \begin{equation}
w_{1,2}=\frac{ik}{\kappa^{2}-\omega_{1,2}^{2}/c^{2}}\frac{\partial v_{1,2}}{\partial z}\,\,\,,\label{22}\end{equation}
which we use to eliminate $w_{1,2}$ from these equations; so, we
are left with the second set of two integral equations in $v_{1,2}$:
for $z>d/2$\begin{equation}
\begin{array}{c}
\frac{c^{2}\kappa^{2}(\omega^{2}-\omega_{1}^{2})}{c^{2}\kappa^{2}-\omega_{1}^{2}}v_{1}=-\frac{i\kappa\omega_{1}^{2}(\omega^{2}-\omega_{1}^{2})}{2(c^{2}\kappa^{2}-\omega_{1}^{2})}\int_{d/2}^{\infty}dz'e^{i\kappa\left|z-z'\right|}v_{1}(z')-\\
\\-\frac{i\kappa\omega_{2}^{2}(\omega^{2}-\omega_{2}^{2})}{2(c^{2}\kappa^{2}-\omega_{2}^{2})}\int_{-\infty}^{-d/2}dz'e^{i\kappa(z-z')}v_{2}(z')+\\
\\+\frac{c^{2}k^{2}\omega_{1}^{2}}{2(c^{2}\kappa^{2}-\omega_{1}^{2})}e^{i\kappa(z-d/2)}v_{1}(d/2)-\frac{c^{2}k^{2}\omega_{2}^{2}}{2(c^{2}\kappa^{2}-\omega_{2}^{2})}e^{i\kappa(z+d/2)}v_{2}(-d/2)\end{array}\label{23}\end{equation}
and \begin{equation}
\begin{array}{c}
\frac{c^{2}\kappa^{2}(\omega^{2}-\omega_{2}^{2})}{c^{2}\kappa^{2}-\omega_{2}^{2}}v_{2}=-\frac{i\kappa\omega_{1}^{2}(\omega^{2}-\omega_{1}^{2})}{2(c^{2}\kappa^{2}-\omega_{1}^{2})}\int_{d/2}^{\infty}dz'e^{-i\kappa(z-z')}v_{1}(z')-\\
\\-\frac{i\kappa\omega_{2}^{2}(\omega^{2}-\omega_{2}^{2})}{2(c^{2}\kappa^{2}-\omega_{2}^{2})}\int_{-\infty}^{-d/2}dz'e^{i\kappa\left|z-z'\right|}v_{2}(z')-\\
\\-\frac{c^{2}k^{2}\omega_{1}^{2}}{2(c^{2}\kappa^{2}-\omega_{1}^{2})}e^{-i\kappa(z-d/2)}v_{1}(d/2)+\frac{c^{2}k^{2}\omega_{2}^{2}}{2(c^{2}\kappa^{2}-\omega_{2}^{2})}e^{-i\kappa(z+d/2)}v_{2}(-d/2)\end{array}\label{24}\end{equation}
for $z<-d/2$. In deriving these equations it is worth observing that
the derivatives and the integrals cannot be interchanged, according
to the identity \begin{equation}
\frac{\partial}{\partial z}\int_{d/2}^{\infty}dz^{'}f(z^{'})\frac{\partial}{\partial z^{'}}e^{i\kappa\left|z-z^{'}\right|}=\kappa^{2}\int_{d/2}^{\infty}dz^{'}f(z^{'})e^{i\kappa\left|z-z^{'}\right|}-2i\kappa f(z)\label{25}\end{equation}
for any function $f(z)$, $z>d/2$; a similar identity holds for
$z,z'<-d/2$. It is due to the discontinuity in the derivative of
the function $e^{i\kappa\left|z-z^{'}\right|}$ for $z=z^{'}$. We
can see that these equations become equations (\ref{10}) in the non-retarded
limit by taking formally the limit $c\rightarrow\infty$. However,
this is not so for their dispersion equations, as we shall see below.
One can also see from equations (\ref{21}), (\ref{23}) and (\ref{24})
that the coupling between the two bodies is performed through both
bulk and surface degrees of freedom, in contrast to the non-retarded
situation, where this coupling occurs only through surfaces (equations
(\ref{11})).
We turn now to equations (\ref{21}). Taking the second derivative
with respect to $z$ in these equations we get \begin{equation}
\frac{\partial^{2}v_{1,2}^{\perp}}{\partial z^{2}}+\left(\kappa^{2}-\frac{\omega_{1,2}^{2}}{c^{2}}\right)v_{1,2}^{\perp}=0\,\,\,,\label{26}\end{equation}
which tells that $v_{1,2}^{\perp}$ are a superposition of two waves
$e^{\pm i\kappa_{1,2}z}$, where \begin{equation}
\kappa_{1,2}=\sqrt{\kappa^{2}-\frac{\omega_{1,2}^{2}}{c^{2}}}\,\,.\label{27}\end{equation}
We note that such modes are polaritonic modes, since $\omega^{2}=c^{2}\left(k^{2}+\kappa^{2}\right)=c^{2}\left(k^{2}+\kappa_{1,2}^{2}\right)+\omega_{1,2}^{2}=c^{2}K_{1,2}^{2}+\omega_{1,2}^{2}$,
where $\mathbf{K}_{1,2}=(\mathbf{k},\kappa_{1,2})$, which is the
well-known dispersion relation for the polaritonic modes. It can also
be written as $\omega^{2}\varepsilon_{1,2}=c^{2}K_{1,2}^{2}$, where
$\varepsilon_{1,2}=1-\omega_{1,2}^{2}/\omega^{2}$ is the dielectric
function for metals. This relation is well-known in the so-called
theory of \char`\"{}effective medium permittivity\char`\"{}. We take
$v_{1,2}^{\perp}=A_{1,2}e^{i\kappa_{1,2}z}$, where $A_{1,2}$ are
amplitudes to be determined. Then, equations (\ref{21}) have non-trivial
solutions for frequencies $\omega$ given by the roots of the dispersion
equation \begin{equation}
e^{2i\kappa d}=\frac{(\kappa_{1}+\kappa)(\kappa_{2}-\kappa)}{(\kappa_{1}-\kappa)(\kappa_{2}+\kappa)}\,\,.\label{28}\end{equation}
Equation (\ref{28}) has a branch of roots for the damped regime (evanescent
modes) $\kappa_{1}=i\alpha_{1}$, $\kappa_{2}=-i\alpha_{2}$, given
by \begin{equation}
\tan\kappa d=\frac{\kappa\left(\alpha_{1}+\alpha_{2}\right)}{\kappa^{2}-\alpha_{1}\alpha_{2}}\,\,\,,\label{29}\end{equation}
where\begin{equation}
\alpha_{1,2}=\sqrt{\frac{\omega_{1,2}^{2}}{c^{2}}-\kappa^{2}}\,\,,\,\,\omega_{1,2}>c\kappa\,\,\,,\label{30}\end{equation}
and $\kappa$ real. Since these modes are damped inside the bodies
and propagating in-between the bodies they may be called surface plasmon-polariton
modes. It is worth noting the correct choice of the sign of the square
root in this case, in order to get the correct behaviour at infinity,
$v_{1}^{\perp}=A_{1}^{\perp}e^{-\alpha_{1}z}$ for $z>d/2$ and $v_{2}^{\perp}=A_{2}^{\perp}e^{\alpha_{2}z}$
for $z<-d/2$. The roots of equation (\ref{29}) can be written as
\begin{equation}
\Omega_{1}=c\sqrt{k^{2}+\frac{\pi^{2}x_{n}^{2}}{d^{2}}}\,\,\,,\label{31}\end{equation}
where $x_{0}=0$ and $n-1/2<x_{n}<n+1/2$, $n=1,2,3,...$ for $x_{n}<\min\left(\omega_{1},\omega_{2}\right)d/\pi c$.
For identical bodies the roots are given by \begin{equation}
\Omega=c\sqrt{k^{2}+\frac{\pi^{2}n^{2}}{d^{2}}}\label{32}\end{equation}
for any integer $n=0,1,2...$. They correspond to propagating (polariton)
modes ($\kappa_{1}=\kappa_{2}$ and $\kappa$ all real numbers) and
arise from equation (\ref{28}) for $e^{2i\kappa d}=1$. Equation
(\ref{29}) may have another solution in the vicinity of the vertical
asymptote of the function in its \emph{rhs}, which, however, is irrelevant
for our discussion.
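The surface plasmon-polariton branch of equation (\ref{29}) can be located numerically once the equation is cleared of the poles of $\tan\kappa d$; a sketch (assuming `numpy`; the values of $\omega_{1,2}$, $c$, $d$ and the in-plane wavenumber are arbitrary choices) that brackets one root in each interval $(n\pi/d,(n+1)\pi/d)$:

```python
import numpy as np

d, c, w1, w2 = 40.0, 1.0, 1.0, 0.8     # illustrative parameter choices

def g(kappa):
    """Equation (29) cleared of the poles of tan(kappa d)."""
    a1 = np.sqrt(w1**2 / c**2 - kappa**2)
    a2 = np.sqrt(w2**2 / c**2 - kappa**2)
    return (np.sin(kappa * d) * (kappa**2 - a1 * a2)
            - np.cos(kappa * d) * kappa * (a1 + a2))

def bisect(f, a, b, n=100):
    """Plain bisection; assumes f(a) and f(b) differ in sign."""
    for _ in range(n):
        m = 0.5 * (a + b)
        a, b = (a, m) if f(a) * f(m) <= 0 else (m, b)
    return 0.5 * (a + b)

# g alternates sign at kappa = n pi / d, so each interval holds a root.
kmax = min(w1, w2) / c                  # evanescent condition w_{1,2} > c kappa
roots = []
n = 1
while (n + 1) * np.pi / d < kmax:
    roots.append(bisect(g, n * np.pi / d, (n + 1) * np.pi / d))
    n += 1

k = 0.05                                # in-plane wavenumber
Omega1 = c * np.sqrt(k**2 + np.array(roots)**2)
print("x_n =", np.round(np.array(roots) * d / np.pi, 3))
```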
Similarly, $v_{1,2}$ from equations (\ref{23}) and (\ref{24}) obey
the same equation (\ref{26}). We look again for solutions of the
form $v_{1,2}=A_{1,2}e^{i\kappa_{1,2}z}$, where $A_{1,2}$ are amplitudes
to be determined. According to equations (\ref{22}) these modes are
transverse modes, as they should be (for $\kappa_{1,2}$ real). The
relevant dispersion equation is given by \begin{equation}
e^{2i\kappa d}=\frac{(\kappa_{1}+\kappa)(\kappa_{2}-\kappa)(\kappa\kappa_{1}+k^{2})(\kappa\kappa_{2}-k^{2})}{(\kappa_{1}-\kappa)(\kappa_{2}+\kappa)(\kappa\kappa_{1}-k^{2})(\kappa\kappa_{2}+k^{2})}\,\,.\label{33}\end{equation}
We note that this dispersion equation does not become the non-retarded
dispersion equation (\ref{28}) by taking formally the limit $c\rightarrow\infty$.
An analysis similar to the one performed above for equation (\ref{28})
shows that equation (\ref{33}) has a branch of roots \begin{equation}
\Omega_{2}=c\sqrt{k^{2}+\frac{\pi^{2}y_{n}^{2}}{d^{2}}}\,\,\,,\label{34}\end{equation}
where $y_{0}=0$ and $y_{n}<\min\left(\omega_{1},\omega_{2}\right)d/\pi c$.
They correspond to surface plasmon-polariton modes $\kappa_{1}=i\alpha_{1},\kappa_{2}=-i\alpha_{2}$
and $\kappa$ real. We note that $y_{n}$ may differ from $x_{n}$.
For identical bodies these roots are those given by equation (\ref{32}).
Some other isolated roots may appear, as for instance the one corresponding
to an overall damping, \emph{i.e.} $\kappa_{1}=i\alpha_{1},\,\kappa_{2}=-i\alpha_{2},\,\kappa=i\alpha$,
where $\alpha=\sqrt{k^{2}-\omega^{2}/c^{2}}$, $\omega<ck$. It is
given by \begin{equation}
\Omega_{0}=c\sqrt{k^{2}-\frac{\pi^{2}z_{0}^{2}}{d^{2}}}\,\,\,,\label{35}\end{equation}
where $\min\left(\omega_{1},\omega_{2}\right)<\pi\sqrt{2}cz_{0}/d<\max\left(\omega_{1},\omega_{2}\right)$.
Such an isolated mode does not contribute significantly to the energy,
so we may neglect it in our subsequent analysis.
We can take the limit $d\rightarrow\infty$ in equation (\ref{33}).
It can be shown that this limit amounts formally to put $e^{2i\kappa d}=0$.\cite{key-23}
We get in this case the surface plasmon-polariton modes corresponding
to a semi-infinite body, given by $\alpha\alpha_{1,2}=k^{2}$, \emph{i.e.}
\begin{equation}
\omega^{2}=\frac{2\omega_{1,2}^{2}c^{2}k^{2}}{\omega_{1,2}^{2}+2c^{2}k^{2}+\sqrt{\omega_{1,2}^{4}+4c^{4}k^{4}}},\label{36}\end{equation}
as derived previously.\cite{key-23} In general, there are problems
with taking formally the limits $d\rightarrow0$ or $d\rightarrow\infty$
in the above equations, as expected.
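Equation (\ref{36}) can be checked against its defining condition $\alpha\alpha_{1,2}=k^{2}$; a minimal numerical sketch (assuming `numpy`; units with $c=1$, $\omega_{1}=1$ are an arbitrary normalization):

```python
import numpy as np

def omega_sp(k, w1=1.0, c=1.0):
    """Surface plasmon-polariton frequency of equation (36)."""
    return np.sqrt(2.0 * w1**2 * c**2 * k**2
                   / (w1**2 + 2.0 * c**2 * k**2
                      + np.sqrt(w1**4 + 4.0 * c**4 * k**4)))

# Defining condition alpha * alpha_1 = k^2, with alpha = sqrt(k^2 - w^2/c^2)
# and alpha_1^2 = (w1^2 - w^2)/c^2 + k^2 (from kappa^2 = w^2/c^2 - k^2).
for k in (0.1, 0.5, 2.0, 10.0):
    w = omega_sp(k)
    alpha = np.sqrt(k**2 - w**2)
    alpha1 = np.sqrt(1.0 - w**2 + k**2)
    assert np.isclose(alpha * alpha1, k**2)
```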
It is also interesting to look for solutions of the type \begin{equation}
v_{1,2}=A_{1,2}\left[e^{i\kappa_{1,2}z}-e^{\pm i\kappa_{1,2}(d\mp z)}\right]\,\,\label{37}\end{equation}
for equations (\ref{23}) and (\ref{24}), which vanish on
the surfaces, $v_{1,2}(\pm d/2)=0$ (\char`\"{}fixed surfaces\char`\"{}
boundary conditions). In this case, we get again the resonance modes
$\Omega$ given by equation (\ref{32}), irrespective of the bodies
being distinct or identical. In addition, we may get special modes
$\omega=\omega_{1,2}$, $\omega^{2}=c^{2}k^{2}+\omega_{1,2}^{2}$
($\kappa_{1,2}=0$) or $\omega=ck$ ($\kappa=0$), which do not depend
on distance $d$. Other boundary conditions can be put on surfaces
$z=\pm d/2$, and we can get the corresponding eigenmodes.
We note that the dispersion equations (\ref{28}) and (\ref{33})
appear, though in a disguised form, in various formulations of the
fluctuations theory.\cite{key-2},\cite{key-3},\cite{key-5},\cite{key-32}
Within the framework of this theory the dielectric function is included
from the beginning. On the contrary, we recover the dielectric function
in the final results of the present approach, which shows that our
approach is equivalent to the so-called \char`\"{}effective medium
permittivity\char`\"{} theory.
We pass now to the zero-point energy corresponding to the $\Omega_{1,2}$-eigenmodes
given by equations (\ref{31}) and (\ref{34}), or the $\Omega$-branch
given by equation (\ref{32}) (for identical bodies or \char`\"{}fixed
surfaces\char`\"{}), in the limit $\min\left(\omega_{1},\omega_{2}\right)d/\pi c\gg1$.
These are the only eigenfrequencies which depend on the distance $d$.
In this limit these modes form dense sets, and it is easy to see that their contributions
to the zero-point energy are equal (corresponding to the two polarizations),
so we can write the total zero-point energy as \begin{equation}
E=\hbar c\sum_{\mathbf{k},n=0}\sqrt{k^{2}+\frac{\pi^{2}x_{n}^{2}}{d^{2}}}\,\,\,,\label{38}\end{equation}
where $x_{n}$ are defined above; for identical bodies (or for \char`\"{}
fixed surfaces\char`\"{}) $x_{n}=n$. We follow the standard regularization
procedure by removing the ultraviolet divergencies and using the Euler-MacLaurin
formula.\cite{key-34} As is well known, the energy thus regularized
reads \begin{equation}
E=\frac{\hbar c}{2\pi}\sum_{k=1}\frac{B_{2k}}{(2k)!}f^{(2k-1)}(x_{0})\,\,\,,\label{39}\end{equation}
where $B_{2k}$ are Bernoulli's numbers and \begin{equation}
f(x)=\int_{0}dk\cdot k\sqrt{k^{2}+\frac{\pi^{2}x^{2}}{d^{2}}}=\frac{1}{2}\int_{\pi^{2}x^{2}/d^{2}}du\cdot\sqrt{u}\,\,.\label{40}\end{equation}
Since $x_{0}=0$ (and $y_{0}=0$), we get the well-known energy $E=-\pi^{2}\hbar cB_{4}/4!d^{3}=-\pi^{2}\hbar c/720d^{3}$
and Casimir force $F=\pi^{2}\hbar c/240d^{4}$ per unit area. The
same result is obtained for the $\Omega$-modes given by equation
(\ref{32}) with $n=0,1,2...$, corresponding to identical bodies
or the \char`\"{}fixed surfaces\char`\"{} boundary conditions $v_{1,2}(\pm d/2)$.
It is easy to see that for decreasing $\min\left(\omega_{1},\omega_{2}\right)d/\pi c$
the number of $x_{n}$-roots contributing to the energy decreases, the
numerical coefficient of the Casimir force decreases gradually, and
the $d^{-4}$-dependence deteriorates, until a cross-over may occur
to the non-retarded van der Waals-London $d^{-3}$-force.
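The regularized result above can be verified numerically. The sketch below simply evaluates the quoted expressions in SI units at an illustrative plate separation of $1\,\mu$m (a value chosen here for illustration, not one appearing in the paper) and checks that the force is the derivative of the energy.

```python
import math

HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m/s

def casimir_energy(d):
    """Regularized zero-point energy per unit area, E = -pi^2 hbar c / (720 d^3)."""
    return -math.pi**2 * HBAR * C / (720.0 * d**3)

def casimir_pressure(d):
    """Casimir force per unit area, F = pi^2 hbar c / (240 d^4)."""
    return math.pi**2 * HBAR * C / (240.0 * d**4)

# At d = 1 micrometer the attractive pressure is about 1.3 mPa:
print(casimir_pressure(1.0e-6))   # -> ~1.3e-3 Pa

# The force magnitude equals dE/dd; check with a central difference:
d, h = 1.0e-6, 1.0e-12
numeric = (casimir_energy(d + h) - casimir_energy(d - h)) / (2.0 * h)
print(numeric / casimir_pressure(d))   # -> ~1
```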
The dispersion equations (\ref{28}) and (\ref{33}) hold also for
dielectrics, provided the wavevectors $\kappa_{1,2}$ are changed
according to \begin{equation}
\kappa_{1,2}^{2}\rightarrow\kappa^{2}-\frac{\omega_{1,2}^{2}}{c^{2}}\frac{\omega^{2}}{\omega^{2}-\omega_{01,2}^{2}}\,\,.\label{41}\end{equation}
A usual model of a dielectric is obtained for $\omega_{01,2}\gg\omega_{1,2}$.
In this case, the wavevectors $\kappa_{1,2}$ become \begin{equation}
\kappa_{1,2}=\sqrt{\kappa^{2}+\frac{\omega_{1,2}^{2}}{\omega_{01,2}^{2}}\frac{\omega^{2}}{c^{2}}}\,\,\,,\label{42}\end{equation}
and surface plasmon-polariton (evanescent) modes can no longer
occur. In general, under these circumstances, the dispersion equations
(\ref{28}) and (\ref{33}) have no solutions, except for identical
bodies when we may have the $\Omega$-modes given by equation (\ref{32})
($e^{2i\kappa d}=1$) for $n=0,1,2...$. These modes correspond to
propagating polaritons and give again the classical result for the
Casimir force $F=\pi^{2}\hbar c/240d^{4}$ per unit area. Similarly,
for a dielectric-metal pair there is no force, except for boundary
conditions $v_{1,2}(\pm d/2)$ when the resonant $\Omega$-modes given
by equation (\ref{32}) for $n=0,1,2...$ are present. The latter
result holds for any pair of bodies. It is, however, worth stressing
that such results depend on our model of dielectric function for dielectrics,
and, in general, it is necessary to have a quantum-mechanical treatment
for the internal dynamics of the dielectrics.
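The passage from equation (\ref{41}) to equation (\ref{42}) can be checked numerically for frequencies well below the resonance. All parameter values below are illustrative assumptions, not values from the paper.

```python
C = 3.0e8           # m/s
omega1 = 1.0e15     # illustrative oscillator parameter omega_1, rad/s (assumed)
omega01 = 1.0e17    # dielectric resonance omega_01 >> omega_1 (assumed)
omega = 1.0e14      # probe frequency well below resonance (assumed)
kappa = 2.0e7      # illustrative wavevector, 1/m (assumed)

# Full substitution of equation (41):
k1_full_sq = kappa**2 - (omega1**2 / C**2) * omega**2 / (omega**2 - omega01**2)
# Limiting form of equation (42), valid for omega << omega_01:
k1_lim_sq = kappa**2 + (omega1**2 / omega01**2) * omega**2 / C**2

print(k1_full_sq, k1_lim_sq)
# The two agree to high accuracy in this regime:
assert abs(k1_full_sq - k1_lim_sq) / k1_lim_sq < 1e-5
# kappa_1^2 is positive, i.e. kappa_1 is real: no evanescent surface modes
assert k1_full_sq > 0.0
```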
\section{Discussion and conclusions}
In conclusion, we may say that we have derived here van der Waals-London
and Casimir forces acting between two semi-infinite bodies with parallel
surfaces by calculating the electromagnetic eigenmodes in matter and
estimating their zero-point energy (vacuum fluctuations). We have
adopted well-known, simple, usual models for matter polarization in
metals and dielectrics and made use of the equation of motion for
the polarization in order to get coupled integral equations. The eigenfrequencies
of these equations have been identified and used in calculating the
zero-point energy. In the non-retarded (Coulomb) limit we get the
well-known van der Waals-London $d^{-3}$-force, arising from the
surface plasmons, where $d$ is the distance between the two bodies.
The numerical coefficient of this force acquires various values, depending
on the nature of the bodies and on their being distinct or identical.
When retardation is included we get the Casimir $d^{-4}$-force arising
from surface plasmon-polariton modes (evanescent modes) for a pair
of metals. The classical numerical coefficient of this force ($\pi^{2}/240$)
is obtained for distances much larger than the characteristic wavelengths
($\sim c/\omega_{1,2}$, where $\omega_{1,2}$ are the plasmon frequencies)
of the bodies, and it diminishes gradually for shorter distances,
while the force loses its characteristic $d^{-4}$-dependence. For
a pair of identical dielectrics we get the classical Casimir result
arising from propagating polariton modes. The same result holds for
any pair of bodies with \char`\"{}fixed surfaces\char`\"{} boundary
conditions.
As is well known, the fluctuations theory\cite{key-32} predicts
Casimir forces between any pair of bodies, in contrast with our results,
which give a vanishing force for two distinct dielectrics, for instance.
The difference originates in the circumstance, usually overlooked,
that the equivalent of our dispersion equations (\ref{28}) and (\ref{33})
in the fluctuations theory have no solutions in some cases, as, for
instance, for distinct dielectrics. The usual theorem of meromorphic
functions, applied within the framework of the fluctuations theory,\cite{key-4}-\cite{key-6}
gives then a finite result, but it does not represent the energy of
the eigenmodes. The problem does not appear in the non-retarded regime,
where our results coincide with those of the fluctuations theory.
On the other hand, we must stress again that our model
for the dielectric function may not be perfectly adequate for describing
the internal polarization of dielectric matter. Again, this is immaterial
in the non-retarded regime, and we succeeded in computing a $d^{-4}$-van
der Waals-London force between a classical model of polarizable point-like
particle and a semi-infinite body. But our approach fails in this
case in the retarded regime, where a quantum mechanical treatment
is necessary, as in the original attempt in Ref. \cite{key-35}.
Finally, it is worth noting that the dispersion equations (\ref{28})
and (\ref{33}) can also be obtained by calculating the reflected
field in-between the bodies (fields for semi-infinite bodies).\cite{key-23}
If $r_{1,2}$ are the amplitudes of these fields (for a given polarization),
then the dispersion equations (\ref{28}) and (\ref{33}) are obtained
from $r_{1}=r_{2}e^{2i\kappa d}$. We note that $\left|r_{1,2}\right|^{2}$
are the reflection coefficients, and for two perfectly reflecting
bodies $\left|r_{1}\right|=\left|r_{2}\right|=1$. If we neglect the
phases of the coefficients $r_{1,2}$, and put $r_{1}=r_{2}=1$, we
get the Casimir dispersion equation $e^{2i\kappa d}=1$ ($\Omega$-modes
given by equation (\ref{32})). However, it is precisely these phases
that give the damped surface plasmon-polariton regime, as we have
shown in the present paper, and these phases are not equal in the
damped regime, not even for identical bodies. This is related to the
correct choice of the sign of the square root in $\kappa_{1,2}$,
which, as we have shown here, is $\kappa_{1}=i\alpha_{1}$ and $\kappa_{2}=-i\alpha_{2}$
(equations (\ref{29}) and (\ref{30})). For the propagating regime
(vanishing phases) and identical bodies ($r_{1}=r_{2}$) we get again
the Casimir dispersion equation $e^{2i\kappa d}=1$, as we do for
\char`\"{}fixed surfaces\char`\"{} boundary conditions (in the latter
case irrespective of the bodies).
\textbf{Acknowledgments.} The authors are indebted to the members
of the Theoretical Physics Laboratory at Magurele-Bucharest for valuable
discussions, and to their colleague Dr. L. C. Cune for important help
in various stages of this work.
\section{Introduction}
Techniques are being developed by several groups to
use high energy neutrinos
as a probe for the highest
energy phenomena observed in the Universe. Neutrinos yield information
complementary to that obtained from observations of high energy
photons and charged particles
since they interact only weakly
and can reach the observer unobscured by intervening matter
and undeflected by magnetic fields.
The primary mission of large neutrino telescopes
is to probe the Universe in a new observational window and
to search for the sources of the highest
energy phenomena. Presently suggested candidates for these
sources are, for instance, Active Galactic Nuclei (AGN)
and Gamma Ray Bursts (GRB). A neutrino signal from a certain object
would constitute the clearest signature of the hadronic nature of
that cosmic accelerator \cite{GHS}. Apart from that,
neutrino telescopes search for neutrinos produced in annihilations
of Weakly Interacting Massive Particles (WIMPs) which may
have accumulated in the center of the Earth or in the Sun.
WIMPs might contribute to the cold dark matter content
of the Universe, their detection being of extreme importance
for cosmology \cite{WIMP}.
Neutrino telescopes can be also used to
monitor the Galaxy for supernova explosions \cite{SNHalzen}
and to search for exotic particles
like magnetic monopoles \cite{Mon}.
In coincidence with surface air shower arrays, deep neutrino
detectors can be used to study the chemical composition
of charged cosmic rays.
Finally, environmental investigations --
oceanology or limnology in water, glaciology in ice --
have proved to be exciting applications of these devices
\cite{Baikal,Glac}.
Planned high-energy neutrino telescopes
differ in many aspects from existing underground neutrino
detectors. Their
architecture is optimized to achieve a large detection
area rather than a low energy threshold.
They are deployed in transparent "open" media like water
in oceans or lakes, or deep polar ice. This
brings additional
inherent technological challenges compared with the assembly of a detector
in an accelerator tunnel or underground cavities.
Neutrinos are inferred from the arrival times of Cherenkov
light emitted by charged secondaries produced in neutrino
interactions.
The light is mapped by photomultiplier tubes
(PMTs) spanning a coarse three-dimensional grid.
The traditional approach to muon neutrino detection is the observation
of upward moving muons produced in charged current interactions in the
rock, water or ice below the detector.
The Earth is used as a filter with respect to atmospheric muons.
Still, suppression of
downward-going muons is of top importance, since their flux exceeds
that of upward-going muons from atmospheric neutrinos
by several orders of magnitude.
An array of PMTs can also
be used to reconstruct the energy and location of isolated cascades due
to neutrino interactions. Burst-like events, like the onset
of a supernova, might be detected by measuring the increased
count rates of all individual PMTs.
Technologies for under{\it water} telescopes have been pioneered
by the since-decommissioned DUMAND
project near Hawaii \cite{DUMANDWWW, DUMAND} and
by the Baikal collaboration \cite{Baikal,BAIKALWWW}.
In contrast to these approaches, the AMANDA detector \cite{Am0}
uses deep polar ice as target and radiator.
Two projects in the Mediterranean,
NESTOR \cite{NESTORWWW} and ANTARES \cite{ANTARESWWW}, have joined the
worldwide effort towards large-scale underwater telescopes.
BAIKAL and AMANDA are presently taking data with first stage
detectors.
The present paper describes results obtained with the first four
(out of the current thirteen) strings of the AMANDA detector.
The paper is organized as follows: In section~\ref{concept} we give a general
overview of the AMANDA
concept. Section~\ref{amandaa} summarizes the results obtained with a shallow
survey detector called AMANDA-A. Section~\ref{deployment} describes the design
of the first four strings of the deeper array AMANDA-B4.
Calibration of time response and of geometry are explained in section 5.
In section~\ref{simureco} we describe the simulation and
reconstruction methods with respect to atmospheric muons and compare
experimental data to Monte Carlo calculations.
Section~\ref{spase} demonstrates the performance of AMANDA-B4 operated in
coincidence with SPASE, a surface air shower array.
In section~\ref{depth}, the angular spectrum
of atmospheric muons is derived and transformed into
a dependence of the vertical intensity on depth.
Section~\ref{upward} describes the separation of first
upward going muon candidates.
Finally, a summary of
the status of AMANDA and results is presented in
section~\ref{conclusion}.
\section{The AMANDA Concept \label{concept}}
\begin{figure}[htbp]
\centering
\hspace{0.5cm}
\mbox{\epsfig{file=amanda_eiffel_B13_nylon.eps,height=17.0cm}}
\caption {\small \label{fullamanda}
Scheme of the 1998 AMANDA installations. The left picture is
drawn with true scaling. A zoomed view on AMANDA-A
(top) and AMANDA-B10 (bottom) is shown at the center. The right zoom
depicts the optical module.
}
\end{figure}
AMANDA (Antarctic Muon And Neutrino Detector Array) uses the natural
Antarctic ice as both target and Cherenkov medium.
The detector consists of
strings of optical modules (OMs) frozen in the
3 km thick ice sheet at the South Pole. An OM consists of a
photomultiplier in a
glass vessel. The strings are
deployed into holes drilled with pressurized hot water. The
water column in the hole then refreezes within 35-40 hours, fixing
the string in its final position. In our basic design, each OM has
its own cable supplying the high voltage (HV)
as well as transmitting the anode signal.
The components under the ice are kept as simple as possible, all the
data acquisition electronics being housed in a building at the surface.
The simplicity of the components under ice and the non-hierarchical
structure make the detector highly reliable.
Fig.~\ref{fullamanda} shows the current configuration of the AMANDA detector.
The shallow array, AMANDA-A, was deployed at a depth
of 800 to 1000\,m
in 1993/94 in an exploratory phase of the project.
Studies of the optical properties of the ice
carried out with AMANDA-A showed that a high
concentration of residual air bubbles remaining at these depths
leads to strong scattering of light, making
accurate track reconstruction impossible \cite{Aske}.
Therefore, in the polar season
1995/96 a deeper
array consisting of 80 OMs arranged on four strings
(AMANDA-B4)
was deployed at depths ranging from 1545 to 1978 meters, where the
concentration of bubbles was predicted to be negligible according to
extrapolation of AMANDA-A results.
The detector was upgraded in 1996/97 with 216 additional OMs on
6 strings. This detector of 4+6 strings was named AMANDA-B10 and is sketched
at the right side of fig.~\ref{fullamanda}. AMANDA-B10 was upgraded in the
season 1997/98 by 3 strings instrumented between 1150\,m and 2350\,m
which fulfill several
tasks. Firstly, they explore the very deep and very shallow ice
with respect to a future cube kilometer array. Secondly, they
form one corner of AMANDA-II which is the next stage of AMANDA
with altogether about 700 OMs. Thirdly, they have been used
to test data transmission via optical fibers.
There are several advantages that make the
South Pole a unique site for a neutrino telescope:
\begin{itemize}
\item The geographic location is unique:
A detector located at the South Pole
observes the northern hemisphere, and
complements any other of the planned or existing detectors.
\item Ice is a sterile medium.
The noise comes only from PMT dark
noise and from $K^{40}$ decays in the glass housings,
amounting together to 0.5-1.5 kHz for the PMTs and spheres we used.
Ocean and lake experiments
have to cope with 100 kHz noise rates due to bioluminescence
or $K^{40}$ decays (25-30 kHz if normalized to the photocathode
area of the 8$^{\prime \prime}$ PMT used in AMANDA).
This fact not only facilitates counting rate experiments
like the search for
low energy neutrinos from supernovae or GRBs,
but also leads to fewer accidental hits
in muon events -- an essential
advantage for trigger formation and track reconstruction.
\item AMANDA can be operated in coincidence with air shower arrays
located
at the surface. Apart from complementing the information from
the surface arrays by measurements of muons penetrating to
AMANDA depths, the air shower information can be used to
calibrate AMANDA.
\item The South Pole station has an excellent infrastructure. Issues
of vital importance to run big experiments like transportation,
power supply, satellite communication and technical support are solved
and tested during many years of operation.
Part of an existing building can be used to house the surface electronics.
\item The drilling and deployment procedures are
tested and well under control. AMANDA benefits from the
drilling expertise of the Polar Ice Coring Office (PICO).
Currently about
five days are needed to drill a hole and to deploy a string with PMTs
to a depth of 2000\,m. Future upgrades of the drilling
equipment are expected to result in a further speed-up.
\end{itemize}
The optical properties of the ice turned out to be very
different from what had been expected before the AMANDA-A phase.
Whereas absorption is
much weaker than in oceans, scattering effects turned out to
be much stronger. Even at depths below 1400 meters,
where residual bubbles have collapsed almost completely
into air hydrates,
scattering is nearly an order of magnitude stronger than in water
(see below).
Since scattering of light smears out the arrival
times of Cherenkov flashes, a main question was whether under
these conditions track reconstruction
was possible. As shown below, the answer is yes.
\section{AMANDA-A: A First Survey \label{amandaa}}
Preliminary explorations of the site and the drilling technology
were performed in the Antarctic Summer 1991/92 \cite{Am0}.
During the 1993/94 campaign, four strings each carrying 20 OMs
("AMANDA-A") were deployed between 800 and 1000\,m
depth. None of the 73
OMs (equipped with 8$^{\prime \prime}$ EMI PMTs)
surviving the refreezing process failed during the following two
years, giving a mean time between failures (MTBF) $>$ 40 years for
individual OMs in AMANDA-A.
The OMs are connected to the surface electronics by
coaxial cables.
Along with the coaxial cables,
optical fibers carry light from a Nd:YAG laser at the surface to
nylon light diffusers placed about 30\,cm below each PMT
(see fig.~\ref{fullamanda}).
Time calibration is performed by
sending nanosecond laser pulses to individual diffusers and measuring the
photon arrival time distributions at the closest PMT.
From the distribution of the arrival times at {\it distant} PMTs,
the optical
properties of the medium were derived \cite{Aske,Glac}.
The measured timing distributions indicated that photons do not propagate
along straight paths
but are scattered and considerably delayed due to
residual bubbles in the ice. The distributions
could be fitted well with an analytic function describing the
three-dimensional random walk (scattering) including absorption.
These results showed that polar ice at these depths
has a very large absorption
length, exceeding 200\,m at a wavelength of 410\,nm.
Scattering is described by the effective
scattering length $L_{eff} = L_{sc}/ (1 - \langle \cos \theta
\rangle)$,
where $L_{sc}$ is the geometrical scattering length and
$\langle \cos \theta \rangle$ the average cosine of the scattering
angle \cite{Aske}. $L_{eff}$
increases with depth, from 40\,cm at 830\,m
depth to 80\,cm at 970\,m.
In accordance with measurements
at the Vostok Station (East Antarctica \cite{Vostok})
and Byrd Station (West Antarctica)
these results suggested that at depths greater than 1300-1400\,m
the phase transformation from bubbles into
air-hydrate crystals would be complete and bubbles would
disappear.
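The effect of bubble scattering on photon arrival times can be illustrated with a rough diffusion estimate: a minimal sketch, assuming isotropic scattering with step $L_{eff}$, a photon diffusion constant $D = c_{ice}L_{eff}/3$, and a refractive index of about 1.31 for the group velocity. These are illustrative assumptions giving orders of magnitude only; the estimate is not valid once the distance is comparable to $L_{eff}$.

```python
C_ICE = 3.0e8 / 1.31   # group velocity of light in ice, m/s (assumed index ~1.31)

def straight_time(d):
    """Unscattered propagation time over distance d (s)."""
    return d / C_ICE

def diffusive_delay(d, l_eff):
    """Characteristic diffusion time to spread over distance d (s).
    Order-of-magnitude only; invalid once d is comparable to l_eff."""
    diffusion_const = C_ICE * l_eff / 3.0
    return d**2 / (2.0 * diffusion_const)

d = 20.0  # m, an illustrative source-detector distance
for l_eff in (0.4, 25.0):   # ~AMANDA-A depths (40 cm) vs ~AMANDA-B depths (25 m)
    print(l_eff, straight_time(d) * 1e9, diffusive_delay(d, l_eff) * 1e9)  # ns
```

With $L_{eff}\approx 0.4$\,m the diffusion time exceeds the straight-line propagation time by nearly two orders of magnitude, consistent with the broad, strongly delayed arrival time distributions measured in AMANDA-A.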
Although not suitable for track reconstruction, AMANDA-A
can be used as a calorimeter for energy measurements of
neutrino-induced cascade-like events \cite{Rodin}. It is also used as
a supernova monitor \cite{Ralf}. Events that simultaneously trigger
AMANDA-A and the deeper AMANDA-B have been used for methodical
studies like the investigation of the optical
properties of the ice or the assessment of events with a lever
arm of one kilometer.
\section{Deployment and Design of AMANDA-B4 \label{deployment}}
\subsection{Drilling and Deployment Procedure}
Drilling is performed by melting the ice with pressurized
water at 75$^o$C. The drilling equipment
operates at a power of 1.9 MW and the typical drill speed is about 1 cm/s.
It takes about 3.5 days
to drill a 50-60\,cm diameter hole to 2000\,m depth.
In the season 1995/96, we drilled four holes,
the deepest of them reaching 2180\,m.
It took typically 8 hours to remove the drill and the
water recycling pump from the completed hole.
The deployment of one string
with 20 OMs and several calibration devices
took about 18 hours (with a limit of 35 hours set by the
refreezing of the water in the hole).
Several diagnostic devices allow
monitoring of the mechanical and thermal parameters during the entire
refreezing process and
afterwards.
It was shown that the temperature increases with
depth in good agreement with the prediction of a standard
heat flow calculation for South Pole ice.
At the greatest depth, the temperature of the ice is
$\approx$ -31$^o$C, about 20$^o$ warmer than at the surface.
During the refreezing, the pressure reached a maximum of 460 atm,
more than twice the hydrostatic pressure which is asymptotically
established.
\subsection{Detector Design}
The four strings of AMANDA-B4 were deployed
at depths between 1545 and 1978\,m.
An OM consists of a 30 cm diameter glass sphere
equipped with a 8$^{\prime \prime}$ Hamamatsu R5912-2 photomultiplier,
a 14-dynode version of the standard 12-dynode R5912 tube.
The PMTs are operated at a gain of 10$^9$ in order to drive the
pulses through 2\,km of coaxial cable without in-situ amplification.
The amplitude of a one-photoelectron pulse is about 1 V.
The coaxial cable is also used for the HV supply, with the advantage
that only one cable and one electrical penetrator into the
sphere are required for each OM.
The measured noise rate of the AMANDA-B4 PMTs is typically 400 Hz
(threshold 0.4 photoelectrons).
The photocathode is in
optical contact with the glass sphere by the use of silicon gel.
The transmission of the glass of the pressure sphere
is about 90\% in the spectral range between 400 and 600\,nm;
the 50\% cutoff on the UV side is at about 365 nm.
The glass spheres are designed to withstand pressures of
about 660 atm.
\begin{center}
\begin{figure}[htbp]
\centering
\hspace{0.5cm}
\mbox{\epsfig{file=b4_design.eps,height=17.5cm}}
\caption {\small
AMANDA-B4: Top view, with distances
between strings given in meters, and side view
showing optical modules and calibration light sources. Upward
looking PMTs are marked by arrows.}
\label{b4_design}
\end{figure}
\end{center}
Each string carries
20 OMs with a vertical spacing of 20\,m. The fourth string
carries six additional OMs connected by a twisted pair cable.
These six OMs will not be used in the analyses presented in this
paper.
Fig.~\ref{b4_design} shows a schematic view of AMANDA-B4.
All PMTs look down with the exception of \# 1,10 in strings 1 to
3 and \#1,2,10,19,20 in string 4 (with the numbers running
from top to bottom of a string). Strings 1-3
form a triangle with side lengths 77-67-61\,m; string 4 is close to
the center.
The OMs are arranged at depths 1545--1925\,m (string 1),
1546--1926\,m (string 2),
1598--1978\,m (string 3) and
1576--1956\,m (string 4). The additional six OMs equipped
with twisted pair cables are at string 4 between 2009 and 2035\,m.
Seven of the 80 PMTs which define AMANDA-B4
were lost due to overpressure and
shearing forces to the electrical connectors during
the refreezing period.
These losses can be reduced by computer-controlled
drilling, which avoids strong irregularities in the hole
diameter, and by
improved connectors. Another 3 PMTs failed in the course of
the first 3 years of operation, giving a MTBF of 73 years.
\subsection{Electronics and DAQ}
Each PMT can give a series of pulses which can be resolved if
separated from each other by more than a few hundred nanoseconds.
The data recorded consist of the leading and trailing edges of
the pulses. The time-over-threshold gives a measure of the
amplitude of individual pulses. Another measure of the amplitude is
obtained by a voltage sensitive ADC which records the peak value
out of the subsequent hits of an event in a PMT. Actually,
the information consists of leading and trailing edges
of the last 8 resolved pulses, and of the largest amplitude of
those of them which lie in a 4\,$\mu$sec window centered at the
array trigger time. Also recorded is the GPS time at which the event
occurred.
A scheme of the AMANDA electronics layout is shown in fig.~\ref{DAQ}.
\begin{center}
\begin{figure}[htbp]
\centering
\hspace{0.5cm}
\mbox{\epsfig{file=b4_daq.eps,height=8.5cm}}
\caption {\small
DAQ system used for AMANDA-B4 during 1996
}
\label{DAQ}
\end{figure}
\end{center}
The signal from each
cable is fed to a module consisting of a DC blocking high-pass
filter which picks up the pulse, a fan-out sending it
to 2 amplifiers with 100$\times$ and 25$\times$ gain,
and a 2 $\mu$sec delay for the low-gain signal.
The delayed signal is sent to a Phillips 7164 peak sensing ADC.
The other pulse is split and sent to LeCroy 4413 discriminators with
thresholds set at 100 mV corresponding to about 0.3-0.4 photoelectrons
at the given high voltage.
One of the resulting ECL pulses is fed into a LeCroy 3377 TDC while
the other is sent to the majority trigger. The TDC records
the last 16 time edges occurring within a 32 $\mu$sec time window.
The majority logic requests $\ge$ 8 hit PMTs within a sliding window
of 2 $\mu$sec. The trigger produced by this majority scheme
is sent to the NIM trigger logic. The latter
accepts also triggers from
AMANDA-A or the air shower experiments SPASE-1, SPASE-2 and GASP.
Thus AMANDA also records data when these detectors trigger even
if a proper AMANDA trigger is not fulfilled.
The total trigger
rate during 1996 was about 26 Hz on average. The coincidences
from the other detectors contributed about 8 Hz to the total rate.
The differences
in cable length are not compensated before triggering. Therefore
the true trigger window would be about 300 nsec for a vertically
downgoing relativistic particle and $\approx 4\,\mu$sec for
an upgoing one. As a result, downgoing particles are suppressed
compared to upgoing ones.
Upon triggering, an ADC gate of 4\,$\mu$sec width
is formed, a stop signal is sent to
the TDCs and a readout signal is sent to a Hytec LP1341 list
processor.
Then a veto lasting several microseconds inhibits
further trigger signals.
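The sliding-window majority condition described above can be sketched as follows. This is a minimal software illustration of the logic, not a model of the actual LeCroy trigger electronics; the hit lists are hypothetical.

```python
from collections import Counter

def majority_trigger(hits, multiplicity=8, window=2000.0):
    """Earliest time (ns) at which >= `multiplicity` distinct PMTs have fired
    within a sliding `window` (ns); returns None if no trigger forms.
    `hits` is an iterable of (time_ns, pmt_id) pairs."""
    hits = sorted(hits)
    counts = Counter()   # PMTs currently inside the window
    lo = 0
    for t, pmt in hits:
        counts[pmt] += 1
        # drop hits that fell out of the window on the left
        while t - hits[lo][0] > window:
            old = hits[lo][1]
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
            lo += 1
        if len(counts) >= multiplicity:
            return t
    return None

# Eight distinct PMTs firing 100 ns apart satisfy the trigger at the 8th hit:
print(majority_trigger([(100.0 * i, i) for i in range(8)]))   # -> 700.0
# The same eight PMTs spread 3 us apart never share one 2 us window:
print(majority_trigger([(3000.0 * i, i) for i in range(8)]))  # -> None
```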
A separate system ("SN scalers" in fig.~\ref{DAQ}) monitors the
counting rates of individual PMTs and searches for rate excesses
lasting several seconds. Such an increase would be expected for
multiple low-energy neutrino interactions close to each PMT due to
a supernova burst \cite{SNHalzen,Ralf}.
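The sensitivity of such a rate-excess search can be illustrated with a simple Poisson significance estimate. The sketch uses the 400 Hz dark-noise rate quoted above for the AMANDA-B4 PMTs; the 10 s interval and the 1\% collective rate increase are hypothetical numbers chosen for illustration.

```python
import math

def excess_significance(observed, noise_rate, n_pmts, interval):
    """Gaussian significance of a summed counting-rate excess over `n_pmts`
    PMTs during one `interval` (s), assuming Poisson-distributed dark noise."""
    expected = noise_rate * n_pmts * interval
    return (observed - expected) / math.sqrt(expected)

# 80 OMs at 400 Hz dark noise, summed over a hypothetical 10 s interval,
# with a hypothetical 1% collective rate increase:
n, rate, dt = 80, 400.0, 10.0
observed = 1.01 * rate * n * dt
print(excess_significance(observed, rate, n, dt))   # -> ~5.7 sigma
```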
The AMANDA-B4 DAQ was running on a Macintosh Power PC communicating
through a SCSI bus with the CAMAC crate controller.
From the distribution of the time differences between
subsequent events, the dead time of the DAQ is
estimated to be about 12\,\%. The Macintosh has
been replaced by a Pentium-II PC running under LINUX in 1998, and
part of the CAMAC electronics by VME modules.
Fig.~\ref{LE} shows the distribution of the leading-edge times
of one PMT for data taken with the 8-fold majority trigger.
The sharp peak at 23 $\mu$sec is given by the
time when this PMT was the triggering one (i.e. the eighth) within a
2\,$\mu$sec window.
The flat part is due to noise hits and the bulge after the
main distribution to afterpulses (about 6\%).
\begin{center}
\begin{figure}[htbp]
\centering
\hspace{0.5cm}
\mbox{\epsfig{file=LE.eps,height=9.5cm}}
\caption {\small
Leading edge times of PMT \# 10 of AMANDA-B4 for data
taken with an 8-fold majority trigger.
}
\label{LE}
\end{figure}
\end{center}
\subsection{Calibration Light Sources and Ice Properties}
An essential ingredient to the operation of a detector like
AMANDA is the knowledge of the optical properties of the
ice, as well as a precise time calibration of the detector.
Various light calibration sources have been deployed at
different depths in order to tackle these questions:
\vspace{12mm}
\begin{itemize}
\item {\bf The YAG laser calibration system}. It uses optical fibers
with diffusers located at each PMT. This system is
similar to that used for AMANDA-A.
The range of transmittable wavelengths is $\ge$ 450\,nm, the
time resolution is
about 15\,nsec at 530\,nm, the maximum intensity
emitted by the diffusers is $10^{8}$
photons/pulse. Apart from ice investigations, the
laser system is used for time calibration of the PMT closest
to the diffuser and for position calibration (see section~\ref{calib_time_geo}).
\item {\bf A nitrogen laser} at 1850\,m depth, wavelength 337\,nm,
pulse duration 1\,nsec, with a maximum intensity of
$10^{10}$ photons/pulse.
\item {\bf Three DC halogen lamps} (one broadband and two with
filters for 350 and 380\,nm), maximum intensity $10^{14}$
(UV-filtered)
and $10^{18}$ (broadband) photons/second.
\item {\bf LED beacons}, operated in pulsed mode (500 Hz,
pulse duration 7~nsec, $10^6$
photons/pulse) and DC mode ($10^{14}$ to $10^{15}$ photons/sec), wavelength
450\,nm. A filter restricts the output of a few beacons
to 390\,nm, with reduced intensity.
\end{itemize}
Time-of-flight measurements have been made for
a large variety of combinations of optical fiber
emitters and PMTs for the YAG laser system,
and at different wavelengths and intensities.
The nitrogen laser provided data at 337 nm.
The result is a considerable data base of hundreds of time distributions.
The width of the distributions is sensitive predominantly
to scattering and the tail to absorption (see
\cite{desyproposal} for details).
The DC sources provide data for attenuation, i.e.
the combined effect of absorption and scattering.
The YAG laser results indicate a
dramatic improvement compared to AMANDA-A results.
Fig.~\ref{A-B-comparison}
shows the distributions of arrival time for source-detector
distances of 20 and 40 m, respectively,
for AMANDA-A as well as AMANDA-B depths. The much smaller widths
for AMANDA-B support the expectation that bubbles as the dominant
source of
scattering have mostly disappeared at
depths between 1550 and 1900\,m \cite{Vostok}.
\vspace{8mm}
\begin{center}
\begin{figure}[htbp]
\centering
\hspace{0.5cm}
\mbox{\epsfig{file=serap.eps,height=7.7cm}}
\caption {\small
Arrival time distributions for 510 nm photons for two source-detector
distances. Black histograms: AMANDA-B. Hatched histograms: AMANDA-A.
The histograms are normalized to the same area.
}
\label{A-B-comparison}
\end{figure}
\end{center}
\vspace{5mm}
Details of the analysis of the optical properties of the
ice at AMANDA-B4 depths
have been published elsewhere \cite{Kurt}.
Final results will be published in a separate paper.
Figure \ref{He} shows preliminary data on the wavelength
dependence of the coefficients for scattering, $b_e$,
and absorption, $a$. The absorption length $\lambda_a = 1/a$
is between 90 and 100~m
for wavelengths below 460~nm, i.e. ice does not degrade in
transparency towards smaller wavelengths down to 337 nm.
The effective scattering length
$\lambda_{eff} = 1/b_{e}$
varies
between 24 and 30 m in the relevant wavelength range.
$\lambda_{eff} = \lambda_{scatt}/(1 - \langle \cos{\theta} \rangle)$,
with $\lambda_{scatt}$ being the geometric scattering length.
$\langle \cos \theta \rangle $ is the average cosine of the
scattering angle and is supposed to be about 0.8 in deep ice.
The attenuation length $\lambda_{att}$,
which characterizes
the decrease of the photon flux as a function of the distance,
is about 27\,m.
These values are averages over the full
depth interval covered by AMANDA-B4. The variation
of attenuation over this
depth range is within $\pm 30 \%$.
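The relations between the quoted coefficients and lengths can be sketched numerically. The example below is illustrative only: the coefficient values are hypothetical round numbers chosen to land inside the ranges quoted above, not fitted AMANDA results.

```python
def lengths_from_coefficients(a, b_e):
    """Absorption length 1/a and effective scattering length 1/b_e."""
    return 1.0 / a, 1.0 / b_e

def effective_scattering_length(lambda_scatt, mean_cos_theta):
    """lambda_eff = lambda_scatt / (1 - <cos(theta)>)."""
    return lambda_scatt / (1.0 - mean_cos_theta)

# Illustrative values: a = 0.011 per m gives lambda_a of about 91 m,
# inside the 90-100 m range quoted above; a geometric scattering length
# of 5 m with <cos(theta)> = 0.8 gives lambda_eff = 25 m.
lambda_a, lambda_eff = lengths_from_coefficients(0.011, 0.04)
```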
\begin{center}
\begin{figure}[htbp]
\centering
\hspace{0.5cm}
\mbox{\epsfig{file=buford.ps.eps,height=9.7cm}}
\caption {\small
Absorption ($a$) and scattering ($b_e$) coefficients
at an average depth of 1.7\,km,
compared to theory of He and Price \cite{He}.
}
\label{He}
\end{figure}
\end{center}
\vspace{-5mm}
\section{Calibration of Time Response and Geometry \label{calib_time_geo}}
\subsection{Time Calibration}
The measured arrival times from each PMT have to be corrected for
the time offset $t_0$, that is,
the time it takes a
signal to propagate through the PMT and the coaxial cable and get
digitized by the DAQ.
The time offset is determined
by sending light pulses from the YAG laser
to the diffuser nylon balls
located below each OM. Two fibers are available
for each PMT, one single and one multi-modal.
The time it takes for light to travel through the fiber is measured
using an OTDR (Optical Time Domain Reflectometer) and
subtracted from the time distributions recorded.
For each PMT, the time difference between the laser
pulse at the surface and the PMT response arriving back is measured.
Upon arrival at the surface, the pulses have traveled through nearly
2000 meters of cable and are dispersed, with
typical time-over-thresholds of 550 nsec and rise times of 180 nsec.
The threshold used for TDC measurements is set to a constant value
with the consequence that small pulses will reach that value later
than larger ones. This causes an amplitude-dependent offset
or "time walk", which can be corrected for by
\begin{equation}
\label{eq:adc_correction}
t_{true} = t_{LE} - t_0 - \alpha / \sqrt{ADC}.
\end{equation}
Here, $t_{LE}$ is the measured leading edge
time and $t_{true}$ the true time at
which the light pulse reaches the photocathode.
The estimates of the time offset $t_0$ and the time-walk
term $\alpha$ are extracted from scatterplots like
the one shown in fig.~\ref{adc-correction}.
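The time-walk correction of eq.~(1) is a one-line computation; a minimal sketch follows, with purely illustrative numbers rather than calibration constants.

```python
import math

def true_time(t_le, t0, alpha, adc):
    """Time-walk correction of eq. (1):
    t_true = t_LE - t0 - alpha / sqrt(ADC)."""
    return t_le - t0 - alpha / math.sqrt(adc)

# For the same measured leading edge, a small pulse (low ADC) gets a
# larger correction than a large pulse, compensating its later
# threshold crossing.
small_pulse = true_time(100.0, 10.0, 40.0, 100.0)   # correction 4 ns
large_pulse = true_time(100.0, 10.0, 40.0, 1600.0)  # correction 1 ns
```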
\begin{figure}[htbp]
\begin{center}
\mbox{\epsfig{figure=adc-correction.eps,width=15.5cm}}
\caption{\small {Example of a fitted leading edge (with
100$<$ADC$<$1200) for module 19 on string 3. The ADC value
measures the peak value of the amplitude.}}
\label{adc-correction}
\end{center}
\end{figure}
The time resolution achieved in this way can be
estimated by the standard deviation of a Gaussian fit to the
distribution of time residuals after correction, yielding 4--7\,nsec (see
Fig.~\ref{time-resolution} for an OM with 4\,nsec resolution).
Part of the variation is due to quality variations of the
1996 optical fibers. Laboratory measurements yield a Gaussian
width of 3.5 nsec after 2\,km cable.
\begin{figure}[htbp]
\begin{center}
\mbox{\epsfig{figure=s3m19_resolution.eps,width=13cm}}
\caption{\small {Residuals after subtracting the time
correction obtained with the fitted parameters
$t_0$ and $\alpha$ for module 19 on string
3. The standard deviation of the Gaussian fit is 4 nsec.}}
\label{time-resolution}
\end{center}
\end{figure}
\subsection{Position Calibration}
Information about the exact geometry of the array can be
obtained by different methods. Firstly, the measured propagation times
of photons between different light emitters and receivers
can be used to determine their relative positions. Secondly,
absolute positions can be obtained from drill recordings
and pressure sensors.
\medskip
{\large \it Laser Calibration}
The YAG laser, the nitrogen laser and the pulsed LEDs can be used
to infer the OM positions from the time-of-flight of photons
between these light sources and the OMs. The zero time is
determined from the response of the OM closest to the light source
which is triggered by unscattered photons. This PMT is lowered in
voltage in order not to be driven in saturation, and a
time correction accounting for the longer PMT transit time
is added.
In contrast to the closest OM,
the distant OMs see mostly scattered photons.
However, for a few of the events out of a series
of about 1500 laser pulses, the
leading edge should be produced by
photons which are only slightly scattered.
Therefore the distance between emitter and OM can be estimated
from the earliest events in the time-difference distribution (see~fig.~\ref{bias}).
\begin{figure}[htbp]
\begin{center}
\mbox{\epsfig{figure=bias_measurement.eps,width=13.5cm}}
\caption{\small {Simulated time-shift distribution for 1500
one-photoelectron events, for a distance of 60\,m
between emitter and receiver.
A Gaussian smearing of 10\,nsec
was applied to individual entries. Clear ice would yield a
10\,nsec wide peak at 0\,nsec.}}
\label{bias}
\end{center}
\end{figure}
In order to reduce the sensitivity to fluctuations in the number of
early hits and binning effects, the whole left flank of the distribution is
fitted with a Gaussian between the maximum of the distribution
({\tt height0} in fig.~\ref{bias}) and the first bin with
a height larger than {\tt height1} =1/10 {\tt height0}.
The corrected "first" time is given by that bin ({\tt bin1})
for which the
fitted Gaussian yields a height exceeding {\tt height1}.
This time has to be corrected further for the shifts due to
scattering which are expected even for the first bin of the
distribution. The corrections were obtained from
Monte Carlo (MC) calculations
and are almost insensitive to variations in absorption and scattering
length of a few meters.
Given the limited number of measured emitter-OM combinations available for
AMANDA-B4, it
would have been impossible to keep the coordinates of
each OM as free
parameters in a global position fit.
Therefore, all strings were
assumed to be straight and parallel and the OMs to be at a fixed
vertical distance (20\,m) relative to each other. For each emitter
covering enough OMs, a graph of the distance $d(z_i)$
between source and OM $i$ versus depth $z_i$ can be drawn
(see fig.~\ref{pos_princip}). The inter-string distance $D$ and
emitter depth $z_0$ with respect to the $z_i$
can be estimated from this graph by fitting
(fig.~\ref{yag-string2}):
\begin{equation}
\label{eq:distance_function}
d(z_i) = \sqrt{D^2+(z_i-z_0)^2}.
\end{equation}
The residuals from all fits to the 1996 AMANDA-B4 data
have a standard deviation of 2 m.
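The fit of eq.~(2) can be sketched compactly: since $d^2$ is a parabola in $z$, the inter-string distance $D$ and emitter depth $z_0$ follow from a linear least-squares fit. The geometry values below are synthetic, not AMANDA survey data.

```python
import numpy as np

def fit_string_geometry(z, d):
    """Fit d(z_i) = sqrt(D^2 + (z_i - z0)^2) (eq. 2) via the parabola
    d^2 = z^2 - 2*z0*z + (z0^2 + D^2); returns (D, z0)."""
    c2, c1, c0 = np.polyfit(np.asarray(z, float), np.asarray(d, float) ** 2, 2)
    z0 = -c1 / (2.0 * c2)
    D = np.sqrt(c0 / c2 - z0 ** 2)
    return D, z0

# Synthetic example: OMs spaced 20 m in depth, inter-string distance
# 30 m, emitter 10 m below the reference depth.
z = np.arange(-60.0, 61.0, 20.0)
d = np.sqrt(30.0 ** 2 + (z - 10.0) ** 2)
```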
\begin{figure}[htbp]
\begin{center}
\mbox{\epsfig{figure= pos_princip.eps,height=8cm}}
\caption{\small {Principle of position measurement}}
\label{pos_princip}
\end{center}
\end{figure}
In 1996--1997, six more strings were added on the outside
of the B4 detector, and a new position calibration performed. The
increased statistics and possibilities of new cross-checks and
constraints enabled correction of the existing geometry with
an uncertainty of 1\,m in the horizontal plane and 0.5-1.0\,m in
depth.
\begin{figure}[htbp]
\begin{center}
\mbox{\epsfig{figure=yag.eps.eps,height=8cm}}
\caption{\small {Fit of the distance $d(z_i)$ versus depth-shift
$z_i - z_2$ between OMs at string 4 and
a laser emitter at string 2. String distance $D$ and
depth shift $z_0 - z_2$ are given by the minimum
of the parabola.}}
\label{yag-string2}
\end{center}
\end{figure}
\medskip
{\large \it Drill data}
The geometry of the array is surveyed in an
independent way by monitoring the position of the
drill-head while it is going down each hole.
The drill instrumentation recorded, at each 10\,cm step,
the path length, the value of the Earth's magnetic field as
measured by a flux magnetometer, and the angles (bank and elevation)
given by perpendicular pendulums. This information can then be used to
reconstruct the hole profiles. The results found are compatible
with the laser measurements within 1-2 m in the horizontal plane.
The advantage of this method is that it yields positions relative to
the surface, i.e. in a global reference frame. It also takes into
account tilts in the strings. However, it does not yield the depth
locations of the OMs. The absolute depths of the strings were given by
pressure sensors deployed with the OMs.
\section{Simulation and Reconstruction of Muons \label{simureco}}
\subsection{Simulation \label{simulation}}
Downgoing muons are generated by full atmospheric shower programs
which simulate the production of muons by isotropic primary
protons \cite{Boziev} or protons and nuclei \cite{Hemas} with
energies up to 1 PeV. The muons are propagated
down to a plane close to the detector.
Upgoing muons are generated from
atmospheric neutrinos, using the flux parameterization
given in \cite{Volkova}, from neutralinos annihilating in the
center of the Earth, using the
flux calculations of \cite{WIMP},
and from point sources, using arbitrary
energy distributions and source angles; they may start anywhere within
the fiducial volume (which increases with increasing neutrino
energy due to the muon range) and are propagated simulating the
full stochastic energy loss according to \cite{Lohmann}.
\pagebreak
It would be computationally impractical to generate
and follow the path of each of the multiply scattered Cherenkov
photons produced by muons and
secondary cascades for every simulated event. Therefore, this step is
accomplished by doing the photon propagation only once by a
separate MC program and storing
the results in large multidimensional tables. The tables give the
distribution of the mean number of photoelectrons expected and of
the time delay distribution, as a
function of the position and the orientation of a PMT
relative to the muon track. They include the effects of the
wavelength dependent quantum efficiency, the transmission
coefficients of glass spheres and optical gel, and the
absorption and scattering properties of the ice. Once the tables
are compiled, events can be simulated quickly by locating the
PMT relative to any input particle and looking up the expected
number and time distribution of photoelectrons in the tables\footnote{This
method assumes that ice is isotropic
and homogeneous, which is reasonable in
a first approximation: firstly, since
the variations of the original ice with depth have been
measured to be smaller than $\pm 25\%$, secondly, since the freshly
frozen ice in the holes occupies only a small volume of the array.}.
The known characteristics of the AMANDA PMTs, the
measured pulse shapes, pulse heights and delays after
signal propagation along the cables, and the effect of electronics
are then used to generate amplitude and time information
\cite{Stephan}.
\subsection{Reconstruction \label{reco}}
The reconstruction procedure for a muon track consists of
five steps:
\begin{enumerate}
\vspace{-2mm}
\item Rejection of noise hits, i.e. hits which have either a
very small ADC value or which are isolated in time with respect
to the trigger time or with respect to the nearest hit OM.
\vspace{-1mm}
\item A line approximation following \cite{Stenger} which yields a
point on the track, $\vec{r}$, and a velocity
$\vec{v}$:
\vspace{-1mm}
$$
\vec{r} = \langle r_i \rangle - \vec{v} \cdot \langle t_i \rangle
\hspace{2cm} \vec{v} = \frac{\langle \vec{r}_i t_i \rangle -
\langle \vec{r}_i \rangle \langle t_i \rangle}
{\langle t_i^2 \rangle - \langle t_i \rangle^2}.
$$
\vspace{-1mm}
with $\vec{r_i}$ and $t_i$ being the coordinate vector and response time
of the $i$-th PMT.
\vspace{-1mm}
\item A likelihood fit based on the measured times, which takes
the track parameters obtained from the line
fit as start values.
This "time fit" yields angles and coordinates
of the track as well as a likelihood ${\cal L}_{time}$.
\vspace{-1mm}
\item A likelihood fit using the fitted track parameters from the time fit
and varying the light emission per unit length until the probabilities
of the hit PMTs to be hit and non-hit PMTs to be not hit are
maximized. This fit does not vary the direction of the track
but yields a likelihood ${\cal L}_{hit}$
which can be used as a quality parameter.
\vspace{-1mm}
\item A quality analysis, i.e. application of cuts in order to reject badly
reconstructed events.
\vspace{-1mm}
\end{enumerate}
Steps 3 and 5 are outlined in the following two subsections.
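The line approximation of step 2 is an analytic least-squares fit over the hit coordinates and times; it can be sketched as follows (synthetic hit data, NumPy assumed available):

```python
import numpy as np

def line_fit(r, t):
    """First-guess track from hit positions r_i (N x 3, meters) and hit
    times t_i (nsec), following step 2:
    v = (<r_i t_i> - <r_i><t_i>) / (<t_i^2> - <t_i>^2),
    r = <r_i> - v <t_i>."""
    r = np.asarray(r, float)
    t = np.asarray(t, float)
    v = (np.mean(r * t[:, None], axis=0) - r.mean(axis=0) * t.mean()) / \
        (np.mean(t ** 2) - t.mean() ** 2)
    return r.mean(axis=0) - v * t.mean(), v

# Hits lying exactly on a straight track are recovered exactly:
t = np.array([0.0, 10.0, 20.0, 30.0])
hits = np.array([1.0, 2.0, 3.0]) + np.outer(t, [0.25, 0.0, -0.1])
point, v = line_fit(hits, t)
```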
\subsection{Time Fit}
\begin{figure}[htbp]
\begin{center}
\mbox{\epsfig{file=adam_1pe_vers_2pe.eps,height=13cm}}
\caption[2]{\small
Delay-time distributions for modules facing (full curves) and
back-facing (dashed curves) a muon track. Data are shown for
muon tracks with impact
parameters of 5 meters (a) and 150 meters (b).
}
\label{adam}
\end{center}
\end{figure}
In an ideal medium without scattering, one would reconstruct
the path of minimum ionizing muons most efficiently
by a $\chi^2$ minimization process.
Due to scattering in ice, the distribution of arrival times
of photoelectrons seen by a PMT is not Gaussian
but has a long tail at the high side -- see fig.~\ref{adam}.
To cope with the non-Gaussian timing distributions
we used a likelihood analysis. In this approach, a
normalized probability distribution function $p_i(t)$ gives the
probability of a certain time delay $t$ for a given hit $i$,
measured relative to the arrival time of unscattered photons.
This probability function is derived from the MC simulations
based on the photon propagation tables introduced in section~\ref{simulation}.
The probability
depends on the distance and the orientation of the PMT with respect
to the muon track.
By varying the track parameters the logarithm of a
likelihood function ${\cal L}$ is maximized.
$$
\log ({\cal L}) = \log \left ( \prod_{\mbox{all hits}} p_i \right )
= \sum_{\mbox{all hits}} \log ( p_i )
$$
In order to be used in the iteration process, the time delays as obtained
from the separate photon propagation Monte-Carlo have to be parameterized
by an analytic formula.
The parameterization of the
propagation model itself is
extended to allow for timing errors of PMTs
and electronics as well as
the probability of noise hits at random times.
The AMANDA collaboration has developed two independent reconstruction
programs, which are based on different parameterizations of
the photon propagation and different minimization methods
\cite{Bouchta,icrc_reco,wieb2}.
The comparison of these algorithms and the use of
different optical models show that the results of
both methods are in good agreement with each other
and do not depend on a fine-tuning
of the assumed optical parameters.
Fig.~\ref{adam} shows the result of the parameterization of the time
delay for two distances and for two angles between the PMT axis
and the muon direction.
At a distance of 5\,m and a PMT facing toward the muon
track, the delay curve is dominated by the time jitter of the
PMT. However, if the PMT looks in the opposite direction,
the contribution of scattered photons yields a long
tail towards large delays. At distances as large as 150\,m,
distributions for both directions of the PMT are close to each
other since all photons reaching the PMT are multiply scattered.
The parameterization used for most
of the results presented in this paper is
a Gamma distribution modified with an absorption term \cite{Pandel}
$$
p(d,t) = N \cdot { \tau^{-(d/\lambda)} \cdot t^{(d/\lambda -1)} \over \Gamma
(d/\lambda ) }
\cdot e^{- \left( t/\tau + c_w \cdot t/X_0 + d/X_0 \right) } ~,
$$
with the distance $r$ between OM and muon track,
the scaled distance $d \approx 0.8 / \sin (\theta_c ) \cdot r $,
the absorption length $X_0 $ and only two parameters
$\tau \approx 700 $\,ns
and $ \lambda \approx 50$\,m.
The second approach uses an F-distribution with an exponential
tail for large time-delays, which results in a comparable accuracy
\cite{Bouchta}.
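The Gamma-distribution parameterization above can be sketched numerically with the parameter values quoted in the text ($\tau \approx 700$\,ns, $\lambda \approx 50$\,m); the absorption length and the photon speed in ice used below are illustrative assumptions, and the normalization is left symbolic.

```python
import math

TAU = 700.0       # ns, from the text
LAM = 50.0        # m, from the text
X0 = 100.0        # m, assumed absorption length
C_W = 0.3 / 1.33  # m/ns, assumed photon speed in ice (n ~ 1.33)

def pandel(d, t, norm=1.0):
    """Unnormalized delay-time density for scaled distance d (m) and
    delay t (ns); `norm` stands in for the normalization factor N."""
    xi = d / LAM
    return (norm * TAU ** (-xi) * t ** (xi - 1.0) / math.gamma(xi)
            * math.exp(-(t / TAU + C_W * t / X0 + d / X0)))

# Close to the track (small d) the density is peaked at small delays;
# far from the track it is broad and shifted to large delays, as in
# fig. 10.
```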
\subsection{Quality Analysis}
Quality criteria are applied in order to
select events which are "well" reconstructed.
The criteria define cuts on topological event parameters
and observables derived from the
reconstruction.
Below we list those used in the following:
\begin{itemize}
\item
Speed $|\vec{v}|$ of the line fit. Values close to the speed of light
indicate a reasonable pattern of the measured times, values smaller
than 0.1 m/nsec indicate an obscure time pattern.
\item
"Time" likelihood per hit PMT $\log({\cal L}_{time})/N_{hit}$.
\item
Summed hit probability for all hit PMTs $\sum P_{hit}$.
\item
"Hit" likelihood normalized to all working channels,
$\log({\cal L}_{hit})/N_{all}$.
The latter two parameters are good indicators of whether the
location of the fitted track, which relies exclusively on the
time information, is compatible with the location of the hits
and non-hits within the detector.
\item
Number of direct hits, $N_{dir}$, which is defined to be
the number of hits with time residuals $t_i\mbox{(measured)} -
t_i\mbox{(fit)}$
smaller than a certain cut value. We use cut values of 15\,nsec,
25\,nsec and 75\,nsec, and denote the corresponding
parameters as $N_{dir}$(15), $N_{dir}$(25) and
$N_{dir}$(75), respectively. Increasing the time
window leads to higher cut values in $N_{dir}$ but allows
a finer gradation of the cut.
Events with more than a certain minimum number of direct
hits (i.e. only slightly delayed photons) are likely to be
well reconstructed. This cut turned
out to be the most powerful cut of all \cite{icrc_reco}.
\item
The projected length of direct hits onto the reconstructed
track, $L_{dir}$.
A cut in this parameter rejects events with a small lever arm.
\item
Vertical coordinate of the center of gravity, $z_{COG}$.
Cuts on this parameter are used to reject events close
to the borders of the array. Very distant tracks are not likely to be
well reconstructed.
\end{itemize}
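Applying such cuts amounts to a simple boolean combination per event; a minimal sketch, assuming the observables above have already been computed and using hypothetical field names (the default thresholds follow the values used later in this section):

```python
def passes_quality_cuts(event,
                        min_speed=0.1,      # m/nsec, line-fit speed
                        min_ndir15=5,       # direct hits, N_dir(15)
                        min_sum_phit=2.5):  # summed hit probability
    """event: dict holding the observables listed above (keys are
    hypothetical, chosen here for illustration)."""
    return (event["line_fit_speed"] > min_speed
            and event["n_dir_15"] >= min_ndir15
            and event["sum_p_hit"] >= min_sum_phit)

good = {"line_fit_speed": 0.29, "n_dir_15": 7, "sum_p_hit": 3.1}
bad = {"line_fit_speed": 0.05, "n_dir_15": 2, "sum_p_hit": 1.0}
```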
Fig.~\ref{serap3} shows the distribution of
two of these observables, the number of direct hits within
15\,nsec, $N_{dir}$(15),
and the summed hit probability $\sum{P_{hit}}$ of all hit channels.
It demonstrates the
good agreement between results from MC and experiment.
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=myserap3.eps,width=13cm}}
\caption[2]{\small
Distributions of two reconstructed
event observables for MC down-going muon events
(dashed lines) and from
experimental data (full lines).
{\it Left:} Number of direct hits,
$N_{dir}$(15);
{\it Right:} summed hit probability, $\sum{P_{hit}}$.
}
\label{serap3}
\end{figure}
Fig.~\ref{serap4} demonstrates the effect of cuts on the
number of
direct hits and the summed hit probability
on the reconstructed
angular distribution of experimental
data and the MC sample.
The cuts are $N_{dir}\mbox{(15)} \ge 5$
and $\sum{P_{hit}} \ge 2.5$.
Both samples are dominantly due to down-going atmospheric muons.
Despite that, a small but similar fraction of events is
falsely reconstructed
as up-going events. After application of
the above quality criteria the tail below the horizon almost
disappears. Note that not
only the shapes but also the absolute passing rates
on all cut levels
are in good agreement between data and Monte Carlo.
The angular mismatch
between the reconstructed muon angle and the original angle
used in the MC simulation after both cuts
is 5.5 degrees. We note that this value strongly
depends on the particular set of cuts, the minimum
acceptable passing rate, the incident angle of the muon,
and the range of muons stopping in the array.
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=all_ang_4str.vrt.eps,width=11.5cm}}
\caption[2]{ \small
Reconstructed zenith angle distributions of experimental
data (line) and downward muon MC events (points)
after a stepwise application of quality cuts.
}
\label{serap4}
\end{figure}
\section{SPASE coincidences \label{spase}}
AMANDA is unique in that it can be
calibrated by muons with
known zenith and azimuth angles which are tagged by
air shower detectors at the surface. AMANDA-B4 has been running in
coincidence with the two SPASE (South Pole Air Shower Experiment)
arrays, SPASE-1 \cite{Beaman93}
and SPASE-2 \cite{Gais95}. SPASE-1 was located 840 m
from the center of the AMANDA array projected to the
surface, whereas SPASE-2 is located
370\,m away (see fig.\ref{Spase1}).
The scintillation detectors of SPASE-2 are complemented by
an array of air Cherenkov detectors \cite{Spase2,Vulcan}.
The primary goal of these
devices is the investigation of the chemical composition of
primary cosmic rays in the region of the "knee" \cite{Miller97}.
Another detector, the gamma imaging telescope GASP,
is also operated in coincidence with AMANDA.
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=sideview.ps,height=11cm}}
\caption [10]
{\small Side view of the two SPASE arrays relative to
AMANDA-A and AMANDA-B.
}
\label{Spase1}
\end{figure}
In this section, we summarize calibration results
obtained from the coincident operation
of AMANDA and SPASE-2. SPASE-2 consists of 30 scintillator stations
of 0.8\,m$^2$ on a 30\,m triangular grid. The area of the array is
$1.6 \cdot 10^4$\,m$^2$, and it has been running since January 1996.
For each air shower, the direction,
core location, shower size and GPS time are determined.
Showers
with sufficient energy to trigger SPASE-2 ($\approx$ 100\,TeV)
yield on average 1.2 muons penetrating
to the depth of AMANDA-B.
On every SPASE-1 or SPASE-2 trigger, a signal is sent
to trigger AMANDA.
The GPS times of the separate events
are compared offline to match coincident events.
A one-week sample
of these events has been analyzed in order to compare
the directions of muons determined by AMANDA-B4 to those
of the showers measured by SPASE-2. A histogram of the zenith mismatch
angle between SPASE-2 and AMANDA-B4
is shown in fig.\ref{Spase2}.
The selected events are required to
have $\ge$8 hits along 3 strings
and to yield a track which is closer than 150\,m to the
air shower axis measured by SPASE-2 (upper histogram).
The hatched histogram shows the distribution of the zenith
mismatch angle after application of the following quality cuts:
\begin{itemize}
\item likelihood $\log({\cal L}_{time})/N_{hit} > $ -12,
\item more than four hits with residuals smaller than 75 nsec
($N_{dir}\mbox{(75)} > 4$),
\item length of the projection
of OMs with direct hits to the track larger than 50 meters
($L_{dir}\mbox{(75)} > 50$\,m).
\end{itemize}
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=spase_4b_2.eps,height=8.5cm}}
\caption [11]{\small
Mismatch between zenith angles determined in AMANDA-B4 and SPASE-2.
}
\label{Spase2}
\end{figure}
428 of the originally
840 selected events pass these
quality cuts. The Gaussian fit has a mean of
$(0.14 \pm 0.19)$ degrees and a width of $\sigma = (3.6 \pm 0.17)$
degrees. This is nearly 2 degrees better than the resolution
obtained in the previous section for {\it all} downward
muons and for a different set of cuts. MC yields a resolution of about
4 degrees.
The small mean implies that there is little
systematic error in zenith angle
reconstruction.
The SPASE-2 pointing accuracy,
which contributes to the average mismatch,
depends on zenith angle and shower size.
For most of the coincidence events, the SPASE-2 pointing
resolution, defined as the angular distance within which
63$\%$ of events are contained,
is between 1$^\circ$ and 2$^\circ$ \cite{Spase2, Vulcan}.
\section{Intensity-vs-Depth Relation for Atmospheric Muons \label{depth}}
\subsection{Angular Dependence of the Muon Flux}
In section~\ref{simureco},
the muon angular distribution was shown as a function
of various cuts in order to demonstrate the agreement between
experimental data and MC simulations.
In this section, we calculate the muon intensity $I$
as a function of the zenith angle $\theta$.
$I(\theta_{\mu})$ is given by
\begin{equation}\label{fluxform1}
I(\theta_{\mu})= \frac{S_{dead}}{T \cdot \Delta \Omega}\,
\frac{
N_{\mu}(\theta)\,
\cdot m(\theta_{\mu})}
{{\epsilon_{rec}(\theta_{\mu})} \cdot
{A_{eff}}(\mbox{cut},\theta_{\mu})}
\end{equation}
where
\vspace{-2mm}
\begin{itemize}
\item $N_{\mu}(\theta$) is the number of muons assigned by the
analysis to a zenith angle interval centered around
$\cos \theta_{\mu}$. For the analysis presented in this
section, we start from the angular
distribution
$N_{\mu}(\theta_{rec})$ obtained from
the reconstruction, without applying cuts. This distribution
is strongly smeared (see fig.\,\ref{serap4}, top).
We have calculated the elements of the parent
angular distribution $N_{\mu}(\theta)$ from the
reconstructed distribution $N_{\mu}(\theta_{rec})$
using a regularized deconvolution procedure
\cite{Decon,Blobel}.
\item $T$ is the run time. We used the data from June 24, 1996,
with $T=22.03$ hours, and 9.86\,$\cdot \,10^5$ events triggering
AMANDA-B4.
\item $S_{dead}$ corrects for the dead time of the data
acquisition system.
This factor was determined from the
time difference distribution of subsequent events.
The dead-time losses for the two
runs used in this analysis are 12\%, i.e. $S_{dead}=1/0.88 = 1.14$.
\item $\Delta\Omega$ is the solid angle covered by the
corresponding $\cos{\theta_{\mu}}$ interval.
\item $A_{eff}(\mbox{cut},\theta_{\mu})$ is the effective area, after
the application of a multiplicity trigger,
for a given cut at zenith angle $\theta_{\mu}$. The effective
area is shown in fig.~\ref{aeff} as a function of the zenith
angle and for different cuts on the number of hit OMs.
\item $\epsilon_{rec}(\theta_{\mu}$) is the reconstruction efficiency for
zenith angle $\theta_{\mu}$ which ranges between 0.82 at
$\cos \theta = 1.0$ and 0.75 at $\cos \theta = 0.2$.
\item $m(\theta_{\mu})$ is the mean muon multiplicity at angle
$\theta_{\mu}$ at the "trigger depth".
The trigger depth
$h_{eff}$ was defined as
depth of
$\overline{z_{OM}}$, the center of gravity in the vertical coordinate $z$
of all hit OMs.
The average $h_{eff}$ depends on the angle.
It is highest for $\cos\theta$ between 0.4 and 0.8
(about 30 m below the detector center)
and falls toward the vertical (at maximum 80\,m below the center).
The mean muon multiplicity is about 1.2 for vertical tracks and
decreases towards the horizon.
Since the generator used in this analysis \cite{Boziev} simulates
only proton induced showers, this value is an
underestimation by about 10\%.
\end{itemize}
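Eq.~(3) is straightforward to evaluate once the listed factors are known; the sketch below uses illustrative numbers (only $S_{dead} = 1.14$ and $m \approx 1.2$ are taken from the text), not the measured AMANDA values.

```python
def muon_intensity(n_mu, m, t, d_omega, s_dead, eps_rec, a_eff):
    """I(theta) following eq. (3): counts n_mu and mean multiplicity m
    in a zenith bin of solid angle d_omega (sr), live time t (s),
    dead-time factor s_dead, reconstruction efficiency eps_rec and
    effective area a_eff (cm^2); returns a flux in cm^-2 s^-1 sr^-1."""
    return s_dead / (t * d_omega) * n_mu * m / (eps_rec * a_eff)

# Illustrative call with hypothetical bin contents:
flux = muon_intensity(n_mu=1.0e5, m=1.2, t=22.03 * 3600.0,
                      d_omega=0.5, s_dead=1.14, eps_rec=0.8,
                      a_eff=1.0e7)
```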
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=aefft.eps,height=9cm}}
\caption[2]{\small
Effective trigger area of AMANDA-B4 as a function
of zenith angle, for 3 different majority criteria.
}
\label{aeff}
\end{figure}
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=angularflux.eps,height=9cm}}
\caption[2]{\small
Angular distribution of the downward going muon flux,
$I(\theta_{\mu})$, as obtained from eq.(3).
}
\label{fluxtheta}
\end{figure}
Fig.~\ref{fluxtheta} shows the angular distribution of the
flux of downgoing muons, $I(\theta_{\mu})$, as obtained from
eq.~(3). In order to illustrate
the stability of the method with respect to cuts biasing
the measured angular distribution, the flux is shown
for samples defined by different majority triggers
($N_{hit} >$ 8,\,10,\,12,\,14,\,16,\,18). Apart from the point closest
to the horizon which is not only most strongly biased but also has
the lowest statistics, deviations are within 25\%.
For further studies we use the sample with $N_{hit} \ge 16$.
\subsection{Transformation of Angular Flux to
Vertical Intensity as a Function of Depth}
The measured flux $I(\theta)$ can be transformed into a
vertical flux $I(\theta=0,h)$, where $h$ is the ice thickness
seen under an angle $\theta$:
\begin{equation}
I(\theta=0,h)=I(\theta) \cdot \cos(\theta) \cdot c_{corr}
\end{equation}
The $\cos \theta$-conversion correcting for the
$\sec\theta$ behavior of the muon flux is valid
for angles up to 60$^o$ \cite{Gaisser}.
The term $c_{corr}$ taken from \cite{Lipari} corrects
for larger angles and lies between 0.8 and 1.0 for the angular
and energy ranges considered here.
The vertical intensities obtained in this way are plotted in
fig.~\ref{ABD} and compared to
the depth-intensity data published by DUMAND \cite{SPS} and Baikal
\cite{Baikal}, and to the prediction by Bugaev et al. \cite{Bugaev}.
One observes satisfying agreement of all experiments with the
prediction.
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=d-i-all.eps,height=14cm}}
\caption[2]{\small
Vertical intensity versus depth for AMANDA,
BAIKAL and DUMAND.
The solid line gives the prediction of
\cite{Bugaev} which coincides with the curves obtained from
the parameterizations (5) and (6).
}
\label{ABD}
\end{figure}
We also fitted our data to
a parameterization taken from \cite{Allkover,Rhode}:
\begin{equation}
I(h)=I_{0}\cdot E_{crit}^{-\gamma}=I_{0}\cdot
\left(
\frac{a}{b_{eff}}\cdot \left[ e^{(b_{eff} \cdot h)}-1 \right]
\right)^{-\gamma}
\end{equation}
$E_{crit}$ is the minimum energy necessary to reach the depth $h$.
It is obtained from the parameterization \cite{Gaisser}
$ dE/dx = a + b \cdot E_{\mu} $ where $a \approx $ 2 MeV/(g$\cdot$cm$^{-2}$)
denotes the continuous energy loss due to ionization, and
$b(E_{\mu})$ is proportional to the
stochastic energy loss due to pair production,
bremsstrahlung and nuclear cascades. From this parameterization
one obtains $E_{crit} = a/b \cdot [\mbox{exp}(b \cdot h) - 1]$.
$I_{0}$ is the normalization parameter
and $\gamma \approx$ 2.78 \cite{Rhode} the spectral index.
We approximate $b(E_{\mu})$ by an energy independent parameter
$b_{eff}$. Fitted to equation (5), our data for the
vertical intensity result in the following values for
$I_{0}$ and $b_{eff}$:
\begin{table}[H]
\centering
\begin{math}
\begin{array}[H]{l}
I_{0}=(5.04 \pm 0.13)
\, \mbox{cm}^{-2}\mbox{s}^{-1}\mbox{ster}^{-1}\\
b_{eff}=(2.94 \pm 0.09)\,\cdot \, 10^{-6}\,\mbox{g}^{-1}\, \mbox{cm}^2.\\
\end{array}
\end{math}
\end{table}
This compares to $I_{0}=(5.01 \pm 0.01)
\, \mbox{cm}^{-2}\mbox{s}^{-1}\mbox{ster}^{-1}$ and
$b_{eff}=(3.08 \pm 0.06)\,10^{-6}\,\mbox{g}^{-1}\, \mbox{cm}^2$
obtained for $N_{hit} \ge 8$, showing that the result is rather
insensitive to the actual cut condition.
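The parameterization of eq.~(5) can be sketched with the fitted values quoted above; $a$ and $\gamma$ are taken from the text, and the sketch assumes energies in GeV with depth in g\,cm$^{-2}$.

```python
import math

def vertical_intensity(h, i0=5.04, b_eff=2.94e-6, a=2.0e-3, gamma=2.78):
    """I(h) = I0 * (a/b_eff * (exp(b_eff*h) - 1))^(-gamma), eq. (5);
    h in g/cm^2, a in GeV/(g cm^-2); i0 and b_eff from the fit quoted
    in the text."""
    e_crit = a / b_eff * (math.exp(b_eff * h) - 1.0)
    return i0 * e_crit ** (-gamma)

# The intensity falls steeply with depth:
shallow = vertical_intensity(1.5e5)  # ~1.5 km water equivalent
deep = vertical_intensity(4.0e5)     # ~4 km water equivalent
```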
For the purpose of completeness we give also the results for
the more usual parameterization
\begin{equation}\label{macrofunc}
I(h,\theta_{\mu}=0)=a_{\mu}\left(\frac{\lambda}{h}\right)^\alpha
\,\mbox{e}^{-h/\lambda}
\end{equation}
where $\alpha$ is set to 0 \cite{CWI}, to 2 \cite{Frejus} or
is a free parameter \cite{Macro}. The purely
exponential dependence ($\alpha = 0$) clearly does not describe the
data at depths smaller than 4-5 km. Leaving all parameters
free \cite{Macro}, one obtains $a_{\mu}= (0.89 \pm 0.30) \cdot 10^{-6}
\, \mbox{cm}^{-2}\mbox{s}^{-1}\mbox{ster}^{-1}$,
$\lambda = (1453 \pm 612)$\,g\,cm$^{-2}$, and $\alpha = 2.0 \pm 0.25$,
in agreement with the fixed value of
$\alpha$ used in \cite{Frejus}.
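The role of the power-law prefactor in equation (6) can be made explicit numerically; the snippet below uses the free-parameter fit values and is only a shape comparison (units of $h$ follow those quoted for $\lambda$).

```python
import math

# Free-parameter fit values quoted in the text; h carries the units of lambda
a_mu, lam = 0.89e-6, 1453.0   # cm^-2 s^-1 sr^-1 ; g cm^-2 as quoted

def I_macro(h, alpha=2.0):
    """Equation (6): I(h) = a_mu * (lambda/h)**alpha * exp(-h/lambda)."""
    return a_mu * (lam / h) ** alpha * math.exp(-h / lam)

# The (lambda/h)**alpha prefactor is exactly what a pure exponential (alpha = 0)
# lacks; it reshapes the curve at shallow depths and fades once h >> lambda.
for h in (lam, 2.0 * lam, 4.0 * lam):
    print(f"h/lambda = {h/lam:.0f}: I(alpha=2)/I(alpha=0) = {I_macro(h)/I_macro(h, 0.0):.4f}")
```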
\section{Search for Upward Going Muons \label{upward}}
AMANDA-B4 was not intended to be a
full-fledged neutrino detector,
but instead a device which demonstrates the
feasibility of muon track
reconstruction in Antarctic ice.
The limited number of optical modules and
the small lever arms in all but the vertical direction
complicate the rejection of fake events.
In this section we demonstrate
that, in spite of these limitations,
the separation of a few upward-going
muon candidates was possible.
We present the results of two independent analyses.
One uses the approximation of the likelihood function
by an F-function with an exponential tail \cite{Bouchta}, the
other the approximation by a Gamma function with an absorption term
\cite{wieb2} (see section 6.3).
Both analyses apply separation criteria which are obtained from a
stepwise tightening of cuts on different parameters, in a way
which improves the signal-to-fake ratio given by the
MC samples. Since the MC generated samples of downward-going muons
(a few million events) run out of statistics after a reduction
factor of about $10^6$, further tightening of cuts is
performed without background-MC control until the
experimental sample reaches
the same magnitude as the MC predicted signal.
For both analyses,
the full experimental data set of 1996,
starting with Feb.19th and ending with Nov.5th, was processed.
It consists of $3.5 \cdot 10^8$ events.
\bigskip
{\large \it Analysis 1}
In a first step, a fast pre-filter
reduced this sample to a more manageable size.
It consists of a number of cuts on quickly computable variables
which either correlate with the muon angle, or which to a certain
degree distinguish single muons from the downgoing multi-muon
background events: e.g., a cut on the zenith angle from
a line fit \cite{Stenger}, cuts on time differences between
OMs at different vertical positions, and topological cuts requesting
a minimum vertical elongation of the event.
These cuts reduce the size of the experimental data sample to 5.2\%,
the simulated atmospheric muons to 4.8\% and simulated up-going
events to 49.8\%.
Simulated up-going events and experimental data
have been reduced by further cuts:
\begin{itemize}
\item At least 2 strings have to be hit
(this condition relaxes the
standard condition "$\ge$3 strings" and increases the
effective area in the vertical direction).
\item The events were reconstructed below horizon, i.e. $\theta >$ 90$^o$.
\item $\log({\cal L}_{time})/N_{hit} > -6$.
\item $\alpha \ge$ 0.15 m/nsec, where $\alpha$ is obtained
from a fit to $z_i = \alpha \cdot t_i + \beta$ and $z_i,\,t_i$
being the $z$\,coordinates and times of the hit OMs.
\end{itemize}
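The last cut refers to the slope of a straight-line fit $z_i = \alpha \cdot t_i + \beta$; a minimal least-squares sketch (the hit times and $z$ coordinates below are invented for illustration):

```python
def line_fit_slope(times, zs):
    """Least-squares slope alpha of z = alpha*t + beta (m/ns for t in ns, z in m)."""
    n = len(times)
    t_mean = sum(times) / n
    z_mean = sum(zs) / n
    num = sum((t - t_mean) * (z - z_mean) for t, z in zip(times, zs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# A roughly upward-going hit pattern: z climbs by about 0.2 m/ns
t = [0.0, 100.0, 210.0, 300.0, 420.0]        # ns
z = [-180.0, -160.0, -139.0, -120.0, -96.0]  # m
alpha = line_fit_slope(t, z)
print(f"alpha = {alpha:.3f} m/ns, passes cut: {alpha >= 0.15}")
```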
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=ndir.eps,height=10cm}}
\caption[77]{\small
Number of events surviving pre-filter and additional cuts as a
function of $N_{dir}$(15).
Solid line: 6-month experimental
data. Dashed line: 6-month expectation from atmospheric neutrinos.
}
\label{ndirect}
\end{figure}
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=twoev.eps,height=15.5cm}}
\caption[2]{\small
The two experimental events reconstructed as upward muons.
{\it left:} ID 8427997, {\it right:} ID 4706870.
The line with an arrow symbolizes the fitted muon track, the lines
from this track to the OMs indicate light paths.
The amplitudes are proportional to the size of the OMs.
The numbering of the
OMs refers to the time order in which they are hit.
}
\label{twoevents}
\end{figure}
Fig.~\ref{ndirect} shows the distribution of the number of direct hits,
$N_{dir}$(15),
of all events passing these cuts.
The highest
cut in $N_{dir}$ survived by {\it any} experimental event
is $N_{dir} \ge 6$. The two surviving events are shown
in fig.~\ref{twoevents}.
The Monte-Carlo
expectation for upward muons from atmospheric neutrinos
is 2.8 events, with an uncertainty of a factor 2, mostly due to
uncertainties in the sensitivity of the detector after all
cuts.
\bigskip
{\large \it Analysis 2}
The $3.5 \cdot 10^8$ experimental events were
compared to $3.5 \cdot 10^6$ MC events from atmospheric down-going
muons which
correspond to 2 days
effective live time. The MC data set for upward muons from
atmospheric neutrino interactions \cite{Nutomu} consists of $2.5 \cdot 10^3$
events triggering AMANDA-B4 -- corresponding to 1.7 years
effective live time.
In order to separate neutrino induced upward muons,
we applied a number of successively tightened cuts in the
variables defined in section 6.4.
This procedure
reduced the experimental sample to the expected
signal sample after the following cuts:
\begin{enumerate}
\item
reconstructed zenith angle $\theta > 120^o$,
\item
speed of the line fit $ 0.15 < |\vec{v}| < 1$ m/nsec,
\item
"time" likelihood $\log({\cal L}_{time})/(N_{hit}-5) > -10 $
(i.e. normalizing to the number of degrees of freedom instead of
the number of hit PMTs),
\item
"hit" likelihood $\log({\cal L}_{hit})/(N_{hit}-5) > -8$,
\item
number of direct hits for 25 nsec window, $N_{dir}(25) \ge 5 $,
\item
number of direct hits for 75 nsec window, $N_{dir}(75) \ge 9 $,
\item
projected length of direct hits for 25 nsec window, $L_{dir}(25) > 200$\,m,
\item
absolute value of the vertical coordinate of the center of gravity
$ |z_{COG}| < 90$m \\
(with the center of the detector defining the origin of the coordinate
system).
\end{enumerate}
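Operationally, cuts 1--8 amount to a single conjunction of per-event predicates. The following sketch uses hypothetical field names for the reconstructed quantities:

```python
def passes_upward_cuts(ev):
    """Cuts 1-8 of analysis 2 as one predicate; dictionary keys are hypothetical."""
    ndof = ev["n_hit"] - 5
    return (ev["theta"] > 120.0                 # 1: reconstructed zenith angle (deg)
            and 0.15 < ev["v_linefit"] < 1.0    # 2: line-fit speed (m/ns)
            and ev["logL_time"] / ndof > -10.0  # 3: time likelihood per dof
            and ev["logL_hit"] / ndof > -8.0    # 4: hit likelihood per dof
            and ev["n_dir25"] >= 5              # 5: direct hits, 25 ns window
            and ev["n_dir75"] >= 9              # 6: direct hits, 75 ns window
            and ev["l_dir25"] > 200.0           # 7: projected direct length (m)
            and abs(ev["z_cog"]) < 90.0)        # 8: vertical center of gravity (m)

candidate = dict(theta=168.7, v_linefit=0.25, n_hit=13, logL_time=-79.0,
                 logL_hit=-60.0, n_dir25=6, n_dir75=10, l_dir25=240.0, z_cog=-30.0)
print(passes_upward_cuts(candidate))
```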
Three events of the experimental sample passed these cuts,
corresponding to a suppression
factor of $8.9 \cdot 10^{-9}$. The passing rate for MC upward-moving
muons from atmospheric neutrinos is 1.3\% which corresponds
to 4.0 events in 156 days. The corresponding enrichment
factor is $0.013/(8.9 \cdot 10^{-9}) \approx 1.5 \cdot 10^6$.
One of the three experimental events was identified also
in the search from the previous subsection. A second event
with $N_{dir} = 5$ passes
all cuts of the previous analysis, with the exception of
the $N_{dir}$ cut.
In order to check how well the parameter distributions of the
events agree with what one expects for atmospheric neutrino
interactions, and how well they are separated from the
rest of the experimental data, we relaxed two cuts at a time
(retaining the rest) and inspected the distribution in the
two "free" variables.
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=96_sepa_3d.epsi,width=9.7cm}}
\mbox{\epsfig{file=96_sepa_1d.epsi,width=9.7cm}}
\caption[2]{\small
{\it Top } -- distribution in parameters $L_{dir}$(25)
vs. $N_{dir}$(75),
{\it bottom:} distribution in the "combined" parameter
$S = N_{dir}$(75) $\cdot L_{dir}$(25) / 20.
The cuts applied to the event sample include
all cuts with the exception of cuts 6 and 7.
}
\label{sepa}
\end{figure}
Fig.~\ref{sepa} shows the distribution in $L_{dir}$(25)
and $N_{dir}$(75). The three events
passing {\it all} cuts are separated from
the bulk of the data. At the bottom of fig.~\ref{sepa},
the data are plotted
versus a combined parameter,
$ S = (N_{dir}$(75)-2) $\cdot L_{dir}$(25)/20.
In this parameter, the data exhibit
a nearly exponential decrease. Assuming the decrease of the
background dominated events to continue at higher $ S $ values,
one can calculate the probability that the separated events are
fake events. The probability to observe one event
at $S \ge 70$ is 15\%, the probability to
observe 3 events is only $6 \cdot 10^{-4}$.
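One way to check the internal consistency of these two numbers is to read them as Poisson probabilities for the background count beyond $S \ge 70$ (this interpretation is ours):

```python
import math

p_ge1 = 0.15                     # quoted probability of at least one fake at S >= 70
mu = -math.log(1.0 - p_ge1)      # implied Poisson mean of background events there
p3 = mu ** 3 * math.exp(-mu) / math.factorial(3)  # probability of exactly 3 fakes
print(f"mu = {mu:.3f}, P(3 fakes) = {p3:.1e}")
```

With $\mu \approx 0.16$, the probability of exactly three fakes comes out at about $6 \cdot 10^{-4}$, matching the quoted value.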
Fig.~\ref{velo} shows the distribution when
$|\vec{v}|$ and $L_{dir}$(25) are relaxed. The 3 events are marked by
arrows. There is one additional event at high
$L_{dir}$(25), which, however, has a somewhat too small
$|\vec{v}|$. The 3 events fall into the
region populated by MC generated atmospheric neutrino events
passing the same cuts (bottom of fig.~\ref{velo}).
We attribute the lack of
experimental events between $L_{dir}$(25) $\sim$ 150--200\,m
to statistical fluctuations.
\begin{figure}[htbp]
\centering
\mbox{\epsfig{file=96_velo_data.epsi,width=9.5cm}}
\mbox{\epsfig{file=96_velo_signal.epsi,width=9.5cm}}
\caption[2]{\small
Distribution in parameters $|\vec{v}|$ vs. $L_{dir}$(25),
after application of all cuts with the exception of
cuts 2 and 7, which have been relaxed.
{\it top:} experimental data,
{\it bottom:} signal Monte Carlo sample
}
\label{velo}
\end{figure}
Due to CPU limitation we could not check the
agreement between experimental data and atmospheric muon MC
down to a $8.9 \cdot 10^{-9}$ reduction. However, down to
a reduction of $10^{-5}$, the disagreement
does not exceed a factor of 3.
A less conservative estimate of the accuracy of the signal
prediction can be obtained
by replacing all dedicated cuts for $\theta > 90^o$
by the complementary cuts for $\theta < 90^o$. We
observed a better-than-10\% agreement between experimental data and
MC after all cuts.
In conclusion we estimate the uncertainty in the prediction of
upward muon neutrinos to be about a factor 2.
Table \ref{events} summarizes the characteristics of the
neutrino candidates identified in the two analyses.
\begin{table}[ht]
\caption{
Characteristics of the events reconstructed as up-going muons
}
\begin{center}
\begin{tabular} {|c||c|c|c|c|} \hline
event ID $\rightarrow$ & 147\,742 & 4\,706\,879 & 2\,324\,428 & 8\,427\,905 \\ \hline \hline
$N_{OM}$ & 13 & 14 & 15 & 8 \\ \hline
$N_{string}$ & 3 & 4 & 3 & 2 \\ \hline
$\log({\cal L})/(N_{hit}-5)$ & -8.3 & -8.5 & -8.0 & -11.2 \\ \hline
$\theta_{rec}$, degrees & 168.7 & 165.9 & 166.7 & 175.4 \\ \hline
$\phi_{rec}$, degrees & 45.8 & 274.2 & 194.1 & -- \\ \hline
\end{tabular}
\end{center}
\label{events}
\end{table}
We conclude that tracks reconstructed as up-going
are found at a rate consistent with that
expected for atmospheric neutrinos. The three events
found in the second analysis
are well separated from background, proving that, even
with a detector as small as AMANDA-B4, neutrino
candidates can be separated within a limited zenith
angle interval.
Meanwhile, a few tens of clear neutrino events have
been identified with the more powerful AMANDA-B10 telescope.
They will be the subject of a forthcoming paper.
\section{Conclusions \label{conclusion}}
We have described the design, operation, calibration and selected
results of the prototype neutrino telescope AMANDA-B4 at the
South Pole.
The main results can be summarized as follows:
\begin{itemize}
\item AMANDA-B4, consisting of 80 optical modules
(plus 6 OMs for technology tests) on 4 strings, was
deployed at depths between
1.5 and 2.0 km in 1996. Seven of the OMs failed during
refreezing.
We have developed reliable drilling and
instrumentation procedures allowing deployment of a 2 km deep
string in less than a week. In the meantime the detector has
been upgraded to 302 (AMANDA-B10, 1997) and 424 (1998)
optical modules.
\item The ice properties between 1.5 and 2.0 km are superior to those at
shallow depths. The absorption length is about 95\,m and
the effective scattering length about 24\,m.
\item The original calibration
accuracy reached for geometry and timing
of AMANDA-B4 was about 2 m and 7 nsec, respectively. With the
upgrade to 10 strings, these values have been improved to
0.5~-~1.0 m and 5 nsec.
\item We have developed proper methods for track reconstruction
in a medium with non-negligible scattering.
With tailored quality cuts, the remaining badly reconstructed
tracks can be removed. The quality of the reconstruction
and the efficiency of the cuts improve considerably with
increasing size of the array.
\item Geometry and tracking accuracy of AMANDA can be calibrated with
surface air shower detectors. The mismatch between showers
detected in the SPASE air shower array and muons detected
with AMANDA is about 4 degrees, in agreement with
Monte Carlo estimates of the angular accuracy.
\item The measured angular spectrum of the intensity of
atmospheric muons is in good agreement with other
experiments and with model calculations.
\item First neutrino candidates have been separated with AMANDA-B4.
The identification of upward muon candidates with an
array of only 73 operating 8-inch PMTs is a demonstration that deep
antarctic ice is an adequate medium for doing
neutrino astronomy.
\end{itemize}
Amanda-B4 is a first step towards a large neutrino telescope at the
South Pole. A ten-string array, AMANDA-B10, has been taking data
since 1997. Presently, B10 data are analyzed, and tens of clear
neutrino candidates have been extracted, with
a threshold of typically 50 GeV.
The construction of AMANDA-II, a 30\,000 m$^2$ array,
is underway. The long-term goal of the
collaboration is a cube kilometer detector,
ICECUBE.
\section{Acknowledgments}
This research was supported by
the U.S. National Science Foundation, Office of Polar Programs
and Physics Division,
the University of Wisconsin Alumni Research Foundation,
the U.S. Department of Energy,
the U.S. National Energy Research Scientific
Computing Center,
the Swedish Natural Science Research Council,
the Swedish Polar Research Secretariat,
the Knut and Alice Wallenberg Foundation, Sweden,
and the Federal Ministry for Education and Research, Germany.
C.P.H. acknowledges the support of the
European Commission through TMR contract No. ERBFMBICT91551.
We thank the
Polar Ice Coring Office, PICO, for bore hole
drilling, and the Antarctic Support Associates, ASA,
as well as the staff of the
Amundsen Scott station for support and assistance.
We gratefully acknowledge help from the
SPASE collaboration, Leeds University, and the
U.K. Particle Physics and Astrophysics Research
Council.
\newpage
\section{Introduction}
The study of asymptotic properties lies at the heart of Banach space theory. It is intertwined with other central notions of Banach spaces, e.g., distortion, bounded linear operators, and metric embeddings. There exists a wide plethora of examples that demonstrate deep connections between each of the aforementioned topics and asymptotic properties. A Banach space that is boundedly distortable must contain an asymptotic-$\ell_p$ subspace \cite{MT}, properties of spreading models can be manipulated to construct reflexive Banach spaces on which every bounded linear operator has a non-trivial closed invariant subspace \cite{AM1}, and reflexive asymptotic-$c_0$ spaces provide the first known class of Banach spaces into which there is no coarse embedding of the Hilbert space \cite{BLS}. There exists plenty of motivation to further understand asymptotic notions and to work on problems in the theory defined by them. It is highly likely that such understanding may play a crucial role in solving open problems in other branches of the theory.
One of the main goals of this article is to answer an old open problem regarding the relationship between spreading models and asymptotic-$\ell_p$ spaces: if $X$ admits a unique spreading model with a uniform constant, must $X$ contain an asymptotic-$\ell_p$ subspace? It was first formulated by E. Odell in \cite{O1} and it was reiterated in \cite{O} as well as in \cite{JKO}. We construct a Banach space $X_{\mathbf{iw}}$ that serves as a counterexample to this question. At the same time it reveals information regarding the relationship between asymptotic properties at a deeper level than the one suggested by the question of Odell. A property (P) of Banach spaces is called hereditary if whenever $X$ has (P) then all of its infinite dimensional closed subspaces have (P) as well. We discuss two degrees in which two asymptotic, and more generally hereditary, properties of Banach spaces can be distinct.
\begin{defn*}
\label{property separation}
Let (P) and (Q) be two hereditary properties of Banach spaces and assume that (P) implies (Q).
\begin{itemize}
\item[(i)] If (Q)$\not\Rightarrow$(P), i.e., there exists a Banach space $X$ satisfying (Q) and failing (P) then we say that (P) is separated from (Q).
\item[(ii)] If there exists a Banach space $X$ satisfying (Q) and every infinite dimensional closed subspace $Y$ of $X$ fails (P) then we say that (P) is completely separated from (Q) and write (Q) $\not\underset{\hookleftarrow}{\Rightarrow}$ (P).
\end{itemize}
\end{defn*}
For example, if (P) is super-reflexivity and (Q) is reflexivity then (Q) $\not\underset{\hookleftarrow}{\Rightarrow}$ (P). Indeed, Tsirelson space from \cite{T} is reflexive, yet it contains no super-reflexive subspaces. In this paper we mainly consider properties that are classified into the following three categories: the sequential asymptotic properties, the array asymptotic properties, and the global asymptotic properties. For expository purposes in this introduction we shall only consider reflexive Banach spaces with a basis and block sequences of vectors, although these are in general not necessary restrictions. More details on this can be found in Section \ref{asymptotic structures section}.
Sequential asymptotic properties are related to the spreading models generated by sequences in a space. Recall that a spreading model is a concept that describes the asymptotic behavior of a single sequence $(x_j)_j$ in a Banach space. It was introduced in \cite{BS} and it has been an integral part of Banach space theory ever since. We say that a Banach space has a unique block spreading model if any two spreading models generated by normalized block sequences in $X$ are equivalent and we say that $X$ has a uniformly unique block spreading model if the same as before holds with the additional assumption that the equivalence occurs for a uniform $C$. By the proof of Krivine's theorem from \cite{K}, uniform uniqueness of a spreading model implies that it has to be equivalent to the unit vector basis of $\ell_p$, for some $1\leq p<\infty$, or $c_0$.
The array asymptotic properties concern the asymptotic behavior of arrays of sequences $(x_j^{(i)})_j$, $i\in\mathbb{N}$, in a space. Two tools used for this purpose are the asymptotic models and the joint spreading models introduced in \cite{HO} and \cite{AGLM} respectively. Uniqueness of these notions is defined in a similar manner to uniform uniqueness of spreading models. They were used in \cite{BLMS} to show that the class of reflexive asymptotic-$c_0$ Banach spaces is coarsely rigid and in \cite{AGLM} to show that whenever a Banach space has a unique joint spreading model then it satisfies a property concerning its space of bounded linear operators, called the UALS. Although asymptotic models and joint spreading models are not identical they are strongly related. A Banach space has a unique block asymptotic model if and only if it has a unique block joint spreading model and then it has to be equivalent to the unit vector basis of $\ell_p$, for some $1\leq p<\infty$, or $c_0$. Another concept related to array asymptotic properties is that of asymptotically symmetric spaces from \cite{JKO}.
Global asymptotic properties, roughly speaking, describe the behavior of finite block sequences $(x_i)_{i=1}^n$ that are chosen sufficiently far apart in a space $X$ with a basis. If there exists $C\geq 1$ so that for all $n\in\mathbb{N}$, all normalized block sequences $(x_i)_{i=1}^n$ with support after $n$ are $C$-equivalent to one another, then it follows that they all have to be uniformly equivalent to the unit vector basis of $\ell_p^n$, for some $1\leq p\leq \infty$, and we say that $X$ is an asymptotic-$\ell_p$ space (or an asymptotic-$c_0$ space if $p=\infty$). This concept was introduced in \cite{MT} and it was generalized in \cite{MMT} to a coordinate free version for spaces with or without a basis. Given a Banach space $X$ with a basis we will mainly focus on the properties in the following list. Here, $1\leq p\leq \infty$ and whenever $p=\infty$ then $\ell_p$ should be replaced with $c_0$.
\begin{itemize}
\item[(a)$_p$] The space $X$ is asymptotic-$\ell_p$.
\item[(b)$_p$] The space $X$ admits a unique $\ell_p$ block asymptotic model.
\item[(c)$_p$] The space $X$ admits a uniformly unique $\ell_p$ block spreading model.
\item[(d)$_p$] The space $X$ admits a unique $\ell_p$ block spreading model.
\end{itemize}
Given the precise definitions, which will be provided in Section \ref{asymptotic structures section}, the following implications are fairly straightforward for all $1\leq p\leq \infty$: (a)$_p\Rightarrow$ (b)$_p\Rightarrow$ (c)$_p\Rightarrow$ (d)$_p$. Whether the corresponding converse implications hold depends on $p$. In the case $1\leq p<\infty$ none of them is true: (d)$_p\not\Rightarrow$ (c)$_p$, $1\leq p<\infty$ is easy whereas (c)$_p\not\Rightarrow$ (b)$_p$, $1\leq p<\infty$ and (b)$_p\not\Rightarrow$ (a)$_p$, $1<p<\infty$ were shown in \cite{BLMS}. It was also shown in that paper that (c)$_\infty\not\Rightarrow$ (b)$_\infty$ and in \cite{AGM} it was shown that (b)$_1\not\Rightarrow$ (a)$_1$. However, it was proved in \cite{AOST} that (c)$_\infty\Leftrightarrow$ (d)$_\infty$ and a remarkable recent result from \cite{FOSZ} states that (b)$_\infty\Leftrightarrow$ (a)$_\infty$. This last result requires the coordinate free definition of asymptotic-$\ell_p$ from \cite{MMT}.
The problem of Odell that was mentioned earlier in the introduction can be formulated in the language of this paper as follows: is there $1\leq p\leq \infty$ so that (c)$_p$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (a)$_p$? We actually prove something deeper, namely that (c)$_p$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (b)$_p$ for all $1\leq p\leq \infty$. We also prove (d)$_1$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (c)$_1$, although the same argument works for $1<p<\infty$ (as it was mentioned earlier (c)$_\infty\Leftrightarrow$ (d)$_\infty$). To achieve these results we present three constructions of Banach spaces. Let us describe the properties of these spaces one by one and later give an outline of how they are defined. The first construction yields (c)$_1$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (b)$_1$.
\begin{thmA}
\label{ell1 case main theorem}
There exists a reflexive Banach space $X_{\mathbf{iw}}$ that has a $1$-unconditional basis and the following properties:
\begin{itemize}
\item[(i)] every normalized weakly null sequence in $X_{\mathbf{iw}}$ has a subsequence that generates a spreading model that is $4$-equivalent to the unit vector basis of $\ell_1$.
\item[(ii)] every infinite dimensional subspace of $X_{\mathbf{iw}}$ contains an array of normalized weakly null sequences that generate the unit vector basis of $c_0$ as an asymptotic model.
\end{itemize}
That is, (c)$_1$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (b)$_1$ and in particular (c)$_1$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (a)$_1$. Additionally,
\begin{itemize}
\item[(iii)] every finite dimensional Banach space with a $1$-unconditional basis is finitely block representable in every block subspace of $X_{\mathbf{iw}}$. More precisely, it is an asymptotic space of every infinite dimensional subspace of $X_{\mathbf{iw}}$.
\end{itemize}
\end{thmA}
The third property was first shown to be satisfied by a space constructed by Odell and Th. Schlumprecht in \cite{OS1}. It yields that the set $[1,\infty]$ is a stable Krivine set of $X_{\mathbf{iw}}$, i.e., it is a Krivine set of every block subspace of $X_{\mathbf{iw}}$. The second construction is a variation of the first one and it yields (c)$_p$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (b)$_p$, $1<p<\infty$.
\begin{thmA}
\label{ellp case main theorem}
For every $1<p<\infty$ there exists a reflexive Banach space $X_{\mathbf{iw}}^p$ with a $1$-unconditional basis that has the following properties.
\begin{itemize}
\item[(i)] Every normalized weakly null sequence in $X_{\mathbf{iw}}^p$ has a subsequence that generates a spreading model that is $8$-equivalent to the unit vector basis of $\ell_p$.
\item[(ii)] Every infinite dimensional subspace of $X_{\mathbf{iw}}^p$ contains an array of normalized weakly null sequences that generate the unit vector basis of $c_0$ as an asymptotic model.
\end{itemize}
That is, (c)$_p$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (b)$_p$ and in particular (c)$_p$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (a)$_p$. Additionally,
\begin{itemize}
\item[(iii)] For every $1\leq q \leq \infty$ and block subspace $Y$ of $X_{\mathbf{iw}}^p$ the unit vector basis of $\ell_q$ is finitely block representable in $Y$ if and only if $p\leq q\leq\infty$. More precisely, for $p\leq q\leq \infty$ and $n\in\mathbb{N}$, $\ell_q^n$ is an asymptotic space of every infinite dimensional subspace of $X_{\mathbf{iw}}^p$.
\end{itemize}
\end{thmA}
Property (iii) resembles the corresponding property of Theorem \ref{ell1 case main theorem}. We point out that for $1<p<\infty$ the space $X_{\mathbf{iw}}^p$ is the first known example of a space with $[p,\infty]$ as a stable Krivine set. Recall that in \cite{BFM} for every discrete closed subset $F$ of $[1,\infty]$ a space is constructed with stable Krivine set $F$. The fact (c)$_\infty$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (b)$_\infty$ is not achieved via a separate construction.
\begin{thmA}
\label{c0 case main theorem}
The space $X_{\mathbf{iw}}^*$ has the following properties.
\begin{itemize}
\item[(i)] Every normalized weakly null sequence has a subsequence that generates a spreading model that is $4$-equivalent to the unit vector basis of $c_0$.
\item[(ii)] Every infinite dimensional subspace of $X_{\mathbf{iw}}^*$ contains an array of normalized weakly null sequences that generate the unit vector basis of $\ell_1$ as an asymptotic model.
\end{itemize}
That is, (c)$_\infty$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (b)$_\infty$ and in particular (c)$_\infty$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (a)$_\infty$.
\end{thmA}
We additionally observe that the spaces $X_{\mathbf{iw}}$ and $X_{\mathbf{iw}}^*$ are asymptotically symmetric and obtain a negative answer to \cite[Problem 0.2]{JKO}.
\begin{corA}
There exist Banach spaces that are asymptotically symmetric and have no asymptotic-$\ell_p$ or $c_0$ subspaces.
\end{corA}
A stronger version of the above corollary was obtained in \cite{KM} where it was shown that there exists an asymptotically symmetric Banach space with no subspace that admits a unique spreading model. The final construction yields (d)$_1$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (c)$_1$.
\begin{thmA}
\label{cd separation main theorem}
There exists a reflexive Banach space $\widetilde X_{\mathbf{iw}}$ that has a $1$-unconditional basis and the following properties.
\begin{itemize}
\item[(i)] Every normalized weakly null sequence has a subsequence that generates a spreading model that is equivalent to the unit vector basis of $\ell_1$.
\item[(ii)] In every infinite dimensional subspace of $\widetilde X_{\mathbf{iw}}$ and for every $C\geq 1$ there exists a normalized weakly null sequence that generates a spreading model that is not $C$-equivalent to the unit vector basis of $\ell_1$.
\end{itemize}
That is, (d)$_1$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (c)$_1$.
\end{thmA}
It is also possible to construct for each $1<p<\infty$ a variation $\widetilde X_{\mathbf{iw}}^p$ of $\widetilde X_{\mathbf{iw}}$ that yields (d)$_p$ $\not\underset{\hookleftarrow}{\Rightarrow}$ (c)$_p$. In contrast to $X_{\mathbf{iw}}^*$, the space $\widetilde X_{\mathbf{iw}}^*$ does not have a unique $c_0$ spreading model.
Each of the aforementioned spaces is constructed with the use of a saturated norming set. We use the general scheme of saturation under constraints, which was first used in \cite{OS1} and \cite{OS2} and later refined in \cite{AM1}, \cite{ABM}, and others. Those papers use Tsirelson-type constructions in which functionals in the norming set can only be constructed using very fast growing sequences of averages of elements in the same norming set. We shall refer to this particular version of the scheme as saturation under constraints with growing averages. In this paper we introduce a method that we call saturation under constraints with increasing weights. In this method the construction of functionals in the norming set is allowed only using sequences of functionals from the same norming set whose weights increase sufficiently rapidly. The mainframe for the norming set $W_\mathbf{iw}$ of $X_{\mathbf{iw}}$ is the mixed-Tsirelson norming set $W = W(1/m_j,\mathcal{S}_{n_j})_{j\in\mathbb{N}}$ for appropriate increasing sequences of natural numbers $(m_j)_j$ and $(n_j)_j$. This is the smallest symmetric subset of $c_{00}(\mathbb{N})$ containing the unit vector basis and such that for all $\mathcal{S}_{n_j}$-admissible (see Subsection \ref{subsection Schreier sets}) elements $f_1<\cdots<f_d$ of $W$ the element $f = (1/m_j)\sum_{q=1}^df_q$ is also in $W$. The weight of such an $f$ is $w(f) = m_j$. In other words, $W$ is closed under the $(1/m_j,\mathcal{S}_{n_j})$-operations. It follows that if we take $i_1,\ldots,i_k$ in $\mathbb{N}$ then the set $W$ is closed under the $(1/(m_{i_1}\cdots m_{i_k}),\mathcal{S}_{n_{j_1}+\cdots +n_{j_k}})$-operation. The set $W_\mathbf{iw}$ is defined to be the smallest subset of $W$ that is closed under the operations $(1/(m_{i_1}\cdots m_{i_k}),\mathcal{S}_{n_{j_1}+\cdots+n_{j_k}})$ applied only to sequences whose weights increase sufficiently rapidly, i.e., sequences with very fast growing weights.
Consequently, every functional $f\in W_\mathbf{iw}$ is a weighted sum of the form
\begin{equation}
\label{operation equation}
f = \frac{1}{m_{i_1}\cdots m_{i_k}}\sum_{q=1}^d f_q,
\end{equation}
where $(f_q)_{q=1}^d$ is an $\mathcal{S}_{n_{j_1}+\cdots +n_{j_k}}$-admissible sequence of functionals in $W_\mathbf{iw}$ with very fast growing weights. The weight of such an $f$ is $w(f) = m_{i_1}\cdots m_{i_k}$.
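The $\mathcal{S}_n$-admissibility condition used throughout refers to the Schreier hierarchy. As a concrete aside, membership of a finite set in $\mathcal{S}_n$ can be tested with the standard inductive definition ($\mathcal{S}_0$ the singletons, $\mathcal{S}_{n+1}$ the unions of at most $\min F_1$ successive $\mathcal{S}_n$-sets); the code below is illustrative and not part of the construction:

```python
def in_schreier(F, n):
    """Test membership of a finite set F (sorted positive integers) in S_n.

    Convention: S_0 = singletons (and the empty set); S_{n+1} consists of
    unions of at most min(F) successive S_n-sets.  Taking greedy maximal
    prefixes is valid here because each family S_n is hereditary.
    """
    if not F:
        return True
    if n == 0:
        return len(F) == 1
    rest, pieces, budget = list(F), 0, F[0]
    while rest:
        if pieces == budget:
            return False
        k = 1  # grow the longest prefix of rest lying in S_{n-1}
        while k < len(rest) and in_schreier(rest[:k + 1], n - 1):
            k += 1
        rest, pieces = rest[k:], pieces + 1
    return True

# S_1 = {F : |F| <= min F}: {3,5,7} is admissible, {2,5,7} is not
print(in_schreier([3, 5, 7], 1), in_schreier([2, 5, 7], 1))
```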
The constraint applied to weights of functionals instead of sizes of averages yields relatively easily that the space has a unique $\ell_1$ spreading model whereas including all $(1/(m_{i_1}\cdots m_{i_k}),\mathcal{S}_{n_{j_1}+\cdots +n_{j_k}})$-operations makes this spreading model uniform. With some work it is then shown that finite arrays of sequences of so-called exact vectors with appropriate weights generate an asymptotic model equivalent to the unit vector basis of $c_0$. The proof of this uses a basic inequality where the auxiliary space is also defined with the use of constraints. The spaces $X_{\mathbf{iw}}^p$, $1<p<\infty$, are defined along the same lines with the difference being that in \eqref{operation equation} the functionals $f_q$ are multiplied by coefficients in the unit ball of $\ell_{p'}$, where $1/p+1/p'=1$. The proof of Theorem \ref{c0 case main theorem} is fundamentally different from the other cases. The fact that $X_{\mathbf{iw}}^*$ admits a uniformly unique block $c_0$ spreading model is shown directly for elements in the convex hull of $W_\mathbf{iw}$ by manipulating the definition of the norming set. However, the fact that $X_{\mathbf{iw}}^*$ is rich with arrays of sequences that generate an $\ell_1$ asymptotic model uses some of the structural properties of $X_{\mathbf{iw}}$.
The norming set $\widetilde W_\mathbf{iw}$ of the space $\widetilde X_{\mathbf{iw}}$ is simpler than $W_\mathbf{iw}$. It is the smallest subset of $W$ that is closed under the operations $(1/m_{j},\mathcal{S}_{n_{j}})$ applied to very fast growing sequences of weighted functionals. This means that this norming set is closed under fewer operations and hence it is a subset of $W_\mathbf{iw}$. The result is that the space admits only $\ell_1$ spreading models, albeit with arbitrarily bad equivalence constants in every subspace.
\section{Preliminaries}\label{preliminary section}
We recall basic notions such as Schreier families and special convex combinations. Given two non-empty subsets of the natural numbers $A$ and $B$ we shall write $A<B$ if $\max(A)<\min(B)$ and given $n\in\mathbb{N}$ we write $n \leq A$ if $n\leq \min(A)$. We also make the convention $\emptyset <A$ and $A < \emptyset$ for all $A\subset \mathbb{N}$. We denote by $c_{00}(\mathbb{N})$ the space of all real valued sequences $(c_i)_i$ with finitely many non-zero entries. We denote by $(e_i)_i$ the unit vector basis of $c_{00}(\mathbb{N})$. In some cases we shall denote it by $(e_i^*)_i$. For $x = (c_i)_i\in c_{00}(\mathbb{N})$, the support of $x$ is defined to be the set $\supp(x) = \{i\in\mathbb{N}:\;c_i\neq 0\}$ and the range of $x$, denoted by $\ran(x)$, is defined to be the smallest interval of $\mathbb{N}$ containing $\supp(x)$. We say that the vectors $x_1,\ldots,x_k$ in $c_{00}(\mathbb{N})$ are successive if $\supp(x_i) < \supp(x_{i+1})$ for $i=1,\ldots,k-1$. In this case we write $x_1<\cdots<x_k$. Given $n\in\mathbb{N}$ and $x\in c_{00}(\mathbb{N})$ we also write $n \leq x$ if $n\leq \min\supp(x)$. A (finite or infinite) sequence of successive vectors in $c_{00}(\mathbb{N})$ is called a block sequence.
\subsection{Schreier sets}
\label{subsection Schreier sets}
The Schreier families form an increasing sequence of families of finite subsets of the natural numbers, which first appeared in \cite{AA}. They are inductively defined in the
following manner. Set
\begin{equation*}
\mathcal{S}_0 = \big\{\{i\}: i\in\mathbb{N}\big\}\;\text{and}\;\mathcal{S}_1 = \{F\subset\mathbb{N}: \#F\leqslant\min(F)\}
\end{equation*}
and if $\mathcal{S}_n$ has been defined, set
\begin{equation*}
\begin{split}
\mathcal{S}_{n+1} = \left\{\vphantom{\cup_{i = 1}^d}\right.&F\subset\mathbb{N}:\; F = \cup_{i = 1}^d F_i, \;\text{where}\; F_1 <\cdots< F_d\in\mathcal{S}_n\\
&\left.\vphantom{\cup_{j = 1}^k}\text{and}\; d\leqslant\min(F_1)\right\}.
\end{split}
\end{equation*}
For each $n$, $\mathcal{S}_n$ is a regular family. This means that it is hereditary, i.e. if $F\in\mathcal{S}_n$ and $G\subset F$ then $G\in\mathcal{S}_n$; it is spreading, i.e. if $F = \{i_1<\cdots<i_d\} \in\mathcal{S}_n$ and $G = \{j_1 < \cdots < j_d\}$ with $i_p \leqslant j_p$ for $p=1,\ldots,d$, then $G\in\mathcal{S}_n$; and finally it is compact, when viewed as a subset of $\{0,1\}^\mathbb{N}$. For each $n\in\mathbb{N}$ we also define the regular family $$\mathcal{A}_n = \{F\subset \mathbb{N}: \#F\leq n\}.$$ For arbitrary regular families $\mathcal{A}$ and $\mathcal{B}$ we define
\begin{equation*}
\begin{split}
\mathcal{A}*\mathcal{B} = \left\{\vphantom{\cup_{i = 1}^d}\right.&F\subset\mathbb{N}: F = \cup_{i = 1}^d F_i,\; \mbox{where}\; F_1 <\cdots< F_d\in\mathcal{B}\\
&\left.\vphantom{\cup_{j = 1}^k}\mbox{and}\; \{\min(F_i): i=1,\ldots,d\}\in\mathcal{A}\right\},
\end{split}
\end{equation*}
then it is well known \cite{AD2}, and follows easily by induction, that $\mathcal{S}_n*\mathcal{S}_m = \mathcal{S}_{n+m}$. Of particular interest to us is the family $\mathcal{S}_n\ast\mathcal{A}_m$, that is, the family of all sets of the form $F = \cup_{i=1}^dF_i$ with $F_1<\cdots< F_d$, $\#F_i\leq m$ for $1\leq i\leq d$, and $\{\min(F_i):1\leq i\leq d\}\in\mathcal{S}_n$. From the spreading property of $\mathcal{S}_n$ it easily follows that such an $F$ is the union of at most $m$ sets in $\mathcal{S}_n$. Given a regular family $\mathcal{A}$, a sequence of vectors $x_1 <\cdots<x_k$ in $c_{00}(\mathbb{N})$ is said to be $\mathcal{A}$-admissible if $\{\min\supp(x_i): i=1,\ldots,k\}\in\mathcal{A}$.
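For concreteness, here are two small instances of Schreier admissibility; they are standard computations and serve only as illustration.

```latex
% F = {3,5,8} belongs to S_1 since #F = 3 <= min(F) = 3, while {1,2}
% does not, since #{1,2} = 2 > 1 = min{1,2}. The union
% G = {3,5,8} u {13,14,15} witnesses S_1 * S_1 = S_2: both pieces lie
% in S_1 and {min F_1, min F_2} = {3,13} is itself in S_1.
\[
\{3,5,8\}\in\mathcal{S}_1, \qquad
\{1,2\}\notin\mathcal{S}_1, \qquad
\{3,5,8\}\cup\{13,14,15\}\in\mathcal{S}_2 .
\]
```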
\subsection{Special convex combinations}
The reading of this subsection may be postponed until before Section \ref{section aux}. Here we recall the notion of $(n,\varepsilon)$ special convex combinations (see \cite{AD2}, \cite{AGR}, \cite{AT}).
\begin{defn}\label{def of basic scc}
Let $x = \sum_{i\in F}c_ie_i$ be a vector in $c_{00}(\mathbb{N})$, $n\in\mathbb{N}$, and $\varepsilon>0$. The vector $x$ is called an $(n,\varepsilon)$-basic special convex combination (or an $(n,\varepsilon)$-basic s.c.c.) if the following are satisfied:
\begin{enumerate}
\item[(i)] $F\in\mathcal{S}_n$, $c_i\geqslant 0$ for $i\in F$ and $\sum_{i\in F}c_i = 1$,
\item[(ii)] for any $G\subset F$ with $G\in\mathcal{S}_{n-1}$ we have that $\sum_{i\in G}c_i < \varepsilon$.
\end{enumerate}
\end{defn}
The next result is from \cite{AMT}. For a proof see \cite[Chapter 2, Proposition 2.3]{AT}.
\begin{prop}\label{basic scc exist in abundance}
For every infinite subset $M$ of the natural numbers, any $n\in\mathbb{N}$, and any $\varepsilon>0$ there exist $F\subset M$ and non-negative real numbers $(c_i)_{i\in F}$ so that the vector $x = \sum_{i\in F}c_ie_i$ is an $(n,\varepsilon)$-basic s.c.c.
\end{prop}
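As a concrete illustration (a standard example, not specific to the construction of this paper), flat averages far out on the basis yield basic s.c.c.'s already for $n=1$.

```latex
% For k in N with k > 1/epsilon, consider the flat average over
% F = {k, k+1, ..., 2k-1}:
\[
x = \frac{1}{k}\sum_{i=k}^{2k-1} e_i .
\]
% (i)  F belongs to S_1, since #F = k <= min(F) = k, and the
%      coefficients are non-negative and sum to 1;
% (ii) every G in S_0 is a singleton, so sum_{i in G} c_i = 1/k < epsilon.
% Hence x is a (1,epsilon)-basic s.c.c.
```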
\begin{defn}\label{def scc}
Let $x_1 <\cdots<x_m$ be vectors in $c_{00}(\mathbb{N})$ and $\psi(i) = \min\supp (x_i)$, for $i=1,\ldots,m$. If the vector $\sum_{i=1}^mc_ie_{\psi(i)}$ is an $(n,\varepsilon)$-basic s.c.c. for some $n\in\mathbb{N}$ and $\varepsilon>0$ then the vector $x = \sum_{i=1}^mc_ix_i$ is called an $(n,\varepsilon)$-special convex combination (or $(n,\varepsilon)$-s.c.c.).
\end{defn}
We make a few simple remarks to be used in the sequel.
\begin{rem}
\label{some remarks for the far future 1}
Let $n\in\mathbb{N}$, $\varepsilon>0$, and let $x = \sum_{i\in F}c_ie_i$ be an $(n,\varepsilon)$-basic special convex combination. If $k,m\in\mathbb{N}$ with $k<n$ and $G\subset F$ with $G\in\mathcal{S}_k\ast\mathcal{A}_m$, then $\sum_{i\in G}c_i < m\varepsilon$.
\end{rem}
\begin{rem}
\label{some remarks for the far future 2}
Let $n\in\mathbb{N}$, $\varepsilon>0$, and let $x = \sum_{i\in F}c_ie_i$ be an $(n,\varepsilon)$-basic special convex combination. If $F = \{t_1<\cdots<t_d\}$ we can write $x = \sum_{i=1}^d \tilde c_i e_{t_i}$. If $G\subset \mathbb{N}$ is of the form $G = \{s_1<\cdots<s_d\}$ with $t_i\leq s_i$ for $1\leq i\leq d$ and $s_i\leq t_{i+1}$ for $1\leq i<d$, then the vector $y = \sum_{i=1}^d\tilde c_i e_{s_i}$ is an $(n,2\varepsilon)$-basic special convex combination. In particular, if $x = \sum_{i=1}^dc_ix_i$ is an $(n,\varepsilon)$-s.c.c. and $\phi(i) = \max\supp(x_i)$ for $1\leq i\leq d$, then the vector $\sum_{i=1}^dc_ie_{\phi(i)}$ is an $(n,2\varepsilon)$-basic s.c.c.
\end{rem}
\section{Asymptotic structures}
\label{asymptotic structures section}
In this lengthy section we recall, compare, and discuss different types of asymptotic notions in Banach space theory. We state known examples that separate these notions in various ways and we discuss how the present paper is an advancement in this topic.
\subsection{Sequential asymptotic notions}
We recall the definition of spreading models, a notion which was introduced in \cite{BS}.
\begin{defn}
Let $(x_i)_i$ be a sequence in a seminormed vector space $(E,\iii{\cdot})$ and let $m\in\mathbb{N}$.
\begin{itemize}
\item[(i)] A set $s = \{j_1<\cdots<j_m\}\in[\mathbb{N}]^m$ will be called a spread of $I = \{1,\ldots,m\}$.
\item[(ii)] If $x = \sum_{i=1}^ma_ix_i$ and $s = \{j_1,\ldots,j_m\}$ is a spread of $\{1,\ldots,m\}$ then we call the vector $s(x) = \sum_{i=1}^ma_ix_{j_i}$ a spread of the vector $x$.
\item[(iii)] The sequence $(x_i)_i$ will be called spreading if for every $m\in\mathbb{N}$, every $s\in[\mathbb{N}]^m$, and every $x = \sum_{i=1}^ma_ix_i$ we have $\iii{x} = \iii{s(x)}$.
\end{itemize}
\end{defn}
\begin{defn}
\label{definition spreading model}
Let $X$ be a Banach space and $(x_i)_i$ be a sequence in $X$. Let also $E$ be a vector space with a Hamel basis $(e_i)_i$ endowed with a seminorm $\iii{\cdot}$. We say that the sequence $(x_i)_i$ generates $(e_i)_i$ as a spreading model if for every $m\in\mathbb{N}$ and any vector $x = \sum_{i=1}^ma_ix_i$ we have
\[\lim_{\substack{\min(s)\to\infty\\s\in[\mathbb{N}]^m}}\|s(x)\| = \iii{\sum_{i=1}^ma_ie_i}.\]
Given a subset $A$ of $X$ we shall say that $A$ admits $(e_i)_i$ as a spreading model if there exists a sequence in $A$ that generates $(e_i)_i$ as a spreading model.
\end{defn}
The spreading model $(e_i)_i$ of a sequence $(x_i)_i$ is always a spreading sequence. The above definition was given by Brunel and Sucheston in \cite{BS}, where it was also proved that every bounded sequence in a Banach space has a subsequence that generates some spreading model.
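For example, since the unit vector basis of $\ell_p$ is spreading, it generates itself as a spreading model; the limit in Definition \ref{definition spreading model} is attained without any passage to the limit.

```latex
% For every m, every spread s = {j_1 < ... < j_m}, and all scalars
% a_1, ..., a_m,
\[
\Big\|\sum_{i=1}^{m} a_i e_{j_i}\Big\|_{\ell_p}
  = \Big(\sum_{i=1}^{m}|a_i|^p\Big)^{1/p}
  = \Big\|\sum_{i=1}^{m} a_i e_i\Big\|_{\ell_p},
\]
% so the spreading model generated by (e_i)_i is (isometrically) the
% unit vector basis of ell_p itself.
```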
\subsection{Array asymptotic notions}
We recall the notion of joint spreading models from \cite{AGLM} and that of asymptotic models from \cite{HO}. We compare these similar notions later in Subsection \ref{uniqueness sec}.
\begin{defn}
\label{plegma}
Let $k$, $l\in\mathbb{N}$, and $M\in[\mathbb{N}]^\infty$. A plegma is a sequence $(s_i)_{i=1}^l$ in $[M]^k$ satisfying
\begin{itemize}
\item[(i)] $s_{i_1}(j_1) < s_{i_2}(j_2)$ for $i_1\neq i_2$ in $\{1,\ldots,l\}$ and $j_1<j_2$ in $\{1,\ldots,k\}$ and
\item[(ii)] $s_{i_1}(j) \leq s_{i_2}(j)$ for $i_1<i_2$ in $\{1,\ldots,l\}$ and $j\in\{1,\ldots,k\}$.
\end{itemize}
If additionally the sets $s_1,\ldots,s_l$ are pairwise disjoint then we say that $(s_i)_{i=1}^l$ is a strict plegma. Let $\mathrm{Plm}_l([M]^k)$ denote the collection of all plegmas in $[M]^k$ and let $\mathrm{S}$-$\mathrm{Plm}_l([M]^k)$ denote the collection of all strict plegmas in $[M]^k$.
\end{defn}
A plegma $(s_i)_{i=1}^l$ can also be described as follows
\[
\begin{split}
s_1(1) \leq s_2(1) \leq \cdots \leq s_l(1)&<s_1(2)\leq s_2(2)\leq\cdots\leq s_l(2)<\cdots\\
\cdots& <s_1(k)\leq s_2(k)\leq \cdots\leq s_l(k)
\end{split}
\]
whereas in a strict plegma all inequalities are strict.
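A minimal concrete instance may be helpful: for $l=2$ and $k=2$ take $s_1 = \{1,3\}$ and $s_2 = \{2,4\}$.

```latex
% Interlacing pattern of the plegma (s_1, s_2):
\[
s_1(1)=1 \;\leq\; s_2(1)=2 \;<\; s_1(2)=3 \;\leq\; s_2(2)=4 .
\]
% Condition (i): s_1(1) = 1 < 4 = s_2(2) and s_2(1) = 2 < 3 = s_1(2);
% condition (ii): s_1(j) <= s_2(j) for j = 1,2.
% Since s_1 and s_2 are disjoint, (s_1, s_2) is in fact a strict plegma.
```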
\begin{defn}
\label{plegma spreading}
Let $l\in\mathbb{N}$ and $(x^{(i)}_j)_j$, $1\leq i\leq l$, be an array of sequences in a seminormed vector space $(E,\iii{\cdot})$.
\begin{itemize}
\item[(i)] For $k\in\mathbb{N}$ let $\pi = \{1,\ldots,l\}\times\{1,\ldots,k\}$. Given a plegma $\bar s = (s_i)_{i=1}^l$ in $[\mathbb{N}]^k$, the set $\bar s(\pi) = \{(i,s_i(j)): (i,j)\in\pi\}$ will be called a plegma shift of $\pi$.
\item[(ii)] If $x = \sum_{i=1}^l\sum_{j=1}^ka_{i,j}x^{(i)}_j$ and $\bar s\in\mathrm{Plm}_l([\mathbb{N}]^k)$ we call the vector $\bar s(x) = \sum_{i=1}^l\sum_{j=1}^ka_{i,j}x^{(i)}_{s_i(j)}$ a plegma shift of the vector $x$.
\item[(iii)] The array $(x^{(i)}_j)_j$, $1\leq i\leq l$, will be called plegma spreading if for every $k\in\mathbb{N}$, every $\bar s\in\mathrm{Plm}_l([\mathbb{N}]^k)$, and every $x = \sum_{i=1}^l\sum_{j=1}^ka_{i,j}x^{(i)}_j$ we have $\iii{x} = \iii{\bar s(x)}$.
\end{itemize}
\end{defn}
\begin{defn}
\label{definition joint spreading model}
Let $X$ be a Banach space, $l\in\mathbb{N}$, and $(x^{(i)}_j)_j$, $1\leq i\leq l$, be an array of sequences in $X$. Let also $E$ be a seminormed vector space and let $(e^{(i)}_j)_j$, $1\leq i\leq l$, be an array of sequences in $E$. We say that $(x^{(i)}_j)_j$, $1\leq i\leq l$, generates $(e^{(i)}_j)_j$, $1\leq i\leq l$, as a joint spreading model if for every $k\in\mathbb{N}$ and any vector $x = \sum_{i=1}^l\sum_{j=1}^ka_{i,j}x^{(i)}_j$ we have
\[\lim_{\substack{\min(s_1)\to\infty\\\bar s\in\mathrm{S}\text{-}\mathrm{Plm}_l([\mathbb{N}]^k)}}\|\bar s(x)\| = \iii{\sum_{i=1}^l\sum_{j=1}^ka_{i,j}e^{(i)}_j}.\]
Given a subset $A$ of $X$ we shall say that $A$ admits $(e^{(i)}_j)_j$, $1\leq i\leq l$, as a joint spreading model if there exists an array $(x^{(i)}_j)_j$, $1\leq i\leq l$, in $A$ that generates $(e^{(i)}_j)_j$, $1\leq i\leq l$ as a joint spreading model.
\end{defn}
The above notion was introduced in \cite{AGLM}, where it was shown that every finite array $(x^{(i)}_j)_j$, $1\leq i\leq l$, of normalized Schauder basic sequences in a Banach space $X$ has a subarray $(x^{(i)}_{k_j})_j$ that generates some joint spreading model $(e_j^{(i)})_j$, $1\leq i\leq l$, which is necessarily a plegma spreading array.
Joint spreading models are similar to asymptotic models from \cite{HO}, a notion which was introduced and studied earlier. We modify the definition to make the connection to the above notions clearer.
\begin{defn}
Let $X$ be a Banach space, let $(x^{(i)}_j)_j$, $i\in\mathbb{N}$, be an infinite array of normalized sequences in $X$, and let $(e_i)_i$ be a sequence in a seminormed space $E$. We say that $(x^{(i)}_j)_j$, $i\in\mathbb{N}$, generates $(e_i)_i$ as an asymptotic model if for any $l\in\mathbb{N}$ and any vector $x = \sum_{i=1}^la_ix^{(i)}_1$ we have
\[ \lim_{\substack{\min(s_1)\to\infty\\ \bar s\in\mathrm{S}\text{-}\mathrm{Plm}_l([\mathbb{N}]^1)}}\left\|\bar s(x)\right\| = \iii{\sum_{i=1}^la_ie_i}.
\]
\end{defn}
It was proved in \cite{HO} that any array $(x^{(i)}_j)_j$, $i\in\mathbb{N}$, of normalized weakly null sequences has a subarray $(x^{(i)}_{j_k})_k$, $i\in\mathbb{N}$, that generates a 1-suppression unconditional asymptotic model $(e_i)_i$.
\subsection{Global asymptotic notions}
We first recall the definition of an asymptotic-$\ell_p$ Banach space with a basis, introduced by V. D. Milman and N. Tomczak-Jaegermann in \cite{MT}, and then we recall a coordinate-free version of this definition from \cite{MMT}.
\begin{defn}
\label{asymptotic ellp}
Let $X$ be a Banach space with a Schauder basis $(e_i)_i$ and $1\leq p <\infty$. We say that the Schauder basis $(e_i)_i$ of $X$ is asymptotic-$\ell_p$ if there exist positive constants $D_1$ and $D_2$ so that for all $n\in\mathbb{N}$ there exists $N(n)\in\mathbb{N}$ so that whenever $N(n)\leq x_1 <\cdots <x_n$ are vectors in $X$ then
\begin{equation*}
\frac{1}{D_1}\left(\sum_{i=1}^n\|x_i\|^p\right)^{1/p} \leq \left\|\sum_{i=1}^nx_i\right\| \leq D_2\left(\sum_{i=1}^n\|x_i\|^p\right)^{1/p}.
\end{equation*}
Specifically we say that $(e_i)_i$ is $D$-asymptotic-$\ell_p$ for $D = D_1D_2$. The definition of an asymptotic-$c_0$-space is given similarly.
\end{defn}
The classical examples of non-trivial asymptotic-$\ell_p$ spaces are Tsirelson's original Banach space from \cite{T} that is asymptotic-$c_0$ and the space constructed in \cite{FJ} (nowadays called Tsirelson space) that is asymptotic-$\ell_1$.
\begin{rem}
\label{asymptotic lp versions}
The definition above depends on the basis of $X$ and not only on $X$. A more general coordinate-free version of the above, for a whole space $X$ being asymptotic-$\ell_p$, can be found in \cite[Subsection 1.7]{MMT} (see also \cite{O}) and it is based on a game of two players. For each $n\in\mathbb{N}$ there is a version of this game that takes place in $n$ consecutive turns. In each turn $k$ of the game player (S) chooses a co-finite dimensional subspace $Y_k$ of $X$ and then player (V) chooses a normalized vector $y_k\in Y_k$. One of the formulations for being asymptotic-$\ell_p$ in this setting is that there exists $C$ so that for every $n\in\mathbb{N}$ player (S) has a winning strategy to force in $n$ turns player (V) to choose a sequence $(y_i)_{i=1}^n$ that is $C$-equivalent to the unit vector basis of $\ell_p^n$. Although this is not the initial formulation it is equivalent to it, and this follows from \cite[Subsection 1.5]{MMT}. Using this definition it is easy to show that if $X$ has a Schauder basis that is asymptotic-$\ell_p$ then $X$ is asymptotic-$\ell_p$. It also follows fairly easily that if a space $X$ is asymptotic-$\ell_p$ then it contains an asymptotic-$\ell_p$ sequence. In particular, a Banach space contains an asymptotic-$\ell_p$ subspace if and only if it contains an asymptotic-$\ell_p$ sequence.
\end{rem}
\subsection{Uniqueness of asymptotic notions}
\label{uniqueness sec}
The main purpose of this section is to discuss the property of a Banach space to exhibit a unique behavior with respect to the various asymptotic notions. Of particular interest to us is the question as to whether uniqueness with respect to one notion implies uniqueness with respect to another.
Throughout this subsection we let $\mathscr{F}$ denote one of two collections of normalized Schauder basic sequences in a given Banach space $X$, namely either $\mathscr{F}_0$, i.e., the collection of all normalized weakly null Schauder basic sequences, or $\mathscr{F}_b$, i.e. the collection of all normalized block sequences, if $X$ is assumed to have a basis.
\begin{defn}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$.
\begin{itemize}
\item[(i)] We say that $X$ admits a unique spreading model with respect to $\mathscr{F}$ if any two spreading models generated by sequences in $\mathscr{F}$ are equivalent.
\item[(ii)] We say that $X$ admits a uniformly unique spreading model with respect to $\mathscr{F}$ if there exists $C\geq 1$ so that any two spreading models generated by sequences in $\mathscr{F}$ are $C$-equivalent.
\end{itemize}
\end{defn}
The following is an open problem (see e.g. \cite[(Q8) on page 419]{O1}).
\begin{problem}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. Assume that $X$ admits a unique spreading model with respect to $\mathscr{F}$. Is this spreading model equivalent to the unit vector basis of some $\ell_p$, $1\leq p<\infty$, or $c_0$?
\end{problem}
It is well known that if the spreading model is uniformly unique then the answer is affirmative. This follows from an argument mentioned in \cite[Subsection 1.6.3]{MMT}.
\begin{defn}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. We say that $X$ admits a unique joint spreading model with respect to $\mathscr{F}$ if there exists a constant $C$ so that, for any $l\in\mathbb{N}$, any two $l$-arrays generated as joint spreading models by $l$-arrays in $\mathscr{F}$ are $C$-equivalent.
\end{defn}
The existence of a uniform constant is included in the definition of unique joint spreading models. The reason for this is to separate uniqueness of spreading models from uniqueness of joint spreading models. If one assumes that $X$ admits a unique spreading model with respect to $\mathscr{F}$ then it follows that all $l$-joint spreading models generated by weakly null $l$-arrays in $\mathscr{F}$ are equivalent as well.
We recall that it was proved in \cite{AGLM} that if a Banach space $X$ admits a unique joint spreading model with respect to $\mathscr{F}$ then $X$ satisfies a property called the uniform approximation on large subspaces. This is a property of families of bounded linear operators on $X$.
\begin{defn}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. We say that $X$ admits a unique asymptotic model with respect to $\mathscr{F}$ if any two asymptotic models generated by arrays of sequences in $\mathscr{F}$ are equivalent.
\end{defn}
It can be seen that if $X$ admits a unique asymptotic model with respect to $\mathscr{F}$ then there must exist a constant $C$ so that any two asymptotic models generated by arrays of sequences in $\mathscr{F}$ are $C$-equivalent. This is because asymptotic models are generated by infinite arrays.
As it was mentioned in passing in \cite{AGLM} uniqueness of joint spreading models and uniqueness of asymptotic models are equivalent. We briefly describe a proof.
\begin{prop}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. Then $X$ admits a unique joint spreading model with respect to $\mathscr{F}$ if and only if it admits a unique asymptotic model with respect to $\mathscr{F}$.
\end{prop}
\begin{proof}
If $X$ admits a unique asymptotic model then, as was mentioned above, it does so for a uniform constant $C$. We start with two $l$-arrays $(x_j^{(i)})_j$, $(y^{(i)}_j)_j$, $1\leq i\leq l$, generating joint spreading models $(e_j^{(i)})_j$, $(d_j^{(i)})_j$, $1\leq i\leq l$, which we will show are equivalent. Define the infinite arrays $(\tilde x_j^{(i)})_j$, $(\tilde y_j^{(i)})_j$, $i\in\mathbb{N}$, given by $\tilde x^{(ml + i)}_j = x_j^{(i)}$ and $\tilde y^{(ml + i)}_j = y_j^{(i)}$ for $m\in\mathbb{N}\cup\{0\}$, $1\leq i\leq l$, and $j\in\mathbb{N}$. Any asymptotic model $(e_i)_i$ generated by a subarray of $(\tilde x_j^{(i)})_j$, $i\in\mathbb{N}$, is isometrically equivalent to $(e_j^{(i)})_j$, $1\leq i\leq l$, by mapping $e_{ml+i}$ to $e^{(i)}_{m+1}$, for $m\in\mathbb{N}\cup\{0\}$, $1\leq i\leq l$. We can make a similar observation about any asymptotic model $(d_i)_i$ generated by a subarray of $(\tilde y_j^{(i)})_j$, $i\in\mathbb{N}$. As $(e_i)_i$ and $(d_i)_i$ are $C$-equivalent we deduce that the same is true for $(e_j^{(i)})_j$, $(d_j^{(i)})_j$, $1\leq i\leq l$. The converse implication is slightly easier. If we assume that there is $C$ so that, for any $l\in\mathbb{N}$, any two $l$-joint spreading models admitted by $l$-arrays in $\mathscr{F}$ are $C$-equivalent, then it is almost straightforward that the first $l$ elements of any two asymptotic models generated by arrays in $\mathscr{F}$ are $C$-equivalent.
\end{proof}
If a space admits a unique asymptotic model, and hence also a unique spreading model, then this spreading model has to be equivalent to the unit vector basis of some $\ell_p$ or of $c_0$. This follows, e.g., from the uniform uniqueness of the spreading model.
We now compare uniqueness of the various asymptotic notions. Here, $1\leq p\leq \infty$ and whenever $p=\infty$ then $\ell_p$ should be replaced with $c_0$. The implications presented in the next statement are fairly obvious.
\begin{prop}
Let $1\leq p\leq \infty$, $X$ be a Banach space, and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. Consider the following properties.
\begin{itemize}
\item[(a1)$_p$] The space $X$ is coordinate free asymptotic-$\ell_p$.
\item[(a2)$_p$] The space $X$ has a basis that is asymptotic-$\ell_p$.
\item[(b)$_p$] The space $X$ admits a unique $\ell_p$ asymptotic model with respect to $\mathscr{F}$.
\item[(c)$_p$] The space $X$ admits a uniformly unique $\ell_p$ spreading model with respect to $\mathscr{F}$.
\item[(d)$_p$] The space $X$ admits a unique $\ell_p$-spreading model with respect to $\mathscr{F}$.
\end{itemize}
Then (a1)$_p \vee$(a2)$_p \Rightarrow$(b)$_p \Rightarrow$(c)$_p \Rightarrow$(d)$_p$.
\end{prop}
The question as to whether any of the reverse implications hold is somewhat less straightforward. We can divide this problem into questions of separation and complete separation (see the Definition on page \pageref{property separation}). We discuss this topic starting at the bottom of the list and moving upwards.
\begin{question}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. If $X$ admits a unique spreading model with respect to $\mathscr{F}$ does it also admit a uniformly unique spreading model with respect to $\mathscr{F}$?
\end{question}
In other words, can property (c) be separated from (d)? This can be answered fairly easily. Fix $1<p<\infty$ and consider for each $n\in\mathbb{N}$ the norm on $\ell_p$ given by $\|x\|_n = \|x\|_\infty\vee \frac{1}{n}\|x\|_{\ell_p}$. The space $X = (\sum_n\oplus (\ell_p,\|\cdot\|_n))_{\ell_p}$ admits a unique $\ell_p$-spreading model with respect to $\mathscr{F}_0$ but not a uniformly unique $\ell_p$-spreading model with respect to $\mathscr{F}_0$. A slightly less trivial example can be given for $p=1$ by using, e.g., a norm $\|\cdot\|_n$ defined on $T$ and taking a $T$-sum. Interestingly, it is not possible to do this for $c_0$. It follows from \cite[Proposition 3.2]{AOST} that if a space $X$ admits a unique $c_0$ spreading model with respect to $\mathscr{F}_0$ then this has to happen uniformly. The more interesting complete-separation analogue of the above question is the following.
\begin{question}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. If $X$ admits a unique spreading model with respect to $\mathscr{F}$ does $X$ have a subspace $Y$ that admits a uniformly unique spreading model with respect to $\mathscr{F}$?
\end{question}
This is less obvious. For example, if one considers $X = (\sum_n\oplus (\ell_2,\|\cdot\|_n))_{\ell_2}$ then $\ell_2$ is a subspace of $X$. To answer this question, in Section \ref{the other space} we construct a Banach space $\widetilde X_{\mathbf{iw}}$ with a unique $\ell_1$ spreading model with respect to $\mathscr{F}_0$ so that in every subspace of $\widetilde X_{\mathbf{iw}}$ one can find normalized weakly null sequences generating spreading models with arbitrarily ``bad'' equivalence constants to the unit vector basis of $\ell_1$.
\begin{question}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. If $X$ admits a uniformly unique spreading model with respect to $\mathscr{F}$ does $X$ admit a uniformly unique asymptotic model with respect to $\mathscr{F}$?
\end{question}
The answer to the above question is negative in all cases of unique spreading models (which have to be some $\ell_p$, $1\leq p<\infty$, or $c_0$). It was observed in \cite{BLMS} that the space $T^*(T^*)$ admits $c_0$ as a uniformly unique spreading model whereas it admits the unit vector basis of $T^*$ as an asymptotic model. Proposition 3.12 of \cite{BLMS} can also be used to show that $T(T)$ admits a uniformly unique $\ell_1$-spreading model, yet $T(T)$ admits the unit vector basis of $T$ as an asymptotic model. We can replace $T$ with $T_p$, the $p$-convexification of $T$, for $1<p<\infty$. It follows, again from \cite[Proposition 3.12]{BLMS}, that $T_p(T_p)$ has a uniformly unique $\ell_p$ spreading model. It is also easy to see that $T_p(T_p)$ admits the unit vector basis of $T_p$ as an asymptotic model.
\begin{question}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. If $X$ admits a uniformly unique spreading model with respect to $\mathscr{F}$ does $X$ have a subspace that admits a uniformly unique asymptotic model with respect to $\mathscr{F}$?
\end{question}
We prove in this paper that the answer to the above question is conclusively negative, regardless of the assumption on the unique spreading model. We construct a Banach space $X_{\mathbf{iw}}$ that admits a uniformly unique $\ell_1$-spreading model so that every block subspace of $X_{\mathbf{iw}}$ admits a $c_0$ asymptotic model. We also prove that $X_{\mathbf{iw}}^*$ admits a uniformly unique $c_0$-spreading model and that every block subspace of $X_{\mathbf{iw}}^*$ admits an $\ell_1$ asymptotic model. We also describe, for $1<p<\infty$, the construction of a space $X_{\mathbf{iw}}^p$ that admits a uniformly unique $\ell_p$-spreading model so that every block subspace of $X_{\mathbf{iw}}^p$ admits a $c_0$ asymptotic model.
We recall that, according to Remark \ref{asymptotic lp versions}, a Banach space contains an asymptotic-$\ell_p$ subspace with a basis if and only if it contains a coordinate free asymptotic-$\ell_p$ subspace.
\begin{question}[E. Odell, (Q7) in \cite{O1} and page 66 in \cite{O}; M. Junge, D. Kutzarova, and E. Odell, Problem 1.2 in \cite{JKO}]
Let $X$ be a Banach space that admits a uniformly unique spreading model with respect to $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. Does $X$ have an asymptotic-$\ell_p$ or asymptotic-$c_0$ subspace?
\end{question}
The spaces $X_{\mathbf{iw}}$, $X_{\mathbf{iw}}^*$, and $X_{\mathbf{iw}}^p$, $1<p<\infty$, provide a negative answer to the above question for all possible assumptions on the unique spreading model.
\begin{question}
Let $X$ be a Banach space and $\mathscr{F} = \mathscr{F}_0$ or $\mathscr{F} = \mathscr{F}_b$. If $X$ admits a unique asymptotic model with respect to $\mathscr{F}$ is $X$ asymptotic-$\ell_p$ or asymptotic-$c_0$ in the coordinate free sense of \cite{MMT}?
\end{question}
Interestingly, for this question the type of the unique asymptotic model makes a difference to the result. It was proved in \cite{FOSZ} that if a separable Banach space $X$ contains no copy of $\ell_1$ and $X$ has a unique $c_0$ asymptotic model with respect to $\mathscr{F}_0$ then $X$ is asymptotic-$c_0$ (in the sense of \cite{MMT}). Replacing $c_0$ with $\ell_p$, for $1<p<\infty$, completely changes the situation. In \cite[Subsection 7.2]{BLMS}, for each $1<p<\infty$ a reflexive Banach space is presented all asymptotic models of which are isometrically equivalent to the unit vector basis of $\ell_p$, yet the space is not asymptotically-$\ell_p$, in the sense of \cite{MMT}. A slightly different approach to the same question is based on a construction in \cite[Example 4.2]{OS3}. One can consider a countably branching, well-founded tree $\mathcal{T}$ of infinite height. Then, for $1<p<\infty$, define a norm on $c_{00}(\mathcal{T})$ as follows. If $x = \sum_{\lambda\in\mathcal{T}}c_\lambda e_\lambda$ then set
\[\|x\| = \sup\left\{\left(\sum_{i=1}^m\left(\sum_{\lambda \in \beta_i}|c_\lambda|\right)^p\right)^{1/p}: (\beta_i)_{i=1}^m\text{ are disjoint segments of }\mathcal{T}\right\}.\]
One can show, using \cite[Proposition 3.12]{BLMS} and induction on the height of $\mathcal{T}$, that the completion of this space has only the unit vector basis of $\ell_p$ as an asymptotic model and that it is not asymptotically-$\ell_p$.
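To make the norm above concrete, the following is a brute-force evaluation on a small finite tree. This is only an illustrative sketch (the toy tree, its labels, and all helper names are our own assumptions, not part of the construction, which uses a well-founded tree of infinite height).

```python
from itertools import combinations

# Parent map of a small rooted tree (node -> parent); the root is 1.
parent = {1: None, 2: 1, 3: 1, 4: 2, 5: 2}

def chain_up(v):
    """The node v together with all of its ancestors, listed upwards."""
    out = []
    while v is not None:
        out.append(v)
        v = parent[v]
    return out

def segments():
    """All segments of the tree: contiguous pieces of ancestor chains."""
    segs = set()
    for v in parent:
        anc = chain_up(v)
        for i in range(len(anc)):
            for j in range(i, len(anc)):
                segs.add(frozenset(anc[i:j + 1]))
    return list(segs)

def tree_norm(c, p):
    """sup over families of pairwise disjoint segments (beta_i) of
    (sum_i (sum_{lam in beta_i} |c_lam|)^p)^(1/p), by brute force."""
    segs = segments()
    best = 0.0
    for r in range(1, len(segs) + 1):
        for fam in combinations(segs, r):
            if all(a.isdisjoint(b) for a, b in combinations(fam, 2)):
                best = max(best, sum(sum(abs(c[v]) for v in b) ** p
                                     for b in fam) ** (1 / p))
    return best

c = {v: 1.0 for v in parent}
print(tree_norm(c, 2))  # the optimal family {1,2,4}, {3}, {5} gives sqrt(11)
```

For $p=1$ the same vector has norm $5$ (a disjoint family of segments can cover every node), which already illustrates how segments along branches behave like $\ell_1$ while incomparable segments are combined in an $\ell_p$ fashion.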
The definition from \cite[Subsection 7.2]{BLMS} also yields a non-reflexive Banach space with an unconditional Schauder basis that admits the unit vector basis of $\ell_1$ as a unique asymptotic model with respect to all arrays of block sequences of the basis, yet the space is not asymptotic-$\ell_1$. In fact, this space is a Schur space. The first example of a reflexive non-asymptotic-$\ell_1$ space with a unique $\ell_1$ asymptotic model was given in \cite{AGM}.
The following open question concerns the remaining implication from the list; it first appeared in \cite[Problem 6.1]{HO}.
\begin{problem}
Let $1\leq p<\infty$ and $X$ be a Banach space not containing $\ell_1$ so that every asymptotic model generated by a weakly null array in $X$ is equivalent to the unit vector basis of $\ell_p$. Does $X$ contain an asymptotic-$\ell_p$-subspace?
\end{problem}
\begin{comment}
\vskip2cm
The global formulation of comparing spreading models to asymptotic models is the following. Let $X$ be a Banach space that admits a uniformly unique spreading model with respect to an $\mathscr{F}$. Does $X$ admit a unique asymptotic model (and hence also joint spreading model) with respect to $\mathscr{F}$? The answer to this question is negative in all cases of unique spreading models. It was observed in \cite{BLMS} that the space $T^*(T^*)$ admits $c_0$ as a uniformly unique spreading model whereas the space admits the unit vector basis of $T^*$ as an asymptotic model. Proposition 3.12 of \cite{BLMS} can also be used to show that $T(T)$ admits a uniformly unique $\ell_1$-spreading model, yet $T(T)$ admits the unit vector basis of $T$ as an asymptotic model. We can replace $T$ with $T_p$, the $p$-convexification of $T$, for $1<p<\infty$. It follows, again from , \cite[Proposition 3.12]{BLMS} that $T_p(T_p)$ has a uniformly unique $\ell_p$ spreading model. Is also easy to see that $T_p(T_p)$ admits the unit vector basis of $T_p$ as an asymptotic model. The global question of comparing uniqueness of asymptotic models to being asymptotically-$\ell_p$ is more interesting. It was proved in \cite{FOSZ} that if a separable Banach space $X$ contains no copy $\ell_1$ and every asymptotic model generated by an array of normalized weakly null sequence is equivalent to the unit vector basis of $c_0$ then $X$ is an asymptotic-$c_0$ space (in the sense of \cite{MMT}). Replacing $c_0$ with $\ell_p$, for $1<p<\infty$, completely changes the situation. In \cite[Subsection 7.2]{BLMS}, for each $1<p<\infty$ a reflexive Banach space is presented all asymptotic models of which are isometrically equivalent to the unit vector basis of $\ell_p$, yet the space is not asymptotically-$\ell_p$, in the sense of \cite{MMT}. A slightly different approach to the same question is based on a construction in \cite[Example 4.2]{OS3}. 
One can consider a well founded and countably branching tree of infinite hight $\mathcal{T}$. Then, for $1<p<\infty$, define a norm on $c_{00}(\mathcal{T})$ as follows. If $x = \sum_{\lambda\in\mathcal{T}}c_\lambda e_\lambda$ then set
\[\|x\| = \sup\left\{\left(\sum_{i=1}^m\left(\sum_{\lambda \in \beta_i}|c_\lambda|\right)^p\right)^{1/p}: (\beta_i)_{i=1}^m\text{ are disjoint segments of }\mathcal{T}\right\}.\]
One can show, using \cite[Proposition 3.12]{BLMS} and induction on the hight of $\mathcal{T}$, that the completion of this space has only the unit vector basis of $\ell_p$ as an asymptotic model and it is not asymptotically-$\ell_p$.
The Definition from \cite[Subsection 7.2]{BLMS} also yields a non-reflexive Banach space with an unconditional Schauder basis that admits the unit vector basis of $\ell_1$ as a unique asymptotic model with respect to all arrays of block sequences of the basis yet the space is not asymptotic-$\ell_1$. In fact, this space is a Schur space. The following is open.
\begin{problem}
Let $X$ be a separable Banach space not containing an isomorphic copy of $\ell_1$ so that every asymptotic model generated by a weakly null array in $X$ is equivalent to the unit vector basis of $\ell_1$. Is $X$ asymptotically-$\ell_1$?
\end{problem}
We now proceed to the hereditary part of the problem. The following question is from \cite[Problem 6.1]{HO} and it is still open.
\begin{problem}
Let $1\leq p<\infty$ and $X$ be a Banach space not containing $\ell_1$ so that every asymptotic model generated by a weakly null array in $X$ is equivalent to the unit vector basis of $\ell_p$. Does $X$ contain an asymptotic-$\ell_p$-subspace?
\end{problem}
In this paper we answer negatively the following question. Let $1\leq p <\infty$ and let $X$ be a Banach space with a uniformly unique spreading model with respect to all normalized weakly null sequences. Does $X$ contain a subspace that admits a unique asymptotic model with respect to all normalized weakly null sequences? The space $X_{\mathbf{iw}}$ that we construct has a uniformly unique $\ell_1$ spreading model yet all of its subspaces admit a $c_0$ asymptotic model. This also means that $X_{\mathbf{iw}}$ has no asymptotic-$\ell_p$ subspace, thus answering a question of Odell appearing in \cite{O1} and \cite{O}. Our method can be modified to yield spaces with uniformly unique $\ell_p$ spreading models, for fixed $1<p<\infty$ admitting a $c_0$ asymptotic model in every subspace. Interestingly, the method cannot be applied to construct a space that admits a uniformly unique $c_0$ spreading model without asymptotic-$c_0$ subspaces leaving the following unanswered.
\begin{problem}
Let $X$ be a Banach space not containing $\ell_1$ that admits a uniformly unique $c_0$ spreading model with respect to all normalized weakly null sequences. Does $X$ contain an asymptotic-$c_0$ subspace?
\end{problem}
Recall that by \cite{FOSZ} containing an asymptotic-$c_0$ subspace is equivalent to containing a subspace with a unique $c_0$ asymptotic model.
\end{comment}
\subsection{Finite block representability}
In this part of the section we recall the notion of finite block representability and the Krivine set of a space.
\begin{defn}
Let $X$ be a Banach space with a Schauder basis $(e_i)_i$ and let also $Y$ be a finite dimensional Banach space with a Schauder basis $(y_i)_{i=1}^n$. We say that $(y_i)_{i=1}^n$ is block representable in $X$ if for every $\varepsilon>0$ there exists a block sequence $(x_i)_{i=1}^n$ in $X$ that is $(1+\varepsilon)$-equivalent to $(y_i)_{i=1}^n$. Given an infinite dimensional Banach space $Z$ with a Schauder basis $(z_i)_i$ we say that $(z_i)_i$ is finitely block representable in $X$ if for every $n\in\mathbb{N}$ the sequence $(z_i)_{i=1}^n$ is block representable in $X$.
\end{defn}
Given a Banach space $X$ with a basis, the Krivine set $K(X)$ of $X$ is the set of all $p\in [1,\infty]$ so that the unit vector basis of $\ell_p$ (or of $c_0$ in the case $p=\infty$) is finitely block representable in $X$. It was proved by J.-L. Krivine in \cite{K} that this set is always non-empty. It is observed in \cite[Subsection 1.6.3]{MMT} that a stronger result holds, namely that there is $p\in[1,\infty]$ so that for all $\varepsilon>0$ and $n\in\mathbb{N}$ there exists a block sequence $(x_i)_i$ so that for all $k_1<\cdots<k_n$ the sequence $(x_{k_i})_{i=1}^n$ is $(1+\varepsilon)$-equivalent to the unit vector basis of $\ell_p^n$. We shall refer to the set of all such $p$'s as the strong Krivine set of $X$ and denote it by $\widetilde K(X)$. Clearly, $\widetilde K(X)\subset K(X)$.
It is clear that if $X$ is asymptotic-$\ell_p$, for some $1\leq p \leq\infty$, then $K(X) = \widetilde K(X) = \{p\}$.
\begin{question}
Let $X$ be a Banach space with a basis. Does there exist a block subspace $Y$ of $X$ so that $K(Y) = \widetilde K(Y)$?
\end{question}
We answer this question negatively by showing that for every block subspace $Y$ of $X_{\mathbf{iw}}$ we have $\widetilde K(Y) = \{1\} \subsetneq [1,\infty] = K(Y)$. We also point out that for every $1<p<\infty$ and every block subspace $Y$ of $X_{\mathbf{iw}}^p$ we have $\widetilde K(Y) = \{p\} \subsetneq [p,\infty] = K(Y)$.
We additionally show that all 1-unconditional sequences are finitely block representable in every block subspace $Y$ of $X_{\mathbf{iw}}$. To show this we use a result from \cite{OS1} where it was observed that there is a family of finite unconditional sequences that is universal for all unconditional sequences.
\begin{prop}[\cite{OS1}]
\label{universal for unc}
Let $n\in\mathbb{N}$ and $X_n$ be the finite dimensional space spanned by the sequence $(e_{i,j})_{i,j=1}^n$ ordered lexicographically and endowed with the norm
\begin{equation*}
\left\|\sum_{i=1}^n\sum_{j=1}^na_{i,j}e_{i,j}\right\| = \max_{1\leq j\leq n}\sum_{i=1}^n|a_{i,j}|.
\end{equation*}
If $X$ is a Banach space with a Schauder basis $(x_i)_i$ so that for each $n\in\mathbb{N}$ the sequence $(e_{i,j})_{i,j=1}^n$ is block representable in $X$, then every 1-unconditional basic sequence is finitely block representable in $X$.
\end{prop}
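The norm of $X_n$ can be evaluated directly: along each fixed column $j$ the vectors $e_{1,j},\ldots,e_{n,j}$ behave like the $\ell_1^n$ basis, while the columns are combined via a maximum. A minimal sketch (the matrix and the helper name are ours, purely for illustration):

```python
def xn_norm(a):
    """Norm on X_n for a matrix (a_{i,j}): the maximum over columns j
    of the absolute row sums, i.e. max_j sum_i |a_{i,j}|."""
    cols = range(len(a[0]))
    return max(sum(abs(row[j]) for row in a) for j in cols)

print(xn_norm([[1, -2], [3, 4]]))  # column sums are 4 and 6, so prints 6
```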
\subsection{Asymptotically symmetric spaces}
In this final part of the section we recall the notion of an asymptotically symmetric Banach space. It was introduced in \cite{JKO} and the motivation stems from the theory of non-commutative $L_p$ spaces.
\begin{defn}
\label{defas}
A Banach space $X$ is called asymptotically symmetric if there exists $C>0$ so that for all $l\in\mathbb{N}$, all bounded arrays of sequences $(x_j^{(i)})_j$, $1\leq i\leq l$ in $X$, and all permutations $\sigma$ of $\{1,\ldots,l\}$ we have
\begin{equation}
\label{a.s.}
\lim_{j_1\to\infty}\cdots\lim_{j_l\to\infty}\Big\|\sum_{i=1}^lx^{(i)}_{j_i}\Big\| \leq C\lim_{j_{\sigma(1)}\to\infty}\cdots\lim_{j_{\sigma(l)}\to\infty}\Big\|\sum_{i=1}^lx^{(i)}_{j_i}\Big\|
\end{equation}
provided that both iterated limits exist.
\end{defn}
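The order of the iterated limits in \eqref{a.s.} genuinely matters. A hedged numerical sketch of our own (not taken from \cite{JKO}): in $c_0$, pair the unit vector basis $x^{(1)}_j = e_j$ with the bounded array $x^{(2)}_j = s_j = e_1+\cdots+e_j$; the two orders of iterated limits then give the values $2$ and $1$, so any constant $C$ in \eqref{a.s.} must be at least $2$ for this pair.

```python
# All names and the truncation parameter N below are illustrative assumptions.

def norm_e_plus_s(j1, j2, N=200):
    """Sup norm of e_{j1} + s_{j2} in c_0, computed on the first N coordinates."""
    return max((1 if i == j1 else 0) + (1 if i <= j2 else 0)
               for i in range(1, N + 1))

# Inner limit over j2 (eventually j2 > j1): the norms stabilize at 2.
assert all(norm_e_plus_s(j1, 150) == 2 for j1 in range(1, 100))
# Inner limit over j1 (eventually j1 > j2): the norms stabilize at 1.
assert all(norm_e_plus_s(150, j2) == 1 for j2 in range(1, 100))
print("iterated limits: 2 and 1")
```

Since $c_0$ is asymptotically symmetric, this only shows that the constant cannot be taken equal to $1$; the spaces of interest below fail the inequality in a much stronger sense.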
This notion is weaker than that of a stable Banach space. It also follows from the discussion leading up to \cite[Proposition 1.1]{JKO} that a reflexive asymptotic-$\ell_p$ space is asymptotically symmetric. It was also observed there that $L_p$ provides a counterexample to the converse.
\begin{question}[M. Junge, D. Kutzarova, and E. Odell, Problem 0.2 in \cite{JKO}]
Let $X$ be an asymptotically symmetric Banach space. Does $X$ contain an asymptotic-$\ell_p$ (or asymptotic-$c_0$) subspace?
\end{question}
It turns out that the spaces $X_{\mathbf{iw}}$ and $X_{\mathbf{iw}}^*$ are asymptotically symmetric and therefore each of them provides a negative answer to the above question. This is an immediate consequence of the next result, which follows easily from \cite{JKO}. We include a proof for completeness.
\begin{prop}
\label{unique 1 or 0 am}
Let $X$ be a reflexive Banach space that satisfies one of the following conditions.
\begin{itemize}
\item[(i)] The space $X$ has a Schauder basis $(e_i)_i$ and it admits a uniformly unique $\ell_1$ spreading model with respect to $\mathscr{F}_b$.
\item[(ii)] The space $X$ is separable and it admits a unique $c_0$ spreading model with respect to $\mathscr{F}_0$.
\end{itemize}
Then $X$ is asymptotically symmetric.
\end{prop}
\begin{proof}
The statement of \cite[Theorem 2.3]{JKO} is that if a (not necessarily reflexive) Banach space satisfies (i) then it is block asymptotically symmetric, i.e., it satisfies \eqref{a.s.} for arrays of bounded block sequences in $X$. The statement of \cite[Theorem 1.1 (c)]{JKO} is that when $X$ is reflexive with a basis then being asymptotically symmetric is equivalent to being block asymptotically symmetric. Similarly, \cite[Theorem 2.4]{JKO} yields that any Banach space satisfying (ii) is weakly asymptotically symmetric and, once more, \cite[Theorem 1.1 (c)]{JKO} states that for reflexive spaces this is equivalent to being asymptotically symmetric.
\end{proof}
\begin{comment}
The definition of an asymptotic model was introduced and first studied in \cite{HO}.
\begin{defn}
Let $X$ be a Banach space, $(x^{(i)}_j)_j$, $i\in\mathbb{N}$ be an infinite array of normalized sequences in a Banach space $X$ and $(e_i)_i$ be a sequence in a seminormed space $E$. We say that the array $(x^{(i)}_j)_j$ generates $(e_i)_i$ as an asymptotic model if there exists a sequence of positive real numbers $(\delta_n)_n$ that decreases to zero so that for any $n\in\mathbb{N}$, any scalars $a_1,\ldots,a_n\in[-1,1]$, and any $n\leq j_1<\cdots<j_n$ we have
\[ \left|\left\|\sum_{i=1}^na_ix^{(i)}_{j_i}\right\| - \iii{\sum_{i=1}^na_ie_i}\right| < \delta_n.
\]
\end{defn}
It was proved in \cite{HO} that any array $(x^{(i)}_j)_j$, $i\in\mathbb{N}$ of normalized sequences that are all weakly null have a subarray $(x^{(i)}_{j_k})_k$, $i\in\mathbb{N}$ that generates a 1-suppression unconditional asymptotic model $(e_i)_i$.
A similar notion was introduced in \cite{AGLM}, namely that of joint spreading models. Let $k,l\in\mathbb{N}$. A sequence $(s_i)_{i=1}^k$ in $[\mathbb{N}]^l$ is called a strict plegma if
\[
\begin{split}
s_1(1) < s_2(1) <\cdots<s_k(1)&<s_1(2)<s_2(2)<\cdots<s_k(2)<\cdots\\
\cdots& <s_1(l)<s_2(l)<\cdots<s_k(l).
\end{split}
\]
\begin{defn}
Let $X$ be a Banach space, $k\in\mathbb{N}$, $(x^{(i)}_j)_j$, $1\leq i\leq k$ be a $k$-array of normalized Schauder basic sequences in a Banach space $X$ and $(e^{(i)}_j)_i$, $1\leq i\leq k$, be a $k$-array of normalized sequences in a Banach space $E$. We say that $(x^{(i)}_j)_j$, $1\leq i\leq k$ generates $(e^{(i)}_j)_i$, $1\leq i\leq k$, as a joint spreading model if there exists a sequence of positive real numbers $(\delta_n)_n$ that decreases to zero so that for every $l\in\mathbb{N}$, every collection of scalars $(a_{i,j})_{1\leq i\leq k, 1\leq j\leq l}$ in $[-1,1]$, and every strict plegma $(s_i)_{i=1}^k$ in $[\mathbb{N}]^l$ with $\min(s_1) \geq l$ we have
\[ \left|\left\|\sum_{i=1}^k\sum_{j=1}^la_{i,j}x^{(i)}_{s_i(j)}\right\| - \iii{\sum_{i=1}^k\sum_{j=1}^la_{i,j}e^{(i)}_j}\right| < \delta_n.
\]
\end{defn}
It was shown in \cite{AGLM} that every $k$-array of normalized Schauder basic sequences in a Banach space $X$ has a subarray that generates some joint spreading model $(e_j^{(i)})_j$, $1\leq i\leq k$, which has to be a plegma spreading sequence (see \cite[Definition 1.4]{AGLM}).
\end{comment}
\section{Definition of the space $X_{\mathbf{iw}}$}\label{definition section}
We define the space $X_{\mathbf{iw}}$ by first defining a norming set $W_\mathbf{iw}$. This is a norming set of the mixed-Tsirelson type with certain constraints applied to the weights of the functionals used in the construction. Fix a pair of strictly increasing sequences of natural numbers $(m_j)_j$, $(n_j)_j$ with $m_1 = 2$ and $n_1 = 1$ satisfying the growth conditions
\begin{itemize}
\item[(i)] for all $C>1$ we have $\displaystyle{\lim_j\frac{C^{n_j}}{m_j} = \infty}$,
\item[(ii)] $\displaystyle{\lim_j\frac{m_j}{m_{j+1}} = 0}$, and
\item[(iii)] $n_{j+1} > n_{j_1}+\cdots+n_{j_l} + 1$ for all $l\in\mathbb{N}$ and $1\leq j_1,\ldots,j_l \leq j$ with the property $m_{j_1}\cdots m_{j_l} < m_{j+1}^2$.
\end{itemize}
These properties can be achieved by taking any strictly increasing sequence of natural numbers $(m_j)_j$, with $m_1 = 2$, satisfying (ii) and afterwards choosing any strictly increasing sequence of natural numbers $(n_j)_j$ satisfying $n_1 = 1$ and $n_{j+1} > n_j\log_2(m^2_{j+1})$ for all $j\in\mathbb{N}$.
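As a sanity check, the recipe above can be tested numerically. The sketch below (with our own illustrative choice $m_{j+1} = m_j^2$, and with the logarithm read in base $2$; all names are ours) builds initial segments of such sequences and verifies condition (iii) by brute force over multisets of indices.

```python
from math import log2

def build_sequences(J):
    """First J terms of (m_j), (n_j): m_1 = 2, n_1 = 1, with the
    illustrative choice m_{j+1} = m_j^2 and n_{j+1} > n_j*log2(m_{j+1}^2)."""
    m, n = [2], [1]
    for _ in range(J - 1):
        m.append(m[-1] ** 2)
        n.append(int(n[-1] * log2(m[-1] ** 2)) + 2)
    return m, n

def max_nsum(m, n, j, bound):
    """Largest n_{j_1}+...+n_{j_l} over multisets of indices <= j
    (0-based) whose m-product is < bound; nondecreasing-index DFS."""
    best = 0
    stack = [(0, 1, 0)]  # (smallest allowed index, product so far, n-sum so far)
    while stack:
        lo, prod, nsum = stack.pop()
        best = max(best, nsum)
        for i in range(lo, j + 1):
            if prod * m[i] < bound:
                stack.append((i, prod * m[i], nsum + n[i]))
    return best

m, n = build_sequences(4)
for j in range(len(m) - 1):  # 0-based j plays the role of the paper's j
    assert n[j + 1] > max_nsum(m, n, j, m[j + 1] ** 2) + 1  # condition (iii)
print(m, n)  # [2, 4, 16, 256] [1, 6, 50, 802]
```

Condition (ii) is immediate for this choice since $m_j/m_{j+1} = 1/m_j$, and (i) holds because $(n_j)_j$ grows much faster than $\log m_j$.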
\begin{ntt}
Let $G$ be a subset of $c_{00}(\mathbb{N})$.
\begin{itemize}
\item[(i)] Given $j_1,\ldots,j_l\in\mathbb{N}$ and an $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$ admissible sequence of functionals $f_1 < \cdots < f_d$ in $G$ we call a functional of the form
\begin{equation*}
f = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q = 1}^d f_q
\end{equation*}
a weighted functional of $G$ of weight $w(f) = m_{j_1}\cdots m_{j_l}$ and vector weight $\vec w(f) = (j_1,\ldots,j_l)$. For all $i\in\mathbb{N}$, we also call $f = \pm e_i^*$ a weighted functional of weight $w(f) = \infty$ and in this case we do not define $\vec w(f)$.
\item[(ii)] A (finite or infinite) sequence $f_1<f_2<\cdots<f_q<\cdots$ of weighted functionals of $G$ is called very fast growing if $w(f_q) > \max\supp(f_{q-1})$ for $q>1$.
\end{itemize}
\end{ntt}
Note that if $(f_q)_q$ is a sequence of very fast growing weighted functionals then any of the $f_q$'s may be of the form $\pm e_i^*$ for $i\in\mathbb{N}$. Furthermore, the weight and vector weight of a functional may not be uniquely defined but this causes no problems.
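The admissibility constraints are governed by the Schreier families $\mathcal{S}_n$ defined earlier in the paper. For small sets, membership in $\mathcal{S}_n$ can be tested by brute force; the sketch below is our own illustration and assumes the standard recursive definition ($\mathcal{S}_0$ consists of the singletons and the empty set, and $F\in\mathcal{S}_{n+1}$ if and only if $F = F_1\cup\cdots\cup F_d$ with $F_1<\cdots<F_d$ in $\mathcal{S}_n$ and $d\leq\min F_1$).

```python
def in_schreier(F, n):
    """Brute-force membership test for the Schreier family S_n.
    F is a strictly increasing tuple of positive integers. Assumes the
    standard recursive definition of S_n; illustrative helper only."""
    F = tuple(F)
    if not F:
        return True
    if n == 0:
        return len(F) == 1

    def split(rest, d):
        # Can `rest` be written as at most d successive sets in S_{n-1}?
        if not rest:
            return True
        if d == 0:
            return False
        return any(in_schreier(rest[:cut], n - 1) and split(rest[cut:], d - 1)
                   for cut in range(1, len(rest) + 1))

    return split(F, F[0])  # at most min(F) successive pieces

# |F| <= min F characterizes S_1:
print(in_schreier((3, 4, 5), 1), in_schreier((3, 4, 5, 6), 1))  # prints: True False
```

A sequence of functionals is then $\mathcal{S}_n$-admissible when the set of minima of their supports belongs to $\mathcal{S}_n$; the exponential cost of this test is harmless for the small examples it is meant for.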
\begin{defn}
Let $W_\mathbf{iw}$ be the smallest subset of $c_{00}(\mathbb{N})$ that satisfies the following two conditions.
\begin{itemize}
\item[(i)] $\pm e_i^*$ is in $W_\mathbf{iw}$ for all $i\in\mathbb{N}$ and
\item[(ii)] for every $j_1,\ldots,j_l\in\mathbb{N}$, and every $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible and very fast growing sequence of weighted functionals $(f_q)_{q=1}^d$ in $W_\mathbf{iw}$ the functional $$f = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q=1}^df_q$$ is in $W_\mathbf{iw}$.
\end{itemize}
We define a norm on $c_{00}(\mathbb{N})$ given by $\|x\| = \sup\{f(x): f\in W_\mathbf{iw}\}$ and we set $X_{\mathbf{iw}}$ to be the completion of $(c_{00}(\mathbb{N}), \|\cdot\|)$.
\end{defn}
\begin{rem}
\label{inductive construction norming}
Alternatively, the set $W_\mathbf{iw}$ can be defined to be the increasing union of a sequence of sets $(W_n)_{n=0}^\infty$ where $W_0 = \{\pm e_i^*: i\in\mathbb{N}\}$ and
\begin{equation*}
\begin{split}
W_{n+1} = W_n\cup\Bigg\{& \frac{1}{m_{j_1}\cdots m_{j_l}} \sum_{q=1}^d f_q: ~ j_1,\ldots,j_l\in\mathbb{N}, \mbox{ and }(f_q)_{q=1}^d \mbox{ is an}\\&
\mbox{$\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$ admissible and very fast growing sequence}\\
& \mbox{of weighted functionals in $W_n$}\Bigg\}.
\end{split}
\end{equation*}
\end{rem}
\begin{rem}
By induction on $n$ it easily follows that each set $W_n$ is closed under changing signs and under taking projections onto subsets, hence the same holds for $W_\mathbf{iw}$. This yields that the unit vector basis of $c_{00}(\mathbb{N})$ forms a 1-unconditional basis for the space $X_{\mathbf{iw}}$.
\end{rem}
\begin{rem}
It is easy to check by induction on the construction of $W_\mathbf{iw}$ that for every $f\in W_\mathbf{iw}$ each of its coordinates is either zero or of the form $1/d$ for some non-zero integer $d$. As $W_\mathbf{iw}$ is closed under projections onto arbitrary subsets, we deduce that for every $k\in\mathbb{N}$ the set $W_\mathbf{iw}|_k$ of all $f\in W_\mathbf{iw}$ with $\max\supp(f)\leq k$ is compact in the topology of pointwise convergence. This yields that for every $x\in X_{\mathbf{iw}}$ with $\supp(x)$ finite there is $f\in W_\mathbf{iw}$ with $f(x) = \|x\|$.
\end{rem}
\section{The spreading model of block sequences in $X_{\mathbf{iw}}$}
\label{uniform unique spreading model section}
We prove that every normalized block sequence in $X_{\mathbf{iw}}$ has a subsequence that generates a $4$-$\ell_1$ spreading model. This is unusual for constructions using saturation under constraints, where typically at least two different spreading models appear (see, e.g., \cite{AM1}). As will be shown later, the constraints impose a variety of asymptotic models and local block structures in $X_{\mathbf{iw}}$.
\begin{prop}
\label{uniform ell1}
Let $(x_i)_i$ be a normalized block sequence in $X_{\mathbf{iw}}$. Then there exists $L\in[\mathbb{N}]^\infty$ so that for every $j_0\in\mathbb{N}$, every $F\subset L$ with $(x_i)_{i\in F}$ being $\mathcal{S}_{n_{j_0}}$-admissible, and every scalars $(c_i)_{i\in F}$ we have
\begin{equation*}
\left\|\sum_{i\in F}c_ix_i\right\| \geq \frac{1}{2m_{j_0}}\sum_{i\in F}|c_i|.
\end{equation*}
In particular, every normalized block sequence in $X_{\mathbf{iw}}$ has a subsequence that generates a spreading model that is 4-equivalent to the unit vector basis of $\ell_1$.
\begin{comment}
Let $(x_k)_k$ be a normalized block sequence in $X_{\mathbf{iw}}$. Then $(x_k)_k$ has a subsequence $(x_k')$ so that for all natural numbers $n\leq k_1 <\cdots <k_n$ and all scalars $(a_i)_{i=1}^n$ we have
\begin{equation}
\left\|\sum_{i=1}^na_ix_{k_i}'\right\| \geq \frac{1}{4}\sum_{i=1}^n|a_i|.
\end{equation}
\end{comment}
\end{prop}
\begin{proof}
We first observe that the second statement follows from the first one and $m_1 = 2$, $n_1 = 1$. We now proceed to prove the first statement. For every $k\in\mathbb{N}$ choose $f_k\in W_\mathbf{iw}$ with $f_k(x_k) = 1$ so that $\ran(f_k)\subset \ran(x_k)$. We distinguish two cases, namely the one in which $\limsup_kw(f_k)$ is finite and the one in which it is infinite.
In the first case, take an infinite subset $L$ of $\mathbb{N}$ and $j_1,\ldots,j_l\in\mathbb{N}$ so that for all $k\in L$ we have $\vec w(f_k) = (j_1,\ldots,j_l)$. For each $k\in L$ write
\begin{equation*}
f_k = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q=1}^{d_k}f_q^k
\end{equation*}
where each sequence $(f_q^k)_{q=1}^{d_k}$ is $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$ admissible and very fast growing with $\min\supp(x_k) \leq \max\supp(f_1^k) < w(f_2^k)$, which implies that the sequence $((f_q^k)_{q=2}^{d_k})_{k\in L}$, enumerated in the natural way, is very fast growing. Also, for every $F = \{k_1,\ldots,k_n\}\subset L$ with $k_1<\cdots<k_n$ so that $(x_k)_{k\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible, the functionals $((f_q^{k_i})_{q=2}^{d_{k_i}})_{i=1}^n$ are $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l} + n_{j_0}}$ admissible and it follows that the functional
\begin{equation*}
\label{uniform ell1 eq1}
f = \frac{1}{m_{j_1}\cdots m_{j_l}m_{j_0}}\sum_{i=1}^n\sum_{q=2}^{d_{k_i}}f_q^{k_i} \text{ is in }W_\mathbf{iw}.
\end{equation*}
As for each $k\in\mathbb{N}$ the functional $f_1^k\in W_\mathbf{iw}$, we have $f_1^k(x_k) \leq 1$ and therefore
\begin{equation}
\label{uniform ell1 eq 1}
\frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q=2}^{d_k}f_q^k(x_k) \geq f_k(x_k) - \frac{1}{m_{j_1}\cdots m_{j_l}}f_1^k(x_k) \geq 1 - 1/2 = 1/2.
\end{equation}
For any $F = \{k_1,\ldots,k_n\}\subset L$ with $k_1<\cdots<k_n$ so that $(x_k)_{k\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible and any scalars $a_1,\ldots,a_n$ we conclude
\begin{equation*}
\begin{split}
\left\|\sum_{i=1}^na_ix_{k_i}\right\| & = \left\|\sum_{i=1}^n|a_i|x_{k_i}\right\| \geq \frac{1}{m_{j_1}\cdots m_{j_l}m_{j_0}}\sum_{i=1}^n\sum_{q=2}^{d_{k_i}}f_q^{k_i}\left(\sum_{j=1}^n|a_j|x_{k_j}\right)\\
&= \frac{1}{m_{j_0}}\sum_{i=1}^n|a_i|\frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q=2}^{d_{k_i}}f_q^{k_i}\left(x_{k_i}\right)\\
& \geq \frac{1}{m_{j_0}}\sum_{i=1}^n\frac{1}{2}|a_i| = \frac{1}{2m_{j_0}}\sum_{i=1}^n|a_i|.
\end{split}
\end{equation*}
In the second case we may choose an infinite subset $L$ of $\mathbb{N}$ so that $(f_k)_{k\in L}$ is very fast growing. As $m_1 = 2$ and $n_1 = 1$, we deduce that for any $F = \{k_1,\ldots,k_n\}\subset L$ with $k_1<\cdots<k_n$ so that $(x_k)_{k\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible the functional
\begin{equation*}
f = \frac{1}{m_{j_0}}\sum_{i=1}^nf_{k_i}
\end{equation*}
is in $W_\mathbf{iw}$. As before, for every $F = \{k_1,\ldots,k_n\}\subset L$ with $k_1<\cdots<k_n$ so that $(x_k)_{k\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible and all scalars $a_1,\ldots,a_n$ we conclude that $\|\sum_{i=1}^na_ix_{k_i}\| \geq (1/m_{j_0})\sum_{i=1}^n|a_i|$.
\end{proof}
An easy consequence of the above result is the following.
\begin{cor}
\label{strong Krivine singleton}
The strong Krivine set of $X_{\mathbf{iw}}$ is $\widetilde K(X_{\mathbf{iw}}) = \{1\}$.
\end{cor}
\section{The auxiliary space}\label{section aux}
For every $N\in\mathbb{N}$ we define an auxiliary space via a norming set $W^N_\mathrm{aux}$ very similar to $W_\mathbf{iw}$. The reason we define an infinite family of auxiliary spaces is that we are interested in the almost isometric representation of finite unconditional sequences as block sequences in $X_{\mathbf{iw}}$. To define this norming set we slightly alter the notions of weighted functionals and very fast growing sequences. In this case, given a subset $G$ of $c_{00}(\mathbb{N})$, we will call a functional $f$ an auxiliary weighted functional of weight $w(f) = m_{j_1}\cdots m_{j_l}$ and vector weight $\vec w(f) = (j_1,\ldots,j_l)$, for $j_1,\ldots,j_l\in\mathbb{N}$, if it is of the form
\begin{equation*}
f = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q=1}^df_q
\end{equation*}
where the functionals $(f_q)_{q=1}^d$ are in $G$ and they are $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}\ast\mathcal{A}_3$ admissible. For all $i\in\mathbb{N}$ we will also say that $f = \pm e_i^*$ is an auxiliary weighted functional of weight $w(f) = \infty$ and we do not define $\vec w(f)$ in this case. A sequence of auxiliary weighted functionals $(f_q)_q$ will be called $N$-sufficiently large if $w(f_q) > N$ for $q\geq2$. There is no restriction on $w(f_1)$.
\begin{defn}
\label{aux}
For $N\in\mathbb{N}$ let $W_\mathrm{aux}^N$ be the smallest subset of $c_{00}(\mathbb{N})$ that satisfies the following two conditions.
\begin{itemize}
\item[(i)] $\pm e_i^*$ is in $W^N_\mathrm{aux}$ for all $i\in\mathbb{N}$ and
\item[(ii)] for every $j_1,\ldots,j_l\in\mathbb{N}$ and every $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}\ast\mathcal{A}_3$ admissible sequence of $N$-sufficiently large auxiliary weighted functionals $(f_q)_{q=1}^d$ in $W^N_\mathrm{aux}$ the functional $$f = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q=1}^df_q$$ is in $W^N_\mathrm{aux}$.
\end{itemize}
We define a norm $\|\cdot\|_{\mathrm{aux},N}$ on $c_{00}(\mathbb{N})$ by setting $\|x\|_{\mathrm{aux},N} = \sup\{f(x): f\in W_\mathrm{aux}^N\}$ for $x\in c_{00}(\mathbb{N})$.
\end{defn}
\begin{rem}
As in Remark \ref{inductive construction norming} the set $W_\mathrm{aux}^N$ can be defined as an increasing union of sets $(W_n^N)_{n=0}^\infty$ where $W^N_0 = \{\pm e_i^*:i\in\mathbb{N}\}$ and for each $n\in\mathbb{N}$ the set $W_{n+1}^N$ is defined by using $N$-sufficiently large $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}\ast\mathcal{A}_3$ admissible sequences in $W_n^N$.
\end{rem}
The purpose of the following two lemmas is to bound the norm of linear combinations of certain vectors in the auxiliary spaces from above. The final estimate of this section is \eqref{upper auxiliary eq} which will be used to bound the norm of appropriately chosen vectors in $X_{\mathbf{iw}}$.
\begin{lem}
\label{basis on basis}
Let $j_0\in\mathbb{N}$, $\varepsilon>0$, $x = \sum_{r\in F}c_re_r$ be a $(n_{j_0}-1,\varepsilon)$ basic s.c.c., and $\tilde x = m_{j_0}x$. Let also $j_1,\ldots,j_l\in\mathbb{N}$ with $\max_{1\leq i\leq l} j_i \neq j_0$, $G\in \mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}\ast\mathcal{A}_3$ and $f = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{i\in G}e_i^*$. Then
\begin{equation}
\label{basis on basis eq}
|f(\tilde x)| \leq \max\left\{2\varepsilon m_{j_0},\frac{m_{j_0}}{m_{j_0+1}},\frac{1}{m_{j_0}}\right\}.
\end{equation}
\end{lem}
\begin{proof}
If $\max_{1\leq i\leq l} j_i > j_0$ then $\|f\|_\infty \leq 1/(m_{j_0 +1})$ which yields
\begin{equation}
\label{basis on basis eq proof 1}
|f(\tilde x)| \leq \|f\|_\infty\|\tilde x\|_1 \leq \frac{m_{j_0}}{m_{j_0+1}}.
\end{equation}
If $\max_{1\leq i\leq l} j_i < j_0$ we distinguish two cases, namely whether $n_{j_1}+\cdots+n_{j_l} < n_{j_0} - 1$ or otherwise. In the first case, as $G\in\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}\ast\mathcal{A}_3$ we obtain
\begin{equation}
\label{basis on basis eq proof 2}
|f(\tilde x)| \leq \frac{m_{j_0}}{m_{j_1}\cdots m_{j_l}}\sum_{i\in G\cap F}c_i \leq \frac{m_{j_0}}{2}3\varepsilon.
\end{equation}
If on the other hand $\max_{1\leq i\leq l} j_i < j_0$ and $n_{j_1}+\cdots+n_{j_l}\geq n_{j_0} - 1$, by property (iii) of the sequences $(m_j)_j$, $(n_j)_j$ we obtain $m_{j_1}\cdots m_{j_l} \geq m_{j_0}^2$ which gives $\|f\|_\infty \leq 1/m_{j_0}^2$. We conclude
\begin{equation}
\label{basis on basis eq proof 3}
|f(\tilde x)| \leq \|f\|_\infty\|\tilde x\|_1 \leq \frac{m_{j_0}}{m_{j_0}^2} = \frac{1}{m_{j_0}}.
\end{equation}
The result follows from combining \eqref{basis on basis eq proof 1}, \eqref{basis on basis eq proof 2}, and \eqref{basis on basis eq proof 3}.
\end{proof}
\begin{lem}
\label{upper auxiliary}
Let $N,k,l\in\mathbb{N}$, $\varepsilon>0$, $(t_i)_{i=1}^k$ be pairwise different natural numbers and $(x_{i,j})_{1\leq i \leq k, 1\leq j\leq l}$ be vectors in $c_{00}(\mathbb{N})$ so that for each $i,j$ the vector $x_{i,j}$ is of the form
\begin{equation}
x_{i,j} = m_{t_i}\tilde x_{i,j},\text{ where } \tilde x_{i,j} = \sum_{r\in F_{i,j}}c_r^{i,j}e_r \text{ is a } (n_{t_i}-1,\varepsilon) \text{ basic s.c.c.}
\end{equation}
Then, for any scalars $(a_{i,j})_{1\leq i \leq k, 1\leq j\leq l}$ and $f\in W_\mathrm{aux}^N$ we have
\begin{equation}
\label{upper auxiliary eq}
\left|f\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| \leq (1+\delta)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|,
\end{equation}
for any $\delta$ satisfying
\begin{equation}
\label{this long delta}
\delta \geq \sum_{i=1}^k \max\left\{12\varepsilon m_{t_i},12\frac{1}{m_{t_i}},6\frac{1}{N}m_{t_i},6\frac{m_{t_i}}{m_{t_i+1}}\right\}.
\end{equation}
\end{lem}
\begin{rem}
We point out that the vectors $x_{i,j}$, $1\leq i \leq k, 1\leq j\leq l$, are not required to have successive or disjoint supports.
\end{rem}
\begin{proof}[Proof of Lemma \ref{upper auxiliary}]
The proof is performed by induction on $m = 0,1,\ldots$ by showing that \eqref{upper auxiliary eq} holds for every $f\in W_m^N$. For $m = 0$ the result easily follows from the fact that for all $n\in\mathbb{N}$ and $1\leq i\leq k$, $1\leq j\leq l$ we have $|e_n^*(x_{i,j})| \leq m_{t_i}\varepsilon$ which yields
\[\left|e_n^*\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| \leq \left(\varepsilon \sum_{i=1}^km_{t_i}\right)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}| \leq (1+\delta)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|,\]
where the last inequality holds because \eqref{this long delta} implies $\varepsilon\sum_{i=1}^km_{t_i} \leq \delta/12$.
Assume that the conclusion holds for every $f\in W_m^N$ and let $f\in W_{m+1}^N\setminus W_{m}^N$. Write
$$f = \frac{1}{m_{j_1}\cdots m_{j_a}}\sum_{q=1}^df_q$$
where $j_1,\ldots,j_a\in\mathbb{N}$ and $(f_q)_{q=1}^d$ is an $N$-sufficiently large and $\mathcal{S}_{n_{j_1}+\cdots+n_{j_a}}\ast\mathcal{A}_3$-admissible sequence of functionals in $W_m^N$. We define $b = \max\{j_1,\ldots,j_a\}$. The inductive assumption yields
\begin{equation}
\label{upper auxiliary proof eq1}
\left|f_1\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| \leq (1+\delta)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|.
\end{equation}
Set $B = \{2\leq q\leq d: f_q = \pm e_n^* \text{ for some }n\in\mathbb{N}\}$ and $C = \{2,\ldots,d\}\setminus B$. Define
$$g_1 = \frac{1}{m_{j_1}\cdots m_{j_a}}f_1,\; g_2 = \frac{1}{m_{j_1}\cdots m_{j_a}}\sum_{q\in B}f_q, \text{ and } g_3 = \frac{1}{m_{j_1}\cdots m_{j_a}}\sum_{q\in C}f_q.$$
Clearly, $f = g_1 + g_2 + g_3$. It follows from the definition of $N$-sufficiently large that $\|g_3\|_\infty\leq 1/(Nm_{j_1}\cdots m_{j_a})$ which implies that for all $1\leq i \leq k, 1\leq j\leq l$ we have $|g_3(x_{i,j})|\leq m_{t_i}/(Nm_{j_1}\cdots m_{j_a})$ and hence
\begin{equation}
\begin{split}
\left|g_3\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| &\leq \left(\frac{1}{Nm_{j_1}\cdots m_{j_a}}\sum_{i=1}^km_{t_i}\right)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|\\
&\leq \frac{\delta}{6}\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|.
\end{split}
\end{equation}
Lemma \ref{basis on basis} yields that if we set $D = \{1\leq i\leq k:t_i\neq b\}$ then
\begin{equation*}
\begin{split}
\left|g_2\left(\sum_{j=1}^l\sum_{i\in D}a_{i,j}x_{i,j}\right)\right| &\leq
\sum_{j=1}^l\sum_{i\in D}|a_{i,j}|\max\left\{2\varepsilon m_{t_i},\frac{m_{t_i}}{m_{t_i+1}},\frac{1}{m_{t_i}}\right\}\\
&\leq
\frac{\delta}{6}\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|,\end{split}
\end{equation*}
whereas an easy computation yields that if there is $1\leq i_0\leq k$ with $b = t_{i_0}$ then for all $1\leq j\leq l$ we have $|g_2(x_{i_0,j})|\leq 1$ and hence
\begin{equation}
\left|g_2\left(\sum_{j=1}^la_{i_0,j}x_{i_0,j}\right)\right| \leq \sum_{j=1}^l|a_{i_0,j}| \leq \max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|.
\end{equation}
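The computation behind the bound $|g_2(x_{i_0,j})|\leq 1$ is the following: since the coefficients of the s.c.c. $\tilde x_{i_0,j}$ sum to one we have $\|x_{i_0,j}\|_1 = m_{t_{i_0}}$, and since the $f_q$, $q\in B$, are disjointly supported unit functionals we have $\|g_2\|_\infty \leq 1/(m_{j_1}\cdots m_{j_a}) \leq 1/m_b = 1/m_{t_{i_0}}$. Hence
\begin{equation*}
|g_2(x_{i_0,j})| \leq \|g_2\|_\infty\|x_{i_0,j}\|_1 \leq \frac{m_{t_{i_0}}}{m_{t_{i_0}}} = 1.
\end{equation*}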
We now have all the necessary components to complete the inductive step. We consider two cases, namely one in which such an $i_0$ does not exist (i.e. when $D = \{1,\ldots,k\}$) and one in which such an $i_0$ exists (i.e. $b = t_{i_0}$ for some $1\leq i_0\leq k$). In the first case we obtain
\begin{align*}
\left|f\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right|\span\\
&\leq \left|g_1\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| + \left|g_2\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| + \left|g_3\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right|\\
& = \left|g_1\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| + \left|g_2\left(\sum_{j=1}^l\sum_{i\in D}a_{i,j}x_{i,j}\right)\right| + \left|g_3\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right|\\
&\leq \left(\frac{1+\delta}{m_{j_1}\cdots m_{j_a}} + \frac{\delta}{6} + \frac{\delta}{6}\right)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}| \leq (1+\delta)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|.
\end{align*}
In the second case
\begin{align*}
\left|f\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right|\span\\
&\leq \left|g_1\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| + \left|g_2\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| + \left|g_3\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right|\\
& = \left|g_1\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| + \left|g_2\left(\sum_{j=1}^l\sum_{i\in D}a_{i,j}x_{i,j}\right)\right| + \left|g_2\left(\sum_{j=1}^la_{i_0,j}x_{i_0,j}\right)\right|\\
&+ \left|g_3\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right|\text{ (use $m_{j_1}\cdots m_{j_a} \geq m_{t_{i_0}}$)}\\
& \leq \left(\frac{1+\delta}{m_{t_{i_0}}} + \frac{\delta}{6} + 1 +\frac{\delta}{6}\right)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}| \text{ (use $\delta\geq 6/m_{t_{i_0}}$)}\\
&\leq\left(1 +3\frac{\delta}{6} + \frac{\delta}{2}\right)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}| = (1+\delta)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|.
\end{align*}
The proof is complete.
\end{proof}
\section{Rapidly increasing sequences and the basic inequality}
\label{ris and basic section}
Rapidly increasing sequences appear in every HI-type construction and this case is no different, as the definition below follows the lines of classical examples such as \cite{AH}. The basic inequality is the main tool used to bound the norm of linear combinations of such sequences from above by the norm of corresponding vectors in the auxiliary spaces. To achieve the isometric representation of unconditional sequences as block sequences in subspaces of $X_{\mathbf{iw}}$ we give a rather tight estimate in the basic inequality \eqref{this is the basic inequality in the flesh}.
\begin{defn}
\label{definition ris}
Let $C\geq 1$, $I$ be an interval of $\mathbb{N}$ and $(j_i)_{i\in I}$ be a strictly increasing sequence of natural numbers. A block sequence $(x_i)_{i\in I}$ is called a $(C,(j_i)_{i\in I})$ rapidly increasing sequence (RIS) if the following are satisfied.
\begin{itemize}
\item[(i)] For all $i\in I$ we have $\|x_i\| \leq C$,
\item[(ii)] for $i\in I\setminus\{\min(I)\}$ we have $\max\supp(x_{i-1}) < \sqrt{m_{j_i}}$, and
\item[(iii)] $|f(x_i)| \leq C/w(f)$ for every $i\in I$ and $f\in W_\mathbf{iw}$ with $w(f) < m_{j_i}$.
\end{itemize}
\end{defn}
\begin{prop}[basic inequality]
\label{basic inequality}
Let $(x_i)_{i\in I}$ be a $(C,(j_i)_{i\in I})$-RIS, $(a_i)_{i\in I}$ be a sequence of scalars, and $N < \min\{m_{j_{\min(I)}}, \min\supp(x_{\min(I)})\}$ be a natural number. Then, for every $f\in W_\mathbf{iw}$ there exist $h\in\{\pm e_i^*:i\in\mathbb{N}\}\cup\{0\}$ and $g\in W_\mathrm{aux}^N$ with $w(f) = w(g)$ so that if $t_i = \max\supp(x_i)$ for $i\in I$ then we have
\begin{equation}
\label{this is the basic inequality in the flesh}
\left|f\left(\sum_{i\in I}a_i x_i\right)\right| \leq C\left(1+\frac{1}{\sqrt{m_{j_{\min(I)}}}}\right)\left|(h + g)\left(\sum_{i\in I}a_ie_{t_i}\right)\right|.
\end{equation}
\end{prop}
\begin{proof}
We use Remark \ref{inductive construction norming} to prove the statement by induction on $n=0,1,\ldots$ for every $f\in W_n$ and every RIS. We shall also include in the inductive assumption that $\supp(h)$ and $\supp(g)$ are subsets of $\{t_i:i\in I\}$ as well as the following:
\begin{itemize}
\item[(i)] either $h = 0$,
\item[(ii)] or $h$ is of the form $\pm e_{t_{i_1}}^*$ for some $i_1\in I$, $t_{i_1} < \min\supp(g)$, and $w(f) > N$.
\end{itemize}
For $n=0$ the result is rather straightforward so let us assume that the conclusion holds for every $f\in W_n$ and let $f\in W_{n+1}$.
Let
$$f = \frac{1}{m_{s_1}\cdots m_{s_l}}\sum_{q=1}^df_q$$
with $(f_q)_{q=1}^d$ being an $\mathcal{S}_{n_{s_1}+\cdots+n_{s_l}}$ admissible and very fast growing sequence of weighted functionals in $W_n$. By perhaps omitting an initial interval of the $f_q$'s we may assume that $\max\supp(f_1) \geq \min\supp (x_{\min(I)})$. This means that for all $1<q\leq d$ we have $w(f_q) > \max\supp (f_1) \geq \min\supp(x_{\min(I)}) > N$. We shall use this near the end of the proof. Define
$$i_0 = \max\left\{i\in I: m_{s_1}\cdots m_{s_l}\geq m_{j_i}\right\},$$
if such an $i_0$ exists (we will treat the case in which such an $i_0$ does not exist slightly further below). In this case $w(f) = m_{s_1}\cdots m_{s_l}\geq m_{j_{i_0}} >N$. Choose $\min(I)\leq i_1\leq i_0$ that maximizes the quantity $|a_i|$ for $i$ in $\{\min(I),\ldots,i_0\}$ and set $h = \mathrm{sign}(f(a_{i_1}x_{i_1}))e_{t_{i_1}}^*$. If $i_0>\min(I)$ it is straightforward to check $\|\sum_{i<i_0}a_ix_i\|_\infty \leq C|a_{i_1}|$ and we use this to show
\begin{equation}
\label{basic inequality eq1}
\begin{split}
\left|f\left(\sum_{i\leq i_0}a_i x_i\right)\right| &\leq \max\supp(x_{i_0-1})\left\|\sum_{i < i_0}a_i x_i\right\|_\infty \frac{1}{w(f)} + \left|f(a_{i_0}x_{i_0})\right|\\
&\leq C\frac{\max\supp(x_{i_0-1})}{m_{j_{i_0}}}|a_{i_1}| + C|a_{i_1}| \leq C\left(1+\frac{1}{\sqrt{m_{j_{i_0}}}}\right)|a_{i_1}|\\
& = C\left(1+\frac{1}{\sqrt{m_{j_{i_0}}}}\right)\left|h\left(\sum_{i\in I}a_ie_{t_i}\right)\right|.
\end{split}
\end{equation}
If $i_0 = \min(I)$ we simply obtain $|f(\sum_{i\leq i_0}a_ix_i)| \leq C|a_{i_1}|$. In either case estimate \eqref{basic inequality eq1} holds.
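Note that the middle step in \eqref{basic inequality eq1} combines the definition of $i_0$, which gives $w(f)\geq m_{j_{i_0}}$, with condition (ii) of Definition \ref{definition ris}:
\begin{equation*}
\frac{\max\supp(x_{i_0-1})}{w(f)} \leq \frac{\max\supp(x_{i_0-1})}{m_{j_{i_0}}} < \frac{\sqrt{m_{j_{i_0}}}}{m_{j_{i_0}}} = \frac{1}{\sqrt{m_{j_{i_0}}}}.
\end{equation*}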
If such an $i_0$ does not exist (i.e. when $w(f) < m_{j_{\min(I)}}$) then set $h = 0$ and we have no lower bound for $w(f)$. This is of no concern as such a restriction is not included in the inductive assumption when $h=0$.
Depending on whether the above $i_0$ exists or not define $\tilde I = \{i\in I:i>i_0\}$ or $\tilde I = I$. It remains to find $g\in W_\mathrm{aux}^N$ with $w(g) = w(f)$ and $\supp(g)\subset \{t_i:i\in\tilde I\}$ so that $|f(\sum_{i\in\tilde I}a_ix_i)| \leq C(1+1/\sqrt{m_{j_{\min(I)}}})|g(\sum_{i\in \tilde I}a_ie_{t_i})|$. Define
\begin{align*}
A &= \left\{i\in\tilde I: \text{ there exists at most one } q \text{ with }\ran(x_i)\cap\ran(f_q)\neq\emptyset\right\},\\
I_q &= \left\{i\in A: \ran(f_q)\cap\ran(x_i)\neq\emptyset\right\}\text{ for }1\leq q\leq d,\\
D &= \{1\leq q\leq d: I_q\neq \emptyset\}\text{ and}\\
B &= \tilde I\setminus A.
\end{align*}
Observe that the $I_q$'s are pairwise disjoint intervals. Apply the inductive assumption for each $f_q$ with $q\in D$ and the $(C,(j_i)_{i\in I_q})$ RIS $(x_i)_{i\in I_q}$ to find $h_q\in\{\pm e_{t_i}^*:i\in I_q\}\cup\{0\}$ and $g_q\in W_\mathrm{aux}^N$ satisfying the inductive assumption, in particular
\begin{equation*}
\left|f_q\left(\sum_{i\in I_q}a_i x_i\right)\right| \leq C\left(1+\frac{1}{\sqrt{m_{j_{\min(I_q)}}}}\right)\left|(h_q + g_q)\left(\sum_{i\in I_q}a_ie_{t_i}\right)\right|.
\end{equation*}
Using the above it is not hard to see that $h$ and
$$g = \frac{1}{m_{s_1}\cdots m_{s_l}}\left(\sum_{i\in B}\mathrm{sign}(f(a_ix_i))e^*_{t_i}+ \sum_{q=1}^dh_q +\sum_{q=1}^d g_q\right)$$
satisfy \eqref{this is the basic inequality in the flesh}. To complete the proof it remains to show that the functionals $(e^*_{t_i})_{i\in B}{}^\frown (h_q)_{q\in D}{}^\frown (g_q)_{q\in D}$ can be ordered to form an $\mathcal{S}_{n_{s_1}+\cdots+n_{s_l}}\ast\mathcal{A}_3$ admissible and $N$-sufficiently large sequence.
For each $1\leq q \leq d$ we shall define a collection of at most three functionals $\mathcal{F}_q$ (it may also be empty) with the following properties:
\begin{itemize}
\item[(a)] for each $\phi\in\mathcal{F}_q$ we have $\min\supp(f_q) \leq \min\supp(\phi)$ and if $1\leq q<d$ then $\max\supp(\phi) < \min\supp(f_{q+1})$, and
\item[(b)] $\cup_{1\leq q\leq d}\mathcal{F}_q = \{e^*_{t_i}: i\in B\}\cup\{h_q:q\in D\}\cup\{g_q:q\in D\}$.
\end{itemize}
For each $i\in B$ set $q_i = \max\{1\leq q\leq d: \min\supp(f_q)\leq \max\supp(x_i)\}$. Note that the correspondence $i\to q_i$ is strictly increasing. For each $q$ for which there is $i$ so that $q = q_i$ set $\mathcal{F}_q = \{h_q,g_q,e_{t_i}^*\}$. Depending on whether $q\in D$ and whether $h_q = 0$, some of the functionals $h_q$, $g_q$ may be omitted. For $q$ for which there is no $i$ with $q=q_i$ define $\mathcal{F}_q= \{h_q,g_q\}$, omitting if necessary any of $h_q$ or $g_q$. Properties (a) and (b) are not very hard to show.
It now follows from (a) and the spreading property of the Schreier families that the set $\{\min\supp(h):h\in\cup_{1\leq q\leq d}\mathcal{F}_q\}$ is $\mathcal{S}_{n_{s_1}+\cdots+n_{s_l}}\ast\mathcal{A}_3$ admissible. It follows from (b) that ordering the functionals in $(e^*_{t_i})_{i\in B}{}^\frown (h_q)_{q\in D}{}^\frown (g_q)_{q\in D}$ according to the minima of their supports yields an $\mathcal{S}_{n_{s_1}+\cdots+n_{s_l}}\ast\mathcal{A}_3$ admissible sequence.
We now show that the sequence is $N$-sufficiently large. Recall that for all $q>1$ we have $w(f_q) > N$ and hence if $g_q$ is defined we have $w(g_q)>N$. It remains to show that if $g_1$ is defined and it does not appear first in the enumeration above then $w(g_1) > N$. For this to be the case, the set $\mathcal{F}_1$ must contain the functional $h_1\neq 0$. By the inductive assumption this means $w(g_1) = w(f_1) > N$ and the proof is complete.
\end{proof}
\subsection{Existence of rapidly increasing sequences}
\label{section ris existence}
As is the case in past constructions, rapidly increasing sequences are given by special convex combinations of normalized block vectors whose norms are bounded from below. To achieve the desired isometric representation we show that this lower bound may be chosen arbitrarily close to one. We then show that such sequences can be chosen to be $C$-RIS for any $C>1$.
\begin{prop}
\label{very good ell1 vectors}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}$. Then for every $n\in\mathbb{N}$, $\varepsilon>0$, and $\delta>0$ there exists a $(n,\varepsilon)$ s.c.c. $x = \sum_{i=1}^mc_ix_i$ with $\|x\| > 1/(1+\delta)$ where $x_1,\ldots,x_m$ are in the unit ball of $Y$.
\end{prop}
\begin{proof}
Towards a contradiction assume that the conclusion is false. That is, for all $\mathcal{S}_n$-admissible vectors $(x_i)_{i=1}^m$ in the unit ball of $Y$ so that the vector $x = \sum_{i=1}^mc_ix_i$ is a $(n,\varepsilon)$ s.c.c. we have $\|x\| \leq 1/(1+\delta)$.
Start with a normalized block sequence $(x_i)_i$ in $Y$ and take a subsequence $(x_i^0)_i$ that satisfies the conclusion of Proposition \ref{uniform ell1}. Using the properties of $(m_j)_j$, $(n_j)_j$ fix $j\in\mathbb{N}$ with $n_j \geq n$ and
\begin{equation}
\label{very good ell1 vectors eq1}
\frac{\left(\left(1+\delta\right)^{\frac{1}{n}}\right)^{n_j}}{m_j} \geq 2(1+\delta).
\end{equation}
Define inductively block sequences $(x^k_i)_i$ for $1\leq k\leq \lfloor n_j/n\rfloor$ satisfying the following.
\begin{itemize}
\item[(i)] for each $i,k$ there is a subset $F_i^k$ of $\mathbb{N}$ so that $(x_m^{k-1})_{m\in F_i^k}$ is $\mathcal{S}_n$ admissible, and coefficients $(c_m^{k-1})_{m\in F_i^k}$ so that $\tilde x_i^k = \sum_{m\in F_i^k}c_m^{k-1}x_m^{k-1}$ is a $(n,\varepsilon)$ s.c.c., and
\item[(ii)] for each $i,k$ we set $x_i^k = (1+\delta)\tilde x_i^k$.
\end{itemize}
Using the negation of the desired conclusion, it is straightforward to check by induction that $\|x_i^k\| \leq 1$ and that for $k\leq \lfloor n_j/n\rfloor$ each vector $x_i^k$ can be written in the form
$$x_i^k = (1+\delta)^k\sum_{m\in G_i^k}d_m^kx_m^0$$
for some subset $G_i^k$ of $\mathbb{N}$ so that $(x_m^0)_{m\in G_i^k}$ is $\mathcal{S}_{nk}$ admissible and the coefficients satisfy $\sum_{m\in G_i^k}d_m^k = 1$. As the sequence $(x_i^0)_i$ satisfies the conclusion of Proposition \ref{uniform ell1} we deduce that for $k = \lfloor n_j/n\rfloor$, which satisfies $n_j - n < kn\leq n_j$,
\begin{equation*}
1\geq \|x_i^k\| \geq \frac{(1+\delta)^k}{2m_j} > \frac{(1+\delta)^{\frac{n_j}{n}-1}}{2m_j},
\end{equation*}
and therefore \eqref{very good ell1 vectors eq1} yields $1 > 1$, which is absurd.
\end{proof}
\begin{prop}
\label{scc are ris}
Let $x = \sum_{i=1}^mc_ix_i$ be a $(n,\varepsilon)$ s.c.c. with $\|x_i\| \leq 1$ for $1\leq i \leq m$ and $f\in W_\mathbf{iw}$ with $\vec w(f) = (m_{j_1},\ldots,m_{j_l})$ so that $n_{j_1} +\cdots +n_{j_l} < n$. Then we have
\begin{equation*}
|f(x)| \leq \frac{1+2\varepsilon w(f)}{w(f)}.
\end{equation*}
\end{prop}
\begin{proof}
Let $f = (1/m_{j_1}\cdots m_{j_l})\sum_{q=1}^df_q$ with $(f_q)_{q=1}^d$ $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible. Consider the subset of $\{1,\ldots,m\}$
$$A = \left\{i: \text{ there is at most one } 1\leq q\leq d \text{ with } \ran(x_i)\cap\ran(f_q)\neq\emptyset\right\}$$
and observe that for each $i\in A$ we have $|f(x_i)| \leq 1/(m_{j_1}\cdots m_{j_l})$ and hence
\begin{equation}
\label{scc are ris eq1}
\left|f\left(\sum_{i=1}^mc_ix_i\right)\right| \leq \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{i\in A}c_i + \sum_{i\notin A}c_i.
\end{equation}
Set $B = \{1,\ldots,m\}\setminus A$. By the shifting property of the Schreier families it follows that the vectors $(x_i)_{i\in B\setminus\{\min(B)\}}$ are $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$ admissible. As the singleton $\{x_{\min(B)}\}$ is $\mathcal{S}_1$ admissible we conclude that $\sum_{i\in B}c_i < 2\varepsilon$. Applying this to \eqref{scc are ris eq1} immediately yields the desired conclusion.
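In detail, since $\sum_{i\in A}c_i \leq 1$ and $\sum_{i\in B}c_i < 2\varepsilon$, estimate \eqref{scc are ris eq1} gives
\begin{equation*}
\left|f\left(\sum_{i=1}^mc_ix_i\right)\right| \leq \frac{1}{m_{j_1}\cdots m_{j_l}} + 2\varepsilon = \frac{1+2\varepsilon w(f)}{w(f)}.
\end{equation*}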
\end{proof}
\begin{cor}
\label{building ris}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}$ and $C>1$. Then there exists an infinite $(C,(j_i)_{i})$-RIS $(x_i)_i$ in $Y$ with $\|x_i\| \geq 1$ for all $i\in\mathbb{N}$.
\end{cor}
\begin{proof}
We define the sequence $(x_i)_i$ inductively as follows. Fix $\delta > 0$ with $1+\delta<C$ and, having chosen $x_1,\ldots,x_{i-1}$, choose $j_i$ with $\sqrt{m_{j_i}} > \max\supp(x_{i-1})$, choose a natural number $k_i$ with the property that for all $s_1,\ldots,s_l\in\mathbb{N}$ that satisfy $m_{s_1}\cdots m_{s_l} < m_{j_i}$ we have $n_{s_1}+\cdots+n_{s_l} < k_i$, and choose $\varepsilon_i > 0$ with $(1+\delta)(1+2\varepsilon_im_{j_i}) \leq C$. Use Proposition \ref{very good ell1 vectors} to find a $(k_i,\varepsilon_i)$ s.c.c. $y_i$ in $Y$ with $\min\supp(y_i)>\max\supp(x_{i-1})$ and $1/(1+\delta)\leq \|y_i\| \leq 1$ and set $x_i = (1+\delta)y_i$. Proposition \ref{scc are ris} yields that $(x_i)_i$ is the desired sequence.
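To spell out why condition (iii) of Definition \ref{definition ris} holds: if $f\in W_\mathbf{iw}$ has $w(f) = m_{s_1}\cdots m_{s_l} < m_{j_i}$ then $n_{s_1}+\cdots+n_{s_l} < k_i$ by the choice of $k_i$, so Proposition \ref{scc are ris} applied to the $(k_i,\varepsilon_i)$ s.c.c. $y_i$ gives
\begin{equation*}
|f(x_i)| = (1+\delta)|f(y_i)| \leq (1+\delta)\frac{1+2\varepsilon_i w(f)}{w(f)} \leq \frac{(1+\delta)(1+2\varepsilon_i m_{j_i})}{w(f)} \leq \frac{C}{w(f)},
\end{equation*}
by the choice of $\varepsilon_i$.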
\end{proof}
\section{Hereditary Asymptotic structure of $X_{\mathbf{iw}}$}
\label{fbr in X}
This section is devoted to the study of the asymptotic behavior of subspaces of $X_{\mathbf{iw}}$. As it was shown in Section \ref{uniform unique spreading model section} the space $X_{\mathbf{iw}}$ only admits spreading models 4-equivalent to the unit vector basis of $\ell_1$. We show that the joint behavior of arrays of sequences does not retain this uniform behavior. In fact, $c_0$ is an asymptotic model of every subspace of $X_{\mathbf{iw}}$ and every 1-unconditional sequence is block finitely representable in every block subspace of $X_{\mathbf{iw}}$. These results in particular yield that $X_{\mathbf{iw}}$ does not have an asymptotic-$\ell_p$ subspace.
\begin{prop}
\label{omega joint spreading models}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}$ and $\varepsilon>0$. Then there exists an array of block sequences $(x_j^{(i)})_j$, $i\in\mathbb{N}$, in $Y$ so that for any $k,l\in\mathbb{N}$, scalars $(a_{i,j})_{1\leq i\leq k, 1\leq j\leq l}$, and plegma family $(s_i)_{i=1}^k$ in $[\mathbb{N}]^l$ with $\min(s_1) \geq \max\{k,l\}$ we have
\begin{equation}
\label{omega equation}
\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}| \leq \left\|\sum_{i=1}^k\sum_{j=1}^la_{i,j}x_{s_i(j)}^{(i)}\right\| \leq (1+\varepsilon)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|.
\end{equation}
\end{prop}
\begin{proof}
Fix $1 < C < \min\{(1+\varepsilon)^{1/4},2\}$ and $0<\delta \leq ((1+\varepsilon)^{1/2}-1)/2$. Using the properties of the sequences $(m_j)_j$, $(n_j)_j$ from Section \ref{definition section}, page \pageref{definition section} we fix a sequence of pairwise different natural numbers $(t_i)_{i=1}^\infty$ satisfying for $i\in\mathbb{N}$
\begin{equation}
\frac{1}{m_{t_i}}\leq \frac{\delta}{12\cdot 2^i}\text{ and } \frac{m_{t_i}}{m_{t_i+1}} \leq \frac{\delta}{6\cdot 2^i}.
\end{equation}
For each $k\in\mathbb{N}$ fix $\bar \varepsilon_k >0$ and $N_k\in\mathbb{N}$ so that for $1\leq i \leq k$
\begin{equation}
\bar \varepsilon_k \leq \frac{\delta}{12m_{t_i}2^i} \text{ and } \frac{m_{t_i}}{N_k} \leq \frac{\delta}{6\cdot 2^i}.
\end{equation}
Observe that for any $k\in\mathbb{N}$ we have that $\delta$, $N_k$, $\bar\varepsilon_k,$ and $(m_{t_i})_{i=1}^k$ satisfy \eqref{this long delta}.
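Indeed, by the choice of $(t_i)_i$, $\bar\varepsilon_k$, and $N_k$, each of the four quantities appearing in \eqref{this long delta} is at most $\delta/2^i$, so
\begin{equation*}
\sum_{i=1}^k \max\left\{12\bar\varepsilon_k m_{t_i},12\frac{1}{m_{t_i}},6\frac{1}{N_k}m_{t_i},6\frac{m_{t_i}}{m_{t_i+1}}\right\} \leq \sum_{i=1}^k\frac{\delta}{2^i} < \delta.
\end{equation*}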
Use Corollary \ref{building ris} to find an infinite $(C,(\bar{j_s})_{s})$-RIS $(y_s)_s$ in $Y$ with $\|y_s\| \geq 1$ for all $s\in\mathbb{N}$. By perhaps passing to a subsequence we may assume that for all $s\in\mathbb{N}$ we have
\begin{equation}
\label{*how to find that one basis eqegg}
\begin{split}
&\frac{1}{\sqrt{m_{\bar{j_s}}}} \leq (1+\varepsilon)^{1/4} - 1,\\
&{N_s}\leq m_{\bar{j_s}}, \text{ and } \min\supp(y_s)\geq \max_{1\leq i\leq s}\{n_{t_i},N_i,6/\bar\varepsilon_i\}.
\end{split}
\end{equation}
For each $s$ find $f_s$ in $W_\mathbf{iw}$ with $\supp(f_s)\subset \supp(y_s)$ and $f_s(y_s) = \|y_s\| \geq 1$. Note that for all $s$ we have $w(f_s) \geq m_{\bar{j_s}}$, otherwise by property (iii) of Definition \ref{definition ris} we would have $1\leq f_s(y_s) \leq C/w(f_s) < 2/w(f_s) \leq 1$ (because $m_1 = 2$) which is absurd. Hence, using property (ii) of Definition \ref{definition ris}, for all $s>1$ we have $w(f_s) \geq m_{\bar{j_s}} > (\max\supp(y_{s-1}))^2 \geq (\max\supp(f_{s-1}))^2 \geq \max\supp(f_{s-1})$, i.e. $(f_s)_s$ is very fast growing.
Choose disjoint finite subsets of $\mathbb{N}$, $F^{(i)}_j$, $i,j\in\mathbb{N}$, so that for each $i,j\in\mathbb{N}$ we have $F^{(i)}_j < F^{(i)}_{j+1}$ and $\{\min\supp(y_s): s\in F^{(i)}_{j}\}$ is a maximal $\mathcal{S}_{n_{t_i}-1}$ set. Using Proposition \ref{basic scc exist in abundance} find coefficients $(c_{s}^{i,j})_{s\in F^{(i)}_{j}}$ so that the vector $\tilde x_{i,j} = \sum_{s\in F^{(i)}_{j}}c_s^{i,j}y_s$ is a $(n_{t_i}-1,\bar\varepsilon_j/2)$ s.c.c. Note that by Remark \ref{some remarks for the far future 2} if $\phi_s = \max\supp(y_s)$ then the vector $\tilde z_{i,j} = \sum_{s\in F^{(i)}_{j}}c_s^{i,j}e_{\phi_s}$ is a $(n_{t_i}-1,\bar\varepsilon_j)$ basic s.c.c. Hence, for any $k,l\in\mathbb{N}$ and any $k \leq s_i(1) < \cdots<s_i(l)$, $1\leq i\leq k$, the vectors $z^{(i)}_{s_i(j)} = m_{t_i}\tilde z_{i,s_i(j)}$, $1\leq i\leq k$, $1\leq j\leq l$, satisfy \eqref{upper auxiliary eq} of Lemma \ref{upper auxiliary} with the $\delta$, $N_k$, $\bar \varepsilon_k$ chosen above.
Define $x^{(i)}_{j} = m_{t_i}\tilde x_{i,j}$ for $i,j\in\mathbb{N}$. We will show that this is the desired sequence and to that end let $k,l\in\mathbb{N}$ and let $(s_i)_{i=1}^k$ be a plegma in $[\mathbb{N}]^l$ with $\min(s_1)\geq \max\{k,l\}$. For the upper inequality, Proposition \ref{basic inequality} yields that for any scalars $(a_{i,j})_{1\leq i\leq k, 1\leq j\leq l}$ we have
\begin{align*}
\left\|\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x^{(i)}_{s_i(j)}\right\|\span \\
\leq& C\left(1+ \frac{1}{\sqrt{m_{\bar{j_1}}}}\right)\left(\max_{\substack{1\leq i\leq k\\1\leq j\leq l}}\max_{s\in F^{(i)}_{j}}\left(m_{t_i}|a_{i,j}|c^{i,j}_s\right) +\left\|\sum_{j=1}^l\sum_{i=1}^ka_{i,j}z^{(i)}_{s_i(j)}\right\|_{\mathrm{aux},N_k}\right)\\
\leq& (1+\varepsilon)^{1/4}\left(1+\varepsilon\right)^{1/4}\left(\max_{\substack{1\leq i\leq k\\1\leq j\leq l}}\left(m_{t_i}|a_{i,j}|\bar\varepsilon_k\right)+\left\|\sum_{j=1}^l\sum_{i=1}^ka_{i,j}z^{(i)}_{s_i(j)}\right\|_{\mathrm{aux},N_k}\right)\\
\leq& (1+\varepsilon)^{1/2}\left(\delta\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|+\left\|\sum_{j=1}^l\sum_{i=1}^ka_{i,j}z^{(i)}_{s_i(j)}\right\|_{\mathrm{aux},N_k}\right)\\
\leq& (1+\varepsilon)^{1/2}\left(\delta\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|+(1+\delta)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}| \right)\text{ (from \eqref{upper auxiliary eq})}\\
\leq& (1+\varepsilon)^{1/2}(1+2\delta)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}| \leq (1+\varepsilon)\max_{1\leq i\leq k}\sum_{j=1}^l|a_{i,j}|.
\end{align*}
For the lower inequality we observe that for fixed $1\leq i_0\leq k$ the functionals $((f_s)_{s\in F^{(i_0)}_{s_{i_0}(j)}})_{j=1}^l$ are very fast growing and for each $1\leq j\leq l$ the functionals $(f_s)_{s\in F^{(i_0)}_{s_{i_0}(j)}}$ are $\mathcal{S}_{n_{t_{i_0}}-1}$ admissible. It follows from \eqref{*how to find that one basis eqegg} that $((f_s)_{s\in F^{(i_0)}_{s_{i_0}(j)}})_{j=1}^l$ is $\mathcal{S}_{n_{t_{i_0}}}$-admissible and hence $f = (1/m_{t_{i_0}})\sum_{j=1}^l\sum_{s\in F^{(i_0)}_{s_{i_0}(j)}}f_s$ is in $W_\mathbf{iw}$. It follows that $f(x^{(i_0)}_{s_{i_0}(j)})\geq 1$ for all $1\leq j\leq l$ which means that for any coefficients $(a_{i,j})_{1\leq i\leq k, 1\leq j\leq l}$ we have
\begin{equation*}
\begin{split}
\left\|\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x^{(i)}_{s_i(j)}\right\| &= \left\|\sum_{j=1}^l\sum_{i=1}^k|a_{i,j}|x^{(i)}_{s_i(j)}\right\| \geq f\left(\sum_{j=1}^l\sum_{i=1}^k|a_{i,j}|x^{(i)}_{s_i(j)}\right)\\
&= f\left(\sum_{j=1}^l|a_{i_0,j}|x^{(i_0)}_{s_{i_0}(j)}\right)\geq \sum_{j=1}^l|a_{i_0,j}|.
\end{split}
\end{equation*}
\end{proof}
\begin{thm}
\label{c0 asmodel}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}$.
\begin{itemize}
\item[(a)] For every $\varepsilon>0$ there exists an array of block sequences in $Y$ that generate an asymptotic model that is $(1+\varepsilon)$-equivalent to the unit vector basis of $c_0$.
\item[(b)] For every $\varepsilon>0$ and $k\in\mathbb{N}$ there exists a $k$-array of block sequences in $Y$ that generate a joint spreading model $(1+\varepsilon)$-equivalent to the basis of $\ell_\infty^k(\ell_1)$.
\end{itemize}
In particular, $X_{\mathbf{iw}}$ does not contain an asymptotic-$\ell_1$ subspace.
\end{thm}
\begin{proof}
Let $(x_j^{(i)})_j$, $i\in\mathbb{N}$, be the infinite array given by Proposition \ref{omega joint spreading models}, for some fixed $\varepsilon >0$. Then, it easily follows that this infinite array generates an asymptotic model $(1+\varepsilon)$-equivalent to the unit vector basis of $c_0$. This is because the asymptotic model is witnessed by taking one vector from each sequence. It is entirely immediate from the definition of joint spreading models that the first $k$ sequences in the array generate a joint spreading model $(1+\varepsilon)$-equivalent to the basis of $\ell_\infty^k(\ell_1)$.
\end{proof}
\begin{cor}
\label{hereditary fbr}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}$. Every 1-unconditional basic sequence is finitely block representable in $Y$. In fact, for every $k\in\mathbb{N}$ every $k$-dimensional space with a 1-unconditional basis is an asymptotic space for $Y$, in the sense of \cite{MMT}.
\end{cor}
\begin{proof}
By Proposition \ref{universal for unc} it is sufficient to show that the sequence $(e_{i,j})_{i,j=1}^n$ mentioned in the statement of that result, with the lexicographical order, is an asymptotic space for $Y$. Fix $\varepsilon >0$ and let $(x_j^{(i)})_j$, $i\in\mathbb{N}$, be the infinite array given by Proposition \ref{omega joint spreading models}. It is an easy observation that for a sufficiently sparsely chosen strict plegma $(s_j)_{j=1}^n$ in $[\mathbb{N}]^n$ the sequence $(x^{(j)}_{s_j(i)})_{i,j=1}^n$ is a block sequence in the lexicographical order. Moreover, if $\min(s_1) \geq n$ then $(x^{(j)}_{s_j(i)})_{i,j=1}^n$ is $(1+\varepsilon)$-equivalent to $(e_{i,j})_{i,j=1}^n$.
\end{proof}
\begin{cor}
\label{krivine set and refl}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}$. Then $K(Y) = [1,\infty] \supsetneq \{1\} = \widetilde K(Y)$. Furthermore, $\ell_1$ and $c_0$ do not embed into $X_{\mathbf{iw}}$, hence $X_{\mathbf{iw}}$ is reflexive.
\end{cor}
Reflexivity and Proposition \ref{unique 1 or 0 am} yield the following (see Definition \ref{defas}).
\begin{cor}
The space $X_{\mathbf{iw}}$ is asymptotically symmetric.
\end{cor}
\begin{rem}
The construction of $X_{\mathbf{iw}}$ can be modified to obtain, for any $1\leq p <\infty$, a Banach space $X_\mathbf{iw}^p$. One takes a norming set $W_\mathbf{iw}^p$ so that for any $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible sequence of very fast growing functionals $f_1<\cdots<f_d$ and any $(c_q)_{q=1}^d$ in the unit ball of $\ell_{p^*}$ the functional $f = (1/m_{j_1}\cdots m_{j_l})\sum_{q=1}^dc_qf_q$ is in $W_\mathbf{iw}^p$ as well. It is completely natural to expect that similar techniques will yield that this space has a unique and uniform $\ell_p$ spreading model, $c_0$ is an asymptotic model of every subspace, and the Krivine set of every subspace of $X_\mathbf{iw}^p$ is $[p,\infty]$. This modification does not apply to the case $p=\infty$. To obtain a space with a unique and uniform $c_0$ spreading model without an asymptotic-$c_0$ subspace we must look at the dual of $X_{\mathbf{iw}}$, and this is the subject of Section \ref{dual section}.
\end{rem}
\section{The spaces $X_{\mathbf{iw}}^p$, $1<p<\infty$}
We describe how the construction of $X_{\mathbf{iw}}$ can be modified to obtain a space with a uniformly unique $\ell_p$-spreading model, where $1<p<\infty$, and a $c_0$-asymptotic model in every subspace. We give the steps that need to be followed in order to reach the conclusion, but we omit most proofs as they are in the spirit of those for $X_{\mathbf{iw}}$.
We fix a $p\in(1,\infty)$ and we denote by $p^*$ its conjugate. Given a subset $G$ of $c_{00}(\mathbb{N})$, $j_1,\ldots,j_l\in\mathbb{N}$, real numbers $(\lambda_q)_{q=1}^d$ with $\sum_{q=1}^d|\lambda_q|^{p^*}\leq 1$, and $f_1 < \cdots < f_d$ in $G$ that are $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible we call a functional of the form
\begin{equation*}
f = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q = 1}^d \lambda_q f_q
\end{equation*}
a weighted functional of $G$ of weight $w(f) = m_{j_1}\cdots m_{j_l}$ and vector weight $\vec w(f) = (j_1,\ldots,j_l)$. For all $i\in\mathbb{N}$, we also call $f = \pm e_i^*$ a weighted functional of weight $w(f) = \infty$. We define very fast growing sequences as in Section \ref{definition section}. We then let $W_\mathbf{iw}^p$ be the smallest subset of $c_{00}(\mathbb{N})$ that satisfies the following two conditions.
\begin{itemize}
\item[(i)] $\pm e_i^*$ is in $W_\mathbf{iw}^p$ for all $i\in\mathbb{N}$ and
\item[(ii)] for every $j_1,\ldots,j_l\in\mathbb{N}$, real numbers $(\lambda_q)_{q=1}^d$ with $\sum_{q=1}^d|\lambda_q|^{p^*}\leq 1$, and every $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible and very fast growing sequence of weighted functionals $(f_q)_{q=1}^d$ in $W_\mathbf{iw}^p$ the functional $$f = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{q=1}^d\lambda_qf_q$$ is in $W_\mathbf{iw}^p$.
\end{itemize}
Set $X_{\mathbf{iw}}^p$ to be the space defined by this norming set.
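Explicitly, as with $X_{\mathbf{iw}}$, this means that the norm is given by
\[\|x\| = \sup\left\{f(x): f\in W_\mathbf{iw}^p\right\},\quad x\in c_{00}(\mathbb{N}),\]
and $X_{\mathbf{iw}}^p$ is the completion of $c_{00}(\mathbb{N})$ under this norm.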
The following is similar to \cite[Proposition 2.9]{DM} and \cite[Proposition 4.2]{BFM}. We give a short proof.
\begin{prop}
\label{upper p}
Let $(x_i)_{i=1}^n$ be a normalized block sequence in $X_{\mathbf{iw}}^p$. Then for any scalars $a_1,\ldots,a_n$ we have
\begin{equation}
\label{always upper ellp}
\left\|\sum_{i=1}^na_ix_i\right\| \leq 2 \left(\sum_{i=1}^n|a_i|^p\right)^{1/p}
\end{equation}
\end{prop}
\begin{proof}
This is proved by induction on $m$, where $W_\mathbf{iw}^p = \cup_{m=0}^\infty W_m$. Assume that for every $f\in W_m$, every sequence of normalized block vectors $x_1<\cdots<x_n$, and all scalars $c_1,\ldots,c_n$ with $(\sum_j|c_j|^p)^{1/p}\leq 1$ we have $|f(c_1x_1+\cdots +c_nx_n)| \leq 2$. Let now $f = (1/m_{j_1}\cdots m_{j_l})\sum_{q=1}^d\lambda_qf_q$ be in $W_{m+1}$ with $f_1,\ldots,f_d\in W_m$, $(x_j)_{j=1}^n$ be a normalized block sequence, and $(c_j)_{j=1}^n$ be scalars with $(\sum_j|c_j|^p)^{1/p}\leq 1$. Set $x = \sum_{j=1}^nc_jx_j$. Define the sets
\[
\begin{split}
D_j &= \{i: \supp (f_i)\cap \supp(x_j)\neq\varnothing\}, \text{ for }j=1,\ldots,n\\
E_j &= \{i\in D_j: j = \min\{j': i\in D_{j'}\}\},\text{ for }j=1,\ldots,n,\\
F_j &= D_j\setminus E_j, \text{ for }j=1,\ldots,n,\text{ and }\\
G_i &= \{j: i\in F_j\},\text{ for }i=1,\ldots,d.
\end{split}
\]
Observe that the sets $(E_j)_{j=1}^n$ are pairwise disjoint and the sets $(G_i)_{i=1}^d$ are pairwise disjoint as well. For $j=1,\ldots,n$ set $\Lambda_j = (\sum_{i\in E_j}|\lambda_i|^{p^*})^{1/p^*}$ and for $i=1,\ldots,d$ set $C_i = (\sum_{j\in G_i}|c_j|^p)^{1/p}$. Then,
\[
\begin{split}
|f(x)| &= \left|\sum_{j=1}^nc_j\Lambda_j\!\!\left(\frac{1}{m_{j_1}\cdots m_{j_l}}\!\sum_{i\in E_j}\frac{\lambda_i}{\Lambda_j}f_i\right)\!\!(x_j) + \frac{1}{m_{j_1}\cdots m_{j_l}}\!\sum_{j=1}^nc_j\!\!\sum_{i\in F_j}\lambda_if_i(x_j)\right|\\
&\leq \left(\sum_{j=1}^n|c_j|^p\right)^{1/p}\!\!\!\left(\sum_{j=1}^n\Lambda_j^{p^*}\right)^{1/p^*}\!\!\!\! + \frac{1}{2}\sum_{i=1}^d|\lambda_i|\left|f_i\left(\sum_{j\in G_i}c_jx_j\right)\right|\\
&\leq 1+ \frac{1}{2}\sum_{i=1}^d|\lambda_i|2C_i\leq 1 + \left(\sum_{i=1}^d|\lambda_i|^{p^*}\right)^{1/p^*}\!\!\!\!\left(\sum_{i=1}^dC_i^p\right)^{1/p}\!\!\!\leq 2.
\end{split}
\]
\end{proof}
The proof of the following proposition is practically identical to the proof of Proposition \ref{uniform ell1}.
\begin{prop}
\label{uniform ellp}
Let $(x_i)_i$ be a normalized block sequence in $X_{\mathbf{iw}}^p$. Then there exists $L\in[\mathbb{N}]^\infty$ so that for every $j_0\in\mathbb{N}$, every $F\subset L$ with $(x_i)_{i\in F}$ being $\mathcal{S}_{n_{j_0}}$-admissible, and every scalars $(c_i)_{i\in F}$ we have
\begin{equation*}
\left\|\sum_{i\in F}c_ix_i\right\| \geq \frac{1}{2m_{j_0}}\left(\sum_{i\in F}|c_i|^p\right)^{1/p}.
\end{equation*}
In particular, every normalized block sequence in $X_{\mathbf{iw}}^p$ has a subsequence that generates a spreading model that is 8-equivalent to the unit vector basis of $\ell_p$.
\end{prop}
The auxiliary spaces are each defined via a collection of norming sets $W^{p,N}_{\mathrm{aux}}$, $N\in\mathbb{N}$. For each $N\in\mathbb{N}$ the set $W^{p,N}_{\mathrm{aux}}$ contains all
\[f = \frac{2^{1/p^*}}{m_{j_1}\cdots m_{j_l}}\sum_{q=1}^d\lambda_qf_q,\]
where $(f_q)_{q=1}^d$ is a sequence of $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}\ast\mathcal{A}_3$-admissible functionals in $W^{p,N}_{\mathrm{aux}}$ so that for $q\geq 2$ we have $w(f_q) > N$, and $(\lambda_q)_{q=1}^d$ satisfy $\sum_{q=1}^d|\lambda_q|^{p^*}\leq 1$. The factor $2^{1/p^*}$ is necessary to prove the basic inequality and it also appears in \cite[Section 3]{DM}.
Recall from \cite[Section 3]{DM} that a vector $x = \sum_{i\in F}a_ie_i$ is called an $(n,\varepsilon)$ basic special $p$-convex combination (or basic s.$p$-c.c.) if $a_i\geq 0$, for $i\in F$, and $\sum_{i\in F}a_i^pe_i$ is an $(n,\varepsilon^p)$ basic s.c.c. The proof of the following is in the spirit of the proofs of Lemma \ref{basis on basis} and Lemma \ref{upper auxiliary}.
\begin{lem}
\label{p-upper auxiliary}
Let $\delta>0$. Then there exists $M\in\mathbb{N}$ so that for any $k\in\mathbb{N}$, any pairwise different natural numbers $(t_i)_{i=1}^k$ with $t_i\geq M$, any $l\in\mathbb{N}$, and any $\varepsilon>0$, there exists $N\in\mathbb{N}$ so that for any vectors $(x_{i,j})_{1\leq i \leq k, 1\leq j\leq l}$ of the form
\begin{equation}
x_{i,j} = \frac{m_{t_i}}{2^{1/p^*}}\tilde x_{i,j},\text{ where } \tilde x_{i,j} = \sum_{r\in F_{i,j}}c_r^{i,j}e_r \text{ is an } (n_{t_i},\varepsilon) \text{ basic s.$p$-c.c.,}
\end{equation}
$1\leq i \leq k, 1\leq j\leq l,$ any scalars $(a_{i,j})_{1\leq i \leq k, 1\leq j\leq l}$, and any $f\in W^{p,N}_{\mathrm{aux}}$ we have
\begin{equation}
\label{upper auxiliary eq}
\left|f\left(\sum_{j=1}^l\sum_{i=1}^ka_{i,j}x_{i,j}\right)\right| \leq (1+\delta)\max_{1\leq i\leq k}\left(\sum_{j=1}^l|a_{i,j}|^p\right)^{1/p}.
\end{equation}
\end{lem}
RIS are defined exactly as in Definition \ref{definition ris}. The basic inequality is slightly different from Proposition \ref{basic inequality}.
\begin{prop}
\label{p-basic inequality}
Let $(x_i)_{i\in I}$ be a $(C,(j_i)_{i\in I})$-RIS, $(a_i)_{i\in I}$ be a sequence of scalars, and $N < \min\{m_{j_{\min(I)}}, \min\supp(x_{\min(I)})\}$ be a natural number. Then, for every $f\in W_\mathbf{iw}^p$ there exist $h\in\{\pm e_i^*:i\in\mathbb{N}\}\cup\{0\}$, $g\in W_\mathrm{aux}^{p,N}$ with $w(f) = w(g)$, and $\lambda$, $\mu$ with $|\lambda|^{p^*} + |\mu|^{p^*} \leq 1$, so that if $t_i = \max\supp(x_i)$ for $i\in I$ then we have
\begin{equation}
\label{this is the basic inequality in the flesh}
\left|f\left(\sum_{i\in I}a_i x_i\right)\right| \leq C\left(1+\frac{1}{\sqrt{m_{j_{i_0}}}}\right)\left|(\lambda h + \mu g)\left(\sum_{i\in I}a_ie_{t_i}\right)\right|.
\end{equation}
\end{prop}
Using Proposition \ref{upper p} and Proposition \ref{uniform ellp} one can perform an argument similar to that in the proof of Proposition \ref{very good ell1 vectors} to show that every block sequence in $X_{\mathbf{iw}}^p$ has a further block sequence, each term of which has norm at least $1-\delta$, that is a $(2+\varepsilon)$-RIS. The next result is similar to Proposition \ref{omega joint spreading models}.
\begin{prop}
\label{p omega joint spreading models}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}^p$. Then there exists an array of block sequences $(x_j^{(i)})_j$, $i\in\mathbb{N}$, in $Y$ so that for any $k,l\in\mathbb{N}$, scalars $(a_{i,j})_{1\leq i\leq k, 1\leq j\leq l}$, and plegma family $(s_i)_{i=1}^k$ in $[\mathbb{N}]^l$ with $\min(s_1) \geq \max\{k,l\}$ we have
\begin{equation}
\label{omega equation}
\frac{1}{2^{1/p^*}}\max_{1\leq i\leq k}\left(\sum_{j=1}^l|a_{i,j}|^p\right)^{1/p}\!\!\!\!\! \leq \left\|\sum_{i=1}^k\sum_{j=1}^la_{i,j}x_{s_i(j)}^{(i)}\right\| \leq 3\max_{1\leq i\leq k}\left(\sum_{j=1}^l|a_{i,j}|^p\right)^{1/p}\!\!\!\!.
\end{equation}
\end{prop}
The main result of this section follows in the same manner as Theorem \ref{c0 asmodel}.
\begin{thm}
\label{c0 asmodel p}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}^p$.
\begin{itemize}
\item[(a)] There exists an array of block sequences in $Y$ that generates an asymptotic model that is $6$-equivalent to the unit vector basis of $c_0$.
\item[(b)] For every $k\in\mathbb{N}$ there exists a $k$-array of block sequences in $Y$ that generates a joint spreading model $6$-equivalent to the basis of $\ell_\infty^k(\ell_p)$.
\end{itemize}
In particular, $X_{\mathbf{iw}}^p$ does not contain an asymptotic-$\ell_p$ subspace.
\end{thm}
It is not true that all unconditional bases are finitely block representable in every subspace of $X_{\mathbf{iw}}^p$. However, the following is true.
\begin{cor}
\label{krivine set p}
For every block subspace $Y$ of $X_{\mathbf{iw}}^p$ the Krivine set of $Y$ is $K(Y) = [p,\infty]$. In fact, for every $q\in[p,\infty]$ and every $k\in\mathbb{N}$ the unit vector basis of $\ell_q^k$ is an asymptotic space for $Y$.
\end{cor}
\begin{proof}
The inclusion $K(Y) \subset [p,\infty]$ is an immediate consequence of Proposition \ref{upper p}. To show the inverse inclusion we observe that by Theorem \ref{c0 asmodel p} (b) for every $n\in\mathbb{N}$ the sequence $(e_{i,j})_{i,j=1}^n$, with the lexicographical order, endowed with the norm
\[\left\|\sum_{i,j}a_{i,j}e_{i,j}\right\| = \max_{1\leq i\leq n}\left(\sum_{j=1}^n|a_{i,j}|^p\right)^{1/p}\]
is an asymptotic space for $Y$, up to a constant 6.
A proof similar to that of Proposition \ref{universal for unc} gives that for any $\varepsilon>0$, $k\in\mathbb{N}$, and $p\leq q\leq \infty$ there is $n\in\mathbb{N}$ so that the unit vector basis of $\ell_q^k$ is $(1+\varepsilon)$-block representable in $(e_{i,j})_{i,j=1}^n$. To see this one needs to use the fact that for $p<q<\infty$, if we set $r = (qp)/(q-p)$, then
\[\left(\sum_{i=1}^k|a_i|^q\right)^{1/q} = \sup\left\{\left(\sum_{i=1}^k|a_ib_i|^p\right)^{1/p}: \left(\sum_{i=1}^k|b_i|^r\right)^{1/r}\leq 1\right\}.\]
The above follows from a simple application of H\"older's inequality.
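Indeed, since $1/p = 1/q + 1/r$, H\"older's inequality with the conjugate exponents $q/p$ and $r/p$ yields
\[\left(\sum_{i=1}^k|a_ib_i|^p\right)^{1/p} \leq \left(\sum_{i=1}^k|a_i|^q\right)^{1/q}\left(\sum_{i=1}^k|b_i|^r\right)^{1/r},\]
which shows that the supremum is at most $(\sum_{i=1}^k|a_i|^q)^{1/q}$. Conversely, for $a\neq 0$ the choice
\[b_i = \frac{|a_i|^{q/r}}{\left(\sum_{i'=1}^k|a_{i'}|^q\right)^{1/r}},\quad 1\leq i\leq k,\]
satisfies $\sum_{i=1}^k|b_i|^r = 1$, and since $p + pq/r = q$ we obtain
\[\left(\sum_{i=1}^k|a_ib_i|^p\right)^{1/p} = \left(\sum_{i=1}^k|a_i|^q\right)^{1/p - 1/r} = \left(\sum_{i=1}^k|a_i|^q\right)^{1/q}.\]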
\end{proof}
\begin{rem}
Because $X_{\mathbf{iw}}^p$ has a uniformly unique $\ell_p$-spreading model the strong Krivine set of every block subspace of $X_{\mathbf{iw}}^p$ is the singleton $\{p\}$.
\end{rem}
\section{The space $X_{\mathbf{iw}}^*$}\label{dual section}
In this section we study the space $X_{\mathbf{iw}}^*$. We prove that every normalized block sequence in $X_{\mathbf{iw}}^*$ has a subsequence that generates a spreading model that is 4-equivalent to the unit vector basis of $c_0$. In addition, every block subspace of $X_{\mathbf{iw}}^*$ admits the unit vector basis of $\ell_1$ as an asymptotic model and hence $X_{\mathbf{iw}}^*$ does not have an asymptotic-$c_0$ subspace.
\begin{lem}
\label{vfg covnex}
Let $j_0\in\mathbb{N}$ and let $(g_k)_{k=1}^m$ be an $\mathcal{S}_{n_{j_0}}$-admissible sequence in $\mathrm{co}(W_\mathbf{iw})$ so that each $g_k$ has the form $g_k = \sum_{j=1}^{d_k} c_j^k f_j^k$, where $d_k\in\mathbb{N}$ and $f_j^k\in W_\mathbf{iw}$, for $1\leq k \leq m$, and so that
\[\min\{w(f_j^k):1\leq j\leq d_k\} > \max\supp(g_{k-1}), \text{ for } 2\leq k \leq m.\]
Then $(1/m_{j_0})\sum_{k=1}^mg_k$ is in $\mathrm{co}(W_\mathbf{iw})$.
\end{lem}
\begin{proof}
By repeating some entries we may assume that $d_k = d$ and $c_j^k = c_j$ for each $1\leq k\leq m$. That is, for each $1\leq k\leq m$, we may assume $g_k = \sum_{j=1}^dc_jf_j^k$, where perhaps some $f_j^k$'s are repeated and perhaps some are the zero functional. We can also assume that $\supp(f_j^k)\subset\supp(g_k)$, for $1\leq k\leq m$ and $1\leq j\leq d$. We conclude that for $1\leq j\leq d$ the sequence $(f^k_j)_{k=1}^m$ is an $\mathcal{S}_{n_{j_0}}$-admissible and very fast growing sequence in $W_\mathbf{iw}$, so $f_j = (1/m_{j_0})\sum_{k=1}^mf_j^k$ is in $W_\mathbf{iw}$. It follows that $(1/m_{j_0})\sum_{k=1}^mg_k = \sum_{j=1}^dc_jf_j$ is in $\mathrm{co}(W_\mathbf{iw})$.
\end{proof}
\begin{lem}
\label{same weight covnex}
Let $j_0\in\mathbb{N}$ and let $(g_k)_{k=1}^m$ be an $\mathcal{S}_{n_{j_0}}$-admissible sequence in $\mathrm{co}(W_\mathbf{iw})$, and assume the following: there is $(j_1,\ldots,j_l)\in\mathbb{N}^{<\infty}$ so that each $g_k$ has the form $g_k = \sum_{j=1}^{d_k} c_j^k f_j^k$, where $d_k\in\mathbb{N}$ and $f_j^k\in W_\mathbf{iw}$, for $1\leq k \leq m$, with $\vec w(f_j^k) = (j_1,\ldots,j_l)$, and so that, writing
\[
f_j^k = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{r\in F_j^k} h_r^{k,j},
\]
with $(h_r^{k,j})_{r\in F_j^k}$ being $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible and very fast growing, we have
\[\min\{w(h_r^{k,j}):r\in F_j^k\} > \max\supp(g_{k-1}), \text{ for } 2\leq k \leq m.\]
Then $(1/m_{j_0})\sum_{k=1}^mg_k$ is in $\mathrm{co}(W_\mathbf{iw})$.
\end{lem}
\begin{proof}
As in the proof of Lemma \ref{vfg covnex} we may assume that there are $d$ and $c_1,\ldots,c_d$ so that $g_k = \sum_{j=1}^{d} c_j f_j^k$ where perhaps some $f_j^k$'s are repeated and perhaps some are the zero functional. It follows that for fixed $1\leq j\leq d$ the sequence $((h_r^{k,j})_{r\in F_j^k})_{k=1}^m$ is $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}+n_{j_0}}$-admissible and very fast growing. This means that $f_j = (1/m_{j_1}\cdots m_{j_l}m_{j_0})\sum_{k=1}^m\sum_{r\in F_j^k}h_r^{k,j}$ is in $W_\mathbf{iw}$. We conclude that $(1/m_{j_0})\sum_{k=1}^mg_k = \sum_{j=1}^dc_jf_j$ is in $\mathrm{co}(W_\mathbf{iw})$.
\end{proof}
\begin{lem}
\label{getting rid of}
Let $(f_k)_k$ be a block sequence in $\mathrm{co}(W_\mathbf{iw})$ and let $\varepsilon > 0$. Then there exist $L\in[\mathbb{N}]^\infty$ and a sequence $(g_k)_{k\in L}$ in $\mathrm{co}(W_\mathbf{iw})$ with $\mathrm{supp}(g_k) \subset \mathrm{supp}(f_k)$ for all $k\in L$, so that for all $j_0\in\mathbb{N}$ and all $F\subset L$ such that $(f_k)_{k\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible we have that
\[\left\|\sum_{k\in F}\left(f_k - \frac{1}{2}g_k\right)\right\| \leq m_{j_0} + \varepsilon.\]
\end{lem}
\begin{proof}
Let each $f_k = \sum_{r\in F_k}c_r^kf_r^k$, where $f_r^k\in W_\mathbf{iw}$ and $\supp(f_r^k)\subset\supp(f_k)$ for all $r\in F_k$ and $k\in\mathbb{N}$. Without loss of generality we may assume that $\sum_{r\in F_k}c_r^k = 1$ for all $k\in\mathbb{N}$. Define
\[
\begin{split}
\mathbb{N}^{<\infty}_{N} &= \{\vec j = (j_1,\ldots,j_l)\in\mathbb{N}^{<\infty}: m_{j_1}\cdots m_{j_l}\leq N\},\\
F_{\vec j,k} &= \{r\in F_k: \vec w(f_r^k) = \vec j\},\quad \nu_{\vec j,k} = \sum_{r\in F_{\vec j,k}}c_r^k\text{ for all }\vec j\in\mathbb{N}^{<\infty}\text{ and }k\in\mathbb{N},\\
F_{N,k} &=\cup_{\vec j\in \mathbb{N}^{<\infty}_N}F_{\vec j,k}\text{ and }G_{N,k} = F_k\setminus F_{N,k},\text{ for all }k,N\in\mathbb{N}.
\end{split}
\]
By passing to a subsequence of $(f_k)_k$ we may assume that for all $\vec j\in\mathbb{N}^{<\infty}$ the limit $\nu_{\vec j} = \lim_k\nu_{\vec j,k}$ exists. Define $\lambda = \sum_{\vec j\in\mathbb{N}^{<\infty}} \nu_{\vec j}$, which is in $[0,1]$. Fix a sequence of positive real numbers $(\varepsilon_i)_i$, with $\sum_i\varepsilon_i<\varepsilon$, and recursively pick strictly increasing sequences $(k_i)_i$ and $(N_i)_i$ so that the following are satisfied:
\begin{subequations}
\begin{align}
&\left|\lambda - \sum_{\vec j \in \mathbb{N}^{<\infty}_{N_i}}\nu_{\vec j}\right| < \varepsilon_i/3\text{ and if }i>1\text{ then }N_{i} > \max\supp(f_{k_{i-1}}),\label{choose N}\\
\label{close enuffb}
&\sum_{\vec j\in \mathbb{N}^{<\infty}_{N_i}}\left|\nu_{\vec j,k_i} - \nu_{\vec j}\right|< \varepsilon_i/3.
\end{align}
Define then for each $i\in\mathbb{N}$ the number $\mu_i = \sum_{r\in G_{N_i,k_i}} c_r^{k_i}$ and note that \eqref{choose N} and \eqref{close enuffb} yield
\begin{equation}
\label{close enuffc}
\begin{split}
\left|\mu_i - (1-\lambda)\right|&= \left|\lambda - \sum_{\vec j\in\mathbb{N}_{N_i}^{<\infty}}\nu_{\vec j,k_i}\right|\\
&\leq \sum_{\vec j\in \mathbb{N}^{<\infty}_{N_i}}\left| \nu_{\vec j} - \nu_{\vec j,k_i}\right| +\!\!\!\!\!\! \sum_{\vec j\in\mathbb{N}^{<\infty}\setminus \mathbb{N}^{<\infty}_{N_i}}\nu_{\vec j}\\
&< \frac{2\varepsilon_i}{3}.
\end{split}
\end{equation}
\end{subequations}
For each $i\in\mathbb{N}$, using the convention $1/0 = 0$, define
\[f_{\vec j,k_i} = \sum_{r\in F_{\vec j,{k_i}}}\frac{c_r^{k_i}}{\nu_{\vec j,k_i}}f_r^{k_i}, \text{ for }\vec j\in\mathbb{N}^{<\infty}_{N_i}, \text{ and }f_{\mathrm{iw},k_i} = \sum_{r\in G_{N_i,k_i}}\frac{c_r^{k_i}}{\mu_i}f_r^{k_i}.\]
Clearly, all the above functionals are in $\mathrm{co}(W_\mathbf{iw})$ and a quick inspection reveals that
\begin{equation}
\label{true form}
f_{k_i} = \sum_{\vec j\in\mathbb{N}^{<\infty}_{N_i}}\nu_{\vec j,k_i}f_{\vec j,k_i} + \mu_i f_{\mathrm{iw},k_i}, \text{ with }\sum_{\vec j\in\mathbb{N}^{<\infty}_{N_i}}\nu_{\vec j,k_i} + \mu_i = 1.
\end{equation}
By \eqref{choose N} we observe that if $j_0\in\mathbb{N}$ and $F\subset\mathbb{N}$ is such that $(f_{k_i})_{i\in F}$ is $\mathcal{S}_{n_{j_0}}$ admissible, then by Lemma \ref{vfg covnex} we have that
\begin{equation}
\label{vfg part sums}
\frac{1}{m_{j_0}}\sum_{i\in F}f_{\mathrm{iw},k_i}\in\mathrm{co}(W_\mathbf{iw}).
\end{equation}
In the next step, for each $i\in\mathbb{N}$ and $\vec j\in\mathbb{N}^{<\infty}_{N_i}$, if $\vec j = (j_1,\ldots,j_l)$, write for each $r\in F_{\vec j,k_i}$
\[f_r^{k_i} = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{t=1}^{d_r^{k_i}} h_t^{r,i},\]
with $(h_t^{r,i})_{t=1}^{d_r^{k_i}}$ being $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible and very fast growing, and define $g_r^{k_i} = \frac{2}{m_{j_1}\cdots m_{j_l}}h_1^{r,i}$, which is in $\mathrm{co}(W_\mathbf{iw})$. Define for each $i\in\mathbb{N}$ and $\vec j\in\mathbb{N}^{<\infty}_{N_i}$ the functional
\[g_{\vec j, k_i} = \sum_{r\in F_{\vec j,{k_i}}}\frac{c_r^{k_i}}{\nu_{\vec j,k_i}}g_r^{k_i},\]
which is in $\mathrm{co}(W_\mathbf{iw})$ and make the following crucial observations:
\begin{equation}
\label{satisfies same weight lemma}
\begin{array}{c}
f_{\vec j,k_i} - \frac{1}{2}g_{\vec j,k_i} = \sum_{r\in F_{\vec j,{k_i}}}\frac{c_r^{k_i}}{\nu_{\vec j,k_i}}\left(f_r^{k_i} - \frac{1}{2}g_r^{k_i}\right),\\
f_r^{k_i} - \frac{1}{2}g_r^{k_i} = \frac{1}{m_{j_1}\cdots m_{j_l}}\sum_{t=2}^{d_r^{k_i}} h_t^{r,i},\\
\mbox{with $(h_t^{r,i})_{t=2}^{d_r^{k_i}}$ $\mathcal{S}_{n_{j_1}+\cdots+n_{j_l}}$-admissible and very fast growing so that}\\
\min\{w(h_t^{r,i}):2\leq t\leq d_r^{k_i}\} > \min\supp(f_{k_i}).
\end{array}
\end{equation}
Now, Lemma \ref{same weight covnex} and \eqref{satisfies same weight lemma} yield the following: if we fix $\vec j\in\mathbb{N}^{<\infty}$, $j_0\in\mathbb{N}$, and $F\subset\mathbb{N}$ such that $(f_{k_i})_{i\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible then, setting $F_{\vec j} = \{i\in F: \vec j\in \mathbb{N}^{<\infty}_{N_i}\}$, we have that
\begin{equation}
\label{same weight stuff inside}
\frac{1}{m_{j_0}}\sum_{i\in F_{\vec j}}\left(f_{\vec j,k_i} - \frac{1}{2}g_{\vec j,k_i}\right)\in\mathrm{co}(W_\mathbf{iw}).
\end{equation}
Having made this observation, we set for all $i\in\mathbb{N}$
\[g_{k_i} = \sum_{\vec j\in\mathbb{N}^{<\infty}_{N_i}}\nu_{\vec j}g_{\vec j,k_i},\]
which is in $\mathrm{co}(W_\mathbf{iw})$ and $\supp(g_{k_i})\subset\supp(f_{k_i})$.
We next wish to show that the conclusion is satisfied for $(g_{k_i})_{i\in\mathbb{N}}$. That is, if $j_0\in\mathbb{N}$ and $(f_{k_i})_{i\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible, then
\[\left\|\sum_{i\in F}\left(f_{k_i} - \frac{1}{2}g_{k_i}\right)\right\| \leq m_{j_0} + \varepsilon.\]
Define for each $i\in\mathbb{N}$ the functional
\[\tilde f_{k_i} = \sum_{\vec j\in\mathbb{N}^{<\infty}_{N_i}}\nu_{\vec j}f_{\vec j,k_i} + (1-\lambda) f_{\mathrm{iw},k_i},\]
which is in $\mathrm{co}(W_\mathbf{iw})$. By \eqref{close enuffb}, \eqref{close enuffc}, and \eqref{true form} we obtain $\|f_{k_i} - \tilde f_{k_i}\| <\varepsilon_i$. It is therefore sufficient to prove that, if $(f_{k_i})_{i\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible, then
\begin{equation}
\label{the goal}
f = \frac{1}{m_{j_0}}\sum_{i\in F}\left(\tilde f_{k_i} -
\frac{1}{2}g_{k_i}\right)\in\mathrm{co}(W_\mathbf{iw})
\end{equation}
because this will imply $\|f\| \leq 1$. The conclusion will then follow from a simple application of the triangle inequality. We are now ready to dissect $f$. Set $N_0 = \max_{i\in F} N_i$ and recall that, for each $\vec j\in\mathbb{N}^{<\infty}$, $F_{\vec j} = \{i\in F: \vec j\in\mathbb{N}^{<\infty}_{N_i}\}$. Write
\begin{align*}
f& = \frac{1}{m_{j_0}}\sum_{i\in F}\left(\left(\sum_{\vec j\in\mathbb{N}^{<\infty}_{N_i}}\nu_{\vec j}\left(f_{\vec j,k_i} - \frac{1}{2}g_{\vec j,k_i}\right)\right)+(1 - \lambda)f_{\mathrm{iw},k_i}\right)\\
&=\left(\sum_{\vec j\in\mathbb{N}_{N_0}^{<\infty}}\nu_{\vec j}\left(\frac{1}{m_{j_0}}\sum_{i\in F_{\vec j}}\left(f_{\vec j,k_i} - \frac{1}{2}g_{\vec j,k_i}\right)\right)\right) + (1-\lambda)\frac{1}{m_{j_0}}\sum_{i\in F}f_{\mathrm{iw},k_i}.
\end{align*}
Finally, by \eqref{vfg part sums} and \eqref{same weight stuff inside}, $f$ is a convex combination of elements of $\mathrm{co}(W_\mathbf{iw})$ and hence it is in $\mathrm{co}(W_\mathbf{iw})$.
\end{proof}
\begin{prop}
\label{subseq mT dual}
Let $(f_k)_k$ be a block sequence in the unit ball of $X_{\mathbf{iw}}^*$. Then for any $\varepsilon>0$ there exists $L\in[\mathbb{N}]^\infty$ so that for any $j_0\in\mathbb{N}$ and $F\subset \mathbb{N}$ with $(f_k)_{k\in F}$ being $\mathcal{S}_{n_{j_0}}$-admissible we have
\[\left\|\sum_{k\in F}f_k\right\| \leq 2m_{j_0}+\varepsilon.\]
\end{prop}
\begin{proof}
By reflexivity we have that the unit ball of $X_{\mathbf{iw}}^*$ is the closed convex hull of $W_\mathbf{iw}$. Actually, a compactness argument yields that every finitely supported vector in the unit ball of $X_{\mathbf{iw}}^*$ must be in $\mathrm{co}(W_\mathbf{iw})$. Set $(f_k^{(0)})_k = (f_k)_k$ and apply Lemma \ref{getting rid of} inductively to find infinite sets $L_1\supset L_2\supset\cdots\supset L_q\supset\cdots$ and, for each $q\in\mathbb{N}$, $(f_k^{(q)})_{k\in L_q}$ in $\mathrm{co}(W_\mathbf{iw})$ so that for all $j_0\in\mathbb{N}$ and $F\subset L_q$ with $(f_k^{(q-1)})_{k\in F}$ being $\mathcal{S}_{n_{j_0}}$-admissible we have that
\[\left\|\sum_{k\in F}\left(f_k^{(q-1)}-\frac{1}{2}f_k^{(q)}\right)\right\| \leq m_{j_0} + \frac{\varepsilon}{4}.\]
Pick $q_0\in\mathbb{N}$ with $1/2^{q_0-1} < \varepsilon/2$ and then pick an infinite subset $L = \{\ell_i:i\in\mathbb{N}\}$ of $L_{q_0}$ so that for all $q\geq q_0$ and $i\geq q$ we have $\ell_i\in L_q$. Let now $j_0\in\mathbb{N}$ and $F\subset L$ so that $(f_k^{(0)})_{k\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible. If $F = \{k_1,\ldots,k_N\}$, define for $q = 0,1,\ldots,q_0$ the set $F_q = \{k_1,\ldots,k_N\}$ and for $q = q_0+1,\ldots,N$ the set $F_q = \{k_q,\ldots,k_N\}$. Observe that $F_q\subset L_q$ and $(f_k^{(q-1)})_{k\in F_q}$ is $\mathcal{S}_{n_{j_0}}$-admissible. Then,
\[
\begin{split}
\left\|\sum_{k\in F}f_k^{(0)}\right\| &= \left\|\sum_{k\in F}f_k^{(0)} + \sum_{q=1}^N\frac{1}{2^q}\sum_{k\in F_q}(f_k^{(q)} - f_k^{(q)})\right\|\\
&= \left\|\sum_{q=1}^N\left(\frac{1}{2^{q-1}}\sum_{k\in F_{q-1}}f_k^{(q-1)}-\frac{1}{2^q}\sum_{k\in F_q}f_k^{(q)}\right) + \frac{1}{2^N}f^{(N)}_{k_N}\right\|\\
&\leq \sum_{q=1}^{q_0}\frac{1}{2^{q-1}}\left\|\sum_{r=1}^N\left(f^{(q-1)}_{k_r} - \frac{1}{2}f^{(q)}_{k_r}\right)\right\|\\
&+ \sum_{q=q_0+1}^N\frac{1}{2^{q-1}}\left(\left\|f^{(q-1)}_{k_{q-1}}\right\| + \left\|\sum_{r=q}^N\left(f^{(q-1)}_{k_r} - \frac{1}{2}f^{(q)}_{k_r}\right)\right\|\right) + \frac{1}{2^N}\left\|f^{(N)}_{k_N}\right\| \\
&\leq \sum_{q=1}^N\frac{1}{2^{q-1}}\left(m_{j_0} + \frac{\varepsilon}{4}\right) + \sum_{q=q_0}^N\frac{1}{2^{q}} \leq 2m_{j_0} + \varepsilon.
\end{split}
\]
\end{proof}
\begin{cor}
Every normalized block sequence in $X_{\mathbf{iw}}^*$ has a subsequence that generates a spreading model 4-equivalent to the unit vector basis of $c_0$.
\end{cor}
\begin{proof}
Let $(f_k)_k$ be a normalized block sequence in $X_{\mathbf{iw}}^*$ and apply Proposition \ref{subseq mT dual}, for some $\varepsilon>0$, and relabel to assume that the conclusion holds for the whole sequence. By 1-unconditionality we deduce that for any $F\subset \mathbb{N}$ such that $(f_k)_{k\in F}$ is $\mathcal{S}_{n_1}$-admissible, the sequence $(f_k)_{k\in F}$ is $(2m_1+\varepsilon)$-equivalent to the unit vector basis of $c_0$. Recall that $m_1 = 2$ and $n_1 = 1$.
\end{proof}
Reflexivity of $X_{\mathbf{iw}}$, the above stated corollary, and Proposition \ref{unique 1 or 0 am} yield the next result.
\begin{cor}
The space $X_{\mathbf{iw}}^*$ is asymptotically symmetric.
\end{cor}
For $n\in\mathbb{N}$ we shall say that a finite block sequence $(f_k)_{k=1}^d$ in $X_{\mathbf{iw}}^*$ is maximally $\mathcal{S}_n$-admissible if $\{\min\supp(f_k):1\leq k\leq d\}$ is a maximal $\mathcal{S}_n$-set.
\begin{prop}
\label{very good c0 vectors}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}^*$. Then for every $n\in\mathbb{N}$ and $\delta>0$ there exists a sequence $(f_k)_{k=1}^d$ in $Y$ that is maximally $\mathcal{S}_n$-admissible with $\|f_k\| \geq 1$ for $k=1,\ldots,d$ and $\|\sum_{k=1}^df_k\| \leq 1+\delta$.
\end{prop}
\begin{proof}
The proof goes along the lines of the proof of Proposition \ref{very good ell1 vectors}. Start with a normalized block sequence $(f_i)_i$ in $Y$, to which we apply Proposition \ref{subseq mT dual}, and assume that the conclusion fails in the linear span of this sequence. We can then find for every $j\in\mathbb{N}$ with $j\geq n$ an integer $d_j$ with $n_j - n\leq d_jn\leq n_j$ and a set $F_j$ so that $(f_i)_{i\in F_j}$ is maximally $\mathcal{S}_{d_jn}$-admissible with
\[2m_j + \varepsilon\geq\left\|\sum_{i\in F_j}f_i\right\| \geq \left(1+\delta\right)^{d_j+1} \geq \left(1+\delta\right)^{n_j/n}.\]
This implies that $\limsup_j((1+\delta)^{1/n})^{n_j}/m_j\leq 2$ which contradicts the first property of the sequences $(m_j)_j$, $(n_j)_j$ (see Section \ref{definition section}).
\end{proof}
\begin{cor}
\label{dual RIS exists}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}^*$ and let $C>1$. Then there exist a block sequence $(y_n^*)_n$ in $Y$ and a block sequence $(y_n)_n$ in $X_{\mathbf{iw}}$ so that the following hold.
\begin{itemize}
\item[(i)] $1\leq\|y_n\|$ and $\|y_n^*\| \leq C$ for all $n\in\mathbb{N}$,
\item[(ii)] $\supp(y_n) = \supp(y_n^*)$ and $y_n^*(y_n) = 1$, and
\item[(iii)] $(y_n)_n$ is a $C$-RIS.
\end{itemize}
\end{cor}
\begin{proof}
Fix $C>1$ and apply Proposition \ref{very good c0 vectors} to find a block sequence $(w_n^*)_n$ so that for all $n\in\mathbb{N}$ we have $\|w^*_n\| \leq (1+\sqrt C)/2$, $\min\supp(w_n^*) \geq (6n)/(\sqrt C - 1)$, and $w^*_n$ is of the form $w^*_n = \sum_{i \in F_n}f_i$ with $f_i$ in $Y$, $\|f_i\| \geq 1$ for all $i\in F_n$, and $(f_i)_{i\in F_n}$ maximally $\mathcal{S}_n$-admissible. Pick for each $n\in\mathbb{N}$ and $i\in F_n$ a normalized vector $x_i$ with $\supp(x_i)\subset\supp(f_i)$ and $f_i(x_i) \geq 1$. For each $n\in\mathbb{N}$ we may perturb each vector $x_i$ to assume that $\supp(x_i) = \supp(f_i)$. By scaling we can ensure that all the aforementioned properties are retained, only perhaps increasing the upper bound of $\|w_n^*\|$ to $\|w_n^*\|\leq \sqrt C$.
Because, for each $n\in\mathbb{N}$, $(x_i)_{i\in F_n}$ is maximally $\mathcal{S}_n$-admissible, by \cite[Proposition 2.3]{AT}, we can find coefficients $(c_i)_{i\in F_n}$ so that the vector $w_n = \sum_{i\in F_n}c_ix_i$ is an $(n,\varepsilon)$-s.c.c. with $\varepsilon \leq 3/\min\supp(w_n^*) \leq (\sqrt C - 1)/(2n)$. By Proposition 6.5 we have that for every $f\in W_\mathbf{iw}$ with $\vec w(f) = (j_1,\ldots,j_l)$ and $n_{j_1}+\cdots+n_{j_l} < n$ the estimate $|f(w_n)| \leq \sqrt{C}/w(f)$ holds. It follows that $(w_n)_n$ has a subsequence $(w_{k_n})_n$ that is a $\sqrt{C}$-RIS.
Note that $w_{k_n}^*(w_{k_n}) = \sum_{i\in F_{k_n}}c_if_i(x_i) = 1$, hence $1\geq \|w_{k_n}\| \geq 1/\|w_{k_n}^*\| \geq 1/\sqrt{C}$. Thus, the sequence $(y_n)_n = (\sqrt{C}w_{k_n})_n$ is a $C$-RIS with $\|y_n\| \geq 1$ for all $n\in\mathbb{N}$ and the sequence $(y_n^*)_n = (w_{k_n}^*/\sqrt{C})_n$ satisfies $\|y_n^*\| \leq C$ and $y_n^*(y_n) = 1$ for all $n\in\mathbb{N}$.
\end{proof}
\begin{thm}
\label{dual omega joint}
Let $Y$ be a block subspace of $X_{\mathbf{iw}}^*$. Then $Y$ contains an array of normalized block sequences $(f_j^{(i)})_j$, $i\in\mathbb{N}$, that generates an asymptotic model equivalent to the unit vector basis of $\ell_1$.
\end{thm}
\begin{proof}
The proof of this result follows the proof of Proposition \ref{omega joint spreading models}. Fixing $\varepsilon>0$, choose $C>1$ and a sequence $(j_i)_i$ as in the aforementioned proof. Apply Corollary \ref{dual RIS exists} to find a $C$-RIS $(y_s)_s$ and a sequence $(y_s^*)_s$ in $Y$ with properties (i), (ii), and (iii) in the statement of that result. Pass to common subsequences, by applying Proposition \ref{subseq mT dual}, so that for any $j_0\in\mathbb{N}$ and any $F\subset \mathbb{N}$ such that $(y_s^*)_{s\in F}$ is $\mathcal{S}_{n_{j_0}}$-admissible we have $\|\sum_{s\in F}y_s^*\| \leq 3m_{j_0}$.
Following the proof of Proposition \ref{omega joint spreading models} define an array of block sequences $(x^{(i)}_j)_j$, $i\in\mathbb{N}$, that satisfies \eqref{omega equation}, so that each vector $x^{(i)}_j$ is of the form $x^{(i)}_j = m_{j_i}\sum_{s\in F_j^{(i)}}c_s^{i,j}y_s$, with $(y_s)_{s\in F_j^{(i)}}$ $\mathcal{S}_{n_{j_i}-1}$-admissible and $\sum_{s\in F_j^{(i)}}c_s^{i,j} = 1$. Also, the sets $(F_j^{(i)})_j$, $i\in\mathbb{N}$, are all pairwise disjoint. If we then define $f_j^{(i)} = (1/m_{j_i})\sum_{s\in F_j^{(i)}}y_s^*$, for $i,j\in\mathbb{N}$, we have that $\|f_j^{(i)}\| \leq 3$, $f_j^{(i)}(x_j^{(i)}) = 1$, and $f_j^{(i)}(x_{j'}^{(i')}) = 0$ if $(i,j)\neq(i',j')$. For every $n\leq j_1<\cdots<j_n$ the sequence $(x_{j_i}^{(i)})_{i=1}^n$ admits a $(1+\varepsilon)$-upper $c_0$-estimate, which yields that $(f_{j_i}^{(i)})_{i=1}^n$ admits a $1/(1+\varepsilon)$-lower $\ell_1$-estimate and is therefore $3(1+\varepsilon)$-equivalent to the unit vector basis of $\ell_1$.
\end{proof}
\begin{rem}
A slightly more careful version of the above proof yields that in every block subspace $Y$ of $X_{\mathbf{iw}}^*$, for every $m\in\mathbb{N}$, one can find an array $(f_j^{(i)})_j$, $1\leq i\leq m$, that generates a joint spreading model 3-equivalent to the unit vector basis of $\ell_1^m(c_0)$. It is not clear what the asymptotic spaces of $Y$ are. Although $\widetilde K(Y) = \{\infty\}$, all we know about the set $K(Y)$ is that $\{1,\infty\} \subset K(Y)$.
\end{rem}
\section{The space $\tilde{X}_{\mathbf{iw}}$}
\label{the other space}
The purpose of this section is to simplify the definition of the space $X_{\mathbf{iw}}$ to obtain a new space $\widetilde X_{\mathbf{iw}}$. This new space also has the property that every normalized block sequence in $\widetilde X_{\mathbf{iw}}$ has a subsequence generating a spreading model equivalent to the unit vector basis of $\ell_1$ without containing a subspace where all spreading models of normalized block sequences are uniformly equivalent to $\ell_1$.
\subsection{Definition of $\widetilde X_{\mathbf{iw}}$}
We simplify the definition of the norming set $W_\mathbf{iw}$ of $X_{\mathbf{iw}}$ by only considering functionals of the form $(1/m_j)\sum_{q=1}^d f_q$.
\begin{defn}
Let $\widetilde W_\mathbf{iw}$ be the smallest subset of $c_{00}(\mathbb{N})$ that satisfies the following two conditions.
\begin{itemize}
\item[(i)] $\pm e_i^*$ is in $\widetilde W_\mathbf{iw}$ for all $i\in\mathbb{N}$ and
\item[(ii)] for every $j\in\mathbb{N}$ and every $\mathcal{S}_{n_j}$ very fast growing sequence of weighted functionals $(f_q)_{q=1}^d$ in $\widetilde W_\mathbf{iw}$ the functional $$f = \frac{1}{m_j}\sum_{q=1}^df_q$$ is in $\widetilde W_\mathbf{iw}$.
\end{itemize}
We define a norm on $c_{00}(\mathbb{N})$ given by $\iii{x} = \sup\{f(x): f\in \widetilde W_\mathbf{iw}\}$ and we set $\widetilde X_{\mathbf{iw}}$ to be the completion of $(c_{00}(\mathbb{N}), \iii{\cdot})$.
\end{defn}
\begin{defn}
For each $j\in\mathbb{N}$ we define the norm $\|\cdot\|_{\ell_1,j}$ on $\ell_1(\mathbb{N})$ given by
\begin{equation}
\left\|\sum_{k=1}^\infty a_ke_k\right\|_{\ell_1,j} = \max\left\{\max_k |a_k|,\frac{m_{j}}{m_{j+1}}\sum_{k=1}^\infty|a_k|\right\}.
\end{equation}
\end{defn}
Clearly, this norm is equivalent to the usual norm of $\ell_1$; however, the equivalence is not uniform in $j\in\mathbb{N}$. This can be seen by taking, e.g., the vector $x_j = \sum_{k=1}^{m_{j+1}}e_k$, in which case $\|x_j\|_{\ell_1,j} = m_j$ whereas $\|x_j\|_{\ell_1} = m_{j+1}$. We will see that, for every $j\in\mathbb{N}$, every block subspace of $\widetilde X_{\mathbf{iw}}$ contains a block sequence that generates a spreading model isometrically equivalent to the unit vector basis of $\ell_1(\mathbb{N})$ endowed with $\|\cdot\|_{\ell_1,j}$.
\subsection{The auxiliary space for $\widetilde X_{\mathbf{iw}}$}
The auxiliary spaces are almost identical to those for the space $X_{\mathbf{iw}}$, the difference being the lack of the factors $1/2^l$.
\begin{defn}
For $N\in\mathbb{N}$ let $\widetilde W_\mathrm{aux}^N$ be the smallest subset of $c_{00}(\mathbb{N})$ that satisfies the following two conditions.
\begin{itemize}
\item[(i)] $\pm e_i^*$ is in $\widetilde W_\mathrm{aux}^N$ for all $i\in\mathbb{N}$ and
\item[(ii)] for every $j\in\mathbb{N}$ and every $\mathcal{S}_{n_j}\ast\mathcal{A}_3$ admissible sequence of $N$-sufficiently large auxiliary weighted functionals $(f_q)_{q=1}^d$ in $\widetilde W_\mathrm{aux}^N$ the functional $$f = \frac{1}{m_j}\sum_{q=1}^df_q$$ is in $\widetilde W_\mathrm{aux}^N$.
\end{itemize}
We define a norm $\iii{\cdot}_{\mathrm{aux},N}$ on $c_{00}(\mathbb{N})$ by defining for all $x\in c_{00}(\mathbb{N})$ the quantity $\iii{x}_{\mathrm{aux},N} = \sup\{f(x): f\in \widetilde W_\mathrm{aux}^N\}$.
\end{defn}
\begin{comment}
\begin{lem}
\label{upper first auxiliary second space}
Let $N$, $n$, $j_0\in\mathbb{N}$, $(\varepsilon_k)_{k=1}^n$ be a sequence of positive real numbers and $(x_{k})_{k=1}^n$ be vectors in $c_{00}(\mathbb{N})$ so that for each $1\leq k\leq n$ the vector $x_{k}$ is of the form
\begin{equation}
x_{k} = m_{j_0}\tilde x_{k},\text{ where } \tilde x_{k} = \sum_{r\in F_{k}}c_r^{k}e_r \text{ is a } (n_{j_0},\varepsilon_k) \text{ basic s.c.c.}
\end{equation}
Let also $G\in\mathcal{S}_{n_{j_0}}$ and $f = (1/m_{j_0})\sum_{s\in G}e_s^*$. Then, for any scalars $(a_{k})_{k=1}^n$ we have
\begin{equation}
\label{upper first auxiliary second space eq}
\left|f\left(\sum_{k=1}^na_{k}x_{k}\right)\right|\leq (1+\eta)\max_{1\leq k\leq n}|a_k|,
\end{equation}
for $\eta = (1/m_{j_0})\max\supp(x_{1})\sum_{k=2}^n\varepsilon_k$.
\end{lem}
\begin{proof}
Set $k_0 = \min\{k: \min(G)\leq \max\supp (x_k)\}$, if such a $k_0$ exists. As $G\in\mathcal{S}_{n_{j_0}}$ there are $d\leq\min(G)\leq\max\supp(x_{k_0})$ and $G_1<\cdots<G_d$ in $\mathcal{S}_{n_{j_0}-1}$ so that $G = \cup_{q=1}^dG_q$. If we set $f_q = (1/m_{j_0})\sum_{s\in G_q}e_s^*$ then for $1\leq q\leq d$ and $k_0 < k\leq n$ we have $|f_q(x_k)| \leq (1/m_{j_0})\varepsilon_k$ which yields
$$\left|f_q\left(\sum_{k>k_0}a_kx_k\right)\right| \leq \frac{1}{m_{j_0}}\max_{1\leq k\leq n}|a_k|\left(\sum_{k>k_0}\varepsilon_k\right).$$
It is easy to see that $|f(x_{k_0})|\leq 1$. Combine this with $d\leq \max\supp(x_{k_0})$ and the above inequality to obtain the desired result.
\end{proof}
\end{comment}
\begin{lem}
\label{upper auxiliary second space}
Let $n$, $j_0, N\in\mathbb{N}$ with $N\geq 2m_{j_0}$, $(\varepsilon_k)_{k=1}^n$ be a sequence of real numbers with $0<\varepsilon_k < 1/(6m_{j_0})$ for $1\leq k\leq n$ and $(x_{k})_{k=1}^n$ be vectors in $c_{00}(\mathbb{N})$ so that for each $1\leq k\leq n$ the vector $x_k$ is of the form
\begin{equation}
x_{k} = m_{j_0}\tilde x_{k},\text{ where } \tilde x_{k} = \sum_{r\in F_{k}}c_r^{k}e_r \text{ is a } (n_{j_0},\varepsilon_k) \text{ basic s.c.c.}
\end{equation}
Then, for any scalars $(a_{k})_{k=1}^n$ and $f\in\widetilde W_\mathrm{aux}^N$, we have
\begin{equation}
\label{upper auxiliary second space eq}
\left|f\left(\sum_{k=1}^na_{k}x_{k}\right)\right|\leq (1+\delta)\max\left\{\max_{1\leq k\leq n}|a_k|,\frac{m_{j_0}}{m_{j_0+1}}\sum_{k=1}^n|a_k|\right\}
\end{equation}
for any $\delta$ satisfying
\begin{equation}
\label{this long delta second space}
\delta \geq \max\left\{\frac{2m_{j_0+1}}{N},6\sum_{k=2}^n\max\supp(x_{k-1})\varepsilon_k,6m_{j_0}\sum_{k=2}^n\varepsilon_k\right\}.
\end{equation}
\end{lem}
\begin{proof}
We perform an induction on $m=0,1,\ldots$ to show that for all $f\in \widetilde W^N_m$ and for all $1\leq k\leq n$ we have $|f(x_k)|\leq 1$ as well as that \eqref{upper auxiliary second space eq} holds for $f$. The step $m=0$ is trivial so let $m\in\mathbb{N}$, assume that the inductive assumption holds for all $f\in\widetilde W_m^N$ and let $f\in\widetilde W_{m+1}^N\setminus\widetilde W_{m}^N$. Let $f = (1/m_j)\sum_{q=1}^df_q$ where $(f_q)_{q=1}^d$ is $\mathcal{S}_{n_j}$ admissible and $N$ sufficiently large. If $j > j_0$ then an elementary calculation yields $|f(x_k)| \leq m_{j_0}/m_{j_0+1}$ for $1\leq k\leq n$ and hence \eqref{upper auxiliary second space eq} easily follows. Therefore, we may assume that $j\leq j_0$.
Set $M_k = \max\supp(x_k)$ for $1\leq k\leq n$, $k_0 = \min\{k: \min\supp(f)\leq M_k\}$, if such a $k_0$ exists, and set $q_0 = \min\{q:\max\supp(f_q)\geq\min\supp(x_{k_0})\}$. For simplicity let us assume $q_0 = 1$. Set $\tilde f = (1/m_j)\sum_{q=2}^df_q$, $G = \{2\leq q\leq d: f_q = \pm e_i^* \text{ for some } i\in\mathbb{N}\}$, $D = \{2,\ldots,d\}\setminus G$, and
\begin{equation*}
g_1 = \frac{1}{m_j}\sum_{q\in G}f_q,\quad g_2 = \frac{1}{m_j}\sum_{q\in D}f_q.
\end{equation*}
As the sequence $(f_q)_{q=1}^d$ is $N$-sufficiently large we obtain $w(f_q) \geq N$ for all $q\in D$ which easily implies
\begin{equation}
\label{estimate g2 in this proof}
\begin{split}
\left|g_2\left(\sum_{k=k_0+1}^na_kx_k\right)\right| &\leq \frac{m_{j_0}}{m_j N}\sum_{k=k_0+1}^n|a_k| \leq \left(\frac{m_{j_0+1}}{2 N}\right)\frac{m_{j_0}}{m_{j_0+1}}\sum_{k=k_0+1}^n|a_k|\\
&\leq \frac{\delta}{4}\frac{m_{j_0}}{m_{j_0+1}}\sum_{k=k_0+1}^n|a_k|
\end{split}
\end{equation}
We now estimate the quantity $g_1(\sum_{k = k_0+1}^na_kx_k)$ and we distinguish cases depending on the relation of $m_j$ and $m_{j_0}$. We first treat the case $j = j_0$. As $\{\min\supp (f_q):1\leq q\leq d\}$ is in $\mathcal{S}_{n_{j_0}}\ast\mathcal{A}_3$, it follows that there exist $l\leq M_{k_0}$ and sets $G_1<\cdots<G_l$ in $\mathcal{S}_{n_{j_0}-1}\ast\mathcal{A}_3$ so that $G = \cup_{p=1}^lG_p$. If we set $h_p = (1/m_{j_0})\sum_{s\in G_p}f_s$, then for $1\leq p\leq l$ and $k_0 < k\leq n$ we have $|h_p(x_k)| \leq (1/m_{j_0})3\varepsilon_k$, which yields
\begin{equation*}
\begin{split}
\left|g_1\left(\sum_{k>k_0}a_kx_k\right)\right| &\leq \frac{1}{m_{j_0}}\sum_{p=1}^l\left|h_p\left(\sum_{k>k_0}a_kx_k\right)\right|\leq \frac{M_{k_0}}{m_{j_0}}\sum_{k>k_0}3\varepsilon_k\max_{k_0< k\leq n}|a_k|\\
&\leq \left(\frac{3}{2}\sum_{k=2}^nM_{k-1}\varepsilon_k\right)\max_{1\leq k\leq n}|a_k|.
\end{split}
\end{equation*}
In the second case $j<j_0$ and we use a simpler argument to show that
$$\left|g_1\left(\sum_{k=k_0+1}^na_kx_k\right)\right| \leq \frac{m_{j_0}}{m_j}\sum_{k=2}^n3\varepsilon_k\max_{k_0<k\leq n}|a_k| \leq \left(\frac{3m_{j_0}}{2}\sum_{k=2}^n\varepsilon_k\right)\max_{1\leq k\leq n}|a_k|.$$
We conclude that in either case we have
\begin{equation}
\label{this specific part when weight is at most that much}
\left|g_1\left(\sum_{k=k_0+1}^na_kx_k\right)\right| \leq \frac{\delta}{4}\max_{1\leq k\leq n}|a_k|.
\end{equation}
\begin{comment}
In the remaining case we have $j>j_0$ which easily yields
\begin{equation}
\label{when the weights are so much bigger}
\left|g_1\left(\sum_{k=k_0+1}^na_kx_k\right)\right| \leq \frac{m_{j_0}}{m_{j_0+1}}\sum_{k=k_0+1}^n|a_k|.
\end{equation}
\end{comment}
Before showing that $f$ satisfies \eqref{upper auxiliary second space eq}, we quickly show that $|f(x_k)| \leq 1$ for $1\leq k\leq n$ (there is a more classical proof that depends on the properties of the sequences $(m_j)_j$ and $(n_j)_j$; however, the constraints make the proof faster). If $j = j_0$ this is easy. Otherwise $j<j_0$ and arguments very similar to those above yield
\begin{equation*}
\begin{split}
|f(x_k)| &\leq \frac{1}{m_j}|f_1(x_k)| + |g_1(x_k)| + |g_2(x_k)| \leq \frac{1}{m_j} + \frac{m_{j_0}}{m_j}3\varepsilon_k + \frac{m_{j_0}}{m_jN}\\
&\leq \frac{1}{2} + \frac{1}{4} + \frac{1}{4} = 1.
\end{split}
\end{equation*}
Set $$L = \max\left\{\max_{1\leq k\leq n}|a_k|,\frac{m_{j_0}}{m_{j_0+1}}\sum_{k=1}^n|a_k|\right\}.$$
We now distinguish cases concerning the support of $f_1$ in relation to the support of $x_{k_0}$. If $\max\supp(f_1) > \max\supp(x_{k_0})$ then
\begin{equation*}
\begin{split}
\left|f\left(\sum_{k=1}^na_kx_k\right)\right| & \leq \frac{1}{m_{j_0}}\left|f_1\left(\sum_{k=1}^na_kx_k\right)\right| + \left|\left(g_1 + g_2\right)\left(\sum_{k=k_0+1}^na_kx_k\right)\right|\\
&\leq \frac{1}{m_{j_0}}(1+\delta)L + \frac{2\delta}{4}L \leq \left[\frac{1}{2} + \left(\frac{1}{2} + \frac{2}{4}\right)\delta\right]L\leq (1+\delta)L.
\end{split}
\end{equation*}
If $\max\supp(f_1) \leq \max\supp(x_{k_0})$ then
\begin{equation*}
\begin{split}
\left|f\left(\sum_{k=1}^na_kx_k\right)\right| & \leq \left|f\left(a_{k_0}x_{k_0}\right)\right| + \left|\left(g_1 + g_2\right)\left(\sum_{k=k_0+1}^na_kx_k\right)\right|\\
&\leq L + \frac{2\delta}{4}L \leq \left(1 + \frac{2\delta}{4}\right)L\leq (1+\delta)L.
\end{split}
\end{equation*}
The inductive step is complete and so is the proof.
\end{proof}
\subsection{The spreading models of $\widetilde X_{\mathbf{iw}}$}
We observe that all spreading models of normalized block sequences in $\widetilde X_{\mathbf{iw}}$ are equivalent to $\ell_1$, and we construct in every block subspace a block sequence that generates a spreading model equivalent to $\ell_1$ but with arbitrarily bad isomorphism constant.
\begin{prop}
\label{at least you have ell1 spreading model}
Let $(x_i)_i$ be a normalized block sequence in $\widetilde X_{\mathbf{iw}}$. Then there exist $L\in[\mathbb{N}]^\infty$ and $K_0\in\mathbb{N}\cup\{0\}$ so that for every $j,k\in\mathbb{N}$ with $k\leq n_j - K_0$, every $F\subset L$ with $(x_i)_{i\in F}$ $\mathcal{S}_k$-admissible, and all scalars $(c_i)_{i\in F}$ we have
\begin{equation*}
\left\|\sum_{i\in F}c_ix_i\right\| \geq \frac{1}{m_j}\sum_{i\in F}|c_i|.
\end{equation*}
In particular, every normalized block sequence in $\widetilde X_{\mathbf{iw}}$ has a subsequence that generates a spreading model equivalent to the unit vector basis of $\ell_1$.
\end{prop}
\begin{proof}
Take a sequence of functionals $(f_i)_i$ in $\widetilde W_\mathbf{iw}$ with $\ran(f_i)\subset \ran(x_i)$ and $f_i(x_i) = 1$ for all $i\in\mathbb{N}$. We distinguish two cases, namely the one in which $\limsup_kw(f_k)$ is finite and the one in which it is infinite.
We shall only treat the first case, as the second one is simpler and follows with $K_0 = 0$. By passing to an infinite subset of $\mathbb{N}$ and relabeling, there is $j_0\in\mathbb{N}$ with $w(f_i) = m_{j_0}$ for all $i\in\mathbb{N}$. Define $K_0 = n_{j_0}$. Write each $f_i$ as
\[f_i = \frac{1}{m_{j_0}}\sum_{q=1}^{d_i}f_q^i\]
with $(f_q^i)_{q=1}^{d_i}$ being $\mathcal{S}_{K_0}$-admissible and very fast growing. Arguing as in \eqref{uniform ell1 eq 1} it follows that for all $i$ we have $\sum_{q=2}^{d_i}f_q^i(x_i) \geq (1/2)m_{j_0}\geq 1$ and passing to a subsequence and relabeling we have that $((f_q^i)_{q=2}^{d_i})_i$ is very fast growing.
We can conclude that for any $j,k\in\mathbb{N}$ and any $F\subset \mathbb{N}$ so that $(x_i)_{i\in F}$ is $\mathcal{S}_k$-admissible with $k\leq n_j - K_0$, the sequence $((f_q^i)_{q=2}^{d_i})_{i\in F}$ is $\mathcal{S}_{n_j}$ admissible because $\mathcal{S}_k\ast\mathcal{S}_{K_0} = \mathcal{S}_{k+K_0}$ and $k + K_0 \leq n_j$. Hence, $f_F = (1/m_j)\sum_{i\in F}\sum_{q=2}^{d_i}f_q^i$ is in $\widetilde W_\mathbf{iw}$. This means that for any scalars $(c_i)_{i\in F}$ we have
\begin{equation}
\left\|\sum_{i\in F}c_ix_i\right\| = \left\|\sum_{i\in F}|c_i|x_i\right\| \geq f_F\left(\sum_{i\in F}|c_i|x_i\right) \geq \frac{1}{m_j}\sum_{i\in F}|c_i|.
\end{equation}
\end{proof}
\begin{prop}
\label{bad ell1 spreading models all over the place}
Let $Y$ be a block subspace of $\widetilde X_{\mathbf{iw}}$. Then for every $j_0\in\mathbb{N}$ there exists a sequence $(x_k)_k$ in $Y$ that generates a spreading model isometrically equivalent to the unit vector basis of $(\ell_1,\|\cdot\|_{\ell_1,j_0})$.
\end{prop}
Before proving the above statement we point out that RIS sequences in $\widetilde X_{\mathbf{iw}}$ are defined identically as in Definition \ref{definition ris}, and Proposition \ref{basic inequality} is also true by taking the set $\widetilde W_\mathrm{aux}^N$. Furthermore, all results of subsection \ref{section ris existence} are true for the space $\widetilde X_{\mathbf{iw}}$ and the proofs are very similar. In particular, Corollary \ref{building ris} is true in $\widetilde X_{\mathbf{iw}}$, and this is proved by using Proposition \ref{at least you have ell1 spreading model}.
\begin{proof}[Proof of Proposition \ref{bad ell1 spreading models all over the place}]
For a sequence of positive numbers $(C_k)_k$ decreasing strictly to one, apply Corollary \ref{building ris} to find a sequence $(y_i)_i$ in $Y$ so that for all $k\in\mathbb{N}$ the sequence $(y_i)_{i\geq k}$ is $(C_k,(j_i)_{i\geq k})$-RIS with $\|y_i\|\geq 1$ for all $i\in\mathbb{N}$ (this is possible via a minor modification of the proof of Corollary \ref{building ris} in which $\delta$ is replaced by $\delta_i$). Inductively build a sequence $(x_k)_k$ so that for all $k\in\mathbb{N}$ the vector $x_k$ is of the form $x_k = m_{j_0}\tilde x_k$, where $\tilde x_k = \sum_{i\in F_k}c_i^ky_i$ is a $(n_{j_0},\varepsilon_k/2)$ s.c.c. with $\varepsilon_{k+1} < (2^k\max\supp(x_k))^{-1}$ for all $k\in\mathbb{N}$. As in the proof of Proposition \ref{omega joint spreading models} we can find for all $k\in\mathbb{N}$ a sequence of very fast growing and $\mathcal{S}_{n_{j_0}}$ admissible functionals $(f_i)_{i\in F_k}$ in $\widetilde W_\mathbf{iw}$ with $\supp(f_i)\subset \ran(y_i)$ for all $i\in F_k$ so that if $f_k = (1/m_{j_0})\sum_{i\in F_k}f_i\in \widetilde W_\mathbf{iw}$ then $f_k(x_k) = 1$ and so that the sequence $((f_i)_{i\in F_k})_k$, enumerated in the obvious way, is very fast growing. We deduce that for all natural numbers $n \leq k_1<\cdots<k_n$ the functionals $((f_i)_{i\in F_{k_l}})_{l=1}^n$ are $\mathcal{S}_{n_{j_0}+1}$ admissible. This means that they are also $\mathcal{S}_{n_{j_0+1}}$ admissible, i.e., $f = (1/m_{j_0+1})\sum_{l=1}^n\sum_{i\in F_{k_l}}f_i = (m_{j_0}/m_{j_0+1})\sum_{l=1}^nf_{k_l}$ is in $\widetilde W_\mathbf{iw}$. We conclude that for any scalars $(a_l)_{l=1}^n$ we have
\begin{equation*}
\iii{\sum_{l=1}^na_lx_{k_l}} = \iii{\sum_{l=1}^n|a_l|x_{k_l}} \geq f\left(\sum_{l=1}^n|a_l|x_{k_l}\right) \geq \frac{m_{j_0}}{m_{j_0+1}}\sum_{l=1}^n|a_l|
\end{equation*}
and also
\begin{equation*}
\iii{\sum_{l=1}^na_lx_{k_l}} = \iii{\sum_{l=1}^n|a_l|x_{k_l}} \geq \max_{1\leq l\leq n}f_{k_l}\left(\sum_{l=1}^n|a_l|x_{k_l}\right) = \max_{1\leq l\leq n}|a_l|.
\end{equation*}
For the upper inequality, Proposition \ref{basic inequality} and Lemma \ref{upper auxiliary second space} imply that there is a null sequence of positive numbers $\delta_n$ so that for all natural numbers $n\leq k_1 <\cdots < k_n$ and scalars $(a_l)_{l=1}^n$ we have
\begin{equation*}
\iii{\sum_{l=1}^na_lx_{k_l}}\leq \left(1+\delta_n\right)\max\left\{\max_{1\leq l\leq n}|a_l|,\frac{m_{j_0}}{m_{j_0+1}}\sum_{l=1}^n|a_l|\right\}.
\end{equation*}
\end{proof}
\begin{rem}
It can be shown that the space $\widetilde X_{\mathbf{iw}}$ satisfies the conclusions of Theorem \ref{c0 asmodel} and Corollary \ref{hereditary fbr}. Note also that, unlike $\widetilde K(X_{\mathbf{iw}})$, the set $\widetilde K(\widetilde X_{\mathbf{iw}})$ contains $\{1,\infty\}$. It is unclear whether $\widetilde K(\widetilde X_{\mathbf{iw}})$ contains any $p$ in $(1,\infty)$.
\end{rem}
As was shown in Section \ref{dual section}, the space $X_{\mathbf{iw}}^*$ admits only the unit vector basis of $c_0$ as a spreading model. This is false for the space $\widetilde X_{\mathbf{iw}}^*$.
\begin{prop}
The space $\widetilde X_{\mathbf{iw}}^*$ admits spreading models that are not equivalent to the unit vector basis of $c_0$.\end{prop}
\begin{proof}
\cite[Proposition 3.2]{AOST} yields that if a space has the property that every spreading model generated by a normalized weakly null sequence in that space is equivalent to the unit vector basis of $c_0$, then there must exist a uniform constant $C$ so that this equivalence always holds with constant $C$. We point out that this conclusion only works for the special case $p=\infty$ and not for other $p$'s, because the unit vector basis of $c_0$ is minimal with respect to domination. By duality we would obtain that every spreading model generated by a normalized block sequence in $\widetilde X_{\mathbf{iw}}$ is $C$-equivalent to the unit vector basis of $\ell_1$. This would contradict the statement of Proposition \ref{bad ell1 spreading models all over the place}.
\end{proof}
\begin{comment}
As it is pointed out in the proof above $c_0$ spreading models are special. If a space admits only $c_0$ spreading models then this holds uniformly. We remind here another asymptotic result about $c_0$ from \cite{FOSZ}: if $X$ is a separable Banach space not containing $\ell_1$ and every asymptotic model generated by a weakly null array in $X$ is equivalent to the unit vector basis of $c_0$ then $X$ is an asymptotic-$c_0$ space. This means in particular that separating the asymptotic model and asymptotic space structures is not possible for $c_0$. To conclude our paper we reiterate a problem from \cite{HO}. The preceding discussion explains why $p=\infty$ is excluded.
\begin{problem}
Let $X$ be a Banach space and $1\leq p <\infty$ so that every asymptotic model generated by a weakly null array in $X$ is equivalent to the unit vector basis of $\ell_p$. Does $X$ contain an asymptotic-$\ell_p$-subspace?
\end{problem}
For $1<p<\infty$ there exist spaces with only as $\ell_p$ asymptotic models that are not asymptotic-$\ell_p$ but contain subspaces isomorphic to $\ell_p$ (see \cite{BLMS}).
\end{comment}
A classical result of Thom states that the topological signature of
the boundary of a compact manifold with boundary vanishes. Regarding the
signature as the index of an elliptic operator, Atiyah and Singer \cite{as}
generalized this vanishing to the so-called twisted signatures.
The cobordism invariance of the index, as this vanishing is known,
was the essential step in their first
proof of the index formula on closed manifolds.
Conversely, cobordism invariance follows from the index theorem
of \cite{as}.
On open manifolds a satisfactory index formula
is not available, and probably not reasonable to expect in
full generality. Such formulae in various particular cases are
given e.g., in \cite{aps1}, \cite{melaps} for manifolds with boundary,
in \cite{boris}, \cite{phind} for manifolds with fibered boundary, and in
\cite{loya}, \cite{in} for manifolds with corners in the sense of Melrose.
To advance in this direction, we believe it is important to understand
conditions which ensure the vanishing of the index, in particular cobordism
invariance, without using any index formula.
Direct proofs of the cobordism invariance of the index for first-order
differential operators on closed manifolds
were given e.g., in \cite{braver}, \cite{higson}, \cite{lesch},
\cite{nicol}, and also \cite[Theorem 1]{cii}. We have proposed in
\cite{cii} an extension of cobordism invariance to manifolds with
corners. The result states that the sum
of the indices on the hyperfaces vanishes, under suitable hypotheses.
All these results are partial, in that they only
apply to differential operators of a special type.
A well-known fact states that the index of ``geometrically defined''
operators is cobordism-invariant; but besides being vague,
this is also not true (look at the Gau\ss-Bonnet operator).
Only very recently, Carvalho \cite{carvalho,carvart} found a remarkable
$K$-theoretic statement of cobordism invariance of the index on open manifolds,
using the topological approach of \cite{as2}.
Here is a reformulation
of the main result of \cite{carvalho} specialized to closed manifolds:
\begin{theorem} \label{th2}
Let $M$ be the boundary of the compact manifold $X$ and $D$ an elliptic
pseudodifferential operator on $M$. The principal symbol of $D$ defines
a vector bundle over the sphere
bundle inside $T^*M\oplus\underline{\rz}$. If the class in $K^0(S(T^*M\oplus\underline{\rz}))$
of this bundle is the restriction of a class from
$K^0(S^*X)$ modulo $K^0(M)$, then $\mathrm{index}(D)=0$.
\end{theorem}
The missing details appear in Theorem \ref{kr}.
The aim of this note is to reprove Theorem \ref{th2} with analytic methods.
In order to make the proof likely to generalize to
open manifolds, we have made a point of avoiding
to use results from $K$-theory, e.g., Bott periodicity
and the index theorem. Our approach is
based on Theorem \ref{pt}, a statement about
the cusp calculus of pseudodifferential
operators of Melrose on the manifold with boundary $X$,
in the spirit of \cite{cii}.
Although they do not appear explicitly in the literature,
Carvalho's statement (in the closed manifold case) and its present
variant could be recovered from known results in $K$-theory and $K$-homology.
We would like to mention here only \cite[Prop. 3]{mepi}, whose arrow-theoretic
proof could be extended to pseudodifferential operators. For completeness, we
show in Section \ref{clo} how to retrieve Theorem \ref{th2} also
from the Atiyah-Singer index formula.
\subsection*{Acknowledgments} I am grateful to Paolo Piazza for useful
discussions, and to the anonymous referee for valuable
remarks which greatly improved Section \ref{kre}.
\section{Review of Melrose's cusp algebra} \label{rev}
In this section we recall the facts about the cusp algebra needed in the sequel.
For a full treatment of the cusp algebra we refer to \cite{meni96c} and
\cite{in}.
Let $X$ be a compact manifold with boundary $M$, and $x:X\to{\mathbb R}_+$ a
boundary-defining function. Choose a product decomposition
$M\times[0,\epsilon)\hookrightarrow X$.
A vector field $V$ on $X$ is called \emph{cusp} if
$dx(V)\in x^2\cC^{\infty}(X)$. The space of cusp vector fields forms a
Lie subalgebra ${}^c\mathcal{V}(X)\hookrightarrow \mathcal{V}(X)$ which is a finitely
generated projective $\cC^{\infty}(X)$-module; indeed, a local basis of ${}^c\mathcal{V}(X)$
is given by $\{x^2\partial_x, \partial_{y_j}\}$ where $y_j$
are local coordinates on $M$. Thus there exists a vector bundle ${}^cTX\to X$
such that ${}^c\mathcal{V}(X)=\cC^{\infty}(X,{}^cTX)$. Fix a Riemannian metric $g$
on $X\setminus M$ of the form $dx^2/x^4+h^M$ near $x=0$; it extends
to a metric on the fibers of ${}^cTX$ over $X$ and is
called a \emph{cusp metric} on $X$ (such $g$ is traditionally called an
\emph{exact} cusp metric).
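To make the terminology concrete, note (a standard observation, not needed in the sequel) that the change of variable $t = 1/x$ turns a collar of the boundary into a cylindrical end:

```latex
% With t = 1/x (so t \to \infty as x \to 0^+):
\begin{align*}
dt = -\frac{dx}{x^2}, \qquad
x^2\partial_x = -\partial_t, \qquad
\frac{dx^2}{x^4} + h^M = dt^2 + h^M .
\end{align*}
```

In particular, cusp vector fields have bounded length with respect to any cusp metric, and an exact cusp metric is, near the boundary, a product metric on the half-cylinder $[\epsilon^{-1},\infty)\times M$.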
The algebra $\cD_c(X)$ of (scalar) cusp differential operators is defined
as the universal enveloping algebra of ${}^c\mathcal{V}(X)$ over $\cC^{\infty}(X)$. In
a product decomposition as above, an operator in $\cD_c(X)$ of order $m$
takes the form
\begin{equation}\label{cudi}
P=\sum_{j=0}^m P_{m-j}(x) (x^2\partial_x)^j
\end{equation}
where $P_{m-j}(x)$ is a smooth family of differential operators
of order $m-j$ on $M$.
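For example (assuming, for simplicity, that $g$ is the exact product metric $dx^2/x^4+h^M$ with $h^M$ independent of $x$ near the boundary, and writing $\Delta$ for the nonnegative Laplacian), the Laplace operator of $g$ is a cusp differential operator of order $2$ in the form \eqref{cudi}:

```latex
\begin{equation*}
\Delta_g = -(x^2\partial_x)^2 + \Delta_{h^M}
\qquad \text{near } x = 0,
\end{equation*}
```

that is, $P_0(x) = -1$, $P_1(x) = 0$ and $P_2(x) = \Delta_{h^M}$ in the notation of \eqref{cudi}.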
\subsection{Cusp pseudodifferential operators} The operators in
$\cD_c(X)$ can be described alternately (see \cite{meni96c})
in terms of their Schwartz kernels.
Namely, there exists a manifold with corners $X^2_c$ obtained by blow-up from
$X\times X$, and a submanifold $\Delta_c$, such that $\cD_c(X)$ corresponds
to the space of distributions on $X^2_c$ which are classical conormal to
$\Delta_c$, supported on $\Delta_c$ and smooth at the boundary face of $X^2_c$
which meets $\Delta_c$. It is then a showcase application of Melrose's
program \cite{meicm} to construct a calculus of
pseudodifferential operators
$\Psi_{\!c}^\lambda(X)$, $\lambda\in{\mathbb C}$, in which $\cD_c(X)$ sits as the
subalgebra of differential operators (the symbols used in the definition
are classical of order $\lambda$). No extra difficulty appears
in defining cusp operators acting between sections of vector bundles over $X$.
By adjoining the multiplication
operators by $x^z$, $z\in{\mathbb C}$, we get a pseudodifferential
calculus with two complex indices
\[\Psi_{\!c}^{\lambda,z}(X,\mathcal{F},\mathcal{G}):=x^{-z}\Psi_{\!c}^\lambda(X,\mathcal{F},\mathcal{G})\]
such that $\Psi_{\!c}^{\lambda,z}(X,\mathcal{E},\mathcal{F})\subset \Psi_{\!c}^{\lambda',z'}(X,\mathcal{E},\mathcal{F})$
if and only if $\lambda'-\lambda\in{\mathbb N}$ and $z'-z\in{\mathbb N}$ (since we work
with classical symbols). Also,
\[\Psi_{\!c}^{\lambda,z}(X,\mathcal{G},\mathcal{H})\circ\Psi_{\!c}^{\lambda',z'}(X,\mathcal{F},\mathcal{G})\subset
\Psi_{\!c}^{\lambda+\lambda',z+z'}(X,\mathcal{F},\mathcal{H}).\]
The fixed cusp metric and a metric on $\mathcal{F}$ allow one to define the space of cusp
square-integrable sections $L^2_c(X,\mathcal{F})$.
By closure, cusp operators act on a scale of weighted Sobolev spaces
$x^\alpha H_c^\beta$: \[\Psi_{\!c}^{\lambda,z}(X,\mathcal{F},\mathcal{G})\times
x^\alpha H_c^\beta(X,\mathcal{F})\to x^{\alpha-\Re(z)}H_c^{\beta-\Re(\lambda)}(X,\mathcal{G}).\]
\subsection{Symbol maps}
There exists a natural surjective \emph{cusp principal symbol} map from
$\Psi_{\!c}^{\lambda}$ onto the space of homogeneous functions on
${}^cT^*X\setminus\{0\}$ of homogeneity $\lambda$, which extends the usual
principal symbol map over the interior of $X$:
\[\sigma:\Psi_{\!c}^\lambda(X,\mathcal{E},\mathcal{F})\to \cC^{\infty}_{[\lambda]}({}^cT^*X,\mathcal{E},\mathcal{F}).\]
In the sequel we refer to $\sigma$ as the principal symbol map.
A cusp operator is called \emph{elliptic} if
its (cusp) principal symbol is invertible on ${}^cT^*X\setminus\{0\}$.
\begin{definition1}[\cite{meleta}]
Let $\Psi_{\mathrm{sus}}^\lambda(M,\mathcal{E},\mathcal{F})$ be the space of classical pseudodifferential
operators $P$ of order $\lambda\in{\mathbb C}$ from $\mathcal{E}$ to $\mathcal{F}$ which are
translation invariant, and such that the convolution kernel
$\kappa_P(x,y_1,y_2)$ (which is smooth for $x\neq 0$)
decays rapidly as $|x|$ tends to infinity.
\end{definition1}
Under partial Fourier transform in the variable $x$, $\Psi_{\mathrm{sus}}^\lambda(M,\mathcal{E},\mathcal{F})$
is identified with the space
of families of operators on $M$ with one real parameter $\xi$, with joint
symbolic behavior in $\xi$ and in the cotangent variables of $T^*M$.
The second symbol map
is a surjection $I_M:\Psi_{\!c}^{\lambda}(X,\mathcal{E},\mathcal{F})\to
\Psi_{\mathrm{sus}}^\lambda(M,\mathcal{E},\mathcal{F})$, called the \emph{indicial family} map
\cite{meleta}. If $P$ is given by Eq.\ \eqref{cudi} near $x=0$, then
\[I_M(P)(\xi)=\sum_{j=0}^m P_{m-j}(0) (i\xi)^j.\]
The principal symbol map and the indicial family are star-morphisms,
i.e., they are multiplicative and commute with taking adjoints.
Elliptic cusp operators whose indicial family is invertible for each $\xi\in{\mathbb R}$
are called fully elliptic. Being fully elliptic is equivalent to being
Fredholm (see \cite{melaps}).
Let $L^\lambda:=\{(U,\alpha)\in\Psi_{\mathrm{sus}}^\lambda(M,\mathcal{E},\mathcal{F})\times
\cC^{\infty}_{[\lambda]}({}^cT^*X,\mathcal{E},\mathcal{F}); \sigma(U)=\alpha_{|x=0}\}$.
It is proved in \cite{meni96c} that the joint symbol map
\begin{equation}\label{jss}
(\sigma_\lambda,I_M):\Psi_{\!c}^{\lambda}(X,\mathcal{E},\mathcal{F})\to L^\lambda
\end{equation}
is surjective.
\subsection{Analytic families of cusp operators}\label{afoco}
Let $Q\in\Psi_{\!c}^{1,0}(X,\mathcal{E})$ be a positive fully elliptic cusp operator of order
$1$. Then the complex powers $Q^\lambda$ form an analytic family of
cusp operators of order $\lambda$.
Let ${\mathbb C}^2\ni(\lambda,z)\mapsto P(\lambda,z)\in\Psi_{\!c}^{\lambda,z}(X,\mathcal{E})$
be an analytic family in two complex variables. Then $P(\lambda,z)$
is trace-class on $L^2_c(X,\mathcal{E})$ for $\Re(\lambda)<-\dim(X)$ and $\Re(z)<-1$.
Moreover, $(\lambda,z)\mapsto\mathrm{Tr}(P(\lambda,z))$ is analytic,
extends to ${\mathbb C}^2$ meromorphically with at most simple poles
in each variable at $\lambda\in{\mathbb N}-\dim(X)$, $z\in{\mathbb N}-1$, and
\begin{equation}\label{trz}
\mathrm{Res}_{z=-1}\mathrm{Tr}(P(\lambda,z))=\frac{1}{2\pi}\int_{\mathbb R} \mathrm{Tr}({I_M}(x^{-1}P(\lambda,-1)))
d\xi.
\end{equation}
This identity is the content of \cite[Prop.\ 3]{cii}.
\section{Cobordism invariance for cusp operators}\label{cobcus}
This section extends a result from \cite{cii} to
pseudodifferential operators, in a form which can be applied to $K$-theory.
We use the same line of proof, with some extra technical difficulties.
A similar extension from the differential to the
pseudodifferential case appears
in \cite{kso} when computing the $K$-theory of the algebra $\Psi_{\mathrm{sus}}^0(M)$.
\begin{theorem}\label{pt}
Let $X$ be a compact manifold with boundary $\partial X=M$, and
\[D:\cC^{\infty}(M,\mathcal{E}^+)\to\cC^{\infty}(M,\mathcal{E}^-)\]
a classical pseudodifferential operator of order $1$ on $M$.
Assume that there exist hermitian vector
bundles $V^+,V^-\to M$, $\mathcal{G}\to X$ with
$\mathcal{G}|_M=\mathcal{E}^+\oplus\mathcal{E}^-\oplus V^+\oplus V^-$, and
an elliptic symmetric cusp pseudodifferential
operator $A\in\Psi_{\!c}^{1,0}(X,\mathcal{G})$ such that
\begin{equation}\label{noua}
{I_M}(A)(\xi)=\left[\begin{smallmatrix}
\xi&{\tilde{D}}^*(\xi)&&\\ {\tilde{D}}(\xi)&-\xi &&\\
&&(1+\xi^2+\Delta^+)^{\frac12}&\\&&&
-(1+\xi^2+\Delta^-)^{\frac12}
\end{smallmatrix}\right]
\end{equation}
where $\Delta^+,\Delta^-$ are connection Laplacians on $V^+,V^-$,
${\tilde{D}}\in\Psi_{\mathrm{sus}}^1(M,\mathcal{E}^+,\mathcal{E}^-)$
and ${\tilde{D}}(0)=D$.
Then $\mathrm{index}(D)=0$.
\end{theorem}
\begin{proof}
We first show that we can assume without loss of generality that
$D$ is either injective or surjective. Assuming this, we construct from $A$
a positive cusp operator $Q$ of order $1$. The complex powers of $Q$
are used in defining
a complex number $N$ as a non-commutative residue. The proof will be finished
by computing $N$ in two ways; first we get $N=0$, then $N$ is shown to be
essentially $\mathrm{index}(D)$.
\subsection*{Reduction to the case where $D$ is injective or surjective}
Fix an operator $T\in\Psi^{-\infty}(M,\mathcal{E}^+,\mathcal{E}^-)$ such that $D+T$
is either injective or surjective (or both). Choose
${\tilde{T}}\in\Psi_{\mathrm{sus}}^{-\infty}(X,\mathcal{E}^+,\mathcal{E}^-)$ with ${\tilde{T}}(0)=T$. Choose
$S\in\Psi_{\!c}^{-\infty,0}(X,\mathcal{G})$ such that
\[I_M(S)(\xi)=\begin{bmatrix}
&{\tilde{T}}^*(\xi)&&\\ {\tilde{T}}(\xi)&&&\\&&0&\\&&&0\end{bmatrix}.\]
We can assume that $S$ is symmetric (if not, replace $S$ by $(S+S^*)/2$).
Replace $D$ by $D+T$ and $A$ by $A+S$. Note that $\mathrm{index}(D)=\mathrm{index}(D+T)$, since
$T:H^1_c\to L^2_c$ is compact.
The hypotheses of the theorem (in particular (\ref{noua})) still hold with
$D+T$ instead of $D$ and with $A+S$ instead of $A$. So we can additionally
assume that $D$ is surjective or injective.
\subsection*{Construction of a positive cusp operator $Q$}
For $\xi\in{\mathbb R}$ we have $\sigma_1({\tilde{D}}(\xi))=\sigma_1(D)$, so
${\tilde{D}}(\xi)$ is elliptic as an operator on $M$ and
$\mathrm{index}({\tilde{D}}(\xi))=\mathrm{index}(D)$. If $D$ is surjective or injective,
then $0$ does not belong to the spectrum of $DD^*$ (respectively $D^*D$)
so by continuity ${\tilde{D}}(\xi)$ will have the same property for small enough
$|\xi|$. Thus there exists $\epsilon>0$ such that
the kernel and the cokernel of ${\tilde{D}}(\xi)$ have constant
dimension (hence they vary smoothly)
for all $|\xi|<\epsilon$. Choose a smooth real function
$\phi$ supported in $[-\epsilon,\epsilon]$ such that $\phi(0)=1$.
By \cite[Lemma 2]{cii} and the choice of $\phi$, the families
$\phi(\xi)P_{\ker {\tilde{D}}(\xi)}$ and $\phi(\xi)P_{\mathrm{coker} {\tilde{D}}(\xi)}$
define suspended operators in $\Psi_{\mathrm{sus}}^{-\infty}(M)$.
Let $R\in\Psi_{\!c}^{-\infty,0}(X,\mathcal{G})$ be such that
\begin{equation}\label{imr}\begin{split}
{I_M}(R)(\xi)&=\begin{bmatrix}
\phi(\xi)P_{\ker {\tilde{D}}(\xi)}&&&\\
&\phi(\xi)P_{\mathrm{coker} {\tilde{D}}(\xi)}&&\\
&&0&\\&&&0
\end{bmatrix}\\
&\quad\in\Psi_{\mathrm{sus}}^{-\infty}(M,\mathcal{E}^+\oplus\mathcal{E}^-\oplus V^+\oplus V^-).
\end{split}\end{equation}
It follows that ${I_M}(A^2+R^*R)(\xi)$ is invertible for all $\xi\in{\mathbb R}$, so
the cusp operator $A^2+R^*R$ is fully elliptic; this implies that it
is Fredholm, and moreover its kernel is made of smooth sections
vanishing rapidly towards $\partial X$. Let $P_{\ker(A^2+R^*R)}$
be the orthogonal projection on the finite-dimensional nullspace
of $A^2+R^*R$. Clearly $A^2+R^*R\geq 0$,
thus $A^2+R^*R+P_{\ker(A^2+R^*R)}$ is strictly positive.
Set
\[Q:=(A^2+R^*R+P_{\ker(A^2+R^*R)})^{1/2}\]
and let $Q^\lambda$ be the complex powers of $Q$.
Since $Q^2-A^2\in \Psi_{\!c}^{-\infty,0}(X,\mathcal{G})$ and $A$ is self-adjoint, we deduce
that for all $\lambda\in{\mathbb C}$,
\begin{equation}\label{ci}
[A,Q^\lambda]\in\Psi_{\!c}^{-\infty,0}(X,\mathcal{G}).
\end{equation}
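For completeness, here is the standard argument behind \eqref{ci}, sketched under the assumption that the complex powers of the strictly positive operator $Q$ are given by the usual Cauchy-integral functional calculus:

```latex
\begin{align*}
[A,Q^2] &= [A,\,Q^2-A^2]\in\Psi_{\!c}^{-\infty,0}(X,\mathcal{G}),\\
[A,(z-Q^2)^{-1}] &= (z-Q^2)^{-1}\,[A,Q^2]\,(z-Q^2)^{-1},
\end{align*}
```

so writing $Q^\lambda=\frac{1}{2\pi i}\oint_\Gamma z^{\lambda/2}(z-Q^2)^{-1}\,dz$ over a contour $\Gamma$ around the spectrum of $Q^2$ exhibits $[A,Q^\lambda]$ as an integral of operators in $\Psi_{\!c}^{-\infty,0}(X,\mathcal{G})$.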
\subsection*{A non-commutative residue}
Let $P(\lambda,z)\in\Psi_{\!c}^{-\lambda -1, -z-1}(X,\mathcal{G})$ be the analytic family
of cusp operators
\[P(\lambda,z):=[x^z,A]Q^{-\lambda -1}.\]
From \eqref{trz}, $\mathrm{Tr}(P(\lambda,z))$
is holomorphic in $\{(\lambda,z)\in{\mathbb C}^2; \Re(\lambda)>\dim(X)-1, \Re(z)>0\}$
and extends meromorphically to ${\mathbb C}^2$.
Following the scheme of \cite[Theorem 1]{cii}, our proof of Theorem \ref{pt}
will consist of computing in two different ways the complex number
\[N:=\mathrm{Res}_{\lambda=0} \left(\mathrm{Tr}(P(\lambda,z))|_{z=0}\right),\]
i.e., $N$ is the coefficient of $\lambda^{-1}z^0$ in the Laurent expansion
of $\mathrm{Tr}(P(\lambda,z))$ around $(0,0)$. The idea is to evaluate at $z=0$
\emph{before} and then \emph{after} taking the residue at $\lambda=0$, noting that the final answer
is independent of this order.
\subsection*{Vanishing of $N$}
On one hand,
\[P(\lambda,z)=x^z[A,Q^{-\lambda -1}]+[A,Q^{-\lambda -1}x^z].\]
The meromorphic function $\mathrm{Tr}[A,Q^{-\lambda -1}x^z]$ is identically zero
since it vanishes on the open set
$\{(\lambda,z)\in{\mathbb C}^2; \Re(\lambda)>\dim(X)-1, \Re(z)>0\}$
by the trace property. By \eqref{ci}, the function
$\mathrm{Tr}(x^z[A,Q^{-\lambda -1}])$ is regular in $\lambda\in{\mathbb C}$, so
in particular the meromorphic function
\[z\mapsto\mathrm{Res}_{\lambda=0}\mathrm{Tr}(x^z[A,Q^{-\lambda -1}])\]
vanishes. We conclude that $N=0$.
\subsection*{Second computation of $N$}
On the other hand, $P(\lambda,0)=0$ so
\[U(\lambda,z):=z^{-1}P(\lambda, z)\in
\Psi_{\!c}^{-\lambda -1, -z-1}(X,\mathcal{G})\]
is an analytic family in $\Psi_{\!c}(X,\mathcal{G})$. Set
$[\log x,A]:=(z^{-1}[x^z,A])|_{z=0}\in\Psi_{\!c}^{0,1}(X,\mathcal{G})$. Then
$U(\lambda, 0)=[\log x,A]Q^{-\lambda-1}$. By multiplicativity of
the indicial family,
\[{I_M}(x^{-1}U(\lambda,0))={I_M}(x^{-1}[\log x,A]){I_M}(Q^{-\lambda-1}).\]
By \eqref{noua} and \cite[Lemma 3.4]{in},
we see that ${I_M}(x^{-1}[\log x,A])$ is the $4\times 4$
diagonal matrix
\[\begin{bmatrix}
i&&&\\&-i&&\\&& i\xi(1+\xi^2+\Delta^+)^{-\frac12}&\\&&&
-i\xi(1+\xi^2+\Delta^-)^{-\frac12}
\end{bmatrix}\]
and ${I_M}(Q^{-\lambda-1})=
{I_M}(A^2+R^*R)^{-\frac{\lambda+1}{2}}$. Also, using \eqref{imr}, we deduce that
${I_M}(A^2+R^*R)$ is the $4\times 4$ diagonal matrix with entries
\begin{align*}
a_{11}&=\xi^2+{\tilde{D}}(\xi)^*{\tilde{D}}(\xi)+\phi^2(\xi)P_{\ker {\tilde{D}}(\xi)}&
a_{33}&=1+\xi^2+\Delta^+\\
a_{22}&=\xi^2+{\tilde{D}}(\xi){\tilde{D}}(\xi)^*+\phi^2(\xi)P_{\mathrm{coker} {\tilde{D}}(\xi)}&
a_{44}&=1+\xi^2+\Delta^-.
\end{align*}
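Indeed, the diagonal form of ${I_M}(A^2+R^*R)$ follows by squaring the first $2\times2$ block of \eqref{noua} (the scalar $\xi$ commutes with ${\tilde{D}}(\xi)$, so the off-diagonal terms cancel) and adding ${I_M}(R)^*{I_M}(R)$ from \eqref{imr}:

```latex
\begin{bmatrix}\xi&{\tilde{D}}^*(\xi)\\ {\tilde{D}}(\xi)&-\xi\end{bmatrix}^2
=\begin{bmatrix}\xi^2+{\tilde{D}}^*(\xi){\tilde{D}}(\xi)&0\\
0&\xi^2+{\tilde{D}}(\xi){\tilde{D}}^*(\xi)\end{bmatrix}.
```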
By \eqref{trz},
\begin{equation*}\begin{split}
\mathrm{Tr}(P(\lambda, z))|_{z=0}
&=\frac{1}{2\pi}\int_{\mathbb R} \mathrm{Tr}({I_M}(x^{-1}U(\lambda,0)))\,d\xi\\
&=\frac{i}{2\pi}\int_{\mathbb R}
\left(\mathrm{Tr}(\xi^2+{\tilde{D}}(\xi)^*{\tilde{D}}(\xi)
+\phi^2(\xi)P_{\ker {\tilde{D}}(\xi)})^{-\frac{\lambda+1}{2}}\right.\\
&\quad-\mathrm{Tr}(\xi^2+{\tilde{D}}(\xi){\tilde{D}}(\xi)^*
+\phi^2(\xi)P_{\mathrm{coker} {\tilde{D}}(\xi)})^{-\frac{\lambda+1}{2}}\\
&\quad\left.+\xi\mathrm{Tr}(1+\xi^2+\Delta^+)^{-\frac\lambda2-1}
-\xi\mathrm{Tr}(1+\xi^2+\Delta^-)^{-\frac\lambda2-1}\right)
d\xi.
\end{split}\end{equation*}
The third and fourth terms in this sum are odd in $\xi$
so their integral vanishes. For each fixed $\xi$ we compute
the trace of the first two terms by using orthonormal basis
of $L^2_c(M,\mathcal{E}^+)$, $L^2_c(M,\mathcal{E}^-)$ given by eigensections of
${\tilde{D}}(\xi)^*{\tilde{D}}(\xi)$, respectively ${\tilde{D}}(\xi){\tilde{D}}(\xi)^*$. The non-zero parts
of the spectrum of ${\tilde{D}}(\xi)^*{\tilde{D}}(\xi)$ and ${\tilde{D}}(\xi){\tilde{D}}(\xi)^*$ coincide,
so what is left is
\[\int_{\mathbb R} \mathrm{index}({\tilde{D}}(\xi))
(\xi^2+\phi^2(\xi))^{-\frac{\lambda+1}{2}}d\xi.\]
The subtle point here is that the kernel and cokernel of ${\tilde{D}}(\xi)$
may have jumps when $|\xi|>\epsilon$, but our formula involves only
the index, which is homotopy invariant and equals $\mathrm{index}(D)$ for all $\xi\in{\mathbb R}$.
Thus the index comes out of the integral. Since $\phi$ is supported in
$[-\epsilon,\epsilon]$, the integrand equals $|\xi|^{-\lambda-1}$ for
$|\xi|\geq\epsilon$, while the integral over $[-\epsilon,\epsilon]$ is
holomorphic at $\lambda=0$ (there $\xi^2+\phi^2(\xi)$ is bounded below);
hence the residue
\[\mathrm{Res}_{\lambda=0}\int_{\mathbb R} (\xi^2+\phi^2(\xi))^{-\frac{\lambda+1}{2}}d\xi
=\mathrm{Res}_{\lambda=0}\frac{2\epsilon^{-\lambda}}{\lambda}=2\]
is independent of the compactly supported function $\phi$, so
\[0=N=\mathrm{Res}_{\lambda=0}\mathrm{Tr}(P(\lambda, z))|_{z=0}=\frac{i}{\pi}\mathrm{index}(D).\qedhere\]
\end{proof}
\section{The $K$-theoretic characterization of cobordism invariance}\label{kre}
We interpret now Theorem \ref{pt} in topological terms.
Let
\[p:{S_{\mathrm{sus}}^*(M)}\to M\] be the sphere bundle inside $T_\sus^* M:=T^*M\oplus\underline{{\mathbb R}}$.
We also denote by $p$ the bundle projections $T^*M\to M$, $T^*X\to X$,
$S^*X\to X$. The total space of ${S_{\mathrm{sus}}^*(M)}$ is the boundary of ${}^c\!S^*X$. By fixing
a product decomposition of $X$ near $M$, we get non-canonical vector bundle
isomorphisms making the diagram
\begin{equation*}
\xymatrix{
{}^cT^*X\ar[d]^\cong\ar[r]^r&T^*_{\mathrm{sus}} M\ar[d]^\cong\\
T^*X\ar[r]^r&T^*X|_M}
\end{equation*}
commutative, so we can replace ${}^c\!S^*X$ with the more familiar space
$S^*X$ in all the topological considerations of this section.
The interior unit normal vector inclusion $\imath:M\to{S_{\mathrm{sus}}^*(M)}$ and the
bundle projection $p:{S_{\mathrm{sus}}^*(M)}\to M$ satisfy $p\circ\imath=\mathrm{id}_M$,
so they induce a splitting
\[K^0({S_{\mathrm{sus}}^*(M)})=\ker(\imath^*)\oplus p^*(K^0(M)).\]
Let
\[r:K^0(S^*X)\to K^0({S_{\mathrm{sus}}^*(M)})\]
be the map of restriction to the boundary, and
\[d:K^0(T^*M)\to K^0({S_{\mathrm{sus}}^*(M)})/ p^*(K^0(M))\]
the isomorphism defined as follows: if
$(\mathcal{E}^+,\mathcal{E}^-,\sigma)$ is a triple defining a class in $K^0(T^*M)$ with
$\sigma:\mathcal{E}^+\to\mathcal{E}^-$ an isomorphism outside the open unit ball,
then
\[
d(\mathcal{E}^+,\mathcal{E}^-,\sigma)=\begin{cases}\mathcal{E}^+ &\text{on
${S_{\mathrm{sus}}^*(M)}\cap\{\xi\geq 0\}$}\\
\mathcal{E}^- &\text{on
${S_{\mathrm{sus}}^*(M)}\cap\{\xi\leq 0\}$}
\end{cases}\]
with the two bundles identified via $\sigma$ over
${S_{\mathrm{sus}}^*(M)}\cap\{\xi=0\}= S^*M$.
We can now reformulate Theorem \ref{th2} as follows:
\begin{theorem}\label{kr}
Let $X$ be a compact manifold with closed boundary $M$, $\mathcal{E}^\pm\to M$
hermitian vector bundles, and $D\in\Psi(M,\mathcal{E}^+,\mathcal{E}^-)$
an elliptic pseudodifferential operator with symbol class
\[[\sigma(D)]:=(p^*\mathcal{E}^+,p^*\mathcal{E}^-,\sigma(D))\in K^0(T^* M).\]
Assume that $d[\sigma(D)]\in p^*(K^0(M)) + r(K^0(S^*X)).$
Then $\mathrm{index}(D)=0.$
\end{theorem}
\begin{proof}
The idea is to construct an operator $A$ as in Theorem \ref{pt}.
We must first construct the vector bundles $V^\pm$, and then
extend the principal symbol of \eqref{noua} to an elliptic symbol in
the interior of $X$. Note that none of the bundles $\mathcal{E}^\pm, V^\pm$
has any reason to extend to $X$.
We can assume that $D$ is of order $1$. Extend $\sigma(D)_{|S^*M}$ arbitrarily
to a homomorphism $\sigma:p^*\mathcal{E}^+\to p^*\mathcal{E}^-$ (not necessarily invertible)
over ${S_{\mathrm{sus}}^*(M)}$. Let $\mathcal{F}^\pm\to{S_{\mathrm{sus}}^*(M)}$ be the vector bundles
defined as the span of the eigenvectors of positive, resp.\ negative
eigenvalues of the symmetric automorphism of $p^*(\mathcal{E}^+\oplus\mathcal{E}^-)$
\[a:=\begin{bmatrix}\xi&\sigma^*\\\sigma&-\xi\end{bmatrix}:
p^*(\mathcal{E}^+\oplus\mathcal{E}^-)\to p^*(\mathcal{E}^+\oplus\mathcal{E}^-).\]
\begin{lemma1}
The $K$-theory class of the vector bundle $\mathcal{F}^+$ is $d[\sigma(D)]$.
\end{lemma1}
\begin{proof} $\mathcal{F}^+$ is the image of the projector $\frac{1+a(a^2)^{-\frac12}}{2}$
inside $p^*(\mathcal{E}^+\oplus\mathcal{E}^-)$, or equivalently the image of the endomorphism
$(a^2)^{\frac12}+a$:
\begin{equation*}\begin{split}
\mathcal{F}^+&=\{((\xi+(\xi^2+\sigma^*\sigma)^{\frac12})v,\sigma v) ; v\in\mathcal{E}^+\}\\
&\quad+\{(\sigma^* w,(-\xi+(\xi^2+\sigma\sigma^*)^{\frac12})w) ; w\in\mathcal{E}^-\}.
\end{split}\end{equation*}
Now $\xi+(\xi^2+\sigma^*\sigma)^{\frac12}$ is invertible when
$\xi\geq 0$, and $-\xi+(\xi^2+\sigma\sigma^*)^{\frac12}$ is invertible when
$\xi\leq 0$. Thus the projections from $\mathcal{F}^+$ to $p^*\mathcal{E}^+$, respectively to
$p^*\mathcal{E}^-$, are isomorphisms for $\xi\geq 0$, respectively for $\xi\leq 0$.
Over $\{\xi=0\}$ these isomorphisms differ by $\sigma(\sigma^*\sigma)^{-\frac12}$,
which is homotopic to $\sigma$ by varying the exponent from $-\frac12$ to $0$.
\end{proof}
The hypothesis says therefore that
\begin{equation}\label{ep}
[\mathcal{F}^+]\in p^*(K^0(M)) + r(K^0(S^*X)).
\end{equation}
\begin{lemma1}
There exist vector bundles
$G^\pm\to S^*X$, $V^\pm\to M$ such that
\begin{equation}\label{fkg}
\mathcal{F}^\pm\oplus p^*V^\pm=G^\pm_{|{S_{\mathrm{sus}}^*(M)}}
\end{equation}
and moreover there exists $N\in{\mathbb N}$ with
\begin{align}\label{gtri}\mathcal{E}^+\oplus \mathcal{E}^-\oplus V^+\oplus V^-\cong
\underline{\cz}^N,&& G^+\oplus G^-\cong\underline{\cz}^N.\end{align}
\end{lemma1}
\begin{proof} From eq.\ \eqref{ep}, there exist $V^+_0\to M$, $G^+_0\to S^*X$ and
$k\in {\mathbb N}$ with
$\mathcal{F}^+\oplus \underline{\cz}^k=p^*V^+_0\oplus {G^+_0}_{|{S_{\mathrm{sus}}^*(M)}}\oplus\underline{\cz}^k$. Let $V^+_1$
be a complement of $V^+_0$ inside some $\underline{\cz}^h$. Then $V^+:=\underline{\cz}^k\oplus V^+_1$
and $G^+:=\underline{\cz}^{h+k}\oplus G^+_0$ satisfy \eqref{fkg}. This implies
\begin{equation*}\begin{split}
[\mathcal{F}^-]&=[p^*(\mathcal{E}^+\oplus \mathcal{E}^-)]-[\mathcal{F}^+]
=p^*[\mathcal{E}^+\oplus \mathcal{E}^-\oplus V^+]-r[G^+].
\end{split}\end{equation*}
Let $G^-_0$ be a complement (inside $\underline{\cz}^{N_0}$) of $G^+$, and
$V^-_0$ a complement of $\mathcal{E}^+\oplus \mathcal{E}^-\oplus V^+$ inside $\underline{\cz}^{N_1}$.
Then \[[\mathcal{F}^-]+p^*[V^-_0]+\underline{\cz}^{N_0}=\underline{\cz}^{N_1}+r[G^-_0]\]
which amounts to saying that there exists $N_2\in{\mathbb N}$ with
\[\mathcal{F}^-\oplus p^*V^-_0\oplus \underline{\cz}^{N_0+N_2}\cong\underline{\cz}^{N_1+N_2}
\oplus G^-_0|_{S_{\mathrm{sus}}^*(M)}.\]
Thus $V^-:=V^-_0\oplus \underline{\cz}^{N_0+N_2}$ and
$G^-:=\underline{\cz}^{N_1+N_2}\oplus G^-_0$ satisfy \eqref{fkg}.
From the construction of $V^-$ and $G^-$,
Eq.\ \eqref{gtri} holds for $N:=N_0+N_1+N_2$.
\end{proof}
Let $\mathcal{G}:=\underline{\cz}^N\to X$ be the trivial bundle.
From \eqref{gtri}, $\mathcal{G}_{|M}\cong\mathcal{E}^+\oplus \mathcal{E}^-\oplus V^+\oplus V^-$
(as bundles over $M$) and $p^*\mathcal{G}\cong G^+\oplus G^-$ (as bundles over $S^*X$).
Define
$\tilde{a}:p^*\mathcal{G}\to p^*\mathcal{G}$ to be the automorphism of $p^*\mathcal{G}$ over $S^*X$
that equals $\pm 1$ on $G^\pm$. From the definition
of $\mathcal{F}^\pm$ and eq.\ \eqref{fkg} it follows that $\tilde{a}_{|{S_{\mathrm{sus}}^*(M)}}$ and
the automorphism $\left[\begin{smallmatrix}a&&\\&1&\\&&-1\end{smallmatrix}
\right]$ (written in the decomposition
$p^*\mathcal{G}_{|{S_{\mathrm{sus}}^*(M)}}=p^*(\mathcal{E}^+\oplus\mathcal{E}^-)\oplus p^*V^+\oplus p^*V^-$) have the same spaces of
eigenvectors of positive, respectively negative eigenvalues. Thus we can deform
$\tilde{a}$ inside self-adjoint automorphisms to an automorphism $\alpha$
with
\begin{equation}\label{tas}
\alpha_{|{S_{\mathrm{sus}}^*(M)}}=\left[\begin{smallmatrix}a&&\\&1&\\&&-1\end{smallmatrix}\right].
\end{equation}
We extend $\alpha$ to $T^*X\setminus 0$ with homogeneity $1$.
As noted at the beginning of this section, we replace $S^*X$ by ${}^c\!S^*X$.
By (\ref{tas}) and the definition of $a$, $\alpha|_{S_{\mathrm{sus}}^*(M)}$ coincides with
the principal symbol of the right-hand side of (\ref{noua}). Therefore
using \eqref{jss},
there exists an elliptic cusp operator $A\in\Psi_{\!c}^{1,0}(X,\mathcal{G})$ with
$\sigma_1(A)=\alpha$ and with
indicial family given by the symmetric suspended operator (\ref{noua}).
By replacing $A$ with $(A+A^*)/2$ we can assume
$A$ to be symmetric. The hypothesis of Theorem \ref{pt} is fulfilled,
so we conclude that $\mathrm{index}(D)=0$.
\end{proof}
\section{Variants of Theorem \ref{kr}}\label{clo}
\subsection{Carvalho's theorem}
Carvalho \cite{carvalho} obtained a slightly different statement of
cobordism invariance (her result holds for non-compact manifolds as well).
Namely, in the context of Theorem \ref{kr} she proved that
$\mathrm{index}(D)=0$ provided that $[\sigma(D)]$ lies in the image of
the composite map
\[K^1(T^*X)\stackrel{r}{\to} K^1(T^*M\oplus {\mathbb R})
\stackrel{\beta^{-1}}{\to} K^0(T^*M)\]
defined by restriction and by Bott periodicity.
Consider the relative pairs
\begin{align*}
S^*X\hookrightarrow & B^*X,& {S_{\mathrm{sus}}^*(M)} \hookrightarrow & B^*_{\mathrm{sus}} M,
\end{align*}
the inclusion maps between them, and the
induced boundary maps in the long exact sequences in $K$-theory.
We claim that we get a commutative diagram
\begin{align*}
\xymatrix{
K^0(S^*X)\ar[d]^r\ar @{->>} [r]&K^1(T^*X)\ar[d]^r\\
K^0({S_{\mathrm{sus}}^*(M)})\ar[d]^q\ar @{->>} [r]&K^1(T_\sus^* M)\\
K^0({S_{\mathrm{sus}}^*(M)})/p^*K^0(M)\ar[r]^-{d^{-1}}&K^0(T^*M)\ar[u]_\beta}\label{dia}
\end{align*}
Indeed, the upper square commutes by naturality and the lower one
by checking the definitions. Moreover, the existence\footnote{The obstruction
lives in $H^n(X)$ which is $0$ when $X$ has nonempty boundary;
I am grateful to Gustavo Granja for this argument.}
of nonzero sections
in $T^*X\to X$ and $T_\sus^* M\to M$ shows that the rows are surjective. Also
$\beta$, $d$ are isomorphisms, so $d[\sigma(D)]$ lies in the image of $q\circ r$
if and only if $[\sigma(D)]$ lies in the image of $\beta^{-1}\circ r$.
Thus Theorem \ref{kr}
is equivalent to Carvalho's statement applied to closed manifolds.
Our formulation is marginally simpler because it does not involve
the Bott isomorphism.
\subsection{An indirect proof of Theorem \ref{kr}}
As mentioned in the introduction, Theorem \ref{kr} follows from the
Atiyah-Singer formula:
\[\mathrm{index}(D)=\langle M, \mathrm{Td}(TM)\cup p_*\mathrm{ch}([\sigma(D)])\rangle,\]
where $p_*$ denotes integration along the fibers of $p:T^*M\to M$, taking
values in the cohomology of $M$ twisted with the orientation bundle.
Indeed, the normal bundle to $M$ in $X$ is trivial so $\mathrm{Td}(TM)=\mathrm{Td}(TX)_{|M}$.
We can embed $T^*M$ into $S^*X_{|M}$ via the central
projection from the interior pole of each sphere; the pull-back through this
map of $d[\sigma(D)]$ coincides with $[\sigma(D)]$ modulo $p^*K^0(M)$; in particular
the push-forwards to $M$ of $\mathrm{ch}(d[\sigma(D)])$ and of $\mathrm{ch}([\sigma(D)])$ are equal.
So the hypothesis that $d[\sigma(D)]$ is the restriction of a class on $S^*X$ modulo
$p^*K^0(M)$ implies, by the functoriality of the Chern character, that
$p_*\mathrm{ch}([\sigma(D)])\in H^*(M,\mathcal{O})$ is the restriction of a (twisted)
cohomology class from $X$. Finally, Stokes' formula shows that $\mathrm{index}(D)=0$.
\bibliographystyle{plain}
\par
There exists a strong analogy between the properties of black holes and
conventional thermodynamical systems \cite{thermo}. In this analogy the entropy
of the black hole is directly proportional to the surface area of its
event horizon and literature refers to this quantity as the Bekenstein-Hawking
entropy \cite{bekenstein}. Meanwhile, the temperature of the black hole is proportional
to the surface gravity of its horizon. In spite of this established
correspondence, there is still a lack of understanding of precisely what
accounts for black hole entropy, which is a purely geometrical quantity.
In a usual statistical mechanical system the entropy is explained in
terms of the degrees of freedom of its microscopic constituents. However
a black hole has a limited number of such degrees of freedom,
as demonstrated by
the so-called ``no-hair'' theorems \cite{hair}.
In recent literature, varied attempts have been made to derive black hole
entropy on statistical mechanical principles, with varying
degrees of success \cite{stringy,carlip,induced,loop}. For instance, Strominger and Vafa
\cite{stringy} counted the degeneracy of soliton bound states for extremal black
\cite{stringy} counted the degeneracy of soliton bound states for extremal black
holes in string theory. In a very different and more geometrical approach,
Carlip \cite{carlip} counted horizon edge states in a gauge theory formulation
of 2+1 anti-de Sitter gravity. Another approach that has been investigated
involves Sakharov's theory of induced gravity \cite{sak} following a proposal
by Jacobson \cite{jac}. Recent work along these lines to generate black hole
entropy has been done by Frolov, Fursaev, and Zelnikov \cite{induced}.
The success of a number of very diverse approaches seems to suggest that
the correct explanation for black hole entropy may in some sense
be universal. That is, it should
depend explicitly on neither the macroscopic gravitational form nor
on a hidden microscopic quantum theory \cite{wald}. Consequently, it may prove
beneficial to study as wide a range of theories as feasible and in doing
so look for model-independent features. Such observations could
potentially provide valuable insight as to the geometrical origins
of black hole entropy. To this end, we examine the thermodynamic
properties of black holes in generic dilaton gravity coupled to an
Abelian gauge field in 1+1 dimensions. This provides a very extensive
class of models which allow for black hole solutions. Even with the
2-dimensional limitation, many such models are seen to have direct
physical significance. For instance, in the spherically symmetric
reduction of 4-dimensional Einstein-Maxwell gravity the
dilaton scalar field corresponds to
radial distance. Also it has been shown that black hole solutions
of constant curvature gravity in two dimensions (Jackiw-Teitelboim \cite{JT}) are
in fact projections of BTZ black holes described by 2+1 gravity with
axial symmetry \cite{ortiz}.
In recent work we have studied the classical
thermodynamic properties of generic
dilaton gravity via a Hamiltonian partition function method
\cite{star}\cite{shelemy}. In the present paper we calculate
the thermodynamics so as to include one-loop corrections. The approach
we use here is based on York's Euclidean-action method
\cite{york}\cite{brown} which
in turn follows from the Gibbons-Hawking path integral
formalism \cite{gibbon}. This
entails taking the black hole to be in a state of thermal equilibrium
with evaporated radiation and then relating the periodicity of
Euclidean time with the inverse thermodynamic temperature. First-order
quantum corrections are introduced into the procedure via a technique
applied by Frolov, Israel, and Soldukin \cite{frolov} in the study of spherically
symmetric charged black holes. The basic idea is to add to the classical
action a correction corresponding to the one-loop effective action obtained by
integrating out matter fields coupled to the metric and non-minimally coupled to the dilaton.
This one loop effective action is a suitable generalization\cite{odintsov1,hawk,kummer1,dowk} of the
Polyakov action obtained from the 2-D conformal (or trace) anomaly \cite{trace}. The effects of these quantum corrections on the black hole thermodynamics are two-fold. First, they modify the black hole geometry due to the
non-vanishing one-loop effective stress-energy tensor. Second, the surface terms which give rise to the black hole free energy also acquire quantum corrections. As a result the formulae relevant to calculating thermodynamic
quantities (energy and entropy) are modified as well. Our results will
hopefully provide insight into the nature of such corrections for generic
dilaton gravity as well as provide the template for closer examination
of a myriad of specific theories.
This paper is arranged as follows. In Section 2 we introduce the action for
generic 2-dimensional dilaton gravity coupled to an Abelian gauge field.
Here we present the most general solution to the field equations and
by way of the Euclidean action approach \cite{york} we are able to describe
black hole thermodynamic properties at a classical level. In Section 3
we introduce one-loop quantum corrections due to matter fields propogating
on a curved background. The resulting modifications to the black hole
geometry are deduced by applying the formalism of Frolov et al. \cite{frolov}.
In Section 4 we calculate the quantum corrections to black hole energy
and entropy. In Sections 5 and 6 we apply our results to the specific
examples of charged black holes in spherically symmetric gravity and
rotating BTZ black holes respectively. \footnote{Quantum corrections to the thermodynamics of the BTZ black hole and some classes of 2d charged black holes have been previously considered using different methods in \cite{odintsov2}
while \cite{bytsenko} has examined quantum gravitational corrections to
the entropy of the BTZ black hole.} For simplicity, these cases consider
minimal coupling of the matter fields to the dilaton; however, the formalism
to be presented is readily extendable to more general coupling scenarios.
Section 7 summarizes the paper and considers future prospects for
related work.
\section{Classical Theory}
In two spacetime dimensions the Einstein tensor vanishes
identically. Consequently, the
construction of a dynamical theory of gravity with no more than two
derivatives of the metric
in the action requires the introduction of a scalar field, namely
the dilaton. Recent works
have demonstrated that the dilaton is more than a Lagrange
multiplier: it is significant in
determining both the symmetries and topologies of the solutions
\cite{DGK}.
Here we consider the most general Lorentzian action functional
depending on the metric
tensor $g_{\mu\nu}$, dilaton scalar field $\phi $ and Abelian gauge
field $A_{\mu }$ in two
spacetime dimensions \cite{banks}\cite{DK}:
\begin{eqnarray}
W[g,\phi ,A] &=& {1\over{2G}}\int
d^2x\sqrt{-g}\left[D(\phi )R(g)\right.\nonumber\\
&&\quad +
\left.{1\over2}g^{\alpha\beta}\partial_{\alpha}\phi\partial_{\beta
}
\phi + {1\over{l^2}}V(\phi ) - {2G\over 4}Y(\phi
)F^{\alpha\beta}
F_{\alpha\beta}\right]
\label{eq: 1}
\end{eqnarray}
where $G$ is the dimensionless 2-d Newton constant, $F_{\mu\nu} =
\partial_{\mu}A_{\nu} - \partial_{\nu }A_{\mu }$, and $l$ is a fundamental
constant of dimension length.
Also
$D(\phi ), V(\phi ),$ and $Y(\phi )$ are arbitrary functions of the
dilaton field.
Variation of the action with respect to the metric, dilaton field
and gauge field respectively
leads to the following set of field equations.
\begin{eqnarray}
-2\nabla_{\alpha }\nabla_{\beta }D(\phi ) & + & \nabla_{\alpha
}\phi\nabla_{\beta }
\phi + g_{\alpha\beta}\left(2\Box D(\phi )-{1\over2}(\nabla\phi
)^2-{1\over{l^2}}
V(\phi )\right.\nonumber\\
& + & \left.{G\over 2}Y(\phi )F^{\mu\nu}F_{\mu\nu}\right)-2G
Y(\phi )F^{\gamma}_{\alpha }F_{\beta\gamma} = 0
\label{eq: 2}
\end{eqnarray}
\begin{equation}
-\Box\phi +
\left[R{\delta D\over\delta\phi }+{1\over{l^2}}
{\delta V\over{\delta\phi}}-{G\over 2}{\delta Y\over \delta\phi }F^
{\alpha\beta }F_{\alpha\beta }\right] = 0
\label{eq: 3}
\end{equation}
\begin{equation}
\nabla_{\beta }(Y(\phi )F^{\alpha\beta}) = 0
\label{eq: 4}
\end{equation}
Directly solving Maxwell's equation \req{eq: 4} yields
\begin{equation}
F = {{\sqrt{-g}q}\over {Y(\phi )}}
\label{eq: 5}
\end{equation}
where $F$ is defined implicitly by $F_{\mu\nu} = F\epsilon_{\mu\nu}$
and
$q$ is a constant that
corresponds to Abelian charge. Next we define an ``effective''
potential
${\tilde V}(\phi ,q)$ such that
\begin{equation}
{\tilde V}(\phi ,q) = V(\phi ) - Gl^2{q^2\over {Y(\phi )}}
\label{eq: 6}
\end{equation}
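As a consistency check (a direct computation, with $\epsilon_{\mu\nu}$ the Levi-Civita symbol, $\epsilon_{01}=1$, and $g=\det g_{\mu\nu}<0$), the solution \req{eq: 5} indeed satisfies \req{eq: 4}:

```latex
\begin{align*}
Y(\phi )F^{\alpha\beta} &= Y(\phi )\,g^{\alpha\mu}g^{\beta\nu}F\epsilon_{\mu\nu}
= {\sqrt{-g}\,q\over g}\,\epsilon^{\alpha\beta}
= -{q\over \sqrt{-g}}\,\epsilon^{\alpha\beta},\\
\nabla_{\beta }\left(Y(\phi )F^{\alpha\beta}\right)
&= {1\over \sqrt{-g}}\,\partial_{\beta }\left(\sqrt{-g}\,Y(\phi )F^{\alpha\beta}\right)
= -{1\over \sqrt{-g}}\,\partial_{\beta }\left(q\,\epsilon^{\alpha\beta}\right) = 0,
\end{align*}
```

where the last step uses that $q$ and $\epsilon^{\alpha\beta}$ are constant, together with the standard divergence formula for antisymmetric tensors.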
The action and remaining field equations Eqs.[\ref{eq: 2},
\ref{eq: 3}] can be rewritten as
follows:
\begin{equation}
W[g,\phi ,q] = {1\over {2G}}\int d^2x\sqrt{-g}\left[D(\phi
)R(g) + {1\over2}
(\nabla\phi )^2 + {1\over {l^2}}\tilde{V}(\phi ,q)\right]
\label{eq: 7}
\end{equation}
\begin{equation}
-2\nabla_{\alpha }\nabla_{\beta}D(\phi ) +
\nabla_{\alpha}\phi\nabla_{\beta}
\phi + g_{\alpha\beta}\left(2\Box D(\phi ) - {1\over2}
(\nabla\phi )^2 - {1\over{l^2}}{\tilde V}(\phi , q)\right) =
0
\label{eq: 8}
\end{equation}
\begin{equation}
-\Box\phi +
R{{\delta D}\over{\delta\phi }} + {1\over l^2}
{\delta{\tilde V}\over {\delta\phi }} = 0
\label{eq: 9}
\end{equation}
Obtaining the solutions for an action of this form has been well
documented in prior works \cite{star}\cite{DK} so only a brief account will be
presented here. First the action is reparameterized
thereby eliminating the kinetic term (requires $D(\phi )
\neq 0$ and ${{dD}\over {d\phi }}\neq 0$ for any admissible value
of $\phi $).
\begin{equation}
\overline{\phi} = D(\phi )
\label{eq: 10}
\end{equation}
\begin{equation}
\Omega^2 = \exp\left({1\over2}\int{{d\phi }\over {dD/d\phi }}\right)
\label{eq: 12}
\end{equation}
\begin{equation}
\overline{g}_{\mu\nu} = \Omega^2(\phi )g_{\mu\nu}
\label{eq: 11}
\end{equation}
\begin{equation}
{\overline{V}}(\overline{\phi} ,q) = {\tilde V}(\phi ,q)/\Omega^2(\phi )
\label{eq: 13}
\end{equation}
The reparameterized action is then as follows:
\begin{equation}
W[\overline{g} ,\overline{\phi} ,q] = {1\over {2G}}\int d^2x\sqrt{-\overline{g}}
\left[\overline{\phi} R(\overline{g} ) + {1\over {l^2}}\overline{V}
(\overline{\phi},q)\right]
\label{eq: 13.5}
\end{equation}
The timelike Killing
vector for the resultant field equations is easily identifiable.
It is found to be:
\begin{equation}
\overline{K}^{\mu } = {l\epsilon^{\mu\nu}\over \sqrt{-\overline{g}}}\partial_{\nu
}\overline{\phi}
\label{eq: 14}
\end{equation}
With norm given by
\begin{equation}
|\overline{K}|^2 = \overline{j}(\overline{\phi} ) - 2GlM
\label{eq: 15}
\end{equation}
where
\begin{equation}
\overline{j}(\overline{\phi} ) = \int^{\overline{\phi} }_od\overline{\phi}\overline{V}(\overline{\phi} ,q)
\label{eq: 16}
\end{equation}
and $M$ is a constant of integration identified as the mass
observable.
Next we choose a local coordinate system in which $\overline{\phi} $ and hence
$\phi $ have spatial dependence only. The final solutions in these
static coordinates are then
obtained by exploiting the form of the Killing vector. These are
found to be:
\begin{equation}
{\overline {\phi }} = {x\over l}
\label{eq: 16.25}
\end{equation}
\begin{equation}
ds^2 = -\overline{g}(x)dt^2 + \overline{g}^{-1}(x)dx^2
\label{eq: 16.5}
\end{equation}
where
\begin{equation}
{\overline{g}}(x) = \overline{j}(\overline{\phi} ) - 2GlM
\label{eq: 16.75}
\end{equation}
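The barred solution can be checked symbolically. The following sketch (the potential ${\overline V}(\overline{\phi})=2\overline{\phi}$ is an illustrative, Jackiw-Teitelboim-type choice, not fixed by the paper) verifies that the metric \req{eq: 16.5} with \req{eq: 16.75} satisfies the dilaton field equation $R(\overline{g})+{1\over l^2}{\delta{\overline V}\over\delta\overline{\phi}}=0$ of the reparameterized action \req{eq: 13.5}:

```python
import sympy as sp

# Symbolic check of the static solution for an illustrative potential.
# Vbar(s) = 2*s is a hypothetical (Jackiw-Teitelboim-type) choice.
t, x, l, G, M, s = sp.symbols('t x l G M s')
phibar = x / l                                  # dilaton, Eq. (16.25)
jbar = sp.integrate(2 * s, (s, 0, phibar))      # j(phi) = int_0^phi V, Eq. (16)
f = jbar - 2 * G * l * M                        # metric function, Eq. (16.75)

g = sp.diag(-f, 1 / f)                          # ds^2 = -f dt^2 + f^{-1} dx^2
ginv = g.inv()
X = [t, x]

def christoffel(a, b, c):
    # Gamma^a_{bc} for the 2d metric g
    return sum(ginv[a, d] * (sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                             - sp.diff(g[b, c], X[d])) for d in range(2)) / 2

def ricci(b, c):
    # Ric_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab} + quadratic terms
    return sum(sp.diff(christoffel(a, b, c), X[a])
               - sp.diff(christoffel(a, a, b), X[c])
               + sum(christoffel(a, a, d) * christoffel(d, b, c)
                     - christoffel(a, c, d) * christoffel(d, a, b)
                     for d in range(2))
               for a in range(2))

R = sp.simplify(sum(ginv[b, c] * ricci(b, c)
                    for b in range(2) for c in range(2)))

# Dilaton equation of the reparameterized action: R + V'(phibar)/l^2 = 0,
# with V'(phibar) = 2 for this choice of potential.
print(sp.simplify(R + 2 / l**2))
```

Replacing `Vbar` by another smooth function of the dilaton checks other models in the same way, at the cost of slower symbolic integration.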
We can then re-express this solution in terms of the original
parameterization as follows:
\begin{equation}
\phi = D^{-1}({x\over l})
\label{eq: 17}
\end{equation}
\begin{equation}
ds^2 = -g(x)dt^2 + g^{-1}(x)\Omega^{-4}(\phi (x))dx^2
\label{eq: 18}
\end{equation}
where
\begin{equation}
g(x) = {1\over \Omega^2(\phi (x))}\left[\overline{j}({x\over
l})-2GlM\right]
\label{eq: 19}
\end{equation}
The necessary condition for a given theory to admit black hole
configurations is the existence of apparent horizons. That is,
spacetime curves of the form
$\phi (x,t) = \phi_o$ (constant) where $\phi_o$ satisfies $g(\phi_o
; M, q) = 0$. The nature of a given black hole solution can
be revealed by considering $dg / d\phi $ evaluated at these
event horizons $\phi = \phi_o$. For a fixed value of mass M there
may exist critical values
of charge $q (M)$ so that
this derivative vanishes. For such critical values the function
$g(\phi ; M, q)$ may have either a local extremum or a point of
inflection at the horizon. If it is an extremum, the norm of the
Killing vector does not change sign when passing through the event
horizon. As $q$ is varied away from its critical value either the
horizon will disappear or two event horizons (inner and outer) will
appear. The latter case signifies the presence of an extremal
black hole when $q$ is at its critical value. For a point of
inflection the norm of the Killing vector does change sign, but as
$q$ is varied away from its critical value one expects the
formation of either one or three horizons \cite{DK}.
For the subsequent (thermodynamic) analysis we consider the Euclidean
sector such that $t\rightarrow it$. Hence we re-write the action
\req{eq: 7}
with respect to the Euclidean metric tensor:
\begin{eqnarray}
W_{E} &=& -{1\over 2G}\int d^2x\sqrt{g}\left[D(\phi )R(g) +
{1\over 2}(\nabla\phi )^2\right.\nonumber\\
&&\quad + \left.{1\over l^2}{\tilde V}(\phi ,q)\right]-{1\over G}
\oint_{\mathrm{outer\ boundary}} dt\,\gamma D(\phi)\nabla_{\alpha }
n^{\alpha }
\label{eq: 19.5}
\end{eqnarray}
(Henceforth the subscript $E$ on the Euclidean action will be suppressed.)
The second integral in \req{eq: 19.5} is the extrinsic curvature
boundary term. It is included so that when second derivatives of the metric are
cancelled off (via appropriate integration by parts) then the resulting
total divergences on the outer boundary will be cancelled off as well
\cite{gibbon}. Here we
define $n_{\mu }$ as the outward unit vector normal to the outer
boundary enclosing the black hole and $\gamma$ as the induced
metric appropriate for evaluating the line integral.
We re-write the Euclidean static metric in the following form
\begin{equation}
ds^2 = g(x)dt^2 + e^{-2\lambda (x)}g^{-1}(x)dx^2
\label{eq: 20}
\end{equation}
where $e^{\lambda(x)} = \Omega^2(\phi (x))$. For this metric it
follows that
\begin{equation}
\sqrt{g}= e^{-\lambda (x)}
\label{eq: 21}
\end{equation}
\begin{equation}
R = -e^{\lambda (x)}\left(e^{\lambda
(x)}g^{\prime}(x)\right)^{\prime}
\label{eq: 22}
\end{equation}
where $\prime $ indicates differentiation with respect to $x$. The
coordinates $t, x$ describe a disc and will be taken to range
between the limits $x_+\leq x \leq L$ and $0 \leq t \leq 2\pi\beta
$. Here $x = x_+$ represents the black hole horizon (i.e. $g(x_+)
= 0$), $x = L$ is the outer boundary of the black hole (i.e. the box
size), and $\beta$ is the asymptotic inverse temperature. It can
be
shown that regularity of the solution requires the absence of a conical
singularity which leads to the following condition:
\begin{equation}
\beta = {2e^{-\lambda (x_+)}\over g^{\prime }(x_+)}
\label{eq: 23}
\end{equation}
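This condition can be seen from the near-horizon expansion
$g(x)\simeq g^{\prime }(x_+)(x-x_+)$. In terms of the proper radial
coordinate $\rho $ defined by $d\rho = e^{-\lambda }g^{-{1\over2} }dx$
the metric near the horizon takes the form
\begin{equation}
ds^2 \simeq d\rho^2 + \left({e^{\lambda (x_+)}g^{\prime }(x_+)\over
2}\right)^2\rho^2dt^2
\end{equation}
which is smooth at $\rho = 0$ only if the angular variable
$e^{\lambda (x_+)}g^{\prime }(x_+)t/2$ has period $2\pi $. Since $t$
has period $2\pi\beta $ this reproduces \req{eq: 23}.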
Note that application of standard thermodynamics requires using the
inverse temperature of the box
${\overline {\beta }}$ which is ``red-shifted'' from the previously
defined quantity such that:
\begin{equation}
{\overline{\beta }} = g^{1/2}(L)\beta
\label{eq: 24}
\end{equation}
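\req{eq: 24} is just the Tolman relation: for a static geometry the
locally measured temperature satisfies
\begin{equation}
T_{loc}(x)\sqrt{g_{tt}(x)} = const
\end{equation}
so the temperature at the box wall is blue-shifted relative to its
asymptotic value by the factor $g^{-1/2}(L)$.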
For this metric it also follows that the extrinsic curvature
(defined by the boundary term of the action) can
be expressed \cite{frolov}:
\begin{equation}
\gamma\nabla_{\alpha }n^{\alpha} = {1\over2} e^{\lambda (L)}g^{\prime
}(L)
\label{eq: 25}
\end{equation}
Using these results Eqs.[\ref{eq: 20}-\ref{eq: 25}] we can
express the Euclidean action functional (\req{eq: 19.5}), with respect to the
generic static metric giving:
\begin{eqnarray}
W &=& {-\pi\beta\over G}\int^L_{x_+}dx\left(D^{\prime }
(x)e^{\lambda (x)}g^{\prime }(x) + {e^{\lambda (x)}\over 2}g(x)
(\phi^{\prime }(x))^2\right.\nonumber\\
&&\quad + \left.{e^{-\lambda (x)}{\tilde V}(x)\over l^2}\right)
-{2\pi\over G}D(x_+)
\label{eq: 26}
\end{eqnarray}
We can use this form of the action to
derive thermodynamic
properties of interest. These include the free energy $F =
(2\pi{\overline{\beta }})
^{-1} W $, energy $E = (2\pi )^{-1}\partial_{\overline
{\beta }} W $,
and entropy $S = ({\overline {\beta }}\partial_{\overline {\beta
}}-1) W $.
\begin{eqnarray}
F &=& -{1\over 2Gg^{1/2}(L)}\int^L_{x_+}dx\left(D^{\prime }(x)e^
{\lambda (x)}g^{\prime }(x) + {e^{\lambda (x)}\over 2}g(x)
(\phi^{\prime }(x))^2\right.\nonumber\\
&&\quad + \left.{e^{-\lambda (x)}{\tilde V}(x)\over l^2}\right)
- {D(x_+)\over G{\overline{\beta}}}
\label{eq: 27}
\end{eqnarray}
\begin{equation}
E = -{1\over 2Gg^{1/2}(L)}\int^L_{x_+}dx\left(D^{\prime }(x)e^{\lambda
(x)}g^{\prime }(x) + {e^{\lambda (x)}\over 2}g(x)(\phi^{\prime }
(x))^2 + {e^{-\lambda (x)}\over l^2}{\tilde V}(x)\right)
\label{eq: 28}
\end{equation}
\begin{equation}
S = {2\pi\over G}D(x_+)
\label{eq: 29}
\end{equation}
Since the box temperature is taken to be $T = (2\pi
{\overline{\beta }})^{-1}$, from Eqs.[\ref{eq: 27}-\ref{eq: 29}]
we obtain the result $F = E-TS$. At the extremum of free energy (or
equivalently action) $\delta F = 0$ and hence the first law of
thermodynamics immediately follows.
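Indeed, with $T = (2\pi{\overline{\beta }})^{-1}$ the relation
$F = E-TS$ is an algebraic identity of the above definitions:
\begin{equation}
E - TS = {1\over 2\pi}\partial_{\overline{\beta }}W -
{1\over 2\pi{\overline{\beta }}}\left({\overline{\beta }}
\partial_{\overline{\beta }}-1\right)W = {W\over
2\pi{\overline{\beta }}} = F
\end{equation}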
It is possible and convenient to re-express the action (\req{eq:
26}) in a form which, except for surface terms, vanishes on shell.
Defining $G_{\alpha\beta }$ to be the left hand side of
\req{eq: 8} and re-writing with respect to the
coordinate system defined by metric \req{eq: 20} yields:
\begin{eqnarray}
G_{\alpha\beta } &=& -2\delta^x_{\alpha }\delta^x_{\beta }
D^{\prime\prime}(x) + 2\Gamma^x_{\alpha\beta }D^{\prime }(x)
+ g_{\alpha\beta }\left[2e^{\lambda (x)}\left(e^{\lambda (x)}g(x)
D^{\prime }(x)\right)^{\prime }\right.\nonumber\\
&&\quad - \left.{1\over2} g(x)e^{2\lambda (x)}(\phi^{\prime }
(x))^2-{1\over l^2}{\tilde V}(x)\right]=0
\label{eq: 30}
\end{eqnarray}
In the case where both tensor indices represent the time coordinate,
denoted by 0 (and note that $\Gamma^x_{00} = -{1\over2}
e^{2\lambda}gg^{\prime }$), we get:
\begin{eqnarray}
G^0_0 &=& -e^{\lambda(x)}\left[-e^{\lambda (x)}g^{\prime
}(x)D^{\prime }(x) - 2g(x)\left(e^{\lambda (x)}D^{\prime }(x)\right)^{\prime
}\right.\nonumber\\
&&\quad +\left. {1\over2} g(x)e^{\lambda (x)}(\phi^{\prime }(x))^2 +
{e^{-\lambda (x)}\over l^2}{\tilde V}(x)\right]
\label{eq: 31}
\end{eqnarray}
Using this result to substitute for the second and third terms in
the integrand of \req{eq: 26} :
\begin{equation}
W = -{\pi\beta\over G}\int^L_{x_+}dx\left[-e^{-\lambda (x)}
G^0_0+ 2\left(g(x)e^{\lambda (x)}D^{\prime }(x)\right)^{\prime }
\right] - {2\pi\over G}D(x_+)
\label{eq: 32}
\end{equation}
Since the second term in the integrand is a total derivative and
$g(x_+) = 0$ it follows that:
\begin{equation}
W = {\pi\beta\over G}\int^L_{x_+}e^{-\lambda (x)}G^0_0dx -
{2\pi\beta\over G}e^{\lambda (L)}g(L)D^{\prime }(L) - {2\pi\over G}D(x_+)
\label{eq: 33}
\end{equation}
Reconsider the energy $E = (2\pi )^{-1}\partial_{{\overline{\beta
}}} W $. Since thermodynamic quantities are presumed to be
calculated for equilibrium configurations (i.e. on shell) here we
can set $G^0_0 = 0$ giving an energy which reduces to an outer
boundary surface term:
\begin{equation}
E = -{1\over G}e^{\lambda (L)}g^{{1\over2} }(L)D^{\prime }(L)
\label{eq: 34}
\end{equation}
This expression is typically divergent as $ L\rightarrow\infty $.
(This follows from the divergence of the Euclidean action as the outer
boundary goes to infinity). To resolve this dilemma we compare the
energy of \req{eq: 34} with that of a carefully selected background
geometry \cite{hawk2}. The background metric will be taken here
to represent the asymptotic geometry of the black hole. Hence
we define $ g_0 = \lim_{L\rightarrow\infty} g(L) $ and the
``subtracted energy'' is given by:
\begin{equation}
E_{sub} = {1\over G}e^{\lambda(L)}D^{\prime}(L)\left[g^{{1\over2}}_0-g^{{1\over2}}(L)\right]
\label{eq: 34.5}
\end{equation}
We can justify this choice of background by noting the agreement between
this result with that attained for a Hamiltonian partition function
approach in a prior study \cite{star}.
\section{Quantum Corrected Black Hole Geometry}
In the path integral approach to black hole thermodynamics the
matter fields can be integrated out yielding an effective action which
depends only on fields in the classical action. Hence
one-loop quantum effects can be taken into account by adding a
quantum counterpart $\hbar\Gamma $ to the classical gravitational action
$W_{CL}$ (\req{eq: 19.5}) such that (assuming no matter coupling to
the Abelian gauge field):
\begin{equation}
W\left[g,\phi,q\right] = W_{CL}\left[g,\phi,q\right] + \hbar\Gamma
\left[g,\phi\right]
\label{eq: 35}
\end{equation}
Variation of this complete action yields the quantum corrected
field equations which may be solved perturbatively. Variation of
the action with respect to the metric gives us:
\begin{equation}
G_{\alpha\beta }(g,\phi,q) + \hbar T_{\alpha\beta }(g,\phi) + O(\hbar^2) = 0
\label{eq: 35.5}
\end{equation}
where $G_{\alpha\beta } = {\delta W_{CL}\over \delta g^{\alpha\beta
}}$ is again given by the left hand side of \req{eq: 8}
and $T_{\alpha\beta } =
{\delta\Gamma\over \delta g^{\alpha\beta }}$.
The general form of the one loop effective action is \cite{odintsov1,kummer1}:
\begin{equation}
\Gamma = {1\over 12}\int d^2x\sqrt{g}\left[aR{1\over \Box}R
+b(\phi)(\nabla\phi)^2{1\over \Box}R +c(\phi) R - \ln(\mu^2)b(\phi)
(\nabla\phi)^2\right]
\label{eq: 36}
\end{equation}
The first term is the usual trace anomaly that arises for minimally coupled scalars, while the next two terms are contributions to the anomaly from the
non-minimal coupling to the dilaton. The last term contains an arbitrary scale factor, $\mu$, and comes from the conformally invariant part of the effective action\footnote{We are grateful to S. Odintsov
for pointing out the necessity for including this term.}. It can be obtained using
a Schwinger-DeWitt type expansion\cite{odintsov2}. We will show that the
scale factor $\mu$ does not affect the final thermodynamic quantities. $a$ is a constant (we will set $a=1$ for simplicity), while $b(\phi)$ and $c(\phi)$
are determined by the specific form of the coupling between the matter fields and the dilaton. For example, in dimensionally reduced spherically symmetric
gravity, $b$ and $c$ are constants\cite{hawk,dowk,odintsov1,kummer1}. Here we will treat them as arbitrary local
functions of the dilaton.
It is important for the
following thermodynamic analysis to put the non-local
expression \req{eq: 36} for $\Gamma $ in local form. We do this by introducing
a pair of scalar fields $\psi$ and $N$, and writing:
\begin{eqnarray}
\Gamma &=& {1\over 12}\int d^2x\sqrt{g}\left[(\psi + N) R + (\nabla N)
\cdot(\nabla\psi
)+b(\nabla\phi)^2(\psi-\ln(\mu^2))+c\phi R \right] \nonumber\\
&&\quad + {1\over 6}
\oint_{outer\ boundary}dt\gamma(\psi +N+c(\phi))\nabla_{\alpha }
n^{\alpha }
\label{eq: 37}
\end{eqnarray}
where an extrinsic curvature surface term has been added in analogy to
the classical case. It is straightforward to show that variation of \req{eq: 37}
yields the following field equations for the scalars:
\begin{equation}
\psi = {1\over \Box } R
\label{eq: psi1}
\end{equation}
and
\begin{equation}
N={1\over \Box } ( R+b(\nabla \phi )^2 )
\label{eq: N1}
\end{equation}
Substituting these equations back into \req{eq: 37} yields precisely \req{eq: 36}.
Note that in the 2-dimensional minimally coupled
case ($b=0$), $N$ reduces to $ \psi $ and only a single scalar field need
be introduced.
Before proceeding we show it is possible to solve explicitly for
$\psi (x)$ at the classical level (as is appropriate for this
analysis). This is achieved by conformally mapping the coordinate
space described by the static Euclidean metric \req{eq: 20} to a
flat disc of radius $z_o$
and curvature $ R=\Box\psi $. This disc may be expressed
\begin{equation}
ds^2 = e^{-\psi (z)}(z^2d\theta^2 + dz^2)
\label{eq: 38}
\end{equation}
where the disc coordinates $\theta $ and $z$ are taken to range
between $0 \leq \theta \leq 2\pi $ and $0 \leq z \leq z_0$.
Substituting the Euclidean static metric \req{eq:
20} into the left hand side gives
\begin{equation}
g(x)dt^2 + g^{-1}(x)e^{-2\lambda (x)}dx^2 = e^{-\psi(z)}(z^2d\theta^2 + dz^2)
\label{eq: 39}
\end{equation}
where $t = \beta\theta $ and $x_+ \leq x \leq L$. The following
relations follow directly from \req{eq: 39}:
\begin{eqnarray}
g(x)dt^2 &=&e^{-\psi }z^2d\theta^2\nonumber\\&=&
{e^{-\psi }z^2\over \beta^2}dt^2
\label{eq: 40}
\end{eqnarray}
\begin{equation}
g^{-1}(x)e^{-2\lambda }dx^2 = e^{-\psi
}dz^2
\label{eq: 41}
\end{equation}
Using \req{eq: 40} to solve for $z$ and \req{eq: 41} to solve
for $dz$ gives us:
\begin{equation}
z = \beta\sqrt{g}e^{+\psi /2}
\label{eq: 42}
\end{equation}
\begin{eqnarray}
dz&=&{e^{-\lambda}e^{+\psi/2}\over\sqrt{g}}dx\nonumber\\
&=& {e^{-\lambda}z\over
\beta g}dx
\label{eq: 43}
\end{eqnarray}
Dividing \req{eq: 43} by \req{eq: 42} and integrating for given
boundary conditions yields:
\begin{equation}
\ln \left({z_o\over z}\right) = {1\over \beta }\int^L_x{dx\over
g(x)}
e^{-\lambda (x)}
\label{eq: 44}
\end{equation}
Using equation \req{eq: 42} to re-write the left-hand side of
\req{eq: 44} as a function of $x$ and solving for $\psi (x)$ :
\begin{equation}
\psi (x) = \psi (L) - {2\over \beta} \int^L_x dx{e^{-\lambda (x)}\over g(x)} + \ln g(L) - \ln g(x)
\label{eq: 45}
\end{equation}
To find an explicit expression for $\psi (L)$ consider the
calculation of the proper time evaluated for a closed path on the boundary of the disc at $x = L$ ($z = z_o$):
\begin{equation}
\oint^{2\pi\beta }_{t=0}\sqrt{g(L)}dt = \oint^{2\pi }_{\theta = 0}
z_o e^{-\psi (L)/2}d\theta
\label{eq: 46}
\end{equation}
Integrating and solving for $\psi (L)$ :
\begin{equation}
\psi (L) = -2\ln\left({\beta\over z_o}\right)-\ln g(L)
\label{eq: 47}
\end{equation}
Substituting \req{eq: 47} into \req{eq: 45}:
\begin{equation}
\psi (x) = -\ln g(x) -{2\over \beta }\int^L_xdx{e^{-\lambda
(x)}\over g(x)}-2\ln\left({\beta\over z_o}\right)
\label{eq: 48}
\end{equation}
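As a check, note that \req{eq: 48} satisfies the defining equation
\req{eq: psi1}. For the static metric \req{eq: 20} the Laplacian of
a function of $x$ alone is $\Box f = e^{\lambda }(e^{\lambda
}gf^{\prime })^{\prime }$. Differentiating \req{eq: 48} gives
\begin{equation}
e^{\lambda }g\psi^{\prime } = -e^{\lambda }g^{\prime } + {2\over
\beta }
\end{equation}
so that $\Box\psi = -e^{\lambda }(e^{\lambda }g^{\prime })^{\prime }
= R$ by \req{eq: 22}.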
To solve for N we repeat the prior analysis except here we map to
a flat disc described in the form:
\begin{equation}
ds^2=e^{-N(z)+{1\over \Box}\left[b(\nabla \phi)^2\right]}(z^2d\theta^2+dz^2)
\label{eq:48.01}
\end{equation}
This results in the following:
\begin{equation}
N(x)=-\ln g(x)-{2\over\beta}\int^L_xdx{e^{-\lambda(x)}\over g(x)}
-2\ln({\beta\over z_0})+{1\over\Box}\left[b(\phi)\left(\nabla\phi (x)\right)^2
\right]
\right]
\label{eq:48.02}
\end{equation}
Because of the non-local form of the last term this is not a
satisfactory result, so we instead integrate $ \Box N = R
+b(\nabla\phi)^2 $ giving
\begin{eqnarray}
N(x)&=& N(L)-lng(x)+lng(L)\nonumber\\
&&-\int^L_xd{\tilde x}{e^{-\lambda({\tilde x}
)}\over g({\tilde x})}\left[C-\int^L_{\tilde x}d{\overline {x}}\,b\,
e^{\lambda({\overline{x}})}(\phi^{\prime}({\overline{x}}))^2
g({\overline{x}})\right]
\label{eq: 48.0999}
\end{eqnarray}
where $N(L)$ and $C$ are arbitrary constants of integration. The constant
$N(L)$ does not affect the thermodynamic quantities in the subsequent analysis,
so without loss of generality we set $N(L)=\psi(L)$. The remaining constant must in principle be determined by experiment. However, we adopt the ansatz that
$N(x)$ should reduce to $\psi(x)$ when $b=0$, and that the geometry should
uniquely determine both $\psi$ and $N$. With these conditions
$N$ reduces to:
\begin{equation}
N(x)=\psi(x)+\int^L_xd{\tilde x}{e^{-\lambda({\tilde x})}\over
g({\tilde x})}\int^L_{\tilde x}d{\overline {x}}\,b\,e^{\lambda({\overline{x}
})}(\phi^{\prime}({\overline{x}}))^2g({\overline{x}})
\label{eq: 48.1}
\end{equation}
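A similar check shows that \req{eq: 48.1} satisfies $\Box N = R +
b(\nabla\phi )^2$ as required by \req{eq: N1}. Differentiating
\req{eq: 48.1} gives
\begin{equation}
e^{\lambda }gN^{\prime } = e^{\lambda }g\psi^{\prime } -
\int^L_xd{\overline {x}}\,b\,e^{\lambda ({\overline {x}})}
(\phi^{\prime }({\overline {x}}))^2g({\overline {x}})
\end{equation}
so that $\Box N = \Box\psi + be^{2\lambda }g(\phi^{\prime })^2 = R +
b(\nabla\phi )^2$, where we have used $(\nabla\phi )^2 =
g^{xx}(\phi^{\prime })^2 = e^{2\lambda }g(\phi^{\prime })^2$.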
If we signify $g_{CL}$ as the classical metric and $g = g_{CL} +
\delta g$ as the one-loop quantum corrected metric it can be
shown (by perturbative expansion) that the following form of the
field equation \req{eq: 35.5} is valid to first order:
\begin{equation}
G_{\alpha\beta }(g) + \hbar T_{\alpha\beta }(g_{CL}) = 0
\label{eq: 49}
\end{equation}
Where $G_{\alpha\beta }$ is given by the left hand side of
\req{eq: 8} and $T_{\alpha\beta }$ can be obtained from the
variation of \req{eq: 37}. We find:
\begin{eqnarray}
T_{\alpha\beta }&=&{G\over 3} \left[ \nabla_{\alpha }\nabla_{\beta
}(\psi +N) - {1\over 2}(\nabla_{\alpha}N\nabla_{\beta}\psi+
\nabla_{\alpha }\psi\nabla_{\beta }N)\right.\nonumber\\
&&\quad\quad-g_{\alpha\beta }
(R+\Box N-{1\over2} (\nabla N )\cdot(\nabla\psi)) \nonumber\\
&&\quad\quad
-b(\psi-\ln(\mu^2))(\nabla_{\alpha}\phi\nabla_{\beta}\phi-{1\over2} g_{\alpha\beta}(\nabla\phi)^2)\nonumber\\
&&\left.\quad\quad-(g_{\alpha\beta}\Box c(\phi)-\nabla_{\alpha}
\nabla_{\beta}c(\phi))
\right]
\label{eq: 50}
\end{eqnarray}
An explicit expression for $ T_{\alpha\beta} $ in terms of the metric
is then obtained by substituting for $ \psi $ and $N$ via
\req{eq: 48} and \req{eq: 48.1} respectively. It should be noted
that the resulting equation can be equivalently obtained by direct
functional differentiation of the action in its non-local form
(\req{eq: 36}) \cite{torre}.
We again take the dilaton as representing the spatial
coordinate so that the geometric corrections are manifested in the
metric.
Solving the field equation \req{eq: 49} yields an explicit form
of the quantum corrected metric. In analogy to the formalism
presented by Frolov et al. \cite{frolov} we adapt the classical static
metric (Eqs.[\ref{eq: 18}-\ref{eq: 19}]) to the quantum corrected
case as follows:
\begin{equation}
ds^2 = g(x)e^{2w(x)}dt^2 + g^{-1}(x)\Omega^{-4}(\phi (x)) dx^2
\label{eq: 51}
\end{equation}
\begin{equation}
g(x) = {1\over \Omega^2(\phi (x))}\left[\overline{j} ({x\over l})
-2GlM - 2Glm(x)\right]
\label{eq: 52}
\end{equation}
Here $m(x)$ is the first order quantum correction to the classical
mass $M$ and we have introduced a metric function $w(x)$ which
vanishes in the classical limit (and where functions $\overline{j} $ and
$\Omega^2$ are as defined by Eqs.[\ref{eq: 12},\ref{eq:
13},\ref{eq: 16}]).
We now solve $G_{\alpha\beta } = -\hbar T_{\alpha\beta }$ by first
finding expressions for quantum quantities $m(x)$ and $w(x)$ in
terms of components of the tensor $T_{\alpha\beta }$ . Using
\req{eq: 8} for $ G_{\alpha\beta} $ gives us:
\begin{equation}
-2\nabla_{\alpha }\nabla_{\beta }D + \nabla_{\alpha
}\phi\nabla_{\beta }\phi + g_{\alpha\beta }\left[2\Box D
-{1\over2}(\nabla\phi )^2-{1\over l^2}{\tilde V}\right] =
-\hbar T_{\alpha\beta }
\label{eq: 53}
\end{equation}
Using the fact that the solution only depends on $x$ and the definition of
covariant derivative:
\begin{equation}
-2\delta^x_{\alpha }\delta^x_{\beta
}D^{\prime\prime}+2\Gamma^x_{\alpha\beta }D^{\prime } + \delta^x_{\alpha
}\delta^x_{\beta }(\phi^{\prime })^2 + g_{\alpha\beta }
\left[2\Box D-{g^{xx}\over 2}(\phi^{\prime })^2-{1\over l^2}
{\tilde V}\right] = -\hbar T_{\alpha\beta }
\label{eq: 54}
\end{equation}
The off diagonal components (i.e. $\alpha=x$, $\beta=t$) of the above equation vanish identically.
For the case in which both indices $\alpha ,\beta $ represent the
time coordinate:
\begin{equation}
-g^{xx}g^{\prime }_{tt}D^{\prime }+g_{tt}\left[2\Box D-{g^{xx}\over
2}(\phi^{\prime })^2-{1\over l^2}{\tilde V}\right] =
-\hbar T_{tt}
\label{eq: 55}
\end{equation}
Now we re-express the left hand side with respect to the metric
defined by Eqs.[\ref{eq: 51}-\ref{eq: 52}]. First note that by using
$D = {x\over l}$ (\req{eq: 17}) we can evaluate $\Box D$ to give
\begin{equation}
\Box D = {\Omega^2\over l}(\Omega^2gw^{\prime } + \overline{j}^{\prime }-2Glm^{\prime })
\label{eq: 56}
\end{equation}
and so:
\begin{eqnarray}
e^{2w}\left[-{2\Omega^4g^2w^{\prime }\over l}-{\Omega^2g\over
l}(\overline{j}-2Glm)^{\prime }+{g^2\Omega^2\over l}(\Omega^2)^{\prime
}\right.&&\nonumber\\
\left.+{2g\Omega^2\over l}(g\Omega^2w^{\prime }+\overline{j}^{\prime }-2Glm^{\prime
})-
{\Omega^4g^2\over 2}(\phi^{\prime })^2-{g\over l^2}
{\tilde V}\right] &=& -\hbar T_{tt}
\label{eq: 57}
\end{eqnarray}
From Eqs.[\ref{eq: 13},\ref{eq: 16}]
\begin{equation}
\overline{j}^{\prime} = {1\over l}{\tilde V\over \Omega^2}
\label{eq: 58}
\end{equation}
and from Eqs.[\ref{eq: 12},\ref{eq: 17}] :
\begin{equation}
(\Omega^2)^{\prime } = {l\Omega^2\over 2}(\phi^{\prime })^2
\label{eq: 59}
\end{equation}
Using these 2 results in \req{eq: 57} and solving for
$m^{\prime }$ (and using $T_{tt} = g_{tt}T^t_t$):
\begin{equation}
m^{\prime } = {\hbar\over 2G\Omega^2}T^t_t
\label{eq: 60}
\end{equation}
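The compact form of \req{eq: 60} arises from a series of
cancellations: upon substituting Eqs.[\ref{eq: 58},\ref{eq: 59}]
into \req{eq: 57} the $w^{\prime }$ terms cancel identically, the
$\overline{j}^{\prime }$ contributions combine to cancel the
potential term, and the $(\Omega^2)^{\prime }$ term cancels the
$(\phi^{\prime })^2$ term, leaving
\begin{equation}
-2Gg\Omega^2m^{\prime } = -\hbar gT^t_t
\end{equation}
from which \req{eq: 60} follows.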
Now for the case in which both tensor indices in \req{eq:
54} represent the spatial coordinate:
\begin{equation}
-2D^{\prime\prime}+g^{xx}g^{\prime }_{xx}D^{\prime }+(\phi^{\prime
})^2+g_{xx}\left[2\Box D-{g^{xx}\over 2}(\phi^{\prime })^2-{1\over l^2}
{\tilde V}\right]=-\hbar T_{xx}
\label{eq: 61}
\end{equation}
Using the metric (Eqs.[\ref{eq: 51}-\ref{eq: 52}]) along with
\req{eq: 56} :
\begin{eqnarray}
&- &{1\over lg\Omega^2}(\overline{j} -2GlM)^{\prime }-{(\Omega^2)^{\prime }\over
l\Omega^2}+{(\phi^{\prime })^2\over 2}\nonumber\\
&&\quad+{2\over lg\Omega^2}
(g\Omega^2w^{\prime }+\overline{j}^{\prime }-2Glm^{\prime })
- {{\tilde V}\over l^2g\Omega^4} = -\hbar T_{xx}
\label{eq: 62}
\end{eqnarray}
Using Eqs.[\ref{eq: 58},\ref{eq: 59}] and solving for
$w^{\prime }$ :
\begin{equation}
w^{\prime } = {l\over 2}\left({2G\over g\Omega^2}m^{\prime }-\hbar
T_{xx}\right)
\label{eq: 63}
\end{equation}
Substitute for $m^{\prime }$ via \req{eq: 60} and use
$T_{xx} = g_{xx}T^x_x$ :
\begin{equation}
w^{\prime } = {l\hbar\over 2g\Omega^4}(T^t_t - T^x_x)
\label{eq: 64}
\end{equation}
\req{eq: 60} and \req{eq: 64} provide the first order
quantum corrections to the geometry. Note that consistency of
the perturbative expansion requires $T_{\alpha\beta }$ in the above
expressions to be evaluated
on the classical solution.
Next we explicitly evaluate the tensor
components $T^t_t$ and $T^x_x$.
In terms of the classical static metric as expressed by
\req{eq: 20} the non-vanishing terms are given by (after some
simplification):
\begin{eqnarray}
T^t_t &=& {G\over 3} \left[{1\over2} g^{\prime}e^{2\lambda}(\psi^{\prime}
+N^{\prime})+{1\over2} g e^{2\lambda}N^{\prime}\psi^{\prime}+2 e^{\lambda}
(e^{\lambda}g^{\prime})^{\prime}\right.\nonumber\\
&&\quad\left.-b g e^{2\lambda}(\phi^{\prime})^2
(1-{1\over2}(\psi-\ln(\mu^2)))-e^{2 \lambda}( gc^{\prime\prime}
+gc^{\prime}\lambda^{\prime}+{1\over2} g^{\prime}c^{\prime})
\right]
\label{eq: 65}
\end{eqnarray}
\begin{eqnarray}
T^x_x&=&{G\over 3}\left[g e^{2\lambda}(\psi+N)^{\prime\prime}+({1\over2} g^{\prime}+g\lambda^{\prime})e^{2\lambda}(\psi
+N)^{\prime}-{1\over2} g e^{2\lambda}N^{\prime}\psi^{\prime}+2 e^{\lambda}
(e^{\lambda}g^{\prime})^{\prime}\right.\nonumber\\
&&\quad\left.- b g e^{2\lambda}(\phi^{\prime})^2
(1+{1\over2}(\psi-\ln(\mu^2)))-{1\over2} e^{2\lambda}g^{\prime}c^{\prime}
\right]
\label{eq: 66}
\end{eqnarray}
Substituting for $\psi $ (\req{eq: 48}) and $N$ (\req{eq: 48.1}) and further simplification gives:
\begin{eqnarray}
T^t_t &=&{G e^{2\lambda}\over 6 g}\left[ 4g e^{-\lambda}(e^{\lambda}g^
{\prime})^{\prime}-(g^{\prime})^2+{4 \over \beta^2}e^{-2\lambda}
\right.\nonumber\\
&&\quad\quad- 2 b
g^2(\phi^{\prime})^2\left( 1+{1\over2} \ln g +{1 \over\beta}\int^L_x dx{e^
{-\lambda}\over g}+\ln({\beta\over z_0\mu})\right)\nonumber\\
&&\quad\quad\left. -2{e^{-2\lambda}\over\beta}
\int^L_x dx be^{\lambda}g(\phi^{\prime})^2
-2 g \left(gc^
{\prime\prime}+gc^{\prime}\lambda^{\prime}+{1\over2} g^{\prime}c^{\prime
}\right)\right]
\label{eq: 67}
\end{eqnarray}
\begin{eqnarray}
T^x_x &=&{G e^{2\lambda}\over 6 g}\left[(g^{\prime})^2-4{e^{-2\lambda}
\over\beta^2}\right.\nonumber\\
&&\quad \quad-
2 bg^2(\phi^{\prime})^2\left( -{1\over2} \ln g-{1\over\beta}
\int^L_x dx{ e^{-\lambda}\over g}-\ln({\beta\over z_0\mu})\right)\nonumber\\
&&\quad\quad\left. +2{e^{-2
\lambda}\over\beta} \int^L_x dx be^{\lambda}g(\phi^{\prime})^2
-g g^{\prime}c^{\prime}\right]
\label{eq: 68}
\end{eqnarray}
Substituting these results into \req{eq: 60} and
\req{eq: 64} gives us the desired explicit expressions for the
first order quantum corrected mass $M(x) = M_{CL}+m(x)$ and metric
function $w(x)$. Integrating and using $\Omega^2 = e^{\lambda }$ leads
to the results:
\begin{eqnarray}
M(x)&=&M_{CL}+{\hbar\over 6}\int^xdxe^{\lambda }\left[
2e^{-\lambda }(e^{\lambda }g^{\prime })^{\prime }-{(g^{\prime })^2
\over 2g}+{2 e^{-2\lambda }\over \beta^2g}\right.\nonumber\\
&&\quad\quad- bg(\phi^{\prime})^2
\left(1+{1\over2} \ln g+{1\over\beta}\int^L_x dy{e^{-\lambda(y)}\over g(y)}+\ln(
{\beta\over z_0\mu})\right)\nonumber\\
&&\quad\quad\left.-{e^{-2\lambda}\over\beta g}\int^L_x dy b e^
{\lambda(y)}g(\phi^{\prime}(y))^2
-\left(gc^{\prime\prime}
+gc^{\prime}\lambda^{\prime}+{1\over2} g^{\prime}c^{\prime}\right)\right]
\label{eq: 69}
\end{eqnarray}
\begin{eqnarray}
w(x) &=& -{l\hbar G\over 6}\int^L_xdx{1\over g}\left[2 e^{-\lambda }(e^{\lambda }g^{\prime })^{\prime }-{(g^{\prime })^2\over g}+
{4 e^{-2\lambda }\over \beta^2g}\right.\nonumber\\
&&\quad \quad-
2 bg(\phi^{\prime})^2\left({1\over2}+{1\over2} \ln g
+{1\over\beta}\int^L_x dy {e^{-\lambda(y)}\over g(y)}+\ln({\beta\over z_0\mu})\right)
\nonumber\\ &&\quad\quad\left.
-2{e^{-2\lambda}\over\beta g}\int^L_x dy be^{\lambda(y)}g(y)(\phi^{\prime}(y))
^2
-g\left(c^{\prime\prime}+c^{\prime}\lambda^{\prime}
\right)\right]
\label{eq: 70}
\end{eqnarray}
Here we have imposed the condition $w(L) = 0$ and have absorbed
the lower limit of \req{eq: 69} into the constant $M_{CL}$.
Also of importance (particularly for the evaluation of the
quantum corrected entropy) is evaluation of the first order
quantum shift in the horizon and hence in the horizon value of
the dilaton field. To this purpose we define $\Delta\phi_+ = \phi_+ -
\phi_{+CL} $ where $\phi_+ $ and $\phi_{+CL}
$ are the quantum corrected and classical horizon
values of the dilaton field respectively. Because the norm of the
Killing vector (\req{eq: 15}) must vanish at the horizon it follows
that the above fields must satisfy:
\begin{equation}
\overline{j} (\phi_{+})-2M_{CL}Gl-2m(\phi_+)Gl = 0
\label{eq: 71}
\end{equation}
\begin{equation}
\overline{j} (\phi_{+CL})-2M_{CL}Gl = 0
\label{eq: 72}
\end{equation}
Expanding $\overline{j} (\phi_+)$ about $\phi_{+CL}$ (to first order) and
using \req{eq: 58} to evaluate the derivative of $ {\overline{j}} $ gives:
\begin{equation}
\overline{j} (\phi_+) = \overline{j} (\phi_{+CL})+{1\over l}\left.{\tilde
{V}(\phi_+)\over \Omega^2(\phi_+)}{1\over \phi^{\prime }}
\right|_{\phi_{+CL}} \Delta\phi_+
\label{eq: 73}
\end{equation}
Substituting for the $\overline{j} $'s via Eqs.[\ref{eq:
71},\ref{eq: 72}] and solving for $\Delta\phi_+$ gives to first
order (note $m\sim\hbar $):
\begin{equation}
\Delta\phi_+ = \left.{2Gl^2m(\phi )\Omega^2(\phi )\phi^{\prime }\over
\tilde {V}(\phi )}\right|_{\phi_{+CL}}
\label{eq: 74}
\end{equation}
\section{Quantum Corrections to Black Hole Thermodynamics}
Here we calculate the thermodynamical quantities $E = (2\pi )^{-
1}\partial_{{\overline {\beta }}}W$ and $S = ({\overline {\beta }}
\partial_{{\overline {\beta }}}-1)W$ for the action functional
\req{eq: 35} which describes the one loop quantum corrected black
hole configuration:
\begin{eqnarray}
W &=& W_{CL}\left[g\right] + \hbar\Gamma\left[g\right]\nonumber\\
&=& W_{CL}\left[g_{CL}\right] + \hbar {\delta W_{CL}
\over \delta g}\vert_{{g_{CL}}}\delta g + \hbar\Gamma
\left[g_{CL}\right] + O\left[\hbar^2\right]
\label{eq: 75}
\end{eqnarray}
Recall \req{eq: 33} for the classical action $
W_{CL}[g_{CL}]$. This included a term with an integrand proportional
to $G^0_0$ and an inner and outer surface term. It is possible and
convenient to derive an analogous expression for the quantum
effective action $\Gamma $. Rewriting \req{eq: 37} for
$\Gamma $ in terms of the static classical metric \req{eq: 20},
using \req{eq: 25} to evaluate the extrinsic curvature
boundary term and integrating by parts leads to:
\begin{eqnarray}
\Gamma &=&{\pi\beta\over 6}\int^L_{x_{+}}dx\left[e^{\lambda}g^{\prime}
\left(\psi +N+c\right)^{\prime}+e^{\lambda}g N^{\prime}\psi^{\prime}+b e^
{\lambda}g(\phi^{\prime})^2(\psi-\ln(\mu^2))\right]\nonumber\\
&&\quad+{\pi\over 3}\left(
\psi(x_+)+N(x_+)+c(x_+)\right)
\label{eq: 76}
\end{eqnarray}
Now recall \req{eq: 65} for $ T^0_0 = T^t_t $. Re-writing
this result (making use of the definitions of $ \Box\psi $ and
$ \Box N $ and rearranging):
\begin{eqnarray}
T^0_0 &=& {Ge^{\lambda }\over 6}\left[e^{\lambda}g^{\prime}\left(\psi+N+c\right)
^{\prime}+e^{\lambda}g N^{\prime}\psi^{\prime}+b e^{\lambda}g(\phi
^{\prime})^2(\psi-\ln(\mu^2))\right.\nonumber\\
&&\quad\left.+
4(e^{\lambda}g^{\prime})^{\prime}+2\left(e^{\lambda}g
(\psi-N)^{\prime}\right)^{\prime}-2 (e^{\lambda}gc^{\prime})^{\prime}
\right]
\label{eq: 77}
\end{eqnarray}
Using this result to substitute for the integrand in \req{eq: 76}
yields :
\begin{eqnarray}
\Gamma &=&{\pi\beta\over G}\int^L_{x_{+}}dx\left[
e^{-\lambda}T^0_0-{2 G\over 3}(e^{\lambda}g^{\prime})^{\prime}
-{G\over 3}\left(e^{\lambda}g(\psi-N)^{\prime}\right)^{\prime}
+{G\over 3}(e^{\lambda}gc^{\prime})^{\prime}\right]\nonumber\\
&&\quad\quad\quad+
{\pi\over 3}\left(\psi (x_+)+N(x_+)+c(x_+)\right)
\label{eq: 78}
\end{eqnarray}
Since three of the four terms in the integrand are total derivatives
we get :
\begin{eqnarray}
\Gamma ={\pi\beta\over G}\int^L_{x_{+}}dx e^{-\lambda
}T^0_0
&-&{2\pi\beta\over 3}e^{\lambda (L)}g^{\prime }(L)\nonumber\\
&-&{\pi\beta\over 3}e^{\lambda(L)}g(L)\left(\psi^{\prime}(L)-
N^{\prime}(L)-c^{\prime}(L)\right)\nonumber\\
&+&
{\pi\over 3}\left(\psi(x_+)+N(x_+)+c(x_+)\right)
\label{eq: 79}
\end{eqnarray}
where we have used $g^{\prime
}(x_+)e^{\lambda (x_+)}=2/\beta $ and
discarded the irrelevant constant term which results. When we
combine this result for $\Gamma $ with the first order quantum
corrected form for $W_{CL}$ into \req{eq: 75} we obtain
an integral with integrand proportional to $G^0_0(g)+\hbar
T^0_0(g_{CL})$ along with boundary terms. Since the integrand
vanishes on shell according to the field equation (\req{eq: 49}) we
are left with only surface contributions to $W_{on\ shell}$. These are
found to be
\begin{eqnarray}
W_{on\ shell} &=& -2\pi \left[{\beta\over G}e^{\lambda (L)}g(L)D^{\prime }(\phi
(L))+{D(\phi (x_+))\over G}+{\hbar\over 3}\beta
e^{\lambda (L)}g^{\prime }_{CL}(L)\right.\nonumber\\
&&\quad\quad +
{\hbar\over 6}\beta e^{\lambda(L)}g_{CL}
(L)\left(\psi^{\prime}(L)-N^{\prime}(L)-c^{\prime}(L)\right)\nonumber\\
&&\quad\quad\left.-{\hbar\over 6}\left(\psi(x_+)+N(x_+)+c_{CL}(x_+)\right)\right]
\label{eq: 80}
\end{eqnarray}
where the surface contributions from $W_{CL}$ are obtained by
generalizing \req{eq: 33}. Note that $g(x)$ and $\phi
(x_+)$ in the first two terms refer to the quantum corrected
solutions whereas the remaining terms are defined with respect
to classical geometry. Evaluation of thermodynamic quantities
is then straightforward giving:
\begin{eqnarray}
E = - {e^{\lambda (L)}\over G}g^{{1\over2} }(L)D^{\prime }(\phi (L))
&-&{\hbar\over 3}e^{\lambda (L)}g^{-{1\over2} }_{CL}(L)g^{\prime }_{CL}(L)
\nonumber\\
&+&{\hbar\over 6}e^{\lambda(L)}g^{{1\over2}}_{CL}(L)c^{\prime}(L)
\label{eq: 81}
\end{eqnarray}
\begin{equation}
E_{sub}=E(g;g_{CL})-E(g_0;g_{0CL})
\label{eq: 81.5}
\end{equation}
\begin{eqnarray}
S &=& {2\pi\over G}D(\phi (x_+))-{\hbar\over 6}2\pi
\left[2\psi(x_+)+c_{CL}(x_+)\right.\nonumber\\
\quad\quad&&+\left.\int^L_{x_{+}}dx{e^{-\lambda(x)}\over g_{CL}(x)}
\int^L_{x}d{\overline{x}}be^{\lambda({\overline{x}})}
\phi^{\prime}({\overline{x}})^2g({\overline{x}})\right]
\label{eq: 82}
\end{eqnarray}
Here $ g_{0} $ and $ g_{0CL} $ represent the background geometry
and are the metric fields evaluated at $ x=L\rightarrow\infty $.
Note that the left-most terms in the expressions for the energy and
entropy have the classical form but carry implicit quantum
corrections through the geometry. The remaining terms all vanish in
the classical ($ \hbar\rightarrow 0 $) limit.
\section{Quantum Corrections in Spherically Symmetric Reduced
Gravity Theory}
Next we want to use the preceding formalism to examine a specific
theory. Here we consider the form of action obtained from the
spherically symmetric
reduction of 4-dimensional Einstein-Maxwell gravity to a 2-
dimensional dilaton model \cite{SSG}. We will specifically examine the
minimal case $ b=c=0 $ so that $ N=\psi $. This case in particular
was studied by Frolov et al. \cite{frolov} and we find our results to be in agreement.
We proceed by considering an
effective action of the following form (for brevity we suppress the
extrinsic curvature boundary term, but its inclusion is implied):
\begin{equation}
W_{CL} = -{1\over 2Gl^2}\int d^2x\sqrt{g}\left[{r^2\over 2}R(g)+
(\nabla r)^2+\left(1-Q^2/r^2\right)\right]
\label{eq: 83}
\end{equation}
where $Gl^2 = G^{(4)}$ is the square of the 3+1 dimensional Planck
length and where the ``effective'' charge $Q$ has dimensions
of length. Comparison with the form of the classical action
(\req{eq: 13.5}) leads to the following identifications:
\begin{equation}
\phi = {\sqrt{2}r\over l}
\label{eq: 84}
\end{equation}
\begin{equation}
D(\phi ) = D(r) = {r^2\over 2l^2}
\label{eq: 85}
\end{equation}
\begin{equation}
{\tilde V}(\phi ,q) = {\tilde V}(r, Q) = 1-{Q^2\over r^2}
\label{eq: 86}
\end{equation}
The classical solution Eqs.[\ref{eq: 17}-\ref{eq: 19}] can then
be expressed:
\begin{equation}
x = {r^2\over 2l}
\label{eq: 87}
\end{equation}
\begin{equation}
\Omega^2(r) = e^{\lambda (r)} = {r\over l}
\label{eq: 88}
\end{equation}
\begin{equation}
{\overline {j}}({x\over l}) = \overline{j} (r) = {r\over
l}\left(1+Q^2/r^2\right)
\label{eq: 89}
\end{equation}
\begin{equation}
g(r) = 1-{2Gl^2M\over r}+{Q^2\over r^2}
\label{eq: 90}
\end{equation}
Note that the constant of integration in the evaluation of the
integral defined in \req{eq: 12} for $\Omega^2$ is
selected to be $-2\ln\sqrt{2}$. This yields
a metric function $g(r)$ which goes to 1 as $r\rightarrow\infty
$, which is the correct asymptotic behaviour of the metric in
spherically symmetric gravity.
For subsequent calculations it will often be convenient to express
the metric function $g(r)$ in the following form (here we consider
solutions only for which two real, distinct horizons exist, i.e.
$(l^2GM)^2>Q^2$):
\begin{equation}
g(r) = {1\over r^2}(r-r_+)(r-r_-)
\label{eq: 91}
\end{equation}
where $r_{\pm }$ represent the outer ($+$) and inner ($-$) horizons
given by:
\begin{equation}
r_{\pm } = l^2GM\pm\sqrt{(l^2GM)^2-Q^2}
\label{eq: 92}
\end{equation}
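As a quick consistency check, the factorized form of Eqs.~(91)--(92) can be verified to reproduce Eq.~(90); the following sketch uses our own symbol names, which do not come from the paper:

```python
import sympy as sp

r, G, l, M, Q = sp.symbols('r G l M Q', positive=True)

g = 1 - 2*G*l**2*M/r + Q**2/r**2                       # Eq. (90)
rp = G*l**2*M + sp.sqrt((G*l**2*M)**2 - Q**2)          # outer horizon, Eq. (92)
rm = G*l**2*M - sp.sqrt((G*l**2*M)**2 - Q**2)          # inner horizon, Eq. (92)
g_factored = (r - rp)*(r - rm)/r**2                    # Eq. (91)

assert sp.simplify(g - g_factored) == 0
```

The check works because $r_++r_-=2Gl^2M$ and $r_+r_-=Q^2$.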
Before proceeding to evaluate the quantum corrected quantities, we
consider the classical energy and entropy. The classical energy
(\req{eq: 34}) in terms of $r$ for this theory becomes:
\begin{eqnarray}
E &=& -{1\over G}e^{{\lambda }(r=L)}g^{{1\over2} }(r=L)
{dD(r)\over dr}\vert_{r=L}{dr\over dx}\nonumber\\
&=& -{L\over Gl^2}\sqrt{1-{2Gl^2M\over L}+{Q^2\over L^2}}
\label{eq: 93}
\end{eqnarray}
Above and for the remainder of this section $L$ is taken to
be the value of $r$ at the outer boundary. Clearly this energy is
divergent as $L\rightarrow\infty $. Hence we apply the
standard subtraction procedure as defined by \req{eq: 34.5} to give:
\begin{equation}
E_{sub} = {L\over Gl^2}\left(1-\sqrt{1-{2Gl^2M\over L}+{Q^2\over
L^2}}\right)
\label{eq: 94}
\end{equation}
Taking the asymptotic ($L \rightarrow \infty $) limit yields the
expected result $\lim_{L \rightarrow \infty }(E_{sub}) = M$. The
classical entropy
(\req{eq: 29}) is:
\begin{equation}
S = {\pi r^2_+\over Gl^2}
\label{eq: 95}
\end{equation}
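The asymptotic statement $\lim_{L\rightarrow\infty}E_{sub}=M$ below Eq.~(94) can be confirmed symbolically (symbol names are ours):

```python
import sympy as sp

L, G, l, M, Q = sp.symbols('L G l M Q', positive=True)

# Subtracted classical energy, Eq. (94)
E_sub = (L/(G*l**2))*(1 - sp.sqrt(1 - 2*G*l**2*M/L + Q**2/L**2))

# The large-box limit recovers the mass M
assert sp.limit(E_sub, L, sp.oo) == M
```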
Next we calculate the quantum corrected quantities in spherically
symmetric theory. Recall we consider the minimal case $ b=c=0 $
so that $ N=\psi $ .
To avoid confusion classical-specific quantities
will be labelled with the subscript ``CL''. Re-writing
\req{eq: 69} for quantum corrected mass $M(x)$ in terms of $r$
and substituting \req{eq: 91} for the classical metric
gives us:
\begin{eqnarray}
M(r) &=& M_{CL}+{\hbar\over 6}\int^{r}dr\left[2{d^2g_{CL}
\over dr^2}-{({dg_{CL}\over dr})^2\over 2g_{CL}}+
{2\over \beta^2_{CL}g_{CL}}\right]\nonumber\\
&=& M_{CL}+{\hbar\over 6}\int^rdr\left[{10\over r^4}(r-
r_{+CL})(r-r_{-CL})-{6\over r^3}(2r-r_{+CL}-r_{-CL})\right.
\nonumber\\ &&\quad+
{3\over r^2}
- {(r-r_{-CL})\over 2r^2(r-r_{+CL})}-{(r-r_{+CL})
\over 2r^2(r-r_{-CL})}\nonumber\\
&&\quad\left.+{2r^2\over (r-r_{+CL})(r-r_{-
CL})\beta^2_{CL}}\right]
\label{eq: 96}
\end{eqnarray}
Integrating and using \req{eq: 23}
\begin{equation}
\beta_{CL} =\left. {2e^{-\lambda (r)}\over \left({dg_{CL}\over
dr}
\right)\left({dr\over dx}\right)}\right|_{r=r_{+CL}} =
{2r^2_{+CL}\over (r_{+CL}-r_{-CL})}
\label{eq: 97}
\end{equation}
gives us the following
\begin{equation}
M(r) = M_{CL} + {\hbar\over 6}\left[A\ln(r-r_{-CL})+B\ln(r) +
C(r)\right]
\label{eq: 98}
\end{equation}
where:
\begin{equation}
A = {(r_{+CL}-r_{-CL})^2(r_{+CL} +r_{-CL})
(r^2_{+CL} +r^2_{-CL})\over 2r^4_{+CL} r^2_{-CL}}
\label{eq: 99}
\end{equation}
\begin{equation}
B = - {(r_{+CL}-r_{-CL})^2(r_{+CL}+r_{-CL})\over 2r^2_{+CL}
r^2_{-CL}}
\label{eq: 100}
\end{equation}
\begin{equation}
C(r) = {2r\over \beta^2_{CL}}+{(r_{+CL}-r_{-CL})^2\over
2rr_{+CL}r_{-CL}}+{2(r_{+CL}+r_{-CL})\over r^2}-{10r_{+CL}
r_{-CL}\over 3r^3}
\label{eq: 101}
\end{equation}
Note that $A+B = 4M_{CL}Gl^2/\beta^2_{CL}$. We now consider the quantum
corrected mass for some special cases. For $M(r = L)$ with large $L$,
$\ln(L-r_{-CL})\sim\ln(L)$, and using
the above property of $A$ and $B$ gives:
\begin{equation}
M(L)\sim M_{CL}+{\hbar\over 3\beta^2_{CL}}[L+2M_{CL}Gl^2\ln(L)]
\label{eq: 102}
\end{equation}
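Both the inverse temperature of Eq.~(97) and the identity $A+B=4M_{CL}Gl^2/\beta^2_{CL}$ noted above can be verified symbolically. A sketch, with symbol names of our own choosing:

```python
import sympy as sp

r, rp, rm, G, l = sp.symbols('r r_p r_m G l', positive=True)

g = (r - rp)*(r - rm)/r**2                      # Eq. (91)
elam = r/l                                      # e^{lambda(r)}, Eq. (88)
drdx = l/r                                      # from x = r^2/(2l), Eq. (87)

# Eq. (97): beta_CL = 2 e^{-lambda} / (g' dr/dx) evaluated at r = r_+
beta = (2/(elam*sp.diff(g, r)*drdx)).subs(r, rp)
assert sp.simplify(beta - 2*rp**2/(rp - rm)) == 0

# Coefficients of Eqs. (99)-(100) and the identity A + B = 4 M_CL G l^2 / beta_CL^2
A = (rp - rm)**2*(rp + rm)*(rp**2 + rm**2)/(2*rp**4*rm**2)
B = -(rp - rm)**2*(rp + rm)/(2*rp**2*rm**2)
M = (rp + rm)/(2*G*l**2)                        # since r_+ + r_- = 2 G l^2 M_CL
assert sp.simplify(A + B - 4*M*G*l**2/beta**2) == 0
```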
Also consider the case of an uncharged black hole. The classical
metric function becomes
$g_{CL} = (1-r_{+CL}/r)$ where $r_{+CL} = 2M_{CL}Gl^2$ and
$\beta_{CL} =
2r_{+CL}$. An analogous calculation to that presented above then
gives for the uncharged
case:
\begin{equation}
M(r) = M_{CL}+{\hbar\over 12}\left[{r\over r_{+CL}^2}+
{7r_{+CL}\over 2r^2}-{1\over r}+{\ln(r)\over r_{+CL}}\right]
\label{eq: 103}
\end{equation}
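As a consistency check (symbols ours), differentiating the bracket of Eq.~(103) must reproduce the integrand of Eq.~(96) specialised to the uncharged case $r_{-CL}=0$, $\beta_{CL}=2r_{+CL}$:

```python
import sympy as sp

r, rp, hbar = sp.symbols('r r_p hbar', positive=True)

g = 1 - rp/r                    # uncharged classical metric, r_{+CL} = 2 M_CL G l^2
beta = 2*rp                     # asymptotic inverse temperature

# Integrand of Eq. (96), first line, with g = g_CL
integrand = (hbar/6)*(2*sp.diff(g, r, 2) - sp.diff(g, r)**2/(2*g) + 2/(beta**2*g))

# Quantum shift m(r) = M(r) - M_CL from Eq. (103)
m = (hbar/12)*(r/rp**2 + 7*rp/(2*r**2) - 1/r + sp.log(r)/rp)

assert sp.simplify(sp.diff(m, r) - integrand) == 0
```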
For the case of an extremal black hole $r_{-CL}\rightarrow
r_{+CL}$ and
$\beta_{CL}\rightarrow\infty $. Consequently $A$ and $B$ vanish and $ C(r) $ reduces to:
\begin{equation} C(r) =
4r_{+CL}/r^2-10r_{+CL}^2/3r^3
\end{equation}
Next consider the metric function $w(x)$. Re-writing
\req{eq: 70} in terms of $r$
and then substituting \req{eq: 91} for the classical
metric, using \req{eq: 97}
for the inverse asymptotic temperature and finally integrating
gives the result
\begin{equation}
w(r) = {\hbar Gl^2\over 6}\left(F(L)-F(r)\right)
\label{eq: 104}
\end{equation}
where :
\begin{eqnarray}
F(r) &=& -{\left[3r_{+CL}^2+2r_{+CL}r_{-CL}+3r^2_{-
CL}\right]\over r_{+CL}^2r^2_{-CL}}\ln(r)\nonumber\\
&&\quad + {\left[3r_{+CL}^4+2r_{+CL}^3r_{-CL}+2r_{+CL}^2
r^2_{-CL}+2r_{+CL}r^3_{-CL}-r^4_{-CL}\right]\over
r^4_{+CL}r^2_{-CL}}\ln(r-r_{-CL})\nonumber\\
&&\quad + {4\over r^2}+{4(r_{+CL}+r_{-CL})\over rr_{+CL}r_{-CL}}
-{(r_{+CL}^4-r^4_{-CL})\over r_{+CL}^4r_{-CL}(r-r_{-CL})}
\label{eq: 105}
\end{eqnarray}
As for the quantum corrected mass we consider some special cases.
For an uncharged black
hole the function $F(r)$ takes the simpler form:
\begin{equation}
F(r) = {3\over 2r^2}+{2\over rr_{+CL}}-{1\over r_{+CL}^2}\ln(r)
\label{eq: 106}
\end{equation}
If $L$ is large, we can write:
\begin{equation}
e^{2w(r)}\sim \left({r\over L}\right)^{{\hbar Gl^2\over
3r_{+CL}^2}}
\exp\left(-{\hbar Gl^2\over 3}\left({3\over 2r^2}+{2\over
rr_{+CL}}\right)\right)
\label{eq: 107}
\end{equation}
In the extremal black hole limit the function $F(r)$ reduces to:
\begin{equation}
F(r) = {8\over r_{+CL}^2}\ln
({r-r_{+CL}\over r} )+{4\over r^2}+{8\over rr_{+ CL}}
\label{eq: 108}
\end{equation}
Consequently at the extremal black hole horizon
$F(r_+)\rightarrow -\infty $ so that
$e^{2w(r_+)}\rightarrow\infty $.
Next we examine the quantum corrected energy. Revising
\req{eq: 81} for reduced
spherically symmetric gravity:
\begin{eqnarray}
E &=& -{L\over Gl^2}\sqrt{1-{2Gl^2M(L)\over L}+{Q^2\over
L^2}}\nonumber\\ &&\quad-{\hbar\over 3}
{1\over L^2}\left(2Gl^2M_{CL}-{2Q^2\over L} \right)
{1\over \sqrt{1-{2Gl^2M_{CL}\over L}+{Q^2\over L^2}}}
\label{eq: 109}
\end{eqnarray}
where $M(r=L)$ is given by \req{eq: 103}. Consider the
case of large box size
$L$. Clearly the second part of the expression is small relative
to the first. So we consider
the first term only and substitute for the quantum corrected mass
(for large $r = L$) by way
of \req{eq: 102}:
\begin{equation}
E\sim -{L\over Gl^2}\sqrt{1-{2Gl^2M_{CL}\over L}-{2\hbar Gl^2\over
3\beta^2_{CL}}
-{4\hbar(Gl^2)^2M_{CL}\over 3\beta^2_{CL}L}\ln(L)+{Q^2\over L^2}}
\label{eq: 110}
\end{equation}
As in the calculation of classical energy we again apply the
standard subtraction procedure
of comparing the divergent quantity with that of a background
defined by the metric $g_0 =
\lim_{L\rightarrow\infty}g(r=L)$. Since $g(L)$ for large $L$ is
the quantity inside the square
root sign in \req{eq: 110} it follows that $g_0 = 1 - {2\hbar
Gl^2\over 3\beta^2_{CL}}$ and
since $E[g_0] = -{Lg^{{1\over2} }_0\over Gl^2}$ the subtracted energy
is given by:
\begin{eqnarray}
E_{sub} &\sim& {L\over Gl^2}\left[\sqrt{1-{2\hbar Gl^2\over
3\beta^2_{CL}}}\right.\nonumber\\
&&\left.- \sqrt{1-{2Gl^2M_{CL}\over L}-{2\hbar Gl^2\over 3\beta^2_{CL}}
-{4\hbar (Gl^2)^2M_{CL}\ln(L)\over 3\beta^2_{CL}L} +{Q^2\over
L^2}}\right]
\label{eq: 111}
\end{eqnarray}
The approach we use here is to first fix $L$ and expand the square
roots with respect to the perturbative parameter $\hbar$. Then we
take $L$ to be large and expand with respect to ${1\over L}$.
Eliminating all $O\left({1\over L^2}\right)$ terms inside the square
brackets leaves:
\begin{equation}
E_{sub} \sim M_{CL} + {\hbar Gl^2M_{CL}\over 3\beta^2_{CL}}\left(2\ln
(L) + 1\right)
\label{eq: 112}
\end{equation}
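This double expansion can be checked numerically against the exact subtracted energy of Eq.~(111). The sketch below (parameter values are ours, chosen for illustration, with $G=l=1$) uses high-precision arithmetic to avoid the cancellation between the two square roots:

```python
from mpmath import mp, mpf, sqrt, log

mp.dps = 50                          # high precision avoids catastrophic cancellation

# Illustrative parameters (ours), with G = l = 1
M_CL, Q = mpf(1), mpf('0.5')
rp = M_CL + sqrt(M_CL**2 - Q**2)
rm = M_CL - sqrt(M_CL**2 - Q**2)
beta = 2*rp**2/(rp - rm)             # Eq. (97)

L, hbar = mpf(10)**4, mpf(10)**-8

def E_sub(h):
    """Subtracted energy of Eq. (111) with G = l = 1."""
    a = 1 - 2*h/(3*beta**2)
    b = (1 - 2*M_CL/L - 2*h/(3*beta**2)
         - 4*h*M_CL*log(L)/(3*beta**2*L) + Q**2/L**2)
    return L*(sqrt(a) - sqrt(b))

# First-order quantum shift versus the asymptotic form of Eq. (112)
shift = E_sub(hbar) - E_sub(0)
predicted = hbar*M_CL*(2*log(L) + 1)/(3*beta**2)
assert abs(shift/predicted - 1) < 1e-3
```

The residual discrepancy is of order $1/L$, consistent with the dropped terms.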
Note that the first order quantum correction to the energy can be
attributed to temperature effects
since $\beta_{CL}$ represents the asymptotic inverse temperature.
Next consider the quantum corrected value of the horizon radius
$r_+$. For this purpose we
define $\Delta r_+ = r_+-r_{+CL}$ and as previously defined $m(r)
= M(r)-M_{CL}$. The
quantum corrected metric must vanish at $r = r_+$ and this relation
can be expressed as
follows:
\begin{equation}
0 = g(r_+) = g_{CL}(r_+) - {2Gl^2m(r_+)\over r_+}
\label{eq: 113}
\end{equation}
Expanding to first order about $r_+ = r_{+CL}$ and using
$g_{CL}(r_{+CL})=0$ and
expressing $(dg_{CL}/dr)_{r=r_{+CL}}$ in terms of $\beta_{CL}$
(\req{eq: 97}) gives:
\begin{equation}
\Delta r_{+}= {\beta_{CL}G l^2 m(r_{+CL})\over r_{+CL} }
\label{eq: 114}
\end{equation}
Note that $m(r_{+CL})$ contains a factor of $\hbar$ (see \req{eq:
69}). From this result we can
calculate the quantum correction to the horizon area which is
proportional to $r^2_+ =
(r_{+CL} + \Delta r_+)^2$ and hence to first order:
\begin{equation}
r^2_+ = r_{+CL}^2 + 2\beta_{CL}Gl^2m(r_{+CL})
\label{eq: 115}
\end{equation}
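Eqs.~(114)--(115) can be illustrated numerically: root-finding the corrected metric of Eq.~(113) reproduces the first-order horizon shift. The parameter values below are ours, with $G=l=1$, and for simplicity we treat the shift $m(r)$ as a small constant:

```python
from mpmath import mp, mpf, sqrt, findroot

mp.dps = 30

# Illustrative parameters (ours), with G = l = 1, and a small constant shift m
M_CL, Q = mpf(1), mpf('0.5')
rp = M_CL + sqrt(M_CL**2 - Q**2)
rm = M_CL - sqrt(M_CL**2 - Q**2)
beta = 2*rp**2/(rp - rm)
m = mpf(10)**-6

# Quantum corrected metric of Eq. (113), treating m(r) as constant
g = lambda rr: (rr - rp)*(rr - rm)/rr**2 - 2*m/rr

r_new = findroot(g, rp)              # corrected horizon
predicted = rp + beta*m/rp           # Eq. (114) to first order
assert abs(r_new - predicted) < mpf(10)**-9
```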
Finally in this section we evaluate the quantum correction to the
entropy. For this theory the
entropy (\req{eq: 82}) is:
\begin{equation}
S = {\pi r^2_+\over Gl^2} - \hbar {2\pi\over 3}\psi (r_{+CL})
\label{eq: 116}
\end{equation}
Making use of the preceding result for $r^2_+$ (\req{eq: 115}) we can write:
\begin{equation}
S = S_{CL} + 2\pi\beta_{CL}m(r_{+CL}) - \hbar {2\pi\over 3}\psi
(r_{+CL})
\label{eq: 117}
\end{equation}
Revising \req{eq: 48} for this theory gives us:
\begin{equation}
\psi (r_{+CL}) = -\ln g_{CL}(r_{+CL})-{2\over
\beta_{CL}}\int^L_{r_+}
{dr\over g_{CL}(r)}-2\ln\left({\beta_{CL}\over z_0}\right)
\label{eq: 118}
\end{equation}
Using \req{eq: 91} for $g_{CL}$, \req{eq: 97}
for $\beta_{CL}$,
integrating the middle term, and simplification yields:
\begin{eqnarray}
\psi (r_{+CL}) &=& {r^2_{-CL}\over r_{+CL}^2}\ln\left(
{L-r_{-CL}\over r_{+CL}-r_{-CL}}\right)-\ln\left(
{L-r_{+CL}\over r_{+CL}-r_{-CL}}\right)\nonumber\\
&&\quad - {(r_{+CL}-r_{-CL})\over r_{+CL}^2}(L-r_{+CL})
-2\ln\left({r_{+CL}\over z_0}\right)
\label{eq: 119}
\end{eqnarray}
The third term in $\psi (r_{+CL})$ can be interpreted\cite{frolov} as the
contribution to the entropy
of a two-dimensional hot gas in a box of size $L-r_{+CL}$ and
temperature
$(2\pi\beta_{CL})^{-1} = (r_{+CL}-r_{-CL})/4\pi r_{+CL}^2$. Hence we subtract
off this contribution to obtain the quantum corrected black hole
entropy:
\begin{eqnarray}
S &=& S_{CL}+2\pi\beta_{CL}m(r_{+CL})-{\hbar 2\pi\over 3}{r^2_{-CL}
\over r_{+CL}^2}\ln\left({L-r_{-CL}\over
r_{+CL}-r_{-CL}}\right)\nonumber \\
& &\qquad+\hbar {2\pi\over 3}\ln \left({L-r_{+CL}\over
r_{+CL}-r_{-CL}}\right)
+ \hbar {4\pi\over 3}\ln \left( {r_{+CL}\over z_0}\right)
\label{eq: 120}
\end{eqnarray}
We next consider some special cases.
In the case of large box size $L$ the entropy reduces to:
\begin{equation}
S \sim S_{CL}+2\pi\beta_{CL}m(r_{+CL})+{2\pi\over 3}\left
(1-{r^2_{-CL}\over r_{+CL}^2}\right)\ln\left(
{L\over r_{+CL}-r_{-CL}}\right) + {4\pi\over 3}\ln
\left({r_{+CL}\over z_0}\right)
\label{eq: 121}
\end{equation}
For an uncharged black hole
\begin{equation}
S = S_{CL} + 2\pi\beta_{CL}m(r_{+CL}) + {2\pi\over 3}
\ln \left({Lr_{+CL}\over z^2_0}\right)
\label{eq: 122}
\end{equation}
where $m(r_{+CL})$ can be evaluated using \req{eq:
103}. Finally, in the
extremal black hole limit
\begin{equation}
S = S_{CL} + 2\pi\beta_{CL}m(r_{+CL}) + {2\pi\over 3}\ln
\left({r_{+CL}^2\over z^2_0}\right)
\label{eq: 123}
\end{equation}
where $m(r_{+CL}) = \hbar/9r_{+CL}$ in the extremal case; however,
$\beta_{CL}\rightarrow\infty $, so the entropy is divergent in this
limit.\\
\section{Quantum Corrections in Jackiw-Teitelboim Theory}
In this section we examine the
Achucarro-Ortiz black hole \cite{ortiz}, which is a
solution to the field equations for Jackiw-Teitelboim
gravity\cite{JT}. This theory can be obtained
by imposing axial symmetry in 2+1 dimensional gravity, so that
the Achucarro-Ortiz black hole corresponds to the projection of the BTZ axially
symmetric black hole \cite{BTZ} into
1+1-dimensional spacetime. The Jackiw-Teitelboim field
equations can be derived from
an effective action of the form \cite{ortiz}
\begin{equation}
W_{CL} = -\int d^2x\sqrt{g}\Lambda^{{1\over2} }\left[rR(g) + \Lambda r
-{J^2\over 2r^3}\right]
\label{eq: 124}
\end{equation}
where $\Lambda $ is the cosmological constant (dimension
length$^{-2}$) and $J$ is an
``effective charge'' (dimension length) which describes the angular
momentum of the $BTZ$
black hole. Note that there is no kinetic term in this action
so it is of
the form (\req{eq: 13.5}) without the need for a field reparametrization.
This leads to the following
identification (provided we set $2G = 1$ and $l = \Lambda^{-{1\over2}
}$):
\begin{equation}
\overline{\phi} = \Lambda^{{1\over2} }r
\label{eq: 125}
\end{equation}
\begin{equation}
D(\overline{\phi} ) = D(r) = \Lambda^{{1\over2} }r
\label{eq: 126}
\end{equation}
\begin{equation}
{\tilde V}(\overline{\phi} ,q) = {\tilde V}(r, J) = \Lambda^{{1\over2} }
(r - {J^2\over 2\Lambda r^3})
\label{eq: 127}
\end{equation}
The classical solution Eqs.[\ref{eq: 16.25}--\ref{eq: 16.75}] can
then be expressed:
\begin{equation}
x = r
\label{eq: 128}
\end{equation}
\begin{equation}
\overline{j} (\overline{\phi} ) = \overline{j} (r) = {\Lambda r^2\over 2}+{J^2\over 4r^2}
\label{eq: 129}
\end{equation}
\begin{equation}
g(r) = {\Lambda r^2\over 2}-{M\over \Lambda^{{1\over2} }}+{J^2\over
4r^2}
\label{eq: 130}
\end{equation}
Here we consider only solutions for which two real, distinct
horizons exist (i.e. $M^2 >
\Lambda^2J^2/2$) so it will prove convenient to express the
metric \req{eq: 130} in the
following form
\begin{equation}
g(r) = {\Lambda\over 2}{(r^2-r^2_+)(r^2-r^2_-)\over r^2}
\label{eq: 131}
\end{equation}
where $r_{+}$ and $r_-$ represent the outer and inner
horizons, respectively, given by:
\begin{equation}
r^2_{\pm } = {1\over \Lambda^{3/2}}\left[M\pm
\sqrt{M^2-\Lambda^2J^2/2}\right]
\label{eq: 132}
\end{equation}
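As in the previous section, the factorization of Eqs.~(131)--(132) can be checked against Eq.~(130) symbolically (symbol names are ours):

```python
import sympy as sp

r, Lam, M, J = sp.symbols('r Lambda M J', positive=True)

g = Lam*r**2/2 - M/sp.sqrt(Lam) + J**2/(4*r**2)        # Eq. (130)
s = sp.sqrt(M**2 - Lam**2*J**2/2)
rp2 = (M + s)/Lam**sp.Rational(3, 2)                   # r_+^2, Eq. (132)
rm2 = (M - s)/Lam**sp.Rational(3, 2)                   # r_-^2, Eq. (132)

g_factored = (Lam/2)*(r**2 - rp2)*(r**2 - rm2)/r**2    # Eq. (131)
assert sp.simplify(g - g_factored) == 0
```

Here $r_+^2+r_-^2 = 2M\Lambda^{-3/2}$ and $r_+^2 r_-^2 = J^2/2\Lambda$, which is what makes the factorization work.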
Note that because the action is already in reparameterized form we
set $\Omega^2 = 1$ (or
equivalently $\lambda = 0$) in the previously derived results.
The classical, subtracted energy for this theory, as described by \req{eq: 34.5}, is given by:
\begin{equation}
E_{sub}=2^{{1\over2}}\Lambda L\left(1-\sqrt{1-{2M\over\Lambda^{{3\over 2}
}L^2}+{J^2\over 2\Lambda L^4}}\right)
\label{eq: 133.5}
\end{equation}
while the classical entropy
(\req{eq: 29}) is:
\begin{equation}
S = 4\pi\Lambda^{{1\over2} }r_+
\label{eq: 134}
\end{equation}
Next we calculate the quantum corrected quantities in
Jackiw-Teitelboim theory with minimal coupling ($ b=c=0 $) as for SSG
in the prior section.
Henceforth, purely classical quantities will
be labelled with the subscript ``CL.'' \req{eq: 69} for
the quantum corrected mass $M(x=r)$ gives us (substituting for
classical metric \req{eq: 131}):
\begin{eqnarray}
M(r) &=& M_{CL}+{\Lambda\hbar\over 6}\int^rdr
\left[{4\over r^2}(r_{+CL}^2+r^2_{-CL})\right.\nonumber \\
& &\quad\left.+{5\over r^4}
(r^2-r^2_{+CL})(r^2-r^2_{-CL})
-{(2r^2-r_{+CL}^2-r^2_{-CL})\over (r^2-
r_{+CL}^2)}\right.\nonumber\\
&&\quad - \left.{(2r^2-r_{+CL}^2-r^2_{-CL})\over
(r^2-r^2_{-CL})}+{4\Lambda^{-2}r^2\over \beta^2_{CL}(r^2-
r_{+CL}^2)(r^2-r^2_{-CL})}\right]
\label{eq: 135}
\end{eqnarray}
Integrating and using \req{eq: 23}
\begin{equation}
\beta_{CL} = {2\over \left({dg_{CL}
\over dr }\right)}\vert_{r=r_{+CL}} = {2\over \Lambda r_{+CL}
\left(1-r^2_{-CL}/r_{+CL}^2\right)}
\label{eq: 136}
\end{equation}
gives us the following result
\begin{equation}
M(r) = M_{CL}+{\hbar\Lambda\over 6}\left[A\left(\ln(r-r_{-CL})-\ln
(r+r_{-CL})\right)+B(r)\right]
\label{eq: 137}
\end{equation}
where:
\begin{equation}
A = {r_{+CL}^6-3r_{+CL}^4r_{-CL}^2+3r_{+CL}^2r_{-CL}^4-r_{-CL}^6 \over
2r_{+CL}^2r_{-CL}(r_{+CL}^2-r_{-CL}^2)}
\label{eq: 138}
\end{equation}
\begin{equation}
B(r)=r+{r_{+CL}^2\over r}+{r_{-CL}^2\over r}-{5\over 3}{r_{+CL
}^2r_{-CL}^2\over r^3}
\label{eq: 139}
\end{equation}
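Differentiating the antiderivative $A\left[\ln(r-r_{-CL})-\ln(r+r_{-CL})\right]+B(r)$ of Eqs.~(138)--(139) must reproduce the bracketed integrand of Eq.~(135) once $\beta_{CL}$ of Eq.~(136) is substituted. A sketch of this check (symbol names are ours):

```python
import sympy as sp

r, rp, rm, Lam = sp.symbols('r r_p r_m Lambda', positive=True)

beta = 2/(Lam*rp*(1 - rm**2/rp**2))                   # Eq. (136)

# Bracketed integrand of Eq. (135)
integrand = (4*(rp**2 + rm**2)/r**2
             + 5*(r**2 - rp**2)*(r**2 - rm**2)/r**4
             - (2*r**2 - rp**2 - rm**2)/(r**2 - rp**2)
             - (2*r**2 - rp**2 - rm**2)/(r**2 - rm**2)
             + 4*r**2/(Lam**2*beta**2*(r**2 - rp**2)*(r**2 - rm**2)))

# Coefficient A of Eq. (138) and B(r) of Eq. (139)
A = (rp**6 - 3*rp**4*rm**2 + 3*rp**2*rm**4 - rm**6)/(2*rp**2*rm*(rp**2 - rm**2))
B = r + rp**2/r + rm**2/r - sp.Rational(5, 3)*rp**2*rm**2/r**3
antiderivative = A*(sp.log(r - rm) - sp.log(r + rm)) + B

# Its derivative must reproduce the integrand
assert sp.simplify(sp.diff(antiderivative, r) - integrand) == 0
```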
Next consider the quantum corrected mass for some special cases.
For $M(r=L)$ and large $L$ the term proportional to $A$ tends to zero
and we are left with:
\begin{equation}
M(L) \sim M_{CL}+{\hbar\over 6}\Lambda L \left[ 1+{r_{+CL}^2
+r_{-CL}^2\over L^2}\right]
\label{eq: 140}
\end{equation}
Also we consider the ``chargeless'' case. The classical metric
function becomes $g_{CL}={\Lambda\over 2}(r^2-r_{+CL}^2)$
where $r_{+CL}^2 = 2M\Lambda^{-3/2}$ and $\beta_{CL} = 2/\Lambda
r_{+CL}$. Repeating the above calculation yields:
\begin{equation}
M(r) = M_{CL} + {\hbar\over 6}\Lambda r
\label{eq: 141}
\end{equation}
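The chargeless result Eq.~(141) follows because the bracketed integrand of Eq.~(135) collapses to unity when $r_{-CL}=0$ and $\beta_{CL}=2/\Lambda r_{+CL}$, which can be confirmed symbolically (symbol names are ours):

```python
import sympy as sp

r, rp, Lam = sp.symbols('r r_p Lambda', positive=True)

beta = 2/(Lam*rp)                                   # uncharged inverse temperature

# Bracketed integrand of Eq. (135) with r_{-CL} = 0
integrand = (4*rp**2/r**2
             + 5*(r**2 - rp**2)/r**2
             - (2*r**2 - rp**2)/(r**2 - rp**2)
             - (2*r**2 - rp**2)/r**2
             + 4*r**2/(Lam**2*beta**2*(r**2 - rp**2)*r**2))

assert sp.simplify(integrand) == 1                  # hence M(r) = M_CL + (hbar*Lambda/6) r
```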
For the extremal black hole case $r_{-CL}\rightarrow r_{+CL}$ and
$\beta_{CL}\rightarrow\infty $ . The coefficient of $A$ vanishes
and $B(r)$ reduces to:
\begin{equation}
B(r)=r+2{r_{+CL}^2\over r}-{5\over 3}{r_{+CL}^4\over r^3}
\label{eq: 141.5}
\end{equation}
Next consider the metric function $w(x=r)$. Applying \req{eq:
70} for this theory and using \req{eq: 131} for the
classical metric and \req{eq: 136} for the inverse
asymptotic temperature gives:
\begin{equation}
w(r) = {\hbar\over 6\Lambda^{{1\over2} }}(F(L)-F(r))
\label{eq: 142}
\end{equation}
where:
\begin{equation}
F(r)={4\over r}-{r(r_{+CL}^2-r_{-CL}^2)\over r_{+CL}^2(r^2-
r_{-CL}^2)}+\ln\left({r-r_{-CL}\over r+r_{-CL}}\right)\left[ { 3r_{+CL}^4
-2r_{+CL}^2r_{-CL}^2-r_{-CL}^4 \over 2r_{-CL}r_{+CL}^2(
r_{+CL}^2-r_{-CL}^2)}\right]
\label{eq: 143}
\end{equation}
For the uncharged case we find using the revised metric function
discussed above that $w(r) = 0$ for all allowable $r$. Meanwhile
for the extremal limit $F(r)$ reduces to $4/r$ except at the
horizon where the right most term in \req{eq: 143} becomes a
divergent quantity. Hence in this limit
$F(r_+)\rightarrow-\infty $ and as in the preceding section the
factor $e^{2w(r_+)}$ is divergent.
Next we consider the quantum corrected energy. Revising
\req{eq: 81} for Jackiw-Teitelboim theory:
\begin{equation}
E = -2^{{1\over2}}\Lambda L\left[\sqrt{1-{2M(L)\over\Lambda^{{3\over 2}}L^2
}+{J^2\over 2\Lambda L^4}}+{\hbar\over 3\Lambda^{{1\over2}}L}{
(1 - {J^2\over2\Lambda L^4})
\over \sqrt{1-{2M_{CL}\over \Lambda^{3\over 2 }L^2}+
{J^2\over 2\Lambda L^4}}}\right]
\label{eq: 144}
\end{equation}
Applying the usual background subtraction procedure (\req{eq: 81.5}) then gives:
\begin{eqnarray}
E_{sub}&=&2^{{1\over2}}\Lambda L\left[1-\sqrt{1-{2M(L)\over\Lambda^
{{3\over 2}}L^2}+{J^2\over 2\Lambda L^4}}\right.\nonumber\\
\quad\quad&&\left.+{\hbar\over 3\Lambda^{{1\over2}}L}\left(1-
{(1-{J^2\over 2\Lambda L^4})\over\sqrt{1-{2M_{CL}\over\Lambda^{{3\over 2}}L^2}+{J^2\over 2\Lambda L^4}}}\right)\right]
\label{eq: 144.5}
\end{eqnarray}
In the case of large box size $L$ the second part vanishes relative
to the first. Consequently, for large $L$ the primary contribution
to the quantum shift in energy is a result of the shift in mass as
described by \req{eq: 140}. So to first order in $\hbar$
and to zeroth order in ${1\over L}$ we find:
\begin{equation}
E_{sub}\sim (E_{sub})_{CL} + {2^{{1\over2} }\hbar\Lambda^{{1\over2} }\over 6}
\label{eq: 145}
\end{equation}
Following the procedure for calculating the quantum correction to the
horizon radius $\Delta r_+ = r_+-r_{+CL}$ which was introduced in
the previous section we find
\begin{equation}
\Delta r_+ = {\beta_{CL}m(r_{+CL})\over 2\Lambda^{{1\over2} }}
\label{eq: 146}
\end{equation}
where $m(r) = M(r)-M_{CL}$ is given by \req{eq: 137}.
Furthermore the first order quantum correction to the horizon area can
be obtained from:
\begin{equation}
r^2_+ = r_{+CL}^2 + {\beta_{CL}r_{+CL}m(r_{+CL})\over
\Lambda^{{1\over2} }}
\label{eq: 147}
\end{equation}
Finally, in this section we determine the quantum correction to
entropy. For Jackiw-Teitelboim theory the dilaton generic entropy
(\req{eq: 82}) becomes:
\begin{equation}
S = 4\pi\Lambda^{{1\over2} }r_+ - {\hbar 2\pi\over 3}\psi (r_{+CL})
\label{eq: 148}
\end{equation}
Making use of the preceding result $\Delta r_+$ (\req{eq: 146}):
\begin{equation}
S = S_{CL}+2\pi\beta_{CL}m(r_{+CL})-\hbar{2\pi\over 3}\psi
(r_{+CL})
\label{eq: 149}
\end{equation}
From (\req{eq: 48}) we get:
\begin{equation}
\psi (r_{+CL}) = -\ln g_{CL}(r_{+CL})-{2\over \beta_{CL}}
\int^L_{r_{+}}{dr\over g_{CL}(r)}-2\ln \left(
{\beta_{CL}\over z_0}\right)
\label{eq: 150}
\end{equation}
Using \req{eq: 131} for $g_{CL}$, \req{eq: 136} for
$\beta_{CL}$, integrating the middle term and simplifying yields:
\begin{eqnarray}
\psi (r_{+CL}) &=& -{r_{-CL}\over r_{+CL}}\ln
\left[{(r_{+CL}-r_{-CL})(L+r_{-CL})\over (r_{+CL}+r_{-CL})
(L-r_{-CL})}\right]+\ln({r_{+CL}^2-r^2_{-CL}\over r^2_{+CL}})\nonumber \\
&&\quad - \ln\left({L-r_{+CL}\over L+r_{+CL}}\right)
+\ln\left({\Lambda z_0^2\over 8}\right)
\label{eq: 151}
\end{eqnarray}
So the complete quantum corrected entropy is obtained by
substituting \req{eq: 151} for $\psi (r_{+CL}$) and
$m(r_{+CL})$ via \req{eq: 137} back into
\req{eq: 149}. For large $L$ this result reduces to
(subtracting off the constant term):
\begin{eqnarray}
S &=& S_{CL}+2\pi\beta_{CL}m(r_{+CL})+{\hbar 2\pi\over 3}
{r_{-CL}\over r_{+CL}}\ln\left({r_{+CL}-r_{-CL}\over
r_{+CL}+r_{-CL}}\right)\nonumber\\
&&\quad - {\hbar 2\pi\over 3}\ln\left({r_{+CL}^2-r^2_{-CL}\over r^2_{+CL}}\right)
\label{eq: 152}
\end{eqnarray}
For an uncharged black hole the entropy is given by:
\begin{equation}
S = S_{CL} + \hbar {2\pi\over 3}\ln\left[{L-r_{+CL}\over
L+r_{+CL}}\right]
\label{eq: 153}
\end{equation}
Note that
using \req{eq: 141} the $m(r_{+CL})$ term reduces to a constant
which we subtract off.
Finally, in the extremal black hole limit
\begin{equation}
S = S_{CL}+2\pi\beta_{CL}m(r_{+CL})
\label{eq: 154}
\end{equation}
where $m(r_{+CL}) = {2\over 9}\hbar\Lambda r_{+CL}$ in the
extremal case; however, $\beta_{CL}\rightarrow\infty $, so the entropy
is divergent in this limit.
\section{Conclusions}
We have calculated the one-loop quantum corrections for generic dilaton
gravity coupled to an Abelian gauge field. Both corrections to the
black hole geometry and black hole thermodynamics were studied in
detail. We then applied our generic results to the special cases of
charged black holes in spherically symmetric gravity and rotating
BTZ black holes. The former case enabled us to verify our results
by comparison with the tree-level calculations of Braden et al. \cite{brown}
and the one-loop corrections of Frolov et al. \cite{frolov}. Study of BTZ
black holes is of particular interest due to recent revelations of a
possible
connection between string inspired black holes and BTZ geometry \cite{near}.
Although our quantum corrected results can in principle be integrated
exactly, numerical analysis will be required for rigorous study of
particular theories. Such an analysis is in progress.
Our hope is that ultimately such studies will lead to a better
understanding of quantum thermodynamical
processes associated with black holes
and hence insight into the deep mysteries
surrounding quantum gravity.
\section{Acknowledgements}
\par
This work
was supported in part by the Natural Sciences and Engineering
Research
Council of Canada. G.K. would like to thank J. Gegenberg for helpful
conversations. We are also grateful to S.D. Odintsov for useful comments on the original version of the manuscript and for bringing several important
references to our attention.
\par\vspace*{20pt}
Many computerized methods for RNA-RNA interaction structure
prediction have been developed. Recently, $O(N^6)$ time and $O(N^4)$
space dynamic programming algorithms have become available that
compute the partition function of RNA-RNA interaction complexes.
However, few of these methods incorporate the knowledge concerning
related sequences, thus relevant evolutionary information is often
neglected from the structure determination. Therefore, it is of
considerable practical interest to introduce a method taking into
consideration both thermodynamic stability and sequence covariation.
\section{Results}
We present the \emph{a priori} folding algorithm \texttt{ripalign},
whose input consists of two (given) multiple sequence
alignments (MSA). \texttt{ripalign} outputs (1) the partition function,
(2) base-pairing probabilities, (3) hybrid probabilities and (4) a set
of Boltzmann-sampled suboptimal structures consisting of canonical
joint structures that are compatible to the alignments.
Compared to the single sequence-pair folding algorithm \texttt{rip},
\texttt{ripalign} requires negligible additional memory resources.
Furthermore, we incorporate possible structure constraints as input
parameters into our algorithm.
\section{Availability}
The algorithm described here is implemented in C as part of the
\texttt{rip} package. The supplemental material, source code
and input/output files can freely be downloaded from
\url{http://www.combinatorics.cn/cbpc/ripalign.html}.
\section{Contact}
Christian Reidys \texttt{duck@santafe.edu}
\end{abstract}
{\bf Keywords }{multiple sequence alignment, RNA-RNA interaction,
joint structure, dynamic programming, partition function, base
pairing probability, hybrid, loop, RNA secondary structure.}
\maketitle
\section{Introduction}\label{S:Introduction}
RNA-RNA interactions play a major role at many different levels of
the cellular metabolism such as plasmid replication control, viral
encapsidation, or transcriptional and translational regulation. With
the discovery that a large number of transcripts in higher
eukaryotes are noncoding RNAs, RNA-RNA interactions in cellular
metabolism are gaining in prominence. Typical examples of
interactions involving two RNA molecules are snRNAs
\citep{Forne:96}; snoRNAs with their targets \citep{Bachellerie:02};
micro-RNAs from the RNAi pathway with their mRNA target
\citep{Ambros:04, Murchison:04}; sRNAs from \emph{Escherichia coli}
\citep{Hershberg:03, Repoila:03}; and sRNA loop-loop interactions
\citep{Brunel:02}. The common feature in many ncRNA classes,
especially prokaryotic small RNAs, is the formation of RNA-RNA
interaction structures that are much more complex than the simple
sense-antisense interactions.
As is the case for the general RNA folding problem with
unrestricted pseudoknots \citep{Akutsu}, the RNA-RNA interaction
problem (RIP) is NP-complete in its most general form
\citep{Alkan:06,Mneimneh:07}. However, polynomial-time algorithms
can be derived by restricting the space of allowed configurations in
ways that are similar to pseudoknot folding algorithms
\citep{Rivas}. The simplest approach concatenates the two
interacting sequences and subsequently employs a slightly modified
standard secondary structure folding algorithm. The algorithms
\texttt{RNAcofold} \citep{Hofacker,Bernhart}, \texttt{pairfold}
\citep{Andronescu}, and \texttt{NUPACK} \citep{Ren} subscribe to
this strategy. A major shortcoming of this approach is that it
cannot predict important motifs such as kissing-hairpin loops. The
paradigm of concatenation has also been generalized to the
pseudoknot folding algorithm of \cite{Rivas}. The resulting model,
however, still does not generate all relevant interaction structures
\citep{Backofen}. An alternative line of thought is to neglect all
internal base-pairings in either strand and to compute the minimum
free energy (MFE) secondary structure for their hybridization under
this constraint. For instance, \texttt{RNAduplex} and
\texttt{RNAhybrid} \citep{rehmsmeier:04} follow this line of
thought. \texttt{RNAup} \citep{Mueckstein:05a,Mueckstein:08a} and
\texttt{intaRNA} \citep{Busch:08} restrict interactions to a single
interval that remains unpaired in the secondary structure for each
partner. These models have proved particularly useful for bacterial
sRNA/mRNA interactions \citep{Geissmann}.
\cite{Pervouchine:04} and \cite{Alkan:06} independently proposed MFE
folding algorithms for predicting the \emph{joint structure} of two
interacting RNA molecules with polynomial time complexity. In their
model, a ``joint structure'' means that the intramolecular
structures of each molecule are pseudoknot-free, the intermolecular
binding pairs are noncrossing and there exist no so-called
``zig-zags'', see supplement material (SM) for detailed definition.
The optimal joint structure is computed in $O(N^6)$ time and
$O(N^4)$ space via a dynamic programming (DP) routine.
A more reliable approach is to consider the partition function, which by
construction integrates over the Boltzmann-weighted probability space,
allowing for the derivation of thermodynamic quantities, like e.g.~equilibrium
concentration, melting temperature and base-pairing
probabilities. The partition function
of joint structures was independently derived by
\cite{Backofen} and \cite{rip:09} while the base-pairing probabilities
are due to \cite{rip:09}.
A key quantity here is the probability of hybrids, which
cannot be recovered from base pairing probabilities since the latter
can be highly correlated. \cite{rip2} presented a new hybrid-based
decomposition grammar,
facilitating the computation of the nontrivial hybrid-probabilities
as well as the Boltzmann sampling of RNA-RNA interaction structures.
The partition function of joint structures can be computed in
$O(N^6)$ time and $O(N^4)$ space and current implementations require
very large computational resources. \cite{Backofen:fast} recently
achieved a substantial speed-up making use of the observation that
the external interactions mostly occur between pairs of unpaired
regions of single structures. \cite{Chitsaz:09} introduced
tree-structured Markov Random Fields to approximate the joint
probability distribution of multiple $(\geq 3)$ contact regions.
Unfortunately, incompleteness of the underlying energy model, in
particular for hybrid- and kissing-loops, may result in prediction
inaccuracy. One way of improving this situation is to involve
phylogenetic information of multiple sequence alignments (MSA).
In an MSA, homologous nucleotides are grouped in columns, where
homologous is interpreted in {\it both} the structural and the
evolutionary sense, i.e.~a column of nucleotides occupies similar
structural positions and all its entries diverge from a common
ancestral nucleotide. Also, many ncRNAs show clear signs of undergoing
compensatory mutations along evolutionary trajectories. In
conclusion, it seems reasonable to stipulate that a non-negligible
part of the existing RNA-RNA interactions contain preserved but
covarying patterns of the interactions \citep{Seemann:10}. Therefore
we can associate a consensus interaction structure to pairs of
interacting MSAs (see Section~\ref{S:basic}).
Along these lines \cite{Seemann:10} presented an algorithm
\texttt{PETcofold} for prediction of RNA-RNA interactions including
pseudoknots in given MSAs. Their algorithm is an extension of
\texttt{PETfold} \citep{Seemann:08} using elements of
\texttt{RNAcofold} \citep{Bernhart} and computational strategies for
hierarchical folding \citep{Gaspin:95, Jabbari:07}. However,
\texttt{PETcofold} is an approximation algorithm and further
differences between the two approaches will be discussed in
Section~\ref{S:discussion}.
Here, we present the algorithm \texttt{ripalign} which computes the
partition function, base-pairing as well as hybrid probabilities and
performs Boltzmann-sampling on the level of MSAs. \texttt{ripalign}
represents a generalization of \texttt{rip} to pairs of interacting
MSAs and a new grammar of canonical interaction structures. The
latter is of relevance since there are no isolated base pairs in
molecular complexes.
One important step consists in identifying the notion of a joint
structure compatible to a pair of interacting MSAs. Our notion is
based on the framework of \cite{Hofacker:02}, where a sophisticated
cost function capturing thermodynamic stability as well as sequence
covariation is employed. Furthermore \texttt{ripalign} is tailored
to take structure constraints, such as blocked nucleotides known
e.g.~from chemical probing, into account.
\begin{methods}
\section{Theory}
\subsection{Multiple sequence alignments and compatibility}\label{S:basic}
An MSA, $\bar{\mathbf{R}}$, consists of $m_{\bar{\mathbf{R}}}$ RNA
sequences of known species. Denoting the length of the aligned
sequences by $N$, $\bar{\mathbf{R}}$ constitutes a
$m_{\bar{\mathbf{R}}}\times N$ matrix, having $5'-3'$ oriented rows,
$\bar{\mathbf{R}}^{i}$ and columns, ${\bar{\mathbf{R}}}_{i}$. Its
$(i,j)$-th entry, $\bar{\mathbf{R}}^{i}_{j}$, is a nucleotide,
$\textbf{A},\textbf{U},\textbf{G},\textbf{C}$ or a gap denoted by
$\textbf{\texttt{.}}$.
For any pair $(\bar{\mathbf{R}},\bar{\mathbf{S}})$ we assume
that $\bar{\mathbf{S}}$ is a $m_{\bar{\mathbf{S}}}\times M$
matrix, whose rows carry $3'-5'$ orientation.
In the following we shall assume that a pair of RNA sequences can only
interact if they belong to the same species.
A pair $(\bar{\mathbf{R}},\bar{\mathbf{S}})$ can interact if for any
row $\bar{\mathbf{R}}^{i}$ there exists at least one row in
$\bar{\mathbf{S}}$ that can interact with $\bar{\mathbf{R}}^{i}$.
Given a pair of interacting MSAs
$(\bar{\mathbf{R}},\bar{\mathbf{S}})$, let $m$ be the total number
of potentially interacting pairs. \texttt{ripalign} performs a
pre-processing step which generates an $m\times N$ matrix
$\mathbf{R}$ and an $m\times M$ matrix $\mathbf{S}$ such that
$(\mathbf{R}^{i}, \mathbf{S}^{i})$ ranges over all $m$ potentially
interacting RNA-pairs, see Tab.~1 and the SM, Section~1.2.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|l|c|l|c|c|}
\hline
sp.& {\fontsize{7pt}{\baselineskip}\selectfont $\bar{\mathbf{R}}$} & sp. &{\fontsize{7pt}{\baselineskip}\selectfont $\bar{\mathbf{S}}$} &sp.&
{\fontsize{7pt}{\baselineskip}\selectfont$\mathbf{R}$} &{\fontsize{7pt}{\baselineskip}\selectfont$\mathbf{S}$}\\ \hline
$\theta_1$ & \textbf{\texttt{AGAACGGA}} & $\theta_1$ & \textbf{\texttt{GGGCCG}} & $\theta_1$ &\textbf{\texttt{AGAACGGA}}& \textbf{\texttt{GGGCCG}}\\
$\theta_1$ & \textbf{\texttt{GAAACGGA}} & $\theta_1$ & \textbf{\texttt{AGUUAG}} & $\theta_1$ &\textbf{\texttt{AGAACGGA}}& \textbf{\texttt{AGUUAG}}\\
$\theta_2$ & \textbf{\texttt{AGA.CGAC}} & $\theta_2$ & \textbf{\texttt{AGGCAG}} & $\theta_1$ &\textbf{\texttt{GAAACGGA}}& \textbf{\texttt{GGGCCG}}\\
& & $\theta_2$ & \textbf{\texttt{..GUGG}} & $\theta_1$ &\textbf{\texttt{GAAACGGA}}& \textbf{\texttt{AGUUAG}}\\
& & & & $\theta_2$ &\textbf{\texttt{AGA.CGAC}}& \textbf{\texttt{AGGCAG}}\\
& & & & $\theta_2$ &\textbf{\texttt{AGA.CGAC}}& \textbf{\texttt{..GUGG}}\\
\hline
\end{tabular}
\centerline{} \caption{\textbf{Preprocessing in \texttt{ripalign}:}
Given a pair of MSAs
$(\bar{\mathbf{R}},\bar{\mathbf{S}})$, where
$\bar{\mathbf{R}}$
consists of three aligned RNA sequences of species (sp.) $\theta_1$
or $\theta_2$. $\bar{\mathbf{S}}$ in turn consists of four aligned
sequences of species $\theta_1$ and $\theta_2$. Then we obtain
the matrix-pair $({\mathbf{R}},{\mathbf{S}})$, where
$(\mathbf{R}^{i},\mathbf{S}^{i})$, $1\leq i\leq 6$,
ranges over all six potentially interacting RNA-pairs.
}\label{T:1}
\end{table}
In the following we shall refer to $\mathbf{R}$ and $\mathbf{S}$ as
MSAs, ignoring the fact that they may contain repeated rows.
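This pre-processing step can be illustrated by a minimal Python sketch
(the helper name \texttt{expand\_pairs} and the tuple-based row
encoding are our own illustrative assumptions, not part of
\texttt{ripalign}):

```python
from collections import defaultdict

def expand_pairs(R_bar, S_bar):
    """Pair every row of R_bar with every row of S_bar of the same
    species, mirroring Tab. 1; rows are (species, aligned_seq) tuples."""
    by_species = defaultdict(list)
    for sp, s in S_bar:
        by_species[sp].append(s)
    R, S = [], []
    for sp, r in R_bar:
        for s in by_species[sp]:   # all same-species partners of r
            R.append(r)
            S.append(s)
    return R, S
```

Applied to the alignments of Tab.~1, this yields the six
potentially interacting rows $(\mathbf{R}^{i},\mathbf{S}^{i})$ in the
order shown there.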
We proceed by defining joint structures that are compatible with a
fixed $({\mathbf{R}},{\mathbf{S}})$. To this end, let us briefly
review some concepts introduced in \cite{rip:09}.
A joint structure $J(R,S,I)$ is a graph consisting of\\
\textbf{(j1)} Two secondary structures $R$ and $S$, whose backbones
are drawn as horizontal lines on top of each other and whose arcs
are drawn in the upper and lower halfplane, respectively. We
consider $R$ over a $5'$ to $3'$ oriented backbone $(R_1,\dots,R_N)$
and $S$ over a $3'$ to $5'$ oriented backbone $(S_1,\dots,S_M)$ and
refer to any $R$- and $S$-arcs as interior arcs. \\
\textbf{(j2)} An additional set $I$ of noncrossing arcs of the
form $R_iS_j$ (exterior arcs), where $R_i$ and $S_j$ are unpaired in $R$ and $S$.\\
\textbf{(j3)} $J(R,S,I)$ contains no ``zig-zags'' (see SM).
The subgraph of a joint structure $J(R,S,I)$ induced by a pair of
subsequences $(R_i,R_{i+1},\dots,R_j)$ and $(S_h,
S_{h+1},\dots,S_\ell)$ is denoted by $J_{i,j;h,\ell}$. In
particular, $J(R,S,I)=J_{1,N;1,M}$ and $J_{i,j;h,\ell}\subset
J_{a,b;c,d}$ if and only if $J_{i,j;h,\ell}$ is a subgraph of
$J_{a,b;c,d}$ induced by $(R_i,\dots,R_j)$ and $(S_h,\dots,S_\ell)$.
Furthermore, we use $S[i,j]$ to denote the subgraph of
$J_{1,N;1,M}$ induced by $(S_i,S_{i+1}, \dots,S_j)$, where
$S[i,i]=S_{i}$ and $S[i,i-1]=\varnothing$.
Given a joint structure, $J_{a,b;c,d}$, a tight structure (TS),
$J_{i,j;h,\ell}$, \citep{rip:09} is a specific subgraph of
$J_{a,b;c,d}$ indexed by its type $\in
\{\circ,\bigtriangledown,\square,\bigtriangleup\}$, see
Fig.~\ref{F:typeins}. For instance, we use
$J^{\square}_{i,j;h,\ell}$ to denote a TS of type $\square$.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{typeins.eps}
\end{center}
\par\noindent
$\circ$ \hspace*{0.2\columnwidth}
$\bigtriangledown$\hspace*{0.25\columnwidth}
$\square$ \hspace*{0.28\columnwidth}
$\bigtriangleup$
\par\noindent
\caption{The four basic types of tight structures are given as
follows: $\circ:$ $\{R_iS_h\}=J_{i,j;h,\ell}$ and $i=j$, $h=\ell$;
$\bigtriangledown:$ $R_iR_j\in J_{i,j;h,\ell}$ and
$S_{h}S_{\ell}\not\in J_{i,j;h,\ell}$;
$\square:$ $\{R_iR_j,S_{h}S_{\ell}\}\subset J_{i,j;h,\ell}$;
$\bigtriangleup:$ $S_{h}S_{\ell} \in J_{i,j;h,\ell}$ and $R_iR_j\not
\in J_{i,j;h,\ell}$. } \label{F:typeins}
\end{figure}
A \emph{hybrid} is a joint structure
$J^{\mathsf{Hy}}_{i_1,i_\ell;j_1,j_\ell}$, i.e.~a maximal sequence
of intermolecular interior loops consisting of a set of exterior
arcs $(R_{i_1}S_{j_1}, \dots, R_{i_\ell}S_{j_\ell})$ where
$R_{i_h}S_{j_h}$ is nested within $R_{i_{h+1}}S_{j_{h+1}}$ and where
the internal segments $R[i_h+1,i_{h+1}-1]$ and $S[j_h+1,j_{h+1}-1]$
consist of single-stranded nucleotides only. That is, a hybrid is
a maximal unbranched stem-loop formed by exterior arcs.
A joint structure $J(R,S,I)$ is called \textit{canonical} if and
only if: \\
{\bf (c1)} each stack in the secondary structures $R$ and $S$ is of
size at least two, i.e.~there exist no isolated interior arcs,\\
{\bf (c2)} each hybrid contains at least two exterior arcs.\\
In the following, we always assume a joint structure to be canonical.
Next, we come to $(\mathbf{R},\mathbf{S})$-compatible joint structures.
In contrast to single-sequence compatibility, this notion
involves statistical information of the MSAs.
The key point consists in specifying under which conditions two
vertices contained in $(R_1,\dots,R_N, S_1,\dots,S_M)$ can pair.
This is obtained by a generalization of the \texttt{RNAalifold}
approach \citep{Hofacker:02}. We specify these conditions for
interior $(c_{i,j}^{\mathbf{R}})$, $(c_{i,j}^{\mathbf{S}})$ and
exterior pairs $(c_{i,j}^{\mathbf{R,S}})$ in
eq.~(\ref{E:c1})-(\ref{E:c3}). \\
For interior arcs $(R_i,R_j)$, let $\text{X,Y}\in\{\textbf{A},
\textbf{U},\textbf{G},\textbf{C}\}$. Let
$f_{ij}^{\mathbf{R}}(\text{XY})$ be the frequency with which the pair
$(\text{X},\text{Y})$ occurs as a row in the $2$-column sub-matrix
$(\mathbf{R}_{i},\mathbf{R}_{j})$, and set
\begin{equation}
C_{i,j}^{\mathbf{R}}=\sum_{\text{XY},\text{X}^{\prime}\text{Y}^{\prime}}
f_{ij}^{\mathbf{R}}(\text{XY})D^{\mathbf{R}}_{\text{XY},\text{X}^{\prime}
\text{Y}^{\prime}}f^{\mathbf{R}}_{ij}(\text{X}^{\prime}\text{Y}^{\prime}).
\end{equation}
Here XY and X$'$Y$'$ independently range over all 16 elements
of $\{\textbf{A},\textbf{U},\textbf{G},\textbf{C}\}\times\{\textbf{A},
\textbf{U},\textbf{G},\textbf{C}\}$ and
$D^{\mathbf{R}}_{\text{XY},\text{X}'\text{Y}'}=d_{H}(\text{XY},
\text{X}'\text{Y}')$, i.e.~the Hamming distance between XY and
X$'$Y$'$ in case both XY and X$'$Y$'$ are Watson-Crick or
$\textbf{G}\textbf{U}$ wobble base pairs, and $0$, otherwise.
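For concreteness, the covariance term $C_{i,j}^{\mathbf{R}}$ can be
sketched in Python as follows (a simplified illustration with our own
names; gapped rows simply contribute no canonical pair):

```python
from collections import Counter

CANONICAL = {"AU", "UA", "GC", "CG", "GU", "UG"}

def covariance_score(col_i, col_j):
    """C_{i,j}: sum over pair types XY, X'Y' of
    f(XY) * d_H(XY, X'Y') * f(X'Y'), where the Hamming distance d_H
    only counts when both XY and X'Y' can base-pair."""
    m = len(col_i)
    f = {xy: n / m for xy, n in Counter(zip(col_i, col_j)).items()}
    score = 0.0
    for (x, y), fxy in f.items():
        if x + y not in CANONICAL:
            continue
        for (xp, yp), fxpyp in f.items():
            if xp + yp in CANONICAL:
                # (x != xp) + (y != yp) is the Hamming distance
                score += fxy * ((x != xp) + (y != yp)) * fxpyp
    return score
```

A perfectly conserved pair scores zero, whereas compensatory or
consistent mutations (e.g.~\textbf{GC} alongside \textbf{GU}) yield a
positive covariance contribution.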
Furthermore, we introduce $q_{i,j}^{\mathbf{R}}$ to penalize
inconsistent sequences:
\begin{equation}
q_{i,j}^{\mathbf{R}}=1-\frac{1}{m}\sum_{h}\{\Pi_{i,j}^{h}(\mathbf{R})+
\delta(\mathbf{R}_{i}^{h},\text{gap})
\delta(\mathbf{R}_{j}^{h},\text{gap})\},
\end{equation}
where $\delta(x,y)$ is the Kronecker delta and
$\Pi_{i,j}^{h}(\mathbf{R})$ is equal to $1$ if $\mathbf{R}^{h}_{i}$
and $\mathbf{R}^{h}_{j}$ form a Watson-Crick or $\textbf{G}\textbf{U}$
wobble base pair, and $0$, otherwise.
Now we obtain $B_{i,j}^{\mathbf{R}}=C_{i,j}^{\mathbf{R}}-
\phi_{1}q_{i,j}^{\mathbf{R}}$. The threshold for pairing,
$B^{\mathbf{R}}_{*}$, as well as the weight of inconsistent
sequences, $\phi_{1}$, are computed from sequence data, and we require
\begin{equation}\label{E:c1}
(c_{i,j}^{\mathbf{R}})\quad B^{\mathbf{R}}_{i,j}\geq
B^{\mathbf{R}}_{*}.
\end{equation}
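The inconsistency penalty and the resulting pairing condition amount
to the following check (again an illustrative sketch with hypothetical
names, not \texttt{ripalign}'s implementation):

```python
CANONICAL = {"AU", "UA", "GC", "CG", "GU", "UG"}

def inconsistency_penalty(col_i, col_j):
    """q_{i,j} = 1 - (1/m) * sum over rows of [Pi + delta*delta]:
    the fraction of rows that neither form a canonical base pair
    nor are a double gap."""
    m = len(col_i)
    ok = sum(1 for x, y in zip(col_i, col_j)
             if x + y in CANONICAL or (x, y) == (".", "."))
    return 1.0 - ok / m

def allows_pair(C_ij, q_ij, phi1, B_star):
    """Condition (c_{i,j}): B_{i,j} = C_{i,j} - phi1 * q_{i,j} >= B_*."""
    return C_ij - phi1 * q_ij >= B_star
```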
The case of two positions $S_{i}$ and $S_{j}$ is completely analogous:
\begin{equation}\label{E:c2}
(c_{i,j}^{\mathbf{S}})\quad B^{\mathbf{S}}_{i,j}\geq
B^{\mathbf{S}}_{*},
\end{equation}
where $B^{\mathbf{S}}_{i,j}$ and $B^{\mathbf{S}}_{*}$ are
analogously defined.
As for $(c_{i,j}^{\mathbf{R},\mathbf{S}})$, a further observation
factors in: since many ncRNAs show clear signs of undergoing
compensatory mutations in the course of evolution \citep{Seemann:10,Marz:08},
we postulate the existence of a non-negligible amount of RNA-RNA
interactions containing conserved pairs, consistent mutations,
compensatory mutations as well as inconsistent mutations. Based on
this observation we arrive at
\begin{equation}\label{E:c3}
(c_{i,j}^{\mathbf{R},\mathbf{S}})\quad
B^{\mathbf{R},\mathbf{S}}_{i,j}\geq B^{\mathbf{R},\mathbf{S}}_{*},
\end{equation}
where $B^{\mathbf{R},\mathbf{S}}_{i,j}$ and
$B^{\mathbf{R},\mathbf{S}}_{*}$ are analogously defined as the case
for $B^{\mathbf{R}}_{i,j}$ and $B^{\mathbf{R}}_{*}$.
A joint structure $J$ is compatible with $(\mathbf{R},\mathbf{S})$ if
for any $J$-arc the corresponding intra- or inter-molecular positions
can pair according to eq.~(\ref{E:c1})-(\ref{E:c3}).
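Operationally, this compatibility test reduces to checking every arc
of $J$ against the appropriate condition; a minimal sketch of such an
interface (hypothetical, for illustration only):

```python
def is_compatible(J_arcs, allow_R, allow_S, allow_RS):
    """J_arcs: dict with arc lists for interior R-arcs ('R'), interior
    S-arcs ('S') and exterior arcs ('I'); allow_* are predicates
    implementing conditions (c^R), (c^S) and (c^{R,S})."""
    return (all(allow_R(i, j) for i, j in J_arcs["R"]) and
            all(allow_S(i, j) for i, j in J_arcs["S"]) and
            all(allow_RS(i, j) for i, j in J_arcs["I"]))
```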
\subsection{Energy model}\label{S:loop}
According to \cite{rip:09} joint structures can be decomposed into
disjoint loops. These loop-types include standard hairpin-, bulge-,
interior- and multi-loops found in RNA secondary structures as well
as \emph{hybrid} and \emph{kissing-loops}. Following the energy
parameter rules of \cite{Mathews}, the energy of each loop can be
obtained as a sum of the energies associated with non-terminal
symbols, i.e.~graph properties (sequence-independent), and
additional contributions which depend uniquely on the terminal bases
(sequence-dependent).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\columnwidth]{interloop.eps}
\end{center}
\caption{\textbf{Interior loop energy:} An interior loop formed by
$R_{i}R_{j}$ and $R_{h}R_{\ell}$, where $i<h<\ell<j$ are the
alignment positions. Grey bands are used to denote the positions we
omit between segment $(i,h)$, $(h,\ell)$ and $(\ell,j)$.}
\label{F:loop}
\end{figure}
Suppose we are given a joint structure $J$, compatible with a pair
$\mathcal{P}=(\mathbf{R},\mathbf{S})$. Let $L\in J$ be a loop and
let $\mathcal{F}_{L,i}$ represent the loop energy of the $i$-th
interaction-pair $(\mathbf{R}^{i},\mathbf{S}^{i})$. Then the loop
energy of $\mathcal{P}$ is
\begin{equation}\label{E:loopenergy}
\mathcal{F}_{L,\mathcal{P}} = \frac{1}{m} \sum_{i}\mathcal{F}_{L,i}.
\end{equation}
We consider the energy of the structure as the sum of all loop
contributions:
\begin{equation}\label{E:energy1}
\mathcal{F}_{J}=\sum_{L\in J}\mathcal{F}_{L,\mathcal{P}}.
\end{equation}
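Eqs.~(\ref{E:loopenergy}) and (\ref{E:energy1}) amount to a
straightforward computation; as a Python sketch (illustrative names):

```python
def loop_energy(per_pair_energies):
    """F_{L,P} = (1/m) * sum_i F_{L,i}: the loop energy averaged over
    the m interacting sequence pairs."""
    return sum(per_pair_energies) / len(per_pair_energies)

def structure_energy(loops):
    """F_J = sum over all loops L in J of F_{L,P}."""
    return sum(loop_energy(e) for e in loops)
```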
To save computational resources, gaps are treated as bases in
\texttt{ripalign}. Thus only alignment positions contribute as
indices and loop sizes. Since no measured energy parameters for
nonstandard base pairs are available at present, additional
terminal-dependent contributions for the latter are ignored. For
instance, let ${\sf Int}_{i,j;h,\ell}$ denote an interior loop formed
by $R_{i}R_{j}$ and $R_{h}R_{\ell}$ and
$\mathcal{F}^{\textsf{Int},\mathcal{P}}_{i,j;h,\ell}$ denote the
free energy of $\textsf{Int}_{i,j;h,\ell}$ with respect to the aligned
sequences in $\mathcal{P}$. Then $\mathcal{F}^{\textsf{Int},
\mathcal{P}}_{i,j;h,\ell}$ associated with the three aligned
subsequences of Fig.~\ref{F:loop} reads
\begin{equation}\label{E:loopexample}
\mathcal{F}^{\textsf{Int},\mathcal{P}}_{i,j;h,\ell}=\frac{1}{3}(3G^{\sf
Int}_{i,j;h,\ell}+G_{*,\textbf{G,C;G,C}}^{\sf
Int}+G_{*,\textbf{G,U;G,U}}^{\sf
Int}+G_{*,\textbf{G,C;gap,gap}}^{\sf Int}).
\end{equation}
Here $G^{\sf Int}_{i,j;h,\ell}$ represents contributions related exclusively
to the positions of the interior loop while $G^{\sf Int}_{*,\textbf{A,B;C,D}}$
represents additional contributions related to the specific nucleotides which
form the interior loop. We set $G_{*,\textbf{G,C;gap,gap}}^{\sf Int}$ to be
zero.
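This example can be mirrored in code: the position-dependent term is
shared by all rows, while rows whose closing bases contain a gap
contribute no terminal-dependent term (an illustrative sketch; the
table \texttt{G\_terminal} is a hypothetical stand-in for the
parameters of \cite{Mathews}):

```python
def interior_loop_energy(rows, G_pos, G_terminal):
    """Average interior-loop energy over aligned rows. `rows` holds the
    four closing nucleotides (i, j, h, l) of each row; gapped closing
    pairs contribute zero terminal-dependent energy."""
    total = 0.0
    for closing in rows:
        total += G_pos                 # position-dependent contribution
        if "." not in closing:         # skip gapped base pairs
            total += G_terminal.get(closing, 0.0)
    return total / len(rows)
```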
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{typeins2.eps}
\end{center}
\begin{center}
\par\noindent
$\bigtriangledown$\hspace*{0.33\columnwidth}
$\square$ \hspace*{0.33\columnwidth}
$\bigtriangleup$
\par\noindent
\end{center}
\caption{\textbf{Examples of the two TS-subtypes.} We display
$\bigtriangledown$-,
$\square$- and $\bigtriangleup$-tight structures: Type cc (top) and
Type c (bottom). } \label{F:typeins2}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{grammar.eps}
\end{center}
\caption{\textbf{Grammar:} Illustration of the decomposition of
$J_{1,N;1,M}$, DTS, RTS and hybrids in Procedure (a) and of tight
structures in Procedure (b). In the bottom row the symbols for the
16 distinct types of structural components are listed:
\textbf{A}: arbitrary joint structure $J_{1,N;1,M}$ (canonical);
\textbf{B}: right-tight structures $J^{RT}_{i,j;r,s}$;
\textbf{C}: double-tight structure $J^{DT}_{i,j;r,s}$;
\textbf{D}: tight structure $J^{\bigtriangledown, cc}_{i,j;h,\ell}$,
$J^{\bigtriangleup, cc}_{i,j;h,\ell}$ or $J^{\square, cc}_{i,j;h,\ell}$;
\textbf{E}: hybrid structure $J^{\sf Hy}_{i,j;h,\ell}$;
\textbf{F}: substructure of a hybrid $J^{\sf
h}_{i,j;h,\ell}$ such that $R_{i}S_{j}$ and $R_{h}S_{\ell}$ are
exterior arcs and $J^{\sf
h}_{i,j;h,\ell}$ itself is not a hybrid since it is not maximal;
\textbf{G}, \textbf{H}: maximal secondary structure
segments $R[i,j]$, $S[r,s]$;
\textbf{J}: isolated segment $R[i,j]$ or $S[h,\ell]$;
\textbf{K}: maximal secondary structure
segments appear in pairs such that at
least one of them is not empty.
\textbf{L}: tight structure $J^{\square,cc}_{i,j;r,s}$;
\textbf{M}: tight structure $J^{\square,c}_{i,j;r,s}$;
\textbf{N}: tight structure $J^{\bigtriangledown,cc}_{i,j;r,s}$;
\textbf{O}: tight structure $J^{\bigtriangledown,c}_{i,j;r,s}$;
\textbf{P}: tight structure $J^{\bigtriangleup,cc}_{i,j;r,s}$;
\textbf{Q}: tight structure $J^{\bigtriangleup,c}_{i,j;r,s}$.
} \label{F:grammar}
\end{figure}
\subsection{The grammar of canonical joint structures and the
partition function}\label{S:grammar}
The partition function algorithm is easily extended to work with the
modified energy functions given in eq.~(\ref{E:energy1}). The
reformulation of the original hybrid-grammar into a grammar of
canonical joint structures already represents a significant
improvement in prediction quality for single interaction pairs. The
original \texttt{rip}-grammar would oftentimes produce joint
structures having a hybrid composed of a single isolated exterior
arc, see Fig.~\ref{F:comversion}.\\
In order to decompose canonical joint structures via the unambiguous
grammar introduced in this section, we distinguish
two subtypes (Type cc and Type c) of TS's of type $\bigtriangledown$,
$\bigtriangleup$ or $\square$. Given a TS of type
$\bigtriangledown$, denoted by $J^{\bigtriangledown}_{i,j;h,\ell}$,
we write $J^{\bigtriangledown,cc}_{i,j;h,\ell}$ or
$J^{\bigtriangledown,c}_{i,j;h,\ell}$, depending on whether or not
$R_{i+1}R_{j-1}\in J^{\bigtriangledown}_{i,j;h,\ell}$.
Analogously, we define $J^{\square,
cc}_{i,j;h,\ell}$, $J^{\square,c}_{i,j;h,\ell}$ and
$J^{\bigtriangleup,cc}_{i,j;h,\ell}$,
$J^{\bigtriangleup,c}_{i,j;h,\ell}$, see Fig.~\ref{F:typeins2}.\\
Fig.~\ref{F:grammar} summarizes the two basic steps of the
canonical-grammar: (I) \emph{interior arc-removal} to reduce TS,
and (II) \emph{block-decomposition} to split a joint structure into
two smaller blocks. The key feature here is that, since $J$ is
canonical, the smaller blocks remain canonical after
block-decomposition. Each decomposition step displayed in
Fig.~\ref{F:grammar} results in substructures which eventually break
down into generalized loops whose energies can be directly computed.
More details of the decomposition procedures are described in
Section~2 of the SM, where we prove that for any canonical joint
structure $J$, there exists a unique decomposition-tree
(parse-tree), denoted by $T_{J}$, see Fig.~\ref{F:tree}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{detree.eps}
\end{center}
\caption{\textbf{Example of the parse tree.} The parse tree of the
canonical joint structure $J_{1,17;1,9}$.} \label{F:tree}
\end{figure}
\subsection{Probabilities and Boltzmann
sampling}\label{S:prosam}
A dynamic programming scheme for the computation of a partition
function implies a corresponding computation of probabilities of
specific substructures ``from the outside to the inside'', as well
as a stochastic backtracing procedure that can be used to sample
from the associated distribution \citep{McCaskill, Ding:03, rip2}.
We remark that the time complexity does not increase linearly as a
function of $m$ (see SM, Table~5).\\
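The stochastic backtracing step underlying such Boltzmann sampling can
be sketched generically: at each decomposition point, one alternative
is drawn with probability proportional to its partition-function
weight (an illustrative sketch, not \texttt{ripalign}'s
implementation):

```python
import random

def sample_alternative(weights, rng=None):
    """Pick one decomposition alternative with probability proportional
    to its Boltzmann weight (McCaskill-style stochastic backtracing).
    `weights` maps each alternative to its partition-function weight."""
    rng = rng or random
    total = sum(weights.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for case, w in weights.items():
        acc += w
        if r <= acc:
            return case
    return case  # guard against floating-point round-off
```

Applying this choice recursively along the parse tree of
Fig.~\ref{F:tree} yields a sample from the Boltzmann distribution over
canonical joint structures.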
Along the lines of the design of the Vienna software package
\citep{Hofacker}, \texttt{ripalign} now offers the following
features as optional input parameters:\\
{\sf (1)} a position $i$ can be restricted to form an interior or an
exterior arc (denoted by ``$-$'' and ``\,\textasciicircum\,'',
respectively);\\
{\sf (2)} a position $i$ can be forced to be unpaired (denoted by ``x''); \\
{\sf (3)} a position $i$ can be restricted to form an (interior or
an exterior) arc with {\it some} position $j$
(denoted by ``$*$''); \\
{\sf (4)} a pair of positions $i$ and $j$ can be forced to form an
interior or exterior arc (denoted by ``$(\,)$'' or ``$[\,]$'',
respectively).\\
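One possible encoding of these constraint symbols as a parser sketch
(hypothetical; \texttt{ripalign}'s actual input handling may differ):

```python
def parse_constraints(s):
    """Map a constraint string to per-position restrictions (1-based).
    '-', '^', 'x', '*' follow options (1)-(3); matched '()' / '[]'
    force interior / exterior arcs (option (4)); '.' is unconstrained."""
    symbol = {"-": "interior-arc", "^": "exterior-arc",
              "x": "unpaired", "*": "some-arc"}
    stacks = {"(": [], "[": []}
    match = {")": ("(", "interior"), "]": ("[", "exterior")}
    out = {}
    for i, c in enumerate(s, start=1):
        if c in symbol:
            out[i] = symbol[c]
        elif c in stacks:
            stacks[c].append(i)         # remember the opening position
        elif c in match:
            opener, kind = match[c]
            j = stacks[opener].pop()    # pair it with this closing one
            out[j] = out[i] = (kind, j, i)
    return out
```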
However, the above features are optional. Thus \texttt{ripalign}
can deal with both scenarios: the absence of any \emph{a priori}
information and the existence of specific information, e.g.~the
location of the Sm-binding site, see Fig.~\ref{F:comversion}.
\section{Results and discussion}\label{S:conclusion}
In this paper we present an \emph{a priori} $O(N^6)$ time and
$O(N^4)$ space dynamic programming algorithm \texttt{ripalign},
whose input consists of a pair of interacting MSAs.
\texttt{ripalign} requires only marginally more computational
resources than \texttt{rip} but is, without doubt, still computationally costly.
Approximation algorithms are much faster, for instance
\texttt{PETcofold} \citep{Seemann:10}, having a time complexity of
$O(m\,(N+M)^3\,n)$, where $m$ is the number of sequences in the MSAs, $N$
and $M$ being the sequence lengths of the longer and shorter
alignment, respectively, and $n<N/2$ is the number of iterations for
the adaption of the threshold value to find likely partial secondary
structures. Their basic assumption is that the two secondary
structures fold independently and that intra-loop evaluation
differences are negligible. The flip-side of reducing the
complexity of a folding problem by introducing additional
assumptions, is however, the uncertainty of the quality of the
solution. Point in case here is that the two secondary structures
did not evolve independently, but rather correlated by means of
their functional interaction. We remark that \texttt{ripalign}
(within its complexity limitations) is capable to describe the space
of RNA interaction structures, for instance via Boltzmann sampling,
in detail and transparency.\\
\texttt{ripalign} represents significant improvements in the
following
aspects:\\
{\bf (a)} we incorporate evolutionary factors into the
RNA-RNA interaction structure prediction via alignments as input,\\
{\bf (b)} we introduce the grammar of canonical joint
structures of interacting-alignments,\\
{\bf (c)} we \emph{a priori} factor in structural-constraints, like
for instance, knowledge on Sm-binding sites.\\
Below we shall discuss {\bf (a), (b)} and {\bf (c)} in more detail in the
context of concrete examples. All the MSAs involved in {\bf
(a), (b)} and {\bf (c)} are listed in the SM, Section 2.
{\bf (a): The \emph{fhlA}/\emph{OxyS} interaction}\\
The \emph{OxyS} RNA represses \emph{fhlA} mRNA translation
initiation through base-pairing with two short sequences
\citep{Argaman:00}, one of which overlaps the ribosome
binding sequence while the other resides further downstream, within
the coding region of \emph{fhlA}. Our algorithm predicts correctly
both interaction sites based on MSAs, see Fig.~\ref{F:singlevsmsa}.
In addition, most predicted stacks in the secondary structures of
\emph{fhlA} and \emph{OxyS} agree well with the most frequent
Boltzmann-sampled structure. Two more hybrids, $J^{\sf
Hy}_{56,59;41,44}$ and $J^{\sf Hy}_{81,83;48,50}$, are predicted in
our output. These two additional contact regions, identified in the
partition function, exhibit a significantly lower probability. An
additional hairpin over $R[72,89]$, predicted in \emph{fhlA}
instead of the unpaired segment occurring in the natural structure,
can be understood in the context of free energy minimization.
Comparing the prediction based on the MSAs
(Fig.~\ref{F:singlevsmsa}, middle) with the one based on the
consensus sequence
(Fig.~\ref{F:singlevsmsa}, bottom), we observe: \\
(1) the secondary structure of \emph{fhlA} agrees better with the
annotated joint structure (Fig.~\ref{F:singlevsmsa}, top), \\
(2) the leftmost hybrid agrees better with that of the annotated
structure, \\
(3) the binding-site probability (see SM, Section~5, eq.~(5.5))
of the leftmost hybrid increases by nearly 40\%. \\
On the flip side, due to the gaps in seven out of eight subsequences
induced by $R[98,102]$ (Column 98-102 in \emph{fhlA}), the
prediction quality of the right-most hybrid and its corresponding
contact-region probability decreases slightly.\\
Let us next contrast our results with those of \texttt{PETcofold},
see Fig.~\ref{F:comfhlA}. The latter predicts \emph{one} of the two
interaction sites. The second site is predicted subject to the
condition that constrained stems were not extended
\citep{Seemann:10}. It can furthermore be observed that in order to
predict the second hybrid, at the same time the secondary structures
prediction of both \emph{fhlA} and \emph{OxyS} gets worse.
\texttt{ripalign} predicts both interaction sites situated in
\emph{fhlA} and comes close to predicting the secondary structures
of \emph{fhlA} as well as \emph{OxyS} without any additional
constraints.
{\bf (b): The \emph{SmY-10}/\emph{SL-1} interaction of \emph{C.
elegans}}\\
\cite{MacMorris:07} stipulated that \emph{SmY-10} RNA, possibly
involved in \emph{trans-}splicing, interacts with the splice leader
RNA (\emph{SL1} RNA). In Fig.~\ref{F:comversion}, we show that the
Sm-binding sites (colored in red) of the RNA molecules \emph{SmY-10}
and \emph{SL-1} are $R[56,62]$ and $S[25,31]$, respectively. In
Fig.~\ref{F:comversion}, the top structure is predicted by
\texttt{rip} \citep{rip2}. We observe, firstly, a hybrid
consisting of the single exterior arc $R_{24}S_{67}$ and,
secondly, that the nucleotides of the Sm-binding sites form
intramolecular base pairs. The canonical grammar presented here restricts the
configuration ensemble to canonical joint structures, resulting in
the structure presented in Fig.~\ref{F:comversion} (middle) in which
the peculiar isolated interaction arc disappears. However, the
nucleotides of the Sm-binding sites still form either intra or
inter-molecular base pairs. Incorporating the structural constraints
option we derive the bottom structure displayed in
Fig.~\ref{F:comversion}. Here the Sm-binding sites are
single-stranded. In Table~\ref{T:2} we elaborate this point further
and show that the combination of canonical grammar and structural
constraints eliminates unwanted hybrids and ``frees'' the nucleotides
of the Sm-binding sites from unwanted interactions.
\begin{figure}
\begin{center}
\cite{Argaman:00}
\includegraphics[angle=90,width=1\columnwidth]{fhlanature.eps}
\texttt{ripalign}: MSA-input
\includegraphics[angle=90,width=1\columnwidth]{fhlamsa.eps}
\texttt{ripalign}: Single-sequences input
\includegraphics[angle=90,width=1\columnwidth]{fhlasingle.eps}
\end{center}
\caption{\textbf{Improvement of prediction via
incorporating evolutionary history.} Top: the annotated structure of
the \emph{fhlA}/\emph{OxyS} interaction \citep{Argaman:00}; Middle:
the joint structure predicted by \texttt{ripalign} with MSAs as
input; Bottom: the joint structure predicted by \texttt{ripalign}
with the consensus sequences of MSAs as input. The target site
(green boxes) probabilities (defined in SM, Section~5, eq.~(5.5))
computed by \texttt{ripalign} are annotated explicitly if $>10\%$ or
just by $\leq 10\%$, otherwise. For instance, the probability of the
left-most contact region $R[25,30]$ in \emph{fhlA} (middle) is
$55.4\%$.} \label{F:singlevsmsa}
\end{figure}
\begin{figure}[t]
\begin{center}
\texttt{PETcofold} without the extension of the constrained stems
\includegraphics[angle=90,width=1\columnwidth]{pre1.eps}
\texttt{PETcofold} with the extension of the constrained stems
\includegraphics[angle=90,width=1\columnwidth]{pre2.eps}
\end{center}
\caption{\textbf{The joint structures of \emph{fhlA}/\emph{OxyS}
predicted by \texttt{PETcofold}:} The prediction was performed (top)
without and (bottom) with the extension of the constrained stems
based on the same MSAs shown in Fig.~\ref{F:singlevsmsa}. Here, the
extension of constrained stems is a specific programming technique
of \cite{Seemann:10} to avoid incomplete stems appearing in their
prediction results.} \label{F:comfhlA}
\end{figure}
\begin{figure}[t]
\begin{center}
\texttt{rip}
\includegraphics[angle=90,width=1\columnwidth]{testrip2.eps}
\texttt{ripalign} without structure-constraint
\includegraphics[angle=90,width=1\columnwidth]{testrip3.eps}
\texttt{ripalign} with structure-constraint
\includegraphics[angle=90,width=1\columnwidth]{testrip3r.eps}
\end{center}
\caption{\textbf{\texttt{ripalign} versus \texttt{rip}:} Interaction
of two specific RNA molecules, \emph{SL1} and \emph{SmY-10} of
\emph{Caenorhabditis elegans}. The Sm-binding sites (colored in red)
in the
RNA molecules \emph{SmY-10} and \emph{SL-1} are
$\textbf{5'-AAUUUUUG-3'}(R[56,62])$ and
$\textbf{3'-GUUUUAA-5'}(S[25,31])$, respectively. The joint structure
containing a single exterior arc $R_{24}S_{67}$ (top) is predicted by
\texttt{rip} as implemented by \cite{rip2}. The
joint structure (middle) is predicted by \texttt{ripalign} without any
structural constraint. The joint structure (bottom) is predicted by
\texttt{ripalign} under the structural constraints that
$\textbf{5'-AAUUUUUG-3'}(R[56,62])$ and
$\textbf{3'-GUUUUAA-5'}(S[25,31])$ are Sm-binding sites in the RNA
molecules \emph{SmY-10} and \emph{SL-1}, respectively. The target site
(green boxes) probabilities computed by \texttt{ripalign} are
annotated explicitly if $>10\%$ or just by $\leq 10\%$,
otherwise.}
\label{F:comversion}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
& I&II&III\\ \hline
1&$J^{\sf {Hy}}_{37,40;79,82}$&$J^{\sf {Hy}}_{40,41;50,51}$&
$J^{\sf {Hy}}_{5,6;9,10}$\\
\hline
2&$J^{\sf {Hy}}_{40,41;50,51}$&$J^{\sf {Hy}}_{39,40;51,52}$
&$J^{\sf {Hy}}_{76,78;90,92}$\\
\hline
3&$J^{\sf {Hy}}_{76,78;90,92}$&$J^{\sf {Hy}}_{76,78;90,92}$
&$J^{\sf {Hy}}_{37,40;79,82}$\\
\hline
4&$\mathbf{R_{11}S_{10}}$&$J^{\sf {Hy}}_{11,12;9,10}$&
$J^{\sf {Hy}}_{78,80;89,91}$\\
\hline
5&$J^{\sf {Hy}}_{16,18;33,35}$&$J^{\sf {Hy}}_{78,80;89,91}$&
$J^{\sf {Hy}}_{11,12;51,52}$\\
\hline
6&$J^{\sf {Hy}}_{54,57;65,68}$&$J^{\sf {Hy}}_{54,57;65,68}$&
$J^{\sf {Hy}}_{16,17;47,48}$\\
\hline
\end{tabular}
\caption{\textbf{Top 6 probable hybrids predicted by \texttt{rip}
and \texttt{ripalign}:} Interaction of two specific RNA
molecules, \emph{SL1} and \emph{SmY-10} of \emph{Caenorhabditis
elegans} as illustrated in Fig.~\ref{F:comversion}. The top 6
probable hybrids predicted by \texttt{rip} implemented by
\cite{rip2} are shown in column I. The hybrids listed in column II
are predicted by \texttt{ripalign} without any structure constraint.
The hybrids listed in column III are predicted by \texttt{ripalign}
under the structural constraints that
$\textbf{5'-AAUUUUUG-3'}(R[56,62])$ and
$\textbf{3'-GUUUUAA-5'}(S[25,31])$ are Sm-binding sites (colored in
red) in \emph{SmY-10} and \emph{SL-1}, respectively. Here, we use
$J^{\sf {Hy}}_{i,j;h,l}$ to denote the hybrid induced by $R[i,j]$
and $S[h,l]$.}\label{T:2}
\end{table}
{\bf (c): The \emph{U4}/\emph{U6} interaction}\\
Two of the snRNAs involved in pre-mRNA splicing, \emph{U4} and
\emph{U6}, are known to interact by base pairing
\citep{ZuckerAprison:88}. We divided all known metazoan \emph{U4}
and \emph{U6} snRNAs into three distinct groups and alignments:
protostomia without insects, insects and deuterostomia
\citep{Marz:08}. \cite{Marz:08} observed that insects differ in
their secondary structure from other protostomes, see
Fig.~\ref{F:u4u6}. Comparing all the predicted \emph{U4}/\emph{U6}
interactions,
displayed in Fig.~\ref{F:u4u6}, we can conclude:\\
(1) the secondary partial structures of the \emph{U4}/\emph{U6}
complex for all three groups predicted by \texttt{ripalign} agree
predominantly with the described secondary structures in metazoans
\citep{Thomas:90,Otake:02,Shambaugh:94,Lopez:08, Shukla:02}, e.g.~as
depicted in Fig.~\ref{F:u4u6} (top) for \emph{C.~elegans}
\citep{ZuckerAprison:88}.\\
(2) for all three groups, Stem I and II (Fig.~\ref{F:u4u6}, top) are
highly conserved. External influences, such as protein interactions,
may additionally stabilize stem II.\\
(3) for all three groups, the interaction of the $5'$ hairpin of
\emph{U4} snRNA with the \emph{U6} snRNA seems highly conserved. This
RNA feature is not fully understood, since this element is also
believed to contain intraloop interactions and may bind to a 15.5kDa
protein \citep{Vidovic:00}.\\
(4) for all metazoans, the \emph{U6} snRNA shows conserved
intramolecular interactions between the $3'$ part and the region
downstream of the $5'$-hairpin.\\
(5) for deuterostomes (Fig.~\ref{F:u4u6}, bottom), with a contact-region
probability of 45.5\%, our algorithm identifies a third \emph{U4}/\emph{U6}
interaction, Stem III, to be conserved, which agrees with the findings of
\cite{Jakab:97,Brow:95}. For protostomes, a similar feature with a
contact-region probability of $\leq 10\%$ can also be assumed.\\
(6) for both: protostomia (without insects) and deuterostomes, the
$5'$ hairpin of \emph{U6} snRNA seems to interact with the \emph{U4}
$3'$ hairpin. However, this observation does not hold for insects,
which agrees with a systematically different secondary structure of
spliceosomal RNAs in insects \citep{Marz:08}.\\
\begin{figure}
\begin{center}
\begin{tabular}{c}
\cite{ZuckerAprison:88} \\
\includegraphics[width=0.7\columnwidth]{u4-u6.eps}\\
Protostomia without insects \\
\includegraphics[angle=90,width=0.9\columnwidth]{prot-new.eps} \\
Insects \\
\includegraphics[angle=90,width=0.9\columnwidth]{insectr.eps} \\
Deuterostomia \\
\includegraphics[angle=90,width=0.9\columnwidth]{deutr.eps} \\
\end{tabular}
\end{center}
\caption{\textbf{The \emph{U4}-\emph{U6} interaction prediction with
Sm-binding site constraint in \emph{U4}.} The Sm-binding site in
molecule \emph{U4} is \textbf{$5'$-AAUUUUUG-$3'$} (colored in red).
The top of the figure shows the natural structure of \emph{U4}/\emph{U6}
of \emph{C. elegans} as depicted by \cite{ZuckerAprison:88}, in which
stem I and stem II are colored in green and the Sm-binding site in
red. The joint structures of protostomia (without
insects), insects and deuterostomia (from top to bottom) are
predicted by \texttt{ripalign} under the Sm-binding site
constraint. The target site (green boxes) probabilities computed by
\texttt{ripalign} are annotated explicitly if $>10\%$ or just by
$\leq 10\%$, otherwise.} \label{F:u4u6}
\end{figure}
We finally remark that the quality of prediction of \texttt{ripalign}
depends critically on the quality of the MSAs.
This issue of alignment quality is not easily solved: creating an
alignment without knowing the structure is unlikely to produce a
structural alignment. It might be an option to realign the sequences
of an RNA family taking both the predicted secondary structures and
predicted joint structure with other RNA families into consideration.
Furthermore, \texttt{ripalign} is limited by its \emph{a priori}
output class of joint structures. Thus \texttt{ripalign} cannot identify
any joint structures exhibiting pseudoknots.
To save computational resources, we stipulate that only alignment
positions contribute to indices and loop sizes. This assumption may,
for instance, lead to some interior arcs $R_{i}R_{j}$ having
arc-length smaller than three.
\cite{Bernhart:08} showed that this problem can be alleviated
substantially by introducing a different, more rational handling of
alignment gaps, and by replacing the rather simplistic model of
covariance scoring with more sophisticated RIBOSUM-like scoring
matrices.
\end{methods}
\begin{methods}
\bigskip\par\noindent\textbf{Acknowledgements.} We want to thank
Fenix W.D. Huang and Jan Engelhardt for helpful suggestions. We
thank Sharon Selzo of the Modular and BICoC Benchmark Center, IBM
and Kathy Tzeng of IBM Life Sciences Solutions Enablement. Their
support was vital for all computations presented here. We thank
Albrecht Bindereif, Elizabeth Chester and Stephen Rader for their
\emph{U4}/\emph{U6} analysis. This work was supported by the 973
Project of the Ministry of Science and Technology, the PCSIRT
Project of the Ministry of Education, and the National Science
Foundation of China to CMR and his lab, grant No.\ STA 850/7-1 of
the Deutsche Forschungsgemeinschaft under the auspices of SPP-1258
``Small Regulatory RNAs in Prokaryotes'', as well as the European
Community FP-6 project SYNLET (Contract Number 043312) to Peter F.
Stadler and his lab.
\end{methods}
\bibliographystyle{bioinformatics}
\section{Introduction}
For practical deployment of convolutional neural networks (CNNs), it is important to consider different hardware budgets \cite{dc,han2020ghostnet,han2020model,mobilenetv2}, to name a few: floating point operations (FLOPs), latency, memory footprint and energy consumption. One way to simultaneously accommodate all these budgets is to prune the redundant channels of a model, so that a compact network width can be obtained. Typical channel pruning usually leverages a pre-trained network and implements the pruning in an end-to-end \cite{sss,slimming,su2020data} or layer-by-layer \cite{cp,tang2020reborn} manner. After pruning, the structure of the pre-trained model remains unchanged, so that the pruned network is friendly to off-the-shelf deep learning frameworks and can be further boosted by other techniques, such as quantization \cite{dc} and knowledge distillation \cite{distilling,you2017learning,kong2020learning}.
Recently, \cite{rethinkingpruning} found that the core of channel pruning is to learn a more compact \textit{network width} instead of the retained weights. Other literature also uses the number of channels/filters to indicate the network width. Thus follow-up work resorts to neural architecture search (NAS) \cite{yang2020ista,you2020greedynas,yang2021towards,proxylessnas} or other AutoML techniques for directly searching for an optimal network width, such as
MetaPruning \cite{metapruning}, AutoSlim \cite{autoslim} and TAS \cite{tas}. In their methods, a one-shot supernet is usually leveraged for the evaluation of different widths. Concretely, for the width $c$ at a certain layer, we need to assign $c$ channels (filters) in the layer, and all layers follow the same way. Then all these assigned channels specify a sub-network of the supernet. As a result, the performance of a network width refers to the accuracy of the specified sub-network with shared weights of the supernet. For fair evaluation of different network widths, during the training of the supernet, all network widths will be evenly sampled from the supernet and get optimized accordingly. For brevity, we use the name of layer width to indicate the width for a certain layer, while network width represents the set of widths for all layers.
In this way, how to specify the sub-network(s) for each network width matters for the performance evaluation. However, current methods \cite{metapruning,autoslim,tas} mainly follow a \textit{unilaterally augmented} (UA) principle for the evaluation of network widths in the supernet. Suppose we count channels in a layer from the left to the right as in Figure \ref{motivation}. To evaluate the width $c$, the UA principle simply assigns the leftmost $c$ channels to specify a sub-network for evaluation. In this way, channels within a smaller width will also be used for the evaluation of larger widths. Since we uniformly sample all widths during training the supernet, channels close to the left side will be used more times than those close to the right side in the evaluation of widths, as in Figure \ref{motivation}(a). For example, the leftmost channel will be used 6 times for evaluation while the rightmost channel is only used once. This causes \textit{training unfairness} among the channels and their corresponding kernels: left channels will be trained more than right ones. This training unfairness will affect the accuracy of evaluation, and thus hampers the ability of the supernet to rank over all network widths.
\begin{figure*}[t]
\centering
\includegraphics[width=0.90\linewidth]{figures/fig1_new.png}
\vspace{-2mm}
\caption{Comparison of the unilaterally augmented (UA) principle and our proposed bilaterally coupled (BC) principle in the supernet. In the BC principle, each network width is indicated by two (left and right) paths, so that all channels get the same cardinality for evaluating different widths. However, in the UA principle each width goes through one path, and training unfairness over channels and evaluation bias exist. Under a uniform sampling strategy, for each channel the expectation of the times being evaluated is theoretically equal to the times being trained, since we simply sample each path and train it. For simplicity, we use \emph{cardinality} to refer to the number of times that a channel is used for evaluation over all widths.}
\label{motivation}
\vspace{-5mm}
\end{figure*}
In this paper, we introduce a new supernet called Bilaterally Coupled Network (BCNet) to address the training and evaluation unfairness within the UA principle. In BCNet, each channel is fairly trained and responsible for the same number of widths. Specifically, both in training and evaluation, each width is determined symmetrically by the average performance of bilateral ({\emph{i.e.}}, both left and right) channels. As shown in Figure \ref{motivation}(b), suppose a layer has 6 channels, then each channel of BCNet evenly corresponds to 7 layer widths from the left or right side. In this way, all channels will be trained equally; all widths are bilaterally coupled in BCNet and will be evaluated more fairly.
To encourage a rigorous training fairness over channels, we adopt a complementary training strategy for training BCNet as in Figure \ref{complementary}. As for the subsequent searching, since the evolutionary algorithm is empirically fairly sensitive to the initial population, we also propose a prior sampling method, which generates a good and steady initial population instead of a random initialization. Extensive experiments on the benchmark CIFAR-10 and ImageNet datasets show that our method outperforms the state-of-the-art methods under various FLOPs budgets. For example, our searched EfficientNet-B0 achieves 74.9\% Top-1 accuracy on the ImageNet dataset with 192M FLOPs (2$\times$ acceleration).
\begin{comment}
Our main contributions can be summarized as follows:
\begin{enumerate}
\item We introduce a Bilaterally Coupled Network (BCNet) as the supernet, so that all channels are evenly trained and all widths can be more fairly evaluated.
\item We propose a stochastic complementary strategy to boost the training of BCNet and formulate a prior initial population sampling method for evolutionary search afterwards.
\item Our method is easy to implement, and experiments prove that VGGNet \cite{vgg}, MobileNetV2 \cite{mobilenetv2}, ResNet50 \cite{dr} and even NAS-based EfficientNet-B0 \cite{efficientnet} and ProxylessNAS \cite{proxylessnas} can be further boosted using our BCNet on both the CIFAR-10 and ImageNet datasets.
\end{enumerate}
\end{comment}
\section{Related Work}
Channel pruning is an effective method to compress and accelerate an over-parameterized convolutional neural network, and thus enables the pruned network to accommodate various hardware computational budgets. Extensive studies are illustrated in the comprehensive survey \cite{survey}. Here, we summarize the typical approaches of channel pruning \cite{sss,slimming,cp,tang2020scop} and network width search methods \cite{metapruning,autoslim,tas}.
\textbf{Channel pruning.} Channel pruning is a prevalent method which aims to reduce redundant channels of a heavy model, and is generally implemented by selecting significant channels \cite{slimming,cp} or adding additional data-driven sparsity \cite{sss,dcp,tang2019bringing}. For example, CP \cite{cp} proposes to construct a group Lasso to select unimportant channels. Slimming \cite{slimming} imposes an $\ell_1$ regularization on the scaling factors. DCP \cite{dcp} proposes to construct additional discrimination-aware losses. Despite the achievements, these methods rely heavily on manually assigned pruning ratios or hyperparameter coefficients, which is complicated and time consuming, and hardly finds oracle solutions.
\textbf{Network width search.} Inspired by the development of NAS \cite{proxylessnas,guo2020hit,huang2020explicitly}, network width search methods \cite{metapruning,autoslim,tas,dmcp,amc,su2021locally} generally take a carefully designed one-shot supernet to rank the relative performance of different widths. For example, TAS \cite{tas} aims to search the optimal network width via a learnable continuous parameter distribution. MetaPruning \cite{metapruning} proposes to directly generate representative weights for different widths. AutoSlim \cite{autoslim} proposes to leverage a slimmable network to approximate the accuracy of different network widths. However, all these methods follow the UA principle in assigning channels, which affects the fairness in evaluation. To accurately rank the performance of network widths, our proposed BCNet aims to assign the same opportunity to all channels during training, thus ensuring evaluation fairness in searching optimal widths.
\section{Channel Pruning as Network Width Search}
Formally, suppose the target network to be pruned $\mathcal{N}$ has $L$ layers, and the $i$-th layer has $l_i$ channels. Then channel pruning aims to identify redundant channels (indexed by $\mathcal{I}_{pruned}^i$) layer-wisely, {\emph{i.e.}},
\begin{equation}
\mathcal{I}_{pruned}^i \subset [1:l_i],
\end{equation}
where $[1:l_i]$ is an index set for all integers in the range of $1$ to $l_i$ for the $i$-th layer. However, \cite{rethinkingpruning} empirically finds that the absolute set of pruned channels $\mathcal{I}_{pruned}^i$ and their weights are not really necessary for the performance of the pruned network, but the obtained width $c_i$ actually matters, {\emph{i.e.}},
\begin{equation}
c_i = l_i - |\mathcal{I}_{pruned}^i|.
\end{equation}
In this way, it is intuitive to directly search for the optimal network width to meet the given budgets.
Denote an arbitrary network width as $\c = (c_1,c_2,...,c_L) \in \mathcal{C} = \bigotimes_{i=1}^L [1:l_i]$, where $\bigotimes$ is the Cartesian product. Then the size of the search space $\mathcal{C}$ amounts to $|\mathcal{C}| = \prod_{i=1}^{L}l_i$. However, this search space is fairly huge, {\emph{e.g.}}, $10^{25}$ for $L=25$ layers and $l_i=10$ channels. To reduce the search space, current methods tend to search on a group level instead of a channel-wise level. In specific, all channels at a layer are partitioned evenly into $K$ groups, then we only need to consider $K$ cases; there are just $(l_i/K)\cdot[1:K]$ layer widths for the $i$-th layer. Therefore, the search space $\mathcal{C}$ is shrunk into $\mathcal{C}_K$ with size $|\mathcal{C}_K| = K^L$. In the following, we use both $\mathcal{C}$ and $\mathcal{C}_K$ seamlessly.
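As an illustrative back-of-the-envelope check (not part of our implementation; the function names are ours), the sizes of the full and group-level search spaces can be computed directly:

```python
# Illustrative sketch: size of the width search space, channel-wise vs. the
# reduced group-level version with only K candidate widths per layer.
def search_space_size(channels_per_layer):
    size = 1
    for l in channels_per_layer:
        size *= l           # |C| = prod_i l_i
    return size

def grouped_search_space_size(num_layers, K):
    return K ** num_layers  # |C_K| = K^L

# the example from the text: L = 25 layers with l_i = 10 channels each
print(search_space_size([10] * 25))       # 10 ** 25
# grouping each layer's channels into K = 5 groups shrinks the space to 5^25
print(grouped_search_space_size(25, 5))
```

Grouping trades search-space resolution for tractability, which is why the text works with $\mathcal{C}_K$ in the sequel.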
During searching, the target network is usually leveraged as a supernet $\mathcal{N}$, and different network widths $\c$ can be directly evaluated by sharing the same weights with the supernet.
Then the width searching can be divided into two steps, i.e., supernet training, and searching with supernet. Usually, the original training dataset is split into two datasets, {\emph{i.e.}}, training dataset $\mathcal{D}_{tr}$ and validation dataset $\mathcal{D}_{val}$. The weights $W$ of the target supernet $\mathcal{N}$ are trained by uniformly sampling a width $\c$ and optimizing its corresponding sub-network with weights ${\bm{w}}_\c \subset W$,
\begin{equation}
W^* = \mathop{\arg\min}_{W}~ \Exp_{\c\in U(\mathcal{C})} \left[\kern-0.4em\left[ \L_{tr}({\bm{w}}_\c; \mathcal{N}, \c, \mathcal{D}_{tr})\right]\kern-0.4em\right],
\label{eq2}
\end{equation}
where $\L_{tr}$ is the training loss function, $U(\mathcal{C})$ is a uniform distribution of network widths,
and $\Exp\left[\kern-0.4em\left[\cdot\right]\kern-0.4em\right]$ is the expectation
of random variables. Then the optimal network width $\c^*$ corresponds to the one with best performance on validation dataset, {\emph{e.g.}}~classification accuracy,
\begin{equation}
\begin{aligned}
\c^* = &\mathop{\arg\max}_{\c \in \mathcal{C}}~\mbox{Accuracy}(\c, {\bm{w}}^*_\c; W^*, \mathcal{N}, \mathcal{D}_{val}), \\ &~{\mbox{s.t.}}~\mbox{FLOPs}(\c) \leq F_b,
\end{aligned}
\label{eq3}
\end{equation}
where $F_b$ is a specified budget of FLOPs. Here we consider FLOPs rather than latency as the hardware
constraint since we are not targeting any specific hardware device like EfficientNet \cite{efficientnet} and other pruning baselines \cite{dcp,tas,amc}. The searching of Eq.\eqref{eq3} can be fulfilled efficiently by various algorithms, such as random or evolutionary search \cite{metapruning}. Afterwards, the performance of the searched optimal width $\c^*$ is analyzed by training from scratch.
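For intuition, the search of Eq.\eqref{eq3} can be sketched as a constrained random search; \texttt{flops\_fn} and \texttt{accuracy\_fn} below are hypothetical placeholders for the FLOPs lookup table and the supernet evaluation, and an evolutionary search would replace the plain sampling loop:

```python
import random

# Hedged sketch of the search step: among sampled widths satisfying the FLOPs
# budget, keep the one with the best (placeholder) validation accuracy.
def search_best_width(max_channels, flops_fn, accuracy_fn, budget,
                      n_samples=200, seed=0):
    rng = random.Random(seed)
    best_width, best_acc = None, float("-inf")
    for _ in range(n_samples):
        c = tuple(rng.randint(1, l) for l in max_channels)
        if flops_fn(c) > budget:       # hard FLOPs constraint F(c) <= F_b
            continue
        acc = accuracy_fn(c)           # stands in for supernet evaluation
        if acc > best_acc:
            best_width, best_acc = c, acc
    return best_width, best_acc

# toy problem: "FLOPs" = sum of widths, "accuracy" rewards the narrowest layer
w, a = search_best_width((8, 8, 8), flops_fn=sum,
                         accuracy_fn=lambda c: min(c), budget=15)
print(w, a)
```

In the paper the sampling loop is replaced by evolutionary search, and the accuracy comes from the trained supernet on $\mathcal{D}_{val}$; this sketch only shows the shape of the constrained maximization.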
\section{BCNet: Bilaterally Coupled Network}
\begin{figure*}[t]
\centering
\includegraphics[width=0.82\linewidth]{figures/fig2_new.png}
\caption{Illustration of the complementary of a network width for both our bilaterally coupled (BC) principle and the baseline unilaterally augmented (UA) principle. In the BC principle, for any width $\c$, all channels will be trained evenly (2 times) by training $\c$ and its complementary together. However, this fairness cannot be ensured in the UA principle; it even gets worse: some channels will be trained 2 times while others will be trained once or not at all.}
\label{complementary}
\vspace{-5mm}
\end{figure*}
\subsection{BCNet as a new supernet}
As illustrated previously, for evaluation of width $c$ at certain layer, unilaterally augmented (UA) principle assigns the leftmost $c$ channels to indicate its performance as Figure \ref{motivation}(a). Hence all channels used for width $c$ can be indexed by a set $\mathcal{I}_{UA}(c)$, {\emph{i.e.}},
\begin{equation}
\mathcal{I}_{UA}(c) = [1:c].
\label{ua_channel}
\end{equation}
However, the UA principle imposes an unfairness in updating channels (filters) for the supernet. Channels with small index will be assigned to both small and large widths. Since different widths are sampled uniformly during the training of the supernet, kernels for channels with smaller index thus get more training accordingly. To quantify this training unfairness, we can use the number of times that a channel is used for the evaluation of all widths to reflect its \textit{training degree}, and we name it as \emph{cardinality}. Suppose a layer has at most $l$ channels, then the cardinality of the $c$-th channel in the UA principle is
\begin{equation}
\mbox{Card-UA}(c) = l-c+1.
\label{ua_counts}
\end{equation}
In this way, the cardinality of all channels varies significantly and thus they get trained much differently, which introduces evaluation bias when we use the trained supernet to rank the performance over all widths.
To alleviate the evaluation bias over widths, our proposed BCNet serves as a new supernet which promotes the fairness {{w.r.t.}}~channels. As shown in Figure \ref{motivation}(b), in BCNet each width is simultaneously evaluated by the sub-networks corresponding to left and right channels. It can be seen as two identical networks $\mathcal{N}_l$ and $\mathcal{N}_r$ bilaterally coupled with each other, which both use the UA principle for evaluation but count channels in reversed order. In this way, all channels $\mathcal{I}_{BC}(c)$ used for evaluating width $c$ in BCNet are indexed by
\begin{align}
\mathcal{I}_{BC}(c) &= \mathcal{I}_{UA}^l(c) \uplus \mathcal{I}_{UA}^r(c) \\
&= [1:c] \uplus [(l-c+1):l],
\label{bc_channel}
\end{align}
where $\uplus$ means the merge of two lists with repeatable elements. In detail, left channels in $\mathcal{N}_l$ follow the same setting with the UA principle as Eq.\eqref{ua_channel}, while for right channels in $\mathcal{N}_r$, we count channels starting from the right with $\mathcal{I}_{UA}^r(c) = [(l-c+1):l]$.
As a result, the cardinality of each channel within BC principle is the sum from both two supernets $\mathcal{N}_l$ and $\mathcal{N}_r$. In detail, since channels count from the right side within $\mathcal{N}_r$, the cardinality for the $c$-th channel in left side corresponds to the cardinality of $l - c + 1$-th channel in right side with Eq.\eqref{ua_counts}. As a result, the cardinality for the $c$-th channel in BC principle is
\begin{multline}
\mbox{Card-BC}(c) = \mbox{Card-UA}(c) + \mbox{Card-UA}(l+1-c) \\
= (l-c+1) + c = l+1
\label{bc_counts}
\end{multline}
Therefore, the cardinality always amounts to the same constant value ({\emph{i.e.}}, $7$ in Figure \ref{motivation}(b)) and is independent of the channel index under the BC principle, thus ensuring fairness at the channel (filter) level, which helps to fairly rank network widths with our BCNet.
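The cardinalities of Eq.\eqref{ua_counts} and Eq.\eqref{bc_counts} can be double-checked by direct enumeration; the following is a small illustrative script, not our training code:

```python
# Count, for a layer with l channels, how many of the l candidate widths
# touch each channel under the UA and BC principles (0-indexed channels).
def cardinality_UA(l):
    counts = [0] * l
    for width in range(1, l + 1):       # UA: width c uses the leftmost c channels
        for ch in range(width):
            counts[ch] += 1
    return counts

def cardinality_BC(l):
    counts = [0] * l
    for width in range(1, l + 1):       # BC: leftmost c AND rightmost c channels
        for ch in range(width):         # left path
            counts[ch] += 1
        for ch in range(l - width, l):  # right path
            counts[ch] += 1
    return counts

l = 6
print(cardinality_UA(l))  # [6, 5, 4, 3, 2, 1] -> Card-UA(c) = l - c + 1
print(cardinality_BC(l))  # [7, 7, 7, 7, 7, 7] -> Card-BC(c) = l + 1
```

The uniform output of `cardinality_BC` is exactly the constant $l+1$ claimed above, while the UA counts decay linearly from left to right.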
\subsection{Stochastic Complementary Training Strategy}
To train the BCNet, we adopt stochastic training, {\emph{i.e.}}, uniformly sampling a network width $\c$ from the search space $\mathcal{C}_K$, and training its corresponding sub-network $\mathcal{N}(W,\c)$ with training data $\mathcal{D}_{tr}$ afterwards. Note that a single $\c$ has two paths in BCNet; during training, a training batch $\mathcal{B}\subset\mathcal{D}_{tr}$ is supposed to forward simultaneously through both $\mathcal{N}_l(W)$ and $\mathcal{N}_r(W)$. Then the training loss is the averaged loss of both paths, {\emph{i.e.}}, for each batch $\mathcal{B}$
\begin{equation}
\begin{aligned}
\L_{tr}(W,\c;\mathcal{B}) = \frac{1}{2} \cdot\left(
\L_{tr}(\mathcal{N}_l; \c, \mathcal{B}) + \L_{tr}(\mathcal{N}_r; \c, \mathcal{B})\right).
\end{aligned}
\label{eq7}
\end{equation}
With our BCNet, channels are trained more evenly than with other methods; however, this still cannot ensure a rigorous fairness over channels. For example, suppose a layer has 3 channels and we sample 10 widths on this layer. It may then turn out that the first channel is sampled 4 times while the other two are sampled 3 times each. The first channel thus still gets more training than the others, which ruins the training fairness.
To solve this issue, we propose to leverage a complementary training strategy, {\emph{i.e.}}, after sampling a network width $\c$, both $\c$ and its complementary $\bar{\c}$ get trained. For example, suppose a width $\c=(3,2,4)$ with maximum 6 channels per layer, then its complementary amounts to $\bar{\c} = (3,4,2)$ as in Figure \ref{complementary}. The training loss for the BCNet is thus
\begin{equation}
\L_{tr}(W;\mathcal{D}_{tr},\mathcal{N}) = \Exp_{\bm{c}\in U(\mathcal{C})} \left[\kern-0.4em\left[ \mathcal{L}_{tr}(W,\bm{c};\mathcal{D}_{tr}) + \mathcal{L}_{tr}(W,\bar{\bm{c}};\mathcal{D}_{tr})\right]\kern-0.4em\right].
\label{eq8}
\end{equation}
In this way, when we sample a width $\c$, we can always ensure all channels are evenly trained, and expect a fairer comparison over all widths based on the trained BCNet. Note that this complementary strategy only works for our BCNet, and fails in the typical unilaterally augmented (UA) principle \cite{autoslim,metapruning,tas}, where it even worsens the bias as shown in Figure \ref{complementary}(b).
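A minimal sketch of this pairing (our illustration, with hypothetical helper names): the complement of a width under per-layer maximum $l$ is $l - c$ at each layer, and training $\c$ together with $\bar{\c}$ touches every channel of a layer the same number of times:

```python
# Hedged sketch of the complementary training pairing: c_bar = l - c per layer.
def complementary_width(width, max_channels):
    return tuple(l - c for c, l in zip(width, max_channels))

def channels_touched_BC(c, l):
    # 1-indexed channels used by the two BC paths (leftmost c and rightmost c)
    return list(range(1, c + 1)) + list(range(l - c + 1, l + 1))

print(complementary_width((3, 2, 4), (6, 6, 6)))  # (3, 4, 2), as in Figure 2

# per-channel training counts in one layer when c and its complement are trained
l, c = 6, 2
counts = [0] * l
for ch in channels_touched_BC(c, l) + channels_touched_BC(l - c, l):
    counts[ch - 1] += 1
print(counts)  # [2, 2, 2, 2, 2, 2]: every channel is trained exactly twice
```

This per-channel balance is the reason the complement is defined this way; under the UA principle the same pairing would leave the middle channels under-trained.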
\subsection{Evolutionary Search with Prior Initial Population Sampling}
After the BCNet $\mathcal{N}^*$ is trained with weights $W^*$, we can evaluate each width by examining its performance ({\emph{e.g.}}, classification accuracy) on the validation dataset $\mathcal{D}_{val}$ as Eq.\eqref{eq3}. Besides, similar to the training of BCNet, the performance of a width $\c$ is indicated by the averaged accuracy of its left and right paths. Moreover, to boost the searching performance, we leverage the multi-objective NSGA-II \cite{deb2002fast} algorithm for evolutionary search, so that the hard FLOPs constraint can be well integrated. In general, evolutionary search is sensitive to the initial population before the sequential mutation and crossover process. In this way, we propose a prior initial population sampling method to allocate a promising initial population, which is expected to contribute to the evolutionary searching performance.
Concretely, suppose the population size is $P$, and we hope the sampled initial population to have high performance in order to generate competitive generations during search. Note that during the training of BCNet, we have also sampled various widths, whose quality can be reflected by the training loss. In this way, we can record the top $m$ ({\emph{e.g.}}, $m=100$) widths $\{\c^{(i)}\}_{i=1}^m$ with smallest training loss $\{\ell^{(i)}\}_{i=1}^m$ as priors for good network widths. However, even if the group size for every layer is set to 10, the search space of MobileNetV2 is as large as $10^{25}$, which is too large to search good widths within limited training epochs. Thus we aim to learn layer-wise discrete sampling distributions $\P(l,c_i)$ to stochastically sample a width $\c = (c_1,\ldots,c_l,\ldots,c_L)$, where $\P(l, c_i)$ indicates the probability of sampling width $c_i$ at the $l$-th layer, subject to $\sum_{i}\P(l, c_i) = 1$.
Note that these $m$ prior network widths actually can reflect the preference over some widths for each layer. For example, if at a layer $l$, a width $c_i$ exists in these $m$ prior widths with smaller training loss, then the sampling probability $\P(l,c_i)$ should be large as well. In this way, we can measure the \textit{potential error} $\mathcal{E}(l,c_i)$ of sampling the width $c_i$ at the $l$-th layer by recording the averaged training loss of all $m$ widths going through it, {\emph{i.e.}},
\begin{equation}
\mathcal{E}(l,c_i) = \frac{1}{\sum_{j=1}^m \mathbf{1}\{\c^{(j)}_l = i\}} \cdot \sum_{j=1}^m \ell^{(j)} \cdot \mathbf{1}\{\c^{(j)}_l = i\},
\label{eq10}
\end{equation}
where $\mathbf{1}\{\cdot\}$ is the indicator function. Then the objective is to sample with minimum expected potential errors, {\emph{i.e.}},
\begin{equation}
\begin{aligned}
\mathop{\min}_{\P}~\sum_{l}\sum_{i}&\P(l, c_i) \cdot \mathcal{E}(l, c_i),~{\mbox{s.t.}}~\sum_{i}\P(l, c_i) = 1, \\
&\P(l, c_i)\geq 0, \forall~l = 1,...,L.
\end{aligned}
\label{eq11}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{figures/fig_population.eps}
\caption{Histogram of Top-1 accuracy of searched widths on BCNet by evolutionary searching method with our prior or random initial population sampling {{w.r.t.}}~ResNet50 (2G FLOPs) on ImageNet dataset.}
\label{prior}
\vspace{-6mm}
\end{figure}
In addition, we also need to deal with the hard FLOPs constraint in the initial population. Since the FLOPs of a layer depends on the channels of its input and output, we can limit the expected FLOPs of the sampled network width, {\emph{i.e.}},
\begin{equation}
\sum_{l}\sum_{(i,j)} \P(l, c_i) \cdot F(l,c_i,c_j) \cdot \P(l+1, c_j) \leq F_b,
\label{eq12}
\end{equation}
where $F(l,c_i,c_j)$ is the FLOPs of the $l$-th layer with $c_i$ input channels and $c_j$ output channels, which can be pre-calculated and stored in a lookup table. Then Eq.\eqref{eq12} is integrated as an additional constraint for the problem Eq.\eqref{eq11}. The overall problem is a QCQP (quadratically constrained quadratic program), which can be efficiently solved by many off-the-shelf solvers, such as CVXPY \cite{cvxpy}, GH \cite{qcqp}. As Figure \ref{prior} shows, our proposed sampling method can significantly boost evolutionary search by providing better initial populations. On average, the performance of our searched widths is much better than that obtained by random initial population, which proves the effectiveness of our proposed sampling method.
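The potential error of Eq.\eqref{eq10} can be estimated from the recorded prior widths and losses as in the following sketch (toy, made-up data; assigning unseen width choices an infinite error is our own choice here, so that their sampling probability vanishes):

```python
# Hedged sketch of Eq. (10): E(l, i) = average training loss of all recorded
# prior widths whose choice at layer l equals i.
def potential_errors(widths, losses, num_choices):
    L = len(widths[0])
    E = [[0.0] * num_choices for _ in range(L)]
    for l in range(L):
        for i in range(num_choices):
            hits = [loss for w, loss in zip(widths, losses) if w[l] == i]
            E[l][i] = sum(hits) / len(hits) if hits else float("inf")
    return E

# toy record of m = 3 prior widths over L = 2 layers, K = 3 choices per layer
widths = [(0, 2), (1, 2), (0, 1)]
losses = [0.9, 1.1, 0.7]
E = potential_errors(widths, losses, num_choices=3)
print(E)  # layer 0: choice 0 -> (0.9 + 0.7) / 2 = 0.8, choice 1 -> 1.1
```

These errors form the linear objective of Eq.\eqref{eq11}; the distributions $\P(l,c_i)$ are then obtained by the constrained optimization rather than by normalizing the errors directly.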
\begin{table*}[!t]
\centering
\scriptsize
\caption{Performance comparison of ResNet50 and MobileNetV2 on ImageNet. Methods with "*" denotes that the results are reported with knowledge distillation.}
\label{Experiments_Imagenet}
{\begin{tabular}{c|c|cc|cc||c|c|cc|cc}
\hline
\multicolumn{6}{c||}{ResNet50} & \multicolumn{6}{c}{MobileNetV2}\\ \hline
FLOPs level&Methods&FLOPs&Parameters&Top-1&Top-5&FLOPs level&Methods&FLOPs&Parameters&Top-1&Top-5 \\ \hline
\multirow{7}*{3G} & AutoSlim* \cite{autoslim} & 3.0G & 23.1M & 76.0\% & - & \multirow{5}*{305M (1.5$\times$)} & AutoSlim* \cite{autoslim} & 305M & 5.7M & 74.2\% & - \\
& MetaPruning \cite{metapruning} & 3.0G & - & 76.2\% & - && Uniform & 305M & 3.6M & 72.7\% & 90.7\% \\
& LEGR \cite{legr} & 3.0G & - & 76.2\% & - && Random & 305M & - & 71.8\% & 90.2\% \\
& Uniform & 3.0G & 19.1M & 75.9\% & 93.0\% && \textbf{BCNet} & 305M & 4.8M & \textbf{73.9\%} & 91.5\% \\
& Random & 3.0G & - & 75.2\% & 92.5\% && \textbf{BCNet*} & 305M & 4.8M & \textbf{74.7\%} & 92.2\% \\ \cline{7-12}
& \textbf{BCNet} & 3.0G & 22.6M & \textbf{77.3\%} & 93.7\% &\multirow{12}*{200M} & Uniform & 217M & 2.7M & 71.6\% & 89.9\% \\ \cline{1-6}
& SSS \cite{sss} & 2.8G & - & 74.2\% & 91.9\% && Random & 217M & - & 71.1\% & 89.6\% \\
\multirow{12}*{2G} & GBN \cite{gbn} & 2.4G & 31.83M & 76.2\% & 92.8\% && \textbf{BCNet} & 217M & 3.0M & \textbf{72.5\%} & 90.6\% \\
& SFP \cite{sfp} & 2.4G & - & 74.6\% & 92.1\% && \textbf{BCNet*} & 217M & 3.0M & \textbf{73.5\%} & 91.3\% \\
& LEGR \cite{legr} & 2.4G & - & 75.7\% & 92.7\% && MetaPruning \cite{metapruning} & 217M & - & 71.2\% & - \\
& FPGM \cite{fpgm} & 2.4G & - & 75.6\% & 92.6\% && LEGR \cite{legr} & 210M & - & 71.4\% & - \\
& TAS* \cite{tas} & 2.3G & - & 76.2\% & 93.1\% && AMC \cite{amc} & 211M & 2.3M & 70.8\% & - \\
& DMCP \cite{dmcp} & 2.2G & - & 76.2\% & - && AutoSlim* \cite{autoslim} & 207M & 4.1M & 73.0\% & - \\
& MetaPruning \cite{metapruning} & 2.0G & -
& 75.4\% & - && Uniform & 207M & 2.7M & 71.2\% & 89.6\% \\
& AutoSlim* \cite{autoslim} & 2.0G & 20.6M & 75.6\% & - && Random & 207M & - & 70.5\% & 89.2\% \\
& Uniform & 2.0G & 13.3M & 75.1\% & 92.7\% && \textbf{BCNet} & 207M & 3.1M & \textbf{72.3\%} & 90.4\% \\
& Random & 2.0G & - & 74.6\% & 92.2\% && \textbf{BCNet*} & 207M & 3.1M & \textbf{73.4\%} & 91.2\% \\ \cline{7-12}
& \textbf{BCNet} & 2.0G & 18.4M & \textbf{76.9\%} & 93.3\% &\multirow{11}*{100M} & MetaPruning \cite{metapruning} & 105M & - & 65.0\% & - \\ \cline{1-6}
\multirow{11}*{1G} & AutoPruner \cite{autopruner} & 1.4G & - & 73.1\% & 91.3\% && Uniform & 105M & 1.5M & 65.1\% & 89.6\% \\
& MetaPruning \cite{metapruning} & 1.0G & - & 73.4\% & - && Random & 105M & - & 63.9\% & 89.2\% \\
& AutoSlim* \cite{autoslim} & 1.0G & 13.3M & 74.0\% & - && \textbf{BCNet} & 105M & 2.3M & \textbf{68.0\%} & 89.1\% \\
& Uniform & 1.0G & 6.9M & 73.1\% & 91.8\% && \textbf{BCNet*} & 105M & 2.3M & \textbf{69.0\%} & 89.9\% \\
& Random & 1.0G & - & 72.2\% & 91.4\% && MuffNet \cite{muffnet} & 50M & - & 50.3\% & - \\
& \textbf{BCNet} & 1.0G & 12M & \textbf{75.2\%} & 92.6\% && MetaPruning \cite{metapruning} & 43M & -& 58.3\% & - \\
& AutoSlim* \cite{autoslim} & 570M & 7.4M & 72.2\% & - && Uniform & 50M & 0.9M & 59.7\% & 82.0\% \\
& Uniform & 570M & 6.9M & 71.6\% & 90.6\% && Random & 50M & - & 57.4\% & 81.2\% \\
& Random & 570M & - & 69.4\% & 90.3\% && \textbf{BCNet} & 50M & 1.6M & \textbf{62.7\%} & 83.7\% \\
& \textbf{BCNet} & 570M & 12.0M & \textbf{73.2\%} & 91.1\% && \textbf{BCNet*} & 50M & 1.6M & \textbf{63.8\%} & 84.6\% \\ \hline
\end{tabular}}
\vspace{-5mm}
\end{table*}
\section{Experimental Results}
In this section, we conduct extensive experiments on the ImageNet and CIFAR-10 datasets to validate the effectiveness of our algorithm. For all structures, we search on the reduced space $\mathcal{C}_K$ with default $K=20$. Note that most pruning methods do not report their results by incorporating the knowledge distillation (KD) \cite{distilling,du2020agree} improvement in retraining, except for MobileNetV2. Thus in our method, we likewise include KD in the final retraining only for MobileNetV2. Detailed experimental settings are elaborated in the supplementary materials.
\textbf{Comparison methods.} We include multiple competing pruning, network width search methods and NAS models for comparison, such as DMCP \cite{dmcp}, TAS \cite{tas}, AutoSlim \cite{autoslim}, MetaPruning \cite{metapruning}, AMC \cite{amc}, DCP \cite{dcp}, LEGR \cite{legr}, CP \cite{cp}, AutoPruner \cite{autopruner}, SSS \cite{sss}, EfficientNet-B0 \cite{efficientnet} and ProxylessNAS \cite{proxylessnas}. Moreover, we also consider two vanilla baselines. \textit{Uniform}: we shrink the width of each layer with a fixed factor to meet FLOPs budget. \textit{Random}: we randomly sample 20 networks under FLOPs constraint, and train them by 50 epochs, then we continue training the one with the highest performance and report its final result.
\begin{table}[!t]
\centering
\footnotesize
\caption{Searching results of EfficientNet-B0 and ProxylessNAS on ImageNet dataset.}
\label{Experiments_Efficientnetb0}
{\begin{tabular}{c|c|c|c|c}
\hline
\multicolumn{5}{c}{EfficientNet-B0} \\ \hline
Groups&Methods&Param&Top-1&Top-5 \\ \hline
\multirow{3}*{385M}&Uniform & 5.3M & 76.88\% & 92.64\% \\
&Random & 5.1M & 76.37\% & 92.25\% \\
& BCNet & 6.9M & \textbf{77.36\%} & 93.17\% \\ \hline
\multirow{3}*{192M}&Uniform & 2.7M & 74.26\% & 92.24\% \\
&Random & 2.9M & 73.82\% & 91.86\% \\
& BCNet & 3.8M & \textbf{74.92\%} & 92.06\% \\ \hline
\multicolumn{5}{c}{ProxylessNAS} \\ \hline
Groups&Methods&Param&Top-1&Top-5 \\ \hline
\multirow{3}*{320M} & Uniform & 4.1M & 74.62\% & 91.78\% \\
& Random & 4.3M & 74.16\% & 91.23\% \\
&BCNet & 5.4M & \textbf{75.07\%} & 91.97\% \\ \hline
\multirow{3}*{160M} & Uniform & 2.2M & 71.16\% & 89.49\% \\
&Random & 2.5M & 70.89\% & 89.12\% \\
&BCNet & 2.9M & \textbf{71.87\%} & 89.96\% \\ \hline
\end{tabular}}
\vspace{-5mm}
\end{table}
\subsection{Results on ImageNet Dataset}
ImageNet dataset contains 1.28M training images and 50K validation images from 1K classes. In specific, we report the accuracy on the validation dataset as \cite{slimming,autoslim}; the original model is taken as the supernet, while for searching at the 1.0$\times$ FLOPs level of a model, the supernet refers to a 1.5$\times$ FLOPs version of the original model obtained by uniform width scaling. To verify the performance on both heavy and light models, we search on ResNet50 and MobileNetV2 with different FLOPs budgets. In our experiments, the original ResNet50 (MobileNetV2) has 25.5M (3.5M) parameters and 4.1G (300M) FLOPs with 77.5\% (72.6\%) Top-1 accuracy, respectively.
As shown in Table \ref{Experiments_Imagenet}, our BCNet achieves the highest accuracy on ResNet50 and MobileNetV2 {{w.r.t.}}~different FLOPs, which indicates the superiority of our BCNet over other pruning methods. For example, our 3G FLOPs ResNet50 decreases only 0.2\% Top-1 accuracy compared to the original model, which exceeds AutoSlim \cite{autoslim} and MetaPruning \cite{metapruning} by 1.3\% and 1.1\%. While for MobileNetV2, our 207M MobileNetV2 exceeds the state-of-the-art AutoSlim and MetaPruning by 0.4\% and 1.1\%, respectively. In addition, our BCNet surpasses other algorithms even more on the tiny MobileNetV2 (105M), with 68\% Top-1 accuracy, exceeding MetaPruning by 3.0\%.
To further demonstrate the effectiveness of our BCNet on highly efficient models, we conduct searching on the NAS-based models EfficientNet-B0 and ProxylessNAS. The original EfficientNet-B0 (ProxylessNAS) has 5.3M (4.1M) parameters and 385M (320M) FLOPs with 76.88\% (74.62\%) Top-1 accuracy, respectively. As shown in Table \ref{Experiments_Efficientnetb0}, although the increase of performance is not as signicant as in Table \ref{Experiments_Imagenet}, our method can still boost the NAS-based models by more than 0.4\% on Top-1 accuracy.
\begin{table}[t]
\centering
\caption{Performance comparison of MobileNetV2 and VGGNet on CIFAR-10.}
\label{Experiment_Cifar10}
\scriptsize
{\begin{tabular}{c|c|ccc}
\hline
\multicolumn{5}{c}{MobileNetV2} \\ \hline
Groups&Methods&FLOPs&Params&accuracy \\ \hline
\multirow{4}*{200M} & DCP \cite{dcp} & 218M & - & 94.69\% \\
& Uniform & 200M & 1.5M & 94.57\% \\
& Random & 200M & - & 94.20\% \\
& \textbf{BCNet} & 200M & 1.5M & \textbf{95.44\%}\\ \cline{1-5}
\multirow{4}*{146M} & MuffNet \cite{muffnet} & 175M & - & 94.71\% \\
& Uniform & 146M & 1.1M & 94.32\% \\
& Random & 146M & - & 93.85\% \\
& \textbf{BCNet} & 146M & 1.2M & \textbf{95.42\%} \\ \cline{1-5}
\multirow{5}*{44M} & AutoSlim \cite{autoslim} & 88M & 1.5M & 93.20\% \\
& AutoSlim \cite{autoslim} & 59M & 0.7M & 93.00\% \\
& MuffNet \cite{muffnet} & 45M & - & 93.12\% \\
& Uniform & 44M & 0.3M & 92.88\% \\
& Random & 44M & - & 92.31\% \\
& \textbf{BCNet} & 44M & 0.4M & \textbf{94.42\%} \\ \cline{1-5}
\multirow{4}*{28M} & AutoSlim \cite{autoslim} & 28M & 0.3M & 92.00\%\\
& Uniform & 28M & 0.2M & 92.37\% \\
& Random & 28M & - & 91.69\% \\
& \textbf{BCNet} & 28M & 0.2M & \textbf{94.02\%} \\ \hline
\multicolumn{5}{c}{VGGNet} \\ \hline
Groups&Methods&FLOPs&Params&accuracy \\ \hline
\multirow{5}*{200M} & Slimming \cite{slimming} & 199M & 10.4M & 93.80\% \\
& DCP \cite{dcp} & 199M & 10.4M & 94.16\% \\
& Uniform & 199M & 10.0M & 93.45\% \\
& Random & 199M & - & 93.02\% \\
& \textbf{BCNet} & 197M & 3.1M & \textbf{94.36\%} \\ \cline{1-5}
\multirow{8}*{100M$+$} & Uniform & 185M & 9.3M & 93.30\% \\
& Random & 185M & - & 92.71\% \\
& \textbf{BCNet} & 185M & 6.7M & \textbf{94.14\%} \\
& CP \cite{cp} & 156M & 7.7M & 93.67\% \\
& Multi-loss \cite{multi} & 140M & 5.5M & 94.05\% \\
& Uniform & 138M & 6.8M & 93.14\% \\
& Random & 138M & - & 92.17\% \\
& \textbf{BCNet} & 138M & 3.3M & \textbf{94.09\%} \\ \cline{1-5}
\multirow{5}*{77M} & CGNets \cite{cgnet} & 91.8M & - & 92.88\% \\
& Uniform & 77.0M & 3.9M & 92.38\% \\
& Random & 77.0M & - & 91.72\% \\
& \textbf{BCNet} & 77.0M & 1.2M & \textbf{93.53\%} \\
& CGNet \cite{cgnet} & 61.4M & - & 92.41\% \\ \hline
\end{tabular}}
\vspace{-7mm}
\end{table}
\subsection{Results on CIFAR-10 Dataset}
We also examine the performance of MobileNetV2 and VGGNet on the moderate-scale CIFAR-10 dataset, which consists of 50K training and 10K testing images of size 32$\times$32 from 10 categories. Our original VGGNet (MobileNetV2) has 20M (2.2M) parameters and 399M (297M) FLOPs with an accuracy of 93.99\% (94.81\%).
As shown in Table \ref{Experiment_Cifar10}, our BCNet still enjoys great advantages at various FLOPs levels. For instance, our 200M MobileNetV2 achieves 95.44\% accuracy, outperforming even the original model by 0.63\%. Moreover, even at a super tiny size (28M), our BCNet still attains 94.02\% accuracy, surpassing the state-of-the-art AutoSlim \cite{autoslim} by 2.0\%. As for VGGNet, our BCNet outperforms the competing channel pruning methods DCP \cite{dcp} and Slimming \cite{slimming} by 0.20\% and 0.56\%, respectively, at a 2$\times$ acceleration rate.
\subsection{Ablation Studies}
\label{ablation}
\textbf{Effect of BCNet as a supernet.} To validate the effectiveness of our proposed supernet BCNet, we search ResNet50, MobileNetV2, EfficientNet-B0 and ProxylessNAS on the ImageNet dataset with 2$\times$ acceleration. Our baseline supernet is the one adopted by AutoSlim \cite{autoslim}, which follows the unilateral augmented principle to evaluate a network width. As the results in Table \ref{BCNet_analysis} show, under greedy search, using our BCNet evaluation mechanism alone (second line) yields a gain of 0.27\% to 0.66\% Top-1 accuracy. When searching with evolutionary algorithms, the gain still reaches 0.28\% to 0.35\% Top-1 accuracy across various models. This indicates that using BCNet as the supernet boosts both evaluation and search performance. As for the complementary training strategy, it further improves our BCNet, raising MobileNetV2 (ResNet50) from 69.92\% (76.41\%) to 70.04\% (76.56\%) Top-1 accuracy. Note that greedy search without the BCNet supernet amounts to AutoSlim, so we can further highlight the superiority of our method over AutoSlim, with Top-1 accuracy of 70.20\% (76.90\%) vs. 69.52\% (75.94\%) on MobileNetV2 (ResNet50).
\begin{table*}[t]
\caption{Performance of searched MobileNetV2 (150M FLOPs), ResNet50 (2G FLOPs), EfficientNet-B0 and ProxylessNAS on ImageNet dataset with different supernet and searching methods.}
\label{BCNet_analysis}
\centering
\scriptsize
\begin{tabular}{|c|c|c|c|c||cc|cc|cc|cc|} \hline
\multicolumn{2}{|c|}{evaluator} & \multicolumn{3}{c||}{searching}& \multicolumn{8}{c|}{models} \\ \cline{1-13}
BCNet & complementary & greedy & \multicolumn{2}{c||}{evolutionary}& \multicolumn{2}{c|}{MobileNetV2} & \multicolumn{2}{c|}{ResNet50}&\multicolumn{2}{c|}{EfficientNet-B0}& \multicolumn{2}{c|}{ProxylessNAS}\\ \cline{4-13}
supernet &training&search&random & prior & Top-1 & Top-5&Top-1 & Top-5&Top-1 & Top-5&Top-1 & Top-5\\ \hline
& &\checkmark& & & 69.52\% & 88.91\% & 75.64\% & 92.90\% & 74.02\% & 91.58\% & 70.97\% & 89.43\% \\
\checkmark & &\checkmark& & & 69.87\% & 88.99\% & 76.30\% & 93.16\% & 74.39\% & 91.66\% & 71.24\% & 89.57\% \\
\checkmark & \checkmark &\checkmark& & & \textbf{69.91\%} & 89.02\% & \textbf{76.42\%} & 93.19\% & \textbf{74.51\%} & 91.78\% & \textbf{71.33\%} & 89.62\%\\ \hline
& & & \checkmark & & 69.64\% & 88.85\% & 76.12\% & 92.95\% & 74.35\% & 91.54\% & 71.13\% & 89.49\% \\
\checkmark & & & \checkmark & & 69.92\% & 88.91\% & 76.41\% & 93.12\% & 74.63\% & 91.93\% & 71.48\% & 89.69\% \\
\checkmark & \checkmark & & \checkmark & & 70.04\% & 89.02\% & 76.56\% & 93.21\% & 74.73\% & 91.85\% & 71.62\% & 89.73\% \\
\checkmark & \checkmark & & & \checkmark & \textbf{70.20\%} & 89.10\% & \textbf{76.90\%} & 93.30\% & \textbf{74.92\%} & 92.06\% & \textbf{71.87\%} & 89.96\% \\ \hline
\end{tabular}
\vspace{-4mm}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=0.71\linewidth]{figures/fig_ablation_channel_number.pdf}
\vspace{-1mm}
\caption{Accuracy performance of the searched network with different group size $K$ of the search space.}
\label{ratio_list}
\vspace{-6mm}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1.00\linewidth]{figures/fig_visualization_new.pdf}
\vspace{-6mm}
\caption{Visualization of searched networks {{w.r.t.}}~different FLOPs. The vertical axis means the ratio of retained channel number compared to that of original networks at each layer.}
\label{visualization}
\vspace{-5mm}
\end{figure*}
\textbf{Effect of search space.}
We adopt the grouped search space $\mathcal{C}_K$ to reduce the search complexity. To investigate the effect of the search space, we search VGGNet on the CIFAR-10 dataset and MobileNetV2 on the ImageNet dataset with various group sizes $K$. As shown in Figure \ref{ratio_list}(b), our method achieves the best performance in most cases around our default value $K=20$. In addition, when the group size $K$ is small, the performance of the searched network increases as $K$ grows. This is because the group size $K$ determines the size of the search space $\mathcal{C}_K$, and a larger $K$ induces a larger search space; the obtained network width can thus be closer to the oracle optimal width, with correspondingly higher accuracy. Moreover, the performance tends to be stable when the group size lies in $[14:22]$ but decreases afterwards, which implies the search space might then be too large for finding an optimal width.
\subsection{Visualization and Interpretation of Results}
For intuitive understanding, we visualize the searched networks with various FLOPs in Figure \ref{visualization}. For clarity, we show the ratio of retained layer widths relative to those of the original models. Note that for MobileNetV2, ResNet50, EfficientNet-B0 and ProxylessNAS, which contain skip connections or depthwise layers, we merge the layers that are required to have the same width.
More visualization results are provided in the supplementary material.
From Figure \ref{visualization}, we can see that, on the whole, as FLOPs decrease, layer widths nearer the input tend to be reduced, whereas the last layer is more likely to be retained. This may be because the last layer is more sensitive to the classification performance, so it is safely kept when the FLOPs budget is reduced. In the sequel, we present more detailed observations {{w.r.t.}}~each network, along with some intuitions.
\textbf{ResNet50 on ImageNet.} We find that when the network is pruned with a large FLOPs budget ({\emph{e.g.}}, 3G or 2G), the width of the first 1$\times$1 convolutional layer ({\emph{e.g.}}, the $2$nd and $5$th layers in Figure \ref{visualization}) of each block in ResNet50 is preferentially reduced, which suggests that 1$\times$1 convolutions may contribute less to the classification performance. However, when the FLOPs budget drops to a fairly small value ({\emph{e.g.}}, 570M), the channel number of the 3$\times$3 convolutions ({\emph{e.g.}}, the $3$rd and $6$th layers in Figure \ref{visualization}) decreases dramatically while that of the 1$\times$1 convolutions increases instead. This implies that the network is forced to use more 1$\times$1 convolutions instead of 3$\times$3 convolutions to extract information from feature maps. This observation also indicates that the evolutionary algorithm is more effective than greedy search {{w.r.t.}}~small FLOPs budgets, since the evolutionary algorithm always maintains the original search space, whereas the greedy algorithm prunes out more 1$\times$1 convolutions at the beginning, which cannot be recovered under a small FLOPs budget.
\textbf{MobileNetV2 on ImageNet and CIFAR-10.} Different from ResNet50, the widths of MobileNetV2 decrease more evenly as FLOPs are reduced. This may be due to the constraint imposed by depthwise convolutions, which requires the output channel number of the first 1$\times$1 convolution and the second 3$\times$3 convolution in each MobileNetV2 block to be the same. Compared with pruning on ImageNet, widths closer to the input layer are more easily clipped on the CIFAR-10 dataset. This may be because the input of CIFAR-10 is 32$\times$32, which does not require as many channels as ImageNet. In addition, when FLOPs are reduced to a fairly small value ({\emph{e.g.}}, 28M, 44M, and 50M for MobileNetV2), unlike pruning on ImageNet, the width of the last layer of MobileNetV2 on CIFAR-10 decreases rapidly. The reason may be that MobileNetV2 on ImageNet must classify 1000 categories, while it only needs to handle 10-way classification on CIFAR-10; the width of the last layer thus tends to be retained on ImageNet but decreases rapidly on CIFAR-10. More visualizations of MobileNetV2 are analyzed in the supplementary material.
\textbf{EfficientNet-B0 on ImageNet.} EfficientNet-B0 shares a similar block structure with MobileNetV2. However, the width of EfficientNet-B0 varies more evenly than that of MobileNetV2, which may be because its width setting is already better tuned, having been determined by NAS. In detail, compared to the original setting of EfficientNet-B0, the channels of adjacent blocks in the searched 1$\times$ FLOPs network show opposite fluctuations ({\emph{e.g.}}, channels of blocks 1, 3, 5 increase while those of blocks 2, 4, 6 decrease). This may suggest that such fluctuations of widths are conducive to the performance of the searched network structure.
\section{Conclusion}
In this paper, we introduce a new supernet called BCNet to address the training unfairness and the corresponding evaluation bias in searching for an optimal network width. In our BCNet, each channel is fairly trained and responsible for the same number of network widths. Besides, we leverage a stochastic complementary strategy for training BCNet, and propose a prior initial population sampling method to boost the evolutionary search. Extensive experiments on the benchmark ImageNet and CIFAR-10 datasets show the superiority of our proposed method over state-of-the-art channel pruning and network width search methods.
\subsubsection*{Acknowledgments}
This work is funded by the National Key Research and Development Program of China (No. 2018AAA0100701) and the NSFC 61876095. Chang Xu was supported in part by the Australian Research Council under Projects DE180101438 and DP210101859.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Verification is a fundamental aspect of modern electronic design. Without a high level of assurance that a circuit design conforms to a particular specification, chip makers stand to lose hundreds of millions of dollars when their product is inevitably recalled. The consequences in the quantum computing realm aren't quite as clear, as the largely software-like nature of quantum circuits alleviates much of the risk associated with design flaws. On the other hand, quantum resource analyses, which typically vary wildly between compilers \cite{IARPAQCS}, are currently being used to assess and guide real security policies \cite{glrs16, adgmps16}, so it is highly desirable to attain some degree of assurance that these resource analyses are indeed correct.
Due to the absence of large, universal quantum computers and the inherent difficulty of simulating quantum circuits, testing is generally not a viable option for verification. By contrast, various methods of formal verification have been developed for quantum circuits and programs, including equivalence checking \cite{w09, y10, agn14}, diagrammatic methods \cite{dl13, gd17}, model checkers \cite{gnp08, z16}, program logics \cite{y12} and formal proof \cite{rpz18}. However, two questions remain: how can the intended effect of a quantum program be specified in a clear, human readable and verifiable way, and how can we scale automated verification to large circuits?
Typical \emph{functional} verification methods -- verification of the precise input-output relation -- either verify equivalence against a simpler circuit or diagrammatic implementation (e.g., \cite{w09, y10, gd17}), or a matrix representation such as a unitary or superoperator (e.g., \cite{rpz18}). With either approach, errors can creep in \emph{on the specification side}, as both circuit and matrix presentations can be difficult for humans to write and understand. Moreover, in the former case it is assumed that a certified implementation exists in the first place, and in the latter case the matrix either requires exponential space to write and store, or is left abstract \cite{rpz18}, relying on structural proofs which are generally not suitable for verifying heavily optimized circuits.
In this work we propose a novel framework for the formal specification and functional verification of unitary (i.e., measurement-free) quantum circuits over a universal gate set -- specifically, the Clifford group extended with $Z$-axis rotations taken from the Clifford hierarchy \cite{gc99}. Our framework is built around Richard Feynman's \emph{path integral} technique, which has been used recently to prove results in complexity theory \cite{dhmhno05, m17}, and to perform circuit simulation \cite{bg16, kps17} and optimization \cite{amm14, am16, aam17}. Specifically, we develop a concrete representation of quantum operators as \emph{path-sums} -- exponential sums of basis states over a finite set of Boolean \emph{path variables}. Our path-sums directly coincide with the standard mathematical presentation of common quantum circuits and algorithms (e.g., \cite{nc00}), and further allow the direct use of classical functions, which can themselves be tested or otherwise verified, to formally specify quantum operations.
To verify quantum circuits, we give a computable, compositional semantics of quantum circuits as path-sums. We show that over Clifford+$R_k$ circuits for any fixed $k$, this interpretation is efficiently computable and compact. We then present a reduction system for path-sums which iteratively reduces the number of path variables until a (non-unique) normal form is reached. Our reduction system together with an efficient initial transformation is complete for Clifford group circuits, giving a polynomial-time equivalence checking algorithm. Experimentally, we use our reduction system to perform the automated verification of optimized Clifford+$T$ circuits, as well as Clifford+$R_k$ implementations of various quantum algorithms against formal specifications as path-sums for up to 200 qubits.
\paragraph{Preliminaries}
We work in the strictly unitary picture of quantum computing \cite{nc00} -- that is, quantum computations are modelled by unitary operators on a complex vector space of dimension $2^n$. While we do not consider measurements, we allow qubit initialization, corresponding to partial isometries on a complex vector space. We denote the computational basis vectors as $\ket{\vec{x}}$ for binary strings $\vec{x}=x_1x_2\dots x_n\in\mathbb{Z}_2^n$.
A circuit is defined as a sequence of quantum gates applied to individual qubits. We primarily consider three quantum gates:
\[
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad
R_k = \begin{pmatrix} 1 & 0 \\ 0 & e^{\frac{2\pi i}{2^k}} \end{pmatrix}, \quad \text{and} \quad
\mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.
\]
For $k\geq 1$, all three gates lie in the $k^{\text{th}}$ level of the \emph{Clifford hierarchy}, denoted $\mathcal{C}_k$, where $\mathcal{C}_1$ is the Pauli group and $\mathcal{C}_k=\{U|U\mathcal{C}_1 U^\dagger \subseteq \mathcal{C}_{k-1}\}$. Two important cases are the \emph{Clifford group} ($\mathcal{C}_2$) and \emph{Clifford+$T$} ($\mathcal{C}_3$). While for $k\leq 3$ the above gates suffice to generate $\mathcal{C}_k$, it is not generally known whether $\mathcal{C}_k=\langle H, R_k, \mathrm{CNOT}\rangle$.
Much of our formalism involves polynomial representations of pseudo-Boolean functions -- functions from $\mathbb{Z}_2^n$ into some set $S$. In particular, we are interested in pseudo-Boolean functions into the ring of \emph{dyadic fractions} $\mathbb{D}=\{\frac{a}{2^b} | a, b \in \mathbb{Z}\}$, which correspond uniquely to multilinear polynomials in $\mathbb{D}_M[\vec{x}]=\mathbb{D}[\vec{x}]/\langle x_i^2 - x_i\rangle$. In our context the ring of dyadic fractions arises from the phase factors of $R_k$ gates, and are needed to precisely represent the quantum Fourier transform.
\section{The path-sum framework}\label{sec:sop}
\begin{figure}
\centerline{\scalebox{1}{
\begin{tikzpicture}
\node at (7.4,3) {$B$};
\node at (0,0) {$A$};
\draw [thick, decoration={markings, mark=at position 0.7 with {\arrow{triangle 60}}}, postaction=decorate] (0.2,0.2) to
[curve through ={([out angle=45, in angle=200]3.4,2) .. (5.8,0.7) .. (6.5,2.3) }] (7.2,2.8);
\draw [thick, decoration={markings, mark=at position 0.4 with {\arrow{triangle 60}}}, postaction=decorate] (0.2,0.2) to
[curve through ={([out angle=45, in angle=180]3.4,1.4) .. (4.2,1.2) }] (7.2,2.8);
\draw [thick, decoration={markings, mark=at position 0.4 with {\arrow{triangle 60}}}, postaction=decorate] (0.2,0.2) to
[curve through ={(1.5,3) .. (4.3,2.2) .. (6.5,3) }] (7.2,2.8);
\end{tikzpicture}
}}
\caption{The paths of a particle from point $A$ to $B$.}
\label{fig:pathintegral}
\end{figure}
The path-sum dates back to Feynman and the path integral formulation of quantum mechanics \cite{fh65}. In a general sense, the idea is to describe the amplitude of a particular state (say, of a particle) by an integral over all possible paths leading to that state. \Cref{fig:pathintegral} shows the trajectories of a particle moving from states $A$ to $B$ -- in the path integral formulation, the final amplitude is described as the sum of the amplitudes of each path. The output amplitudes of a quantum circuit, as a quantum mechanical system, can likewise be described as the sum over all trajectories of the system. However, as quantum gates are typically modelled as operators on a finite dimensional Hilbert space, a discrete sum rather than integral is typically used \cite{dhmhno05, bvr08, m17, kps17}.
We can describe a path-sum abstractly as a discrete set of \emph{paths} $S\subseteq \mathbb{Z}_2^m$, together with an amplitude function $\phi$ and state transformation $f$ representing the operator
\[
U : \ket{\vec{x}} \mapsto \sum_{\vec{y}\in S}\phi(\vec{x}, \vec{y})\ket{f(\vec{x}, \vec{y})}.
\]
In this form, the path-sum is not particularly useful as a computational representation, as the representations of $\phi$ and $f$ are not fixed -- indeed $\phi$ itself may be a unitary matrix with $\phi(\vec{x}, \vec{y})$ indexing a particular entry. Instead, we fix a concrete representation based on multivariate polynomials which suffices to exactly represent most interesting quantum operations.
\begin{definition}[path-sum]\label{def:sop}
An $n$-qubit \emph{path-sum} $\xi$ consists of
\begin{itemize}
\item an \emph{input signature} $\ket{\vec{x}=x_1x_2\cdots x_n}$ where each $x_i$ is a (distinct) variable or Boolean constant,
\item a \emph{phase polynomial} $P\in\mathbb{D}_M[\vec{x},\vec{y}]$ over input variables $\vec{x}$ and \emph{path variables} $\vec{y}=y_1y_2\dots y_m$, and
\item an \emph{output signature} $\ket{f(\vec{x},\vec{y})=f_1(\vec{x},\vec{y})\cdots f_n(\vec{x},\vec{y})}$ where each $f_i\in\mathbb{Z}_2[\vec{x},\vec{y}]$ is a Boolean polynomial.
\end{itemize}
The \emph{associated operator} of a path-sum is the partial linear map $U_{\xi}$ where
\[
U_{\xi} : \ket{\vec{x}} \mapsto \frac{1}{\sqrt{2^m}}\sum_{\vec{y}\in\mathbb{Z}_2^m} e^{2\pi i P(\vec{x},\vec{y})}\ket{f(\vec{x},\vec{y})}.
\]
\end{definition}
We say a path variable is \emph{internal} if it does not appear in the output signature. Our presentation is inspired by descriptions of quantum operators in mathematical texts \cite{nc00, klm07}, and as such we write a path-sum informally by the action of its associated operator.
By an abuse of notation, we use $\ket{\vec{x}}$ to refer to either an input signature or an arbitrary Boolean vector corresponding to an input signature.
\begin{example}
Path-sum representations of common quantum gates and circuits are listed below:
\begin{align*}
T: &\ket{x} \mapsto e^{2\pi i\frac{x}{8}}\ket{x} \\
H:&\ket{x}\mapsto\frac{1}{\sqrt{2}}\sum_{y\in\mathbb{Z}_2}e^{2\pi i\frac{xy}{2}}\ket{y} \\
\textsf{Toffoli}_n: &\ket{x_1x_2\cdots x_n} \mapsto \ket{x_1x_2\cdots (x_n\oplus \textstyle \prod_{i=1}^{n-1}x_i)} \\
\textsf{Adder}_n: &\ket{\vec{x}}\ket{\vec{y}}\ket{\vec{0}}\mapsto \ket{\vec{x}}\ket{\vec{y}}\ket{\vec{x}+ \vec{y}} \\
\textsf{QFT}_n: &\ket{\vec{x}}\mapsto \frac{1}{\sqrt{2^n}}\sum_{\vec{y}\in\mathbb{Z}_2^n}e^{2\pi i \frac{\int{\vec{x}\cdot \vec{y}}}{2^n}}\ket{\vec{y}}
\end{align*}
Addition and multiplication of Boolean vectors are interpreted as integer operations at the bit level. In the \textsf{QFT} above, $\int{\vec{x} \cdot \vec{y}}$ denotes the integer value of $\vec{x} \cdot \vec{y}$. For any classical function $f$, we can lift the polynomial representation of $f$ to a quantum operator via the path-sum $\ket{\vec{x}}\ket{\vec{0}}\mapsto \ket{\vec{x}}\ket{f(\vec{x})}$. Note that the polynomial representation of a classical function may grow exponentially large, as in the case of addition. A practical implementation of path-sums as a specification language would include a classical sub-language, along with a verified translation from such programs into Boolean polynomials.
\end{example}
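As a concrete sanity check on these definitions, the associated operator of a small path-sum can be computed by brute-force enumeration of the path variables. The following sketch (illustrative helper names; not part of the formal development) rebuilds the matrices of the $H$ and $T$ path-sums above at exponential cost in the number of qubits.

```python
import cmath
from itertools import product

def path_sum_matrix(n, m, phase, out):
    """Matrix of |x> -> 2^(-m/2) * sum_y exp(2*pi*i*phase(x, y)) |out(x, y)>,
    where x ranges over n-bit inputs and y over m-bit path assignments."""
    dim = 2 ** n
    U = [[0j] * dim for _ in range(dim)]
    for x in product((0, 1), repeat=n):
        col = int(''.join(map(str, x)), 2)
        for y in product((0, 1), repeat=m):
            row = int(''.join(map(str, out(x, y))), 2)
            U[row][col] += 2 ** (-m / 2) * cmath.exp(2j * cmath.pi * phase(x, y))
    return U

# H: |x> -> (1/sqrt(2)) sum_y exp(2 pi i xy/2) |y>
H = path_sum_matrix(1, 1, lambda x, y: x[0] * y[0] / 2, lambda x, y: (y[0],))
# T: |x> -> exp(2 pi i x/8) |x>   (no path variables)
T = path_sum_matrix(1, 0, lambda x, y: x[0] / 8, lambda x, y: (x[0],))
```

Applying the same evaluator to the $\textsf{QFT}_n$ path-sum reproduces the discrete Fourier transform matrix, though of course only for small $n$.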
As a unitary or partial isometry may admit many distinct path-sum representations, we define an equivalence between path-sums with the same associated operator.
\begin{definition}[equivalence]
Two path-sums $\xi_1, \xi_2$ are \emph{equivalent}, denoted $\xi_1 \equiv \xi_2$, if and only if their associated operators are equal -- that is, $U_{\xi_1} = U_{\xi_2}$.
\end{definition}
An additional point to note is that non-isometric path-sums are possible in our framework, as for instance $\ket{x}\mapsto\ket{0}$ is a valid path-sum. In this work we are concerned only with the unitary circuit model and by extension isometric path-sums, hence we define a notion of well-formedness for path-sums.
\begin{definition}[well-formed]
A path-sum is well-formed if its associated operator is a (partial) isometry.
\end{definition}
In practice, well-formedness is only an issue when writing path-sums directly as specifications, and our verification methods work even when a path-sum is not guaranteed to be well-formed. We leave it as a question for future research to determine methods for checking well-formedness of path-sums.
\subsection{Compositions of path-sums}
As with quantum circuits, path-sums may be composed both \emph{vertically} and \emph{horizontally} -- that is, composed in parallel with another path-sum on a distinct subsystem or in sequence on the same subsystem, respectively. Vertical composition is defined in the obvious way -- concatenating the inputs and outputs then adding the phase polynomials with appropriate renaming -- but horizontal composition requires more care.
Intuitively, as path-sums symbolically describe mappings between linear combinations of basis vectors, we can compose the output $\ket{f(\vec{x}, \vec{y})}$ of one path-sum with the input $\ket{\vec{x}'}$ of another by substituting each input value $x_i'$ with the corresponding output $f_i(\vec{x},\vec{y})$. For instance, we can compute the composition of $\ket{x_1x_2x_3}\mapsto\ket{x_1(x_1\oplus x_2)x_3}$ followed by $\ket{x_1'x_2'x_3'}\mapsto\ket{x_1'x_2'(x_2'\oplus x_3')}$ by substituting $x_2'$ with $x_1\oplus x_2$:
\[
\ket{x_1x_2x_3}\mapsto\ket{x_1(x_1\oplus x_2)(x_1\oplus x_2\oplus x_3)}.
\] However, this presents a problem when the path-sum on the left (i.e. right-to-left composition) is a partial isometry, as we may end up composing a variable $f_i(\vec{x}, \vec{y})=x_j$ with a constant state $x_i'=b$ for some $b\in\mathbb{Z}_2$, effectively post-selecting on $x_j=b$. For this reason we require that only \emph{compatible}\footnote{Determining compatibility is at least as hard as detecting whether an ancilla is \emph{clean} and is hence non-trivial in general. For the verification tasks we consider this is not an issue, as in practice we only compose path-sums with unitary operators.} signatures are composed; in particular, an output $\ket{f(\vec{x},\vec{y})}$ is compatible with an input $\ket{\vec{x}'}$ if and only if for every $i$, either $x_i'$ is a variable or $x_i'=b=f_i(\vec{x}, \vec{y})$.
When the left-most path-sum has a non-zero phase polynomial, substitutions may extend to the phase. As the phase and output polynomials are defined over different rings ($\mathbb{D}$ and $\mathbb{Z}_2$, respectively), when substituting a variable with a Boolean polynomial in the phase we first need to lift it into a \emph{functionally equivalent} polynomial over $\mathbb{D}$. For instance, for all $x,y\in\mathbb{Z}_2$, $\frac{1}{4}\left( x \oplus y\right) = \frac{1}{4}x + \frac{1}{4}y - \frac{1}{2}xy$. We define the \emph{lifting} of a Boolean polynomial $P$ to a polynomial $\lift{P}\in\mathbb{D}_M[\vec{x}]$ recursively by
\begin{align*}
\lift{\vec{x}^\alpha}&=\vec{x}^\alpha, \\ \lift{P + Q} &= \lift{P} + \lift{Q} - 2\lift{PQ},
\end{align*}
where $\vec{x}^\alpha=x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}$ for $\alpha\in\mathbb{Z}_2^n$ is a multi-index, and the first equation uses the natural inclusion of $\mathbb{Z}_2$ in $\mathbb{D}$. It can be easily verified that the lifting of a Boolean polynomial preserves its action on elements of $\mathbb{Z}_2$.
\begin{lemma}\label{lem:poly}
For any Boolean-valued polynomial $P$ and all $\vec{x}\in\mathbb{Z}_2^n$, $\lift{P}(\vec{x}) = P(\vec{x}) \mod 2$.
\end{lemma}
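The recursion defining $\lift{\cdot}$ can be made concrete with a minimal sketch, representing a Boolean polynomial as an XOR-list of monomials (each a frozenset of variable indices) and a lifted polynomial as a map from monomials to dyadic coefficients. All names here are illustrative, and exact dyadic arithmetic is handled with `fractions.Fraction`.

```python
from fractions import Fraction

def xor_normalize(monos):
    """Reduce a list of monomials to an XOR-sum: keep those with odd multiplicity."""
    out = []
    for m in monos:
        if m in out:
            out.remove(m)
        else:
            out.append(m)
    return out

def lift(monos):
    """Lift an XOR-sum of monomials (each a frozenset of variable indices) into a
    {monomial: dyadic coefficient} map, via lift(P+Q) = lift(P) + lift(Q) - 2 lift(PQ)."""
    monos = xor_normalize(monos)
    if not monos:
        return {}
    if len(monos) == 1:
        return {monos[0]: Fraction(1)}
    p, rest = monos[0], monos[1:]
    pq = [p | m for m in rest]  # Boolean product of p with the sum of the rest
    result = {}
    for poly, c in ((lift([p]), 1), (lift(rest), 1), (lift(pq), -2)):
        for mono, a in poly.items():
            result[mono] = result.get(mono, Fraction(0)) + c * a
            if result[mono] == 0:
                del result[mono]
    return result

def eval_bool(monos, point):
    """Evaluate an XOR-sum of monomials at a 0/1 point (tuple indexed by variable)."""
    return sum(all(point[i] for i in m) for m in monos) % 2

def eval_lifted(poly, point):
    """Evaluate a lifted polynomial over the dyadic fractions at a 0/1 point."""
    return sum((c for m, c in poly.items() if all(point[i] for i in m)), Fraction(0))
```

For instance, `lift` applied to $x \oplus y$ produces the coefficients of $x + y - 2xy$, matching the worked example above, and agreement of `eval_bool` and `eval_lifted` on all points of $\mathbb{Z}_2^n$ is exactly the statement of the lemma.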
We can now formally define the functional composition of path-sums.
\begin{definition}{(sequential composition)} \\
The \emph{sequential composition} of two compatible path-sums
\[
U_{\xi} : \ket{\vec{x}} \mapsto \frac{1}{\sqrt{2^{m}}}\sum_{\vec{y}\in\mathbb{Z}_2^{m}} e^{2\pi i P(\vec{x},\vec{y})}\ket{f(\vec{x},\vec{y})}, \qquad
U_{\xi'} : \ket{\vec{x}'} \mapsto \frac{1}{\sqrt{2^{m'}}}\sum_{\vec{y}'\in\mathbb{Z}_2^{m'}} e^{2\pi i P'(\vec{x}',\vec{y}')}\ket{f'(\vec{x}',\vec{y}')},
\]
denoted $\xi'\circ \xi$, is given by
\[
U_{\xi'\circ \xi} : \ket{\vec{x}} \mapsto \frac{1}{\sqrt{2^{m+m'}}}\sum_{\vec{y}\in\mathbb{Z}_2^{m+m'}}
e^{2\pi i \left(P + P'[y_i \gets y_{i+m}][x'_i \gets \lift{f_i}]\right)(\vec{x},\vec{y})}\ket{\left(f'[x'_i\gets f_i]\right)(\vec{x},\vec{y})},
\]
where $P[x \gets Q]$ for polynomials $P,Q$ over some ring $R$ denotes the substitution of $x$ with $Q$ in $P$.
\end{definition}
\begin{proposition}
For any well-formed, compatible path-sums $\xi, \xi'$, $\xi'\circ\xi$ is also well formed. Moreover,
\[U_{\xi'\circ \xi} = U_{\xi'}U_{\xi}.\]
\end{proposition}
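As a numeric spot check of this proposition (an illustration, not a proof), the sketch below compares $T \circ H$ composed at the path-sum level -- whose phase polynomial $\frac{xy}{2}+\frac{y}{8}$ results from substituting $x' \gets y$ in the phase of $T$ -- against the ordinary matrix product $U_T U_H$. The helper name is illustrative.

```python
import cmath

def mat1(m, phase, out):
    """2x2 matrix of a single-qubit path-sum with m path variables."""
    U = [[0j, 0j], [0j, 0j]]
    for x in (0, 1):
        for k in range(2 ** m):
            y = tuple((k >> i) & 1 for i in range(m))
            U[out(x, y)][x] += 2 ** (-m / 2) * cmath.exp(2j * cmath.pi * phase(x, y))
    return U

H = mat1(1, lambda x, y: x * y[0] / 2, lambda x, y: y[0])
T = mat1(0, lambda x, y: x / 8, lambda x, y: x)
# T o H composed at the path-sum level: substitute x' <- y in T's phase.
TH = mat1(1, lambda x, y: x * y[0] / 2 + y[0] / 8, lambda x, y: y[0])
# Ordinary matrix product U_T * U_H for comparison.
TH_mat = [[sum(T[i][k] * H[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
```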
\begin{remark}
A useful property of path-sums is that they unify structurally equivalent circuits without resorting to string diagrams, which can be difficult to reason about in automated ways \cite{bgksz16}. By this we mean that circuits which are equivalent up to symmetric monoidal laws are \emph{strictly equal} in the path-sum picture. For instance, the bifunctoriality law and the naturality of SWAP, stated respectively as the equivalences
\[
\Qcircuit @C=1em @R=.4em {
& \gate{f} & \qw & \qw \\
& \qw & \gate{g} & \qw
}
\raisebox{-0.9em}{$\equiv$}
\Qcircuit @C=1em @R=.4em {
& \qw & \gate{f} & \qw \\
& \gate{g} & \qw & \qw
}
\qquad \qquad
\Qcircuit @C=1em @R=.9em {
& \gate{f} & \qswap & \qw \\
& \qw & \qswap \qwx & \qw
}
\raisebox{-0.9em}{$\equiv$}
\Qcircuit @C=1em @R=.9em {
& \qswap & \qw & \qw \\
& \qswap \qwx & \gate{f} & \qw
}
\]
are both equality in the path-sum framework. While much progress has been made towards computational methods for diagrammatic reasoning \cite{dl13, bgksz16, cdkw16, gd17}, our framework allows us to use standard algebraic tools (e.g., rewriting) without explicitly managing structural laws.
Along with unifying the representation of \emph{structurally} equivalent circuits, path-sums further unify many \emph{semantic} equivalences of quantum circuits -- particularly allowing the long-distance cancellation of phase gates applied to the same logical state \cite{amm14}. In contrast, matrix representations unify \emph{all} equivalences between unitaries, at the expense of exponential space. Path-sums hence provide an intermediary model, where many equivalences are ``modded out'' yet still generally remain efficiently representable as we show next.
\end{remark}
\subsection{Path-sums as a circuit semantics}
As path-sums admit both a symmetric tensor product and functional composition, we can give a simple compositional path-sum semantics of measurement-free quantum circuits. Given a path-sum representation of each gate in a basis $\mathcal{B}$ and their inverses, we can define the path-sum interpretation of a circuit over $\mathcal{B}$ as the composition of each gate. In particular, we give a path-sum interpretation to the Clifford+$R_k$ basis $\{H, \mathrm{CNOT}, R_k\}$ for $k>0$.
\begin{definition}{(Clifford+$R_k$ path-sum)} \\
The path-sum interpretation of an $n$-qubit circuit $C$ over $\{H, \mathrm{CNOT}, R_k\}$, denoted $\sop{C}$, is defined as follows:
\begin{alignat*}{2}
\sop{H} &= \ket{x}\mapsto\frac{1}{\sqrt{2}}\sum_{y\in\{0,1\}}e^{2\pi i\frac{xy}{2}}\ket{y} \\
\sop{R_k} &= \ket{x}\mapsto e^{2\pi i\frac{x}{2^k}}\ket{x} \\
\sop{R_k^\dagger} &= \ket{x}\mapsto e^{2\pi i\frac{-x}{2^k}}\ket{x} \\
\sop{\mathrm{CNOT}} &= \ket{x_1x_2}\mapsto \ket{x_1(x_1\oplus x_2)}\\
\sop{C_1; C_2} &= \sop{C_2} \circ \sop{C_1}.
\end{alignat*}
We leave the appropriate vertical compositions implicit.
\end{definition}
\begin{proposition}
For any circuit $C$ over $\{H, \mathrm{CNOT}, R_k\}$ with unitary matrix $U_C$, we have $U_{\sop{C}}=U_C$.
\end{proposition}
Since each output of a gate-level path-sum over $\{H, \mathrm{CNOT}, R_k\}$ is a linear Boolean function and such functions are closed under composition, each output of a canonical Clifford+$R_k$ path-sum is trivially linear. Moreover, its phase polynomial has degree at most $k$. To show this, we first introduce the notion of the \emph{order} of a polynomial in $\mathbb{D}_M[\vec{x}]$, which gives a more precise characterization of the phase polynomials over a fixed level of the Clifford hierarchy. Note that without loss of generality we can restrict our attention to phase polynomials with coefficients in $\mathbb{D}/\mathbb{Z}$, since $e^{2\pi i}=1$.
\begin{definition}
The \emph{order} of a term $\frac{a}{2^b} \vec{x}^{\alpha}$ where $a$ is co-prime to $2$ and $\alpha\in\mathbb{Z}_2^n$ is $b + |\alpha| - 1$. The order of a polynomial $P\in\mathbb{D}_M[\vec{x}]$, denoted $\ord{P}$, is the maximum order of all terms in $P$.
\end{definition}
\begin{example}
\[
\ord{\frac{1}{2}} = 0, \qquad
\ord{\frac{1}{2}x_1 + \frac{1}{2}x_2} = 1, \qquad
\ord{\frac{1}{2^3}x_2 + \frac{1}{2}x_1x_2x_3} = 3
\]
\end{example}
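Computing the order is mechanical. As an illustration (the dictionary encoding of multilinear polynomials is our own, not part of the formal development), the following Python sketch reproduces the example above:

```python
from fractions import Fraction

def term_order(coeff, alpha):
    # order of (a / 2^b) * x^alpha, with a co-prime to 2, is b + |alpha| - 1
    b, d = 0, Fraction(coeff).denominator
    while d % 2 == 0:
        d //= 2
        b += 1
    return b + len(alpha) - 1

def order(poly):
    # poly: dict mapping a frozenset of variable indices to a dyadic coefficient
    return max(term_order(c, a) for a, c in poly.items())

F = Fraction
assert order({frozenset(): F(1, 2)}) == 0
assert order({frozenset({1}): F(1, 2), frozenset({2}): F(1, 2)}) == 1
assert order({frozenset({2}): F(1, 8), frozenset({1, 2, 3}): F(1, 2)}) == 3
```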
An important fact, shown below, is that order is non-increasing with respect to substitution of linear Boolean polynomials.
\begin{lemma}\label{lem:order}
Let $P\in\mathbb{D}_M[\vec{x}]$, and let $Q\in\mathbb{Z}_2[\vec{x}]$ be a linear polynomial. Then for any $x_i$,
\[
\ord{P[x_i\gets \lift{Q}]} \leq \ord{P}
\]
\end{lemma}
\begin{proof}
Suppose $Q=\sum_{j\in S} x_j$ for some set $S$. It is easy to verify that
\[
\lift{\sum_{j\in S} x_j} = \sum_{\emptyset\neq S'\subseteq S}(-2)^{|S'| - 1}\prod_{j\in S'}x_j.
\]
Substituting $\lift{Q}$ in for $x_i$ we see that for any term $\frac{a}{2^b} \vec{x}^{\alpha}$ in $P$ such that $\alpha_i=1$,
\begin{align*}
\ord{\frac{a}{2^b} \vec{x}^{\alpha}[x_i\gets \lift{Q}]}
&=\max_{\emptyset\neq S'\subseteq S}\ord{\frac{a2^{|S'| - 1}}{2^b} \vec{x}^{\alpha}[x_i\gets \prod_{j\in S'}x_j]} \\
&\leq\max_{\emptyset\neq S'\subseteq S} b-|S'|+|\alpha| + |S'| - 1 \\
&= \ord{\frac{a}{2^b} \vec{x}^{\alpha}}.
\end{align*}
\end{proof}
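The closed form for the lift used in the proof is easy to check exhaustively; the following Python sketch (ours, for illustration) confirms that the multilinear lift of an XOR agrees with the XOR on all $0/1$ assignments:

```python
from itertools import combinations, product

def lift_xor(S, bits):
    # lift(x_{j1} xor ... xor x_{jk}) = sum over nonempty subsets S' of S
    # of (-2)^{|S'| - 1} * prod_{j in S'} x_j
    total = 0
    for r in range(1, len(S) + 1):
        for Sp in combinations(S, r):
            term = (-2) ** (r - 1)
            for j in Sp:
                term *= bits[j]
            total += term
    return total

S = (0, 1, 2, 3)
for bits in product((0, 1), repeat=4):
    xor = 0
    for j in S:
        xor ^= bits[j]
    assert lift_xor(S, bits) == xor
```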
Intuitively, as the output function of a Clifford+$R_k$ path-sum is strictly linear, composing Clifford+$R_k$ path-sums does not increase the order of the phase polynomial. Moreover, the path-sum interpretation of each gate over $\{H, \mathrm{CNOT}, R_k\}$ has a phase polynomial of order at most $k$ and maximum denominator $2^k$, hence we obtain the following result.
\begin{proposition}
The phase polynomial of a (canonical) Clifford+$R_k$ path-sum has degree at most $k$.
\end{proposition}
\begin{corollary}\label{cor:size}
The path-sum interpretation of an $n$-qubit Clifford+$R_k$ circuit $C$ has size polynomial in the volume of $C$ ($n\cdot |C|$) and can be computed in polynomial time.
\end{corollary}
\paragraph{On representations of the phase polynomial}
While the representation of the phase as a multilinear polynomial is indeed polynomial in the size of the circuit, at higher levels of the Clifford hierarchy (i.e. large $k$) the degree of the polynomial can become prohibitively large. Even for the standard Clifford+$T$ gate set, the path-sum of a circuit requires space cubic in the volume of the circuit \cite{am16}. In practice this makes verification of some larger circuits difficult.
The phase polynomial could instead be represented in \emph{linear space for any $k$} by its \emph{Fourier expansion} \cite{od14, aam17}. This however complicates the process of verification, as the Fourier expansion is not necessarily unique modulo integer multiples \cite{aam17}. A possible compromise would be to store the phase polynomial in its Fourier expansion by default, generating the multilinear form for small subsets of variables on demand.
\section{A calculus for path-sums}\label{sec:rewrite}
The verification question we are generally concerned with is \emph{given a circuit $C$ and path-sum $\xi$, is $\sop{C}\equiv \xi$?} From an automated perspective it is simpler to instead check that the path-sum \emph{miter} \cite{y10} $\sop{C^\dagger}\circ \xi$ is the identity transformation. In either case, we need a method of efficiently establishing equivalence. To that end, in this section we present a system of reduction rules for path-sums. A key feature of our calculus is that the reduction rules strictly decrease the number of path variables, producing a (not necessarily unique) normal form in polynomial time.
\subsection{Overview}
Our calculus operates by reducing the number of paths when sets of paths interfere in recognizable ways which we call \emph{interference patterns}. As an illustration, consider the identity circuit $HH$. Computing its canonical path-sum we get
\[
HH : \ket{x} \mapsto \frac{1}{\sqrt{2^2}}\sum_{y_1,y_2\in\mathbb{Z}_2}e^{2\pi i\frac{xy_1+y_1y_2}{2}}\ket{y_2}.
\]
To see that the above path-sum is equal to the identity, we can first expand the exponential sum on the right by the values of the internal path variable $y_1$:
\begin{align*}
\frac{1}{\sqrt{2^2}}\sum_{y_1,y_2\in\mathbb{Z}_2}e^{2\pi i\frac{xy_1+y_1y_2}{2}}\ket{y_2}
&= \frac{1}{\sqrt{2^2}}\sum_{y_2\in\mathbb{Z}_2}(1 + e^{2\pi i\frac{x+y_2}{2}})\ket{y_2}
\end{align*}
Since $e^{\pi i} = -1$, it can be observed that if $x+y_2 = 0 \mod 2$, the two paths corresponding to $y_1=0$ and $y_1=1$ \emph{constructively} interfere, whereas if $x+y_2=1\mod 2$ they \emph{destructively} interfere. As $\mathbb{Z}_2= x\oplus \mathbb{Z}_2 = \{x, 1\oplus x\}$ for any $x\in\mathbb{Z}_2$, we can rewrite the sum over $x\oplus \mathbb{Z}_2$ and explicitly calculate the interference on either path:
\begin{align*}
\frac{1}{\sqrt{2^2}}\sum_{y_2\in x\oplus \mathbb{Z}_2}(1 + e^{2\pi i\frac{x+y_2}{2}})\ket{y_2}
&=\frac{1}{2}(1 + e^{2\pi i\frac{x+x}{2}})\ket{x} +
\frac{1}{2}(1 + e^{2\pi i\frac{x+1+x}{2}})\ket{1 \oplus x} \\
&= \frac{2}{2}\ket{x} + \frac{0}{2}\ket{1\oplus x} \\
&= \ket{x}
\end{align*}
The reasoning above applies to any situation where an internal path variable $y_i$ only appears with coefficients taken from the Boolean subgroup $\{0,\frac{1}{2}\}$ of $\mathbb{D}/\mathbb{Z}$, as the two branches of $y_i$ are identical, except that the $y_i=1$ path picks up a multiplicative factor of $-1$ whenever the quotient $P/y_i$ is odd. Specifically, it can be shown that
\begin{align*}
\frac{1}{\sqrt{2^{m+1}}}\sum_{y_0\in\mathbb{Z}_2}\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi i\left (\frac{1}{2} y_0Q(\vec{x}, \vec{y}) + R(\vec{x}, \vec{y})\right)}\ket{f(\vec{x}, \vec{y})}
= \frac{1}{\sqrt{2^{m-1}}}\sum_{\vec{y}\in\mathbb{Z}_2^m, Q(\vec{x}, \vec{y}) = 0\mod 2}e^{2\pi iR(\vec{x}, \vec{y})}\ket{f(\vec{x}, \vec{y})}
\end{align*}
Note that the polynomial $Q(\vec{x}, \vec{y})$ must be Boolean-valued, as otherwise the $y_0=1$ path can pick up values not in $\{1, -1\}$. In practice, we only perform such reductions when the restricted sum can be reified by solving $Q(\vec{x}, \vec{y}) = 0\mod 2$ for some path variable, as we did above with $y_2=x$.
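This reduction can be validated numerically. In the Python sketch below (our own illustration; $Q$ and $R$ are random Boolean- and real-valued tables rather than polynomials, which suffices for the identity, and $f$ is taken to be the identity on the path variables), the two sides agree as vectors:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
m = 3
ys = list(product((0, 1), repeat=m))
Q = {y: int(rng.integers(2)) for y in ys}   # Boolean-valued quotient
R = {y: float(rng.random()) for y in ys}    # arbitrary residual phase

lhs = np.zeros(2 ** m, dtype=complex)
rhs = np.zeros(2 ** m, dtype=complex)
for y in ys:
    idx = sum(b << i for i, b in enumerate(y))
    for y0 in (0, 1):
        lhs[idx] += np.exp(2j * np.pi * (0.5 * y0 * Q[y] + R[y])) / np.sqrt(2 ** (m + 1))
    if Q[y] == 0:  # restricted sum: Q = 0 mod 2
        rhs[idx] += np.exp(2j * np.pi * R[y]) / np.sqrt(2 ** (m - 1))

assert np.allclose(lhs, rhs)
```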
\subsection{Reduction rules}
\Cref{fig:rewrite} gives the rules of our calculus, presented as algebraic rewrite rules on exponential sums for convenience and applied to path-sums in the obvious way. We write $\xi--> \xi'$ to denote that $\xi$ reduces to $\xi'$, and denote by $-->*$ the transitive closure of $-->$. For all rules, $y_0$ is an internal path variable, quotients $Q$ are Boolean-valued and whenever $y_i\gets Q$, $y_i$ does not appear in $Q$. For the \textsf{[Case]} rule, both $y_i$ and $y_j$ are internal.
The rules were developed by translating known circuit identities into path-sums, then minimizing the identities to obtain simple interference patterns which 1) strictly reduce the number of path variables, and 2) can be efficiently matched. We found that most common Clifford+$T$ equalities reduce to a small set of rules -- in particular, the \textsf{[HH]} rule derived from the equality $HH=I$ as described above is sufficient for the vast majority of path-sum reductions. The \textsf{[$\omega$]} rule arises from the identity $(SH)^3 = e^{\frac{2\pi i}{8}}I$, and the final rule \textsf{[Case]} is a specific case distinction needed to prove the 2-qubit Clifford+$T$ identity $\left(\mathrm{CNOT}(X\otimes T) \textsf{controlled-}H(X\otimes T^\dagger)\right)^2$ \cite{bs16}. The \textsf{[Elim]} rule serves only to simplify the presentation of \textsf{[HH]}, and additionally arises in some contexts specific to verification which we describe later.
\begin{figure}
\resizebox{\textwidth}{!}{\begin{minipage}{\linewidth}
\begin{alignat*}{3}
&\frac{1}{\sqrt{2^{m+2}}}\sum_{y_0\in\mathbb{Z}_2}\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi iP(\vec{x}, \vec{y})}\ket{f(\vec{x},\vec{y})}
&&-->\frac{1}{\sqrt{2^{m}}}\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi iP(\vec{x}, \vec{y})}\ket{f(\vec{x},\vec{y})}
& \quad \textsf{[Elim]} \\
&\frac{1}{\sqrt{2^{m+1}}}\sum_{y_0\in\mathbb{Z}_2}\sum_{\vec{y}\in\mathbb{Z}_2^m}
e^{2\pi i \left(\frac{1}{4}y_0 + \frac{1}{2}y_0 Q(\vec{x}, \vec{y}) + R(\vec{x}, \vec{y})\right)}\ket{f(\vec{x},\vec{y})}
&&--> \frac{1}{\sqrt{2^{m}}}\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi i\left(\frac{1}{8} -\frac{1}{4}\lift{Q}(\vec{x}, \vec{y}) + R(\vec{x}, \vec{y})\right)}\ket{f(\vec{x},\vec{y})}
& \textsf{[$\omega$]} \\
&\frac{1}{\sqrt{2^{m+1}}}\sum_{y_0\in\mathbb{Z}_2}\sum_{\vec{y}\in\mathbb{Z}_2^m}
e^{2\pi i \left(\frac{1}{2}y_0(y_i + Q(\vec{x}, \vec{y})) + R(\vec{x}, \vec{y})\right)}\ket{f(\vec{x},\vec{y})}
&&--> \frac{1}{\sqrt{2^{m+1}}}\sum_{\substack{\vec{y}\in\mathbb{Z}_2^m}}
e^{2\pi i\left(R[y_i\gets \lift{Q}]\right)(\vec{x}, \vec{y})}\ket{\left(f[y_i \gets Q]\right)(\vec{x},\vec{y})}
& \textsf{[HH]}
\end{alignat*}\end{minipage}}
\[
\inference
{P(\vec{x},\vec{y}) = \frac{1}{4}y_ix + \frac{1}{2}y_i(y_j + Q(\vec{x}, \vec{y})) + R(\vec{x}, \vec{y})
= \frac{1}{4}y_j(1-x) + \frac{1}{2}y_j(y_i + Q'(\vec{x}, \vec{y})) + R'(\vec{x}, \vec{y})}
{\frac{1}{\sqrt{2^{m+2}}}\sum_{\vec{y}\in\mathbb{Z}_2^{m+2}}e^{2\pi i P(\vec{x}, \vec{y})}\ket{f(\vec{x},\vec{y})} -->
\frac{1}{\sqrt{2^{m}}}\sum_{\vec{y}\in\mathbb{Z}_2^{m}}e^{2\pi i \left((1 - x)R[y_j\gets \lift{Q}] +
xR'[y_i\gets \lift{Q'}]\right)(\vec{x}, \vec{y})}\ket{f(\vec{x},\vec{y})}}
\tag*{\textsf{[Case]}}
\]
\caption{Path-sum reduction rules}\label{fig:rewrite}
\end{figure}
\begin{proposition}[Correctness]\label{thm:correctness}
If $\xi -->* \xi'$, then $\xi\equiv \xi'$.
\end{proposition}
The correctness of our rewrite system follows from direct calculation over symbolic exponential sums. As the proof is quite tedious, we leave it to \Cref{app:proof}.
It is a trivial fact that our calculus is terminating, as every rule reduces the number of path variables. Moreover, each rewrite rule can be matched against in polynomial time, hence every path-sum reduces to a normal form in polynomial time.
\begin{proposition}[Strong normalization]\label{thm:normal}
Every sequence of rewrites terminates with an irreducible path-sum. The length of any such sequence is linear in the number of path variables $m$, and for an $n$-qubit path-sum normalization takes time polynomial in $n$ and $m$.
\end{proposition}
\subsection{Examples}
To illustrate our rewrite system, we give examples below. Further examples can be found in \Cref{app:examples}.
\begin{example}
Recall that the standard implementation of the Toffoli gate over Clifford+$T$ has the path-sum form \cite{amm14} \vspace{-1pt}
\[
\textsf{Toffoli}_3:\ket{x_1x_2x_3}\mapsto \frac{1}{\sqrt{2^2}}\sum_{y_1, y_2\in\mathbb{Z}_2}
e^{2\pi i \frac{1}{2}\left(x_3y_1 + x_1x_2y_1 + y_1y_2\right)}\ket{x_1x_2y_2}.
\]
We can verify that this is equivalent to the functional specification $\ket{x_1x_2x_3}\mapsto \ket{x_1x_2(x_3\oplus x_1x_2)}$ with the following sequence of reductions and algebraic manipulations:
\begin{align*}
\ket{x_1x_2x_3}
&\mapsto \frac{1}{\sqrt{2^2}} \sum_{y_1, y_2\in\mathbb{Z}_2}
e^{2\pi i \frac{1}{2}\left(x_3y_1 + x_1x_2y_1 + y_1y_2\right)}\ket{x_1x_2y_2}\quad \\
&\mapsto \frac{1}{\sqrt{2^2}} \sum_{y_1, y_2\in\mathbb{Z}_2}
e^{2\pi i \frac{1}{2}y_1(y_2 + x_3 + x_1x_2)}\ket{x_1x_2y_2} \\
&\mapsto \frac{1}{\sqrt{2^2}} \sum_{y_2\in\mathbb{Z}_2}\ket{x_1x_2(x_3\oplus x_1x_2)} \tag*{\textsf{[HH]}} \\
&\mapsto \ket{x_1x_2(x_3\oplus x_1x_2)} \tag*{\textsf{[Elim]}}
\end{align*}
\end{example}
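As a sanity check, the initial path-sum in this example can also be expanded by brute force over its two path variables; the Python sketch below (ours, for illustration) verifies the functional specification for every input:

```python
import numpy as np
from itertools import product

def toffoli_amplitudes(x1, x2, x3):
    # expand the two path variables y1, y2 of the Toffoli path-sum
    amp = {}
    for y1, y2 in product((0, 1), repeat=2):
        phase = np.exp(1j * np.pi * (x3 * y1 + x1 * x2 * y1 + y1 * y2))
        key = (x1, x2, y2)
        amp[key] = amp.get(key, 0) + phase / 2  # 1/sqrt(2^2) = 1/2
    return amp

for x in product((0, 1), repeat=3):
    amp = toffoli_amplitudes(*x)
    target = (x[0], x[1], x[2] ^ (x[0] & x[1]))  # |x1 x2 (x3 xor x1x2)>
    assert all(np.isclose(a, 1.0 if k == target else 0.0) for k, a in amp.items())
```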
\begin{example}
The controlled-$T$ gate can be specified as the path-sum
\[
\textsf{controlled-T}:\ket{x_1x_2} \mapsto e^{2\pi i \frac{x_1x_2}{8}}\ket{x_1x_2}.
\]
An implementation of the controlled-$T$ gate over Clifford+$T$ is given below:
\[
\centerline{\footnotesize
\Qcircuit @C=1em @R=.7em {
& \ctrl{1} & \gate{S^\dagger} & \targ & \gate{T} & \targ & \qw & \gate{T} & \ctrl{2} & \gate{H} & \gate{T} & \gate{H}
& \ctrl{2} & \gate{T^\dagger} & \qw & \targ & \gate{T^\dagger} & \targ & \gate{S} & \ctrl{1} & \qw \\
& \targ & \ctrl{1} & \qw & \qw & \ctrl{-1} & \ctrl{1} & \qw & \qw & \qw & \qw & \qw
& \qw & \qw & \ctrl{1} & \ctrl{-1} & \qw & \qw & \ctrl{1} & \targ & \qw \\
\lstick{\ket{0}} & \gate{H} & \targ & \ctrl{-2} & \gate{T^\dagger} & \qw & \targ & \gate{T^\dagger} & \targ & \qw & \qw & \qw
& \targ & \gate{T} & \targ & \gate{T} & \qw & \ctrl{-2} & \targ & \gate{H} & \qw
}
}
\]
Computing the canonical path-sum and reducing we get
\begin{align*}
\ket{x_1x_2}\ket{0}
&\mapsto \frac{1}{\sqrt{2^4}}\sum_{\vec{y}\in\mathbb{Z}_2^4}
e^{2\pi i \frac{1}{8}\left( 4x_1x_2y_1 + 4x_1y_2 + 4y_1y_2 + y_2 + 4y_2y_3 + 4x_1x_2y_3 + 4x_1y_4 + 4y_3y_4 + 4x_1x_2\right)}
\ket{x_1x_2y_4} \\
&\mapsto \frac{1}{\sqrt{2^4}}\sum_{\vec{y}\in\mathbb{Z}_2^4}
e^{2\pi i \left(\frac{1}{2}y_1(y_2 + x_1x_2) +
\frac{1}{8}(4x_1y_2 + y_2 + 4y_2y_3 + 4x_1x_2y_3 + 4x_1y_4 + 4y_3y_4 + 4x_1x_2)\right)}
\ket{x_1x_2y_4} \;\; \\
&\mapsto \frac{1}{\sqrt{2^2}}\sum_{y_3, y_4\in\mathbb{Z}_2}
e^{2\pi i \frac{1}{8}(4x_1x_2 + x_1x_2 + 4x_1x_2y_3 + 4x_1x_2y_3 + 4x_1y_4 + 4y_3y_4 + 4x_1x_2)}
\ket{x_1x_2y_4} \tag*{\textsf{[HH, Elim]}} \\
&\mapsto \frac{1}{\sqrt{2^2}}\sum_{y_3, y_4\in\mathbb{Z}_2}
e^{2\pi i \left(\frac{1}{2}y_3y_4 + \frac{1}{2}x_1y_4 + \frac{1}{8}x_1x_2\right)}
\ket{x_1x_2y_4} \\
&\mapsto e^{2\pi i \frac{x_1x_2}{8}}\ket{x_1x_2}\ket{0} \tag*{\textsf{[HH, Elim]}}
\end{align*}
Hence the above circuit implements the controlled-$T$ gate, and provably leaves the ancilla clean.
\end{example}
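The canonical path-sum computed at the start of this example (four path variables, before any reduction) can likewise be expanded by brute force; the Python sketch below (our own illustration) confirms that it implements the controlled-$T$ gate and leaves the ancilla clean:

```python
import numpy as np
from itertools import product

def ct_amplitudes(x1, x2):
    # expand all 16 assignments of the path variables y1..y4
    amp = {}
    for y1, y2, y3, y4 in product((0, 1), repeat=4):
        P = (4*x1*x2*y1 + 4*x1*y2 + 4*y1*y2 + y2 + 4*y2*y3
             + 4*x1*x2*y3 + 4*x1*y4 + 4*y3*y4 + 4*x1*x2) / 8
        key = (x1, x2, y4)
        amp[key] = amp.get(key, 0) + np.exp(2j * np.pi * P) / 4  # 1/sqrt(2^4)
    return amp

for x1, x2 in product((0, 1), repeat=2):
    amp = ct_amplitudes(x1, x2)
    assert np.isclose(amp[(x1, x2, 0)], np.exp(2j * np.pi * x1 * x2 / 8))
    assert np.isclose(amp.get((x1, x2, 1), 0), 0)
```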
\section{Completeness}\label{sec:completeness}
While our calculus computes a normal form in polynomial time, the normal forms are not necessarily unique\footnote{It was pointed out by an anonymous referee that uniqueness would imply that equivalence checking of reversible Boolean circuits is in P. As this problem is co-NP-complete, uniqueness of our normal forms would indeed imply $\text{P}=\text{co-NP}$.} and hence our reduction system is incomplete. For instance, the Clifford+$T$ identity
\[
\centerline{\footnotesize
\Qcircuit @C=1em @R=.7em {
& \ctrl{1} & \gate{X} & \qw & \qw & \qw & \qw & \ctrl{1} & \gate{X} & \qw & \qw & \qw & \qw & \ustick{\;\;\;\;2}\qw \\
& \targ & \gate{T} & \gate{H} & \gate{T} & \gate{H} & \gate{T^\dagger} & \targ & \gate{T} & \gate{H} & \gate{T^\dagger}
& \gate{H} & \gate{T^\dagger} & \qw
}
}
\]
from \cite{bs16} gives the irreducible path-sum $\ket{x_1x_2} \mapsto \frac{1}{\sqrt{2^8}}\sum_{\vec{y}\in\mathbb{Z}_2^8}e^{2\pi i \frac{1}{8}P(\vec{x}, \vec{y})}\ket{x_1y_8}$ with phase polynomial
\begin{align*}
P(\vec{x}, \vec{y}) =
2 &+ 6x_1x_2 + x_2 + y_1 + 4y_1(x_1 + x_2 + y_2) + 6y_2 + 4y_2y_3 + 2y_2x_1 + 3y_3 + 4y_3(x_1 + y_4) \\
&+ 4y_4y_5 + 6y_4x_1 + y_5 + 4y_5(x_1 + y_6) + 6y_6 + 4y_6y_7 + 2y_6x_1 + 3y_7 + 4y_7(x_1 + y_8) + 7y_8.
\end{align*}
A complete verification procedure could proceed by explicitly expanding the values of remaining variables in the path-sum after all possible reductions have been made, and then checking equivalence to the identity transformation. In practice we found that this is generally not necessary, as our calculus, along with some additional observations, is sufficient to prove equivalence or non-equivalence for the majority of circuits. Moreover, these heuristics combined with path-sum reductions give a complete, polynomial-time procedure for determining equivalence of Clifford group circuits.
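Such an expansion is straightforward to implement. The Python sketch below (our own illustration) expands all remaining path variables of a path-sum and checks the result against the identity, shown here on the canonical path-sum of $HH$ and on a single $H$:

```python
import numpy as np
from itertools import product

def is_identity(n, m, phase, f):
    # phase: (x, y) -> dyadic rational; f: (x, y) -> tuple of output bits
    for x in product((0, 1), repeat=n):
        amp = {}
        for y in product((0, 1), repeat=m):
            out = f(x, y)
            amp[out] = amp.get(out, 0) + np.exp(2j * np.pi * phase(x, y)) / np.sqrt(2 ** m)
        if not np.isclose(amp.get(x, 0), 1):
            return False
        if any(not np.isclose(a, 0) for out, a in amp.items() if out != x):
            return False
    return True

# HH : |x> -> (1/2) sum_{y1,y2} e^{2 pi i (x y1 + y1 y2)/2} |y2> is the identity
assert is_identity(1, 2, lambda x, y: (x[0]*y[0] + y[0]*y[1]) / 2,
                   lambda x, y: (y[1],))
# a single H is not
assert not is_identity(1, 1, lambda x, y: x[0]*y[0] / 2, lambda x, y: (y[0],))
```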
\subsection{Isometry restrictions}
Our first heuristic reduces the number of path variables in a \emph{well-formed} path-sum when checking equivalence. Specifically, we denote by $\xi|_{f(\vec{x},\vec{y})=\vec{x}}$ the restriction of $\xi$ to solutions $\vec{x}\in\mathbb{Z}_2^n, \vec{y}\in\mathbb{Z}_2^m$ such that $f(\vec{x},\vec{y})=\vec{x}$, which we can write as the restricted sum
\[
\ket{\vec{x}} \mapsto \frac{1}{\sqrt{2^m}}\sum_{\vec{y}\in\mathbb{Z}_2^m, f(\vec{x}, \vec{y}) = \vec{x}}e^{2\pi i P(\vec{x}, \vec{y})}\ket{\vec{x}}.
\]
Effectively, the sum $\frac{1}{\sqrt{2^m}}\sum_{\vec{y}\in\mathbb{Z}_2^m, f(\vec{x}, \vec{y}) = \vec{x}}e^{2\pi i P(\vec{x}, \vec{y})}$ gives the amplitude of the basis state $\ket{\vec{x}}$ in the output for a given input state $\ket{\vec{x}}$. If the path-sum $\xi$ is well-formed (i.e. isometric), then this sum equals $1$ for every $\vec{x}$ precisely when $\xi$ is the identity transformation. We summarize this in the lemma below:
\begin{lemma}\label{lem:wf}
Suppose $U_{\xi}:\ket{\vec{x}}\mapsto\frac{1}{\sqrt{2^m}}\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi i P(\vec{x}, \vec{y})}\ket{f(\vec{x}, \vec{y})}$
is a well-formed path-sum. Then $\xi\equiv \ket{\vec{x}}\mapsto \ket{\vec{x}}$ if and only if
$\xi|_{f(\vec{x}, \vec{y}) = \vec{x}}\equiv \ket{\vec{x}}\mapsto \ket{\vec{x}}.$
\end{lemma}
Note that \cref{lem:wf} does not hold if $\xi$ is not well-formed, as $U_{\xi}$ may not be an isometry and so it may be that $U_{\xi}\ket{\vec{x}} = \ket{\vec{x}} + \ket{\psi}$ for some residual state $\ket{\psi}$. To reify the restricted path-sum $\xi|_{f(\vec{x}, \vec{y})=\vec{x}}$ we find path variable substitutions which give $f_i(\vec{x}, \vec{y})=x_i$ -- in particular, if for some index $i$ we have $f_i(\vec{x}, \vec{y}) = y_i\oplus Q(\vec{x}, \vec{y})$ where $y_i$ does not appear in $Q(\vec{x},\vec{y})$, we can substitute $x_i\oplus Q(\vec{x}, \vec{y})$ for $y_i$ to get $f_i(\vec{x}, \vec{y}) = x_i$ and remove $y_i$ from the sum. Any restrictions which cannot be reified are simply ignored. In practice this results in a significant simplification for some circuits, instantly removing up to $n$ path variables.
\subsection{Non-equivalence}
As the reduction rules of \cref{fig:rewrite} only suffice to prove \emph{positive} results, when no more reductions are possible we apply an observation that was found to be effective for proving that a path-sum $\xi$ is \emph{not} the identity. In particular, recall that
\[
\frac{1}{\sqrt{2^{m+1}}}\sum_{y_0\in\mathbb{Z}_2}
\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi i \left( \frac{1}{2}y_0Q(\vec{x}, \vec{y}) + R(\vec{x},\vec{y})\right)}\ket{f(\vec{x}, \vec{y})} = 0
\]
if $Q(\vec{x}, \vec{y}) = 1\mod 2$. If $Q$ is a non-zero Boolean-valued polynomial in \emph{only} input variables $x_i$, then there necessarily exists a solution $\vec{x}\in\mathbb{Z}_2^n$ such that $Q(\vec{x}) = 1\mod 2$ \cite{od14}, and in particular
\[
\frac{1}{\sqrt{2^{m+1}}}\sum_{y_0\in\mathbb{Z}_2}\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi i \left( \frac{1}{2}y_0Q(\vec{x}) + R(\vec{x},\vec{y})\right)}\ket{f(\vec{x}, \vec{y})} = \frac{1}{\sqrt{2^{m+1}}}
\sum_{\vec{y}\in\mathbb{Z}_2^m}(1 - 1)e^{2\pi i R(\vec{x},\vec{y})}\ket{f(\vec{x}, \vec{y})} = 0 .
\]
We summarize this in the following lemma.
\begin{lemma}\label{lem:negative}
Suppose
$
U_{\xi}: \ket{\vec{x}} \mapsto \frac{1}{\sqrt{2^{m+1}}}\sum_{y_0\in\mathbb{Z}_2}
\sum_{\vec{y}\in\mathbb{Z}_2^m}e^{2\pi i \left( \frac{1}{2}y_0Q(\vec{x}, \vec{y}) + R(\vec{x},\vec{y})\right)}\ket{f(\vec{x}, \vec{y})}
$
where $Q$ is a non-zero Boolean-valued polynomial not containing any path variables. Then $\xi \centernot{\equiv} \ket{\vec{x}}\mapsto\ket{\vec{x}}$.
\end{lemma}
Hence we can use a variant of \textsf{[HH]} where $Q$ contains only input variables to prove non-equivalence of a path-sum to the identity.
\subsection{Clifford completeness}
We can now show that together with the above simplifications, our path-sum reductions are complete for proving equivalence of Clifford group circuits. Recall that over the Clifford group, the path-sum interpretation of a circuit has phase polynomial of order at most $2$. Our proof of completeness rests on the fact that progress can always be made for an identity path-sum with only internal path variables and second-order phase polynomial, as shown below.
\begin{lemma}[Clifford progress \& preservation]\label{lem:pp}
If $\xi$ is a path-sum with phase polynomial $P$ such that $\xi\equiv\ket{\vec{x}}\mapsto\ket{\vec{x}}$, $\ord{P} \leq 2$ and $\xi$ contains only internal path variables,
then there exists $\xi'$ with phase polynomial $P'$ such that $\xi--> \xi'$ and $\ord{P'}\leq 2$.
\end{lemma}
\begin{proof}
Since $P$ is at most second-order, we can write $P = y_0Q + R$
for some internal path variable $y_0$ and polynomials $Q, R$ where $Q$ is at most first-order, and in particular has the form
\[
\frac{a}{4} + \frac{b}{2}Q'
\]
where $a,b\in\mathbb{Z}_2$ and $Q'$ is a linear Boolean-valued polynomial.
We have three cases to consider, corresponding to the \textsf{[Elim]}, \textsf{[HH]} and \textsf{[$\omega$]} rules respectively.
\paragraph{Case 1: $a=b=0$.}
The variable $y_0$ does not appear in $P$, hence $\xi-->_{\textsf{[Elim]}} \xi'$ and $\ord{P'}=\ord{P} \leq 2$.
\paragraph{Case 2: $a=0, b=1$.}
If the polynomial $Q'$ contains a path variable $y_i$, then $Q' = y_i + Q''$ and $\xi-->_{\textsf{[HH]}} \xi'$.
Further, by \cref{lem:order}, $\ord{R[y_i\gets \lift{Q''}]} \leq \ord{R} \leq 2$ and $\xi'$ has only internal paths since $y_i\notin f$.
If on the other hand $Q'$ only contains input variables, by \cref{lem:negative} $\xi \centernot{\equiv} \ket{\vec{x}} \mapsto \ket{\vec{x}}$,
a contradiction.
\paragraph{Case 3: $a=1$.}
The sum matches the left hand side of \textsf{[$\omega$]}, hence $\xi-->_{\textsf{[$\omega$]}} \xi'$.
Further, by \cref{lem:order}
\[
\ord{P'}=\ord{\frac{1}{8} - \frac{1}{4}\lift{Q'} + R} = \max \left\{ \ord{\frac{1}{8}}, \ord{\frac{1}{4}\lift{Q'}}, \ord{R}\right\} = 2.
\]
\end{proof}
\begin{corollary}
If $C$ is a Clifford-group quantum circuit, then $\sop{C} \equiv \ket{\vec{x}}\mapsto \ket{\vec{x}}$ can be decided in time polynomial in the space-time volume of $C$.
\end{corollary}
\begin{proof}
Since $\sop{C}$ is well-formed, by \cref{lem:wf} it suffices to check $\sop{C}|_{f(\vec{x},\vec{y}) = \vec{x}}\equiv \ket{\vec{x}}\mapsto \ket{\vec{x}}$. Further, as $f(\vec{x},\vec{y})$ is linear, we can compute via Gaussian elimination a solution $\vec{y}$ so that $f(\vec{x},\vec{y}) = \vec{x}$ for any $\vec{x}$ -- if no such solution exists, $\sop{C}\centernot{\equiv} \ket{\vec{x}} \mapsto \ket{\vec{x}}$. Since each $f_i$ is linear, $\ord{P[y_i\gets f_i]}\leq\ord{P}\leq 2$, hence by \cref{lem:pp} and \cref{thm:normal}, either $\sop{C}|_{f(\vec{x},\vec{y}) = \vec{x}}$ reduces to $\ket{\vec{x}}\mapsto\ket{\vec{x}}$ in polynomial time, or it reduces to some irreducible $\xi'$ for which \cref{lem:negative} establishes $\xi'\centernot{\equiv}\ket{\vec{x}}\mapsto\ket{\vec{x}}$.
\end{proof}
\section{Case studies}\label{sec:experiments}
We implemented our framework and verification algorithm in the open-source Haskell library \href{https://github.com/meamy/feynman}{\sc{Feynman}}. To test the efficacy of our methods, we performed verification of circuit optimizations (both correct and incorrect), as well as the verification of circuit implementations against formal path-sum specifications. All experiments were run on Debian Linux with a quad-core 64-bit Intel Core i7 2.40 GHz processor and 8 GB RAM, and can be executed from the command line with \texttt{./feyn VerBench} and \texttt{./feyn VerAlg} for the translation validation and algorithm benchmarks, respectively.
\subsection{Translation validation}
\begin{table}
\footnotesize
\caption{Translation validation results. $n$ lists the number of qubits, Path vars gives the number of path variables, and Clifford and $T$ give the number of respective gates. Times for positive and negative verification measure the time to prove equivalence or non-equivalence against the optimized circuit or the optimized circuit with one random gate removed, respectively. Benchmarks with no timing results ran out of memory.}
\label{tab:results}
\centering
\begin{tabular}[t]{lrrrrrr} \toprule
Algorithm & $n$ & Path vars & Clifford & $T$ & \multicolumn{2}{c}{Time (s)} \\ \cmidrule(l{4pt}r{4pt}){6-7}
& & & & & Positive & Negative \\ \midrule
Grover\_5 & 9 & 200 & 1515 & 490 & 0.973 & 0.988 \\
Mod 5\_4 & 5 & 12 & 66 & 44 & 0.005 & 0.028 \\
VBE-Adder\_3 & 10 & 20 & 167 & 94 & 0.026 & 0.028 \\
CSLA-MUX\_3 & 15 & 40 & 289 & 132 & 0.099 & 0.055 \\
CSUM-MUX\_9 & 30 & 56 & 638 & 280 & 0.270 & 0.270 \\
QCLA-Com\_7 & 24 & 74 & 1237 & 297 & 0.530 & 0.543 \\
QCLA-Mod\_7 & 26 & 164 & 1641 & 650 & 9.446 & 10.517 \\
QCLA-Adder\_10 & 36 & 100 & 627 & 400 & 0.674 & 0.683 \\
Adder\_8 & 24 & 160 & 1419 & 614 & 1.968 & 2.018 \\
RC-Adder\_6 & 14 & 44 & 322 & 124 & 0.080 & 0.090 \\
Mod-Red\_21 & 11 & 60 & 392 & 192 & 0.110 & 0.119 \\
Mod-Mult\_55 & 9 & 28 & 180 & 84 & 0.028 & 0.009 \\
Mod-Adder\_1024 & 28 & 660 & 4363 & 3006 & 21.362 & 21.588 \\
Cycle 17\_3 & 35 & 1366 & 9172 & 6694 & -- & -- \\
GF($2^4$)-Mult & 12 & 28 & 263 & 180 & 0.063 & 0.061 \\
GF($2^5$)-Mult & 15 & 36 & 393 & 286 & 0.143 & 0.141 \\
GF($2^6$)-Mult & 18 & 44 & 559 & 402 & 0.279 & 0.291 \\
GF($2^7$)-Mult & 21 & 52 & 731 & 560 & 0.501 & 0.527 \\
GF($2^8$)-Mult & 24 & 60 & 975 & 712 & 0.837 & 0.881 \\
GF($2^9$)-Mult & 27 & 68 & 1179 & 918 & 1.304 & 1.369 \\
GF($2^{10}$)-Mult & 30 & 76 & 1475 & 1110 & 1.958 & 0.327 \\
GF($2^{16}$)-Mult & 48 & 124 & 3694 & 2832 & 16.028 & 17.539 \\
GF($2^{32}$)-Mult & 96 & 252 & 14259 & 11296 & 430.883 & 436.521 \\
GF($2^{64}$)-Mult & 192 & 508 & 55408 & 45120 & -- & -- \\
Hamming\_15 (low) & 17 & 76 & 612 & 158 & 0.367 & 0.168 \\
Hamming\_15 (med) & 17 & 184 & 1251 & 762 & 1.390 & 1.430 \\
Hamming\_15 (high) & 20 & 716 & 5332 & 3462 & 24.360 & 24.303 \\% \hline
HWB\_6 & 7 & 52 & 369 & 180 & 0.200 & 0.207 \\
HWB\_8 & 12 & 2282 & 17583 & 8895 & -- & -- \\
QFT\_4 & 5 & 84 & 218 & 136 & 0.084 & 0.089 \\
$\Lambda_3(X)$ & 5 & 12 & 52 & 36 & 0.004 & 0.011 \\
$\Lambda_3(X)$ (Barenco) & 5 & 12 & 66 & 44 & 0.007 & 0.046 \\
$\Lambda_4(X)$ & 7 & 20 & 87 & 58 & 0.009 & 0.008 \\
$\Lambda_4(X)$ (Barenco) & 7 & 20 & 127 & 84 & 0.014 & 0.024 \\
$\Lambda_5(X)$ & 9 & 18 & 112 & 80 & 0.015 & 0.017 \\
$\Lambda_5(X)$ (Barenco) & 9 & 28 & 160 & 124 & 0.030 & 0.031 \\
$\Lambda_{10}(X)$ & 19 & 68 & 297 & 190 & 0.110 & 0.111 \\
$\Lambda_{10}(X)$ (Barenco) & 19 & 68 & 493 & 324 & 0.219 & 0.210 \\ \bottomrule
\end{tabular}
\end{table}
Translation validation is an important tool for verifying that the transformations a compiler performs do not change the semantics of an input program. While it is generally desirable to prove that a compiler operates correctly on \emph{all} input programs, as with \emph{verified compilers} like \textsf{CompCert} \cite{l06} or \textsc{ReVerC} \cite{ars17} in the reversible domain, in many cases this is infeasible since the best optimizations are typically difficult to formally verify.
We used our algorithm to verify a suite of optimized benchmark circuits against the original circuits. For the optimizer we chose the \textsc{GraySynth} algorithm from \cite{aam17}, which is implemented in \textsc{Feynman}, and verified each benchmark reported in that paper. \Cref{tab:results} reports the results of our experiments. All but $3$ of the benchmark circuits were successfully verified, with the remaining $3$ benchmarks running out of memory with a 6 GB limit. The high memory usage may be mitigated in the future by switching to a linear-space representation of the phase polynomial. The largest (completed) benchmark, GF($2^{32}$), containing $96$ bits, $252$ path variables and over $25000$ gates, completed in under 10 minutes, with the remainder all taking under a minute.
To test the algorithm's ability to prove \emph{non-equivalence}, we also performed the verification of the optimized benchmark circuits after removing a randomly selected gate. Again, all but $3$ benchmarks were proven to be not equivalent, with the negative verification results taking about the same amount of time as positive results.
\subsection{Verifying quantum algorithms}
\begin{table}
\footnotesize
\caption{Results of verifying formally specified quantum algorithms.}
\label{tab:specs}
\centering
\begin{tabular}[t]{lrrrrrr} \toprule
Algorithm & $n$ & Path vars & Clifford & $T$ & \multicolumn{2}{c}{Time (s)} \\ \cmidrule(l{4pt}r{4pt}){6-7}
& & & & & Positive & Negative \\ \midrule
\textsf{Toffoli}$_{50}$ & 97 & 190 & 855 & 665 & 1.084 & 1.064 \\
\textsf{Toffoli}$_{100}$ & 197 & 390 & 1755 & 1365 & 5.566 & 5.275 \\
\textsf{Maslov}$_{50}$ & 74 & 192 & 481 & 384 & 0.801 & 0.778 \\
\textsf{Maslov}$_{100}$ & 149 & 392 & 981 & 784 & 3.987 & 3.983 \\
\textsf{Adder}$_8$ & 40 & 56 & 334 & 196 & 0.142 & 0.143 \\
\textsf{Adder}$_{16}$ & 80 & 120 & 710 & 420 & 25.527 & 92.607 \\
\textsf{QFT}$_{16}$ & 16 & 16 & 256 & -- & 1.250 & 1.335 \\
\textsf{QFT}$_{31}$ & 31 & 31 & 961 & -- & 16.929 & 15.295 \\
\textsf{Hidden Shift}$_{20, 4}$ & 20 & 60 & 5254 & 56 & 1.067 & 0.862 \\
\textsf{Hidden Shift}$_{40, 5}$ & 40 & 120 & 6466 & 70 & 3.383 & 2.826 \\
\textsf{Hidden Shift}$_{60, 10}$ & 60 & 180 & 12784 & 140 & 13.217 & 12.351 \\
\textsf{Symbolic Shift}$_{20, 4}$ & 40 & 60 & 5296 & 56 & 1.859 & 1.849 \\
\textsf{Symbolic Shift}$_{40, 5}$ & 80 & 120 & 6638 & 70 & 6.953 & 7.905 \\
\textsf{Symbolic Shift}$_{60, 10}$ & 120 & 180 & 12804 & 140 & 35.583 & 29.614 \\ \bottomrule
\end{tabular}
\end{table}
To evaluate our framework as a tool for functional specification and verification, we implemented and verified several quantum algorithms (both with and without errors) directly against their specification as a path-sum. \Cref{tab:specs} reports the results of our experiments, and we describe the algorithms and implementations below.
\paragraph{Reversible functions}
We implemented and verified a number of known algorithms for reversible functions. In particular, we performed verifications of Clifford+$T$ implementations of the generalized Toffoli and (out-of-place) addition functions,
\begin{align*}
\textsf{Toffoli}_n &: \ket{x_1x_2\dots x_n}\mapsto \ket{x_1x_2\dots (x_n\oplus x_1x_2\dots x_{n-1})}, \\
\textsf{Adder}_n &: \ket{\vec{x}}\ket{\vec{y}}\ket{\vec{0}}\mapsto \ket{\vec{x}}\ket{\vec{y}}\ket{\vec{x}+\vec{y}}
\end{align*}
We chose two implementations of the $n$-bit Toffoli gate -- using the standard decomposition into $2(n-3) + 1$ Toffoli gates and $n-3$ ancillas, and the Maslov decomposition \cite{m16} using \emph{relative phase} Toffolis and $\lceil \frac{n-3}{2}\rceil$ ancillas. For either implementation we were able to verify up to $100$ bit Toffoli gates in just seconds.
For the addition circuit, we used a standard out-of-place ripple-carry adder which uses $n-1$ ancilla bits to store intermediate carry values and an additional $n$ bit register to store the output, before copying out and uncomputing. The resulting circuit uses $5n - 1$ bits of space for an $n$ bit adder, and $4(n-1)$ Toffoli gates, which are then expanded to the Clifford+$T$ gate set. The specification itself was generated by implementing binary addition on symbolic vectors, and could ostensibly be classically tested to verify its own correctness. In this case, the size of the bitwise expansion of $\vec{x}+\vec{y}$ made it difficult to push to implementation sizes (e.g., 32 bits), though smaller sizes such as $16$ bits were verifiable within a minute. \emph{Relational} techniques -- e.g., representing the outputs of a path-sum as ``primed'' variables along with equations relating them -- may help to push verification of such functions to larger sizes.
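The bitwise expansion of $\vec{x}+\vec{y}$ follows the usual ripple-carry recurrence; a short Python sketch (ours; the function names are our own) of that recurrence, evaluated on concrete bits, can be tested exhaustively against integer addition:

```python
from itertools import product

def add_bits(xs, ys):
    # ripple-carry: z_i = x_i xor y_i xor c_i, c_{i+1} = maj(x_i, y_i, c_i)
    c, zs = 0, []
    for x, y in zip(xs, ys):
        zs.append(x ^ y ^ c)
        c = (x & y) | (c & (x ^ y))
    return zs

def to_int(bits):  # little-endian
    return sum(b << i for i, b in enumerate(bits))

n = 4
for xs in product((0, 1), repeat=n):
    for ys in product((0, 1), repeat=n):
        assert to_int(add_bits(xs, ys)) == (to_int(xs) + to_int(ys)) % 2 ** n
```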
\paragraph{The quantum Fourier transform}
To test our verification method against circuits using higher-order rotations, we verified an implementation of the quantum Fourier transform. We use a circuit from \cite{klm07} together with a final qubit permutation correction and verified it against the specification
\[
\textsf{QFT}_n: \ket{\vec{x}}\mapsto \frac{1}{\sqrt{2^n}}\sum_{\vec{y}\in\mathbb{Z}_2^n}e^{2\pi i \frac{\int{\vec{x}\cdot \vec{y}}}{2^n}}\ket{\vec{y}}.
\]
The phase polynomial $\int{\vec{x} \cdot \vec{y}}$ was generated in the obvious way -- by computing $\int{\vec{x}} = x_1 + 2x_2 + \dots + 2^{n-1}x_n$ and multiplying the polynomials. In this case our implementation was able to verify instances up to $31$ bits in size, after which integer overflow occurs due to our handling of dyadic arithmetic. Given that the 31-bit instance took only 16 seconds to verify, better methods for handling dyadic arithmetic would likely make much larger QFT instances verifiable.
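For illustration, the phase computation can be sketched as follows (helper names are ours, not the tool's); the final check suggests why fixed-width 64-bit numerators become problematic beyond roughly 31 bits, which is one plausible reading of the overflow mentioned above.

```python
# Sketch of the QFT phase polynomial: interpret bit-vectors as integers
# and multiply, giving the dyadic phase (x*y mod 2^n)/2^n. Python ints
# are arbitrary precision; the point of the last check is that a
# fixed-width 64-bit numerator would overflow near n = 32.
from fractions import Fraction

def int_of(bits):
    """\\int{x} = x1 + 2*x2 + ... + 2^(n-1)*xn, little-endian bits."""
    return sum(b << i for i, b in enumerate(bits))

def qft_phase(xbits, ybits):
    """Dyadic phase exponent phi, with amplitude e^{2 pi i phi}."""
    n = len(xbits)
    return Fraction(int_of(xbits) * int_of(ybits) % 2**n, 2**n)

# e.g. n = 3, x = 5 (bits [1,0,1]), y = 3 (bits [1,1,0]):
# 5*3 = 15 = 7 mod 8, so the phase is 7/8
assert qft_phase([1, 0, 1], [1, 1, 0]) == Fraction(7, 8)

# overflow illustration: for n = 32 the raw numerator can reach ~2^64
x = y = 2**32 - 1
assert x * y >= 2**63    # exceeds a signed 64-bit numerator
```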
\paragraph{The quantum hidden shift algorithm}
To test our framework on more general quantum algorithms, we implemented a version of the quantum hidden shift algorithm \cite{r10} which has been previously used to test quantum simulation algorithms \cite{bg16}. In particular, given oracles $O_{f'}:\ket{\vec{x}}\mapsto f(\vec{x}+\vec{s})\ket{\vec{x}}$ and $O_{\tilde{f}}:\ket{\vec{x}}\mapsto \tilde{f}(\vec{x})\ket{\vec{x}}$ for the shifted and dual bent functions $f', \tilde{f}:\mathbb{Z}_2^n\rightarrow \{-1, +1\}$ respectively, the circuit $H^{\otimes n}O_{\tilde{f}}H^{\otimes n}O_{f'}H^{\otimes n}$ is known \cite{r10} to implement the mapping $\ket{\vec{0}}\mapsto \ket{\vec{s}}$.
Following \cite{bg16}, we generated random instances of Maiorana--McFarland bent functions by setting $f'(\vec{x},\vec{y}) = f((\vec{x},\vec{y})+\vec{s})=(-1)^{g(\vec{x}) + \vec{x}\vec{y}}$ with dual $\tilde{f}(\vec{x},\vec{y})=(-1)^{g(\vec{y}) + \vec{x}\vec{y}}$ for a random $\frac{n}{2}$-bit Boolean function $g$ of degree $3$. The circuit for $f$ is generated, for a given number of alternations $A$, by alternating between a selection of 200 random $Z$ and controlled-$Z$ gates and a random doubly controlled-$Z$ gate, expanded out to Clifford+$T$. We implemented two versions of the algorithm, one where a concrete shift is given by a randomly generated Boolean vector, and another where the shift is supplied symbolically via a quantum register. In the former case we verify the circuit for a given shift $\vec{s}$ against the specification $\ket{\vec{0}}\mapsto\ket{\vec{s}}$, and in the latter case we verify the specification $\ket{\vec{0}}\ket{\vec{s}} \mapsto \ket{\vec{s}}\ket{\vec{s}}$. \Cref{fig:shift} shows both circuits.
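A small sketch of the Maiorana--McFarland construction (hypothetical helper names; $g$ of lower degree than in the text, for brevity), checking the claimed dual via the Walsh--Hadamard identity $\sum_{\vec{z}} f(\vec{z})(-1)^{\vec{z}\cdot\vec{w}} = 2^{n/2}\tilde{f}(\vec{w})$:

```python
# Generate f(x,y) = (-1)^{g(x) + x.y} and check that its dual is
# ftilde(x,y) = (-1)^{g(y) + x.y} by computing the full Walsh spectrum.
from itertools import product

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) % 2

def mm_pair(g, m):
    """Return (f, ftilde) on Z_2^{2m} for a Boolean function g on Z_2^m."""
    f = lambda x, y: (-1) ** ((g(x) + dot(x, y)) % 2)
    ft = lambda x, y: (-1) ** ((g(y) + dot(x, y)) % 2)
    return f, ft

m = 2                                   # n = 2m = 4
g = lambda x: (x[0] & x[1]) ^ x[0]      # an arbitrary small g
f, ft = mm_pair(g, m)

for wx in product([0, 1], repeat=m):
    for wy in product([0, 1], repeat=m):
        walsh = sum(f(x, y) * (-1) ** ((dot(x, wx) + dot(y, wy)) % 2)
                    for x in product([0, 1], repeat=m)
                    for y in product([0, 1], repeat=m))
        # flat spectrum of magnitude 2^{n/2}: f is bent, dual as claimed
        assert walsh == 2**m * ft(wx, wy)
```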
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\[
\centerline{
\footnotesize
\Qcircuit @C=.7em @R=.7em {
& \multigate{1}{H} & \multigate{1}{X^{\vec{s}}} & \ctrl{1} & \gate{O_g} & \multigate{1}{X^{\vec{s}}}
& \multigate{1}{H} & \ctrl{1} & \qw & \multigate{1}{H} & \qw \\
& \ghost{H} & \ghost{X^{\vec{s}}} & \control \qw & \qw & \ghost{X^{\vec{s}}}
& \ghost{H} & \control \qw & \gate{O_g} & \ghost{H} & \qw
}
}
\vspace{3mm}
\]
\subcaption{Hidden shift with a fixed shift $\vec{s}$.}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\[
\centerline{
\footnotesize
\Qcircuit @C=.7em @R=.7em {
& \multigate{1}{H} & \multigate{1}{X} & \ctrl{1} & \gate{O_g} & \multigate{1}{X}
& \multigate{1}{H} & \ctrl{1} & \qw & \multigate{1}{H} & \qw \\
& \ghost{H} & \ghost{X} & \control \qw & \qw & \ghost{X}
& \ghost{H} & \control \qw & \gate{O_g} & \ghost{H} & \qw \\
\lstick{\ket{\vec{s}}} & \qw & \ctrl{-1} & \qw & \qw & \ctrl{-1} & \qw & \qw & \qw & \qw & \qw
}
}
\]
\subcaption{Hidden shift with a symbolic shift.}
\end{subfigure}
\caption{Circuits for the Quantum Hidden Shift algorithm.}
\label{fig:shift}
\end{figure}
Our verification algorithm actually found a bug in our first implementation, which was a direct implementation of the circuit given in \cite{bg16}. After reimplementing the circuit based on \cite{r10}, we were able to verify both versions of the hidden shift algorithm for sizes exceeding those simulated in \cite{bg16} in only a fraction of the time (seconds versus hours \cite{bg16}). Our calculus further finds the correct output $\ket{\vec{s}}$ or $\ket{\vec{s}}\ket{\vec{s}}$ even without providing the specification, effectively simulating the algorithm rather than verifying it. Moreover, our implementation is deterministic, whereas theirs is probabilistic and only samples the output distribution rather than computing it outright. It is interesting to note that their algorithm also uses a similar technique of effectively evaluating the circuit's phase polynomial -- however, by including the $T$ gate phases directly in the polynomial and solving \emph{around} them, rather than pushing them into state preparations, we save a massive amount of time for this algorithm. An interesting question for future research is to determine whether there are quantum algorithms which can be simulated more efficiently by their methods.
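For comparison, a brute-force statevector simulation (not our path-sum method) of the circuit $H^{\otimes n}O_{\tilde{f}}H^{\otimes n}O_{f'}H^{\otimes n}$ on a small Maiorana--McFarland instance confirms that all amplitude lands on $\ket{\vec{s}}$; all names below are our own illustration.

```python
# Direct statevector simulation of the hidden shift circuit for n = 4:
# H^n . O_ftilde . H^n . O_f' . H^n |0...0>  should equal  |s>.
import numpy as np
from itertools import product

m = 2
n = 2 * m
g = lambda x: x[0] & x[1]
xs = list(product([0, 1], repeat=n))       # basis states, MSB-first
dot = lambda a, b: sum(u & v for u, v in zip(a, b)) % 2
f = lambda z: (-1) ** ((g(z[:m]) + dot(z[:m], z[m:])) % 2)
ft = lambda z: (-1) ** ((g(z[m:]) + dot(z[:m], z[m:])) % 2)

s = (1, 0, 1, 1)                           # the hidden shift
shift = lambda z: tuple(a ^ b for a, b in zip(z, s))

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)                     # H^{tensor n}

psi = np.zeros(2**n)
psi[0] = 1.0                               # |0...0>
psi = H @ psi
psi = np.array([f(shift(z)) for z in xs]) * psi   # O_f': phase f(z + s)
psi = H @ psi
psi = np.array([ft(z) for z in xs]) * psi         # O_ftilde
psi = H @ psi

assert abs(abs(psi[xs.index(s)]) - 1.0) < 1e-9    # all amplitude on |s>
```

This mirrors the path-sum argument in the text: the middle Hadamard layer turns $O_{f'}$ into $2^{-n/2}(-1)^{\vec{s}\cdot\vec{y}}\tilde{f}(\vec{y})$, which $O_{\tilde{f}}$ cancels, leaving the final layer to resolve $\ket{\vec{s}}$.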
\section{Conclusion}
We have described a framework for the representation of partial isometries as sums over a discrete set of paths. As an alternative to matrices, our path-sums admit a symbolic representation using polynomials, for which there exists fixed-parameter polynomial size representations of Clifford+$R_k$ circuits. This allows the efficient computation and representation of the action of such a quantum circuit on an arbitrary basis state. Further, we have given a system of rewrite rules which can be used to reduce path-sums and perform functional verification. Our experiments have shown this to be a powerful framework for verifying large quantum circuits, particularly against formal mathematical specifications of quantum algorithms.
The work we have described here is only a preliminary step towards a fully-automated system of formal specification and verification for quantum circuits, and as such there are many issues for future work to address. One particularly appealing direction is to expand the path-sum framework to more general quantum programs, and to give a concrete syntax so that modular libraries of verified programs may be developed and used. Improvements can be made on the algorithmic side, from using Fourier expansions and relational methods to more efficiently store path-sums, to the use of algebraic decision diagrams or other mathematical tools to complete verification once no more reductions can be made. Another interesting direction, motivated by our experience writing path-sum proofs ``by hand,'' is to implement our framework in an interactive proof assistant, allowing inductive and higher-order proofs over entire families of quantum circuits.
\section{Acknowledgements}
The author wishes to thank Neil J. Ross and Michele Mosca for stimulating discussions on the topic of path sums, as well as the anonymous referees for their helpful comments on an earlier version. This work was supported in part by Canada's NSERC and CIFAR.
\bibliographystyle{eptcs}
\section{Introduction}
\label{sec:intro}
An indispensable step in planet formation is to build planetesimals---super-kilometer objects bound by self-gravity---in protoplanetary disks \citep{Chiang2010, Johansen2014}. One of the compelling pathways to planetesimal formation is the efficient concentration of solids by the streaming instability (SI) followed by their gravitational collapse \citep{Youdin2005, Johansen2007a}.
The SI is an aerodynamic instability arising from the relative drift and the mutual drag forces between gas and solids \citep{Youdin2005}. The SI is one example of a broader class of drag instabilities in protoplanetary disks \citep{Goodman2000,Lin2017, Squire2018}.
Strong particle clumping can be induced by the SI under the right conditions, that is, when the midplane dust-to-gas volume density ratio exceeds unity, which further depends on the particle size, the dust-to-gas surface density ratio (often referred to as metallicity), and the gas pressure gradient \citep{JY2007, Bai2010a, Bai2010b, Carrera2015, Yang2017}. In a typical smooth disk with cm-sized pebbles, a slightly super-solar metallicity is required to trigger the strong SI \citep{Johansen2009a}. Higher metallicity may be reached by gas removal (e.g., via photoevaporation) or dust pile-up (e.g., at pressure bumps or snow lines), where smaller solids can also lead to strong particle clumping \citep{Carrera2017, Drazkowska2016, Drazkowska2017, Schoonenberg2017}. Previous studies have also shown that high-resolution SI simulations with self-gravity can produce a broad, top-heavy initial mass distribution of planetesimals \citep{Johansen2015, Simon2016, Simon2017}.
Recently, \citet{Yang2018} suggest that the midplane dust-to-gas density ratio exceeding order unity may not be necessary for the strong SI when the radial diffusion of particles driven by turbulence near the midplane is weak. \citet{Lin2019} find that particle back-reaction on to the gas can lead to self-sustained dust sedimentation against the vertical shear instability turbulence at a modest super-solar metallicity, which may trigger the SI. Furthermore, \citet{Krapp2019} investigate the linear growth of the SI with multiple dust species for the first time, showing how a properly resolved particle-size distribution affects the linear phase of the SI. Their study motivates more detailed simulations of the non-linear phase of the multi-species streaming instability as well, which would extend previous studies \citep{Bai2010a, Schaffer2018}.
Observations of asteroids and Trans-Neptunian objects support models in which planetesimals were born big \citep{Morbidelli2009}, with evidence of a drop-off in planetesimal numbers below $\sim$1--50 kilometers, depending on the population \citep{Delbo2017, Singer2019}. \citet{Nesvorny2019} find that SI simulations correctly predict the primarily prograde mutual inclinations of the abundant binaries in the Cold Classical Kuiper Belt \citep{Grundy2019}. Further studies on the demographics of planetesimals formed via the SI, and by other mechanisms, offer the promise of more detailed observational comparisons and tests.
Quantifying the mass distribution of planetesimals formed by the SI is of significant interest. Due to the high computational cost of SI simulations, a parameterized mass function can be used as the input for global studies of disk evolution and planet formation \citep{Drazkowska2014}. Furthermore, the shape of the mass distribution offers insights to the physical processes of particle clumping and gravitational collapse.
Previous work has fit the mass distribution to a simple power law \citep{Simon2016, Simon2017} or to a power law with an exponential cutoff or truncation \citep{Schafer2017, Abod2019}. These works suggested that the initial planetesimal mass function might be near-universal. However, it is not trivial to determine the best parameterization of the initial planetesimal mass function in SI simulations. Moreover, it is not clear whether a single functional form can describe planetesimal formation under different physical conditions, i.e., different simulation parameters.
Motivated by these issues, our goal in this paper is to better understand and constrain the broad initial mass distribution of planetesimals with robust statistical analyses. This work will fit many different parameterizations to simulated planetesimal mass distributions. To determine which models best describe the data, model selection techniques weigh the goodness of fit against a complexity penalty, intended to avoid the overfitting of data features that might be spurious. Since there is no universal agreement on complexity penalties either, we apply different model selection techniques, including a bootstrap method that we developed independently.
The paper is organized as follows. In Section \ref{sec:method}, we begin with an overview of the numerical models and our simulations. Section \ref{subsec:plan} then introduces our newly-developed clump-finding tool, \texttt{PLAN}. Section \ref{sec:fitting} lays out all the statistical models, our fitting procedure, and the model selection criteria. In Section \ref{sec:results}, we show the fitting and model selection results. Section \ref{sec:final} discusses the implications of our statistical analyses and closes with a summary and conclusions.
\begin{deluxetable*}{c|c|c|c|c|c|c|c}
\tablecaption{Simulation Parameters\label{tab:setup}}
\tablecolumns{8}
\tablehead{
\colhead{Run} &
\colhead{Domain Size} &
\colhead{Number of Cells} &
\colhead{$N_{\rm par}$\tablenotemark{$*$}} &
\colhead{$\uptau_{\rm s}$} &
\colhead{$Z$} &
\colhead{$t_{0}$\tablenotemark{$\dag$}} &
\colhead{$N_{\rm tot}$\tablenotemark{$\ddag$}} \\
\colhead{} &
\colhead{$(L_X\times L_Y\times L_Z)H^3$} &
\colhead{$N_X\times N_Y\times N_Z$} &
\colhead{} &
\colhead{} &
\colhead{} &
\colhead{$(\Omega_0^{-1})$} &
\colhead{}
}
\startdata
\hline\hline
I & $0.1\times 0.1\times 0.2$ & $512\times 512\times 1024$ & $134,217,728$ & $2.0$ & $0.1$ & $36.0$ & $284$\\
II & $0.2\times 0.2\times 0.2$ & $512\times 512\times 512$ & $153,600,000$ & $0.3$ & $0.02$ & $110.0$ & $174$
\enddata
\tablecomments{For all runs: the radial pressure gradient term is $\Pi = 0.05$ and the particle self-gravity strength is $\tilde{G} = 0.05$. }
\tablenotetext{$*$}{The number of particles. For reference, $2^{27}=512^3=134,217,728$.}
\tablenotetext{$\dag$}{The time when the particle self-gravity is switched on.}
\tablenotetext{$\ddag$}{The number of clumps identified by \texttt{PLAN} at the snapshot where we perform analyses and fitting in Section \ref{sec:results}.}
\end{deluxetable*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{RunA12_4.pdf}
\caption{A snapshot of the solid surface density ($\Sigma_{\rm p}$) from the simulation Run I. This snapshot is $4/\Omega_0$ after the particle self-gravity has been switched on, where self-bound clumps have already formed from collapse. All of the clumps identified by \texttt{PLAN} are marked by white circles that illustrate their Hill spheres. \label{fig:snapshot}}
\end{figure*}
\section{Method}
\label{sec:method}
To simulate the formation of planetesimals, we use the \texttt{ATHENA} code with a similar setup to \citet{Simon2017}. In Section \ref{subsec:athena}, we briefly introduce the numerical methods employed in \texttt{ATHENA} for modeling the coupled dynamics of gas and particles, including the self-gravity of solids, in a protoplanetary disk \citep[see][for more details]{Bai2010, Simon2016}. Section \ref{subsec:setup} then summarizes the numerical setup and parameters used in our simulations. Section \ref{subsec:plan} explains how \texttt{PLAN} identifies and characterizes all the self-bound clumps in the output particle data.
\subsection{Planetesimal Formation Simulations}
\label{subsec:athena}
We use \texttt{ATHENA} to simulate a small three-dimensional vertically-stratified patch of the protoplanetary disk with the local shearing box approximation \citep{Stone2008, Stone2010, Hawley1995}. This approximation---which is justified by the small length scales of the SI compared to the radial position in the disk---maps the global disk geometry $(R, \phi, z')$ onto a local Cartesian coordinate system $(x, y, z) \equiv (R-R_0, R_0 \phi, z')$ \citep{Goldreich1965}. The local box is centered at a fiducial disk radius ($R_0$) in the midplane; the Keplerian frequency and velocity are $\Omega_0$ and $v_{\rm K} = \Omega_0 R_0$, respectively.
In this non-inertial computational domain, \texttt{ATHENA} solves the equations of gas dynamics and the equation of motion for each particle (indexed by $i$)
\begin{align}
\fracp{\rho_{\rm g}}{t} + \nabla \cdot (\rho_{\rm g} \bm{u}) &= 0, \label{eq:gascon}\\
\begin{split}\label{eq:gasmom}
\fracp{(\rho_{\rm g} \bm{u})}{t} + \nabla\cdot(\rho_{\rm g} \bm{u}\bm{u} + P\bm{I}) &=\\
\rho_{\rm g} \biggl[ 2\bm{u}\times\bm{\Omega}_0 + 3{\Omega}_0^2 \bm{x} &- {\Omega}_0^2 \bm{z} \biggr] + \rho_{\rm p} \frac{\bar{\bm{v}} - \bm{u}}{t_\mm{stop}},
\end{split} \\
\begin{split}\label{eq:ithpar}
\fracd{\bm{v}_i}{t} = 2\bm{v}_i\times\bm{\Omega}_0+3{\Omega}_0^2 \bm{x}_i &- {\Omega}_0^2 \bm{z}_i \\
-\frac{\bm{v}_i - \bm{u}}{t_\mm{stop, i}} &- \nabla\Phi_{\rm sg} - 2\eta v_{\rm K} \Omega_0 \hat{x},
\end{split}
\end{align}
where $\rho_{\rm g}$, $\bm{u}$ and $P$ are density, velocity and pressure of gas, $\bm{I}$ is the identity matrix, $\bm{\Omega}_0 = \Omega_0 \hat{z}$, $\rho_{\rm p}$ and $\bar{\bm{v}}$ are the average density and velocity of the particles in a hydrodynamic grid cell, $t_{\rm stop}$ is the dimensional stopping time, $\bm{v}_i$ is the velocity of the $i$-th particle, $\Phi_{\rm sg}$ is the potential field of the self-gravity of solids, and $\eta$ denotes the relative difference between the gas orbital velocity and the Keplerian velocity due to the radial pressure gradient in the disk.
Our model calculates the Coriolis forces, radial and vertical tidal gravity, and the particle feedback exerted on the gas, as given on the right-hand side of Eq. \ref{eq:gasmom}. The equation of state for the gas is assumed to be isothermal, $P = c_{\rm s}^2 \rho_{\rm g}$, where the constant $c_{\rm s}$ is the isothermal sound speed. We neglect the self-gravity of the gas because the gas density fluctuations are negligible.
For solids, \texttt{ATHENA} adopts the super-particle treatment, where each particle in our simulations statistically represents a large number of pebbles in terms of mass. The acceleration of each particle is governed by Eq. \ref{eq:ithpar} with the Coriolis and tidal forces (similar to those in Eq. \ref{eq:gasmom}), and also the gas drag as well as the force due to particle self-gravity. The gravitational potential field, $\Phi_{\rm sg}$, is obtained by solving Poisson's equation
\begin{equation}
\nabla^2\Phi_{\rm sg} = 4 \pi G \rho_{\rm p}
\end{equation}
with the Fast Fourier Transform (FFT) method of \citet{Simon2016}, where $G$ is the gravitational constant. The accuracy of particle self-gravity from such a method depends on the grid resolution. The last source term in Eq. \ref{eq:ithpar}, $- 2\eta v_{\rm K} \Omega_0 \hat{x}$, is a constant radial force in \texttt{ATHENA} to implement the effective global radial pressure gradient under the restrictions of the local model.
The radial and azimuthal boundary conditions (BCs) for our model are the standard shearing-periodic BCs \citep{Stone2010}. In the vertical direction, we use modified outflow BCs that extrapolate the gas density into the ghost zones exponentially and prohibit any gas inflow \citep{Simon2011, Li2018}. These vertical BCs maintain hydrostatic equilibrium and reduce artificial gas motions at the vertical boundaries, which is beneficial for simulations in short boxes. Furthermore, the total gas mass is renormalized at each time step to compensate for the gas outflow and ensure mass conservation.
The physical behavior of our simulations is governed by four key dimensionless parameters. The SI is characterized by the first three: the dimensionless particle stopping time
\begin{equation}
\uptau_{\rm s} = \Omega_0 t_{\rm stop},
\end{equation}
which represents the ratio of a particle's aerodynamic ($t_{\rm stop}$) and orbital ($\Omega_0^{-1}$) timescales, increases with a particle's size, and decreases with the local gas density; the surface density ratio between the solids ($\Sigma_{\rm p}$) and the gas ($\Sigma_{\rm g}$)
\begin{equation}
Z = \frac{\Sigma_{\rm p}}{\Sigma_{\rm g}},
\end{equation}
which is sometimes called the total solid-to-gas mass ratio; the global radial pressure gradient parameter
\begin{equation}
\Pi \equiv \frac{\eta v_{\rm K}}{c_{\rm s}} \equiv -\frac{1}{2}\frac{c_{\rm s}}{v_{\rm K}}\fracp{\ln{P}}{\ln{R}},
\end{equation}
which accounts for the strength of the headwind on the particles. The fourth key parameter controls the relative strength of the particle self-gravity compared with the tidal shear
\begin{equation}
\tilde{G} \equiv \frac{4 \pi G \rho_0}{\Omega_0^2} = \frac{4}{\sqrt{2\pi}Q},
\end{equation}
where $\rho_0$ is the midplane gas density and $Q$ is Toomre's $Q$ \citep{Toomre1964}.
The code units of our simulations are set to the natural units of the shearing box. The density unit and the time unit are $\rho_0$ and $\Omega_0^{-1}$, respectively. While $\rho_0$ and $\Omega_0$ are code units, their allowed physical values and thus the choice of disk model are constrained by the $\tilde{G}$ parameter. The length unit is $H = c_{\rm s}/\Omega_0$, the vertical scale height of the gas.
\subsection{Numerical Setup}
\label{subsec:setup}
All of our simulations are initiated with Gaussian vertical density profiles for both the gas and particles
\begin{equation}
\begin{aligned}
\rho_{\rm g} &= \rho_0 \exp\left(\frac{-z^2}{2H^2}\right), \\
\rho_{\rm p} &= \frac{\Sigma_{\rm p}}{\sqrt{2\pi}H_{\rm p}} \exp\left(\frac{-z^2}{2H_{\rm p}^2}\right),
\end{aligned}
\end{equation}
where $H_{\rm p}$ is the particle scale height, initially set to $0.02H$. This choice of initial particle scale height matches our previous work in \citet{Simon2017}, which lets particles naturally sediment to a pseudo-equilibrium scale height of the order of $0.01H$ \citep{Yang2014, Li2018}. Both gas and particles are then initialized with the Nakagawa--Sekiya--Hayashi (NSH) equilibrium drift velocities \citep{Nakagawa1986}.
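For reference, the NSH drift velocities (relative to the Keplerian flow, in units of $\eta v_{\rm K}$, with $\epsilon = \rho_{\rm p}/\rho_{\rm g}$) admit the closed form sketched below. Sign conventions vary in the literature, so treat this snippet as illustrative rather than as the exact expressions used in the code.

```python
# Illustrative sketch of the NSH equilibrium drift velocities used to
# initialize gas and particles. Velocities are relative to the
# Keplerian flow, in units of eta*v_K; eps = rho_p/rho_g, tau = tau_s.

def nsh(eps, tau):
    D = (1.0 + eps)**2 + tau**2
    vpx = -2.0 * tau / D              # particle radial drift
    vpy = -(1.0 + eps) / D            # particle azimuthal drift
    ugx = 2.0 * eps * tau / D         # gas radial (back-reaction)
    ugy = -(1.0 + eps + tau**2) / D   # gas azimuthal
    return vpx, vpy, ugx, ugy

# sanity checks in the test-particle limit eps -> 0:
vpx, vpy, ugx, ugy = nsh(0.0, 0.3)
assert abs(vpx + 2 * 0.3 / (1 + 0.09)) < 1e-12   # classic -2*tau/(1+tau^2)
assert abs(ugy + 1.0) < 1e-12                    # gas sub-Keplerian by eta*v_K

# radial momentum balance: rho_g*ugx + rho_p*vpx = 0
vpx, vpy, ugx, ugy = nsh(0.7, 0.3)
assert abs(ugx + 0.7 * vpx) < 1e-12
```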
In this work, we fix $\Pi = 0.05$ and $\tilde{G} = 0.05$ ($Q\simeq 32$), which are typical values in protoplanetary disks. Table \ref{tab:setup} lists the other physical and numerical parameters for both of our simulations. Run I has $(\uptau_{\rm s}, Z) = (2.0, 0.1)$ and a higher resolution in a smaller domain (one grid cell $\Delta x$ is $H/5120$ wide, the highest resolution to date). Run II has $(\uptau_{\rm s}, Z) = (0.3, 0.02)$ and a lower resolution in a larger domain. The data of Run II are taken directly from \citet{Simon2017}, and our Run I is a higher-resolution version of another simulation in \citet{Simon2017}. The relation between particle size and $\uptau_{\rm s}$ depends on uncertain properties of the particles and the gas disk; the range of $\uptau_{\rm s}$ adopted here may correspond to solids of any size from millimeters to decimeters. These physical parameters are chosen because they are known to produce strong particle clumping that triggers gravitational collapse. The resulting bound clumps are the subject of our statistical analyses below.
Following previous studies, we start our simulations first without particle self-gravity. Only after the SI has fully developed and saturated, do we switch on the self-gravity. This approach significantly reduces the computational expense and has little influence on the final properties of planetesimals \citep{Simon2016, Abod2019}. For convenience, we define a self-gravity time
\begin{equation}
t_{\rm sg} = t - t_0,
\end{equation}
where $t$ is the simulation time and $t_0$ denotes the time when the self-gravity is turned on (see the next-to-last column in Table \ref{tab:setup}). Also, we present all planetesimal masses in units of the dimensional mass for a self-gravitating particle disk
\begin{equation}
M_{\rm G} = \pi \left(\frac{\lambda_{\rm G}}{2}\right)^2\Sigma_{\rm p} = 4\pi^5\frac{G^2\Sigma_{\rm p}^3}{\Omega_0^4} = \frac{\sqrt{2}}{2}\pi^\frac{9}{2}Z^3\tilde{G}^2 (\rho_0 H^3),
\end{equation}
where $\lambda_{\rm G}$ is the critical unstable wavelength from the standard Toomre dispersion relation.
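For completeness, the chain of equalities follows from $\lambda_{\rm G} = 4\pi^2 G\Sigma_{\rm p}/\Omega_0^2$, $\Sigma_{\rm g} = \sqrt{2\pi}\rho_0 H$, $\Sigma_{\rm p} = Z\Sigma_{\rm g}$, and $G = \tilde{G}\Omega_0^2/(4\pi\rho_0)$; the algebra can be checked step by step:

```latex
\begin{align*}
M_{\rm G} &= \pi \left(\frac{\lambda_{\rm G}}{2}\right)^{2} \Sigma_{\rm p}
 = \pi \left(\frac{2\pi^{2} G \Sigma_{\rm p}}{\Omega_0^{2}}\right)^{2} \Sigma_{\rm p}
 = 4\pi^{5} \frac{G^{2} \Sigma_{\rm p}^{3}}{\Omega_0^{4}} \\
 &= 4\pi^{5} \left(\frac{\tilde{G}\Omega_0^{2}}{4\pi\rho_0}\right)^{2}
    \frac{\left(Z\sqrt{2\pi}\,\rho_0 H\right)^{3}}{\Omega_0^{4}}
 = \frac{(2\pi)^{3/2}}{4}\,\pi^{3}\, Z^{3}\tilde{G}^{2}\,\rho_0 H^{3}
 = \frac{\sqrt{2}}{2}\,\pi^{\frac{9}{2}}\, Z^{3} \tilde{G}^{2}\, (\rho_0 H^{3}).
\end{align*}
```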
To translate $M_{\rm G}$ into a physical mass unit and planetesimal size, additional assumptions about the disk model are required. For instance, if we assume a fiducial disk radius of $R_0 = 10$ au in the modified minimum mass solar nebula (MMSN) model of \citet[adapted from \citealt{Hayashi1981}, hereafter CY10 model]{Chiang2010}, where $\Sigma_{\rm g}\propto R^{-3/2}$ and the temperature $T\propto R^{-3/7}$, then our $\tilde{G}=0.05$ parameter implies that the gas mass in the CY10 model is about half the original MMSN value. For these parameters, the CY10 model has $\Pi = 0.068$, slightly higher than the $\Pi = 0.05$ in our simulations. Nevertheless, a smaller $\Pi$ value might arise from a weak pressure bump, a common substructure in protoplanetary disks \citep{Pinilla2017, dullemond2018}. Moreover, the $\Pi$ value does not significantly affect the planetesimal mass distribution, as found by \citet{Abod2019}. The mass unit for Run I then equates to $M_{\rm G} = 1.82\times 10^{23}$ g $= 0.19\ M_{\rm Ceres}$. With $1$ g cm$^{-3}$ as the mean density, the physical radius of a $1\,M_{\rm G}$ body is $\simeq 350$ km in this model. For Run II, the same gas disk model and location give $M_{\rm G} = 1.45\times 10^{21}$ g $= 0.0015\ M_{\rm Ceres}$, and a physical radius of $\simeq 70$ km.
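As a quick cross-check of the quoted values: since both runs share the same disk model, location, and $\tilde{G}$ (hence the same $\rho_0 H^3$), their mass units should differ only by the factor $Z^3$.

```python
# Consistency check of the quoted mass units: with rho_0*H^3 and Gtilde
# fixed, M_G scales as Z^3, so Run I (Z = 0.1) and Run II (Z = 0.02)
# should differ by a factor of 5^3 = 125.
M_G_run1 = 1.82e23   # g, quoted above for Run I
M_G_run2 = 1.45e21   # g, quoted above for Run II
ratio = M_G_run1 / M_G_run2
assert abs(ratio / (0.1 / 0.02)**3 - 1.0) < 0.02   # agrees within rounding
```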
Fig. \ref{fig:snapshot} shows a snapshot of the particle surface density from Run I at $t_{\rm sg} = 4/\Omega_0$, by which time solids have already collapsed into self-bound clumps. In the following section, we describe in detail how \texttt{PLAN} finds these clumps.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{CDFcomparison.pdf}
\caption{Cumulative number of bound, planetesimal-forming clumps above a given mass as measured in SI simulations with the same physical parameters as Run I. The effects of both the clump finding algorithm -- PLAN (\textit{blue}) vs. a previous method (\textit{cyan}) -- and simulation resolution -- higher (\textit{orange}) vs. lower (\textit{blue}) -- are shown.
Both the \textit{cyan} and \textit{blue} curves analyze a simulation snapshot from \citet{Simon2017} which has half the resolution and twice the box size as our run I. The \textit{orange} curve analyzes a Run I snapshot, with planetesimal numbers augmented by a factor of 4 to compensate for the smaller surface area.
Comparing \textit{blue} and \textit{cyan}, PLAN finds smaller planetesimals than the previous method, and also gives lower masses for the largest planetesimals, by differentiating vertically overlapping clumps (see text for more details). Comparing \textit{orange} and \textit{blue}, higher numerical resolution extends the mass distribution to lower masses ($<10^{-4} M_{\rm G}$), modifies results at intermediate masses ($<10^{-2} M_{\rm G}$), and agrees with lower-resolution simulations at the high-mass end ($>10^{-2} M_{\rm G}$). \label{fig:CDF}}
\end{figure*}
\subsection{Clump-Finding with PLanetesimal ANalyzer}
\label{subsec:plan}
To identify and further characterize the properties of planetesimals produced in our simulations, we develop a new clump-finding tool, PLanetesimal ANalyzer (\texttt{PLAN}, \citet{PLAN}). It is designed to work with the 3D particle output of \texttt{ATHENA} and find self-bound clumps robustly and efficiently. \texttt{PLAN} is scalable to analyze billions of particles and many snapshots simultaneously because of its massively parallelized scheme written in C\nolinebreak\hspace{-.05em}\raisebox{.4ex}{\tiny\bf +}\nolinebreak\hspace{-.10em}\raisebox{.4ex}{\tiny\bf +} with OpenMP/MPI.
We now briefly present the workflow of \texttt{PLAN}. The approach is based on the dark matter halo finder \texttt{HOP} developed by \citet{EH1998}, which is able to quickly group physically related particles. \texttt{PLAN} first builds a memory-efficient linear Barnes--Hut tree representing all the particles in Morton order \citep{BH1986}. Each particle is then assigned a density computed from the nearest $N_{\rm den}$ particles ($N_{\rm den} = 64$ by default). For particles with densities higher than a threshold, $\delta_{\rm outer} = 8\rho_0/\tilde{G}$, \texttt{PLAN} chains them up towards their densest neighbors recursively until a density peak is reached. The value of $\delta_{\rm outer}$ is physically motivated to be slightly smaller than the Roche density ($9\rho_0/\tilde{G}$) such that \texttt{PLAN} can quickly find the most relevant particles. All the particle chains leading to the same density peak are combined into a group.
\texttt{PLAN} then merges those groups by examining their boundaries to construct a list of bound clumps. Based on the total kinetic and gravitational energies, deeply intersecting groups are merged if bound. However, two particle groups with a saddle point less dense than $\delta_{\rm saddle}=2.5\delta_{\rm outer}$ remain separate \citep{EH1998}. Next, \texttt{PLAN} goes through each group---or raw clump---to remove any contaminating particles (i.e., passing-by particles that are not bound) and to gather possibly unidentified member particles within its Hill sphere. After discarding any clumps with Hill radii ($R_{\rm Hill}$) smaller than one hydrodynamic grid cell ($\Delta x$) or with density peaks below $\delta_{\rm peak}=3\delta_{\rm outer}$, \texttt{PLAN} outputs the final list of clumps with their physical properties derived from the particles.
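A highly simplified, hypothetical sketch of the chaining step follows: brute-force neighbor search stands in for \texttt{PLAN}'s tree, a crude $k$th-neighbor density for the proper estimate, and the merging, unbinding, and threshold checks are omitted entirely.

```python
# Toy HOP-style chaining: each particle gets a density estimate from
# its k nearest neighbors, then hops to its densest neighbor until a
# local density peak is reached; particles sharing a peak form a group.
import numpy as np

def hop_groups(pos, k=8):
    """pos: (N, d) positions; return the density-peak index per particle."""
    n = len(pos)
    d2 = ((pos[:, None, :] - pos[None, :, :])**2).sum(axis=-1)  # brute force
    nbr = np.argsort(d2, axis=1)[:, :k + 1]                     # self + k nearest
    dens = 1.0 / (d2[np.arange(n), nbr[:, -1]] + 1e-12)         # crude kNN density
    densest = nbr[np.arange(n), dens[nbr].argmax(axis=1)]       # densest neighbor
    peak = np.arange(n)
    for _ in range(n):                  # follow chains to their fixed points
        nxt = densest[peak]
        if np.array_equal(nxt, peak):
            break
        peak = nxt
    return peak

rng = np.random.default_rng(1)
blob1 = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
blob2 = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
groups = hop_groups(np.vstack([blob1, blob2]))
# two well-separated blobs never share neighbors, hence never share peaks
assert set(groups[:50]).isdisjoint(groups[50:])
```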
Most clumps in our high-resolution simulations are highly-concentrated, where particles often collapse into regions much smaller than the Hill radius (see Fig. \ref{fig:snapshot}). For small clumps, these regions are comparable to one cell size. The particle-mesh method of calculating self-gravity does not resolve scales below $\Delta x$, which is a primary motivation for our high-resolution simulation and the reason why \texttt{PLAN} compares $R_{\rm Hill}$ to $\Delta x$. While this work was in progress, \texttt{PLAN} was already used in the analyses of \citet{Abod2019}, and \citet{Nesvorny2019}.
Our former clump-finding tool analyzed the surface density of solids in the ($x,y$) cells of the hydrodynamic grid. This technique identifies prominent clumps and calculates the clump masses as the encapsulated column mass within their projected Hill radii. Such a treatment is limited by the grid resolution and has difficulty detecting small planetesimals, especially when a massive clump is nearby. Consequently, this method tends to overestimate the clump mass by including the mass of surrounding small planetesimals as well as other solids that are vertically far away. \texttt{PLAN} overcomes those issues by diagnosing the particle data instead, as described above.
Fig. \ref{fig:CDF} shows that the \texttt{PLAN} results agree with our previous results at large masses, though the previous analyses slightly overestimated masses, as explained above. The significant advantage of the \texttt{PLAN} analysis is that we identify gravitationally bound clumps at much lower masses than before, which improves our ability to statistically characterize the resulting mass distributions.
\section{Statistical Modeling of the Mass Distribution}
\label{sec:fitting}
This section details our statistical methodology for analyzing the planetesimal mass distribution. We present a maximum likelihood estimator (MLE) for estimating the parameters of a given model, and the uncertainty on those parameters in Section \ref{subsec:MLE}. Section \ref{subsec:models} lists the models that we fit, which vary in complexity from 2 to 5 parameters. Finally, Section \ref{subsec:select} describes our model selection criteria, and how they apply a penalty to more complex models.
\subsection{Maximum Likelihood Parameter Estimation}
\label{subsec:MLE}
We assume that the masses, $M$, of planetesimals (strictly, protoplanetesimal clumps in the simulations) are drawn from a probability density function (PDF), $\xi(M)$, parameterized by a vector $\bm{\theta}$, where $\xi(M; \bm{\theta})dM$ represents the probability that a given clump forms in the mass interval $M$ to $M+dM$.
In practice, it is easier to work with a logarithmic mass coordinate, $x = \ln{(M/M_{\rm min})}$, referenced to the minimum mass of the distribution, $M_{\rm min}$. Under this transformation, the PDF takes a different functional form, written as
\begin{equation}
p(x; \bm{\theta}) = \frac{1}{N_{\rm tot}} \fracd{N(x; \bm{\theta})}{x} = C(\bm{\theta}) g(x; \bm{\theta}),
\end{equation}
where the first equality simply relates the PDF to the total number of bodies $N_{\rm tot}$ and the number in a given logarithmic mass interval, $dN$. In the second equality, the normalization factor $C(\bm{\theta})$ is introduced for later convenience (see also \citealt{Youdin2011b}).
Accordingly, the cumulative distribution function (CDF) is
\begin{equation}
P_>(x; \bm{\theta}) = \int\limits_x^{+\infty} p(x'; \bm{\theta}) dx' = C(\bm{\theta}) \int\limits_x^{+\infty} g(x'; \bm{\theta}) dx',
\end{equation}
which denotes the expected fraction of clumps with masses larger than $M (= e^{x}M_{\rm min})$ in the distribution\footnote{Note the minus sign when relating the PDF to the CDF: $\displaystyle{p(x;\bm{\theta}) = -\fracd{}{x}P_>(x;\bm{\theta})}$.}. The normalization of the CDF, $P_>(x; \bm{\theta})|_{x = 0} = 1$, gives
\begin{equation}
C(\bm{\theta}) = \frac{1}{\int\limits_0^{+\infty} g(x; \bm{\theta}) dx},
\end{equation}
which requires that $g(x; \bm{\theta})$ be integrable over $[0, +\infty)$, i.e.\ that it decay sufficiently fast as $x\to+\infty$.
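When the integral of $g$ lacks a closed form, the normalization $C(\bm{\theta})$ can be evaluated numerically. The sketch below is our own illustration (not code from this work), using a hypothetical tapered $g$ written in the log-mass coordinate:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical tapered g in x = ln(M / M_min), with m_exp = M_exp / M_min:
#   g(x) = exp(-alpha * x) * exp(-e^x / m_exp)
def g(x, alpha, m_exp):
    return np.exp(-alpha * x) * np.exp(-np.exp(x) / m_exp)

X_MAX = 20.0  # g is utterly negligible well before x = 20 for these parameters

def normalization(alpha, m_exp):
    # C(theta) = 1 / int_0^inf g(x) dx, which enforces P_>(x = 0) = 1.
    integral, _ = quad(g, 0.0, X_MAX, args=(alpha, m_exp))
    return 1.0 / integral

C = normalization(alpha=0.2, m_exp=50.0)
check, _ = quad(g, 0.0, X_MAX, args=(0.2, 50.0))  # C * check equals 1
```

With the taper removed ($m_{\rm exp}\to\infty$) and $\alpha = 1$, $g$ reduces to $e^{-x}$ and the normalization is 1, which provides a quick sanity check of the quadrature.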
To fit a model to the data, i.e.\ the simulated mass distribution of planetesimals, we consider the \emph{likelihood} of the data given the model
\begin{equation}
\mathcal{L}(\bm{x}|\bm{\theta}) \equiv \prod\limits_{i=1}^{N_{\rm tot}} p(x_i; \bm{\theta}) = \prod\limits_{i=1}^{N_{\rm tot}} C(\bm{\theta}) g(x_i; \bm{\theta}).
\end{equation}
It is usually easier to consider the log-likelihood
\begin{equation}
\ln\mathcal{L}(\bm{x}|\bm{\theta}) = \sum\limits_{i=1}^{N_{\rm tot}} \ln p(x_i; \bm{\theta}) = N_{\rm tot} \ln{C(\bm{\theta})} + \sum\limits_{i=1}^{N_{\rm tot}} \ln g(x_i; \bm{\theta}).
\end{equation}
The maximum likelihood estimator (MLE) of $\bm{\theta}$ estimates the best-fit parameters, $\bm{\theta}_{\rm MLE}$, by maximizing the log-likelihood (within certain physical bounds if necessary). For some simple log-likelihood functions, $\bm{\theta}_{\rm MLE}$ can be solved
as the root(s) of
\begin{equation}\label{eq:ddL}
\left\{\begin{aligned}
\fracp{\ln\mathcal{L}(\bm{x}|\bm{\theta})}{\theta_j} &= 0, \\
\fracpp{\ln\mathcal{L}(\bm{x}|\bm{\theta})}{\theta_j} &< 0,
\end{aligned}\right.
\end{equation}
where $\theta_j$ means the $j$-th parameter in $\bm{\theta}$. However, constraints on the allowed values of parameters sometimes confound traditional root-finding methods.
In this work, we apply numerical techniques to maximize the likelihood of the trial PDF (see Section \ref{subsec:models} for our choices). In practice, we first use the python package \texttt{emcee} to explore the parameter space with a Markov-chain Monte Carlo (MCMC) approach \citep{Foreman-Mackey2013} to obtain an initial guess of parameters, $\bm{\theta}_{\rm MCMC}$. We then use the \texttt{minimize} method\footnote{A complicated PDF may lead to a non-convex or non-smooth log-likelihood function, which is known to be difficult to minimize. We always test different algorithms (e.g., ``Powell'', ``Newton-CG'', ``L-BFGS-B'', etc.) provided by \texttt{minimize} and run a set of optimizations with initial guesses selected in a mesh grid centered on $\bm{\theta}_{\rm MCMC}$. We then take the solution leading to the lowest $-\ln\mathcal{L}$.} in the \texttt{scipy.optimize} package \citep{Numpy} to find the most likely $\bm{\theta}$ that minimizes $-\ln\mathcal{L}(\bm{x}|\bm{\theta})$.
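As a minimal, self-contained sketch of this procedure (not our production pipeline), consider fitting a pure power law in mass, which is an exponential PDF in $x$; the parameter values and the use of the Nelder-Mead algorithm are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: MLE for a pure power law in mass, i.e. an exponential
# PDF in x = ln(M / M_min): p(x; alpha) = alpha * exp(-alpha * x), x >= 0.
def neg_log_likelihood(theta, x):
    alpha = theta[0]
    if alpha <= 0.0:
        return np.inf  # enforce the physical bound alpha > 0
    return -(len(x) * np.log(alpha) - alpha * x.sum())

rng = np.random.default_rng(42)
x = rng.exponential(scale=1.0 / 0.6, size=2000)  # synthetic data, alpha = 0.6

res = minimize(neg_log_likelihood, x0=[1.0], args=(x,), method="Nelder-Mead")
alpha_mle = res.x[0]
# For this simple PDF the MLE has the closed form alpha = 1 / mean(x),
# which the numerical optimum should reproduce.
```

Returning $+\infty$ outside the allowed parameter range is a simple way to impose physical bounds on an unconstrained optimizer.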
To quantify the uncertainties of the best-fit parameters, $\bm{\theta}_{\rm MLE}$, we adopt the \textit{nonparametric bootstrap method} \citep{Efron1994, Burnham2002}. By repeatedly taking a random sample of size $N_{\rm tot}$ \emph{with replacement} from the actual mass data, we first generate $N_{\rm bs}$ independent bootstrap samples. They serve as a proxy for a set of $N_{\rm bs}$ independent real samples from the same mass distribution, because taking extra data (from additional simulations) is too costly. The MLE is then employed to fit the model PDF to each bootstrap sample to obtain the best-fit parameters, $\bm{\theta}_{\mathrm{bs}, k}$ ($k=1, \cdots, N_{\rm bs}$). Parameter uncertainties expected from real samples are estimated by calculating the distance between $\bm{\theta}_{\rm MLE}$ and the 84th and 16th percentiles of the distribution of $\bm{\theta}_{\mathrm{bs}, k}$, i.e. $\bm{\theta}_{\mathrm{bs}, k}^{84\%}$ and $\bm{\theta}_{\mathrm{bs}, k}^{16\%}$, as
\begin{equation}\label{eq:BMS_sigma}
\begin{aligned}
\Delta \bm{\theta}_{\rm bs}^+ &= \bm{\theta}_{\mathrm{bs}, k}^{84\%} - \bm{\theta}_{\rm MLE}, \\
\Delta \bm{\theta}_{\rm bs}^- &= \bm{\theta}_{\rm MLE} - \bm{\theta}_{\mathrm{bs}, k}^{16\%}.
\end{aligned}
\end{equation}
\citet{Efron1994} have shown that this bootstrap method works reasonably well if $N_{\rm bs}$ is large (e.g., $>$1000). In this work, we fix $N_{\rm bs} = 10000$. Appendix \ref{appsec:eg_fit} gives a model fitting example in detail.
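A minimal sketch of this bootstrap procedure, assuming a toy estimator with a closed-form MLE in place of our full fitting machinery:

```python
import numpy as np

# Sketch of the nonparametric bootstrap for parameter uncertainties.
# `estimator` stands in for the full MLE; here we use the closed-form
# MLE of an exponential rate, alpha = 1 / mean(x), as a toy example.
def bootstrap_uncertainty(x, estimator, n_bs=10000, seed=0):
    rng = np.random.default_rng(seed)
    theta_mle = estimator(x)
    # Resample the data with replacement n_bs times and refit each sample.
    theta_bs = np.array(
        [estimator(rng.choice(x, size=len(x), replace=True)) for _ in range(n_bs)]
    )
    # Asymmetric uncertainties from the 84th/16th percentiles.
    dplus = np.percentile(theta_bs, 84) - theta_mle
    dminus = theta_mle - np.percentile(theta_bs, 16)
    return theta_mle, dplus, dminus

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1000)  # toy data with true alpha = 0.5
alpha_hat, dp, dm = bootstrap_uncertainty(x, lambda s: 1.0 / s.mean(), n_bs=2000)
```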
Our maximum likelihood estimator is similar to that used in \citet{Simon2016, Simon2017, Abod2019}, but we use bootstrapping to estimate parameter uncertainties. A different method of parameter estimation is used by \citet{Johansen2015, Schafer2017}, among others, who applied curve-fitting routines to the CDF. While maximizing the likelihood function seems more fundamental to us, we make no claim that our method is actually superior. We conducted tests to recover the slope of a single power law distribution from randomly generated data using these two methods. These tests yield non-identical results, which confirms that the methods are not equivalent, but offers no evidence to clearly favor either method. A more rigorous investigation of this statistical issue could be warranted.
\subsection{Statistical Models}
\label{subsec:models}
We now describe the seven statistical models that are used to fit the mass distribution of planetesimals. To focus on the shapes of these models, this section only gives their basic functional forms. All the normalization coefficients and the full functional expressions are put in the Appendix \ref{appsec:model_coeff}. For convenience, we define $K$ as the number of parameters in a model.
The first three models below are presented as CDFs. We simply convert them to their corresponding PDFs to apply our maximum likelihood estimator. However, it is natural to expect, if the planetesimal masses arise probabilistically, that a continuous PDF would be a more physical description. The reason to consider the CDFs is that some appeared previously in the literature (the first two models) and one of our runs shows visual evidence for a kink in the CDF (the third model). Because this kink gives a discontinuity in the PDF, it is arguably unphysical, but in this work we only examine statistical robustness, as no comprehensive physical theory for the distribution of masses exists.
\begin{enumerate}
\item \textit{Simply Tapered Power Law} The concave downward profile of the CDFs of clump masses (see Fig. \ref{fig:CDF}) suggests a power law distribution with exponential tapering \citep{Abod2019}
\begin{equation}
P_>(M; \bm{\theta})=c_1 M^{-\alpha}\ \exp\left(-\frac{M}{M_{\rm exp}}\right),
\end{equation}
where $M_{\rm exp}$ is the characteristic mass scale and $c_1$ is the normalization coefficient (see Appendix \ref{appsec:model_coeff}; the same applies to the coefficients below). This model has two free parameters ($K=2$) and $\bm{\theta}=(\alpha, M_{\rm exp})$, with $M_{\rm min} \leqslant M_{\rm exp} \leqslant M_{\rm max}$ but no constraints on $\alpha$.
\item \textit{Variably Tapered Power Law} This model generalizes the first by freeing the tapering power, adding one more free parameter $\beta$ inside the exponential function \citep{Schafer2017},
\begin{equation}
P_>(M; \bm{\theta})=c_2 M^{-\alpha}\ \exp\left[-\left(\frac{M}{M_{\rm exp}}\right)^\beta\right],
\end{equation}
where $\bm{\theta}=(\alpha, \beta, M_{\rm exp})$ and $K=3$. This model requires that at least one of $\alpha$ and $\beta$ is positive, and again $M_{\rm min} \leqslant M_{\rm exp} \leqslant M_{\rm max}$.
\item \textit{Broken Cumulative Power Law} A broken cumulative power law distribution connects two power law segments with different slopes in the \emph{cumulative} distribution, capturing different behaviors in different mass ranges:
\begin{equation}
P_>(M; \bm{\theta}) = \left\{\begin{aligned}
& c_{31} M^{-\alpha_1}\ &M\leqslant M_{\rm br} \\
& c_{32} M^{-\alpha_2}\ &M> M_{\rm br}
\end{aligned}\right.,
\end{equation}
where $M_{\rm br}$ denotes the characteristic mass scale at which the slope breaks. This model has three free parameters ($K=3$) and $\bm{\theta} = (\alpha_1, \alpha_2, M_{\rm br})$. There are no constraints on $\alpha_1$, but $\alpha_2$ must be positive, and $M_{\rm min} \leqslant M_{\rm br} \leqslant M_{\rm max}$.
\item \textit{Truncated Power Law} \citet{Schafer2017} also tested a truncated power law model
\begin{equation}
\xi(M; \bm{\theta}) = \left\{\begin{aligned}
&c_4 M^{-\alpha-1}\ &M\leqslant M_{\rm tr} \\
&0\ &M> M_{\rm tr}
\end{aligned}\right.,
\end{equation}
where an upper bound $M_{\rm tr}$ truncates the PDF (and CDF). In this model, $\bm{\theta} = (\alpha, M_{\rm tr})$, $K=2$, $\alpha > 0$, and $M_{\rm tr} \geqslant M_{\rm max}$. Here the power law exponent in the PDF becomes $-\alpha-1$ because the exponent in the corresponding CDF is $-\alpha$. Furthermore, it is easy to show that the likelihood monotonically decreases with increasing $M_{\rm tr} \geqslant M_{\rm max}$, and hence $-\ln\mathcal{L}(\bm{x}|\bm{\theta})$ is minimized if and only if $M_{\rm tr} = M_{\rm max}$.
\item \textit{Broken Power Law} Another compelling possibility is the broken power law distribution\footnote{Not to be confused with the broken cumulative power law.}. The corresponding PDF consists of two different power law segments, leading to a smooth transition in the CDF near the breaking point
\begin{equation}
\xi(M; \bm{\theta}) = \left\{\begin{aligned}
& c_{51} M^{-\alpha_1-1}\ &M\leqslant M_{\rm br} \\
& c_{52} M^{-\alpha_2-1}\ &M> M_{\rm br}
\end{aligned}\right. .
\end{equation}
This model has three free parameters ($K=3$), $\bm{\theta} = (\alpha_1, \alpha_2, M_{\rm br})$, where $\alpha_1$ has no limits, $\alpha_2 > 0$, and $M_{\rm min} \leqslant M_{\rm br} \leqslant M_{\rm max}$. When $\alpha_2\to+\infty$, this model reverts to the Truncated Power Law model.
\item \textit{Truncated Broken Power Law} This model extends the previous one by adding a truncation to the PDF
\begin{equation}
\xi(M; \bm{\theta}) = \left\{\begin{aligned}
& c_{61} M^{-\alpha_1-1}\ &M\leqslant M_{\rm br} \\
& c_{62} M^{-\alpha_2-1}\ &M_{\rm br} < M\leqslant M_{\rm tr} \\
& 0\ &M> M_{\rm tr}
\end{aligned}\right. .
\end{equation}
This model has four free parameters ($K=4$) and $\bm{\theta} = (\alpha_1, \alpha_2, M_{\rm br}, M_{\rm tr})$, where $\alpha_1$ and $\alpha_2$ have no limits, and $M_{\rm min} \leqslant M_{\rm br} \leqslant M_{\rm max} \leqslant M_{\rm tr}$. Similar to the Truncated Power Law model, the likelihood monotonically decreases with increasing $M_{\rm tr}$, and $-\ln\mathcal{L}(\bm{x}|\bm{\theta})$ is minimized when $M_{\rm tr} = M_{\rm max}$.
\item \textit{Three-segment Power Law} We take a step further to consider another broken power law distribution but with three segments in the PDF,
\begin{equation}
\xi(M; \bm{\theta}) = \left\{\begin{aligned}
& c_{71} M^{-\alpha_1-1}\ &M\leqslant M_{\rm br1} \\
& c_{72} M^{-\alpha_2-1}\ &M_{\rm br1} < M\leqslant M_{\rm br2} \\
& c_{73} M^{-\alpha_3-1}\ &M>M_{\rm br2}
\end{aligned}\right. .
\end{equation}
This model has five free parameters ($K=5$) and $\bm{\theta} = (\alpha_1, \alpha_2, \alpha_3, M_{\rm br1}, M_{\rm br2})$. Both $\alpha_1$ and $\alpha_2$ have no boundaries, but $\alpha_3 > 0$ and $M_{\rm min} \leqslant M_{\rm br1} \leqslant M_{\rm br2} \leqslant M_{\rm max}$. When $\alpha_3\to+\infty$, this model reverts to the Truncated Broken Power Law model.
\end{enumerate}
We choose these seven statistical models as candidates since they have been previously used to fit the planetesimal mass function or are commonly applied to fit top-heavy mass distributions. Other models are certainly possible, but are beyond the scope of this paper. Note that all the models above are transformed into PDFs of $x$ (see Table \ref{tab:models}) before being used in our MLE.
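To make one of these concrete, the Simply Tapered Power Law CDF can be coded directly. The sketch below is our own illustration; the normalization shown (setting $P_>(M_{\rm min})=1$) is one consistent choice rather than a coefficient quoted from Appendix \ref{appsec:model_coeff}:

```python
import numpy as np

# Sketch of the Simply Tapered Power Law CDF, normalized so that
# P_>(M_min) = 1; with that choice c1 = M_min**alpha * exp(M_min / M_exp).
def tapered_cdf(M, alpha, M_exp, M_min=1.0):
    M = np.asarray(M, dtype=float)
    c1 = M_min**alpha * np.exp(M_min / M_exp)
    return c1 * M**(-alpha) * np.exp(-M / M_exp)

# The CDF equals 1 at M = M_min and decays faster than a pure power law
# once M approaches the characteristic mass M_exp.
```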
\subsection{Model Selection Criteria}
\label{subsec:select}
Our next goal is to select the statistical models that best represent the simulation data. Models with more parameters (larger $K$) have the flexibility to provide closer fits to the data, i.e.\ higher likelihood values. Often, a well-chosen function with fewer parameters can provide a better fit than a different function with more parameters. The much larger concern is the opposite case, in which a more complex model does not better represent reality but merely overfits statistical fluctuations in the data.
For the problem of planetesimal formation by the streaming instability, this statistical concern is relevant. The high mass tail of the planetesimal distribution is very significant, but with low numbers of high mass clumps in any simulation, the risk of statistical fluctuations impacting model fitting is potentially high.
To address these issues, we first review two of the most commonly-used model selection criteria and then introduce a selection criterion that we develop independently, motivated by the nonparametric bootstrap method.
\begin{deluxetable}{c|c}
\tabletypesize{\normalsize}
\tablecaption{Interpretation Guidelines for BIC \label{tab:BIC_guide}}
\tablecolumns{2}
\tablehead{
\colhead{$\Delta_{\rm BIC}$} &
\colhead{Evidence against the preferred model}
}
\startdata
\hline\hline
$0-2$ & Not worth more than a bare mention \\
$2-6$ & Positive \\
$6-10$ & Strong \\
$>10$ & Very Strong
\enddata
\end{deluxetable}
\begin{deluxetable}{c|c}
\tabletypesize{\normalsize}
\tablecaption{Interpretation Guidelines for AIC \label{tab:AIC_guide}}
\tablecolumns{2}
\tablehead{
\colhead{$\Delta_{\rm AIC}$} &
\colhead{Level of empirical support for a model}
}
\startdata
\hline\hline
$0-2$ & Substantial \\
$2-4$ & Strong \\
$4-7$ & Considerably less \\
$>10$ & Essentially none
\enddata
\end{deluxetable}
\subsubsection{Information Criteria}
\label{subsubsec:aicbic}
The most commonly used model selection criteria are (i) the Bayesian Information Criterion (BIC) \citep{Kass1995}
\begin{equation}\label{eq:bic}
\text{BIC} = K\ln(N_{\rm tot}) - 2 \ln\mathcal{L},
\end{equation}
and (ii) the Akaike Information Criterion (AIC) \citep{Akaike1974}
\begin{equation}\label{eq:aic}
\text{AIC} = 2 K - 2 \ln\mathcal{L}.
\end{equation}
Both the BIC and the AIC involve the quantity $-2\ln{\mathcal{L}}$, which is affected by arbitrary constants and by the sample size. Thus, the individual BIC/AIC values are not meaningful by themselves, and the relative differences between models
\begin{equation}
\begin{aligned}
\Delta_{\rm BIC} &= \text{BIC} - \text{BIC}_{\rm min}, \\
\Delta_{\rm AIC} &= \text{AIC} - \text{AIC}_{\rm min}
\end{aligned}
\end{equation}
are more important, where BIC$_{\rm min}$/AIC$_{\rm min}$ is the minimum of the BIC/AIC values of all the model candidates. In this way, the preferred model naturally has $\Delta_{\rm BIC} = 0$ and other models have positive $\Delta_{\rm BIC}$'s (similar for AIC). To interpret $\Delta_{\rm BIC}$ and $\Delta_{\rm AIC}$ quantitatively in model selection, we follow the conventional categorical guidelines in Tables \ref{tab:BIC_guide} and \ref{tab:AIC_guide}.
Formally, the value of $\Delta_{\rm BIC}$ represents the complexity-corrected likelihood ratio in a natural logarithmic scale, or the evidence provided by the data in favor of the preferred statistical model over another model \citep{Kass1995}. The value of $\Delta_{\rm AIC}$ measures the Kullback–Leibler distance, or the information lost when a less preferred model is used to approximate the true distribution \citep{Burnham2002}. For further discussions on the differences between the BIC and the AIC, we refer the reader to \citet{Burnham2002} and \citet{Burnham2004}.
These two criteria put different weights on the penalty for the number of parameters, $K$, which becomes quite significant for large $N_{\rm tot}$ and which can lead to different results in model selection. It is difficult (for us) to determine which information criterion is more appropriate, or indeed whether either is reliable. More complex and computationally expensive methods exist that assess the complexity penalty based not simply on the number of free parameters and/or data points, but on the actual geometry of the model space \citep[e.g., Fisher Information Approximation,][]{Ly2017}. However, these methods are beyond the scope of this work. Instead, we describe an alternative model selection method below, which we compare to the conventional AIC/BIC methods.
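For reference, computing the two criteria and their deltas is straightforward; the log-likelihoods and parameter counts below are hypothetical placeholders, not values from our tables:

```python
import numpy as np

# Compute BIC, AIC, and their deltas for a set of fitted models, given
# each model's maximized log-likelihood lnL and parameter count K.
def information_criteria(lnL, K, n_tot):
    lnL, K = np.asarray(lnL, float), np.asarray(K, float)
    bic = K * np.log(n_tot) - 2.0 * lnL
    aic = 2.0 * K - 2.0 * lnL
    return bic - bic.min(), aic - aic.min()

# Hypothetical example: three models fit to N_tot = 1000 clumps.
d_bic, d_aic = information_criteria(lnL=[-660.4, -633.7, -628.2],
                                    K=[2, 3, 4], n_tot=1000)
# The preferred model under each criterion has a delta of exactly 0.
```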
\subsubsection{Bootstrap Model Selection}
\label{subsubsec:bms}
Motivated by concerns about the applicability of standard model selection techniques (BIC and AIC, discussed above), we consider an alternative method where the complexity penalty is not given as a fixed, simple function of the number of parameters but instead is generated automatically by bootstrapping.
Inspired by the nonparametric bootstrap method for uncertainty estimation (described in Section \ref{subsec:MLE}), we again consider the bootstrap samples to be a good proxy for mass distributions from $N_{\rm bs}$ independent simulations, which in reality would be too computationally expensive to conduct. Through such a proxy, the median likelihood of all the bootstrap samples given the best-fit parameters can be used as a model selection criterion
\begin{equation}\label{eq:bms}
\text{BMS} = -2\times \text{median}\left(\ln\mathcal{L}(\bm{x}_k|\bm{\theta}_{\rm MLE})\right),
\end{equation}
where BMS stands for Bootstrap Model Selection, $\bm{x}_k$ is the $k$-th bootstrap sample, and the factor of $2$ is chosen for similarity to AIC/BIC. This empirical criterion represents to what extent the best-fit parameters can explain/describe other samples drawn from the same mass distribution. Also, it naturally penalizes more complex models that tend to overfit data because they yield poorer fits to those bootstrap samples that deviate farther from the original data. In the following work, we therefore also consider
\begin{equation}
\Delta_{\rm BMS} = \text{BMS} - \text{BMS}_{\rm min},
\end{equation}
as one of our model selection metrics, and we follow categorical guidelines similar to those in Tables \ref{tab:BIC_guide} and \ref{tab:AIC_guide}. The comparison between the BMS and the commonly-used BIC/AIC is discussed in the following sections.
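A compact sketch of the BMS computation, with a stand-in log-likelihood (the exponential toy model; function names are illustrative, not from our pipeline):

```python
import numpy as np

# Sketch of the bootstrap model selection (BMS) score: -2 times the
# median, over bootstrap samples, of ln L evaluated at the best fit.
def bms_score(x, theta_mle, log_likelihood, n_bs=2000, seed=0):
    rng = np.random.default_rng(seed)
    lnL_bs = [
        log_likelihood(rng.choice(x, size=len(x), replace=True), theta_mle)
        for _ in range(n_bs)
    ]
    return -2.0 * np.median(lnL_bs)

# Toy log-likelihood for an exponential PDF p(x) = alpha * exp(-alpha * x).
def exp_lnL(x, alpha):
    return len(x) * np.log(alpha) - alpha * x.sum()

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=500)
score = bms_score(x, 1.0 / x.mean(), exp_lnL, n_bs=500)
```

Because the best-fit parameters are held fixed while the data are resampled, an overfitted model loses likelihood on bootstrap samples that deviate from the original data, which is the penalty mechanism described above.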
\begin{deluxetable*}{c|c|c|c|c|c|c}
\tablecaption{Model Fitting Results for Run I ($t_{\rm sg} = 7.5/\Omega$) \label{tab:fit_A}}
\tablecolumns{7}
\tablehead{
\colhead{Models} &
\colhead{Best-fit Parameters} &
\colhead{Mass Scales [$M_G$]} &
\colhead{$-\ln{\mathcal{L}}$} &
\colhead{$\Delta_{\rm BMS}$} &
\colhead{$\Delta_{\rm BIC}$} &
\colhead{$\Delta_{\rm AIC}$}
}
\startdata
\hline\hline
\begin{minipage}[c][1.2cm][c]{0.275\textwidth}
\centering Simply Tapered Power Law \\ K=2, $\bm{\theta}=(\alpha, x_{\rm exp})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha = 0.208^{+0.011}_{-0.014}$ \\
$x_{\rm exp} = 8.905^{+0.323}_{-0.464}$
\end{tabular} &
$M_{\rm exp} = 0.1385^{+0.0529}_{-0.0515}$ & 660.443 &
63.7 & 53.1 & 60.4 \\
\hline
\begin{minipage}[c][1.7cm][c]{0.275\textwidth}
\centering Variably Tapered Power Law \\ K=3, $\bm{\theta}=(\alpha, \beta, x_{\rm exp})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha = 0.036^{+0.041}_{-0.041}$ \\
$\beta = 0.298^{+0.061}_{-0.040}$ \\
$x_{\rm exp} = 4.734^{+1.022}_{-1.128}$
\end{tabular} &
$M_{\rm exp} = 0.0021^{+0.0038}_{-0.0014}$ & 633.695 &
10.6 & 5.3 & 8.9 \\
\hline
\begin{minipage}[c][1.7cm][c]{0.275\textwidth}
\centering Broken Cumulative Power Law \\ K=3, $\bm{\theta}=(\alpha_1, \alpha_2, x_{\rm br})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha_1 = 0.162^{+0.009}_{-0.019}$ \\
$\alpha_2 = 0.589^{+0.042}_{-0.071}$ \\
$x_{\rm br} = 4.550^{+0.003}_{-0.607}$
\end{tabular} &
$M_{\rm br} = 0.0018^{+0.0000}_{-0.0008}$ & 631.683 &
10.1 & 1.2 & 4.9 \\
\hline\hline
\begin{minipage}[c][1.2cm][c]{0.275\textwidth}
\centering Truncated Power Law \\ K=2, $\bm{\theta}=(\alpha, x_{\rm tr})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha = 0.140^{+0.012}_{-0.029}$ \\
$x_{\rm tr} = 10.880^{+0.000}_{-0.000}$
\end{tabular} &
$M_{\rm tr} = 0.9981^{+0.0000}_{-0.0000}$ & 651.846 &
47.5 & 35.9 & 43.2 \\
\hline
\begin{minipage}[c][1.7cm][c]{0.275\textwidth}
\centering Broken Power Law \\ K=3, $\bm{\theta}=(\alpha_1, \alpha_2, x_{\rm br})$
\end{minipage} &
\begin{tabular}[c]{l}
$\mathbf{\alpha_1 = -0.079^{+0.043}_{-0.049}}$ \\
$\alpha_2 = 0.628^{+0.078}_{-0.055}$ \\
$x_{\rm br} = 5.620^{+0.329}_{-0.285}$
\end{tabular} &
$M_{\rm br} = 0.0052^{+0.0020}_{-0.0013}$ & 631.946 &
7.2 & 1.8 & 5.4 \\
\hline
\begin{minipage}[c][2.7cm][c]{0.275\textwidth}
\centering Truncated Broken Power Law \\ K=4, $\bm{\theta}=(\alpha_1, \alpha_2, x_{\rm br}, x_{\rm tr})$
\end{minipage} &
\begin{tabular}[c]{l}
$\mathbf{\alpha_1 = -0.102^{+0.058}_{-0.043}}$ \\
$\alpha_2 = 0.468^{+0.082}_{-0.070}$ \\
$x_{\rm br} = 5.066^{+0.519}_{-0.144}$ \\
$x_{\rm tr} = 10.880^{+0.000}_{-0.000}$
\end{tabular} &
\begin{tabular}[c]{l}
$M_{\rm br} = 0.0030^{+0.0020}_{-0.0004}$\\
$M_{\rm tr} = 0.9981^{+0.0000}_{-0.0000}$
\end{tabular} & 628.241 &
\textbf{0.0} & \textbf{0.0} & \textbf{0.0} \\
\hline
\begin{minipage}[c][2.7cm][c]{0.275\textwidth}
\centering Three-segment Power Law \\ K=5, $\bm{\theta}=(\alpha_1, \alpha_2, \alpha_3, x_{\rm br1}, x_{\rm br2})$
\end{minipage} &
\begin{tabular}[c]{l}
$\mathbf{\alpha_1 = -0.102^{+0.057}_{-0.043}}$ \\
$\alpha_2 = 0.468^{+0.081}_{-0.071}$ \\
$\alpha_3 = 5.78\se5^{+6.07\se7}_{-3.9\se4}\tablenotemark{*}$ \\
$x_{\rm br1} = 5.066^{+0.511}_{-0.142}$ \\
$x_{\rm br2} = 10.880^{+0.000}_{-0.632}$
\end{tabular} &
\begin{tabular}[c]{l}
$M_{\rm br1} = 0.0030^{+0.0020}_{-0.0004}$\\
$M_{\rm br2} = 0.9981^{+0.0000}_{-0.4674}$
\end{tabular} & 628.241 &
\textbf{0.0} & 5.6 & 2.0 \\
\enddata
\tablenotetext{*}{The best-fit third slope, $\alpha_3$, of the Three-segment Power Law model is extremely large because this model essentially reverts to the Truncated Broken Power Law model (see also Section \ref{subsec:models} and Fig. \ref{fig:fit_AC}).}
\end{deluxetable*}
\begin{deluxetable*}{c|c|c|c|c|c|c}
\tablecaption{Model Fitting Results for Run II ($t_{\rm sg} = 7.6/\Omega$) \label{tab:fit_C}}
\tablecolumns{7}
\tablehead{
\colhead{Models} &
\colhead{Best-fit Parameters} &
\colhead{Mass Scales [$M_G$]} &
\colhead{$-\ln{\mathcal{L}}$} &
\colhead{$\Delta_{\rm BMS}$} &
\colhead{$\Delta_{\rm BIC}$} &
\colhead{$\Delta_{\rm AIC}$}
}
\startdata
\hline\hline
\begin{minipage}[c][1.2cm][c]{0.275\textwidth}
\centering Simply Tapered Power Law \\ K=2, $\bm{\theta}=(\alpha, x_{\rm exp})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha = 0.388^{+0.030}_{-0.039}$ \\
$x_{\rm exp} = 6.397^{+0.473}_{-0.721}$
\end{tabular} &
$M_{\rm exp} = 11.2531^{+6.8113}_{-5.7787}$ & 311.520 &
16.1 & 10.2 & 13.3 \\
\hline
\begin{minipage}[c][1.7cm][c]{0.275\textwidth}
\centering Variably Tapered Power Law \\ K=3, $\bm{\theta}=(\alpha, \beta, x_{\rm exp})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha = 0.304^{+0.063}_{-0.060}$ \\
$\beta = 0.527^{+0.233}_{-0.088}$ \\
$x_{\rm exp} = 5.178^{+0.743}_{-0.777}$
\end{tabular} &
$M_{\rm exp} = 3.3255^{+3.6627}_{-1.7965}$ & 309.274 &
11.6 & 10.8 & 10.8 \\
\hline
\begin{minipage}[c][1.7cm][c]{0.275\textwidth}
\centering Broken Cumulative Power Law \\ K=3, $\bm{\theta}=(\alpha_1, \alpha_2, x_{\rm br})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha_1 = 0.348^{+0.035}_{-0.033}$ \\
$\alpha_2 = 0.866^{+0.143}_{-0.080}$ \\
$x_{\rm br} = 3.226^{+0.035}_{-0.065}$
\end{tabular} &
$M_{\rm br} = 0.4719^{+0.0169}_{-0.0298}$ & 303.864 &
1.4 & \textbf{0.0} & \textbf{0.0} \\
\hline\hline
\begin{minipage}[c][1.2cm][c]{0.275\textwidth}
\centering Truncated Power Law \\ K=2, $\bm{\theta}=(\alpha, x_{\rm tr})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha = 0.360^{+0.029}_{-0.051}$ \\
$x_{\rm tr} = 7.891^{+0.000}_{-0.000}$
\end{tabular} &
$M_{\rm tr} = 50.126^{+0.000}_{-0.000}$ & 310.769 &
14.9 & 8.7 & 11.8 \\
\hline
\begin{minipage}[c][1.7cm][c]{0.275\textwidth}
\centering Broken Power Law \\ K=3, $\bm{\theta}=(\alpha_1, \alpha_2, x_{\rm br})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha_1 = 0.240^{+0.064}_{-0.068}$ \\
$\alpha_2 = 0.895^{+0.265}_{-0.103}$ \\
$x_{\rm br} = 4.431^{+0.322}_{-0.180}$
\end{tabular} &
$M_{\rm br} = 1.5754^{+0.5992}_{-0.2594}$ & 309.058 &
11.0 & 10.4 & 10.4 \\
\hline
\begin{minipage}[c][2.7cm][c]{0.275\textwidth}
\centering Truncated Broken Power Law \\ K=4, $\bm{\theta}=(\alpha_1, \alpha_2, x_{\rm br}, x_{\rm tr})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha_1 = 0.249^{+0.065}_{-0.077}$ \\
$\alpha_2 = 0.729^{+0.152}_{-0.200}$ \\
$x_{\rm br} = 4.330^{+0.261}_{-0.691}$ \\
$x_{\rm tr} = 7.891^{+0.000}_{-0.000}$
\end{tabular} &
\begin{tabular}[c]{l}
$M_{\rm br} = 1.4241^{+0.4254}_{-0.7103}$\\
$M_{\rm tr} = 50.1262^{+0.0000}_{-0.0000}$
\end{tabular} & 307.718 &
8.4 & 12.9 & 9.7 \\
\hline
\begin{minipage}[c][2.7cm][c]{0.275\textwidth}
\centering Three-segment Power Law \\ K=5, $\bm{\theta}=(\alpha_1, \alpha_2, \alpha_3, x_{\rm br1}, x_{\rm br2})$
\end{minipage} &
\begin{tabular}[c]{l}
$\alpha_1 = 0.771^{+0.291}_{-0.159}$ \\
$\alpha_2 = -0.362^{+0.181}_{-0.303}$ \\
$\alpha_3 = 0.833^{+0.164}_{-0.075}$ \\
$x_{\rm br1} = 1.827^{+0.270}_{-0.314}$ \\
$x_{\rm br2} = 3.501^{+0.198}_{-0.079}$
\end{tabular} &
\begin{tabular}[c]{l}
$M_{\rm br1} = 0.1166^{+0.0362}_{-0.0314}$\\
$M_{\rm br2} = 0.6216^{+0.1359}_{-0.0474}$
\end{tabular} & 303.711 &
\textbf{0.0} & 10.0 & 3.7
\enddata
\end{deluxetable*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{fit_AC_all0.pdf}
\caption{Fitting results for the simulated mass distribution in Run I (\textit{left}) and Run II (\textit{right}) at a similar $t_{\rm sg}$. The resulting model CDFs (dashed curves) are overplotted on the simulation data (grey-shaded curves). Each model is offset by $10$ from bottom to top for better visual comparison (``PL'' stands for ``Power Law''). The dot(s) on each curve denote(s) the model-specific characteristic mass scale(s) as defined in Table \ref{tab:models} and listed in Tables \ref{tab:fit_A} and \ref{tab:fit_C} (no dot at the truncation mass ($M_{\rm tr}$) because the CDF decreases to $0$). We emphasize that these fitting results are obtained by the maximum likelihood estimator described in Section \ref{subsec:MLE}. The values of $\Delta_{\rm BMS}$ are annotated for reference. Models annotated with $\Delta_{\rm BMS/AIC/BIC}=0.0$ are the models preferred by the BMS/AIC/BIC. \label{fig:fit_AC}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{RunI_BarPlots.pdf} \\
\includegraphics[width=\linewidth]{RunII_BarPlots.pdf}
\caption{The number of clumps formed per unit logarithmic mass interval (left) and the total mass of clumps in each interval (right) for Run I (upper) and Run II (lower). The PDFs of the preferred model(s) are overplotted for comparison, with error bars indicating the expected Poisson fluctuations. More specifically, the dashed and dashed-dotted lines represent the analytical curves of the PDFs of the preferred model(s). The points with error bars represent the average values of the PDFs over each mass interval. Thus, the analytical curves do not necessarily go through the center of each point. \label{fig:barplot}}
\end{figure*}
\section{The Initial Planetesimal Mass Distribution}
\label{sec:results}
In this section, we analyze and compare our two high resolution simulations (Run I and II; see Table \ref{tab:setup}), which have different physical parameters $(\uptau_{\rm s}, Z)$. We fit seven statistical models (described in Section \ref{subsec:models} and Appendix \ref{appsec:model_coeff}) to the simulated mass distribution of planetesimals identified by \texttt{PLAN} (described in Section \ref{subsec:plan}). We first present the fitting results and then the models preferred by each selection criterion. Section \ref{subsec:turnover} describes an interesting turnover in the fitted mass distribution of Run I. Finally, Section \ref{subsec:comp2previous} compares our results to recent studies.
\subsection{Maximum Likelihood Fitting Results}
\label{subsec:best-fits}
Tables \ref{tab:fit_A} and \ref{tab:fit_C} list the best-fit likelihood and parameters of all the statistical models for Runs I and II, respectively. The mass distribution used in fitting is extracted at $t_{\rm sg} = 7.5/\Omega_0$ for Run I\footnote{Run I is the same simulation snapshot presented in Fig. 1, panel 2 in \citet{Simon2017}, but re-analyzed here by \texttt{PLAN}.} and at $t_{\rm sg} = 7.6/\Omega_0$ for Run II. By this time hundreds of planetesimals have formed, and the time evolution of the mass distribution has slowed. As shown in the two tables, for each model the best-fit parameters for the two simulations are statistically different and lie outside each other's uncertainty ranges, indicating that the two simulations produce different mass distributions.
Fig. \ref{fig:fit_AC} visualizes all the resulting model CDFs for both of our simulations. Both produce a broad mass distribution spanning more than three orders of magnitude in mass. However, Run I and II cover different mass regimes due to the different choices of physical parameters, and hence disk conditions, and may therefore require differently shaped distribution functions to describe. We emphasize that a physical understanding of these differences is elusive, hence our focus on statistics.
For Run I, we find that the best-fit first-segment power law slopes ($-\alpha_1$) for the last three models in Table \ref{tab:fit_A} become positive, which is of great interest for understanding which planetesimal masses dominate the number counts, and will be discussed in Section \ref{subsec:turnover}. In addition, the third-segment power law slope ($-\alpha_3$) for the Three-segment Power Law model is extremely steep. As mentioned in Section \ref{subsec:models}, when $\alpha_3$ approaches $+\infty$, this model reverts to the Truncated Broken Power Law, with the other four parameters being identical between these two models.
In this paper, we do not further consider the mass distributions in other snapshots and their possible variations with time, which we leave to future work. \citet{Simon2017} used a single power law model to fit the mass distribution and found that the power law index remains relatively constant in time after an initially rapid collapse. \citet{Schafer2017} used the Variably Tapered Power Law model and also found that the power law index, characteristic mass scale, and tapering exponent all remain approximately constant in time over five orbital periods. Based on these results, we do not expect the mass distribution in our simulations to evolve rapidly, i.e.\ on dynamical timescales, after the snapshots.
\subsection{Model Selection}
\label{subsec:model_sel}
Our model selection analyses are based on the criteria described in Section \ref{subsec:select}, i.e.\ the $\Delta_{\rm BMS}$, $\Delta_{\rm BIC}$, and $\Delta_{\rm AIC}$ values presented in Tables \ref{tab:fit_A} and \ref{tab:fit_C}. We find that the more complex models ($K\geqslant3$) are generally much preferred over the two simpler models ($K=2$), regardless of the model selection criterion. In other words, the Simply Tapered Power Law and the Truncated Power Law models are consistently less favored.
In the case of Run I, all the model selection criteria reach a consensus on choosing the $K=4$ Truncated Broken Power Law as the preferred model. However, there is disagreement on the strength of preference, as we now explain. In this specific case, the $K=5$ Three-segment Power Law model reverts to the preferred $K=4$ model,
because the high mass power law is very steep (effectively a truncation), with the other four parameters identical. Since the BMS does not count parameters, it does not distinguish between these identically shaped distributions. However, the AIC and BIC both apply a penalty for the 5th parameter, which is much more significant for the BIC. While all model selection metrics prefer the $K=4$ Truncated Broken Power Law, the BIC method (only) finds that the evidence against a pair of $K=3$ models -- the Broken Cumulative Power Law model ($\Delta_{\rm BIC} = 1.2$) and the Broken Power Law model ($\Delta_{\rm BIC} = 1.8$) -- is not significant.
In the Run II case, both the BIC and the AIC designate the Broken Cumulative Power Law model as the preferred model, which has only a moderate complexity among the model candidates. However, the BMS slightly prefers the Three-segment Power Law model, while also substantially supporting the Broken Cumulative Power Law model. The overall preference for a broken CDF model may not be surprising given that the mass distribution shows a kink in the CDF (see Fig. \ref{fig:fit_AC}), but it is less physically intuitive, since planetesimal masses are expected to arise probabilistically and it is broken PDFs that have been observed in the size-frequency distributions of asteroids and Kuiper Belt Objects (KBOs) \citep[e.g.][]{Morbidelli2009, Fraser2010, Shankman2013, Singer2019}. More work is needed to understand whether our case is special.
Fig. \ref{fig:fit_AC} allows a visual inspection of our model fitting and selection results. Not surprisingly, more complex models generally better fit the mass distributions. The effect of the logarithmic number axis is worth emphasizing. A small deviation from the (logarithmically plotted) CDF at the low mass end is more statistically significant than a larger deviation at the high mass end, where numbers, especially cumulative numbers, are much higher. The advantage of CDF plots is that no binning choices are required. While no data features are lost to binning, a disadvantage is the difficulty in interpreting slopes at the low mass end (where the CDF is near unity).
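The binning-free construction of a cumulative distribution mentioned above is simple to sketch: sorting the masses in descending order directly gives the cumulative number $N(>M)$ at each data point, with no binning choices. The clump masses below are hypothetical.

```python
import numpy as np

def cumulative_number(masses):
    """Return masses sorted descending and N(>=M), the cumulative count.

    The rank of each mass in descending order equals the number of bodies
    at or above that mass, so no binning is needed and no data features
    are lost.
    """
    m_sorted = np.sort(np.asarray(masses))[::-1]
    n_above = np.arange(1, len(m_sorted) + 1)
    return m_sorted, n_above

masses = np.array([0.5, 0.02, 1.3, 0.07, 0.9])  # hypothetical clump masses [M_G]
m, n = cumulative_number(masses)
```

Plotting `n` against `m` on logarithmic axes yields the CDF curves of the kind shown in Fig. \ref{fig:fit_AC}.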
Fig. \ref{fig:barplot} provides an alternate view of binned planetesimal numbers (masses) which are compared to the PDFs of the best fit models. The PDF of the Broken Cumulative Power Law model -- fit to Run II -- has a discontinuous value at the CDF break. As noted earlier, a physical explanation for such a break does not exist. Overall, the selected models are excellent fits to the binned data, especially when accounting for the error bars, which represent the Poisson noise on the expected number (and mass) of bodies per bin. For Run II, the two preferred models seem to represent the mass distribution equally well.
\subsection{A Turnover in the Mass Distribution}
\label{subsec:turnover}
In SI simulations to date, the low mass end of the PDF is described by a power law with $\alpha > 0$, where $dN/d\ln(M) \propto M^{-\alpha}$ (most of these works, described below, use $p \equiv \alpha + 1$). Such a slope cannot extend to arbitrarily low mass, because the total number of planetesimals would diverge. Sufficiently high resolution simulations should solve this problem by revealing a low mass tail with $\alpha < 0$. Such a measurement would determine the mass of the planetesimals formed by SI that initially dominate by number. We present here the first evidence of such a turnover.
In Fig. \ref{fig:barplot}, the mass frequency distribution of Run I presents a turnover below $\sim 0.003M_{\rm G}$ (roughly $100$ km-sized objects for the disk model in Section \ref{subsec:setup}). The number of clumps drops with decreasing mass, except for the lowest mass bin (more on this below).
Our preferred statistical model confirms this visual evidence.
The $K=4$ Truncated Broken Power Law model has a positive low mass slope of $-\alpha_1=0.102$ (as does the practically identical $K=5$ Three-segment Power Law model). Our bootstrap error estimates (see Table \ref{tab:fit_A}) confirm the significance of the positive slope. Moreover, the simpler $K=3$ Broken Power Law model (while not the most preferred model) also has a positive low mass slope ($-\alpha_1=0.079$), which is flatter, but agrees within statistical uncertainties.
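A bootstrap error estimate of this kind resamples the clump masses with replacement and refits the model. The sketch below illustrates the idea for the simplest case of a single untruncated power law with a standard maximum-likelihood slope estimator and synthetic data; it is not our multi-segment fitting pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def mle_slope(masses, m_min):
    """Maximum-likelihood exponent alpha for N(>M) ~ M^-alpha above m_min
    (the standard estimator for a pure power law)."""
    m = masses[masses >= m_min]
    return len(m) / np.sum(np.log(m / m_min))

# Synthetic power-law sample with true alpha = 0.5 above m_min = 1
true_alpha, m_min = 0.5, 1.0
sample = (rng.pareto(true_alpha, size=2000) + 1.0) * m_min

alpha_hat = mle_slope(sample, m_min)
# Bootstrap: refit on resampled data sets and take the spread as the error
boot = np.array([mle_slope(rng.choice(sample, size=len(sample)), m_min)
                 for _ in range(200)])
alpha_err = boot.std()
```

The spread of the bootstrap estimates then serves as the quoted statistical uncertainty on the slope.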
The evidence for a mass turnover (i.e. a positive slope at low masses) in Run I is fairly compelling. However, Run I also has an increase in the number of bodies in the lowest mass bin in Fig. \ref{fig:barplot}. It is unclear if this increase has a physical origin, though we suspect that insufficient resolution on the smallest scales is an issue. We note that the lower resolution Run II shows an uptick in the planetesimal numbers in the two lowest mass bins.
Future studies with higher resolution are required to better resolve the low mass planetesimals and better characterize the inevitable turnover in the gravitational collapse mass function at low masses.
By contrast, for Run II the resolution is not sufficient to provide evidence of a low mass turnover. However, one of the preferred statistical models (the $K=5$ Three-segment Power Law model) reveals a positive slope at intermediate masses ($-\alpha_2 = 0.362$). Whether this slope extends to the low-mass end, and whether the uptick of the two lowest mass bins is numerical, again require higher resolution studies. Moreover, the binned mass distribution of Run II shows a much flatter slope in the high mass regime than that of Run I, which is also shown by the CDFs in Fig. \ref{fig:fit_AC}. This comparison further demonstrates that the different physical conditions of our two simulations produce different mass distributions.
\subsection{Comparison with Previous Studies}
\label{subsec:comp2previous}
In this section, we compare our fitting results to two recent works on planetesimal mass distribution that considered models beyond a single power law.
\citet{Schafer2017} ran a suite of simulations with different resolutions and box sizes, fixing the physical parameters, $(\uptau_{\rm s}, Z, \Pi, \tilde{G}) = (0.314, 0.02, 0.05, 0.318)$, i.e.\ similar to our Run II with weaker self-gravity. They considered a two-parameter Truncated Power Law model and a three-parameter Variably Tapered Power Law model and found that the latter provides a better fit. Our analysis did not favor either of these models, even for the similar Run II. For the tapering exponent (of the Variably Tapered Power Law), they fit $\beta \simeq 0.3 $ -- $ 0.4$, similar to our results ($0.298$ and $0.527$ for Run I and II, respectively). \citet{Schafer2017} explained that with limited resolution ($\Delta x = H/640$ at best), their simulations did not produce enough planetesimals to constrain the shape of the CDF in the power law part, and thus the values of $\alpha$ and $M_{\rm exp}$. Their work used a different code, PENCIL, and a sink-particle algorithm to handle bound clumps. These differences make the comparison a useful check on numerical robustness.
\citet{Abod2019} used high-resolution ($\Delta x = H/2560$) simulations with $(\uptau_{\rm s}, Z, \tilde{G}) = (0.05, 0.1, 0.02)$ -- i.e.\ smaller solids in a lower mass disk than our runs -- to study how the initial mass distribution of planetesimals depends on the pressure gradient, $\Pi$, with values from $0$ to $0.1$.
They found that the planetesimal mass function depends at most weakly on $\Pi$. \citet{Abod2019} used a two-parameter Simply Tapered Power Law model to fit the simulation data. Our analysis did not prefer this model, though it does have the advantage of simplicity. They fit a power law exponent $\alpha\approx 0.3$ and a characteristic mass scale of $\sim 0.3 M_{\rm G}$ when $\Pi = 0.05$. Our results give similar power law slopes ($\alpha=0.208$ for Run I and $0.388$ for Run II) and, for Run I, a similar characteristic mass scale ($M_{\rm exp}=0.1385M_{\rm G}$). Our Run II fit, $M_{\rm exp}=11.2531M_{\rm G}$, differs by a factor of $\sim 81$, for reasons that are not yet clear.
Since there is always more than one difference in the physical conditions between studies, and only a limited set of model candidates is considered, these comparisons are somewhat inconclusive. Though costly, more parameter studies are needed to understand how the initial planetesimal mass function varies with each of the four physical parameters $(\uptau_{\rm s}, Z, \Pi, \tilde{G})$ and eventually how these parameter dependencies couple.
\section{Discussion and Conclusions}
\label{sec:final}
We investigate the mass distribution of planetesimals formed in high-resolution SI simulations. This mass distribution is of great astrophysical interest since it provides insights into observations of small bodies in the Solar System (e.g., Cold Classical Kuiper Belt Objects, \citealp{Nesvorny2019}) as well as into the modeling of subsequent protoplanet formation \citep[e.g.][]{Liu2019}.
In this work, we conduct SI simulations including particle self-gravity with the highest resolution to date, which produce broad mass distributions of planetesimals. While such distributions are top-heavy in mass for all numerical resolution choices, higher resolution simulations probe the lower-mass tail that dominates planetesimal numbers. We also develop and apply a new clump-finding tool, \texttt{PLAN} (described in Section \ref{subsec:plan}), to accurately identify self-bound clumps in our simulations and extract their mass distributions. \texttt{PLAN} was used in previous work \citep{Abod2019, Nesvorny2019}, but the details of the method are presented here.
We fit the mass distribution to statistical models with different parameterizations (described in Section \ref{subsec:models}). Determining the preferred model from simulation data is a difficult statistical art, especially when different model candidates have different numbers of parameters. Thus, this work considers a variety of model selection criteria: the commonly-used BIC and AIC, as well as a bootstrap model selection method that we call BMS.
Based on our analyses, we find that
\begin{itemize}
\item Run I is best described by a four-parameter model, the Truncated Broken Power Law.
\item For Run II (with smaller solids and a lower solid abundance than Run I) the preferred model depends on the model selection criterion used. The AIC and BIC prefer a three-parameter Broken Cumulative Power Law. The BMS selects a five-parameter model, the Three-Segment Power Law.
\end{itemize}
The interpretations and conclusions are drawn as follows:
\begin{enumerate}
\item The initial mass distribution of planetesimals formed by the streaming instability is shown to be numerically robust in the high mass regime, and is most probably also robust at lower masses. Simulations with different numerical resolutions (Run I and an equivalent run with lower resolution) show a similar mass distribution at the high mass end. Higher resolution gives a correction at intermediate masses and an extension to lower masses.
\item For different physical conditions, the initial mass distribution is \textit{not} universal. While all cases produce top-heavy mass distributions with similar overall shapes, simulations with different physical parameters produce statistically different mass distributions. Fitting the same model to different runs often yields different best-fit parameters, e.g. power law slopes and characteristic mass scales. Moreover, the preferred models for different runs have different functional forms. More work and more high-resolution simulations are needed to better understand the initial mass distribution.
\item Our preferred models were not previously considered in the literature. We analyze the models that were used in previous studies, and find alternate models which rank higher by all model selection criteria. We make no claim to have found the optimal model, which we may not have considered and which may change as simulation data improves.
\item We find evidence for a turnover in the mass frequency distribution at the low mass end. This evidence is most prominent for Run I, where the PDF of the logarithmic masses transitions to a positive slope below $M \sim0.003M_{\rm G}$ at roughly 2-$\sigma$ significance in the estimated slope. There is also statistical evidence for a turnover in the case of Run II, but only at intermediate masses. To better characterize the turnover of initial planetesimal mass distributions, higher resolution simulations are required.
\item The most complex model is not always selected as the preferred model. This result emphasizes the importance of applying complexity penalties for model selection.
\item Different model selection criteria disagree on both the absolute and relative rankings of different models. It is often difficult to rigorously justify a single model selection criterion for a given (astrophysical) application. Absent this justification, we generally recommend that multiple selection criteria be considered to increase confidence in model selection analyses.
\end{enumerate}
\citet{Nesvorny2019} recently found that the clumps formed by the SI possess excess angular momenta and are likely to form binaries or multiple systems. In that case, the mass distributions from our simulations may describe the mass function of binaries/systems, not individual objects. This finding introduces corrections to the overall mass distribution and also some uncertainties to our modeling, which are beyond the scope of this work. However, those corrections and uncertainties would be minor if all clumps eventually form equal-size binaries as proposed in \citet{Nesvorny2010}.
\section*{Acknowledgements}
We thank Kaitlin Kratter, Paola Pinilla, Philip Pinto, and Peter Behroozi for useful discussions. RL acknowledges support from NASA headquarters under the NASA Earth and Space Science Fellowship Program grant NNX16AP53H. ANY acknowledges partial support from NASA Astrophysics Theory Grant NNX17AK59G and from the NSF through grant AST-1616929.
\software{ATHENA \citep{Stone2008, Bai2010},
Matplotlib \citep{Matplotlib},
Numpy \& Scipy \citep{Numpy},
emcee v3.0rc2 \citep{Foreman-Mackey2013},
PLAN \citep{PLAN},
Pyridoxine \citep{Pyridoxine}.}
\section{Introduction}
The computation of residual gas particle (RGP) density profiles in particle accelerators is an essential task for optimizing beam pipe and vacuum system design. In the last two decades, several software packages have been developed \cite{ady2014introduction,rossi2004vasco}. They have been used for most of the high-energy accelerators presently in operation, including the Large Hadron Collider (LHC) at CERN.
There exist several approaches to evaluate gas density profiles \cite{bird1976molecular, lafferty1987vacuum, brush2003kinetic}. The most general one would be to tackle directly the nonlinear integro-differential Boltzmann equation \cite{harris2004introduction}. However, solving the Boltzmann equation requires an important computational effort due to the complicated structure of the collision integral. To reduce the complexity of the computation, gas density profiles have been calculated by probabilistic Monte Carlo simulations, either by the Direct~Simulation Monte~Carlo method~(DSMC) \cite{bird1976molecular} or, in a simpler way, by the Test~Particle Monte~Carlo method~(TPMC) \cite{nakhosteen2016handbook}. Among the latter methods, MolFlow+ \cite{ady2014introduction} is widely used in the vacuum technology community. However, although such methods have found applications beyond particle accelerators \cite{ady2016monte}, their extension to time-dependent behaviour and multiple-gas-species phenomena is difficult, in particular for long vacuum sectors.
Analytical models can overcome such obstacles if the evaluation of one-dimensional gas-density profiles is sufficient, preferably in simple geometries with cylindrical symmetry. A typical example is VASCO \cite{rossi2004vasco, bregliozzi2012vacuum}, which was developed at CERN in 2004 for the interaction regions of the LHC.
Recently, the preliminary design study of the Future Circular Colliders~(FCC) \cite{benedikt2014future}, with unprecedented energy and vacuum requirements, motivates extending the application of analytical methods. As an example, the hadron-hadron version of the FCC, with around 100~km circumference and 100~TeV centre-of-mass energy, requires a gas density in the arcs that is five times lower than that in the LHC. In this paper, we revise and update the previous models, present the underlying theory and introduce a new, more elaborate software package, PyVASCO.
In this new analytical method, we combine multiple effects due to material outgassing, beam-induced desorption, conductance limitations, and different pumping mechanisms. A coupled differential equation system forms the mathematical framework of the model. Each equation represents the mass balance of one of the four dominant gas species $\ce{H2}, \ce{CH4}, \ce{CO}$ and $\ce{CO2}$. These equations are coupled due to the interaction of the different gas species with each other. For example, $\ce{CO2+}$ \textendash\ after ionization by the beam \textendash\ may desorb $\ce{H2}$ from the beam pipe materials.
Mathematically, the problem translates into a large sparse matrix equation system of first order, as~in~\cite{rossi2004vasco}. We developed a new optimized solving algorithm and implemented the model in a Python environment. This results in a significantly improved performance in speed and memory usage, allowing the simulation of vacuum systems 100~times~longer than those achievable by the previous work \cite{rossi2004vasco}, in less than 30~seconds.
We benchmarked the simulation output against MolFlow+ and cross-checked it against the readings of pressure gauges installed in accelerators. The latter verification is presented for the Long~Straight~Sections~(LSS)~4~and~5 of the LHC, with a total length of over 1000~m along the beam line. \\
The focus of the simulations in this paper lies on circular hadron accelerators, since the LHC's gauge readings are available for verification and, additionally, the FCC-hh presents the ultimate goal of the design study. Nevertheless, the code PyVASCO is applicable to any other type of particle accelerator, such as lepton machines, linacs and heavy-ion accelerators.
\section{Setting up the physical vacuum model} \label{physical_model}
This section provides an introduction to the physical quantities and laws that form the equation system of the vacuum dynamics with the particle density $\mathbf{n}$ as unknown. The variables introduced here are summarized at the end of this section in TABLE \ref{PhyDescrTable}.\\
Mathematically, $\mathbf{n}$ represents the vector-valued density function of the four dominating gas species $\ce{H2}, \ce{CH4}, \ce{CO}$ and $\ce{CO2}$, as observed in a mass spectrum for an ultra-high vacuum (UHV) system (see Fig.~\ref{fig:massSpectrum}).
\begin{eqnarray}
\mathbf{n} = (n_{\ce{H2}}, n_{\ce{CH4}}, n_{\ce{CO}}, n_{\ce{CO2}})^T
\end{eqnarray}
\begin{figure}[b]
\centering
\includegraphics[width=0.4\textwidth]{1IdaAichingerPRAB.pdf}
\caption{Mass spectrum in a UHV system. The four peaks correspond to the presence of $\ce{H2}, \ce{CH4}, \ce{CO}$ and $\ce{CO2}$ gas particles. The measured ion-current can be converted to the gas density with appropriate calibration coefficients.}
\label{fig:massSpectrum}
\end{figure}
The framework of the Frenet-Serret coordinate system~\cite{kuhnel2015differential} is an appropriate choice for circular colliders, and the Cartesian coordinate system for straight linear colliders. In either case, we refer to the coordinate pointing in the direction of the beam as $x$. The horizontal and vertical coordinates are denoted by $y$ and $z$, and the time by $t$.
UHV systems are characterized by a high Knudsen number $Kn > 10$ with:
\begin{eqnarray}
Kn = \frac{k_B T}{\sqrt{2}\pi \delta^2 p d},
\end{eqnarray}
where $k_B$ is the Boltzmann constant, $T$ the temperature, $d$ the beam pipe diameter, $\delta$ is the particle hard shell diameter and $p$ the total pressure. \\
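For orientation, the Knudsen number can be evaluated numerically as below; the chamber and gas parameters are illustrative, not taken from a specific machine.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def knudsen(T, p, d, delta):
    """Knudsen number Kn = k_B T / (sqrt(2) pi delta^2 p d)."""
    return K_B * T / (np.sqrt(2) * np.pi * delta**2 * p * d)

# Hypothetical UHV conditions: room temperature, 1e-9 mbar (1e-7 Pa) total
# pressure, 80 mm pipe diameter, ~0.27 nm hard-shell diameter for H2
kn = knudsen(T=293.0, p=1e-7, d=0.08, delta=2.7e-10)
```

For such conditions the Knudsen number is many orders of magnitude above 10, confirming that UHV systems operate deep in the molecular flow regime.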
The pressure $\mathbf{p}$ of the gas and the particle density $\mathbf{n}$ in a UHV-system are correlated via the ideal gas law:
\begin{eqnarray}
\mathbf{p} = \mathbf{n} \cdot k_B \cdot T
\end{eqnarray}
For clarification, the total pressure is the sum of the partial pressures of the individual gas species $i$:
$$ p = \sum\limits_{i=1}^4 \mathbf{p}_i $$
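As a quick numerical check of this conversion (the density value is illustrative):

```python
K_B = 1.380649e-23  # Boltzmann constant [J/K]

def pressure_pa(n_per_m3, T):
    """Ideal-gas conversion p = n k_B T (partial pressure per species)."""
    return n_per_m3 * K_B * T

# e.g. a hypothetical H2 density of 1e13 m^-3 at room temperature,
# corresponding to roughly 4e-10 mbar
p_h2 = pressure_pa(1.0e13, 293.0)  # [Pa]
```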
The Maxwell Boltzmann distribution describes the particle speed $v$ for ideal gases \cite{nakhosteen2016handbook, chapman1970mathematical}.
The corresponding mean velocity is given by
\begin{eqnarray}
\overline{\mathbf{v}} = \sqrt{\frac{8 k_B T}{\mathbf{m} \pi}} \label{velo}
\end{eqnarray}
depending on the molecular mass $\mathbf{m}$ and $T$.
Fick's first and second laws of diffusion \cite{crank1979mathematics, bird1976molecular} define the fundamental balance equation of the gas kinetics in a UHV-system. We get the following equation for an isotropic medium with a constant diffusion coefficient $\mathbf{a}$:
\begin{eqnarray} \label{balance}
\frac{\partial \mathbf{n}}{\partial t} = \mathbf{a} \frac{\partial^2 \mathbf{n}}{\partial x^2}
\end{eqnarray}
if diffusion is one-dimensional.
This assumption is appropriate since, in most cases, the length of the vacuum chamber is much larger than its transverse dimensions, so diffusion occurs mainly along the beam line. Furthermore, experimental data from the laboratory are also one-dimensional, since they measure the gradient of the gas concentration along the x-axis.
The diffusion coefficient $\mathbf{a}$ depends on the particle's speed, its mass, and on the beam pipe geometry. Under molecular flow conditions, $\mathbf{a}$ is given for cylindrical vacuum chambers by the specific conductance based on the calculations of Knudsen \cite{knudsen1909gesetze, nakhosteen2016handbook}:
\begin{eqnarray}
\mathbf{a} = \Big(\frac{d}{2}\Big)^2 \pi \cdot \frac{ d}{3} \cdot \overline{\mathbf{v}} = \frac{d^3 \pi }{12} \cdot \overline{\mathbf{v}}
\end{eqnarray}
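Eq.~\eqref{velo} and the specific conductance can be evaluated for the four dominant species as follows; the temperature and chamber diameter are illustrative.

```python
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant [J/K]
AMU = 1.66053907e-27  # atomic mass unit [kg]

def mean_speed(T, m_amu):
    """Maxwell-Boltzmann mean speed v = sqrt(8 k_B T / (pi m)) [m/s]."""
    return np.sqrt(8 * K_B * T / (np.pi * m_amu * AMU))

def specific_conductance(T, m_amu, d):
    """Knudsen specific conductance a = (pi d^3 / 12) * v_mean [m^4/s]."""
    return np.pi * d**3 / 12 * mean_speed(T, m_amu)

# Molecular masses of the four dominant species in amu
species = {"H2": 2.016, "CH4": 16.04, "CO": 28.01, "CO2": 44.01}
v = {s: mean_speed(293.0, m) for s, m in species.items()}
a = {s: specific_conductance(293.0, m, d=0.08) for s, m in species.items()}
```

The lightest species, $\ce{H2}$, has the highest mean speed (roughly 1750~m/s at room temperature) and therefore the highest specific conductance, since both scale as $m^{-1/2}$.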
In order to obtain a correct mass balance, the continuous flow into and out of the system (see also Subsections \ref{flow_into} and \ref{flow_out}), e.g. in terms of desorption or pumping, at rates $\mathbf{q}$ and $\mathbf{r}$ per unit volume, must be added to the right-hand side of Eq. \eqref{balance}:
\begin{eqnarray}
\frac{\partial \mathbf{n}}{\partial t} = \mathbf{a} \frac{\partial^2 \mathbf{n}}{\partial x^2} + \mathbf{q}(x,t) - \mathbf{r}(x,t) \cdot \mathbf{n} \label{equSys}
\end{eqnarray}
Note that the flow out of the system is always proportional to the prevailing density $\mathbf{n}$, whereas the flow into the system can act independently of $\mathbf{n}$.
Local sinks and sources, e.g. due to lumped pumps or outgassing related to possible tiny leaks, are considered in the boundary conditions.
The idea is now to fragment the domain into a finite number of elements, where $ \mathbf{q} $, $ \mathbf{r} $ and $ \mathbf{a} $ can be taken as constant vectors. This allows solving the equation system \eqref{equSys} locally on each element. The piecewise solution concept is indicated by an additional index $k$ (see~Fig.~\ref{fig:segmenting}).
\begin{figure}[b]
\centering
\includegraphics[width=0.4\textwidth]{2IdaAichingerPRAB.pdf}
\caption{Segmenting domain in $N$ parts. Solution $n_k$ describes density on segment $k$.}
\label{fig:segmenting}
\end{figure}
Accordingly, appropriate intersection conditions combine the local solutions into a global model.
The conservation principle applied at the interface between two elements $k-1$ and $k$ leads to density and flux continuity conditions. Thus, at the intersection point $x_k$ with a lumped gas source $\mathbf{g}$ and a lumped pump $\mathbf{s}$ (ion pumps, turbomolecular pumps), it holds that:
\begin{eqnarray}
\mathbf{n}_{k-1} &=& \mathbf{n}_k \label{boundary_cond_IC1} \\
- \frac{D\mathbf{n}_{k-1}}{Dt} + \frac{D\mathbf{n}_{k}}{Dt} &=& \mathbf{s}_k \mathbf{n}_k - \mathbf{g}_k \label{boundary_cond_IC2} ,
\end{eqnarray}
where the total derivative for a space-and-time dependent variable is given by \cite{bird2007transport, ferziger1973mathematical}:
\begin{eqnarray}
\frac{D\mathbf{n}}{Dt} := \frac{\partial \mathbf{n}}{\partial t} + \mathbf{a} \frac{\partial \mathbf{n}}{\partial x}
\end{eqnarray}
Note the change of sign in front of the terms involving the total derivative: the derivative at the beginning of segment $k$ points in the positive direction, while at the end of segment $k-1$ it points in the negative direction.\\
At the extremities of the vacuum chamber, open boundary conditions are assumed; hence only half of the pumping speed and half of the gas release is considered.
\begin{eqnarray}
\frac{D \mathbf{n}_1}{Dt} = \frac{\mathbf{s}_1}{2} \mathbf{n}_1 - \frac{\mathbf{g}_1}{2} \qquad &\textrm{for }& x = x_0\\
\frac{D \mathbf{n}_N}{Dt} = \frac{\mathbf{s}_{N+1}}{2} \mathbf{n}_N - \frac{\mathbf{g}_{N+1}}{2} \qquad &\textrm{for }& x = x_{N+1}
\end{eqnarray}
Note that periodic boundary conditions at the extremities are also a possible choice, relating $\mathbf{n}_1$ and $\mathbf{n}_N$ in the sense of Eqs. \eqref{boundary_cond_IC1} and \eqref{boundary_cond_IC2}.
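PyVASCO solves the resulting piecewise system through a sparse matrix equation. As an independent illustration only, the steady state of Eq.~\eqref{equSys} with pump-type boundary conditions at both extremities can be sketched with a simple finite-difference discretization for a single species with constant coefficients; all parameter values below are illustrative placeholders, and the scheme is a simplified stand-in for the analytical interface conditions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def steady_density(L, N, a, q, r, s_end):
    """Finite-difference sketch of the steady state of a n'' + q - r n = 0
    on a tube of length L, with lumped pumps of speed s_end/2 at each end."""
    dx = L / (N - 1)
    main = -2 * a / dx**2 - r * np.ones(N)        # interior diagonal
    off = a / dx**2 * np.ones(N - 1)              # coupling to neighbours
    A = diags([off, main, off], [-1, 0, 1], format="csc").tolil()
    rhs = -q * np.ones(N)
    # Robin boundary rows: a n'(0) = (s/2) n(0) and -a n'(L) = (s/2) n(L)
    A[0, 0] = -a / dx - s_end / 2; A[0, 1] = a / dx; rhs[0] = 0.0
    A[-1, -1] = -a / dx - s_end / 2; A[-1, -2] = a / dx; rhs[-1] = 0.0
    return spsolve(A.tocsc(), rhs)

# Illustrative numbers: 100 m tube, uniform outgassing, pumps at both ends
n = steady_density(L=100.0, N=201, a=50.0, q=1e12, r=0.0, s_end=0.1)
```

With no distributed pumping ($r=0$) the solution is the expected parabolic profile: the density peaks in the middle of the tube and drops symmetrically towards the pumps at the extremities.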
\subsection{Flow into the vacuum-system $\mathbf{q}$} \label{flow_into}
There are four main phenomena by which particles are added to the system.
The first three, described in the following subsections, are due to beam-dynamic effects and are considered the main contributions to the dynamic gas density. In these cases, energetic particles such as photons, ions and electrons bombard the chamber. If the energy of the impinging particles couples to the electronic structure of the beam pipe material, molecular desorption from the chamber walls may take place. The desorption probability is described by the parameter $\eta$. Its value is mainly determined from experimental data.
In general, we consider three different types of impinging particles, namely photons, ions and electrons, which lead to different desorption phenomena:
\subsubsection{Photon-induced desorption} \label{flowinto1}
Accelerated charged particles emit photons in the presence of magnetic fields. This process is called synchrotron radiation (SR) \cite{hofmann2004physics}. Its impact on vacuum systems is common in electron-positron circular colliders, as for LEP \cite{bailey1998synchrotron}, and can also be observed in high-energy proton circular colliders such as the LHC \cite{tuckmantel2005synchrotron}. The total number of photons emitted~per~second and per meter is described by the photon flux $\dot{\Gamma}$. It depends on the energy of the accelerated particle, expressed by the relativistic factor $\gamma$, and on the strength of the magnetic field in terms of the bending radius $\rho$:
\begin{eqnarray} \label{photonflux}
\dot{\Gamma} = \frac{5 \sqrt{3}}{24 \pi} \frac{e \, \gamma \, I}{\epsilon_0 \hbar \rho c},
\end{eqnarray}
where $\hbar$ is the reduced Planck constant, $c$ the speed of light, $\epsilon_0$ the permittivity of vacuum, $e$ the elementary charge and $I$ the beam current.
The process of photon-induced desorption depends on the energy spectrum of the impinging photons and on the material of the vacuum chamber. An approximately linear dependence has been observed between the critical photon energy $E_c$, in the range from 10 to 1000~eV, and the desorption yield $\eta_{ph}$ for the most commonly used beam chamber materials \cite{gomez1994comparison}.
$E_c$ is the median of the photon energy spectrum:
\begin{eqnarray*}
E_c = \frac{3}{2}\frac{\hbar c}{\rho} \gamma^3.
\end{eqnarray*}
The derived desorption yields for a copper-lined chamber, for example, are given by \cite{gomez1994comparison}:
\begin{eqnarray*}
\eta(\ce{H_2}) \sim E_c^{ 0.74}, \quad \eta(\ce{CH_4}) \sim E_c^{ 0.94}, \\
\eta(\ce{CO}) \sim E_c^{ 1.01}, \quad \eta(\ce{CO_2}) \sim E_c^{ 1.12}
\end{eqnarray*}
Note that photons below 4~eV do not provoke desorption; consequently, the previous scaling is valid when the part of the spectrum below 4~eV represents only a small fraction of the total integral. In \cite{gomez1994comparison}, photon-induced desorption was studied for $E_c$ up to 300~eV.
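Both the photon flux and the critical energy can be evaluated numerically. The sketch below includes the beam current dependence of the flux via a factor $e I$, which the flux (per second and per meter) requires dimensionally; the machine parameters are LHC-like but illustrative.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant [J s]
C = 2.99792458e8         # speed of light [m/s]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
E_CH = 1.602176634e-19   # elementary charge [C]

def photon_flux(gamma, rho, I):
    """Photons emitted per second and per metre of bend for beam current I."""
    return 5 * np.sqrt(3) / (24 * np.pi) * E_CH * gamma * I / (EPS0 * HBAR * rho * C)

def critical_energy_eV(gamma, rho):
    """Critical (median) photon energy E_c = (3/2) hbar c gamma^3 / rho, in eV."""
    return 1.5 * HBAR * C * gamma**3 / rho / E_CH

# LHC-like arc parameters: 7 TeV protons (gamma ~ 7460), rho ~ 2804 m, I ~ 0.58 A
flux = photon_flux(gamma=7460.0, rho=2804.0, I=0.58)
e_c = critical_energy_eV(gamma=7460.0, rho=2804.0)
```

These inputs give a critical energy of about 44~eV and a flux of order $10^{17}$ photons per second and per meter, consistent with the values commonly quoted for the LHC arcs.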
\subsubsection{Ion-induced desorption} \label{flowinto2}
The beam can ionize the RGP \cite{turner1996ion}. The generated positive ions are then repelled by the positive space charge of the proton beam and impinge on the vacuum chamber walls, where they may cause desorption of tightly bound molecules from the surface. This phenomenon was observed, for example, in the Intersecting Storage Rings (ISR) at CERN \cite{calder1974ion}.
In this process, the different gas species may influence each other.
The interaction is described by the desorption yield matrix
\footnotesize
\begin{eqnarray*}
\mathbf{H}_{\textrm{ion}} = \left(
\begin{array}{llll}
\eta_{\ce{H2+} \rightarrow \ce{H2}} & \eta_{\ce{CH4+} \rightarrow \ce{H_2}} & \eta_{\ce{CO+} \rightarrow \ce{H2}} & \eta_{\ce{CO2+} \rightarrow \ce{H2}} \\
\eta_{\ce{H2+} \rightarrow \ce{CH4}} & \eta_{\ce{CH4+} \rightarrow \ce{CH4}} & \eta_{\ce{CO+} \rightarrow \ce{CH4}} & \eta_{\ce{CO2+} \rightarrow \ce{CH4}} \\
\eta_{\ce{H2+} \rightarrow \ce{CO}} & \eta_{\ce{CH4+} \rightarrow \ce{CO}} & \eta_{\ce{CO+} \rightarrow \ce{CO}} & \eta_{\ce{CO2+} \rightarrow \ce{CO}} \\
\eta_{\ce{H2+} \rightarrow \ce{CO2}} & \eta_{\ce{CH4+} \rightarrow \ce{CO2}} & \eta_{\ce{CO+} \rightarrow \ce{CO2}} & \eta_{\ce{CO2+} \rightarrow \ce{CO2}} \\
\end{array}
\right).
\end{eqnarray*}
\normalsize{}
For clarification, the entry $H_{1,4} = \eta_{\ce{CO2+} \rightarrow \ce{H2}}$ describes the probability that a $\ce{CO2+}$ ion desorbs a $\ce{H2}$ molecule.
The process of desorbed particles is thus described with the product of the desorption yield matrix $ \mathbf{H}_{\textrm{ion}} $ and the ion flux $ \dot{\mathbf{I}}_{\textrm{ion}} $:
\begin{eqnarray*}
\big( \mathbf{H}_{\textrm{ion}} \cdot \dot{\mathbf{I}}_{\textrm{ion}} \cdot \mathbf{n} \big)_i = \sum\limits_{j=1}^4 H_{i,j} \cdot \frac{I}{e} \sigma_j \cdot n_{j} ,
\end{eqnarray*}
where the summation runs over all impinging species $j$ that desorb particles of species $i$. Each summand is the product of the ion-induced desorption matrix entry $H_{i,j}$ with the number of ionized particles $\frac{I}{e} \sigma_j \, n_{j}$, where $I$ is the beam current, $e$ the elementary charge, $\sigma_j$ the ionization cross section and $n_j$ the RGP density. The ionization cross sections for beam energies greater than 100~keV are calculated in \cite{mathewson1996beam}.
Additionally, it is worth mentioning that $\sigma$ also depends on the ion energy, which in turn depends on the beam current, making the ion-induced desorption term quadratically dependent on the beam current.
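The ion-induced source term can be sketched as a matrix-vector product; all yields, cross sections and densities below are illustrative placeholders, not measured values.

```python
import numpy as np

E_CH = 1.602176634e-19  # elementary charge [C]

# Hypothetical desorption-yield matrix H[i, j]: molecules of species i
# desorbed per impinging ion of species j (order: H2, CH4, CO, CO2)
H_ion = np.array([[5.0, 4.0, 4.0, 4.0],
                  [0.2, 0.5, 0.3, 0.3],
                  [0.5, 0.6, 1.0, 0.8],
                  [0.3, 0.4, 0.5, 0.9]])
sigma = np.array([2.0e-23, 1.1e-22, 1.0e-22, 1.4e-22])  # cross sections [m^2], illustrative
n_gas = np.array([1.0e13, 1.0e11, 5.0e11, 2.0e11])      # densities [m^-3], illustrative
I_beam = 0.58                                           # beam current [A]

# Ionization rate per unit length for each species j: (I/e) * sigma_j * n_j;
# the desorbed flux of species i is then the matrix-vector product
ion_rate = I_beam / E_CH * sigma * n_gas
q_ion = H_ion @ ion_rate  # molecules desorbed per second per metre, per species
```

The row index of `q_ion` selects the desorbed species, so the coupling between gas species enters the balance equations exactly through this product.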
\subsubsection{Electron-induced desorption - Electron cloud} \label{flowinto3}
The beam can generate primary electrons from synchrotron radiation, from ionized gas molecules impinging on the wall and from desorption induced by sufficiently high electromagnetic fields \cite{iadarola2014electron}. These primary electrons are accelerated by the positively charged beam, hit the chamber wall and may produce a cascade of secondary electrons. These electrons are in turn accelerated towards the beam, cross the chamber and hit the wall on the opposite side, producing more electrons and leading eventually to beam instabilities and gas desorption. This phenomenon depends in a complex way on the beam and chamber parameters and also on the bunch filling pattern. The CERN proprietary software PyECloud~\cite{iadarola2013pyecloud} addresses this phenomenon.
Based on observations, we can conclude that the bigger the aperture of the vacuum chamber, the longer an electron is accelerated, the larger the surface where desorption can take place and, hence, the higher the electron-induced desorption. In addition, it has been observed in the LHC that reducing the bunch spacing to 25~ns causes a significantly increased electron cloud effect in comparison to 50~ns or 100~ns~\cite{bradu2016compensation}.
\subsubsection{Other sources and summary of the total flow into the system} \label{flowinto4}
Thermal desorption generates what is usually called the static vacuum, which is present even in the absence of the beam. For example, the chamber walls and the components within the chamber randomly release gas which was adsorbed on the surfaces or trapped in the bulk of the material. Air and water vapour may enter the system through leaks or permeate through seals. Gauges and beam instrumentation provide an additional source of outgassing.\\
Summarizing the flow into the vacuum system, gives the following expression for $\mathbf{q}$:
\begin{eqnarray*}
\mathbf{q}(x,t) &=& \underbrace{ \mathbf{H}_{ion} \cdot \dot{I}_{ion} \cdot \mathbf{n} }_{\textrm{ion-induced desorption}}
+ \underbrace{\mathbf{\eta}_{ph} \cdot \dot{\Gamma}_{ph}}_{\textrm{photon-induced desorption}} \\
&&
+ \underbrace{\mathbf{\eta}_{e} \cdot \dot{N}_{e}}_{\textrm{electron-induced desorption}} +
\underbrace{\pi \cdot d \cdot \mathbf{q}_{\textrm{th}}}_{\textrm{thermal outgassing}}
\end{eqnarray*}
\subsection{Flow out of the vacuum-system $\mathbf{r}$} \label{flow_out}
Particles are continuously added to the UHV-system, as we have seen in the previous section. Therefore, sufficient pumping systems have to be installed, in terms of distributed and spatially localized pumps. The latter are mainly provided by conventional pumps, e.g. ion pumps or turbomolecular pumps.
Apart from this, impinging RGP may also stick to the wall due to thermodynamic or chemical binding at the surface of the chamber; this constitutes distributed pumping. For example, a special surface coating of the vacuum chamber made of $ \ce{Al}, \ce{Zr}, \ce{Ti}, \ce{V} $ and $ \ce{Fe} $, better known as Non-Evaporable Getter (NEG) \cite{benvenuti2001vacuum}, induces distributed pumping. The RGP of $\ce{H2}$, $ \ce{CO} $ and $ \ce{CO2}$ are first chemically trapped by the NEG coating and then adsorbed into the bulk of the material. After a surface coverage of about one monolayer of adsorbed $ \ce{CO} $ and $ \ce{CO2} $, the NEG saturates and the pumping efficiency drops to negligible values. $\ce{CH4}$ and noble gases are not adsorbed by NEG \cite{chiggiato2006ti}. \\
Distributed pumping also occurs in cryogenic areas, when a gas-particle hits the wall and immediately condenses. This phenomenon is known as cryo-pumping \cite{haefer2013kryo}.
Generally, under molecular flow equilibrium conditions, the rate of molecules impinging on the wall per unit length and unit particle density is given by
\begin{eqnarray}
\frac{A \cdot \overline{\mathbf{v}} }{4},
\end{eqnarray}
where $ A $ describes the lateral surface of the vacuum chamber, and $\overline{\mathbf{v}}$ the mean velocity of the RGP as defined in Eq.~\eqref{velo}.
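As an illustration of this impingement term, the following sketch evaluates the wall-collision coefficient $A \cdot \overline{\mathbf{v}}/4$ for a round chamber, assuming that Eq.~\eqref{velo} is the standard Maxwell--Boltzmann mean speed $\overline{v} = \sqrt{8 k_B T/(\pi m)}$ (the helper names are ours, not part of PyVASCO):

```python
import math

K_B = 1.3806488e-23   # Boltzmann constant [J/K], as in Table I
AMU = 1.660539e-27    # atomic mass unit [kg]

def mean_speed(mass_amu, temperature):
    """Maxwell-Boltzmann mean speed v = sqrt(8 k_B T / (pi m)) in m/s."""
    m = mass_amu * AMU
    return math.sqrt(8.0 * K_B * temperature / (math.pi * m))

def wall_collision_coefficient(diameter, mass_amu, temperature):
    """A * v / 4, with lateral surface per unit length A = pi * d."""
    lateral_surface = math.pi * diameter
    return lateral_surface * mean_speed(mass_amu, temperature) / 4.0

# H2 is the fastest of the four species, CO2 the slowest
v_h2 = mean_speed(2.0, 300.0)    # roughly 1.8 km/s at room temperature
v_co2 = mean_speed(44.0, 300.0)
```

Since $\overline{v} \propto 1/\sqrt{m}$, the heavier species hit the wall less often and are pumped less effectively at the same sticking coefficient.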
In contrast to distributed pumping, particles sometimes get pumped only at an orifice in the beam pipe wall. For example, the holes in the LHC beam screen provide linear pumping \cite{cruikshank1997mechanical}.
\onecolumngrid
\vspace{\columnsep}
\subsection{Problem description} \label{problem_descr}
The central parts of the model are the balance equations and the boundary conditions, summarized here: \\
Mass-balance equation for segment $k$ (discarding the index $k$ for each coefficient):
\begin{eqnarray} \label{balanceEQ}
\underbrace{ \frac{\partial \mathbf{n}}{\partial t}}_{\substack{\text{Time variation} \\ \text{of particles}}} =
\underbrace{\mathbf{a} \circ \frac{\partial^2 \mathbf{n}}{d x^2}}_{\substack{\text{Diffusion}}}+\underbrace{\mathbf{\eta}_{ph} \cdot \dot{\Gamma}_{ph}}_{\substack{\text{Desorption} \\ \text{by photons}}}
+ \underbrace{\mathbf{\eta}_{e} \cdot \dot{N}_{e}}_{\textrm{by electrons}}
+ \underset{{\color{red}Multi gas model}} {\boxed{ \underbrace{ \mathbf{H}_{ion} \cdot \dot{I}_{\textrm{ion}} \circ \mathbf{n} }_{\substack{\text{Ionization by beam} \\ \text{and desorption by ions}}} }}
+ \underbrace{ A \cdot \mathbf{q}_{\textrm{th}}}_{\substack{\text{thermal} \\ \text{outgassing}}}
- \underbrace{\alpha \circ \frac{A \cdot \overline{\mathbf{v}} }{4} \circ (\mathbf{n}- \chi_{\textrm{cryo}} \mathbf{n_e} )}_{\textrm{wall distributed pumping}}
- \underbrace{\mathbf{p}_{l} \circ \mathbf{n}}_{\substack{\text{linear} \\ \text{pumping}}} \qquad \quad
\end{eqnarray}
Boundary and intersection conditions for segment $k-1$ and $k$:
\begin{eqnarray} \label{IC}
\mathbf{n}_{k-1}(x_k) &=& \mathbf{n}_k(x_k) \\
-\mathbf{a}_{k-1} \circ \mathbf{n}'_{k-1} (x_k) + \mathbf{a}_{k} \circ \mathbf{n}'_{k} (x_k) &=& \mathbf{s}_k \circ \mathbf{n}_k (x_k) - \mathbf{g}_k \\
\mathbf{a}_{1} \circ \mathbf{n}'_{1} (x_1) &=& \frac{\mathbf{s}_1}{2} \circ \mathbf{n}_1 (x_1) - \frac{\mathbf{g}_1}{2}\\
- \mathbf{a}_{N} \circ \mathbf{n}'_{N} (x_{N+1}) &=& \frac{\mathbf{s}_{N+1}}{2} \circ \mathbf{n}_N (x_{N+1}) - \frac{\mathbf{g}_{N+1}}{2}
\end{eqnarray}
\begin{remark}
The symbol $ \circ $ indicates a component-wise multiplication of two vectors. The symbol $'$ indicates the normal derivative at the boundaries.\\
Vectors are denoted by lower-case bold letters (with the exception of Greek symbols) and matrices by upper-case bold letters.\\
\end{remark}
\begingroup
\begin{table}[h]
\caption{\label{PhyDescrTable} Model parameters for physical description.}
\begin{ruledtabular}
\begin{tabular}{llll}
\textbf{Symbol} &
\textbf{Dim} &
\textbf{Unit}&
\textbf{Description} \\
\hline
$ \mathbf{n} $ & $\mathbb{R}^4$ & Particles/$\textrm{m}^3$ & vector-valued particle density function of $ \ce{H2}, \ce{CH4}, \ce{CO} $ and $ \ce{CO2} $ \\
$ \mathbf{a} $ & $\mathbb{R}^4$ & $\textrm{m}^4/s$ & Specific conductance \\
$ \dot{\Gamma}_{ph} $ & $\mathbb{R}$& $photons/ (s\cdot \textrm{m})$ & Photon flux emitted by the bent beam in the magnetic areas \\
$ \dot{N}_{e} $ & $\mathbb{R}$& $ electrons/(s\cdot \textrm{m})$ & Electron flux impinging on the chamber wall due to the electron cloud phenomenon \\
$ \dot{I}_{\textrm{ion}} $& $\mathbb{R}$ & $ ion/(s\cdot \textrm{m})$ & Ion flux; proportional to $\mathbf{n}$\\
$ \frac{I}{e} $ & \quad & $ \frac{1}{s} $ & Number of high energy protons passing per second \\
$ \eta_{ph} $ & $\mathbb{R}^{4}$ & 1 & Photon induced desorption yield $(\eta \geq 0 )$ describes the number of molecules desorbed per photon\\
$ \eta_{e} $ & $\mathbb{R}^{4}$ & 1 & Electron induced desorption yield $(\eta \geq 0) $ describes the number of molecules desorbed per electron\\
$ \mathbf{H}_{ion} $ & $\mathbb{R}^{4 \times 4}$ & 1 & Ion-induced desorption yield, probability that an ion of species $ i $ desorbs a molecule of \\
&&&species $ j $ for $ i,j \in \{\ce{H2}, \ce{CH4}, \ce{CO}, \ce{CO2} \}$ \\
A & $ \mathbb{R} $ & m & Lateral surface per unit-length of beam chamber \\
$ \mathbf{r} $ & & & Sinks of a UHV-system \\
$ \mathbf{q} $ & & & Sources of a UHV-system \\
$ \mathbf{q}_{\textrm{th}} $ & $\mathbb{R}^{4}$ & $ 1/(\textrm{m}^2 s) \footnotemark[1] $ & Thermal outgassing rate \\
$ \alpha $ & $\mathbb{R}^{4}$ & $ 1 $ & Sticking coefficient \\
$ \overline{\mathbf{v}} $ & $\mathbb{R}^{4}$ & $ \textrm{m}/s $ & Average Maxwell-Boltzmann velocity of the four gas species \\
$ \mathbf{p}_l $ & $\mathbb{R}^{4}$ & $ \textrm{m}^2/s $ & Linear pumping per unit-length \\
N & $\mathbb{R}$ & 1 & Number of segments \\
d & $\mathbb{R}$ & m & Diameter \\
L & $\mathbb{R}$ & m & Length of segment \\
T & $\mathbb{R}$ & K & Absolute temperature of segment \\
$ \mathbf{s}_k $ & $\mathbb{R}^{4}$ & $ \textrm{m}^3/s $ & Pumping speed of lumped pump located on the beginning of segment $k$ \\
x & $\mathbb{R}$ & m & Spatial coordinate along beam line\\
$ x_k $ & $\mathbb{R}$ & m & Intersection point of segment $k-1$ and $k$\\
$ \mathbf{g} $ & $\mathbb{R}^{4}$ & $1/s$ & Local punctual gas source (e.g. gas leak) \\
$ \sigma $ & \quad & $\textrm{ion}/\textrm{proton} \cdot \textrm{m}^2 $ & Ionization cross section of residual gas molecules by high energy protons \\
$ k_B $& \quad & $ \textrm{m}^2 kg/(s^2 K) $ & Boltzmann constant $ k_B $ = $ 1.3806488 \cdot 10^{-23} $\\
$ p $& $\mathbb{R}$ & Pa & Total pressure\\
$ \mathbf{p} $& $\mathbb{R}^{4}$ & Pa & Equivalent pressure for particle density $\mathbf{n} $ using ideal gas equation\\
$ \chi_{\textrm{cryo}} $& $\mathbb{N}$ & \quad & $\chi = 1$ for cryogenic areas and 0 for room temperature areas\\
$ \mathbf{n_e} $& $\mathbb{R}^4$ & $ 1/\textrm{m}^3 $ & Background density without beam (static density)\\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{[ $\widetilde{\mathbf{q}}] =[ \frac{\textrm{mbar} \cdot \textrm{l}}{\textrm{s} \cdot \textrm{cm}^2}] $ is more common in practice. $\mathbf{q}_{\textrm{th}} = \widetilde{\mathbf{q}} \cdot \frac{10^3}{k_B \cdot T}$ }
\end{table}
\endgroup
\twocolumngrid
\section{Analytical Solution Method}
The physical description of a vacuum system introduced above now provides a model that needs to be solved. We focus here on the solution concept from a mathematical point of view.
The differential Eq.~\eqref{balanceEQ} has a solution that can be written in an exact and closed form under stationary assumptions. This assumption is applicable after a specific amount of pump-down time or when the accelerator operates with a stable beam; time variations are then negligible and a stationary solution delivers accurate results.
This leads to an elliptic partial differential equation with piecewise constant coefficients.\\
Introduced variables are summarized again at the end of this section in Table \ref{para_math}.\\
An equivalent stationary problem description of the balance equation is given by:
\begin{eqnarray}
\overrightarrow{0}_4 = \mathbf{A}(x) \frac{d^2 \mathbf{n}}{dx^2} + \mathbf{B}(x) \mathbf{n} + \mathbf{c}(x) \label{ProblemD1}
\end{eqnarray}
with $ \mathbf{A}, \mathbf{B} \in \mathbb{R}^{4\times 4} $ and $ \mathbf{c} \in \mathbb{R}^4 $ being the matrix and vector assemblies of the parameters from the previous section:
\begin{eqnarray*}
\mathbf{A} &=& \mathbf{a} \cdot \mathbf{I}_{4}\\
\mathbf{B} &=& \mathbf{H}_{ion} \cdot \dot{I}_{\textrm{ion}}-
\mathbf{\alpha} \circ \frac{\text{A} \cdot \overline{\mathbf{v}}}{4} - \mathbf{p}_l\\
\mathbf{c} &=&
- \chi_{cryo} \cdot \mathbf{\alpha} \circ \frac{A \cdot \overline{\mathbf{v}}}{4} \circ \mathbf{n}_e +
\mathbf{\eta}_{ph} \cdot \dot{\Gamma}_{ph} +
\mathbf{\eta}_{e} \cdot \dot{N}_{e} +
A \cdot \mathbf{q}_{th} \label{Math_description}
\end{eqnarray*}
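To make the assembly concrete, a minimal sketch (our own helper with hypothetical parameter names, not the PyVASCO implementation) of how $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{c}$ could be built per segment from the quantities of Table~\ref{PhyDescrTable}:

```python
import numpy as np

def assemble_coefficients(a, H_ion, I_ion_dot, alpha, A_surf, v_mean,
                          p_lin, eta_ph, gamma_ph_dot, eta_e, N_e_dot,
                          q_th, chi_cryo=0, n_static=None):
    """Assemble A, B, c of 0 = A n'' + B n + c for one segment.
    Per-species quantities are length-4 arrays (H2, CH4, CO, CO2);
    H_ion is the 4x4 ion-induced desorption matrix."""
    if n_static is None:
        n_static = np.zeros(4)
    wall = alpha * A_surf * v_mean / 4.0       # distributed wall pumping
    A = np.diag(a)                             # realizes a o n'' as A n''
    B = H_ion * I_ion_dot - np.diag(wall + p_lin)
    c = (-chi_cryo * wall * n_static
         + eta_ph * gamma_ph_dot + eta_e * N_e_dot + A_surf * q_th)
    return A, B, c
```

Note that only the ion-induced desorption contributes off-diagonal entries to $\mathbf{B}$; all pumping terms enter on the diagonal.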
The major challenge in solving this system lies in $\mathbf{B}$. A fully occupied matrix $ \mathbf{B} $ couples the balance equations of all species with each other.
The idea is to transform Eq.~\eqref{ProblemD1} into a system of first-order equations, for which a solution concept is known \cite{hsieh2012basic, crank1979mathematics, saff2015fundamentals}, and to split the domain into a finite number of segments, as was already done in the previous section (see also Fig.~\ref{fig:segmenting}), so that Eq.~\eqref{ProblemD1} has constant coefficients on each segment. We solve the equation system independently on each segment and connect the obtained solutions with transformed intersection conditions. Thus, we can finally formulate a global solution $\mathbf{n}$:
\begin{eqnarray}
\mathbf{n}(x) =
\left\{
\begin{aligned}
\mathbf{n}_1(x) & \quad & x_1 &\leq x \leq x_{2} \\
\mathbf{n}_2(x) & \quad & x_2 &< x \leq x_{3} \\
\vdots \quad & \quad & \quad & \quad \\
\mathbf{n}_N(x) & \quad & x_N &< x \leq x_{N+1}
\end{aligned}
\right.
\label{globalSol}
\end{eqnarray}
\subsection{Transformed problem description}
The problem description \eqref{Math_description} is converted to a system of first-order linear equations by a change of variable. This modification reduces the order by one, but also doubles the number of equations posed.
\begin{eqnarray*}
\mathbf{y} &:=& \begin{pmatrix} \mathbf{n} \\ \frac{d\mathbf{n}}{dx} \end{pmatrix} \\
\mathbf{M} &:=&
\left( \begin{array}{cc}
\mathbf{0}_{4 \times 4} & \mathbf{I}_{4} \\
-\mathbf{A}^{-1} \mathbf{B} & \mathbf{0}_{4 \times 4}
\end{array}\right) ,
\mathbf{b} := \begin{pmatrix}
\mathbf{0}_{4 \times 4} \\
-\mathbf{A}^{-1} \mathbf{c}
\end{pmatrix} \\
\mathbf{F_1} &:=& \begin{pmatrix}
-\frac{\mathbf{s}_{1} \mathbf{I}_{4}}{2} \quad \mathbf{A}
\end{pmatrix} ,
\mathbf{F_N} := \begin{pmatrix}
-\frac{\mathbf{s}_{N+1} \mathbf{I}_{4} }{2} \quad -\mathbf{A}
\end{pmatrix}
\end{eqnarray*}
\begin{eqnarray*}
\mathbf{H}_k &:=&
\begin{pmatrix}
\mathbf{I}_{4 } & \mathbf{0}_{4 \times 4} \\
\mathbf{0}_{4 \times 4} & -\mathbf{A}_{k}
\end{pmatrix} ,
\mathbf{S}_k :=
\begin{pmatrix}
\mathbf{0}_{4 \times 4} & \mathbf{0}_{4 \times 4} \\
\mathbf{s}_k \mathbf{I}_{4 } & \mathbf{0}_{4 \times 4}
\end{pmatrix} \textrm{ and } \\
\overline{\mathbf{g}}_k &:=&
\begin{pmatrix}
\overrightarrow{0}_4 \\
-\mathbf{g}_k
\end{pmatrix}
\end{eqnarray*}
The problem description reads now as follows: \\
\begin{eqnarray}
\frac{d\mathbf{y}}{d x}(x) = \mathbf{M} \mathbf{y}(x) + \mathbf{b} \label{balance1}
\end{eqnarray}
\begin{eqnarray}
\mathbf{H}_{k-1} \mathbf{y}_{k-1} (L) - (\mathbf{H}_k+\mathbf{S}_k) \mathbf{y}_k (0) &=& \overline{\mathbf{g}}_k \label{IC1}\\
\mathbf{F}_1 \mathbf{y}_1 (0) &=& -\mathbf{g}_1 \label{BC1}\\
\mathbf{F}_N \mathbf{y}_N(x_{N+1}) &=& \mathbf{g}_{N+1}, \label{EC1}
\end{eqnarray}
where $ \eqref{balance1} $ describes the balance equation, $ \eqref{IC1} $ the intermediate condition, $ \eqref{BC1} $ the initial condition and $ \eqref{EC1} $ the end condition.
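The block assembly of $\mathbf{M}$ and $\mathbf{b}$ above can be sketched as follows (a generic order-reduction helper in Python, not the PyVASCO implementation):

```python
import numpy as np

def first_order_form(A, B, c):
    """Rewrite 0 = A n'' + B n + c as y' = M y + b with y = (n, n')."""
    k = A.shape[0]
    A_inv = np.linalg.inv(A)
    M = np.block([[np.zeros((k, k)), np.eye(k)],
                  [-A_inv @ B,       np.zeros((k, k))]])
    b = np.concatenate([np.zeros(k), -A_inv @ c])
    return M, b
```

Note that for $\mathbf{B} = \mathbf{0}$ the lower-left block vanishes and $\mathbf{M}$ becomes nilpotent, hence not invertible.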
\subsection{Solution for segment $k$}
The existence and uniqueness of a solution $ \mathbf{y}(x) $ on segment $k$ to Eq.~\eqref{balance1} with an arbitrary constant $\mathbf{u}$ is guaranteed by the fundamental theorem of Picard--Lindel\"{o}f \cite{picard1890memoire, lindelof1894application, lindelof1900demonstration}. The index $ k $ is again discarded for readability.
For each segment $k$, the solution $ \mathbf{y}(x) $ can be stated as
\begin{eqnarray}
{\boxed{
\mathbf{y}(x) = \underbrace{e^{(x-x_k) \mathbf{M}}}_{\mathbf{P}(x)} \mathbf{u} + \underbrace{\int_{x_k}^{x}e^{(x-\tilde{x}) \mathbf{M}} \mathbf{b} \; d \tilde{x}}_{\mathbf{z}(x)}.
}}
\label{MultiSol}
\end{eqnarray}
$\textrm{ for } x_k \leq x \leq x_{k+1}.$ \\
The integration constant $\mathbf{u}$ needs to be determined from the boundary and intersection conditions, as demonstrated in subsection \ref{global_solution}.
$ \mathbf{P}(x) $ describes the fundamental system and $ \mathbf{z}(x) $ represents a particular solution of Eq.~\eqref{balance1}.
For an invertible matrix $\mathbf{M}$, the integral in $ \mathbf{z}(x) $ evaluates to
\begin{eqnarray}
\mathbf{z}(x) =(\mathbf{P}(x) - \mathbf{I}_{8}) \mathbf{M}^{-1}\mathbf{b}
\end{eqnarray}
The solution for a non-invertible matrix $ \mathbf{M} $, as in the case of a no-beam simulation where $ \mathbf{B} $ equals the zero matrix, is shown later in section~\ref{subsection_Singlegas}. \\
The validity of the solution \eqref{MultiSol} can be easily verified by differentiation.
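This verification by differentiation can also be sketched numerically (a generic check with a random, well-conditioned $\mathbf{M}$, not accelerator data): the closed form satisfies $\mathbf{y}' = \mathbf{M}\mathbf{y} + \mathbf{b}$ up to finite-difference error.

```python
import numpy as np
from scipy.linalg import expm

def local_solution(M, b, u, x, x_k):
    """y(x) = P(x) u + z(x), with P(x) = e^{(x - x_k) M} and
    z(x) = (P(x) - I) M^{-1} b (requires M invertible)."""
    P = expm((x - x_k) * M)
    z = (P - np.eye(M.shape[0])) @ np.linalg.solve(M, b)
    return P @ u + z

# central finite difference of y at x = 0.5 versus M y + b
rng = np.random.default_rng(1)
M = rng.normal(size=(8, 8)) + 8.0 * np.eye(8)   # shifted to stay invertible
b = rng.normal(size=8)
u = rng.normal(size=8)
h = 1e-6
y_mid = local_solution(M, b, u, 0.5, 0.0)
dy = (local_solution(M, b, u, 0.5 + h, 0.0)
      - local_solution(M, b, u, 0.5 - h, 0.0)) / (2.0 * h)
residual = np.max(np.abs(dy - (M @ y_mid + b)))
```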
\subsection{Global solution by implementing boundary and intersection conditions} \label{global_solution}
The local solutions $\mathbf{y}_k(x)$ are now connected via the boundary and intersection conditions to form the global solution $ \mathbf{y}(x) $ (similar to expression \eqref{globalSol}) and to determine the integration constants $\mathbf{u}_k$.
Some algebraic transformations of the boundary conditions are needed to proceed. We know that:
\begin{eqnarray}
\mathbf{y}(0) = \mathbf{P}(0) \cdot \mathbf{u} + \mathbf{z}(0) \label{H1} \\
\mathbf{y} (L) = \mathbf{P}(L) \cdot \mathbf{u} + \mathbf{z}(L) \label{H2}
\end{eqnarray}
We use $\eqref{H1}, \eqref{H2}$ to transform $\eqref{IC1}$ to the form
\begin{eqnarray}
\begin{pmatrix}
\mathbf{u}_{k} \\ 1
\end{pmatrix} = \mathbf{TM}(k-1,k)
\begin{pmatrix}
\mathbf{u}_{k-1} \\ 1
\end{pmatrix},\label{transform1}
\end{eqnarray}
where $\mathbf{TM} \in \mathbb{R}^{9\times 9} $ describes the transformation matrix that maps the unknown $\mathbf{u}$ from segment $k-1$ to segment $k$. $\mathbf{TM}$ has the following form for $2 \leq k \leq N$:
\\
$ \mathbf{TM}(k-1, k) = $
\begin{eqnarray*}
\left( \begin{array}{c|c}
[ ( \mathbf{H}_k+\mathbf{S}_k)\cdot \mathbf{P}_k(0) ]^{-1} \, \cdot \, & \quad -\overline{\mathbf{g}}_k+\mathbf{H}_{k-1} \mathbf{z}_{k-1}(L) \, - \quad \\
\quad \qquad \qquad \mathbf{H}_{k-1}\mathbf{P}_{k-1}(L) & \qquad \qquad (\mathbf{H}_k+\mathbf{S}_k) \cdot \mathbf{z}_k(0) \\
\\[-1.9ex]
\hline
\\[-2.2ex]
0 \dots \dots 0 & 1
\end{array}\right)
\end{eqnarray*}
This form can be deduced by the transformation of the intermediate conditions and further elementary algebraic calculations. For more details see Appendix \ref{Appendix}.
With this we obtain the following expression, which maps the integration constants of the first segment to the last segment.
\begin{eqnarray}
\begin{pmatrix}
\mathbf{u}_{N} \\1 \end{pmatrix} =
\underbrace{\prod \limits_{k=2}^{N} \mathbf{TM}(k-1, k)}_{=: \mathbf{SM}} \cdot
\begin{pmatrix}
\mathbf{u}_{1} \\1 \label{transform} \end{pmatrix}
\end{eqnarray}
The transformation product defines a new matrix $\mathbf{SM}$.
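The propagation through the augmented $9 \times 9$ transformation matrices can be sketched as follows (a generic helper; the affine last row $(0 \dots 0\;1)$ keeps the appended entry at one):

```python
import numpy as np

def propagate(tm_list, u_1):
    """Map the integration constants u_1 of the first segment through
    TM(1,2), TM(2,3), ... to the last segment, cf. the product SM."""
    v = np.append(u_1, 1.0)            # augmented vector (u_1, 1)
    for tm in tm_list:
        v = tm @ v
        assert np.isclose(v[-1], 1.0)  # affine row preserves the 1
    return v[:-1]                      # u_N
```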
In the same way we use $\eqref{H1}, \eqref{H2}$ to modify $\eqref{BC1}$ and $\eqref{EC1}$: \\
Let $ \overline{\mathbf{F}_1}, \overline{\mathbf{F}_N} \in\mathbb{R}^{4\times 9}$ and $\overline{\mathbf{u}_1}, \overline{\mathbf{u}_N} \in\mathbb{R}^{9}$, then we can write:
\begin{eqnarray*}
\overbrace{\left( \begin{array}{c|c}
\quad \mathbf{F}_1 \mathbf{P}_1(0) \quad & \quad \mathbf{F}_1 \mathbf{z}_1(0) + \mathbf{g}_1 \qquad
\end{array}\right)}^{=: \overline{\mathbf{F}_1}}
\cdot \overbrace{\begin{pmatrix}
\mathbf{u}_{1} \\ 1
\end{pmatrix}}^{=: \overline{\mathbf{u}_1}}
&=& \begin{pmatrix} 0\\ 0 \\ 0\\ 0 \end{pmatrix} \\
\underbrace{\left( \begin{array}{c|c}
\mathbf{F}_N \mathbf{P}_N(L) \quad & \quad \mathbf{F}_N \mathbf{z}_N(L) - \mathbf{g}_{N+1}
\end{array}\right)}_{=: \overline{\mathbf{F}_N}}
\cdot \underbrace{\begin{pmatrix}
\mathbf{u}_{N} \\ 1
\end{pmatrix}}_{=: \overline{\mathbf{u}_N}}
&=& \begin{pmatrix} 0\\ 0 \\ 0\\ 0 \end{pmatrix}
\end{eqnarray*}
Note that the additional vector entry of $\overline{\mathbf{u}_1}$ is required in order to also describe constant algebraic transformations in the boundary conditions.
We now combine the boundary conditions into the final system of equations:
\begin{eqnarray}
\underbrace{\left( \begin{array}{c|c}
\quad \mathbf{SM} \quad & \quad -\mathbf{I}_{9} \quad \\ \hline
\overline{\mathbf{F}_1} & \mathbf{0}_{(4 \times 9)} \\ \hline
\mathbf{0}_{(4 \times 9)} & \overline{\mathbf{F}_N} \\ \hline
0 \dots 0 \, 1 & 0 \dots 0 -1
\end{array}\right)}_{\in\mathbb{R}^{18 \times 18}} \cdot
\underbrace{\begin{pmatrix}
\overline{\mathbf{u}_{1}} \\ \overline{\mathbf{u}_{N}}
\end{pmatrix}}_{\in\mathbb{R}^{18 \times 1}} =
\underbrace{\begin{pmatrix}
0 \\ \vdots \\ \vdots \\ 0 \\0
\end{pmatrix}}_{\in\mathbb{R}^{18 \times 1}} \label{final}
\end{eqnarray}
Solving Eq.~$\eqref{final}$ with a Gauss-Jordan elimination algorithm \cite{saad2003iterative}, using the transformation-identity of Eq.~$\eqref{transform1}$ and evaluating $\eqref{MultiSol}$, gives us the solution $\mathbf{y}(x)$. The backward transformation of $\mathbf{y}$ defines the particle density $\mathbf{n}$ at each axial point of the simulation domain.
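As a sketch of an alternative to the Gauss-Jordan elimination step (our own choice, using a null-space computation), one can exploit that the right-hand side of \eqref{final} is zero: the stacked unknown is a null vector of the $18\times 18$ matrix, normalized so that the appended entries equal one.

```python
import numpy as np
from scipy.linalg import null_space

def solve_boundary_system(K):
    """Solve K w = 0 for w = (u1_bar, uN_bar) in R^18 and normalize
    the appended entry u1_bar[8] to one."""
    ns = null_space(K)
    if ns.shape[1] != 1:
        raise ValueError("boundary-value problem not uniquely solvable")
    w = ns[:, 0] / ns[8, 0]
    return w[:9], w[9:]
```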
\subsection{Special case: Single-gas model} \label{subsection_Singlegas}
The equation system \eqref{ProblemD1} becomes decoupled if the gas species do not interact with each other. This is the case when the ion-induced desorption matrix $\mathbf{H}_{ion}$ is diagonal, or if $\mathbf{H}_{ion}$ is the zero matrix in the case of no beam.
Note that a diagonal approximation of $\mathbf{H}_{ion}$ can be obtained by the following transformation of its entries:
\begin{eqnarray}
\left( \mathbf{\tilde H}_{\textrm{single}}^{\textrm{ion}} \right)_{kk} = \frac{\sum\limits_{l}\mathbf{H}_{kl}^{\textrm{ion}}\cdot \sigma_l}{\sum\limits_l \sigma_l}
\end{eqnarray}
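This weighted collapse can be sketched as a matrix-vector product (the numerical values are illustrative; the matrix chosen here matches the one used in the sensitivity study):

```python
import numpy as np

def collapse_to_single_gas(H_ion, sigma):
    """Diagonal entries of the single-gas approximation:
    sigma-weighted row average of the full desorption matrix."""
    sigma = np.asarray(sigma, dtype=float)
    return (H_ion @ sigma) / sigma.sum()

# with equal cross sections every row is averaged uniformly
H = np.full((4, 4), 0.1) + 0.44 * np.eye(4)   # 0.54 on the diagonal
yields = collapse_to_single_gas(H, np.ones(4))
```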
We solve the equation system individually for each gas species using an exponential ansatz $n(x) = \exp(\lambda x)$ for a $ \lambda \in \mathbb{R}$. This gives us the following real solutions $n_i(x) $ of Eq.~\eqref{ProblemD1} for gas species $i$: $n_i(x) = $
\begin{eqnarray*}
\left\{
\begin{aligned}
&C_1 \cdot \exp{( \sqrt{-\frac{b}{a}} x)} + C_2 \cdot \exp{(- \sqrt{-\frac{b}{a}} x)} - \frac{c}{b} & \text{for } b < 0 \\
&C_1 \cdot \cos{(\sqrt{\frac{b}{a}} x)} + C_2 \cdot \sin{(\sqrt{\frac{b}{a}} x)} - \frac{c}{b} & \text{for } b > 0\\
&C_1 + C_2 x - \frac{c}{2a} x^2 & \text{for } b= 0\\
\end{aligned}
\right.
\end{eqnarray*}
$ a,b $ and $ c $ are in this case the one-dimensional coefficients of gas species $i$ and $ n_i $ is its one-dimensional density function.
The integration constants $ C_1 $ and $ C_2 $ can be easily obtained from the boundary and intersection conditions, following the simplified solution concept of the previous subsection.
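The three cases can be sketched and checked against the defining equation $0 = a\,n'' + b\,n + c$ (our own helper; the residual is evaluated by finite differences):

```python
import numpy as np

def single_gas_density(a, b, c, C1, C2, x):
    """Real solution n(x) of 0 = a n'' + b n + c for one gas species
    (a > 0), distinguishing the sign of b."""
    x = np.asarray(x, dtype=float)
    if b < 0:
        lam = np.sqrt(-b / a)
        return C1 * np.exp(lam * x) + C2 * np.exp(-lam * x) - c / b
    if b > 0:
        om = np.sqrt(b / a)
        return C1 * np.cos(om * x) + C2 * np.sin(om * x) - c / b
    return C1 + C2 * x - c / (2.0 * a) * x ** 2

def balance_residual(a, b, c, C1, C2, x, h=1e-4):
    """Finite-difference evaluation of a n'' + b n + c at x."""
    n = lambda t: single_gas_density(a, b, c, C1, C2, t)
    n_pp = (n(x + h) - 2.0 * n(x) + n(x - h)) / h ** 2
    return a * n_pp + b * n(x) + c
```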
\begingroup
\squeezetable
\begin{table}[h]
\caption{\label{para_math} Model parameters for mathematical description.}
\begin{ruledtabular}
\begin{tabular}{llll}
\textbf{Symbol} &
\textbf{Dim} &
\textbf{Description} \\
\hline
$ \mathbf{y} $ & $\mathbb{R}^{8}$ & vector-valued function describing the RGP density\\
&& and its derivative\\
$ \mathbf{I}_{4} $ & $\mathbb{R}^{4 \times 4}$ & Identity matrix \\
$ \mathbf{0}_{4 \times 4} $ & $\mathbb{R}^{4 \times 4}$ & Zero matrix \\
$\overrightarrow{0}_4 $ & $\mathbb{R}^{4 }$ & Zero vector \\
$ x $ & $\mathbb{R}$ & Spatial coordinate along beam line\\
$ x_k $ & $\mathbb{R}$ & Intersection point of segment $k-1$ and $k$\\
$ \mathbf{A}, \mathbf{B} $ & $\mathbb{R}^{4 \times 4}$ & Coefficients of balance equation \\
$ \mathbf{c} $ & $\mathbb{R}^{4 }$ & Coefficients of balance equation \\
$ \mathbf{M}$ & $\mathbb{R}^{8 \times 8}$ & Coefficients of transformed balance equation \\
$ \mathbf{b} $ & $\mathbb{R}^{8 }$ & Coefficients of transformed balance equation \\
$ \mathbf{P} $ & $\mathbb{R}^{8 \times 8}$ & Fundamental system of transformed balance equation \\
$ \mathbf{z} $ & $\mathbb{R}^{8 }$ & Particular solution of transformed balance equation \\
$ \mathbf{u} $ & $\mathbb{R}^{8 }$ & Integration constants of balance equation \\
$ \overline{\mathbf{u}} $ & $\mathbb{R}^{8 }$ & Integration constants of balance equation with\\
&& an artificial extra vector entry at the end. \\
$ \mathbf{F_1}, \mathbf{F_N} $ & $\mathbb{R}^{4 \times 8}$ & Coefficients of boundary conditions \\
$ \overline{\mathbf{F_1}}, \overline{\mathbf{F_N}} $ & $\mathbb{R}^{4 \times 9}$ & Coefficients of homogenized boundary conditions \\
$ \mathbf{H}, \mathbf{S} $ & $\mathbb{R}^{8 \times 8}$ & Coefficients of intersection conditions \\
$ \mathbf{H}_{\textrm{ion}},\mathbf{\tilde H}_{\textrm{ion}} $ &$\mathbb{R}^{4 \times 4}$& Ion-induced desorption matrix wrt multi-\\
&& and single-gas framework\\
$ \mathbf{g} $ & $\mathbb{R}^{4}$ & Local lumped gas source (e.g. leak) \\
$ \mathbf{s}_k $ & $\mathbb{R}^{4}$ & Pumping speed of lumped pump located at the beginning\\
&& of segment $k$ \\
$ \overline{\mathbf{g}} $ & $\mathbb{R}^{8}$ & Coefficients of intersection conditions \\
$ \mathbf{TM} $ & $\mathbb{R}^{9 \times 9}$ & Transformation matrix: maps coefficients \\
&&from segment (k-1) to k \\
$ \mathbf{SM} $ & $\mathbb{R}^{9 \times 9}$ & Transformation matrix: maps coefficients from \\
&&the first segment to the last one. \\
$ n(x) $ & $\mathbb{R}$ & Particle density for one gas specie\\
&& (Single-gas framework) \\
$ a, b, c $ & $\mathbb{R}$ & Coefficients of balance equation for one gas specie \\
&&(Single-gas framework), e.g. $a= A_{11}$ \\
$ C_1, C_2 $ & $\mathbb{R}$ & Integration constants of balance equation\\
&& (Single-gas framework) \\
\end{tabular}
\end{ruledtabular}
\end{table}
\endgroup
\section{Validation of the model by benchmark examples}
The model has been thoroughly tested in the framework of benchmark examples and real-case scenarios for the Large Hadron Collider (LHC) at CERN.
We give five representative examples out of the many used as benchmarks for this study.
The analytical model presented in this paper is referred to as ``PyVASCO'' in the following section.
\subsection{Crosscheck with Molflow+}
Molflow+ uses a stochastic approach to simulate the RGPs with Test-Particle Monte Carlo methods for one gas species at a time. Molflow+ traces the trajectory of virtual particles from the gas source to the pumping location and derives from this the RGP density in the vacuum chamber. The advantage of Molflow+ is that it can consider complex geometries. PyVASCO, on the other hand, can consider multiple gas species at a time as well as beam-induced effects. As a side note, Molflow+ can also consider photon-induced desorption when coupled with the closely related program SynRad+ \cite{ady2016monte}.
To meet the assumptions of both models, we choose a simple cylindrical geometry and set the ionization matrix to zero to avoid intermolecular dependencies. We explicitly tested variations of outgassing rates $\mathbf{q}$, sticking coefficients $\alpha$, conductances and diameters. In these benchmark examples both models show a very good agreement. For readability, all figures and tables are inserted at the end of the subsections.
The geometry for the first three examples is visualized in Fig.~\ref{fig:Ex1}. It is one single beam pipe consisting of two materials M$_1$ and M$_2$ defined in Tables \ref{material_1}--\ref{material_3}.
\subsubsection{Example - Variation of mass} \label{example1}
The first example shows the influence of the particle's mass on the density distribution. Explicitly, the conductance depends on the mass of the RGP, hence different gas species provide different conductances. Table~\ref{table_mass} lists the molecular masses of $ \ce{H2}, \ce{CH4}, \ce{CO} $ and $ \ce{CO2} $. Fig.~\ref{fig:Plot1} presents the simulation output for the distribution of the particle density, assuming the same outgassing $ \mathbf{q} $ and sticking $ \alpha $ properties for each gas species. The results confirm the well-known fact that the higher the molecular mass of the species, the lower the conductance and the higher the RGP density.
\subsubsection{Example - Variation of outgassing rate} \label{example2}
This and the following two examples simulate only the density distribution of $\ce{H2}$ particles for the geometry of Fig.~\ref{fig:Ex1}. Example~\ref{example2} focuses on the effect of different outgassing values $\widetilde{\mathbf{q}}$ on the simulations. The results are presented in Fig.~\ref{fig:Plot2} and clearly show, as expected, that the density scales linearly with the outgassing coefficient $\widetilde{\mathbf{q}}$.
\subsubsection{Example - Variation of sticking coefficient} \label{example3}
Example~\ref{example3} determines the effect of different sticking factors on the density profile. Fig.~\ref{fig:Plot3} shows that its effect cannot be deduced as easily as that of the outgassing rate in Example~\ref{example2}. The reason is that the amount of particles removed from the system due to the sticking coefficient depends on the prevailing density, see the balance Eq.~\eqref{balanceEQ} of the previous sections.
\subsubsection{Example - Variation of diameter} \label{example4}
Example~\ref{example4} is applied to the geometry of Fig.~\ref{fig:ex2} with materials M$_3$ and M$_4$ listed in Table~\ref{material_4} to test the effect of different geometries. The vacuum chamber is described by four segments with a diameter increasing from 100 to 400~mm.
The results in Fig.~\ref{fig:Plot4} show a good match except for the last case, which assumes a very high sticking factor $\alpha \geq 0.1$ and zero outgassing.
We want to note here that this is a hypothetical test case and there are hardly any domains in the LHC where this configuration could be found.
The underestimation of the density profile is due to the beaming effect: particles from a gas source may propagate along the vacuum chamber direction and thus do not experience the sticking coefficient at all, increasing the density in a domain several meters further away. The piecewise solution implemented in our simulation method cannot capture such an effect \cite{bonucci2007transmission}.
A solution to this problem is to set an additional very small outgassing rate $ \widetilde{\mathbf{q}} = 10^{-14} \frac{\textrm{mbar} \cdot \textrm{l}}{\textrm{s} \cdot \textrm{cm}^2} $ in the corresponding domain; here it is set in the first segment, which has a diameter of 100~mm. The result is plotted in Fig.~\ref{fig:Plot5}.
\subsubsection{Observations - PyVASCO vs. Molflow+}
All benchmark examples show a good match between the analytic code PyVASCO and the Monte-Carlo code Molflow+. The fact that two models with different approaches give the same result increases the credibility of both simulation codes.
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{3IdaAichingerPRAB.pdf}
\caption{\label{fig:Ex1} Geometry for Example~\ref{example1}-\ref{example3}: One beam pipe with two materials $ \textrm{M}_1 $ and $ \textrm{M}_2 $}
\centering
\includegraphics[width=0.4\textwidth]{4IdaAichingerPRAB.pdf}
\caption{\label{fig:ex2} Geometry for Example~\ref{example4} with materials $ \textrm{M}_3 $ and $ \textrm{M}_4 $}
\end{figure}
\begingroup
\squeezetable
\begin{table}[!h]
\caption{Mass of the four main gas species in a UHV system \cite{nistgov}.}
\begin{ruledtabular}
\begin{tabular}{l|cccc}
\quad &
\ce{H2} &
\ce{CH4} &
\ce{CO}&
\ce{CO2} \\
\hline
\textbf{mass}[g/mol] & 2 & 16 & 28 & 44 \\
\end{tabular}
\end{ruledtabular}
\label{table_mass}
\end{table}
\endgroup
\begingroup
\squeezetable
\begin{table}[!h]
\caption{Material $ \textrm{M}_1 $ and $ \textrm{M}_2 $ specifications for Example~\ref{example1}.}
\begin{ruledtabular}
\begin{tabular}{l|ll}
\quad &
$ \alpha $ &
$ \widetilde{\mathbf{q}} \times 10^{-12} [\frac{\textrm{mbar}\cdot \textrm{l}}{\textrm{s}\cdot \textrm{cm}^2}] $ \\
\hline
$ \textrm{M}_1 $ & $ 8 \cdot 10^{-3} $ & $3.97 $ \\
$ \textrm{M}_2 $ & $ 10^{-12} $ & $39.8$ \\
\end{tabular}
\end{ruledtabular}
\label{material_1}
\end{table}
\endgroup
\begingroup
\squeezetable
\begin{table}[!h]
\caption{Material $ \textrm{M}_1 $ and $ \textrm{M}_2 $ specifications for Example~\ref{example2} (only $ \ce{H2} $).}
\begin{ruledtabular}
\begin{tabular}{l|lllll}
\quad &
$ \alpha $ &
$ \Big[ \widetilde{\mathbf{q}}_1 $ & $ \widetilde{\mathbf{q}}_2 $ & $ \widetilde{\mathbf{q}}_3 $ & $ \widetilde{\mathbf{q}}_4 \Big] \times 10^{-14} [\frac{\textrm{mbar}\cdot \textrm{l}}{\textrm{s}\cdot \textrm{cm}^2}] $ \\
\hline
$ \textrm{M}_1 $ & $ 8 \cdot 10^{-3} $ & $1000 $ &$ 100 $ & $10 $ & $1 $ \\
$ \textrm{M}_2 $ & $ 10^{-12} $ & $ 10000 $ & $ 1000 $ &$ 100 $ & $ 10 $ \\
\end{tabular}
\end{ruledtabular}
\label{material_2}
\end{table}
\endgroup
\begingroup
\squeezetable
\begin{table}[!h]
\caption{Material $ \textrm{M}_1 $ and $ \textrm{M}_2 $ specifications for Example~\ref{example3} (only $ \ce{H2} $).}
\begin{ruledtabular}
\begin{tabular}{l|llll}
\quad &
$ \alpha_1 $ & $ \alpha_2 $ & $ \alpha_3 $ & $ \widetilde{\mathbf{q}}\times 10^{-14}$ \\
\hline
$ \textrm{M}_1 $ & $ 10^{-5} $ & $10^{-4} $ &$ 10^{-3} $ & $8 $ \\
$ \textrm{M}_2 $ & $ 10^{-13} $ & $ 10^{-12} $ & $ 10^{-11}$ & $ 800 $ \\
\end{tabular}
\end{ruledtabular}
\label{material_3}
\end{table}
\endgroup
\begingroup
\squeezetable
\begin{table}[!h]
\caption{Material $ \textrm{M}_3 $ and $ \textrm{M}_4 $ specifications for Example~\ref{example4} (only $ \ce{H2} $) .}
\begin{ruledtabular}
\begin{tabular}{l|llll}
\quad &
$ \alpha_1 $ & $ \alpha_2 $& $ \alpha_3 $ & $ \textrm{Q}$ \\
\hline
$ \textrm{M}_3 $ & $ 10^{-3} $ & $10^{-2} $ &$ 10^{-1} $ & $10^{-10} $ \\
$ \textrm{M}_4 $ & $ 10^{-3} $ & $ 10^{-2} $ & $ 10^{-1} $ & $ 0 $ \\
\end{tabular}
\end{ruledtabular}
\label{material_4}
\end{table}
\endgroup
\begin{figure}[!h]
\includegraphics[width=0.4\textwidth]{5IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot1} Example \ref{example1}: Conductance variation in comparison with PyVASCO~(solid line) and Molflow+~(dashed line) for the geometry of Fig.~\ref{fig:Ex1}.}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=0.4\textwidth]{6IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot2} Example \ref{example2}: Outgassing variation for $ \ce{H2} $ in comparison with PyVASCO~(solid line) and Molflow+~(dashed line) for the geometry of Fig.~\ref{fig:Ex1}.}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=0.4\textwidth]{7IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot3} Example \ref{example3}: Sticking variation for $ \ce{H2} $ in comparison with PyVASCO~(solid line) and Molflow+(dashed line) for the geometry of Fig.~\ref{fig:Ex1}.}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=0.4\textwidth]{8IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot4} Example \ref{example4}: Sticking variation for $ \ce{H2} $ in comparison with PyVASCO~(solid line) and Molflow+ (dashed line) for the geometry of Fig.~\ref{fig:ex2}.}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=0.4\textwidth]{9IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot5} Example \ref{example4}: A high sticking factor makes beaming effect visible; comparison PyVASCO~(solid line) and Molflow+ (dashed line) with corrected outgassing $\widetilde{\mathbf{q}}$.}
\end{figure}
\newpage
\subsection{Sensitivity of the ion-induced desorption}
In this subsection we study the sensitivity of the simulation output to the ion-induced desorption phenomenon (see subsection \ref{physical_model}.\ref{flow_into}.\ref{flowinto2}). A high beam current and a fully occupied matrix $ \mathbf{H}_{\textrm{ion}} $ imply that the different gas species influence each other, so that a numerical instability in one gas species can propagate to the others. The emphasis therefore lies on verifying that the influence of this phenomenon remains stable.
We therefore study the variation in the output when using a multi-gas framework ($\mathbf{H}_{\textrm{multi}} $), a single-gas framework ($\mathbf{H}_{\textrm{single}} $) and a zero-beam framework ($\mathbf{H}_{\textrm{zero}} $) with:
\begin{eqnarray*}
\mathbf{H}_{\textrm{multi}} &=& \begin{pmatrix}
0.54 & 0.1 & 0.1 & 0.1 \\
0.1 & 0.54 & 0.1 & 0.1 \\
0.1 & 0.1 & 0.54 & 0.1 \\
0.1 & 0.1 & 0.1 & 0.54 \\
\end{pmatrix}, \\
\mathbf{H}_{\textrm{single}} &=& \textrm{diag}(\mathbf{H}_{\textrm{multi}}), \quad
\mathbf{H}_{\textrm{zero}} = 0 \cdot \mathbf{H}_{\textrm{multi}}
\end{eqnarray*}
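As a minimal sketch of how these three matrices relate (using NumPy; the variable names are ours and not part of PyVASCO):

```python
import numpy as np

# Fully occupied ion-induced desorption matrix: the diagonal holds the
# self-desorption yields, the off-diagonal entries the cross-desorption yields.
H_multi = np.full((4, 4), 0.1)
np.fill_diagonal(H_multi, 0.54)

# Single-gas framework: keep only the diagonal, i.e. no cross-desorption.
H_single = np.diag(np.diag(H_multi))

# Zero-beam framework: ion-induced desorption switched off entirely.
H_zero = 0.0 * H_multi
```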
Fig.~\ref{fig:Geometry3} shows the 6~m long cylindrical vacuum chamber that we use for this analysis and Table~\ref{pumps} lists the pumping speeds and outgassing rates. The material has no sticking in this case, in order to better illustrate the ion-induced desorption dynamics.
We compare all three frameworks in Fig.~\ref{fig:Plot7} in the presence of a weak beam ($I=0.01$A). The matrix entries of $\mathbf{H}_{\textrm{ion}}$ are therefore all close to zero and we expect, as Fig.~\ref{fig:Plot7} confirms, essentially identical results for all three frameworks.\\
In Fig.~\ref{fig:Plot8} we see how a higher beam current of $I = 10$A influences the output for the single-gas framework; the result is as expected. A higher current implies more collisions of beam particles with RGP, which consequently get ionized. The ions impinge on the wall and hence increase the ion-induced desorption. The heavier $\ce{CO2}$ molecules present a higher ionization cross-section than, for example, $\ce{H2}$. The density increase for a higher beam current is therefore stronger for $\ce{CO2}$ than for $\ce{H2}$.\\
Fig.~\ref{fig:Plot9} reflects the difference between the multi-gas framework and the single-gas framework at a high beam current ($I = 10$A) and hence analyses in particular how the off-diagonal entries influence the result. Firstly, we observe different values in the density profiles. The additional cross-desorption probability from the off-diagonal entries causes this reasonably small increase. Secondly, the shape of the profile remains the same, as we expect from a stable model when the input parameters are changed only by a small quantity. Thirdly, $\ce{H2}$ shows the biggest increase in its density. The reason is that the ion-induced desorption is proportional to the prevailing density $\mathbf{n}$. This means that the higher density of $\ce{CO2}$ has a stronger influence on the density of $\ce{H2}$ than vice versa.\\
In conclusion, the tests of the model's sensitivity to the coupled ion-induced desorption term reflect a stable model and give no indication of any instability.
\begin{figure}[!t]
\centering
\includegraphics[width=0.4\textwidth]{10IdaAichingerPRAB.pdf}
\caption{Beam-pipe with three pumps to test the ion-induced desorption sensitivity.}
\label{fig:Geometry3}
\end{figure}
\begingroup
\squeezetable
\begin{table}[!t]
\caption{\label{pumps} Parameters for pumps and material properties of Fig.~\ref{fig:Geometry3}.}
\begin{ruledtabular}
\begin{tabular}{l|cccc}
\quad &
\ce{H2} &
\ce{CH4} &
\ce{CO}&
\ce{CO2} \\
\hline
$\mathbf{s}_1$[l/s] &
1100 &
1100 &
1100 &
1100 \\
$\mathbf{s}_2$[l/s] &
550 &
550 &
550 &
550 \\
$\widetilde{\mathbf{q}} [\frac{\textrm{mbar} \cdot \textrm{l}}{\textrm{s} \cdot \textrm{cm}^2}] \times 10^{-15} $ &
1 &
1 &
1 &
1\\
\textbf{$ \alpha $} &
0 &
0 &
0 &
0\\
\textbf{$ \sigma \times 10^{-23} $} &
4.45 &
31.8 &
27.5 &
42.9 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\endgroup
\begin{figure}[!t]
\includegraphics[width=0.4\textwidth]{11IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot7} Comparison of zero-beam (solid line), single-gas (dashed line) and multi-gas (dotted line) frameworks with current $ I= 0.01$A.}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.4\textwidth]{12IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot8} Difference between zero beam (solid line) and a high beam current $ I = 10$A (dashed line) for the single-gas framework.}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=0.4\textwidth]{13IdaAichingerPRAB.pdf}
\caption{\label{fig:Plot9} Difference between single-gas with $ \mathbf{H}_{\textrm{single}} $ (solid line) and multi-gas with $ \mathbf{H}_{\textrm{multi}} $ (dashed line) simulations, both at $ I = 10$A.}
\end{figure}
\section{Validation of the model in comparison to LHC gauge readings}
The last step to validate the model lies in the comparison of the simulation results with measured data from gauges in the LHC. The LHC contains eight long straight sections (LSS), each consisting of two parts of about 265~meters with a point of interest (detector) in the middle. The CMS experiment is installed in one LSS and we compare the pressure gauge readings in this area for a beam of 6.5~TeV with our simulations.
The simulation contains a combination of many different density regimes, driven by the 14 varying inputs of geometries, materials and beam-induced effects. We divided the domain into 464 segments on which we assumed constant values for the parameters. The segments are connected with the intersection conditions \eqref{IC} described previously.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{14IdaAichingerPRAB.pdf}
\caption{Sketch of the CMS experimental area:
1:~Arc, 2:~Dispersion suppressor, 3:~Long Straight Section, 4:~CMS experimental area, 5:~Inner triplet, 6:~Recombination chamber }
\label{fig:CMS_sketch}
\end{figure}
The collision point is at the centre of the simulation domain. This area is designed to provide the lowest possible gas density. The components of the vacuum chamber on both sides of the collision point are roughly symmetric: the inner triplets, with high magnetic gradients to focus the beam for the collisions; the normal-conducting separation magnets, which split the vacuum chamber into two separate parts, one for each beam; and the so-called dispersion suppressor, which is then connected to the arcs (see Fig.~\ref{fig:CMS_sketch}).
\subsection{Geometry and material assumption}
The LHC database stores all the parameter specifications of the vacuum chamber \cite{layoutdatabase, cdd, bruning2004lhc}. For our calculations, we have extracted from the database the diameter, length and material specifications, which affect the conductance, thermal outgassing and sticking probability.
\subsubsection{Arc}
The bending arcs consist of a repetitive structure of three consecutive 15~m long chambers for dipole magnets and of shorter 6~m long chambers for focusing magnets and beam instrumentation measurements. The vacuum chambers, or so-called cold bores, are joined by short stainless-steel bellows.
The diameter of the cold bores is small, about 40--60~mm, and additionally a racetrack-shaped beam screen is implemented inside the chamber \cite{cruikshank1997mechanical}. To avoid multiple reflections of the photons on the wall, a so-called sawtooth surface is indented on the internal side of the beam screen, where the primary synchrotron radiation photons hit. The sawtooth profile is characterized by a reduced photon reflectivity, which allows localisation of the molecular desorption and of the photoelectrons (see e-cloud effect in subsection~\ref{flowinto3}). In the strong magnetic regions (as in the arcs) the cold bores of the superconducting magnets are cooled by liquid helium at 1.9--4.5~K. They act as distributed cryogenic pumps, via the pumping slots on the beam screen.
\subsubsection{Transition area}
The vacuum chambers of the straight sections are generally kept at room temperature. All transitions from cryogenic to room temperature chambers happen within half a meter containing usually a valve, a vacuum module with beam instrumentation equipment and flanges at their extremities. These chambers are connected with bellows that compensate thermal expansions due to the large temperature gradients during the commissioning of the system.
\subsubsection{Straight Section}
The vacuum chambers in the straight sections are at room temperature and are principally made out of copper with an additional NEG coating, in order to reduce photon-induced desorption and the generation of secondary electrons. In between the NEG-coated parts, short sections with higher outgassing can be found, e.g. vacuum modules with beam instrumentation or beam collimation equipment. Collimators are often located before sensitive instruments, close to the detectors and at the end of magnet assemblies to intercept stray particles.
The most critical area, the vacuum chamber in the centre of CMS, is made out of beryllium. This material has a higher radiation length than copper to provide a higher transparency to, and lower absorption of, the exotic particles resulting from the beams' collisions.
Generally, the diameter in the straight section varies from 80 to 230mm. It reaches its maximum aperture in the recombination chamber, where two beam lines combine into one common chamber.
\subsubsection{Material properties}
The realistic outgassing rates and desorption coefficients for the materials are estimated on the basis of laboratory results measured by the BVO section of the vacuum group \cite{Giuseppe} or by reference values from the literature \cite{chiggiato2014vacuum, baglin2002synchrotron, baglincoupled, bruning2004lhc, nistgov}. However, material treatments for ultra-high vacuum, like vacuum firing, bake-out, activation and beam-conditioning, are special treatments, and therefore the parameters may vary in time and deviate from standard literature values.
\subsubsection{Beam induced parameters}
During operation, beam-induced effects are the predominant factor influencing the RGP density, increasing it by orders of magnitude. Fig.~\ref{fig:timeserieschartimagetime25092809} shows the correlation between the dynamics of the beam energy and the readings of one specific pressure gauge.
\begin{figure}[b]
\includegraphics[width=0.4\textwidth]{15IdaAichingerPRAB.pdf}
\caption{\label{fig:timeserieschartimagetime25092809} Time dynamics of gauge VGI.220.1R5.X (blue curve) in the common beam chamber close to CMS from 25.09.2016 to 28.09.2016, derived from \cite{timber} and compared to the time-dynamics of the beam energy (red curve).}
\end{figure}
Photon- and electron-induced desorption take place especially in the magnetic areas of the LHC, which represent about $ 90 \% $ of the total accelerator length.
Formula~\eqref{photonflux} implies a photon flux generated by the dipoles in the arcs of
\begin{eqnarray}
\dot{\Gamma}_{ph} = \frac{1.5414 \cdot 10^{21}}{2 \rho \pi} = 8.77 \cdot 10^{16} \textrm{Photons/(m s)}
\end{eqnarray}
where $\rho = 2795.84$~m, $ I = 0.5 $~A and $ E = 6.5 $~TeV.
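The arithmetic of this estimate can be checked directly; a short sketch, where the numerator $1.5414 \cdot 10^{21}$ photons/s is the total dipole flux stated above, distributed over the bending circumference $2\pi\rho$:

```python
import math

rho = 2795.84           # bending radius [m]
total_flux = 1.5414e21  # photons/s over the full bending circumference

# Photon flux per metre of dipole chamber.
flux_per_metre = total_flux / (2 * math.pi * rho)
print(f"{flux_per_metre:.2e} photons/(m s)")  # ~8.77e16
```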
Additionally, we consider that synchrotron radiation emitted by the beam travels tangentially to the orbit, which implies a separation between the location where the photons are generated and where they impinge on the chamber wall. This distance can be up to 20 meters and depends on the geometry of the beam chamber (see Fig.~\ref{fig:SR_Travel}).
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{16IdaAichingerPRAB.pdf}
\caption{Distance s between the generation of a photon and its impingement on the wall in the relativistic case.}
\label{fig:SR_Travel}
\end{figure}
This implies two consequences for the input parameters: Firstly, there may be photons impinging on the following chambers downstream of the radiation source points even if there is no magnetic field present. Thus, we must take into account a longitudinal shift of the impinging photon flux at the appropriate spot in our simulations. Secondly, a change in the geometry may cause a high spike of impinging photons when a chamber is smaller than the previous one.
Beam collimators, for example, provide a small variable chamber aperture to remove the beam particles in the halo and consequently are subjected to a high photon bombardment \cite{valentino2013beam}. Additionally, synchrotron radiation is also produced by an off-axis beam in quadrupoles and orbit correctors. However, the magnetic strength of these magnets is much lower than that of the bending magnets in the arcs. The critical energy $E_c$ for dipoles in the LHC can be estimated at 40~eV, whereas the strong focusing quadrupoles before the experiments provide a maximum of 9~eV. \\
Summarizing, the distribution of the photon flux in the arcs is characterized by a more or less continuous distribution, whereas in the straight sections it consists of distinct peaks.
The electron cloud depends strongly on the chamber diameter, the material, the bunch spacing, and the beam energy and intensity.
Moreover, two beams are present in the common chambers of the LHC, which leads to a significant beam-induced density increase, especially in the quadrupole triplets close to the detector.
In addition, the study of the evolution of the heat dissipation $ Q $ on the vacuum chambers may also give a hint about the values of the beam-induced parameters; these, too, are logged in \cite{timber}. It holds that:
\begin{eqnarray}
Q_{\textrm{heat}} = Q_{\textrm{SR}} + Q_{\textrm{IC}} + Q_{\textrm{EC}},
\end{eqnarray}
where SR refers to synchrotron radiation, IC to image current and EC to electron cloud.
\subsection{LHC gauges}
There are a total of 98 gauges installed in the area of CMS that monitor the vacuum dynamics. The final goal is now to compare the simulation output with the readings of the installed gauges. Most of the LHC gauges are inverted-magnetron Penning gauges (IKR 070, Pfeiffer), which mainly measure down to $10^{-11}$mbar. However, for beam-lifetime and radiation-background reasons, the pressure in some parts of the LHC is lower than this value, and the gauges therefore serve mainly as an alarm system that indicates potential vacuum degradation. This explains the rather high gauge reading in the NEG-coated recombination chamber in Fig.~\ref{fig:cmspartc0differencewhitetotal}, located about 100~m to the left and right of the CMS interaction point. Bayard-Alpert ionization gauges (SVT 305) are also employed. These measure down to values of the order of $10^{-12}$mbar \cite{pigny2015measurements}.
For our discussion here, we extracted the readings of four very similar runs from mid-August to the end of October 2016, with fill numbers 5211, 5338, 5416 and 5451, from the LHC logging database \cite{timber}. In Fig.~\ref{fig:13073} the time evolution of the gauge VGPB.242.7L5.B is plotted for Fills 5211 and 5338. This plot illustrates that fills with similar parameters provide similar results. The slight difference is due to a higher monitored emittance for the beam of Fill 5211.
The monitored pressure value of the gauges is nitrogen equivalent and therefore the gauges' sensitivity to different gas species must be taken into account in our calculations. Unfortunately, the harsh radiation environment of the LHC tunnel does not allow the installation of residual-gas analysers and their delicate electronics.
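The nitrogen-equivalent convention can be undone per gas species by dividing by the gauge's relative sensitivity. A minimal sketch; the sensitivity factors below are indicative literature values for ionization gauges, not the calibration factors used in our calculations:

```python
# Indicative relative ionization-gauge sensitivities (N2 = 1); assumed values
# for illustration only, not the calibration used in the simulations.
rel_sensitivity = {"H2": 0.46, "CH4": 1.4, "CO": 1.05, "CO2": 1.42}

def true_pressure(p_n2_equivalent, species):
    """True partial pressure [mbar] from an N2-equivalent gauge reading."""
    return p_n2_equivalent / rel_sensitivity[species]

# A hydrogen reading of 1e-10 mbar (N2-equivalent) corresponds to a higher
# true pressure, since the gauge is less sensitive to H2 than to N2.
p_h2 = true_pressure(1e-10, "H2")
```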
\begin{figure}[b]
\centering
\includegraphics[width=0.9\linewidth]{17IdaAichingerPRAB.pdf}
\caption{Time evolution of penning gauge VGPB.242.7L5.B for Fill 5211 (orange line) and Fill 5338 (blue line).}
\label{fig:13073}
\end{figure}
\subsection{Results and discussion}
The RGP density in the experimental area around CMS exhibits many different characteristic features, which are visualized as our main result in Fig.~\ref{fig:cmspartc0differencewhitetotal}.\\
We implemented the model in a Python environment and embedded it in a graphical user interface based on the library of PyQt \cite{meier2015python, newman2013computational, lutz2013learning, hill2016learning} (see Fig.~\ref{fig:picture1}). Results are calculated within less than a minute.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{18IdaAichingerPRAB}
\caption{Screenshot of the simulation program ``PyVASCO''. }
\label{fig:picture1}
\end{figure}
\begin{figure*}[bt]
\includegraphics[width=0.8\textwidth]{19IdaAichingerPRAB.pdf}
\caption{Vacuum simulation of the Long Straight Section with the CMS detector located in the middle: a)~total pressure plot, b)~density plot of the four gas species and c)~a geometry sketch.}
\label{fig:cmspartc0differencewhitetotal}
\end{figure*}
Fig.~\ref{fig:cmspartc0differencewhitetotal} shows that the general goal is fulfilled: the maximum density in the LHC does not exceed $ 10^{15} \ce{H2}$-equivalent gas particles per $ \textrm{m}^3 $ in the presence of the circulating beams. The RGP density in the experimental area is even orders of magnitude lower, to minimize the background noise to the experiments. Hence, a beam lifetime of the order of 100 hours supports an efficient operation of the high-energy experiments with respect to vacuum requirements.
The four different graphs in Fig.~\ref{fig:cmspartc0differencewhitetotal} represent the prevailing densities of $ \ce{H2}, \ce{CH4}, \ce{CO} $ and $ \ce{CO2} $. Their different shapes symbolize their different behaviours. Hydrogen constitutes the major part of the gas load. Hydrogen's low mass and low binding energy compared to the other gas species results in a higher probability of beam induced desorption and a higher thermal outgassing of the vacuum chamber walls.
The plots of $ \ce{H2}, \ce{CO} $ and $ \ce{CO2} $ show a similar structure, and $\ce{CO} $ and $ \ce{CO2} $ are of the same order of magnitude. The values of a given gas species can vary along the chamber by up to three orders of magnitude.
The different materials are represented by different colours in the geometry sketch of CMS in Fig.~\ref{fig:cmspartc0differencewhitetotal}. As an outline, cryogenic areas are marked in dark blue with a grey background, room temperature areas are marked in red.
The biggest density gradient can be observed in the transition areas from room temperature to cryogenic area. The following relation holds:
\begin{eqnarray}
\frac{n_1}{n_2} = \sqrt{\frac{T_2}{T_1}}
\end{eqnarray}
This tells us that the RGP density in areas with lower temperatures is higher than in areas with higher temperatures, assuming the same material specifications. Additionally, the high density has to be related to beam-induced effects that occur mainly in the arcs. The density profile also visualizes the importance of the beam direction. The density is slightly higher when the beam travels from the arcs to the Straight Section, because emitted photons can travel several meters until they impinge on the wall (see Fig.~\ref{fig:SR_Travel}).
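As a numerical illustration of this relation, using the temperatures quoted earlier for the cold bores (1.9~K) and a room-temperature section (here taken as 293~K):

```python
import math

def density_ratio(T1, T2):
    """Equilibrium density ratio n1/n2 across a junction between two
    regions at temperatures T1 and T2 [K]: n1/n2 = sqrt(T2/T1)."""
    return math.sqrt(T2 / T1)

# Cold bore at 1.9 K adjacent to a room-temperature chamber at 293 K:
ratio = density_ratio(1.9, 293.0)  # ~12.4: the cold side holds ~12x the density
```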
The density plot for hydrogen shows some peaks in the arcs. They appear exactly at the interconnects between two vacuum chambers. In this part the liquid helium cooling pipe on the beam screen is usually absent for a few centimetres, which leads to a slight temperature rise and hence to an increase of the hydrogen equilibrium density.
Otherwise, cryo- and NEG pumping results in a flat line, at the equilibrium of surface pumping and degassing. Lumped pumps are only located in room-temperature areas.
Thin-film NEG coating, deposited along all room-temperature chambers, captures getterable gas species such as $ \ce{H2}, \ce{CO} $ and $ \ce{CO2} $. This configuration provides very efficient distributed surface pumping. Methane, due to its symmetric, closed molecular structure, does not react with the surface and is not pumped \cite{chiggiato2006ti}. This is the main reason why the density profile of methane is clearly different from those of the other gas species.
Methane is only pumped by lumped ion pumps. Its pressure profile therefore resembles a parabola from one pump to the next. This can be seen very well between the quadrupoles Q4 and Q7 in Fig.~\ref{fig:cmspartc0differencewhitetotal}, about 250~m to the right of the CMS interaction point. It should be added that there are indications that $\ce{CH4}$ is also pumped by beam ionisation; if this effect is not taken into account, the density curves for this gas are therefore to be read as a worst-case scenario \cite{mathewson1996beam}.
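The parabolic shape follows from the one-dimensional steady state $c\,p'' = -q$ between two lumped pumps of speed $S$; a minimal sketch with illustrative parameter values (none of them taken from the LHC database):

```python
import numpy as np

L = 50.0   # pump-to-pump distance [m] (illustrative)
S = 100.0  # speed of each lumped pump [l/s] (illustrative)
c = 50.0   # specific conductance of the chamber [m*l/s] (illustrative)
q = 1e-11  # linear outgassing rate [mbar*l/(s*m)] (illustrative)

x = np.linspace(0.0, L, 101)
# Steady state of c*p'' = -q with identical pumps at both ends:
# a parabola whose maximum lies midway between the pumps.
p = q * L / (2 * S) + q * x * (L - x) / (2 * c)

i_max = int(np.argmax(p))  # peak at x = L/2
```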
The low RGP density in the collision area is due to the outgassing characteristics of beryllium and the extremely low photon bombardment in this area.
\section{Conclusion}
This article introduces a mathematical model and a computer code to calculate and efficiently forecast the residual gas particle density in a particle accelerator. This quantity should be kept as low as possible to support an efficient operation of the machine. Several effects influence it and make this requirement challenging. Among them are beam-induced effects as well as thermal outgassing, diffusion inside the chamber and interactions among the different residual gases. The idea was to combine all of them in one mathematical model which gives as output the density distribution of the four dominating gas species $ \ce{H2}, \ce{CH4}, \ce{CO} $ and $ \ce{CO2} $ inside a beam pipe. A mass-balance equation system of second order serves this purpose. Based on mathematical theorems, a solution was found and the fundamental steps towards it have been shown.
The validation of the model was established by a cross-check with the Test-Particle Monte Carlo code Molflow+, by a sensitivity analysis of the ion-induced desorption term, including a comparison between the single-gas and the multi-gas frameworks, and finally by a comparison to gauge readings in the LHC. The latter cross-check was presented for the Long Straight Section close to the CMS experiment. All these simulations show reasonably meaningful results and consequently suggest a realistic replication of the vacuum environment. Knowing how, where and why these values influence the vacuum quality therefore provides a great aid in the design and analysis of vacuum systems. In addition, the results are computed fast, in less than 30 seconds, even for large simulation domains.
This model provides the potential to undergo detailed parameter variation studies, hence to understand the main influencing effects at different locations and therefore to detect critical configurations in advance that could lead to vacuum degradation.
Nowadays, CERN's new challenge is to develop concepts for post-LHC circular particle colliders (FCC) \cite{benedikt2014future} and the next step is to use this simulation model to present a variety of possible designs and to choose among them, in agreement with further specifications, the best possible solution.
\begin{acknowledgments}
The discussions with Giuseppe Bregliozzi, Josef Sestak and Vincent Baglin were most helpful to find appropriate input parameters. Many thanks as well to Jan Sopousek, who helped the authors with the implementation of the model in a Python environment and to Marton Ady, who supported the authors with Molflow+ simulations.
Thanks also to Adriana Rossi for discussions about the previous model VASCO.
All authors work in the Vacuum Surfaces and Coatings group at CERN. This project and its achievements are part of the global future circular collider study hosted by CERN. Ida Aichinger is a doctoral student at the Johannes Kepler University Linz, Austria, supported by the Austrian Doctoral Student Programme of CERN.
\end{acknowledgments}
\section{Introduction}
\hspace*{.4cm} At the beginning of the $20^{th}$ century, the importance of
geometry in physical applications was illuminated by Albert
Einstein, who advocated a new philosophy known as ``The
Geometrization Philosophy''. This philosophy can be summarized in
the following statement: ``To understand nature, one has to start
with geometry and end with physics'' \cite{Wanasphilo}. In 1915,
Einstein used this philosophy to understand the essence of gravity,
starting with a 4-dimensional Riemannian geometry and ending with a
successful theory of gravity: the General Theory of Relativity (GR)
\cite{Einstein}. After the success of the theory, established by
testing its predictions and applications, many authors directed
their attention to the use of geometry to solve physical problems.
Einstein, in his continuing attempts to understand more physical
interactions, searched for a wider geometry to unify gravity and
electromagnetism. The problem with Riemannian geometry, however, is
that it has only ten degrees of freedom (the components of the
metric tensor in four dimensions) which are just sufficient to
describe gravity. Thus to construct a successful geometric theory
that would encompass both gravity and electromagnetism, one needs to
enlarge the number of degrees of freedom. This can be done in two
different ways: either by increasing the dimension of the underlying
space (\`{a} la-Kaluza-Klein) or by replacing the Riemannian
structure by another geometric structure having more degrees of
freedom (without increasing the dimension of the underlying space).
In tackling the problem of unification, Einstein has chosen the
second alternative. This led him to consider Absolute Parallelism
geometry (AP-geometry) \cite{AP-Ein} which has sixteen degrees of
freedom (the number of components of the vector fields forming the
parallelization); six extra degrees of freedom are gained. Many
developments of AP-geometry have been achieved (e.g.,
\cite{unificationT, Wanashis, local}). Theories
constructed in this geometry (e.g., \cite{unificationP, Moller, unificationT}) together with applications
(e.g., \cite{Nashed1, Nashed2, Wanas}) show the
advantages of using AP-geometry in physics. Moreover, absolute parallelism characterizes the generalized Berwald spaces
among the Finsler spaces \cite{Tamassy1, Tamassy2}.
\vspace{6pt} In this paper, we establish a global approach to
AP-geometry. The global formulation of the different geometric
aspects of AP-geometry has many advantages. Some advantages of the
global formalism are:
\begin{itemize}
\item It could give more insight into the infra-structure of physical
theories constructed in the context of AP-geometry. Moreover, it may
offer the opportunity to unify field theories in a more economic
scheme.
\item It helps better understand the meaning and the essence of the
geometric objects and formulae without being trapped into the
complexity of indices. As a consequence, it reduces the probability
of mistakes.
\item It connects AP-geometry with the modern language of the
differential geometry.
\item In local coordinates some important expressions, such as the
Lie bracket $[\frac{\partial}{\partial x^i},\frac{\partial}{\partial
x^j}]$, disappear. Consequently, the contribution, geometrical or
physical, of all Lie brackets is completely hidden. Such
expressions do not vanish in global formalism. This may produce new
geometric or physical information.
\item The local formalism represents roughly a \textbf{micro}
viewpoint or a micro approach whereas the global formalism
represents a \textbf{macro} viewpoint. The two viewpoints are not
alternatives but rather complementary and are indispensable both for
geometry and physics.
\item As global results hold on the entire manifold (not only
on coordinate neighborhoods), they also hold locally. The converse
is not true; a result may hold locally but not globally. Moreover,
one can easily shift from global to local; it suffices to view the
global result in a coordinate neighborhood.
\end{itemize}
These are the main motivations of the present work, where all
results obtained are formulated in a prospective modern coordinate
free form.
\vspace{6pt} The paper is organized in the following manner. In
section 1, we define globally the basic elements of AP-geometry
and prove an existence and uniqueness theorem for a remarkable
linear connection which we call the canonical connection \big(a
(flat) connection for which the parallelization vector fields are
parallel\big). We also study some properties of this connection. In
section 2, we define three other natural connections (the dual,
symmetric and Levi-Civita connections) and investigate their
properties together with the tensor fields associated to them. In
section 3, we express the curvature tensors of the above mentioned
three connections in a simple and compact form in terms of the
torsion tensor of the canonical connection only. We then use the
Bianchi identities to derive some further interesting identities. In
section 4, we give a global treatment of the W-tensor and
investigate some of its properties. In section 5, we present a
double-view for the fundamental geometric objects of AP-geometry: On
one hand, we consider the local expressions of these geometric
objects in the natural basis and, on the other hand, we compute
their expressions in the parallelization basis, and then compare
the two sets of expressions.\\
It should finally be noted that this work is based
mainly on \cite{local}.
Throughout the present paper we use the following notation:\\
$M$: an n-dimensional smooth real manifold,\\
$\mathfrak{F}(M)$: the $\mathds{R}$-algebra of $C^{\infty}$ functions on $M$,\\
$\mathfrak{X}(M)$: the $\mathfrak{F}(M)$-module of vector fields on $M$,\\
$T_xM$: the tangent space to $M$ at $x\in M$,\\
${T_x}^*M$: the cotangent space to $M$ at $x\in M$.\\
We make the assumption that all geometric objects we consider are of
class $C^{\infty}$.
\section{Canonical connection}
\hspace*{.4cm} In this section, we give the definition of an AP-space and prove an existence and uniqueness theorem for a remarkable linear connection, which we call the canonical connection. Also, we prove some properties concerning this connection.
\begin{definition}\cite{Brickell}
A parallelizable manifold is a pair $(M,\;\undersym{X}{i})$, where $M$ is an n-dimensional smooth manifold and $\;\undersym{X}{i}\,(i = 1,\,...,\,n)$ are n independent vector fields defined globally on $M$. The vector fields $\;\undersym{X}{1},\,...,\;\undersym{X}{n}$ are said to form a parallelization on $M$.
\end{definition}
Such a space is also known in the literature as an \emph{Absolute Parallelism space} or a \emph{Teleparallel space}. For simplicity, we will rather use the expressions ``\emph{AP-space}'' and ``\emph{AP-geometry}''.
Since $\;\undersym{X}{i}$ are n independent vector fields on $M$, $\{\;\undersym{X}{i}(x): i=1,\,...,\,n\}$ is a basis of $T_xM$ for every $x\in M$. Any vector field $Y\in\mathfrak{X}(M)$ can be written globally as $Y=Y^{i}\,\undersym{X}{i}$, where $Y^{i}\in\mathfrak{F}(M)$. Here we use the notation $Y^{i}$ to denote the components of $Y$ with respect to $\,\undersym{X}{i}$. Einstein summation convention will be applied on Latin indices whatever their position is (even if the two repeated indices are downward).
\begin{definition}
The n differential $1$-forms $\;\undersym{\Omega}{i}:\mathfrak{X}(M)\longrightarrow\mathfrak{F}(M)$ defined by
\begin{equation} \label{1form}
\;\undersym{\Omega}{i}(\;\undersym{X}{j})=\delta_{ij}
\end{equation}
are called the parallelization forms.
\end{definition}
\hspace*{-.6cm}Clearly, if $Y=Y^{i}\;\undersym{X}{i}$, then
\begin{equation} \label{base}
\undersym{\Omega}{i}(Y)=Y^{i},\qquad\;\undersym{\Omega}{i}(Y)\;\undersym{X}{i}=Y.
\end{equation}
It follows directly from (\ref{1form}) that $\{\;\undersym{\Omega}{i}_{x}=\;\undersym{\Omega}{i}|_{T_xM}:i=1,\,...,\,n\}$ is the dual basis of the parallelization basis $\{\;\undersym{X}{i}(x):i=1,\,...,\,n\}$ for every $x\in M$. We call $\{\;\undersym{\Omega}{i}_{x}:i=1,\,...,\,n\}$ the dual parallelization basis of ${T_x}^*M$. The parallelization forms $\;\undersym{\Omega}{i}$ are independent in the $\mathfrak{F}(M)$-module $\mathfrak{X}^*(M)$.
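As a concrete illustration (our own example, included only to fix ideas), take $M=\mathds{R}^2$ with the parallelization and dual forms

```latex
\begin{eqnarray*}
\undersym{X}{1} = \partial_1, &\qquad& \undersym{X}{2} = x^1\,\partial_1 + \partial_2,\\
\undersym{\Omega}{1} = dx^1 - x^1\,dx^2, &\qquad& \undersym{\Omega}{2} = dx^2.
\end{eqnarray*}
```

One checks directly that $\;\undersym{\Omega}{i}(\;\undersym{X}{j})=\delta_{ij}$; for instance, $\;\undersym{\Omega}{1}(\;\undersym{X}{2}) = x^1 - x^1 = 0$.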
\begin{lemma}\label{AP-condition}
Let $D$ be a linear connection on $M$. The $D$-covariant derivative of $\;\undersym{\Omega}{i}$ vanishes if and only if the $D$-covariant derivative of $\;\undersym{X}{i}$ vanishes.
\end{lemma}
\begin{proof}
For every $Y,\,Z\in\mathfrak{X}(M)$, we have, by (\ref{base}) and (\ref{1form}),
$$(D_Y\;\undersym{\Omega}{i})(Z)=(D_Y\;\undersym{\Omega}{i})\big(\;\undersym{\Omega}{j}(Z)\;\undersym{X}{j}\big)= -\;\undersym{\Omega}{j}(Z)\;\undersym{\Omega}{i}(D_Y\;\undersym{X}{j}).$$
Consequently, by (\ref{base}),
$$\big((D_Y\;\undersym{\Omega}{i})(Z)\big)\;\undersym{X}{i}=-\;\undersym{\Omega}{j}(Z)D_Y\;\undersym{X}{j},$$
from which the result follows.
\end{proof}
\begin{theorem}
On an AP-space $(M,\;\undersym{X}{i})$, there exists a unique linear connection $\nabla$ for which the parallelization vector fields $\;\undersym{X}{i}$ are parallel.
\end{theorem}
\begin{proof}
First we prove the uniqueness. Assume that $\nabla$ is a linear connection satisfying the condition $\nabla\;\undersym{X}{i}=0$. For all $Y,\,Z\in\mathfrak{X}(M)$, we have
$$\nabla_YZ=\nabla_Y\big(\;\undersym{\Omega}{i}(Z)\;\undersym{X}{i}\big)=\;\undersym{\Omega}{i}(Z)\nabla_Y\;\undersym{X}{i}
+\big(Y\cdot\;\undersym{\Omega}{i}(Z)\big)\;\undersym{X}{i} =\big(Y\cdot\;\undersym{\Omega}{i}(Z)\big) \;\undersym{X}{i}.$$
Hence, the connection $\nabla$ is uniquely determined by the relation
\begin{equation} \label{canonical}
\nabla_YZ=\big(Y\cdot\;\undersym{\Omega}{i}(Z)\big)\;\undersym{X}{i}.
\end{equation}
To prove the existence, let $\nabla:\mathfrak{X}(M)\times\mathfrak{X}(M)\longrightarrow\mathfrak{X}(M)$ be defined by (\ref{canonical}).
We show that $\nabla$ is a linear connection on $M$.
In fact, let $Y,\,Y_1,\,Y_2,\,Z,\,Z_1,\,Z_2\in\mathfrak{X}(M),\;f\in\mathfrak{F}(M)$. It is clear that
$\nabla_{Y_1+Y_2}Z=\nabla_{Y_1}Z+\nabla_{Y_2}Z$ and $\nabla_Y(Z_1+Z_2)=\nabla_YZ_1+\nabla_YZ_2$. Moreover,
\begin{eqnarray*}
\nabla_{fY}Z&=&\big((fY)\cdot\;\undersym{\Omega}{i}(Z)\big)\;\undersym{X}{i}=f\big(Y\cdot\;\undersym{\Omega}{i}(Z)\big)\;\undersym{X}{i}=f\nabla_YZ, \\ \nabla_Y(fZ)&=&\big(Y\cdot\;\undersym{\Omega}{i}(fZ)\big)\;\undersym{X}{i}=\Big(Y\cdot\big(f\;\undersym{\Omega}{i}(Z)\big)\Big)\;\undersym{X}{i} \\
&=&f\big(Y\cdot\;\undersym{\Omega}{i}(Z)\big)\;\undersym{X}{i} + (Y\cdot f)\;\undersym{\Omega}{i}(Z)\;\undersym{X}{i} \\
&=&f\nabla_YZ+(Y\cdot f)Z,\;\text{by (\ref{base}) and (\ref{canonical})}.
\end{eqnarray*}
It remains to show that $\nabla$ satisfies the condition $\nabla\;\undersym{X}{i}=0$:
$$\nabla_Y\;\undersym{X}{j}=\big(Y\cdot\;\undersym{\Omega}{i}(\;\undersym{X}{j})\big)\;\undersym{X}{i}=(Y\cdot\delta_{ij})\;\undersym{X}{i}=0.$$
This completes the proof.
\end{proof}
As a consequence of Lemma \ref{AP-condition}, we also have $\nabla\;\undersym{\Omega}{i}=0$. Hence
\begin{equation}\label{AP-cond}
\nabla\;\undersym{X}{i}=0,\quad\quad\nabla\;\undersym{\Omega}{i}=0.
\end{equation}
This property is known (locally) in the literature as the AP-condition.
\begin{definition}
Let $(M,\;\undersym{X}{i})$ be an AP-space. The unique linear connection $\nabla$ on $M$ defined by (\ref{canonical}) will be called the canonical connection of $(M,\;\undersym{X}{i})$.
\end{definition}
The canonical connection is of crucial importance because almost all geometric objects in the AP-space will be built up from it, as will be seen throughout the paper.
\vspace*{.26cm}Now we give an intrinsic formula for the torsion tensor $T$ of $\nabla$.
\begin{proposition}
The torsion tensor $T$ of the canonical connection is given by
\begin{equation} \label{torsion}
T(Y,\,Z)=\;\undersym{\Omega}{i}(Y)\;\undersym{\Omega}{j}(Z)[\;\undersym{X}{j},\;\undersym{X}{i}].
\end{equation}
\end{proposition}
\begin{proof}
The torsion tensor $T$ of $\nabla$ is defined, for all $Y,\,Z\in\mathfrak{X}(M)$, by
$$T(Y,\,Z)=\nabla_YZ-\nabla_ZY-[Y,\,Z].$$
Using the AP-condition (\ref{AP-cond}), we get
\begin{eqnarray*}
T(Y,\,Z)&\overset{\;\text{(\ref{base})}}{=}&T\big(\;\undersym{\Omega}{i}(Y)\;\undersym{X}{i},\;\undersym{\Omega}{j}(Z)\;\undersym{X}{j}\big) =\;\undersym{\Omega}{i}(Y)\;\undersym{\Omega}{j}(Z)T(\;\undersym{X}{i},\;\undersym{X}{j})\\
&=&\;\undersym{\Omega}{i}(Y)\;\undersym{\Omega}{j}(Z)(\nabla_{\;\undersym{X}{i}}\;\undersym{X}{j}-\nabla_{\;\undersym{X}{j}}\;\undersym{X}{i} -[\;\undersym{X}{i},\;\undersym{X}{j}])= \;\undersym{\Omega}{i}(Y)\;\undersym{\Omega}{j}(Z)[\;\undersym{X}{j},\;\undersym{X}{i}].
\end{eqnarray*}
\vspace*{-1.6cm}\[\qedhere\]
\end{proof}
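The intrinsic formula (\ref{torsion}) can be checked symbolically in a concrete chart. The following sketch (Python with \texttt{sympy}) assumes, purely for illustration, the toy parallelization $X_1=\partial_x$, $X_2=e^x\partial_y$ of $\mathbb{R}^2$; this frame and the helper names are our own choices, not taken from the text.

```python
import sympy as sp

x, y = sp.symbols('x y')
crd = [x, y]
n = 2

# Toy parallelization of R^2 (an illustrative assumption, not from the text):
# X_1 = d/dx, X_2 = e^x d/dy; row i of `e` holds the components of X_i.
e = sp.Matrix([[1, 0], [0, sp.exp(x)]])
Om = e.inv().T  # parallelization forms: Om[i, mu], so that Om^i(X_j) = delta_ij

def bracket(V, W):
    # Lie bracket [V, W] of vector fields given by their component lists
    return [sp.simplify(sum(V[m]*sp.diff(W[l], crd[m]) - W[m]*sp.diff(V[l], crd[m])
                            for m in range(n))) for l in range(n)]

def nabla(V, W):
    # canonical connection: nabla_V W = (V . Om^i(W)) X_i, in components
    out = [sp.S(0)]*n
    for i in range(n):
        OmiW = sum(Om[i, m]*W[m] for m in range(n))               # Om^i(W)
        dirder = sum(V[m]*sp.diff(OmiW, crd[m]) for m in range(n))  # V . Om^i(W)
        for l in range(n):
            out[l] += dirder*e[i, l]
    return [sp.simplify(c) for c in out]

X1, X2 = list(e.row(0)), list(e.row(1))
# torsion from the definition T(Y,Z) = nabla_Y Z - nabla_Z Y - [Y,Z] ...
lhs = [sp.simplify(a - b - c)
       for a, b, c in zip(nabla(X1, X2), nabla(X2, X1), bracket(X1, X2))]
rhs = bracket(X2, X1)  # ... against the intrinsic formula T(X_1, X_2) = [X_2, X_1]
```

Both sides come out as $-e^x\partial_y$: the torsion is nonzero even though each parallelization vector field is $\nabla$-parallel.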
\begin{theorem}\label{flat}
Let $(M,\;\undersym{X}{i})$ be an AP-space. The canonical connection of $(M,\;\undersym{X}{i})$ is flat.
\end{theorem}
\begin{proof}
The result follows from the definition of the curvature tensor $R$ of $\nabla$:
$$R(Y,\,Z)V=\nabla_Y\nabla_ZV-\nabla_Z\nabla_YV-\nabla_{[Y,Z]}V$$
and the AP-condition (\ref{AP-cond}).
\end{proof}
\begin{remark}\cite{local}
It is for this reason that many authors think that the AP-space is a flat space. This is by no means true. In fact, it is meaningless to speak
of curvature without reference to a connection. All we can say is that the AP-space is flat with respect to its canonical connection. However, there are three other natural connections on an AP-space which are nonflat, as will be shown later.
\end{remark}
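The contrast drawn in this remark can be made concrete. As a minimal sketch, assume the toy parallelization $X_1=\partial_x$, $X_2=e^x\partial_y$ of $\mathbb{R}^2$ (our illustrative choice, not from the text): the canonical connection has identically vanishing curvature, while the Levi-Civita connection of the induced metric does not.

```python
import sympy as sp

x, y = sp.symbols('x y')
crd = [x, y]
n = 2

# Toy parallelization of R^2 (illustrative assumption): X_1 = d/dx, X_2 = e^x d/dy
e = sp.Matrix([[1, 0], [0, sp.exp(x)]])
Om = e.inv().T                 # dual parallelization forms
g = Om.T * Om                  # metric g = sum_i Om^i (x) Om^i = diag(1, e^{-2x})
ginv = g.inv()

# Canonical (Weitzenboeck) connection: Gam[l][mu][nu] = sum_i X_i^l d_mu Om^i_nu
Gam = [[[sp.simplify(sum(e[i, l]*sp.diff(Om[i, nu], crd[mu]) for i in range(n)))
         for nu in range(n)] for mu in range(n)] for l in range(n)]

# Levi-Civita connection of g, from the Christoffel symbols
LC = [[[sp.simplify(sum(ginv[l, r]*(sp.diff(g[r, mu], crd[nu])
                                    + sp.diff(g[r, nu], crd[mu])
                                    - sp.diff(g[mu, nu], crd[r]))/2
                        for r in range(n)))
        for nu in range(n)] for mu in range(n)] for l in range(n)]

def riemann(G):
    # R[l][s][mu][nu] = component along d_l of R(d_mu, d_nu) d_s
    return [[[[sp.simplify(sp.diff(G[l][nu][s], crd[mu]) - sp.diff(G[l][mu][s], crd[nu])
                           + sum(G[l][mu][r]*G[r][nu][s] - G[l][nu][r]*G[r][mu][s]
                                 for r in range(n)))
               for nu in range(n)] for mu in range(n)] for s in range(n)]
            for l in range(n)]

R_can = riemann(Gam)   # vanishes identically: the canonical connection is flat
R_lc = riemann(LC)     # does not vanish

Ric = [[sp.simplify(sum(R_lc[m][s][m][nu] for m in range(n)))
        for nu in range(n)] for s in range(n)]
Sc = sp.simplify(sum(ginv[s, nu]*Ric[s][nu] for s in range(n) for nu in range(n)))
```

The induced metric is $g=dx^2+e^{-2x}\,dy^2$, the hyperbolic plane: `Sc` evaluates to $-2$, so the same AP-space is flat with respect to $\nabla$ and maximally curved with respect to $\oversetc{\nabla}$.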
\section{Other linear connections on an AP-space}
\hspace*{.4cm} In this section, we define a metric on an AP-space and investigate the properties of three other natural connections on the space. Moreover, we define the contortion tensor and give its relation to the torsion tensor (\ref{torsion}).
\begin{theorem}
Let $(M,\;\undersym{X}{i})$ be an AP-space and $\;\undersym{\Omega}{i}$ the parallelization forms on $M$. Then
\begin{equation} \label{metric}
g:=\;\undersym{\Omega}{i}\otimes\;\undersym{\Omega}{i}
\end{equation}
defines a metric tensor on $M$.
\end{theorem}
\begin{proof}
Clearly $g$ is a symmetric tensor of type $(0,2)$ on $M$. For all $Y\in\mathfrak{X}(M)$, we have
$$g(Y,\,Y)=(\;\undersym{\Omega}{i}\otimes\;\undersym{\Omega}{i})(Y,Y)=\sum_{i=1}^{n}\big(\;\undersym{\Omega}{i}(Y)\big)^2\geq 0.$$
Moreover,
$$g(Y,Y)=0\Longrightarrow\sum_{i=1}^{n}\big(\;\undersym{\Omega}{i}(Y)\big)^2=0\Longrightarrow\;\undersym{\Omega}{i}(Y)=0\;\,\forall i\Longrightarrow\;\undersym{\Omega}{i}(Y)\;\undersym{X}{i}=0\overset{(\ref{base})}{\Longrightarrow} Y=0.$$
Hence, $g$ is a metric tensor on $M$.
\end{proof}
\begin{remark}\label{rem0}
It is clear that:
\begin{description}
\item[(a)] $g(\;\undersym{X}{i},\;\undersym{X}{j})=\delta_{ij}.\hfill\refstepcounter{equation}(\theequation)\label{orthogonal}$
\item[(b)] $g(\;\undersym{X}{i},Y)=\;\undersym{\Omega}{i}(Y).\hfill\refstepcounter{equation}(\theequation)\label{base2}$
\end{description}
\end{remark}
Property {\bf{(a)}} shows that the parallelization vector fields $\;\undersym{X}{i}$ are $g$-orthonormal and {\bf{(b)}} provides the duality between $\;\undersym{X}{i}$ and $\;\undersym{\Omega}{i}$ via $g$.
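Once components are fixed, both properties of Remark \ref{rem0} reduce to matrix identities: orthonormality reads $e\,g\,e^{T}=I$ and the duality reads $e\,g=\Omega$ (rows indexed by $i$, columns by $\mu$). A small check, again assuming the toy frame $X_1=\partial_x$, $X_2=e^x\partial_y$ on $\mathbb{R}^2$ (our illustrative choice):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy parallelization of R^2 (illustrative assumption): X_1 = d/dx, X_2 = e^x d/dy
e = sp.Matrix([[1, 0], [0, sp.exp(x)]])   # row i = components of X_i
Om = e.inv().T                            # parallelization forms
g = Om.T * Om                             # g = sum_i Om^i (x) Om^i

# (a) orthonormality: g(X_i, X_j) = (e g e^T)_{ij} = delta_ij
ortho = (e * g * e.T).applyfunc(sp.simplify)
# (b) duality via g: g(X_i, .) = Om^i, i.e. e g = Om as matrices
dual = ((e * g) - Om).applyfunc(sp.simplify)
```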
\begin{lemma}
Let $(M,\;\undersym{X}{i})$ be an AP-space. A linear connection $D$ on $M$ is a metric connection if and only if
\begin{equation*}
\;\undersym{\Omega}{i}(D_V\;\undersym{X}{j})+\;\undersym{\Omega}{j}(D_V\;\undersym{X}{i})=0.
\end{equation*}
\end{lemma}
\begin{proof}
By simple calculation, using (\ref{orthogonal}) and (\ref{base2}), one can show that
$$(D_Vg)(\;\undersym{X}{i},\;\undersym{X}{j})=-\;\undersym{\Omega}{i}(D_V\;\undersym{X}{j})-\;\undersym{\Omega}{j}(D_V\;\undersym{X}{i}),$$
from which the result follows.
\end{proof}
The last lemma together with the AP-condition (\ref{AP-cond}) give rise to the next result.
\begin{proposition}
The canonical connection is a metric connection.
\end{proposition}
\begin{proposition}
On an AP-space there are three other (built-in) linear connections:
\begin{description}
\item[(a)] The dual connection $\widetilde{\nabla}$ given by
\begin{equation} \label{dualcon}
\widetilde{\nabla}_Y Z:=\nabla_Z Y + [Y,Z].
\end{equation}
\item[(b)] The symmetric connection $\widehat{\nabla}$ given by
\begin{equation} \label{symcon}
\widehat{\nabla}_YZ:=\frac{1}{2}(\nabla_YZ+\nabla_ZY+[Y,Z]).
\end{equation}
\item[(c)] The Levi-Civita connection $\oversetc{\nabla}$ is given by \cite{Kuhnel}
\begin{eqnarray} \label{riemannian}
2g(\oversetc{\nabla}_YZ,\,V)
&=&Y\cdot\,g(Z,\,V)+Z\cdot\,g(V,\,Y)-V\cdot\,g(Y,\,Z) \nonumber \\
& &-g(Y,\,[Z,\,V])+g(Z,\,[V,\,Y])+g(V,\,[Y,\,Z]).
\end{eqnarray}
\end{description}
\end{proposition}
The proof is straightforward and we omit it.
\begin{remark}\label{rem1}
One can easily show that:
\begin{description}
\item[(a)] $\widetilde{\nabla}_YZ=\nabla_YZ-T(Y,\,Z)$.
\item[(b)] $\widehat{\nabla}_YZ=\nabla_YZ-\frac{1}{2}T(Y,\,Z)=\frac{1}{2}(\nabla_YZ+\widetilde{\nabla}_YZ)$.
\item[(c)] $\widehat{\nabla}$ and $\oversetc{\nabla}$ are torsionless whereas $\nabla$ and $\widetilde{\nabla}$ have the same torsion up to a sign.
\end{description}
Here $T$ is the torsion tensor of the canonical connection $\nabla$. Since there are no other torsion tensors in the space, we can say that $T$ is the torsion of the space.
\end{remark}
In Riemannian geometry the Levi-Civita connection has no explicit expression. However, in AP-geometry we can obtain an \emph{explicit expression} for the Levi-Civita connection $\oversetc{\nabla}$, as shown in the following.
\begin{theorem}
Let $(M,\,\;\undersym{X}{i})$ be an AP-space. Then the Levi-Civita connection $\oversetc{\nabla}$ can be written in the form:
\begin{equation} \label{riemannian2}
\oversetc{\nabla}_YZ=\widehat{\nabla}_YZ-\frac{1}{2}(\mathcal{L}_{\;\undersym{X}{i}}g)(Y,\,Z)\;\undersym{X}{i},
\end{equation}
where $\mathcal{L}_Y$ is the Lie derivative with respect to $Y\in\mathfrak{X}(M)$.
\end{theorem}
\vspace{0pt}
\begin{proof}
By replacing $V$ in (\ref{riemannian}) by $\;\undersym{X}{i}$ and using (\ref{base2}), we get
$$2\;\undersym{\Omega}{i}(\oversetc{\nabla}_YZ)=Y\cdot\;\undersym{\Omega}{i}(Z)+Z\cdot\;\undersym{\Omega}{i}(Y)-\;\undersym{X}{i}\cdot g(Y,\,Z)+g(Y,\,[\;\undersym{X}{i},\,Z])+g(Z,\,[\;\undersym{X}{i},\,Y])+\;\undersym{\Omega}{i}([Y,\,Z]).$$
Taking into account (\ref{base}) and (\ref{canonical}), the above equation reads
\begin{eqnarray*}
2\oversetc{\nabla}_YZ&=&\nabla_YZ+\nabla_ZY-\big(\;\undersym{X}{i}\cdot g(Y,\,Z)\big)\;\undersym{X}{i}+g(Y,\,[\;\undersym{X}{i},\,Z])\;\undersym{X}{i}+g(Z,\,[\;\undersym{X}{i},\,Y])\;\undersym{X}{i}+[Y,\,Z] \\
&=&2\widehat{\nabla}_YZ-\Big(\;\undersym{X}{i}\cdot g(Y,\,Z)-g(Y,\,[\;\undersym{X}{i},\,Z])-g(Z,\,[\;\undersym{X}{i},\,Y])\Big)\;\undersym{X}{i},\;\text{by (\ref{symcon})} \\
&=&2\widehat{\nabla}_YZ-(\mathcal{L}_{\;\undersym{X}{i}}g)(Y,\,Z)\;\undersym{X}{i}.
\end{eqnarray*}
\vspace*{-1.6cm}\[\qedhere\]
\end{proof}
\begin{corollary}
In an AP-space, the Levi-Civita connection and the symmetric connection coincide if, and only if, the parallelization vector fields are Killing vector fields:
$$\oversetc{\nabla}=\widehat{\nabla}\Longleftrightarrow\mathcal{L}_{\;\undersym{X}{i}}g=0\;\,\forall i.$$
\end{corollary}
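In coordinate components, formula (\ref{riemannian2}) says that the Levi-Civita coefficients equal the symmetrized canonical coefficients minus $\frac{1}{2}\sum_i(\mathcal{L}_{X_i}g)_{\mu\nu}X_i^{\lambda}$. A sketch of this check, assuming the toy parallelization $X_1=\partial_x$, $X_2=e^x\partial_y$ of $\mathbb{R}^2$ (an illustrative choice of ours, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
crd = [x, y]
n = 2

# Toy parallelization of R^2 (illustrative assumption): X_1 = d/dx, X_2 = e^x d/dy
e = sp.Matrix([[1, 0], [0, sp.exp(x)]])
Om = e.inv().T
g = Om.T * Om
ginv = g.inv()

# Canonical connection coefficients and their symmetric part
Gam = [[[sp.simplify(sum(e[i, l]*sp.diff(Om[i, nu], crd[mu]) for i in range(n)))
         for nu in range(n)] for mu in range(n)] for l in range(n)]
Sym = [[[(Gam[l][mu][nu] + Gam[l][nu][mu])/2 for nu in range(n)]
        for mu in range(n)] for l in range(n)]

# Lie derivative of g along a vector field X:
# (L_X g)_{mu nu} = X^s d_s g_{mu nu} + g_{s nu} d_mu X^s + g_{mu s} d_nu X^s
def lie_g(X):
    return [[sp.simplify(sum(X[s]*sp.diff(g[mu, nu], crd[s])
                             + g[s, nu]*sp.diff(X[s], crd[mu])
                             + g[mu, s]*sp.diff(X[s], crd[nu]) for s in range(n)))
             for nu in range(n)] for mu in range(n)]

Lg = [lie_g(list(e.row(i))) for i in range(n)]

# Right-hand side of the theorem, componentwise
rhs = [[[sp.simplify(Sym[l][mu][nu]
                     - sp.Rational(1, 2)*sum(Lg[i][mu][nu]*e[i, l] for i in range(n)))
         for nu in range(n)] for mu in range(n)] for l in range(n)]

# Levi-Civita connection of g, computed independently from the Christoffel symbols
LC = [[[sp.simplify(sum(ginv[l, r]*(sp.diff(g[r, mu], crd[nu])
                                    + sp.diff(g[r, nu], crd[mu])
                                    - sp.diff(g[mu, nu], crd[r]))/2 for r in range(n)))
        for nu in range(n)] for mu in range(n)] for l in range(n)]
```

In this frame $\mathcal{L}_{X_2}g\neq0$, so $\oversetc{\nabla}\neq\widehat{\nabla}$, consistent with the corollary on Killing vector fields.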
\begin{definition}
The contortion tensor $C$ is defined by the formula:
\begin{equation} \label{contortion1}
C(Y,\,Z) = \nabla_Y Z - \oversetc{\nabla}_Y Z.
\end{equation}
\end{definition}
The contortion tensor may also be written in the form:
\begin{equation} \label{contortion2}
C(Y,\,Z)=(\oversetc{\nabla}_Y\;\undersym{\Omega}{i})(Z)\;\undersym{X}{i}.
\end{equation}
In fact, using (\ref{base}) and (\ref{canonical}), we have for all $Y,\,Z\in\mathfrak{X}(M)$,
$$C(Y,\,Z)=\nabla_YZ-\oversetc{\nabla}_YZ=\big(Y\,\cdot\;\undersym{\Omega}{i}\;(Z)\big)\;\undersym{X}{i}-\;\undersym{\Omega}{i}(\oversetc{\nabla}_YZ)\;\undersym{X}{i}=(\oversetc{\nabla}_Y\;\undersym{\Omega}{i})(Z)\;\undersym{X}{i}.$$
The identities (\ref{contortion1}) and (\ref{contortion2}) show that the geometry of an AP-space can be built up from the Levi-Civita connection instead of the canonical connection:
$$\nabla_YZ=\oversetc{\nabla}_YZ+(\oversetc{\nabla}_Y\;\undersym{\Omega}{i})(Z)\;\undersym{X}{i}.$$
The next proposition establishes the mutual relations between the torsion and contortion tensors.
\begin{proposition}\label{rtc}
The following identities hold:
\begin{description}
\item[(a)]$T(Y,\,Z)=C(Y,\,Z)-C(Z,\,Y).$
\item[(b)]$C(Y,\,Z)=\frac{1}{2}\Big(T(Y,\,Z)+T(\;\undersym{X}{i},\,Y,\,Z)\;\undersym{X}{i}+T(\;\undersym{X}{i},\,Z,\,Y)\;\undersym{X}{i}\Big).$
\end{description}
From which,
\begin{description}
\item[(a)$'$]$T(Y,\,Z,\,V)=C(Y,\,Z,\,V)-C(Z,\,Y,\,V).$
\item[(b)$'$]$C(Y,\,Z,\,V)=\frac{1}{2}\Big(T(Y,\,Z,\,V)+T(V,\,Y,\,Z)+T(V,\,Z,\,Y)\Big),$
\end{description}
where $C(Y,\,Z,\,V)=g\big(C(Y,\,Z),\,V\big)$ and $T(Y,\,Z,\,V)=g\big(T(Y,\,Z),\,V\big)$.\\ Consequently, the torsion tensor vanishes if and only if the contortion tensor vanishes.
\end{proposition}
\begin{proof}
Let $Y,\,Z,\,V\in\mathfrak{X}(M)$. Then,
\begin{description}
\item[(a)] The first identity gives the torsion tensor in terms of the contortion tensor.
\begin{eqnarray*}
T(Y,\,Z)&=&\nabla_YZ-\nabla_ZY-[Y,\,Z] \\
&=&(\nabla_Y Z -\nabla_Z Y)-(\oversetc{\nabla}_Y Z-\oversetc{\nabla}_Z Y),\;\text{since $\oversetc{\nabla}$ is torsionless} \\
&=& (\nabla_Y Z - \oversetc{\nabla}_Y Z)-(\nabla_Z Y - \oversetc{\nabla}_Z Y)=C(Y,\,Z)-C(Z,\,Y).
\end{eqnarray*}
\item[(b)] The second identity gives the contortion tensor in terms of the torsion tensor. In the following proof we make use of (\ref{riemannian2}), Remark \ref{rem1}, (\ref{base}), Remark \ref{rem0} and (\ref{torsion}).
\begin{eqnarray*}
2C(Y,\,Z)&=&2\nabla_YZ-2\oversetc{\nabla}_YZ=2\nabla_YZ-2\widehat{\nabla}_YZ+(\mathcal{L}_{\;\undersym{X}{i}}g)(Y,\,Z)\;\undersym{X}{i}\\
&=&2\nabla_YZ-2\nabla_YZ+T(Y,\,Z)+\;\undersym{\Omega}{j}(Y)\;\undersym{\Omega}{k}(Z)(\mathcal{L}_{\;\undersym{X}{i}}g)(\;\undersym{X}{j}, \,\;\undersym{X}{k})\;\undersym{X}{i}\\
&=&T(Y,\,Z)+\;\undersym{\Omega}{j}(Y)\;\undersym{\Omega}{k}(Z)\Big(\;\undersym{X}{i}\cdot g(\;\undersym{X}{j},\;\undersym{X}{k})-g([\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{k}) -g([\;\undersym{X}{i},\;\undersym{X}{k}],\;\undersym{X}{j})\Big)\;\undersym{X}{i} \\
&=&T(Y,\,Z)-\Big(g\big(T(Y,\;\undersym{X}{i}),\,Z\big)+g\big(T(Z,\;\undersym{X}{i}),\,Y\big)\Big)\;\undersym{X}{i} \\
&=&T(Y,\,Z)+\Big(T(\;\undersym{X}{i},\,Y,\,Z)+T(\;\undersym{X}{i},\,Z,\,Y)\Big)\;\undersym{X}{i}.
\end{eqnarray*}
\end{description}
\vspace*{-1.2cm}\[\qedhere\]
\end{proof}
\vspace*{.26cm}
\begin{remark}
$T(Y,\,Z,\,V)$ is skew-symmetric in the first two arguments whereas $C(Y,\,Z,\,V)$ is skew-symmetric in the last two arguments.
\end{remark}
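The identities {\bf{(a)$'$}} and {\bf{(b)$'$}}, as well as the skew-symmetries just noted, can be tested componentwise: compute $C$ as the difference $\nabla-\oversetc{\nabla}$, $T$ as the antisymmetric part of $\nabla$, and lower the last slot with $g$. A sketch, assuming the toy parallelization $X_1=\partial_x$, $X_2=e^x\partial_y$ of $\mathbb{R}^2$ (our illustrative choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
crd = [x, y]
n = 2
rng = range(n)

# Toy parallelization of R^2 (illustrative assumption): X_1 = d/dx, X_2 = e^x d/dy
e = sp.Matrix([[1, 0], [0, sp.exp(x)]])
Om = e.inv().T
g = Om.T * Om
ginv = g.inv()

# canonical and Levi-Civita connection coefficients
Gam = [[[sp.simplify(sum(e[i, l]*sp.diff(Om[i, nu], crd[mu]) for i in rng))
         for nu in rng] for mu in rng] for l in rng]
LC = [[[sp.simplify(sum(ginv[l, r]*(sp.diff(g[r, mu], crd[nu]) + sp.diff(g[r, nu], crd[mu])
                                    - sp.diff(g[mu, nu], crd[r]))/2 for r in rng))
        for nu in rng] for mu in rng] for l in rng]

# Torsion and contortion in coordinate components
Tor = [[[Gam[l][mu][nu] - Gam[l][nu][mu] for nu in rng] for mu in rng] for l in rng]
Con = [[[sp.simplify(Gam[l][mu][nu] - LC[l][mu][nu]) for nu in rng] for mu in rng] for l in rng]

# Lowered tensors T(Y,Z,V), C(Y,Z,V): last argument contracted with g
def lower(A):
    return [[[sp.simplify(sum(g[rho, l]*A[l][mu][nu] for l in rng))
              for rho in rng] for nu in rng] for mu in rng]

T3, C3 = lower(Tor), lower(Con)
```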
\begin{definition}
Let $(M,\;\undersym{X}{i})$ be an AP-space. The contracted torsion or the basic form $B$ is defined, for every $Y\in\mathfrak{X}(M)$ by
\begin{equation*}
B(Y):={\rm Tr}\{Z\longmapsto T(Z,\,Y)\}.
\end{equation*}
\end{definition}
This $1$-form is known (locally) in the literature as the basic vector. In terms of the metric tensor (\ref{metric}), using (\ref{orthogonal}), the basic form can be written as
\begin{equation} \label{basicvector}
B(Y)=g\big(T(\;\undersym{X}{i},\,Y),\;\undersym{X}{i}\big)=T(\;\undersym{X}{i},\,Y,\;\undersym{X}{i}).
\end{equation}
Using Proposition \ref{rtc}{\bf{(b)$'$}}, $B(Y)$ can also be expressed in the form
\begin{equation*}
B(Y)=C(\;\undersym{X}{i},\,Y,\;\undersym{X}{i}).
\end{equation*}
Making use of (\ref{basicvector}) and (\ref{torsion}), we have
\begin{equation*}
B(Y)=\;\undersym{\Omega}{j}(Y)\;\undersym{\Omega}{i}([\;\undersym{X}{j},\;\undersym{X}{i}]).
\end{equation*}
\emph{It should be noted that in the above three expressions and in similar expressions summation is carried out on repeated mesh indices, although they are situated in different argument positions.
}
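In coordinate components the basic form is simply the trace $B_\nu=T^{\mu}{}_{\mu\nu}$, and this must agree with the frame expression $B(Y)=\;\undersym{\Omega}{j}(Y)\;\undersym{\Omega}{i}([\;\undersym{X}{j},\;\undersym{X}{i}])$ above. A sketch of the comparison, assuming the toy parallelization $X_1=\partial_x$, $X_2=e^x\partial_y$ of $\mathbb{R}^2$ (an illustrative choice of ours, for which $B=dx$):

```python
import sympy as sp

x, y = sp.symbols('x y')
crd = [x, y]
n = 2
rng = range(n)

# Toy parallelization of R^2 (illustrative assumption): X_1 = d/dx, X_2 = e^x d/dy
e = sp.Matrix([[1, 0], [0, sp.exp(x)]])
Om = e.inv().T

Gam = [[[sp.simplify(sum(e[i, l]*sp.diff(Om[i, nu], crd[mu]) for i in rng))
         for nu in rng] for mu in rng] for l in rng]
Tor = [[[Gam[l][mu][nu] - Gam[l][nu][mu] for nu in rng] for mu in rng] for l in rng]

# Basic form as the coordinate trace B_nu = T^mu_{mu nu}
B = [sp.simplify(sum(Tor[mu][mu][nu] for mu in rng)) for nu in rng]

def bracket(V, W):
    return [sp.simplify(sum(V[m]*sp.diff(W[l], crd[m]) - W[m]*sp.diff(V[l], crd[m])
                            for m in rng)) for l in rng]

# B(X_k) from the frame expression, which on Y = X_k reduces to Om^i([X_k, X_i])
B_frame = [sp.simplify(sum(Om[i, l]*bracket(list(e.row(k)), list(e.row(i)))[l]
                           for i in rng for l in rng)) for k in rng]
# B(X_k) from the coordinate trace, contracted with the frame
B_coord = [sp.simplify(sum(e[k, nu]*B[nu] for nu in rng)) for k in rng]
```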
\begin{proposition} \label{different-connections}
Concerning the four connections of the AP-space, the difference tensors are given by:
\begin{description}
\item[(a)] $\nabla_YZ-\widetilde{\nabla}_YZ=T(Y,\,Z)$.
\item[(b)] $\nabla_YZ-\widehat{\nabla}_YZ=\frac{1}{2}T(Y,\,Z)$.
\item[(c)] $\nabla_YZ-\oversetc{\nabla}_YZ=C(Y,\,Z)$.
\item[(d)] $\widetilde{\nabla}_YZ-\widehat{\nabla}_YZ=-\frac{1}{2}T(Y,\,Z)$.
\item[(e)] $\widetilde{\nabla}_YZ-\oversetc{\nabla}_YZ=C(Z,\,Y)$.
\item[(f)] $\widehat{\nabla}_YZ-\oversetc{\nabla}_YZ=\frac{1}{2}(\mathcal{L}_{\;\undersym{X}{i}}g)(Y,\,Z)\;\undersym{X}{i}.$
\end{description}
\end{proposition}
\begin{proof}
Properties {\bf{(a)}}, {\bf{(b)}}, {\bf{(d)}} follow from Remark \ref{rem1}, {\bf{(c)}} is the definition of the contortion tensor, {\bf{(e)}} follows from (\ref{contortion1}) and the fact that $\oversetc{\nabla}$ is torsionless, and {\bf{(f)}} follows from (\ref{riemannian2}).
\end{proof}
As a consequence of the above proposition, we have the following useful relations.
\begin{corollary} \label{difofconnections}
For every $Y,\,Z,\,V\in\mathfrak{X}(M)$, we have the following relations:
\begin{description}
\item[(a)] $(\nabla_VT)(Y,\,Z)-(\widetilde{\nabla}_VT)(Y,\,Z)=\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{T\big(V,\,T(Y,\,Z)\big)\Big\}.$
\item[(b)] $(\nabla_VT)(Y,\,Z)-(\widehat{\nabla}_VT)(Y,\,Z)=\frac{1}{2}\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{T\big(V,\,T(Y,\,Z)\big)\Big\}.$
\item[(c)] $(\nabla_VT)(Y,\,Z)-(\oversetc{\nabla}_VT)(Y,\,Z)=-T\big(Y,\,C(V,\,Z)\big)+T\big(Z,\,C(V,\,Y)\big)+C\big(V,\,T(Y,\,Z)\big),$
\end{description}
where $\underset{Y,\,Z,\,V}{\mathfrak{S}}$ denotes the cyclic permutation of $Y,\,Z,\,V$ and summation.
\end{corollary}
\section{Curvature tensors and Bianchi identities}
\vspace{4pt}
\hspace*{.4cm} In an AP-space the curvature $R$ of the canonical connection $\nabla$ vanishes identically. This section is devoted to showing that the other three curvature tensors $\widetilde{R}, \widehat{R}$ and $\;\overcirc{R}$, associated with $\widetilde{\nabla}, \widehat{\nabla}$ and $\oversetc{\nabla}$ respectively, do not vanish in general. Also, we show that the vanishing of $R$ enables us to express these three curvature tensors in terms of the torsion tensor only.
\begin{theorem}
The three curvature tensors $\widetilde{R}, \widehat{R}$ and $\;\overcirc{R}$ of the connections $\widetilde{\nabla}, \widehat{\nabla}$ and $\oversetc{\nabla}$ are given respectively by:
\begin{description}
\item[(a)] $\;\widetilde{R}(Y,\,Z)V=(\nabla_VT)(Y,\,Z).\hfill\refstepcounter{equation}(\theequation)\label{dual}$
\end{description}
\vspace*{8pt}
{\bf{(b)}}
\vspace*{-0.9cm}
\begin{eqnarray} \label{sym} \hspace*{-25.5cm}\widehat{R}(Y,\,Z)V&=&\frac{1}{2}\Big((\nabla_ZT)(Y,\,V)-(\nabla_YT)(Z,\,V)\Big)-\frac{1}{2}T\big(T(Y,\,Z),\,V\big)\quad\quad\quad\quad\nonumber\\
& &+\frac{1}{4}\Big(T\big(Y,\,T(Z,\,V)\big)-T\big(Z,\,T(Y,\,V)\big)\Big).
\end{eqnarray}
{\bf{(c)}}
\vspace*{-1.0cm}
\begin{eqnarray} \label{Riemanniancurv}
\hspace*{-35cm}\overcirc{R}(Y,\,Z)V&=&(\nabla_Z C)(Y,\,V)-(\nabla_Y C)(Z,\,V)-C\big(T(Y,\,Z),\,V\big)\quad\quad\quad\quad\quad\quad\;\; \nonumber \\
& &+C\big(Y,\,C(Z,\,V)\big)-C\big(Z,\,C(Y,\,V)\big).
\end{eqnarray}
\end{theorem}
\begin{proof}
We prove {\bf{(a)}} only. The proof of the other parts can be carried out in the same manner. Using (\ref{dualcon}), we get
\begin{eqnarray*}
\widetilde{\nabla}_Y\;\widetilde{\nabla}_ZV&=&\widetilde{\nabla}_Y(\nabla_VZ+[Z,\,V])=\widetilde{\nabla}_Y\nabla_VZ+\widetilde{\nabla}_Y[Z,\,V] \\
&=&\nabla_{\nabla_VZ}Y+[Y,\,\nabla_VZ]+\nabla_{[Z,\,V]}Y+[Y,\,[Z,\,V]].
\end{eqnarray*}
Similarly,
$$\widetilde{\nabla}_Z\;\widetilde{\nabla}_YV=\nabla_{\nabla_VY}Z+[Z,\,\nabla_VY]+\nabla_{[Y,\,V]}Z+[Z,\,[Y,\,V]].$$
and
$$\widetilde{\nabla}_{[Y,\,Z]}V=\nabla_V[Y,\,Z]+[[Y,\,Z],\,V].$$
Using the above three identities, together with the Jacobi identity, we get
\begin{eqnarray*}
\widetilde{R}(Y,\,Z)V&=&\widetilde{\nabla}_Y\widetilde{\nabla}_ZV-\widetilde{\nabla}_Z\widetilde{\nabla}_YV-\widetilde{\nabla}_{[Y,\,Z]}V \\
&=&\nabla_{\nabla_VZ}Y+[Y,\,\nabla_VZ]-\nabla_{\nabla_VY}Z-[Z,\,\nabla_VY] \\
& &-\nabla_V[Y,\,Z]+\nabla_{[Z,\,V]}Y-\nabla_{[Y,\,V]}Z.
\end{eqnarray*}
Using the fact that the curvature tensor of the canonical connection vanishes (Theorem \ref{flat}), it follows that
$$\nabla_{[Y,\,Z]}V=\nabla_Y\nabla_ZV-\nabla_Z\nabla_YV.$$
Using the above identity, we get
\begin{eqnarray*}
\widetilde{R}(Y,\,Z)V&=&\nabla_{\nabla_VZ}Y+[Y,\,\nabla_VZ]-\nabla_{\nabla_VY}Z-[Z,\,\nabla_VY]-\nabla_V[Y,\,Z] \\
& &+\nabla_Z\nabla_VY-\nabla_V\nabla_ZY-\nabla_Y\nabla_VZ+\nabla_V\nabla_YZ\\
&=&(\nabla_V\nabla_YZ-\nabla_V\nabla_ZY-\nabla_V[Y,\,Z]) \\
& &-(\nabla_{\nabla_VY}Z-\nabla_Z\nabla_VY-[\nabla_VY,\,Z]) \\
& &-(\nabla_Y\nabla_VZ-\nabla_{\nabla_VZ}Y-[Y,\,\nabla_VZ]) \\
&=&\nabla_VT(Y,\,Z)-T(\nabla_VY,\,Z)-T(Y,\,\nabla_VZ) \\
&=&(\nabla_VT)(Y,\,Z).
\end{eqnarray*}
\vspace*{-1.6cm}\[\qedhere\]
\end{proof}
The above theorem shows that the curvature tensors $\widetilde{R},\,\widehat{R}$ and $\;\overcirc{R}$ are expressible in terms of the torsion tensor of the space only. This shows that the geometry of an AP-space depends crucially on the torsion tensor. It is worth mentioning that the vanishing of that tensor implies that the four connections $\nabla,\,\widetilde{\nabla},\,\widehat{\nabla}$ and $\oversetc{\nabla}$ coincide, and the space reduces to a trivial flat Riemannian space. Thus, a sufficient condition for the non-vanishing of the torsion tensor is the non-vanishing of any one of the three curvature tensors $\widetilde{R},\,\widehat{R}$ or $\;\overcirc{R}$.
\vspace*{.26cm}The next result gives the expressions of the Ricci tensor ${\rm\oversetc{R}ic}$ of $\oversetc{\nabla}$ and the Ricci-like tensors ${\rm\widetilde{R}ic}$ and ${\rm\widehat{R}ic}$ of $\widetilde{\nabla}$ and $\widehat{\nabla}$ together with their respective contractions (the scalar curvature ${\rm\oversetc{S}c}$ and the curvature-like scalars ${\rm\widetilde{S}c}$ and ${\rm\widehat{S}c}$). The orthonormality of the parallelization vector fields $\;\undersym{X}{i}$ plays an essential role in the proof.
\begin{theorem}\label{exception}
In an AP-space $(M,\;\undersym{X}{i})$ we have, for every $Y,\,Z\in\mathfrak{X}(M)$,
\begin{description}
\item[(a)] ${\rm\widetilde{R}ic}(Y,\,Z)=-(\nabla_ZB)(Y).$
\item[(b)] ${\rm\widehat{R}ic}(Y,\,Z)=\frac{1}{2}(\mathcal{L}_{\;\undersym{X}{i}}T)(Y,\,Z,\;\undersym{X}{i}) +\frac{1}{4}T\big(Y,\,T(Z,\;\undersym{X}{i}),\;\undersym{X}{i}\big)-\frac{1}{2}(\nabla_YB)(Z)- \frac{1}{4}B\big(T(Y,\,Z)\big).
\item[(c)] ${\rm\oversetc{R}ic}(Y,\,Z)=(\mathcal{L}_{\;\undersym{X}{i}}C)(Y,\,Z,\;\undersym{X}{i}) +C(Y,\,C\big(Z,\;\undersym{X}{i}),\;\undersym{X}{i}\big)-(\nabla_YB)(Z)-B(C(Y,\,Z)).$
\item[(a)$'$] ${\rm\widetilde{S}c}=-\;\undersym{X}{i}\cdot B(\;\undersym{X}{i}).$
\item[(b)$'$] ${\rm\widehat{S}c}=-\frac{1}{2}\;\undersym{X}{i}\cdot B(\;\undersym{X}{i})+\frac{1}{4}T(\;\undersym{X}{j},\,[\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{i}).$
\item[(c)$'$] ${\rm\oversetc{S}c}=-2\;\undersym{X}{i}\cdot B(\;\undersym{X}{i})+B(\;\undersym{X}{i})B(\;\undersym{X}{i})+C(T(\;\undersym{X}{i},\;\undersym{X}{j}),\;\undersym{X}{j},\;\undersym{X}{i}) +C\big(\;\undersym{X}{j},\,C(\;\undersym{X}{i},\;\undersym{X}{j}),\;\undersym{X}{i}\big)$.
\end{description}
\end{theorem}
\begin{proof} We prove {\bf{(b)}} and {\bf{(c)$'$}} only. The other
identities can be proved similarly.
\begin{description}
\item [(b)] Using (\ref{orthogonal}), (\ref{sym}), (\ref{AP-cond}) and (\ref{basicvector}), we have
\begin{eqnarray*}
{\rm\widehat{R}ic}(Y,\,Z)&=&g\big(\widehat{R}(Y,\;\undersym{X}{i})Z,\;\undersym{X}{i}\big)\\
&=&\frac{1}{2}\Big((\nabla_{\;\undersym{X}{i}}T)(Y,\,Z,\;\undersym{X}{i})-(\nabla_YT)(\;\undersym{X}{i},\,Z,\;\undersym{X}{i})- T\big(T(Y,\;\undersym{X}{i}),\,Z,\;\undersym{X}{i}\big)\Big)\\
& &+\frac{1}{4}\Big(T\big(Y,\,T(\;\undersym{X}{i},\,Z),\;\undersym{X}{i}\big) -T\big(\;\undersym{X}{i},\,T(Y,\,Z),\;\undersym{X}{i}\big)\Big)\\
&=&\frac{1}{4}\Big(2\;\undersym{X}{i}\cdot
T(Y,\,Z,\;\undersym{X}{i})-2T(\nabla_{\;\undersym{X}{i}}Y,\,Z,\;\undersym{X}{i}) -2T(Y,\,\nabla_{\;\undersym{X}{i}}Z,\;\undersym{X}{i})-2(\nabla_YB)(Z)\\
& &-2T\big(T(Y,\;\undersym{X}{i}),\,Z,\;\undersym{X}{i}\big)+T\big(Y,\,T(\;\undersym{X}{i},\,Z),\;\undersym{X}{i}\big) -B\big(T(Y,\,Z)\big)\Big)\\
&=&\frac{1}{4}\Big(2\;\undersym{X}{i}\cdot T(Y,\,Z,\;\undersym{X}{i})-T(Y,\,\nabla_{\;\undersym{X}{i}}Z,\;\undersym{X}{i})-2(\nabla_YB)(Z)\\
& &-2T([\;\undersym{X}{i},\,Y],\,Z,\;\undersym{X}{i})-T(Y,\,[\;\undersym{X}{i},\,Z],\;\undersym{X}{i})-B\big(T(Y,\,Z)\big)\Big)\\
&=&\frac{1}{4}\Big(2(\mathcal{L}_{\;\undersym{X}{i}}T)(Y,\,Z,\;\undersym{X}{i})-T(Y,\,\nabla_{\;\undersym{X}{i}}Z,\;\undersym{X}{i}) -T(Y,\,[Z,\;\undersym{X}{i}],\;\undersym{X}{i})\\
& &-2(\nabla_YB)(Z)-B\big(T(Y,\,Z)\big)\Big)\\
&=&\frac{1}{2}(\mathcal{L}_{\;\undersym{X}{i}}T)(Y,\,Z,\;\undersym{X}{i}) +\frac{1}{4}T\big(Y,\,T(Z,\;\undersym{X}{i}),\;\undersym{X}{i}\big)-\frac{1}{2}(\nabla_YB)(Z)- \frac{1}{4}B\big(T(Y,\,Z)\big).
\end{eqnarray*}
\item [(c)$'$] Using (\ref{orthogonal}), {\bf{(c)}}, (\ref{AP-cond}), (\ref{contortion1}), Proposition \ref{rtc}{\bf{(b)}} and the torsionless property of $\oversetc{\nabla}$, we get
\begin{eqnarray*}
{\rm\oversetc{S}c}&=&{\rm\oversetc{R}ic}(\;\undersym{X}{j},\;\undersym{X}{j})\\
&=&(\mathcal{L}_{\;\undersym{X}{i}}C)(\;\undersym{X}{j},\;\undersym{X}{j},\;\undersym{X}{i}) +C\big(\;\undersym{X}{j},\,C(\;\undersym{X}{j},\;\undersym{X}{i}),\;\undersym{X}{i}\big)- (\nabla_{\;\undersym{X}{j}}B)(\;\undersym{X}{j})-B\big(C(\;\undersym{X}{j},\;\undersym{X}{j})\big)\\
&=&\;\undersym{X}{i}\cdot C(\;\undersym{X}{j},\;\undersym{X}{j},\;\undersym{X}{i})-C([\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{j},\;\undersym{X}{i}) -C(\;\undersym{X}{j},\,[\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{i})- \;\undersym{X}{j}\cdot B(\;\undersym{X}{j})\\
& &-C(\;\undersym{X}{j},\,\oversetc{\nabla}_{\;\undersym{X}{j}}\;\undersym{X}{i},\;\undersym{X}{i})+ B(\;\undersym{X}{i})B(\;\undersym{X}{i})\\
&=&-2\;\undersym{X}{i}\cdot B(\;\undersym{X}{i})+B(\;\undersym{X}{i})B(\;\undersym{X}{i})-C([\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{j},\;\undersym{X}{i})\\
& &-\Big(C(\;\undersym{X}{j},\,\oversetc{\nabla}_{\;\undersym{X}{j}}\;\undersym{X}{i},\;\undersym{X}{i}) +C(\;\undersym{X}{j},\,[\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{i})\Big)\\
&=&-2\;\undersym{X}{i}\cdot B(\;\undersym{X}{i})+B(\;\undersym{X}{i})B(\;\undersym{X}{i})+C\big(T(\;\undersym{X}{i},\;\undersym{X}{j}),\;\undersym{X}{j},\;\undersym{X}{i}\big)- C\big(\;\undersym{X}{j},\,\oversetc{\nabla}_{\;\undersym{X}{i}}\;\undersym{X}{j},\;\undersym{X}{i}\big)\\
&=&-2\;\undersym{X}{i}\cdot B(\;\undersym{X}{i})+B(\;\undersym{X}{i})B(\;\undersym{X}{i})+C\big(T(\;\undersym{X}{i},\;\undersym{X}{j}),\;\undersym{X}{j},\;\undersym{X}{i}\big)+ C\big(\;\undersym{X}{j},\,C(\;\undersym{X}{i},\;\undersym{X}{j}),\;\undersym{X}{i}\big).
\end{eqnarray*}
\end{description}
\vspace*{-1.2cm}\[\qedhere\]
\end{proof}
\vspace*{.26cm}
\begin{center}
{\large Table 1: Linear connections in AP-geometry}
\end{center}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Connection} & \multirow{2}{*}{Symbol} & \multirow{2}{*}{Torsion} & \multirow{2}{*}{Curvature} & \multirow{2}{*}{Metricity} \\
&&&&\\ \hline
\multirow{2}{*}{Canonical} & \multirow{2}{*}{$\nabla$} & \multirow{2}{*}{$T$} & \multirow{2}{*}{$0$} & \multirow{2}{*}{metric} \\
&&&&\\ \hline
\multirow{2}{*}{Dual} & \multirow{2}{*}{$\widetilde{\nabla}$} & \multirow{2}{*}{$-T$} & \multirow{2}{*}{$\widetilde{R}$} & \multirow{2}{*}{nonmetric} \\
&&&&\\ \hline
\multirow{2}{*}{Symmetric} & \multirow{2}{*}{$\widehat{\nabla}$} & \multirow{2}{*}{$0$} & \multirow{2}{*}{$\widehat{R}$} & \multirow{2}{*}{nonmetric} \\
&&&&\\ \hline
\multirow{2}{*}{Levi-Civita}& \multirow{2}{*}{$\oversetc{\nabla}$} & \multirow{2}{*}{$0$} & \multirow{2}{*}{$\oversetc{R}$} & \multirow{2}{*}{metric} \\
&&&&\\ \hline
\end{tabular}
\end{center}
\vspace*{1cm}
Let $D$ be an arbitrary linear connection on $M$ with torsion ${\bf{T}}$ and curvature ${\bf{R}}$. Then the Bianchi identities are given, for all $Y,\,Z,\,V,\,U\in\mathfrak{X}(M)$, by \cite{Kobayashi}:
\begin{description}
\item[First Bianchi identity:] $\underset{Y,\,Z,\,V}{\;\mathfrak{S}}\Big\{\;{\bf{R}}(Y,\,Z)V\Big\}=\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(D_V{\bf{T}})(Y,\,Z) +{\bf{T}}\big({\bf{T}}(Y,\,Z),\,V\big)\Big\}.$
\item[Second Bianchi identity:] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(D_V {\bf{R}})(Y,\,Z)U-{\bf{R}}\big(V,\,{\bf{T}}(Y,\,Z)\big)U\Big\}=0$.
\end{description}
In what follows, we derive some identities using the above Bianchi identities. Some of the derived identities will then be used to simplify other formulae.
\begin{proposition} \label{1st}
The first Bianchi identity for the connections $\nabla,\,\widetilde{\nabla},\,\widehat{\nabla}$ and $\oversetc{\nabla}$ reads:
\begin{description}
\item[(a)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\nabla_VT)(Y,\,Z)+T\big(T(Y,\,Z),\,V\big)\Big\}=0.$
\item[(b)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{\widetilde{R}(Y,\,Z)V\Big\}=\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{T\big(T(Y,\,Z),\,V\big) -(\widetilde{\nabla}_VT)(Y,\,Z)\Big\}.$
\item[(c)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{\widehat{R}(Y,\,Z)V\Big\}=0.$
\item[(d)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{\;\overcirc{R}(Y,\,Z)V\Big\}=0.$
\end{description}
\end{proposition}
The proof is straightforward, making use of the relations $R=0$, $\widetilde{T}=-T$ and $\widehat{T}=\oversetc{T}=0$.
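Identity {\bf{(a)}} of Proposition \ref{1st} can be tested directly in components, where $(\nabla_VT)(Y,Z)$ unfolds into a partial derivative plus three $\Gamma$-terms. A sketch, assuming the toy parallelization $X_1=\partial_x$, $X_2=e^x\partial_y$ of $\mathbb{R}^2$ (our illustrative choice, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
crd = [x, y]
n = 2
rng = range(n)

# Toy parallelization of R^2 (illustrative assumption): X_1 = d/dx, X_2 = e^x d/dy
e = sp.Matrix([[1, 0], [0, sp.exp(x)]])
Om = e.inv().T

Gam = [[[sp.simplify(sum(e[i, l]*sp.diff(Om[i, nu], crd[mu]) for i in rng))
         for nu in rng] for mu in rng] for l in rng]
Tor = [[[Gam[l][mu][nu] - Gam[l][nu][mu] for nu in rng] for mu in rng] for l in rng]

# Covariant derivative of the torsion with respect to the canonical connection:
# (nabla_rho T)^l_{mu nu}
def covT(l, rho, mu, nu):
    return (sp.diff(Tor[l][mu][nu], crd[rho])
            + sum(Gam[l][rho][s]*Tor[s][mu][nu]
                  - Gam[s][rho][mu]*Tor[l][s][nu]
                  - Gam[s][rho][nu]*Tor[l][mu][s] for s in rng))

# Cyclic sum of (nabla_V T)(Y,Z) + T(T(Y,Z),V) over (Y,Z,V) -> (mu,nu,rho)
def bianchi1(l, mu, nu, rho):
    out = sp.S(0)
    for a, b, c in [(mu, nu, rho), (nu, rho, mu), (rho, mu, nu)]:
        out += covT(l, c, a, b) + sum(Tor[s][a][b]*Tor[l][s][c] for s in rng)
    return sp.simplify(out)
```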
\begin{corollary}\label{corrbian}
The following identities hold:
\begin{description}
\item[(a)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\widetilde{\nabla}_VT)(Y,\,Z)\Big\} =2\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}.$
\item[(b)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\widehat{\nabla}_VT)(Y,\,Z)\Big\} =\frac{1}{2}\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}.$
\item[(c)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{\widetilde{R}(Y,\,Z)V\Big\} =-\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}.$
\end{description}
\end{corollary}
The proof follows from the above proposition together with Corollary \ref{difofconnections} and (\ref{dual}).
\begin{proposition} \label{2nd}
The second Bianchi identity for the connections $\widetilde{\nabla},\,\widehat{\nabla}$ and $\oversetc{\nabla}$ reads:
\begin{description}
\item[(a)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\widetilde{\nabla}_V\widetilde{R})(Y,\,Z)U\Big\} =\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\nabla_UT)\big(T(Y,\,Z),\,V\big)\Big\}.$
\item[(b)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\widehat{\nabla}_V\widehat{R})(Y,\,Z)U\Big\} =0.$
\item[(c)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\;\overcirc{\nabla}_V\oversetc{R})(Y,\,Z)U\Big\} =0.$
\end{description}
\end{proposition}
The proof is straightforward making use of (\ref{dual}).
\vspace*{.26cm}Now, we will give another formula for the curvature tensor $\widehat{R}$ of the symmetric connection $\widehat{\nabla}$ which is more compact than (\ref{sym}).
\begin{theorem}
The curvature tensor $\widehat{R}$ can be written in the form:
\begin{equation*}
\widehat{R}(Y,\,Z)V=\frac{1}{2}(\nabla_VT)(Y,\,Z)-\frac{1}{4}\Big(T\big(Y,\,T(Z,\,V)\big)+T\big(Z,\,T(V,\,Y)\big)\Big).
\end{equation*}
\end{theorem}
\begin{proof}
Taking into account (\ref{sym}) and Proposition \ref{1st}{\bf{(a)}}, one has
\begin{eqnarray*}
\widehat{R}(Y,\,Z)V&=&-\frac{1}{2}\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\nabla_VT)(Y,\,Z)\Big\}+\frac{1}{2}(\nabla_VT)(Y,\,Z) \\
& &+\frac{1}{4}\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{T\big(V,\,T(Y,\,Z)\big)\Big\}+\frac{1}{4}T\big(V,\,T(Y,\,Z)\big)\\
&=&-\frac{1}{4}\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{T\big(V,\,T(Y,\,Z)\big)\Big\}+\frac{1}{4}T\big(V,\,T(Y,\,Z)\big) +\frac{1}{2}(\nabla_VT)(Y,\,Z)\\
&=&\frac{1}{2}(\nabla_VT)(Y,\,Z)-\frac{1}{4}\Big(T\big(Y,\,T(Z,\,V)\big)+T\big(Z,\,T(V,\,Y)\big)\Big).
\end{eqnarray*}
\vspace*{-1.6cm}\[\qedhere\]
\end{proof}
\begin{corollary}\label{exception1}
On an AP-space $(M,\;\undersym{X}{i})$ the Ricci-like tensor ${\rm\widehat{R}ic}$ with respect to the symmetric connection $\widehat{\nabla}$ can be written as:
\begin{equation*}
{\rm\widehat{R}ic}(Y,\,Z)=-\frac{1}{2}(\nabla_ZB)(Y)+\frac{1}{4}B\big(T(Y,\,Z)\big)-\frac{1}{4}T\big(Y,\,T(\;\undersym{X}{i},\,Z),\;\undersym{X}{i}\big).
\end{equation*}
\end{corollary}
\vspace*{.26cm}
It is to be noted that the expression $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}$ appears in many of the identities obtained above. We discuss now the case in which this expression vanishes.
Let us write $[\;\undersym{X}{i},\;\undersym{X}{j}]=:C^h_{ij}\;\undersym{X}{h}$. The functions $C^h_{ij}\in\mathfrak{F}(M)$ are global functions on $M$ and will be referred to as the global structure coefficients of the AP-space. They can be written explicitly in the form $C^h_{ij}=\;\undersym{\Omega}{h}([\;\undersym{X}{i},\;\undersym{X}{j}])$. The last expression may be considered as a definition of the global structure coefficients.
\begin{theorem}\label{Golden}
On an AP-space $(M,\;\undersym{X}{i})$ the expression $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}$ vanishes if and only if, for all $h$, the expression $\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{\;\undersym{X}{k}\cdot C^h_{ij}\Big\}$ vanishes.\\
Consequently, if the global structure coefficients of the AP-space are constant functions on $M$, then $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}=0$.
\end{theorem}
\begin{proof}
Using the parallelization vector fields instead of $Y,\,Z$ and $V$, we have:
\begin{eqnarray*}
\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{T\big(T(\;\undersym{X}{i},\;\undersym{X}{j}),\;\undersym{X}{k}\big)\Big\}=0
&\Longleftrightarrow&-\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{T([\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{k})\Big\}=0,\;\text{by (\ref{torsion})}\\
&\Longleftrightarrow&\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{\nabla_{\;\undersym{X}{k}}[\;\undersym{X}{i},\;\undersym{X}{j}] +\big[[\;\undersym{X}{i},\;\undersym{X}{j}],\;\undersym{X}{k}\big]\Big\}=0,\;\text{by (\ref{AP-cond})}\\
&\Longleftrightarrow&\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{\nabla_{\;\undersym{X}{k}}[\;\undersym{X}{i},\;\undersym{X}{j}]\Big\}=0,\;\text{by Jacobi identity}\\
&\Longleftrightarrow&\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{\big(\;\undersym{X}{k}\cdot \;\undersym{\Omega}{h}([\;\undersym{X}{i},\;\undersym{X}{j}])\big)\;\undersym{X}{h}\Big\}=0,\;\text{by (\ref{canonical})}\\
&\Longleftrightarrow&\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{\big(\;\undersym{X}{k}\cdot \;\undersym{\Omega}{h}([\;\undersym{X}{i},\;\undersym{X}{j}])\big)\Big\}\;\undersym{X}{h}=0\\
&\Longleftrightarrow&\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{\big(\;\undersym{X}{k}\cdot \;\undersym{\Omega}{h}([\;\undersym{X}{i},\;\undersym{X}{j}])\big)\Big\}=0\;\,\forall h,\;\text{by the independence of $\;\undersym{X}{i}$}\\
&\Longleftrightarrow&\underset{i,\,j,\,k}{\mathfrak{S}}\Big\{\;\undersym{X}{k}\cdot C^h_{ij}\Big\}=0\;\,\forall h,\;\text{by (\ref{base})}.
\end{eqnarray*}
\vspace*{-1.6cm}\[\qedhere\]
\end{proof}
\vspace*{.1cm} It should be noted that for the natural basis
$\{\frac{\partial}{\partial x^{\alpha}}:\alpha=1,\,...,\,n\}$, the bracket
$[\frac{\partial}{\partial x^{\alpha}},\,\frac{\partial}{\partial x^{\beta}}]=0$ and so the
structure coefficients associated with $(\frac{\partial}{\partial x^{\alpha}})$
vanish. For this reason the \textit{local} expression (in the
natural basis) of the identity
$\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}=0$ is
valid as is established in \cite{local}.
\vspace*{.1cm}The last theorem gives rise to the following interesting formulae.
\begin{corollary}\label{golden}
In an AP-space $(M,\;\undersym{X}{i})$, if the global structure coefficients of the AP-space are constant functions on $M$, then the next formulae hold:
\begin{description}
\item[(a)] $(\nabla_VT)(Y,\,Z)=(\widetilde{\nabla}_VT)(Y,\,Z)=(\widehat{\nabla}_VT)(Y,\,Z)$.
\item[(b)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{(\nabla_VT)(Y,\,Z)\Big\}=0$.
\item[(c)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{(\widetilde{\nabla}_VT)(Y,\,Z)\Big\}=0$.
\item[(d)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{(\widehat{\nabla}_VT)(Y,\,Z)\Big\}=0$.
\item[(e)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{\widetilde{R}(Y,\,Z)V\Big\}=0$.
\item[(f)] $\widehat{R}(Y,\,Z)V=\frac{1}{2}(\nabla_VT)(Y,\,Z)-\frac{1}{4}T\big(T(Y,\,Z),\,V\big)$.
\item[(g)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{(\nabla_VC)(Y,\,Z)\Big\}=\underset{Y,\,Z,\,V}{\mathfrak{S}}\Big\{(\nabla_VC)(Z,\,Y)\Big\}$.
\end{description}
\end{corollary}
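Theorem \ref{Golden} and Corollary \ref{golden} can be checked numerically on a concrete example. The following Python sketch is our illustration, not part of the paper: it uses the parallelization given by a Lie group frame with constant structure coefficients, namely $C^h_{ij}=\epsilon_{ijh}$ (the su(2) case), for which the vanishing of the cyclic sum reduces to the Jacobi identity for $C$.

```python
import itertools

import numpy as np

# Levi-Civita symbol eps_{ijh}: constant structure coefficients of su(2),
# [X_i, X_j] = eps_{ijh} X_h.  Store them as C[h, i, j] = C^h_{ij}.
eps = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps[p] = np.linalg.det(np.eye(3)[list(p)])  # sign of the permutation p
C = np.transpose(eps, (2, 0, 1))

# Components of T(T(X_i, X_j), X_k): in the parallelization basis the torsion
# components coincide with the structure coefficients (up to sign), so the
# cyclic sum over (i, j, k) vanishing is exactly the Jacobi identity for C.
TT = np.einsum('hij,mhk->mijk', C, C)
cyc = (TT
       + np.transpose(TT, (0, 3, 1, 2))    # arguments permuted (i,j,k) -> (j,k,i)
       + np.transpose(TT, (0, 2, 3, 1)))   # arguments permuted (i,j,k) -> (k,i,j)
print(np.allclose(cyc, 0))  # True
```

With non-constant structure coefficients the cyclic sum is instead controlled by $\underset{i,\,j,\,k}{\mathfrak{S}}\big\{\;\undersym{X}{k}\cdot C^h_{ij}\big\}$, exactly as Theorem \ref{Golden} states.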
\section{Wanas Tensor}
\hspace*{.4cm} The Wanas tensor was first defined locally by M. I. Wanas in 1975
\cite{unificationT}. It has been used by F. Mikhail and M. Wanas
\cite{unificationP} to construct a pure geometric theory unifying
gravity and electromagnetism. In this section, we introduce the
global definition of the Wanas tensor and investigate it.
\begin{definition}
Let $(M,\;\undersym{X}{i})$ be an AP-space. The tensor field $W$ of type (1,\,3) on $M$ defined by the formula
\begin{equation*}
W(Y,\,Z)\;\undersym{X}{i}=\widetilde{\nabla}^2_{\;Y,\,Z}\;\undersym{X}{i}-\widetilde{\nabla}^2_{\;Z,\,Y}\;\undersym{X}{i},
\end{equation*}
where $\,\widetilde{\nabla}^2_{\;Y,\,Z}=\widetilde{\nabla}_Y\widetilde{\nabla}_Z-\widetilde{\nabla}_{\widetilde{\nabla}_YZ}$, is called the Wanas tensor, or simply the W-tensor, of $(M,\;\undersym{X}{i})$.
\end{definition}
Using (\ref{base}), for every $Y,\,Z,\,V\in\mathfrak{X}(M)$, we get
\begin{equation}\label{wanas1}
W(Y,\,Z)V=\big(\widetilde{\nabla}^2_{\;Y,\,Z}\;\undersym{X}{i}-\widetilde{\nabla}^2_{\;Z,\,Y}\;\undersym{X}{i}\big)\;\undersym{\Omega}{i}(V).
\end{equation}
The next result gives a quite simple expression for such a tensor.
\begin{theorem}
The W-tensor satisfies the following identity
\begin{equation} \label{impw}
W(Y,\,Z)V=\widetilde{R}(Y,\,Z)V-T\big(T(Y,\,Z),\,V\big).
\end{equation}
\end{theorem}
\begin{proof}
Consider the commutation formula for the parallelization vector field $\;\undersym{X}{i}$ with respect to $\widetilde{\nabla}$:
$$\widetilde{\nabla}^2_{\;Y,\,Z}\;\undersym{X}{i}-\widetilde{\nabla}^2_{\;Z,\,Y}\;\undersym{X}{i}=\widetilde{R}(Y,\,Z)\;\undersym{X}{i} -\widetilde{\nabla}_{\widetilde{T}(Y,\,Z)}\;\undersym{X}{i}.$$
Consequently,
\begin{eqnarray*}
W(Y,\,Z)V&\overset{(\ref{wanas1})}{=}&\;\undersym{\Omega}{i}(V)\widetilde{R}(Y,\,Z)\;\undersym{X}{i} -\;\undersym{\Omega}{i}(V)\widetilde{\nabla}_{\widetilde{T}(Y,\,Z)}\;\undersym{X}{i} \\
&=&\widetilde{R}(Y,\,Z)V+\;\undersym{\Omega}{i}(V)\widetilde{\nabla}_{T(Y,\,Z)}\;\undersym{X}{i},\;\text{by (\ref{base})} \\
&=&\widetilde{R}(Y,\,Z)V+\widetilde{\nabla}_{T(Y,\,Z)}\;\undersym{\Omega}{i}(V)\;\undersym{X}{i} -\big(T(Y,\,Z)\cdot\;\undersym{\Omega}{i}(V)\big)\;\undersym{X}{i} \\
&=&\widetilde{R}(Y,\,Z)V+\widetilde{\nabla}_{T(Y,\,Z)}V-\nabla_{T(Y,\,Z)}V,\;\text{by (\ref{base}) and (\ref{canonical})} \\
&=&\widetilde{R}(Y,\,Z)V-T\big(T(Y,\,Z),\,V\big),\;\text{by Proposition \ref{different-connections}}.
\end{eqnarray*}
\vspace*{-1.6cm}\[\qedhere\]
\end{proof}
\begin{corollary}
The W-tensor can be expressed in the form:
\begin{equation}\label{wanas2}
W(Y,\,Z)V=(\nabla_VT)(Y,\,Z)-T\big(T(Y,\,Z),\,V\big).
\end{equation}
\end{corollary}
In fact, this expression follows from (\ref{dual}). This shows that the W-tensor is expressed in terms of the torsion tensor of the AP-space only.
\begin{proposition} \label{w1stt}
The Wanas tensor has the following properties:
\begin{description}
\item[(a)] $W(Y,\,Z)V$ is skew symmetric in the first two arguments $Y,\,Z$.
\item[(b)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{W(Y,\,Z)\,V\Big\} =-2\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{T\big(T(Y,\,Z),\,V\big)\Big\}.$
\item[(c)] $\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{W(Y,\,Z)\,V\Big\} =-\;\underset{Y,\,Z,\,V}{\mathfrak{S}}\;\Big\{(\widetilde{\nabla}_VT)(Y,\,Z)\Big\}.
\end{description}
\end{proposition}
\begin{proof}
Property {\bf{(a)}} is trivial, {\bf{(b)}} follows from Proposition \ref{1st}{\bf{(a)}} and (\ref{wanas2}), {\bf{(c)}} follows from {\bf{(b)}} and Corollary \ref{corrbian}{\bf{(a)}}.
\end{proof}
{The identity satisfied by the W-tensor in Proposition \ref{w1stt}{\bf{(b)}} is the same as the first Bianchi identity \big(Corollary \ref{corrbian}{\bf{(c)}}\big) of the dual curvature tensor up to a constant. The
identity corresponding to the second Bianchi identity is given by:}
\begin{proposition}
The W-tensor satisfies the following identity:
\begin{eqnarray} \label{w2nd}
& & \underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\widetilde{\nabla}_VW)(Y,\,Z)U\Big\} \nonumber \\ &=&-\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{T\Big(T\big(T(Y,\,Z),\,V\big),\,U\Big)+T\Big(T\big(T(Y,\,Z),\,U\big),\,V\Big) +T\big(T(U,\,V),\,T(Y,\,Z)\big)\Big\} \nonumber \\
& &+\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\nabla_UT)\big(T(Y,\,Z),\,V\big)-(\nabla_VT)\big(T(Y,\,Z),\,U\big)\Big\}
\end{eqnarray}
\end{proposition}
\begin{proof}
Taking into account (\ref{impw}) together with Proposition \ref{2nd}{\bf{(a)}}, Corollary \ref{corrbian}{\bf{(a)}} and Corollary \ref{difofconnections}{\bf{(a)}}, we get
\begin{eqnarray*}
& &\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\widetilde{\nabla}_VW)(Y,\,Z)U\Big\} \\
&=&\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{\Big((\widetilde{\nabla}_V\widetilde{R})(Y,\,Z)U -(\widetilde{\nabla}_VT)\big(T(Y,\,Z),\,U\big)-T\big((\widetilde{\nabla}_VT)(Y,\,Z),\,U\big)\Big)\Big\}\\
&=&\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\nabla_UT)\big(T(Y,\,Z),\,V\big)-(\widetilde{\nabla}_VT)\big(T(Y,\,Z),\,U\big) -2T\Big(T\big(T(Y,\,Z),\,V\big),\,U\Big)\Big\} \\
&=&\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\nabla_UT)\big(T(Y,\,Z),\,V\big)-(\nabla_VT)\big(T(Y,\,Z),\,U\big)\Big\} \\
& &-\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{2T\Big(T\big(T(Y,\,Z),\,V\big),\,U\Big) +\underset{V,\,T(Y,\,Z),\,U}{\mathfrak{S}}\;T\Big(T\big(T(Y,\,Z),\,U\big),\,V\Big)\Big\} \\
&=&\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\nabla_UT)\big(T(Y,\,Z),\,V\big)-(\nabla_VT)\big(T(Y,\,Z),\,U\big)\Big\} \\
& &-\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{T\Big(T\big(T(Y,\,Z),\,V\big),\,U\Big)+T\Big(T\big(T(Y,\,Z),\,U\big),\,V\Big) +T\big(T(U,\,V),\,T(Y,\,Z)\big)\Big\}.
\end{eqnarray*}
\vspace*{-1.6cm}\[\qedhere\]
\end{proof}
\begin{corollary}
In an AP-space $(M,\;\undersym{X}{i})$, if the global structure coefficients of the AP-space are constant, we have
\begin{description}
\item[(a)] $\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{W(Y,\,Z)V\Big\}=0$.
\item[(b)] $\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\widetilde{\nabla}_VW)(Y,\,Z)U\Big\} =\underset{V,\,Y,\,Z}{\mathfrak{S}}\;\Big\{(\nabla_UT)\big(T(Y,\,Z),\,V\big)- (\nabla_VT)\big(T(Y,\,Z),\,U\big)\Big\}$
\end{description}
\end{corollary}
The proof is straightforward from Theorem \ref{Golden} and Corollary \ref{golden}.
\vspace{8pt} We end this section with the following comments and
remarks on the Wanas tensor.
\begin{itemize}
\item The W-tensor is defined by using the commutation formula with respect to the dual connection $\widetilde{\nabla}$.
Nothing new arises from the same definition if we use the other three connections ($\nabla,\,\widehat{\nabla}$ and $\oversetc{\nabla}$).
\item Using the commutation formula for the parallelization form $\;\undersym{\Omega}{i}$ instead of the parallelization
vector field $\;\undersym{X}{i}$ in the definition of the
W-tensor:
\begin{equation*}
W(Y,\,Z)V=\Big((\widetilde{\nabla}^2_{\;Z,\,Y}\;\undersym{\Omega}{i})(V) -(\widetilde{\nabla}^2_{\;Y,\,Z}\;\undersym{\Omega}{i})(V)
\Big)\;\undersym{X}{i}
\end{equation*}
gives the same formula (\ref{impw}) for the W-tensor and consequently the same properties.
\item Being defined by using the parallelization vector fields $\;\undersym{X}{i}$, the Wanas tensor is defined
only in AP-geometry. It has no analogue in other geometries.
\item Although the W-tensor and the dual curvature tensor share some properties (for example,
Proposition \ref{w1stt}{\bf{(b)}}), they also differ significantly (for example, (\ref{w2nd})).
In the case of constant global structure coefficients, the W-tensor has some properties in common with the Riemannian curvature $\;\overcirc{R}$.
\item For a physical discussion concerning the W-tensor we refer to \cite{local}.
\end{itemize}
\section{Parallelization basis versus natural basis}
\hspace*{.4cm} This section is devoted to a double-view for the fundamental
geometric objects of AP-geometry. On one hand, we consider the local
expressions of these geometric objects in the natural basis
\cite{local} and, on the other hand, we compute their expressions in
the parallelization basis, giving rise to a concise table expressing
this double-view.
\vspace*{.26cm}Let $\big(U,\,(x^{\alpha})\big)$ be a local
coordinate system of $M$. At each point $x\in U$, we have two
distinguished bases of $T_xM$, namely, the natural basis
$\{\partial_{\mu}:=\frac{\partial}{\partial x^{\mu}}:
\mu=1,\,...,\,n\}$ and the parallelization basis
$\{\;\undersym{X}{i}(x):i=1,\,...,\,n\}$. These two bases are
fundamentally different. The parallelization vector fields
$\;\undersym{X}{i}$ are defined globally on the manifold $M$ whereas
the natural basis vector fields $\partial_{\mu}$ are defined only on
the coordinate neighborhood $U$. Consequently, the natural basis
vector fields depend crucially on coordinate systems whereas the
parallelization vector fields do not.
\vspace*{.26cm} Greek (world) indices are related to the natural
basis and Latin (mesh) indices are related to the parallelization
basis. Einstein summation convention will be applied as usual on
Greek indices. It will also be applied on Latin indices whatever
their position is (even if the two repeated indices are upward or
downward).
\vspace*{.26cm}A tensor field $H$ of type $(r,\,s)$ on $M$ is
written in the natural basis in the form:
$$\;H=H^{\alpha_1\,...\,\alpha_r\,}_{{\mu_1\,...\,\mu_s\,}}\partial_{\alpha_1}\otimes...
\otimes\partial_{\alpha_r}\otimes{dx^{\mu_1}}\otimes...\otimes{dx^{\mu_s}},\,\,
\text{on $U$}$$ and in the parallelization basis in the form:
$$\;H=H^{i_1\,...\,i_r\,}_{{j_1\,...\,j_s\,}}\;\underset{i_1}{X}
\otimes...\otimes\;\underset{i_r}{X}\otimes\;\underset{j_1}{\Omega}\otimes...\otimes\;\underset{j_s}{\Omega},
\,\, \text{on}\,\, M,$$ where\,
$H^{\alpha_1\,...\,\alpha_r}_{\mu_1\,...\,\mu_s}\in\mathfrak{F}(U)$
and $H^{i_1\,...\,i_r}_{j_1\,...\,j_s}\in\mathfrak{F}(M)$.
\vspace*{.26cm}A vector field $Y\in\mathfrak{X}(M)$ is written in the natural basis
in the form $Y=Y^{\alpha}\partial_{\alpha}$ and in the
parallelization basis in the form $Y=Y^{i}\;\undersym{X}{i}$. In
particular,
$\;\undersym{X}{i}=\;\undersym{X}{i}^{\alpha}\partial_{\alpha}$\,
and \,
$\partial_{\alpha}=\;\undersym{\Omega}{i}(\partial_{\alpha})\;\undersym{X}{i}=\;\undersym{\Omega}{i}_{\alpha}\;\undersym{X}{i}$.
Hence\, $(\;\undersym{X}{i}^{\alpha})_{1\leq\alpha,i\leq n}$\, is
the matrix of change of bases and\,
$(\;\undersym{\Omega}{i}_{\alpha})_{1\leq\alpha,i\leq n}$\, is the
inverse matrix.
\vspace*{.26cm}We use the following notations (with similar notations with respect to mesh indices):\\
$\Gamma^{\alpha}_{\mu\nu},\,\widetilde{\Gamma}^{\alpha}_{\mu\nu},\,\widehat{\Gamma}^{\alpha}_{\mu\nu},\;\overcirc{\Gamma}^{\alpha}_{\mu\nu}$: the coefficients
of the linear connections $\nabla,\,\widetilde{\nabla},\,\widehat{\nabla},\,\oversetc{\nabla}$ respectively,\\
$\widetilde{|}$: the covariant derivative with respect to the dual connection $\widetilde{\nabla}$,\\
$g_{\mu\nu}$ (resp. $g^{\mu\nu}$): the covariant (resp. contravariant) components of the metric tensor $g$,\\
$\Lambda^{\alpha}_{\mu\nu}$: the components of the torsion tensor $T$,\\
$B_{\alpha}$: the components of the basic form $B$,\\
$\gamma^{\alpha}_{\mu\nu}$: the components of the contortion tensor $C$,\\
$W^{\alpha}_{\sigma\mu\nu}$: the components of the Wanas tensor $W$.
\vspace*{.26cm}Let $D$ be an arbitrary connection on $M$ with
torsion tensor $T$ and curvature tensor $R$. We use the following
conventions:
$D_{\partial_{\mu}}\partial_{\nu}=D^{\alpha}_{\nu\mu}\,\partial_{\alpha}$,\,
$T(\partial_{\mu},\,\partial_{\nu})=T^{\alpha}_{\nu\mu}\,\partial_{\alpha}$,\,
$R(\partial_{\mu},\,\partial_{\nu})\partial_{\sigma}=R^{\alpha}_{\sigma\mu\nu}\partial_{\alpha}$,
with similar conventions with respect to Latin indices.
\vspace*{.26cm} The next table gives a comparison between the most
important geometric objects of AP-geometry expressed in the natural
basis and in the parallelization basis. Geometric objects, equations
or identities having the same form in the two bases are not
included in that table. However, if a
geometric object has the same form in the two bases, that is, if its
expressions in world indices and mesh indices are similar, this
does not mean that the geometric meaning of these two expressions is
the same.
\clearpage
\begin{center}
\large{Table 2: Parallelization basis versus natural basis}
\end{center}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\multirow{3}{*}{Geometric object}& Local form & Global form \\
& In the natural basis & In the parallelization basis \\
& (world indices) & (mesh indices) \\ \hline
&\multirow{3}{*}{$\;\undersym{X}{i}^{\alpha}\;\undersym{\Omega}{j}_{\alpha}=\delta_{ij}, \quad\;\undersym{X}{i}^{\alpha}\;\undersym{\Omega}{i}_{\mu}=\delta^{\alpha}_{\mu}$}&\multirow{3}{*}{$\;\undersym{X}{j}^{k}=\delta_{j}^{k}, \quad\;\undersym{\Omega}{j}_{k}=\delta_{jk}$} \\ Parallelization vector fields, & & \\
parallelization forms & & \\[8pt] \hline
\multirow{2}{*}{Metric tensor}&\multirow{2}{*}{$g_{\mu\nu}=\;\undersym{\Omega}{i}_{\mu}\;\undersym{\Omega}{i}_{\nu}$}&\multirow{2}{*}{$g_{jk}=\delta_{jk}$}\\
& & \\ \hline
\multirow{4}{*}{Canonical connection} & \multirow{2}{*}{$\Gamma^{\alpha}_{\nu\mu}=\;\undersym{X}{i}^{\alpha}\;\undersym{\Omega}{i}_{\nu,\mu}$} & \multirow{4}{*}{$\Gamma_{jk}^{h}=0$} \\ & & \\ & \multirow{2}{*}{where $,_{\mu}$ denotes $\partial_{\mu}$} & \\ & & \\ \hline
\multirow{2}{*}{Dual connection}&\multirow{2}{*}{$\widetilde{\Gamma}^{\alpha}_{\nu\mu}=\Gamma^{\alpha}_{\mu\nu}$}&\multirow{2}{*}{$\widetilde{\Gamma}^h_{jk}=C^h_{kj}$}\\
& & \\ \hline
\multirow{2}{*}{Symmetric connection}&\multirow{2}{*}{$\widehat{\Gamma}^{\alpha}_{\nu\mu}=\frac{1}{2}(\Gamma^{\alpha}_{\nu\mu}+\Gamma^{\alpha}_{\mu\nu})$}& \multirow{2}{*}{$\widehat{\Gamma}^h_{jk}= \frac{1}{2}C^h_{kj}$}\\ && \\ \hline
\multirow{2}{*}{Levi-Civita connection}&\multirow{2}{*}{$\;\overcirc{\,\Gamma}^{\alpha}_{\nu\mu}=\frac{1}{2}g^{\alpha\sigma}(g_{\sigma\nu,\mu}+g_{\mu\sigma,\nu}- g_{\nu\mu,\sigma})$}&\multirow{2}{*}{$\;\overcirc{\,\Gamma}^h_{jk}=\frac{1}{2}\Big(C^h_{kj}+C^j_{hk}+C^k_{hj}\Big)$}\\ && \\ \hline
\multirow{2}{*}{Torsion tensor}&\multirow{2}{*}{$\Lambda^{\alpha}_{\nu\mu}=\Gamma^{\alpha}_{\nu\mu}-\Gamma^{\alpha}_{\mu\nu}$}&\multirow{2}{*}{$\Lambda_{jk}^h=C^h_{jk}$} \\
& & \\ \hline
\multirow{2}{*}{Contortion tensor}&\multirow{2}{*}{$\gamma^{\alpha}_{\nu\mu}=\Gamma^{\alpha}_{\nu\mu}-\;\overcirc{\,\Gamma}^{\alpha}_{\nu\mu}$}& \multirow{2}{*}{$\gamma^h_{jk}=-\;\overcirc{\,\Gamma}^h_{jk}$}\\
& & \\ \hline
&\multirow{2}{*}{$\Lambda^{\alpha}_{\nu\mu}=\gamma^{\alpha}_{\nu\mu}-\gamma^{\alpha}_{\mu\nu}$}&\multirow{6}{*}{$\Lambda^{h}_{jk}=\gamma^{h}_{jk} -\gamma^{h}_{kj}$} \\ [-.15cm] & & \\
Torsion in terms &\multirow{2}{*}{$\Lambda_{\sigma\nu\mu}=\gamma_{\sigma\nu\mu}-\gamma_{\sigma\mu\nu}$}& \\ of contortion & & \\[-3pt]
& \multirow{2}{*}{where $\Lambda_{\mu\nu\sigma}=g_{\epsilon\mu}\Lambda^{\epsilon}_{\nu\sigma}$ and $\gamma_{\mu\nu\sigma}=g_{\epsilon\mu}\gamma^{\epsilon}_{\nu\sigma}$} & \\ & & \\ \hline
&\multirow{2}{*}{$\gamma^{\alpha}_{\nu\mu}=\frac{1}{2}\Big(\Lambda^{\alpha}_{\nu\mu}+(\Lambda_{\mu\nu\epsilon} +\Lambda_{\nu\mu\epsilon})g^{\alpha\epsilon}\Big)$}&\multirow{4}{*}{${\gamma^h_{jk}}= \frac{1}{2}(C^h_{jk}+C^k_{jh}+C^j_{kh})$}\\
Contortion in terms& &\\
of torsion &\multirow{2}{*}{$\gamma_{\mu\nu\sigma}=\frac{1}{2}(\Lambda_{\sigma\nu\mu}+\Lambda_{\mu\nu\sigma}+\Lambda_{\nu\sigma\mu})$}&\\
& &\\ \hline
\multirow{2}{*}{Basic form}&\multirow{2}{*}{$B_{\mu}=\Lambda^{\alpha}_{\mu\alpha}=\gamma^{\alpha}_{\mu\alpha}$}&\multirow{2}{*}{$B_j=\Lambda^k_{jk}=\gamma^k_{jk}=C^k_{jk}$}\\ && \\ \hline
\multirow{4}{*}{Wanas tensor}&\multirow{2}{*}{$W^{\alpha}_{\sigma\mu\nu}=(\;\undersym{X}{i}^{\alpha}_{\;\;\widetilde{|}\nu\mu} -\;\undersym{X}{i}^{\alpha}_{\;\;\widetilde{|}\mu\nu})\;\undersym{\Omega}{i}_{\sigma}$} &\multirow{2}{*}{$W^h_{kij}=\;\undersym{X}{k}^h_{\;\;\widetilde{|}ji}-\;\undersym{X}{k}^h_{\;\;\widetilde{|}ij}$}\\
& & \\
&\multirow{2}{*}{$W^{\alpha}_{\sigma\mu\nu}=\Lambda^{\epsilon}_{\mu\nu}\Lambda^{\alpha}_{\sigma\epsilon}-\Lambda^{\alpha}_{\mu\nu|\sigma}$}
&\multirow{2}{*}{$W^h_{kij}=C^l_{ij}C^h_{kl}-\;\undersym{X}{k}\cdot C^h_{ij}$}\\
& & \\ \hline
\end{tabular}
\end{center}
The above table merits some comments. We conclude this section, and
the paper, with the following remarks.
\begin{itemize}
\item The third column of the above table is obtained by computing
the expression of the geometric objects in the parallelization
basis. For example, to compute the coefficients of the Levi-Civita connection $\;\overcirc{\Gamma}^{h}_{jk}$, set\, $Y=\;\undersym{X}{k},\,
Z=\;\undersym{X}{j},\, V=\;\undersym{X}{h}$ in (\ref{riemannian}).
Then, we get
\begin{eqnarray*}
2g(\nabla_{\;\undersym{X}{k}}\;\undersym{X}{j},\;\undersym{X}{h})&=&\;\undersym{X}{k}\cdot g(\;\undersym{X}{j},\;\undersym{X}{h})
+\;\undersym{X}{j}\cdot g(\;\undersym{X}{h},\;\undersym{X}{k})-\;\undersym{X}{h}\cdot g(\;\undersym{X}{k},\;\undersym{X}{j})\\
& &-g(\;\undersym{X}{k},[\;\undersym{X}{j},\;\undersym{X}{h}])+g(\;\undersym{X}{j},[\;\undersym{X}{h},\;\undersym{X}{k}])
+g(\;\undersym{X}{h},[\;\undersym{X}{k},\;\undersym{X}{j}]).
\end{eqnarray*}
For the left-hand side (LHS),
$$LHS=2\, g(\;\overcirc{\Gamma}^l_{jk}\;\undersym{X}{l},\;\undersym{X}{h})=2\, g_{lh}\; \overcirc{\Gamma}^l_{jk}=
2\, \delta_{lh}\; \overcirc{\Gamma}^l_{jk}=2\; \overcirc{\Gamma}^h_{jk}.$$
As $\;\undersym{X}{k}\cdot g(\;\undersym{X}{j},\;\undersym{X}{h})=\;\undersym{X}{k}\cdot g_{jh}=\;\undersym{X}{k}\cdot \delta_{jh}=0$,
the first three terms of the right-hand side (RHS) vanish.
Hence,
\vspace{-6pt}
\begin{eqnarray*}
RHS &=& -g(\;\undersym{X}{k},C^l_{jh}\;\undersym{X}{l})+g(\;\undersym{X}{j},C^l_{hk}\;\undersym{X}{l})+g(\;\undersym{X}{h},C^l_{kj}\; \undersym{X}{l})\\
&=& -g_{kl}\, C^l_{jh}+g_{jl}\, C^l_{hk}+g_{hl}\, C^l_{kj} \\
&=& -C^k_{jh}+C^j_{hk}+C^h_{kj}.
\end{eqnarray*}
Accordingly, $$\;\overcirc{\Gamma}^h_{jk}=\frac{1}{2}(C^j_{hk}+C^h_{kj}-C^k_{jh}).$$
\item It is clear from the third column that almost all geometric objects of AP-geometry are expressed in terms of the
global structure coefficients $C^h_{jk}$.
The global structure coefficients thus play a dominant role in AP-geometry
formulated in mesh indices. Their role is similar to, and even more
important than, the role played by the torsion tensor
$\Lambda^h_{jk}$ in AP-geometry formulated in world indices.
\item The structure coefficients $C^\alpha_{\mu\nu}$ with respect to an arbitrary
basis $(e_\alpha)$ are not the components of a $(1,2)$-tensor field. In
fact, let $e_{\alpha'}=A^\alpha_{\alpha'}\,e_\alpha$ under a change of local
coordinates from $(x^\alpha)$ to $(x^{\alpha'})$ and let
$[e_\mu,e_\nu]=C^\alpha_{\mu\nu}e_\alpha$ and
$[e_{\mu'},\,e_{\nu'}]=C^{\alpha'}_{\mu'\nu'}\, e_{\alpha'}$. Then, one can easily show that the transformation formula
for $C^\alpha_{\mu\nu}$ has the form:
$$C^{\alpha'}_{\mu'\nu'}=A^{\alpha'}_{\alpha}\, A^{\mu}_{\mu'}\,
A^{\nu}_{\nu'}\,
C^\alpha_{\mu\nu}+K^{\alpha'}_{\mu'\nu'}-K^{\alpha'}_{\nu'\mu'},$$
where $K^\alpha_{\mu\nu}=A^{\mu'}_{\mu}\, A^{\alpha}_{\alpha'}\,(e_{\mu'}\cdot A^{\alpha'}_\nu)$.
Thus, $C^\alpha_{\mu\nu}$ are not the components of a tensor field of
type $(1,2)$ unless $e_{\mu'}\cdot A^{\alpha'}_{\nu}=0$ (that is, the matrix of
change of bases $A^{\alpha'}_{\alpha}$ is a constant matrix) or $K^\alpha_{\mu\nu}$ is
symmetric with respect to $\mu$ and $\nu$. Also, the global
structure coefficients $C^h_{jk}$ are not the components of a
$(1,2)$-tensor field (they are $n^3$ functions defined globally on $M$
and having certain properties). Nevertheless, for fixed $j$ and
$k$, $C^h_{jk}$ are the components of the $(1,0)$-tensor field $[\;\undersym{X}{j},\;\undersym{X}{k}]$.
\item In the parallelization basis, although the coefficients of the canonical connection $\nabla$ vanish: $\Gamma^h_{jk}=0$
($\nabla_{\;\undersym{X}{j}}\;\undersym{X}{k}=0$ because of the AP-condition), its
torsion tensor $T$ does not vanish: $\Lambda^{h}_{jk}=C^h_{jk}$; a phenomenon that
never occurs in natural local coordinates. This is due to the
non-vanishing of the bracket $[\;\undersym{X}{j},\;\undersym{X}{k}]$
in the expression of the torsion tensor:
$T(\;\undersym{X}{j},\;\undersym{X}{k})=\nabla_{\;\undersym{X}{j}}\;\undersym{X}{k}-\nabla_{\;\undersym{X}{k}}\;\undersym{X}{j}-
[\;\undersym{X}{j},\;\undersym{X}{k}]$.
For the same reason the dual connection $\widetilde{\nabla}$ has also non-vanishing
coefficients: $\widetilde{\Gamma}^h_{jk}=C^h_{jk}$.
\item From the table we have $\widetilde{\Gamma}^h_{jk}=2
\widehat{\Gamma}^h_{jk}=-C^h_{jk}$ and
$\;\overcirc{\,\Gamma}^{h}_{jk}=-\gamma^h_{jk}=\frac{1}{2}(C^h_{kj}+C^j_{hk}+C^k_{hj})$. This means that
the dual connection coefficients and the symmetric connection coefficients coincide (up to a constant) and are
both equal to the global structure
coefficients (up to a constant). On the other hand, the Levi-Civita connection
coefficients coincide with the contortion coefficients (up to a
sign). This shows again that everything in AP-geometry is expressible in terms of
the global structure coefficients. Also, all surviving connections
in the space may be represented by only one of them, say the
Levi-Civita connection.
\item A quick look at the third column of the above table may deceive and lead to erroneous conclusions: that the symmetric connection is skew-symmetric and the
Levi-Civita connection is non-symmetric. This is by no means true. The formulation of the notion of symmetry of connections using indices
no longer applies in this context. In fact, a linear connection is symmetric if and only if it coincides with its dual connection,
and this is the case for both the symmetric and Levi-Civita connections.
Another example: although the symmetric and dual connections coincide (up to a constant) in the parallelization basis, the symmetric
connection has no torsion while the dual connection has a surviving torsion. This is, once more, due to the fact that the torsion
expression has a bracket term which does not depend on the connection.
\item The torsion and contortion tensors of type $(0,3)$ are present in the natural basis while they are not in the parallelization
basis. This is because the metric matrix is the identity matrix $(\delta_{jk})$. Consequently, mesh indices cannot be
raised or lowered using the metric $g_{jk}$.
\item In local coordinates the structure coefficients vanish:
$[\partial_{\mu},\,\partial_{\nu}]=0$ (while in the
parallelization basis the global structure coefficients are alive:
$[\;\undersym{X}{j},\;\undersym{X}{k}]=C^h_{jk}\; \undersym{X}{h}$). For
this reason the structure coefficients, in local coordinates, have no
effect and the second column of the above table thus gives the usual expressions we are accustomed to
\cite{local}. As an example, as the connection coefficients depend on coordinate systems, the canonical
connection coefficients do not vanish in the natural basis (while they vanish in the parallelization basis).
\item For physical applications, especially in general relativity and gravitation, one can assign a signature to the positive definite metric $g$ defined by (\ref{metric}). This can be achieved, for $n=4$, by writing \,$ g=\eta_{ij}\, \undersym{\Omega}{i}\otimes\undersym{\Omega}{j},$
where $\eta_{ij}=0 \text{ for } i\neq j, \,\, \eta_{ij}=-1 \text{ for } i=j=0, \,\, \eta_{ij}=+1 \text{ for } i=j=1, 2, 3.$
The metric $g$ is thus no longer positive definite but rather nondegenerate.
\end{itemize}
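The mesh-index formulas of the above table, and in particular the computation of the Levi-Civita coefficients in the first remark, can be verified numerically. The following Python sketch is our illustration, not part of the paper; the su(2) structure constants $C^h_{jk}=\epsilon_{jkh}$ and the metric $g_{jk}=\delta_{jk}$ are sample choices for a parallelization with constant global structure coefficients.

```python
import itertools

import numpy as np

# su(2) toy example: constant structure coefficients C^h_{jk} = eps_{jkh},
# stored as C[h, j, k]; the metric in the parallelization basis is delta_{jk}.
eps = np.zeros((3, 3, 3))
for p in itertools.permutations(range(3)):
    eps[p] = np.linalg.det(np.eye(3)[list(p)])
C = np.transpose(eps, (2, 0, 1))

# Levi-Civita row of the table: Gamma^h_{jk} = (C^h_{kj} + C^j_{hk} + C^k_{hj}) / 2.
G = 0.5 * (np.einsum('hkj->hjk', C)
           + np.einsum('jhk->hjk', C)
           + np.einsum('khj->hjk', C))

# Metric compatibility with g = delta: Gamma^h_{jk} + Gamma^j_{hk} = 0.
print(np.allclose(G + np.einsum('jhk->hjk', G), 0))   # True

# Vanishing torsion: Gamma^h_{kj} - Gamma^h_{jk} = C^h_{jk}.
print(np.allclose(np.einsum('hkj->hjk', G) - G, C))   # True

# Contortion rows: gamma^h_{jk} = -Gamma^h_{jk} = (C^h_{jk}+C^k_{jh}+C^j_{kh})/2.
gamma_rhs = 0.5 * (C + np.einsum('kjh->hjk', C) + np.einsum('jkh->hjk', C))
print(np.allclose(-G, gamma_rhs))                     # True
```

In this example the two table entries for the contortion tensor are mutually consistent, and the Levi-Civita coefficients built from the global structure coefficients are indeed metric-compatible and torsion-free.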
\bibliographystyle{plain}
\section{Introduction}\label{S:1}
Let $D\subset \mathbb{R}^3$ be a bounded domain with a connected smooth boundary $S$,
$D':= \mathbb{R}^3\setminus D$, $k^2=const>0$ is the wave number, $\omega>0$ is frequency,
the boundary impedance $\zeta=const$, Re $\zeta\ge 0$, $\epsilon>0$ and $\mu>0$ are dielectric and magnetic
constants, $\epsilon'=\epsilon +i\frac{\sigma}{\omega}$, $\sigma=const\ge 0$ is the conductivity of $D'$, $x\in D'$,
$r=|x|$, $N$ is the unit normal to $S$ pointing into $D'$.
Let us assume that the electrical field $E=E_0+e$, where $E_0$ is the incident field and $e$ is the scattered
field. Then $e$ solves the problem:
\begin{equation}\label{e1} \nabla \times e=i\omega \mu h,\qquad \nabla \times h=-i\omega \epsilon' e \qquad \text {in}\, D', \end{equation}
\begin{equation}\label{e2} r(e_r-i k e)=o(1), \qquad r\to \infty, \end{equation}
\begin{equation}\label{e3} [N,[e,N]]-\frac {\zeta}{i\omega \mu}[N, \nabla \times e]=-f \text {\,\, on\,\,} S.\end{equation}
Here $f=[N,[E_0,N]]-\frac {\zeta}{i\omega \mu}[N,\nabla \times E_0]$ is a given smooth tangential field on $S$,
$H=\frac {\nabla \times E}{i\omega \mu}$, $h=\frac {\nabla \times e}{i\omega \mu}$, $\epsilon'$ and $\mu$ are
dielectric and magnetic constants of the medium $D'$,
$[A,B]=A\times B$ is the cross product of two vectors,
$A\cdot B$ is their scalar product. Problem (1)-(3)
is the scattering problem for electromagnetic (EM) waves
by an impedance body $D$ of an arbitrary shape. This problem has been discussed in \cite{R635}, where
the uniqueness of its solution has been proved. An explicit formula for the plane
EM wave scattered by a small impedance body ($ka\ll 1$, $a$ is the characteristic size of this body)
of an arbitrary shape is derived in \cite{R635}. There one can also find a solution to many-body
scattering problem in the case of small impedance particles (bodies) of an arbitrary shape.
A few historical remarks are in order. The theory of wave scattering by small bodies was
originated by Rayleigh in 1871. He understood that the main term in the scattered field is the dipole radiation.
How to calculate this radiation, in other words, how to calculate the induced dipole moment
on a small body of an arbitrary shape, Rayleigh did not show. This was done nearly 100 years later
by the author, see \cite {R476}. In 1908 G.Mie published a method for solving scattering problems for well conducting spherical particles
using separation of variables in the spherical coordinates. His method works also for spherical impedance particles,
but does not work for particles of an arbitrary shape for which separation of variables cannot be used.
Smallness of the particle is not required by Mie's method.
There were no analytical methods for solving EM wave
scattering problems for small bodies of an arbitrary shape. No explicit formulas for the scattered fields
were obtained by other authors for small bodies of an arbitrary shape. Such methods for acoustic and EM waves were
developed in the monograph \cite{R635}.
Let us highlight the novel points in our theory:
a) For one small impedance particle of an arbitrary shape an analytic formula for the scattered
field is derived; this formula is asymptotically exact as $a\to 0$; the scattering
amplitude is $O(\zeta |S|)$. Assuming $|S|=O(a^2)$ and $\zeta=O(a^{-\kappa})$, $\kappa \in [0,1)$
is a constant, one obtains that the scattering amplitude is $O(a^{2-\kappa})\gg O(a^3)$,
where $O(a^3)$ is the order of the scattering amplitude in the classical theory. This is
a new physical phenomenon. If the particle is perfectly conducting, the corresponding scattering
amplitude is $O(a^3)$ for small $a$.
b) In the case of many small impedance particles a method for solving the EM wave scattering
problem is developed (see formulas (10)-(12)) and the limiting integral equation (see formula (13))
is derived for the limiting effective field in the medium where very many small impedance particles are embedded.
c) These results are used to formulate a method for creating materials with a desired
refraction coefficient, see Section 5.
These results do not intersect with the results published by other authors.
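To make the order-of-magnitude comparison in point a) concrete, here is a toy computation; the sample values of $a$ and $\kappa$ below are ours, chosen only for illustration.

```python
# Scattering-amplitude scaling from point a): O(a^{2-kappa}) for a small
# impedance particle with zeta = O(a^{-kappa}), versus O(a^3) for a
# perfectly conducting one.  Illustrative sample values:
a = 1e-2        # characteristic size, in units where the wavelength is O(1)
kappa = 0.5     # impedance exponent, kappa in [0, 1)

impedance_amp = a ** (2 - kappa)   # O(a^{2 - kappa})
classical_amp = a ** 3             # O(a^3)

# The gain factor is a^{-(1 + kappa)}; for these values it is 0.01**(-1.5) = 1000.
print(impedance_amp / classical_amp)
```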
\section{Formula for the solution of the EM wave scattering by one small body}\label{S:2}
Our {\em first result} is the following explicit formula for the scattered field:
\begin{equation}\label{e4} e=[\nabla g(x,x_1), Q], \qquad g(x,y):=\frac{e^{ik|x-y|}}{4\pi |x-y|},
\end{equation}
which is asymptotically exact as $a\to 0$. The quantity $Q$ in formula (4) is given by the following formula:
\begin{equation}\label{e5} Q=-\frac{\zeta |S|}{i\omega \mu}\tau \nabla \times E_0,
\end{equation}
where $E_0$ is the incident field, the tensor $\tau$ is defined as follows:
\begin{equation}\label{e6} \tau=I-b, \qquad b=(b_{jm})=\frac 1{|S|}\int_S N_j(s)N_m(s)ds,
\end{equation}
and $ds$ is the element of the surface area. In formula (5), terms of higher order
of smallness, as $a\to 0$, are neglected.
When we write $a\to 0$, it is understood that $\lambda$
is fixed: the physically meaningful quantity is the ratio $a/\lambda$.
Formulas (4)-(6) are proved in \cite{R635}. In their derivation a new representation of the scattered
field $e$ is used:
$$ e=\nabla \times \int_S g(x,t)J(t)dt, \qquad \int_SJ(t)dt:=Q,$$
where $J$ is a tangential to $S$ field. One can find $e$ of this form
if $k^2$ is not a Dirichlet eigenvalue of the Laplacian in $D$.
If $k^2$ is an eigenvalue of the Laplacian in $D$, then one can use
$g_\rho(x,y)$ in place of $g(x,y)$. Here $g_\rho(x,y)$ is the Green's function
of the Helmholtz operator in the exterior of $B(0,\rho)$, the ball centered at the origin and of radius $\rho$.
The function $g_\rho(x,y)$ satisfies the Dirichlet condition on the boundary of $B(0,\rho)$ and the radiation condition at infinity.
The small number $\rho>0$ is chosen
so that $k^2$ is not a Dirichlet eigenvalue of the Laplacian in $D\setminus B(0,\rho)$.
Such a ball
can always be found and $\rho>0$ can be chosen as small as one wishes.
Let us discuss formulas (4)-(6). The choice of $x_1\in D$ in formula (4) is not important since $a$
is small. If $D$ is a centrally symmetric body, then one can take as $x_1$ its center of symmetry.
Formula (5) shows that the scattering amplitude $A$ satisfies $A=O(|\zeta||S|)=O(a^2)$ if $\zeta$
does not depend on $a$. The classical dependence of the scattering amplitude on $a$ is
$O(a^3)$, which is much smaller than $O(a^2)$ as $a\to 0$.
This fact might be of physical interest in applications.
Formula (6) gives explicitly the dependence of the scattered field on the shape of the small body.
The main physical (and mathematical) idea, used in \cite{R635} for the derivation
of the above results is the reduction of the scattering problem for a small particle to finding
just one quantity $Q$ rather than a boundary function.
In the next Section the many-body scattering problem is discussed.
\section{EM wave scattering by many small impedance particles}\label{S:3}
Let us formulate our basic results for EM wave scattering by many
small impedance particles $D_m$, $1\le m \le M$, distributed in a bounded domain $\Omega$. Let $x_m\in D_m$
be some points.
Assume that the number $\mathcal{N}(\Delta)$ of small bodies (or points
$x_m$) in any sub-domain $\Delta\subset \Omega$ is given by the formula
\begin{equation}\label{e7} \mathcal{N}(\Delta)= \frac{1}{a^{2 - \kappa}}\int_\Delta N(x)dx(1 + o(1)),\quad a
\to 0,
\end{equation}
where $N(x) \geq 0$ is a function continuous in $\Omega$,
$\kappa\in [0,1)$
is a parameter, and the boundary impedance of the $m-$th small particle is $\zeta_m:=h(x_m)/a^{\kappa}$,
where $h(x)$ is a continuous function in $\Omega$, such that Re$h\ge 0$, and $x_m\in D_m$ is an arbitrary point.
The choice $\zeta=h(x)a^{-\kappa}$ is not dictated by physical laws. As we mentioned earlier,
the choice of $\zeta$ is made by the experimentalist as he or she wishes.
{\em The only physical restriction on the
boundary impedance is the relation Re$\zeta \ge 0$, that is, Re$h(x)\ge 0$.}
The restriction for the parameter $\kappa\in [0,1)$ is of technical nature and is not related to a physical law.
{\em The functions $N(x)$, $h(x)$ and the parameter $\kappa$ can be chosen by the
experimentalist as he/she wants.}
Our main physical assumption is:
\begin{equation}\label{e8} a\ll d\ll \lambda,
\end{equation}
where $d$ is the minimal distance between neighboring small particles.
Our {\em second result} is the following formula for the solution of many-body EM wave scattering problem:
\begin{equation}\label{e9} E(x)=E_0(x)+\sum_{m=1}^M [\nabla g(x,x_m), Q_m], \qquad a\to 0,
\end{equation}
where $Q_m$ are defined by the formula:
\begin{equation}\label{e10} Q_m=-\frac{\zeta_m |S|}{i\omega \mu}\tau \nabla \times E_{em}.
\end{equation}
Here we assumed for simplicity that the particles have the same shape, so
the tensor $\tau$ does not depend on $m$, and $E_{em}$ is the effective field acting on the $m$-th particle.
This field is defined by the formula:
\begin{equation}\label{e11} E_{em}=E_0(x_m)+\sum_{j=1, j\neq m}^M [\nabla g(x_m,x_j), Q_j].
\end{equation}
Formulas (10)-(11) lead to the linear algebraic system for finding the unknown quantities $E_m:=E_{em}$:
\begin{equation}\label{e12} E_{m}=E_0(x_m)-\frac{c_S}{i\omega \mu}\sum_{j=1, j\neq m}^M [\nabla g(x_m,x_j),\tau \nabla \times E_{j}]h_ja^{2-\kappa},\quad 1\le m\le M,
\end{equation}
where $c_S>0$ is the constant in the formula $|S|=c_S a^2$.
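As an illustration of formulas (9)-(10), the sketch below evaluates the field scattered by a few spherical impedance particles in a first-order (Born-type) approximation, in which the curl of the effective field on each particle is replaced by the curl of the incident plane wave; all positions, sizes, and the function $h$ below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative sketch of formulas (9)-(10) in a Born-type approximation:
# curl E_{em} is replaced by curl E_0(x_m).  All numerical values below
# are assumptions chosen for illustration only.

k = 2 * np.pi                   # wave number (wavelength lambda = 1)
omega = 1.0                     # frequency (arbitrary units)
mu = 1.0                        # magnetic permeability (arbitrary units)
c_S = 4 * np.pi                 # |S| = c_S a^2 for a sphere of radius a
a, kappa = 1e-2, 0.5            # particle size and the parameter kappa
d_hat = np.array([0.0, 0.0, 1.0])   # propagation direction of E_0
p = np.array([1.0, 0.0, 0.0])       # polarization, orthogonal to d_hat
tau = (2.0 / 3.0) * np.eye(3)       # tensor tau for spherical particles

def E0(x):
    """Incident plane wave E_0(x) = p exp(ik d.x)."""
    return p * np.exp(1j * k * (d_hat @ x))

def curl_E0(x):
    """Curl of the plane wave: ik (d x p) exp(ik d.x)."""
    return 1j * k * np.cross(d_hat, p) * np.exp(1j * k * (d_hat @ x))

def grad_g(x, y):
    """Gradient in x of g(x,y) = exp(ik|x-y|)/(4 pi |x-y|)."""
    r_vec = x - y
    r = np.linalg.norm(r_vec)
    g = np.exp(1j * k * r) / (4 * np.pi * r)
    return g * (1j * k - 1.0 / r) * r_vec / r

# Particle centers x_m; boundary impedance zeta_m = h(x_m) a^{-kappa}, h = 1.
xs = [np.array([0.0, 0.0, 0.0]),
      np.array([3.0, 0.0, 0.0]),
      np.array([0.0, 3.0, 0.0])]

def scattered_field(x):
    e = np.zeros(3, dtype=complex)
    for xm in xs:
        zeta = 1.0 * a ** (-kappa)
        # Formula (10) with E_{em} approximated by E_0:
        Q = -(zeta * c_S * a ** 2) / (1j * omega * mu) * (tau @ curl_E0(xm))
        e += np.cross(grad_g(x, xm), Q)     # [ . , . ] is the cross product
    return e

x_obs = np.array([10.0, 1.0, 1.0])
E_total = E0(x_obs) + scattered_field(x_obs)
```

Solving the full linear system (12) would additionally couple the particles through the unknown effective fields; the sketch above keeps only the single-scattering contribution.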
Our {\em third result} is the following integral equation for the limiting effective field in $\Omega$:
\begin{equation}\label{e13} E(x)=E_0(x)-\frac{c_S}{i\omega \mu}\nabla \times \int_\Omega g(x,y)\tau \nabla \times E(y)h(y)N(y)dy,
\end{equation}
where $N(y)$ is defined in formula (7) and the limit is taken as $a\to 0$. The existence of this limit is
proved in \cite{R635}.
Equation (13) is equivalent to the following {\em local} differential equation:
\begin{equation}\label{e14} \nabla \times \nabla \times E(x)=k^2 E(x)-\frac {c_S}{i\omega \mu}\nabla \times \Big(h(x)N(x)\tau \nabla \times E(x)\Big),
\end{equation}
as one can check by applying the operator $ \nabla \times \nabla \times$ to equation (13), see the details in \cite{R635}.
Let us assume that $\tau$ is proportional to the identity matrix $I$. This happens, for example, if the particles are balls
of radius $a$, in which case $\tau=\frac {2}{ 3} I$, as one can easily verify.
Then equation (14) takes the form:
\begin{equation}\label{e15} \nabla \times \nabla \times E(x)=\frac{k^2 E(x)}{1+\frac {2c_S}{3i\omega \mu}h(x)N(x)}-
\frac {2c_S}{3i\omega \mu}\cdot\frac{[\nabla \Big(h(x)N(x)\Big), \nabla \times E(x)]}{1+\frac {2c_S}{3i\omega \mu}h(x)N(x)}.
\end{equation}
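The claim that $\tau=\frac{2}{3}I$ for spherical particles can be checked numerically: the following Monte Carlo sketch (illustrative, not part of the derivation) approximates $b_{jm}=\frac{1}{|S|}\int_S N_j N_m\,ds$ by averaging outer products of unit normals sampled uniformly on the sphere:

```python
import numpy as np

# Illustrative Monte Carlo check that b = (1/|S|) \int_S N N^T ds = (1/3) I
# for the unit sphere, hence tau = I - b = (2/3) I.

rng = np.random.default_rng(0)
samples = rng.standard_normal((200_000, 3))
# Normalizing Gaussian samples gives points uniform on the unit sphere,
# whose position vectors are also the outward unit normals N(s).
normals = samples / np.linalg.norm(samples, axis=1, keepdims=True)

# The surface average of the outer product N N^T approximates b.
b = np.einsum('ij,ik->jk', normals, normals) / len(normals)
tau = np.eye(3) - b
```

Up to Monte Carlo noise, `b` is close to $\frac{1}{3}I$ and `tau` is close to $\frac{2}{3}I$.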
\section{Physical interpretation of formula (15)}\label{S:4}
To interpret physically formula (15), consider the Maxwell's equations:
\begin{equation}\label{e16} \nabla \times E(x)=i\omega \mu H(x), \qquad \nabla \times H(x)=-i\omega\epsilon' E(x),
\end{equation}
where $\mu=\mu (x)$. Apply the operator $\nabla \times$ to the first equation and then use the second one. This yields:
\begin{equation}\label{e17} \nabla \times\nabla \times E(x)=K^2 E(x)+ [\frac{\nabla \mu (x)}{\mu(x)}, \nabla \times E(x)],
\end{equation}
where $K^2=\omega^2 \mu(x)\epsilon'=k^2 \frac {\mu(x)\epsilon'}{\mu_0 \epsilon_0}$, $k^2=\omega^2\epsilon_0 \mu_0$,
and $\mu_0,\epsilon_0$ are the parameters of the free space. Let $n(x):=\Big(\frac{\mu(x)\epsilon'}{\mu_0 \epsilon_0}\Big)^{1/2}$.
Comparing formulas (15) and (17) one concludes that the limiting medium, obtained by the embedding of many
small impedance particles, has {\em the new refraction coefficient}:
\begin{equation}\label{e18} n(x)= \frac {n_0(x)}{\Big(1+\frac {2c_S}{3i\omega \mu}h(x)N(x)\Big)^{1/2}}, \qquad n_0(x):=\Big(\frac{\epsilon' \mu}{\epsilon_0\mu_0}\Big)^{1/2},
\end{equation}
and {\em the new magnetic permeability}:
\begin{equation}\label{e19} \mu(x)=\Big(1+\frac {2c_S}{3i\omega \mu}h(x)N(x)\Big)^{-1}.
\end{equation}
Our recipe for creating materials with a desired refraction coefficient is based on formula (18).
This recipe is discussed in the next Section.
\section{A recipe for creating materials with a desired refraction coefficient}\label{S:5}
Let us rewrite formula (18) as
\begin{equation}\label{e20} n(x)= \frac {n_0(x)}{\Big(1-ic_1h(x)N(x)\Big)^{1/2}},
\end{equation}
where $c_1:=\frac {2c_S}{3\omega \mu}>0$ and $h=h_1+ih_2$, $h_1\ge 0$. The functions
$h(x)$ and $N(x)\ge 0$ are at the disposal of the experimentalist, as was mentioned earlier.
By choosing these functions properly, one can get any desired refraction coefficient which has the property Im $n(x)\ge 0$.
One rewrites formula (20) as follows:
\begin{equation}\label{e21} n(x)=n_0(x) \Big(1-ic_1h(x)N(x)\Big)^{-\frac 1 2}=n_0( x)\Big(1+c_1h_2(x)N(x)-ic_1h_1(x)N(x)\Big)^{-\frac 1 2}.
\end{equation}
Let $z$ be a complex number. Define $z^{1/2}=|z|^{1/2}e^{i\phi/2}$, where $\phi$ is the argument of $z$,
$0\le \phi<2\pi$. By choosing $h_2<0$ so that $ 1+c_1h_2(x)N(x)$ is small and choosing $h_1\ge 0$ suitably,
one can make $|n(x)|$ to be any desired non-negative function. The argument $\phi$ of the expression
$$1+c_1h_2(x)N(x)-ic_1h_1(x)N(x)$$
can be made arbitrary by choosing $h_1\ge 0$ and $h_2\in (-\infty, \infty)$
suitably.
For example, assume that $n_0>0$, $h_1>0$ is small, and $1+c_1h_2(x)N(x)>0$. Then $\phi=2\pi-2\delta$,
where $\delta>0$ is arbitrarily small, $\phi/2= \pi-\delta$, and $n(x)=|n(x)|e^{-i(\pi-\delta)}$.
Thus, Re $n(x)<0$, Im $n(x)\ge 0$, and Im $n(x)=|n(x)|\sin \delta$ can be made
as small as one wishes if $\delta$ is sufficiently small. Therefore, one gets a material with negative refraction and negligible losses.
Similarly, using formula (19), one can change magnetic permeability in a desired direction by embedding in a given medium many small impedance particles with the suitable boundary impedances.
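The branch convention $z^{1/2}=|z|^{1/2}e^{i\phi/2}$, $0\le\phi<2\pi$, used above can be sketched numerically as follows; the values of $n_0$ and of $z=1+c_1h_2(x)N(x)-ic_1h_1(x)N(x)$ below are illustrative assumptions:

```python
import cmath

# Sketch of the branch z^{1/2} = |z|^{1/2} e^{i phi/2} with 0 <= phi < 2 pi.
# The values of n0 and z below are illustrative assumptions.

def branch_sqrt(z):
    phi = cmath.phase(z) % (2 * cmath.pi)    # argument phi in [0, 2 pi)
    return abs(z) ** 0.5 * cmath.exp(1j * phi / 2)

n0 = 1.5                  # some positive n_0
z = 0.01 - 0.001j         # 1 + c1 h2 N - i c1 h1 N: small positive real part
n = n0 / branch_sqrt(z)
# arg z is close to 2 pi, so z^{1/2} has argument close to pi; hence
# Re n < 0 (negative refraction) while |Im n| is small in magnitude.
print(n.real < 0)   # -> True
```

With the real part of $z$ small and positive and a small negative imaginary part, $\phi$ is close to $2\pi$, so $z^{1/2}$ has argument close to $\pi$ and the resulting $n$ has negative real part and small losses.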
\newpage
\section{Introduction}
\label{sec:intro}
Intelligent personal assistants such as Apple's Siri, Microsoft's Cortana, or Google Now are commonly used for answering questions and task optimization. Many companies are deploying specialized automated assistants, known as Intelligent Virtual Agents (IVAs), for efficient problem resolution, cutting costs in call centers, and also as the first layer of technical and product support~\citep{icmi2013}. In these business domains, IVA accuracy and efficiency directly impact customer satisfaction and company support costs. In one case study~\citep{Caramenico2013}, a Fortune 50 insurance company experienced a $29\%$ reduction in contact center volume within five months of deploying an IVA on their website. Domino's Pizza reported that product order time was reduced by $50\%$ through their IVA~\citep{Frearson2015}.
To better assist humans, IVA designers strive to support human-like interactions. Take, for example, Amazon's Alexa Prize competition where student developers attempt to build IVAs that can carry on meaningful, coherent, and engaging conversations for 20 minutes~\citep{levy2016alexa}. As IVAs become more human-like, we theorize that users will increasingly use \textbf{relational strategies} (e.g. self-exposure and justification) with IVAs similar to conversing with humans. There is a large body of work on development of trust between humans engaged in virtual dialog~\citep{gibson2003virtual,ballantyne2004dialogue,holton2001building,coppola2004building}. The focus of these works is on how relational strategies contribute to trust between human speakers. From this literature, we predict the types of strategies humans may employ with IVAs as they relate to them in an increasingly human manner.
In customer service and personal assistant domains, trust is necessary between the human agent and customer. The customer's issues must be viewed by the agent as legitimate for proper attention to be given. Likewise, customers must trust that the agent is capable of assisting them and will not mistreat their information. Current research shows that human-like virtual agents are associated with not only greater user trust but also trust resilience when the agent makes mistakes~\citep{de2016almost}. To build trust with the agent, customers may establish credibility through small talk, self-exposure, and by providing justification of their requests~\citep{bickmore2001relational}.
In interactive question answering, such as dialogs with an IVA, understanding \textbf{user intent} is essential for the success of the IVA~\citep{chai2006towards}. The intent can be defined as the interpretation of a user input that allows an agent to formulate the \textit{best} response. However, when relational strategies are applied to IVAs, the additional language introduced is often unnecessary and can even obfuscate user intent. Such language can lead to confusion in the IVA and a degradation of user experience in the form of clarification questions and wrong information.
\begin{exampleb}
\label{ex1}
I need a ticket to Boston this Saturday, my son is graduating!
\end{exampleb}
In Example~\ref{ex1}, the fact that the customer's son is graduating is unnecessary for determining the user's intent to purchase a ticket. By including unnecessary background information, the IVA may incorrectly deduce that the customer is booking a ticket \textit{for} his or her son instead. Thus, the identification of relational segments is a useful feature for an IVA; unfortunately, no corpus of annotated relational segments exists to develop identification techniques~\citep{serban2015survey}.
This lack inspired us to create such a corpus. Within this corpus, we needed to not only identify the location of relational language but also label its type (Gratitude, Greetings, etc.) so that automated methods to determine the relational strategy in use can be explored.
If these strategies are practiced by users of IVAs, it is important to identify them; enabling IVAs to separate such language can help better clarify the users' main intention. For IVAs to become more human-like, determining which segments of a request are relational is necessary to allow these IVAs to both understand the user intent correctly and to include empathetic or reciprocal relational strategies.
The identification of relational strategies in a single conversational turn can be structured as a multi-intent detection problem. The user not only wants the task completed (the \textit{primary} intent); they may also attempt to build credibility or some common ground with the IVA (the \textit{secondary} intent). Segments of text such as justification or backstory can be annotated as secondary intent and ignored while determining the primary intent. Once relational language is isolated, a separate classification can determine what relational strategies are in use and how to properly respond.
Multi-intent detection within dialog systems is still an emerging field; in recent work, only one intent is assumed to be present per turn~\citep{sarikaya2016overview}. A few methods exist such as~\cite{xu2013exploiting} which uses multi-label learning and~\cite{kim2016two} which employs a two-stage intent detection strategy. However,~\cite{xu2013exploiting} provided no explanation of how data was annotated nor any mention of annotator agreement. In~\cite{kim2016two}, multi-intent data was fabricated by concatenating all combinations of single-intent sentences.
In this article, we provide several contributions. Most importantly, we create the first publicly available customer service corpus with annotated relational segments. We propose an evaluation measure and set a baseline by comprehensive human annotation, ultimately confirming that the addition of relational language can obfuscate the user's intention to IVAs not designed to recognize it. Along with annotated relational segments, our corpus includes multi-intent requests to further research in multi-intent detection. We analyze human agreement in determining the presence of multiple intents so that future research on multi-intent detection can be evaluated in the light of prior human performance. Through these contributions, we hope to encourage further research and ultimately aid in the design of more intelligent IVAs.
In the following sections, we discuss in detail how the data was collected, annotated, and merged to create \textbf{highlighted} sections. Another round of review was then done on these highlighted sections to determine the class of language present in these sections (e.g. Greeting, Gratitude, etc). We then measure and compare the frequency of relational strategies when users present their requests to IVAs versus humans. Finally, we conduct an experiment with three commercial IVAs, demonstrating that removal of relational strategies lowers confusion and leads to improved responses.
\section{Data Collection}
\label{sec:data}
Next IT Corporation designs and builds IVAs on behalf of other companies and organizations, typically for customer service automation. This unique position allows access to a large number of IVA-human conversations that vary widely in scope and language domain. We selected IVAs for data collection based on the volume of conversations engaged in, the scope of knowledge, and the diversity of the customer base.
For diversity, we considered whether the target user base of the IVA was local, regional, national, or international and mapped the locations of the users engaging in conversations to visually verify. We only considered IVAs that had a national or international target user base and did not appear to have a dominant regional clustering, to ensure that conversations were well distributed across users from different regions. This was to control for relational styles that may differ between regions.
IVAs deployed in domains that were highly sensitive, such as human resource management or health care, were not considered. As a result, human-computer data was collected from three live customer service IVAs in the language domains of airline, train travel, and telecommunications. Each agent met our criteria of a broad knowledge base, sufficient conversation volume, and a very diverse user base.
The selected IVAs are implemented as mixed-initiative dialog systems, each understanding more than 1,000 unique user intentions. The IVAs have conversational interfaces exposed through company websites and mobile applications. In addition, the IVAs are multi-modal, accepting both speech and textual inputs, and also have human-like qualities with simulated personalities and interests. A random sample of 2,000 conversations was taken from each domain. The samples originate from conversation logs during November 2015 for telecommunications and train travel and March 2013 for airline travel. There were 127,379 conversations available in the logs for the airline IVA. The telecommunications and train travel logs contained 837,370 and 694,764 conversations, respectively. The first user turn containing the problem statement was extracted. We focus on the initial turn as a user's first impression of an IVA is formed by its ability to respond accurately to his or her problem statement, and these impressions persist once formed~\citep{bentleyU,madhavan2006automation}. Therefore, it is imperative that any relational language present does not interfere with the IVA's understanding of the problem statement.
Finding a large mixed-initiative human-human customer service dataset for comparison with our human-computer dialogs proved difficult. Despite mentions of suitable data in~\cite{vinyals2015neural} and~\cite{roy2016qart}, the authors did not release their data. Inspecting the human-human chat corpora among those surveyed by \cite{serban2015survey} revealed only one candidate: the Ubuntu Dialogue Corpus~\citep{lowe2017training}. The corpus originates from an Internet Relay Chat (IRC) channel where many users discuss issues relating to the Ubuntu operating system. After a user posts a query on the channel, all following threads between the querying user and each responding user are isolated to create two-way task-specific dialogs. However, we want to study the initial problem statements to compare their composition with those extracted from our data. In the Ubuntu corpora, these are posed to a large unpaid audience in the hopes that someone will respond. The observed relational language and behavior was, therefore, no different than problem statements inspected in other IRC or forum datasets, and, for our purposes, was no more fitting than any other forum or open IRC dataset.
In addition, we desire to not just measure relational language content but also feed the problem statements into an IVA and measure the effect of any relational language on its understanding of the user intent. To do this, we needed requests that were very similar to those already handled by one of the selected IVAs to have any hope of the user intent already existing in the agent's knowledge base. Unsatisfied with the Ubuntu dataset, we instead focused on publicly visible question and answering data in domains similar to those of the selected IVAs.
Upon searching publicly visible chat rooms and forums in the domains of travel and telecommunications support, we found the TripAdvisor.com airline forum to be the closest in topic coverage. This forum includes discussions of airlines and polices, flight pricing and comparisons, flight booking websites, airports, and general flying tips and suggestions. We observed that the intentions of requests posted by users were very similar to that of requests handled by our airline travel IVA. While a forum setting is a different type of interaction than chatting with a customer service representative (user behavior is expected to differ when the audience is not paid to respond), it was the best fit that we could obtain for our study and subsequent release. A random sample of 2,000 threads from the 62,736 present during August 2016 was taken, and the initial post containing the problem statement was extracted. We use \textbf{request} hereafter to refer to the complete text of an initial turn or post extracted as described.
\subsection{Annotation}
\label{subsec:annotations}
From our four datasets of 2,000 requests each, we formed two equally-sized partitions of 4,000 requests with 1,000 pulled from every dataset. Each partition was assigned to four reviewers; thus, all 8,000 requests had exactly four independent annotations. All eight reviewers were employees of Next IT Corporation who volunteered to do the task in their personal time. As payment, each reviewer received a \$150 gift card.
\begin{table}[b]
\caption{Dataset statistics. The Multi-Intent column represents the count of Requests where one or more reviewers flagged it as containing more than one user intention. The Unnecessary column represents the percentage of Single Intent requests where one or more reviewers selected \textit{any} text as being unnecessary in determining user intent. Avg. Length is the number of words present in All Requests, on average.}
\label{tbl:stats}
\begin{center}
\begin{tabular}{ l ccccc}
\toprule
& All Requests & Multi-Intent & Single Intent & Unnecessary & Avg. Length \\
\cmidrule(l){2-6}
\textbf{TripAdvisor} & 2000 & 734 & 1266 & 94.1\% & 93.26 \\
\textbf{Telecom} & 2000 & 149 & 1851 & 77.3\% & 19.81 \\
\textbf{Airline} & 2000 & 157 & 1843 & 68.6\% & 21.64 \\
\textbf{Train} & 2000 & 201 & 1799 & 55.3\% & 20.07 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
The reviewers were instructed to read each request and mark \textit{all} text that appeared to be additional to the user intention. The reviewers were given very detailed instructions, shown in Appendix B, and were required to complete a tutorial demonstrating different types of relational language use before working on the actual dataset. As the data was to be publicly released, we ensured that the task was clear. If more than one user intention was observed, the reviewer was instructed to flag it for removal. This was a design decision to simplify the problem of determining language necessary for identifying the user intention. Furthermore, as mentioned in section~\ref{sec:intro}, IVAs with the ability to respond to multiple intentions are not yet commonplace. Although flagged requests were not used for further analysis, they are included in the released data to enable future research. After discarding all multi-intent requests, 6,759 requests remained. Per-dataset statistics are given in Table~\ref{tbl:stats}.
A request from the TripAdvisor data is given in Example~\ref{ex2} below. A reviewer first read over the request and determined that the user intent was to gather suggestions on things to do during a long layover in Atlanta. The reviewer then selected all of the text that they felt was not required to determine that intent. This unnecessary text in Example~\ref{ex2} is shown in gray. Each of the four reviewers performed this task independently, and we discuss in the next sections how we compare their agreement and merged the annotations.
\begin{exampleb}
\label{ex2}
{\small
\noindent \textbf{Original Request:} Hi My daughter and I will have a 14 hour stopover from 20.20 on Sunday 7th August to 10.50 on Monday 8th August. Never been to Atlanta before. Any suggestions? Seems a very long time to be doing nothing. Thanks
\vspace{0.3cm}
\noindent \textbf{Determine User Intent:} \textit{Things to do during layover in Atlanta}
\vspace{0.3cm}
\noindent \textbf{Annotated Request:} \textcolor{light-gray}{Hi} My daughter and I will have a 14 hour stopover \textcolor{light-gray}{from 20.20 on Sunday 7th August to 10.50 on Monday 8th August.} Never been to Atlanta before. Any suggestions? \textcolor{light-gray}{Seems a very long time to be doing nothing. Thanks}
}
\end{exampleb}
Reviewers averaged 1 request per minute over 1,000 requests on TripAdvisor data and 4 per minute over 3,000 requests from the three IVA datasets. We observed that each of the eight reviewers required 29 hours on average to complete their 4,000 assigned requests.
\section{Annotation Alignment}
To compare the raw agreement of annotations between two reviewers, we use a modification of \textit{alignment} scores, a concept from speech recognition, where a hypothesis is aligned to a reference transcript~\citep{zechner2000minimizing}. We modify this procedure as insertions and deletions do not occur. Reviewers mark sequences of text as being unnecessary in determining user intention. When comparing annotations between two reviewers, an error ($e_i$) is considered to be any character position $i$ in the text where this binary determination does not match between them. The alignment score can be calculated as:
$$align = \frac{n - \sum_{i=1}^{n} e_i}{n}$$
\noindent where $n$ is the total number of characters. Thus, $align \in [0,1]$ where $1$ is perfect alignment. Reviewers may or may not include whitespace and punctuation on the boundaries of their selections which can lead to variations in $e_i$. Therefore, when two selections overlap, we ignore such characters on the boundaries while determining $e_i$. Figure~\ref{fig:align} shows a fabricated example of alignment between two annotations. In the first selection, the trailing whitespace and punctuation are ignored as they occur within overlapping selections. Notice, however, that whitespace and punctuation count in the last selections as there is no overlapping selection with the other reviewer; therefore, there is no possibility of disagreement on the boundaries.
\begin{figure}[h]
\definecolor{shadecolor}{rgb}{0.96,0.96,0.96}
\begin{snugshade}
$A$: [Hi, ]I need a new credit card[\underline{, my old doesn't work any more.}] Can you help? \\
$B$: [Hi], I need a new credit card, my old doesn't work any more.[\underline{ Can you help?}] \\
{\small
$n = 73$ \hspace{0.45cm} $\sum_{i=1}^{73} e_i = 45$ \hspace{0.45cm} $align_{AB} = \frac{73 - 45}{73} = 0.384$ }
\end{snugshade}
\caption{Example alignment scoring between two fabricated annotations $A$ and $B$. Text between ``['' and ``]'' was marked as unnecessary for intent determination. Positions with an alignment error are underlined.}
\label{fig:align}
\end{figure}
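The alignment score defined above can be sketched in a few lines; this minimal version represents an annotation as a set of selected character indices and omits the refinement of ignoring whitespace and punctuation on the boundaries of overlapping selections:

```python
# Minimal sketch of the alignment score.  An annotation is a set of selected
# character indices; the refinement of ignoring whitespace and punctuation on
# the boundaries of overlapping selections is omitted here for brevity.

def alignment_score(n, sel_a, sel_b):
    """Character-level alignment between two annotations of an n-char request."""
    errors = len(sel_a ^ sel_b)     # positions where the decisions differ
    return (n - errors) / n

# Toy example: a 10-character request; reviewer A marks indices 0-4,
# reviewer B marks indices 0-2, so 2 positions disagree.
score = alignment_score(10, set(range(5)), set(range(3)))
print(score)   # -> 0.8
```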
The alignment score was calculated for every request between all four annotations and then averaged. For example, an alignment score was calculated for each request between reviewer $A$ and $B$, $A$ and $C$, $A$ and $D$. The same process was repeated between reviewer $B$ and $C$, $B$ and $D$, then $C$ and $D$. Finally, alignment scores between all unique pairs of reviewers over all requests were averaged per dataset. The distribution of average scores per dataset is shown in Figure~\ref{fig:alignments12} \textbf{(a)}. It may appear, at first, that two annotators could inflate the dataset alignment score by simply making annotations infrequently. However, as each request had four annotators, the average alignment score would actually be lower as those reviewers would have large error compared to the other two. The per dataset alignment averages can, in fact, be higher if a dataset has a large number of requests where \textit{no} reviewer selected any text.
\begin{figure}[h]
\centering\subfigure[Overall alignment scores]{\includegraphics[scale=.7]{Fig2a}}
\subfigure[Alignment scores when reviewers agree that additional language is present]{\includegraphics[scale=.7]{Fig2b}}
\caption{The distribution of average alignment scores between all four annotations per dataset is shown in \textbf{(a)}. We compute average alignment scores where all reviewers agree that additional language is present in \textbf{(b)}.}
\label{fig:alignments12}
\end{figure}
Therefore, it is interesting to remove the effect of these cases and compare the ability of reviewers to agree on the selection boundaries given they both agree that selection is necessary. To measure this, we compute average alignment scores where both reviewers agree that additional language is present, shown in Figure~\ref{fig:alignments12} \textbf{(b)}. Observe that although the Train dataset has the highest overall alignment in both cases, it is lower when the reviewers both select text, indicating it has many cases where no reviewers selected anything (which is in agreement with Table~\ref{tbl:stats}). In the case of TripAdvisor, it appears that there are a significant number of requests where one or more reviewers do not select text, but the others do, lowering the overall alignment score in Figure~\ref{fig:alignments12} \textbf{(a)}.
Alignment based on word-level instead of character-level agreement was also considered. For each word, if the reviewer selected at least 50\% of the word it was considered to be marked. This resolves situations where a reviewer accidentally missed the first or last few characters of a word in their selection. However, this may introduce errors where two letter words have only one character selected. In this case it is impossible to automatically decide if the reviewer meant to select the word or not as always selecting such words will be susceptible to the same error. Besides this ambiguous case, we felt it was safe to assume that words of longer length with less than half of the word selected were not intended to be marked.
Selected words were then used in place of selected characters in calculating the alignment scores between the reviewers in the same manner as Figure~\ref{fig:align}. We discovered that the alignment scores were only 0.2\% different on average across the datasets than the character level alignment scores shown in Figure~\ref{fig:alignments12}. This indicates that reviewers are rarely selecting partial words, and any disagreement is over \textit{which} words to include in the selections. Therefore, in the released corpus and in this article, we consider selections using absolute character position which retains the reviewers' original selection boundaries.
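The word-level rule described above (a word counts as selected when at least 50\% of its characters are marked) can be sketched as follows; the tokenization by whitespace is a simplifying assumption:

```python
import re

# Sketch of the word-level rule: a word counts as selected when at least
# half of its characters were marked by the reviewer.  Tokenization by
# whitespace is a simplifying assumption.

def selected_words(text, selected_chars):
    """Return the indices of words considered marked."""
    marked = set()
    for w_idx, match in enumerate(re.finditer(r'\S+', text)):
        span = range(match.start(), match.end())
        covered = sum(1 for i in span if i in selected_chars)
        if covered / len(span) >= 0.5:
            marked.add(w_idx)
    return marked

text = "hello there friend"
# "hello" fully selected; only 2 of the 5 characters of "there" selected.
sel = set(range(0, 5)) | {6, 7}
print(selected_words(text, sel))   # -> {0}
```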
\subsection{Agreement Between Reviewers}
\label{subsec:agree}
\begin{table}
\caption{Reviewer agreement on if any text should be selected. For example, row 3 is the number of requests with selections by at least three reviewers.}
\label{tbl:selectkappa}
\begin{center}
\begin{tabular}{cllll}
\toprule
& TripAdvisor & Train & Airline & Telecom \\
\cmidrule(l){2-5}
$\kappa$ & 0.270 & 0.450 & 0.405 & 0.383 \\
\cmidrule(l){2-5}
1 & 1192 & 995 & 1264 & 1431 \\
2 & 1092 & 709 & 948 & 1154 \\
3 & 863 & 458 & 644 & 795 \\
4 & 534 & 205 & 292 & 410 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
As it is difficult to determine how often all reviewers agree that additional language is present from alignment scores alone, we measured reviewer agreement on the presence of additional language and multiple user intentions. For additional language presence, we calculated Fleiss' $\kappa$ over the annotations, where the classes compared were whether a reviewer did or did not select text. As demonstrated in Table~\ref{tbl:selectkappa}, regardless of domain, this is a subjective task. While there is moderate agreement in the Train and Airline sets, the TripAdvisor set, in particular, shows lower agreement, which reinforces our previous observations in Figures~\ref{fig:alignments12} \textbf{(a)} and \textbf{(b)}. Due to the sensitivity of $\kappa$ measurements~\citep{feinstein1990high,guggenmoos1993reliable}, these values must be interpreted in light of the task. Despite the lower values, we are only measuring the presence or absence of unnecessary language, and these two categories did not necessarily occur in equal frequencies. Under these conditions, according to~\cite{bakeman1997detecting}, a $\kappa$ between 0.2 and 0.45 may correspond to reviewer reliabilities between 80\% and 90\%, respectively. Therefore, despite the lower values for $\kappa$, the individual reviewer annotations appear reliable and can be further improved when merged based on agreement, as discussed in the following section.
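As a concrete sketch, the binary Fleiss' $\kappa$ used here can be computed as follows; the per-request tallies and function name are hypothetical, and the implementation assumes the same number of reviewers rated every request.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa over two categories (selected / not selected).

    `ratings` is a list of (n_selected, n_not_selected) tuples,
    one per request, with a fixed number of reviewers throughout.
    """
    N = len(ratings)
    n = sum(ratings[0])  # reviewers per request
    # mean observed agreement per request
    P_bar = sum((a * a + b * b - n) / (n * (n - 1))
                for a, b in ratings) / N
    # chance agreement from the marginal category proportions
    p_sel = sum(a for a, _ in ratings) / (N * n)
    P_e = p_sel ** 2 + (1.0 - p_sel) ** 2
    return (P_bar - P_e) / (1.0 - P_e)
```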
\begin{exampleb}
\label{ex:3}
\noindent $R1$: Our tv reception is horrible. \textcolor{light-gray}{is there an outage in my area?}
\vspace{0.1cm}
\noindent $R7$: \textcolor{light-gray}{Our tv reception is horrible.} is there an outage in my area?
\end{exampleb}
We did observe situations where two reviewers disagree on the real intent of the user, causing conflict in the selection of unnecessary text. While these were rare, Example~\ref{ex:3} demonstrates how even humans sometimes struggle with determining the intention of written requests. Reviewer 1 appears to believe that the primary intent of the user is to notify the agent about poor television reception, and the query about the outage in the area is out of curiosity. However, reviewer 7 appears to believe the primary intent is to discover if a cable outage is present in the area, and the complaint about reception justifies the query. The effects of these disagreements on intent can be mitigated by merging the annotations based on the number of reviewers who agreed on a selected character.
Next, we considered the reviewers' determination of multiple intentions. A $\kappa$ was calculated over how reviewers flagged requests containing more than one user intention. As shown in Table~\ref{tbl:multikappa}, we see somewhat similar performance in this task as in the previous selection task. This table demonstrates the difficulty of multi-intent detection, even for humans. The domain does not seem to be a factor as $\kappa$ is similar across datasets. It is apparent, however, that in the forum setting, users are much more likely to insert multiple intentions in a single request than in a chat setting.
\begin{table}
\caption{Reviewer agreement on multi-intent detection. For example, row 3 is the number of requests flagged as containing multiple intentions by at least three reviewers.}
\label{tbl:multikappa}
\begin{center}
\begin{tabular}{l p{1.8cm} p{0.9cm} p{1.0cm} p{1.2cm}}
\toprule
& TripAdvisor & Train & Airline & Telecom \\
\cmidrule(l){2-5}
$\kappa$ & 0.415 & 0.374 & 0.434 & 0.386 \\
\cmidrule(l){2-5}
1 & 734 & 201 & 157 & 149 \\
2 & 480 & 85 & 69 & 56 \\
3 & 275 & 50 & 38 & 32 \\
4 & 71 & 8 & 15 & 11 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\subfigure[Alignment between group 1 reviewers.]{\includegraphics[scale=.46]{Fig3a.png}}
\hfill
\subfigure[Alignment between group 2 reviewers.]{\includegraphics[scale=.46]{Fig3b.png}}
\caption{Alignment scores between each reviewer and the other three members of their group, averaged across the four datasets.}
\label{fig:align_p1_p2}
\end{figure}
How each reviewer's selections compare to the rest of the group is another aspect to consider. Figure~\ref{fig:align_p1_p2} \textbf{(a)} compares how each reviewer agreed with the other three in the first group. We can see that, overall, the mean is very close. However, reviewer 7, in particular, had more variation in his or her selections. Similarly, Figure~\ref{fig:align_p1_p2} \textbf{(b)} compares how each reviewer agreed with the other three in the second group. In the second group, we see slightly more disagreement, particularly with reviewer 6. This could be because he or she did not interpret the user intention the same way as the others, or because the reviewer was more generous or conservative in selections compared to the rest of the group.
\subsection{Merging Selections By Agreement}
\label{sec:merge}
\begin{figure}[ht]
\includegraphics{Fig4.png}
\caption{Mean number of words highlighted per request by dataset. Agreement is the number of reviewers who marked the same word for removal, where 0 is the original request length.}
\label{fig:numselection}
\end{figure}
The four annotations per request were \textbf{merged} using the following strategy: for every character position in the request, if at least a threshold of two annotations contained that position, \textbf{highlight} it. To quantify the average reduction of request size, we count the number of words highlighted for each level of reviewer agreement. In Figure~\ref{fig:numselection}, we can see that as the agreement required increases, the size of the highlight decreases significantly.
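The character-level merging strategy can be sketched as follows (names are illustrative; selections are assumed to be given as sets of character positions):

```python
def merge_selections(request_length, selections, threshold=2):
    """Merge per-reviewer character selections into one highlight.

    A character position is highlighted when at least `threshold`
    of the reviewers' `selections` (sets of positions) contain it.
    """
    counts = [0] * request_length
    for sel in selections:
        for i in sel:
            counts[i] += 1
    return {i for i, c in enumerate(counts) if c >= threshold}
```

Raising \texttt{threshold} can only shrink the highlight, which is the monotone decrease seen in Figure~\ref{fig:numselection}.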
\section{Annotating Relational Content}
\label{sec:annotate-rc}
To determine the use of relational strategies, a second round of manual analysis was performed. As seen in Figure~\ref{fig:numselection}, increasing the agreement threshold significantly reduces the amount of annotated text that remains. Therefore, in order to have sufficient data for analysis given the sample size, an agreement of two is used. A comparison of relational annotation using all agreement levels is left for future work.
Once merged, highlighted sections were analyzed by the authors to determine the classes of language present. Each such section was evaluated and given one or more of the following tags: \textit{Greeting, Backstory, Justification, Gratitude, Rant, Express Emotion, Other}. See Figure~\ref{fig:process} for an overview of the entire process.
\textbf{Greetings} are a common relational strategy humans use to build rapport with other humans and machines~\citep{lee2010receptionist}.
\textbf{Backstory} is a method of self-exposure that may be employed by the customer. In Example~\ref{ex1} given in section~\ref{sec:intro}, the customer included the fact that he or she is attending a graduation as a means of self-exposure. This may be an attempt to build common ground with the agent or it may indicate the importance of the trip and motivate the agent to help the customer succeed.
\textbf{Justification} is used by the customer to argue why the agent should take some action on the part of the customer. For instance, when trying to replace a defective product, a customer may explain how the product failed to establish credibility that the product was at fault.
\textbf{Gratitude}, like greetings, is used by humans to build rapport with both humans and machines~\citep{lee2010receptionist}.
\textbf{Ranting} is a means of expressing dissatisfaction when a customer feels frustrated, ignored, or misunderstood. In computer-mediated conversations, the non-verbal emotional cues present in face-to-face conversations are missing; thus, humans resort to such negative strategies to convey their emotions~\citep{laflen2012okay}. For tagging purposes, we define a \textit{Rant} to encompass any excessive complaining or negative narrative.
\textbf{Expressing emotions} can be a means of showing displeasure when a customer feels a conversation is not making adequate progress or in reaction to an unexpected or disagreeable agent response. This can also indicate joking or other positive emotional expression. The tag \textit{Express Emotion} is used as a catch-all for any emotional statement that is not covered by \textit{Rant}. Examples would be: \textit{``i love that!", ``UGH!", ``WHY???"}.
The \textbf{Other} tag indicates that some or all of the section does not contain any relational language. This is commonly a restatement of the primary intent or facts that reviewers marked as unnecessary.
\subsection{Analysis of Relational Tags}
\begin{figure}[h]
\centering\includegraphics{Fig5.png}
\caption{Incidence of relational language per dataset. An incidence of 0.5 means the tag is present in 50\% of all requests.}
\label{fig:tags_hm2}
\end{figure}
As shown in Figure~\ref{fig:tags_hm2}, we see that backstory is more common in human-to-human forum posts. However, both Airline and Telecom IVAs also have a significant amount of backstory. Although minimal, ranting and justification were present in Telecom. The Train dataset appeared to contain the least amount of relational language. It is difficult to speculate why without deeper analysis of the user demographic, the presentation of the IVA on the website, and the IVA knowledge base.
\begin{figure}[h]
\centering\includegraphics{Fig6.png}
\caption{Pearson coefficients of tag correlation across datasets.}
\label{fig:tags_hm}
\end{figure}
The correlation between tags is shown in Figure~\ref{fig:tags_hm}. When greetings are present, it appears likely that gratitude will also be expressed, which agrees with the findings in~\cite{lee2010receptionist} and~\cite{makatchev2009relating}. Also interesting is the apparent correlation between backstory and gratitude. Those that give background on themselves and their situations appear more likely to thank the listener. Ranting appears to be slightly negatively correlated with greetings, which is understandable assuming frustrated individuals are not as interested in building rapport as they are in venting their frustrations.
\begin{figure}
\includegraphics[scale=0.5]{Fig7.png}
\caption{An overview of the review and merging process. In this example from the TripAdvisor corpus, reviewers 2, 3, and 4 all agree on which text is unnecessary. \textit{Selections} are \textit{merged} to form \textit{highlighted} text that is then removed from the original text to create a \textit{cleaned request}. A second round of annotation was done on highlighted texts to determine the classes of language present. The colors of the text correspond to the class present.}
\label{fig:process}
\end{figure}
\section{Experiments and Results}
To measure the effect on IVA performance and determine what level of reviewer agreement is acceptable, we first constructed highlights for the 6,759 requests using all four levels of reviewer agreement. Next, four \textit{cleaned} requests were generated from each original request by removing the highlighted portion for each level of agreement resulting in 27,036 requests with various amounts of relational language removed.
Every unaltered request was fed through its originating IVA, and the intent confidence score and response were recorded. We then fed each of the four cleaned requests to the IVA and recorded the confidence and response. The TripAdvisor data was fed through the Airline IVA as it provided the most similar domain. This was also a test to see if lengthy human-to-human forum posts could be condensed and fed into an existing IVA to generate acceptable responses. We observed an increase in confidence in all domains with an average of 4.1\%. The Telecom set, which had the highest incidence of backstory outside of TripAdvisor, gained 5.8\%.
In addition to intent confidence, we measured the effect of relational language removal on overall IVA understanding. An A-B test was conducted where four reviewers were shown the user's original request along with the IVA response from the original request and the IVA response from a cleaned request. They were asked to determine which, if any, response they believed better addressed the request. If the original IVA response was preferred, it was assigned the value -1. If the response to the cleaned request was preferred, it was assigned the value 1. Finally, if neither response even remotely addressed the user's request or if both responses were comparable, it was given the value 0.
\begin{figure}
\includegraphics[scale=.68]{Fig8.png}
\caption{Results of the A-B test on IVA response to original request versus cleaned request. Black bars indicate 95\% confidence intervals.}
\label{fig:abtest}
\end{figure}
This A-B test was done only on responses that changed as a result of the cleaned request (3,588 IVA responses changed out of the 27,036 total responses). The result of this analysis is shown in Figure~\ref{fig:abtest}. Note that the lower bound is -1, indicating the original IVA response is preferred. If language is removed, the IVA response to the cleaned request is more likely preferred as made evident by the significantly positive skew. 95\% confidence intervals are included, and although they may seem large, this is expected; recall that a 0 was assigned if both IVA responses address the user request comparably or neither did. In 10 of the 16 cases, the skew is towards the cleaned response within the 95\% confidence interval.
This is evidence that the current usage of unnecessary language has a measurable negative effect on live commercial IVAs. TripAdvisor is an interesting exception, especially when the threshold is 4. However, this can be somewhat expected as it is a human-to-human forum where user inputs are significantly longer, and primary intent can be difficult to identify even for a human.
Although, in general, the removal of language is preferred, how \textit{much} removal is desirable? This is another question addressed in Figure~\ref{fig:abtest}. The higher the threshold, the more reviewers need to agree on the removal of the same segment of text. Thus, although language may still be removed, less language is removed with a high threshold than with a lower one due to low $\kappa$ (see Section~\ref{subsec:agree}). In effect, the higher thresholds may remove less unneeded language, but the language that \textit{is} removed is more likely to be actually unnecessary, which appears to improve the IVA understanding. However, using a threshold of 4 seems to offer limited improvement over 3 due to the reviewer disagreement.
\section{Conclusion}
Through the collection of this corpus and the annotation of relational segments, we have shown that users of commercial IVAs are already applying relational strategies to these IVAs. It is our prediction that these strategies will only increase as IVAs become more ubiquitous and human-like. We have also shown that the removal of unnecessary language during intent determination not only increases intent classifier confidence but also improves response by reviewer standards. It is our hope that by providing this data to the community, others can work on the automation of both the separation of business content from relational content and the classification of relational strategies.
The fact that it is possible to improve IVA responses to noisy forum data by the removal of relational language gives hope that automated methods of relational language detection may allow IVAs to contribute to human-to-human forum settings without substantial modifications to their underlying language models. For example, an IVA could be employed on a commercial airline website while also monitoring and contributing to airline forum threads related to its company. This saves substantial effort and cost compared to deploying two special-purpose IVAs for each task.
Given the problematic presence of relational language in task-based inputs and our promising preliminary results, we encourage the research community to investigate ways of automating this annotation using our publicly available data\footnote{https://s3-us-west-2.amazonaws.com/nextit-public/rsics.html}. There are many applications for such automation. Determining if turns contain ranting in automatic quality assurance monitoring systems like the one presented by~\cite{roy2016qart} could help surface poor customer service more efficiently. In systems for automating IVA refinement such as the one described by~\cite{beaver2016prioritization}, automatic detection of the presence of backstory or justification can be used as an indicator of possible missed intention. In live IVAs, simplifying inputs before determining user intention as in our experiment can increase intent classification accuracy. Finally, IVAs can appear more human-like by classifying relational language to explicitly deal with relational content and respond in a relational manner. Such applications would greatly improve the quality and relational abilities of customer service IVAs.
General relativity and the standard model of particle physics depend on a number of independent numerical parameters that determine the strengths of the different forces and the relative masses of all known fundamental particles. There is no theoretical explanation for why they have the values they have, yet they determine the properties of atoms, cells, stars and the whole Universe. They are commonly referred to as the fundamental constants of Nature, but a variation of the constants, at some level, is a common prediction of most modern extensions of the Standard Model (see Uzan 2003 for a review).
That physical constants could vary over cosmological
time is an idea that has been around ever since
Dirac's ``Large Number Hypothesis'' \citep{dirac}.
It is currently of great interest in the context of
cosmologically relevant scalar fields, like
quintessence (see
\citealt{quint} for a review).
An attractive implication of quintessence models for the dark energy is that the rolling scalar field that produces a negative pressure, and therefore the acceleration of the universe, may couple with other fields and be revealed by a change in the fundamental constants \citep{amendola}.
Variation of the fundamental constants is also foreseen in other theories beyond the standard model. For instance, in theories involving more than four space-time dimensions, the constants we observe are merely four-dimensional shadows of the truly fundamental high-dimensional constants, and they may be seen to vary as the extra dimensions change slowly in size during their cosmological evolution.
The fine structure constant $\alpha = e^2/\hbar c$
is dimensionless and governs the coupling between
photons and electrons. By solving the Schr\"odinger equation
for the hydrogen atom, the bound states are given by
\begin{equation}
E_{n} = -\alpha ^2 {mc^2\over 2 n^2}
\end{equation}
\noindent where $n$ is the principal quantum number (n=1,2,...,$\infty$)
and $\alpha$ is the above defined fine structure constant
\citep[][p. 354 eq. 17]{messiah1}.
When the relativistic
corrections are considered the
eigenvalues corresponding to angular momentum $J$
and principal quantum number $n$ can be approximated to
\begin{equation}
E_{nJ} = mc^2\left [ 1 + {\alpha^2\over (n-\epsilon_J)^2} \right ] ^{-1/2}
\end{equation}
\noindent
where $\epsilon_J$ is a function of $J$ and $\alpha^2$ \citep[][p. 802, eq. 179]{messiah2}.
Whenever we have a fine-structure multiplet, i.e.
transitions between energy levels with the same
principal quantum number and different $J$, the relativistic
corrections are proportional to $\alpha^2$, to first order,
as can be seen by doing a power series expansion of the term in square brackets
in eq. 2.
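To make this first-order statement explicit (a sketch, assuming $\epsilon_J \ll n$), expanding the square bracket in eq. 2 gives
\[
E_{nJ} \simeq mc^2\left[1-\frac{\alpha^2}{2(n-\epsilon_J)^2}+O(\alpha^4)\right],
\]
so that the splitting between two levels sharing the same $n$ is
\[
E_{nJ}-E_{nJ'} \simeq -\frac{mc^2\alpha^2}{2}\left[\frac{1}{(n-\epsilon_J)^2}-\frac{1}{(n-\epsilon_{J'})^2}\right]
\simeq -\,mc^2\alpha^2\,\frac{\epsilon_J-\epsilon_{J'}}{n^3}.
\]
Since $\epsilon_J = O(\alpha^2)$, the doublet separation is of order $\alpha^4 mc^2$ in absolute energy, i.e. of order $\alpha^2$ relative to the gross transition energy, which is the $\alpha$ dependence exploited below.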
The simplest case is that of alkali
doublets such as \ion{Li}{i}, \ion{Na}{i}, \ion{K}{i},
but also of the alkali ions \ion{C}{iv} and \ion{Si}{iv} where
the splitting of the doublet, i.e. the wavelength
separation of the two components is a function of
$\alpha$. By
measuring the alkali splitting in gas at redshift $z$ we can
measure the value of $\alpha$ at a different instant
of space-time. This means we can effectively probe the variations
of $\alpha$ over space-time.
Earth-based laboratories have so far revealed no variation in their values. For example, the stability of the fine structure constant is ensured to within a few parts in 10$^{17}$ over a 1 yr period (Rosenband et al. 2008). Hence their status as truly constant is amply justified. Astronomy has a great potential in probing their variability at very large distances and in the early Universe.
The first attempts to measure variation of $\alpha$
using QSO spectra \citep{savedoff,bahcall} could only
achieve an accuracy of $10^{-2}$ in $\Delta\alpha/\alpha$.
However, the transition frequencies of the narrow metal absorption lines observed in the spectra of distant quasars are sensitive to $\alpha$.
Thus
the many-multiplet (MM)
method has been introduced, which allows all observed transitions to
be compared, gaining access to the typically much larger dependence of
the ground state energy levels on $\alpha$ \citep{Dzuba:1999:888}. Overall,
the MM method improves the sensitivity to the measurement of a
variation of $\alpha$ by more than an order of magnitude over the
alkali-doublet method.
The change in the rest-frame frequencies between the laboratory,
$\omega_{i}(0)$, and in an absorber at redshift $z$, $\omega_{i}(z)$,
due to a small variation in $\alpha$, i.e.~\ensuremath{\Delta\alpha/\alpha} $\ll 1$, is
proportional to a $q$-coefficient for that transition:
\begin{equation}\label{eq:da1}
\omega_{i}(z) \equiv \omega_{i}(0) + q_i\left[\left(\alpha_z/\alpha_0\right)^2-1\right]\,,
\end{equation}
\noindent
where $\alpha_0$ and $\alpha_z$ are the laboratory and absorber values
of $\alpha$, respectively \citep{Dzuba:1999:888}.
The change in frequency is observable as a
velocity shift, $\Delta v_i$, of the $i-th$ transition.
\begin{equation}\label{eq:da2}
\hspace{1em}\frac{\Delta v_i}{c} \approx -2\frac{\Delta\alpha}{\alpha}\frac{q_i}{\omega_{i}(0)}\,.
\end{equation}
The MM method is based on the comparison of measured velocity shifts
from several transitions having different $q$-coefficients to compute
the best-fitting \ensuremath{\Delta\alpha/\alpha}.
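As a numerical illustration of Eq.~\ref{eq:da2} (the $q/\omega_i(0)$ ratio used below is an arbitrary placeholder, not a measured coefficient):

```python
C_KMS = 299792.458  # speed of light in km/s (exact by definition)

def velocity_shift(dalpha_over_alpha, q_over_omega0):
    """Velocity shift (km/s) of a transition for a fractional change
    in alpha, given the ratio of its q-coefficient to its rest-frame
    frequency (the MM relation above)."""
    return -2.0 * dalpha_over_alpha * q_over_omega0 * C_KMS
```

With $q/\omega_i(0) \simeq 0.03$, a 1 ppm change in $\alpha$ corresponds to a shift of roughly 18 \ensuremath{\textrm{m\,s}^{-1}}, which sets the scale of the wavelength calibration accuracy required by the MM method.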
The MM method and the
advent of 8m class telescopes that could provide
high resolution spectra of QSOs gave the first hints that the fine
structure constant might change its value over time, being lower in the past
by about 6 parts per million (ppm) (Webb et al. 1999, Murphy et al. 2004).
With the addition of another 143 VLT-UVES absorbers,
Webb and collaborators arrived at the surprising
conclusion that although on average
there is no variation of $\alpha$, there are significant
variations along certain directions in the sky.
They have found a 4-$\sigma$ evidence for a dipole-like
variation in $\alpha$ across the sky at the 10 ppm level
(Webb et al. 2011; King et al. 2012). Several other constraints
from higher-quality spectra of individual absorbers exist
\citep{chand2006,lev2007}
but none directly
support or strongly conflict with the $\alpha$ dipole
evidence and a possible systematic producing opposite values
in the two hemispheres is not easy to identify.
The
proton-to-electron mass ratio, $\mu$, is also a dimensionless constant
which can be probed experimentally.
It is known that the wavelengths of the rovibronic
molecular transitions are sensitive to $\mu$.
In a diatomic molecule the energy of the rotational transitions
is proportional to the reduced mass
of the molecule, $M$, and that of vibrational transitions is proportional
to $\sqrt{M}$, in the first order approximation. The frequency of
the rovibronic transitions in Born-Oppenheimer approximation
can be written as,
\begin{equation}
\nu = c_{\rm elec} + c_{\rm vib} / \sqrt{\mu} + c_{\rm rot} / \mu
\end{equation}
where $c_{\rm elec}$, $c_{\rm vib}$, and $c_{\rm rot}$ are some numerical coefficients
related, respectively, to electronic, vibrational and rotational transitions.
Therefore, by comparing the wavelength of the molecular transitions detected in
quasar spectra with their laboratory values one can measure the
variation in $\mu$ (i.e. $\Delta \mu/\mu$\ $\equiv (\mu_z-\mu_0)/\mu_0$ where $\mu_z$ and $\mu_0$ are
the values of proton-to-electron mass ratio at redshift $z$ and today) over cosmological time scales.
Using intervening molecular absorption lines seen in the high-$z$
quasar spectra for measuring $\Delta \mu/\mu$\ in the distant universe was first proposed
by \citet{Thompson75}. As H$_2$ is the most abundant
molecule its Lyman and Werner absorption lines seen in the quasar absorption
spectra have been frequently used to constrain the variation of $\mu$.
However, H$_2$ molecules are
detected in only a few percent of
the high redshift damped Lyman-$\alpha$ (DLA) systems
\citep{Srianand12}
with only a handful
of them being suitable for probing the variation of $\mu$.
If $\mu$ varies, the observed wavelengths of different H$_2$ lines will shift
differently with respect to their expected wavelengths based on laboratory
measurements and the absorption redshift. The sensitivity of the wavelength of
the i'th H$_2$ transition to the variation
of $\mu$ is generally parametrised as
\begin{equation}
\lambda_i = \lambda_i^0 (1+z_{\rm abs})\big{(}1+K_i\frac{\Delta\mu}{\mu}\big{)},
\label{eq_dm}
\end{equation}
where $\lambda_i^0$ is the rest frame wavelength of the transition, $\lambda_i$
is the observed wavelength, $K_i$ is the sensitivity coefficient
of i'th transition, and \ensuremath{z_{\rm abs}}\ is the redshift of the H$_2$ absorber.
Alternatively Eq. \ref{eq_dm} can be written as
\begin{equation}
z_i = z_{\rm abs} + C K_i, ~~~~~ C = (1+z_{\rm abs})\frac{\Delta\mu}{\mu}
\label{eq_dm_a}
\end{equation}
which clearly shows that \ensuremath{z_{\rm abs}}\ is only the mean redshift of transitions with $K_i$ = 0.
Eq. \ref{eq_dm_a} is sometimes presented as
\begin{equation}
z_{\rm red} \equiv \frac{(z_i - z_{\rm abs})}{(1 + z_{\rm abs})} = K_i \frac{\Delta\mu}{\mu}
\label{eq_dm_b}
\end{equation}
that shows the value of $\Delta \mu/\mu$\ can be determined using a linear regression analysis of reduced
redshift ($z_{\rm red}$) vs $K_i$.
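A minimal sketch of the regression of Eq.~\ref{eq_dm_b} on synthetic data follows; the function name is illustrative and the unweighted fit is forced through the origin, whereas a real analysis would weight each transition by its measurement error.

```python
import numpy as np

def fit_dmu_over_mu(z_obs, K, z_abs):
    """Least-squares estimate of dmu/mu from H2 transitions,
    fitting the reduced redshift z_red = K * (dmu/mu)."""
    z_red = (np.asarray(z_obs, float) - z_abs) / (1.0 + z_abs)
    K = np.asarray(K, float)
    # slope of a line through the origin
    return float(np.sum(K * z_red) / np.sum(K * K))
```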
This method has been frequently used in the literature for constraining the variation
of $\mu$ \citep[see][]{Varshalovich93,Cowie95,Levshakov02mnras,malec,wm11,vw11,wm12}.
However, at present, measurements of $\Delta \mu/\mu$\ using H$_2$ are limited
to six H$_2$-bearing DLAs at $z \ge$ 2. All of these analyses suggest that
$|$$\Delta \mu/\mu$$|\le10^{-5}$ at $2\le z \le 3$. The best reported constraint
based on a single system is $\Delta \mu/\mu$\ = $+(0.3\pm3.7)\times 10^{-6}$,
obtained by \citet{King11} towards Q~0528$-$250. Among the developments in the field since then,
we must mention a number of high-precision measurements
of $\Delta\mu/\mu$ using UV lines
\citep{malec,wm11,vw11,wm12}.
At $z\le 1.0$ a stringent constraint on $\Delta \mu/\mu$\ is obtained using inversion transitions
of NH$_3$ and rotational molecular transitions \citep{Murphy08,Henkel09,Kanekar11}.
The best reported limit using this technique is $\Delta \mu/\mu$\ = $-(3.5\pm1.2)\times 10^{-7}$ \citep{Kanekar11}.
\citet{Bagdonaite13} obtained the strongest
constraint to date of $\Delta \mu/\mu$\ = $(0.0\pm1.0)\times10^{-7}$ at $z = 0.89$
using methanol transitions.
In the Galaxy, stringent bounds have been obtained in the millimetre and sub-millimetre
domain by \citet{l10a,l10b,l10c}.
However, $\Delta \mu/\mu$\ measurements using NH$_3$ and
CH$_3$OH are restricted to only two specific systems at $z\le 1$. Alternatively
one can place good constraints using 21-cm absorption in conjunction
with metal lines and assuming all other constants have not changed. \citet{Rahmani12} have
obtained $\Delta \mu/\mu$ = $(0.0\pm1.50)\times 10^{-6}$ using a well selected sample
of four 21-cm absorbers at \ensuremath{z_{\rm abs}} $\sim$1.3. \citet{Srianand10} have
obtained $\Delta \mu/\mu$ = $(-1.7\pm1.7)\times10^{-6}$ at $z \sim$3.17 using
the 21-cm absorber towards J1337$+$3152.
However, one of the main systematic uncertainties in this method comes from
how one associates 21-cm and optical absorption components.
An ESO Large Programme (LP)
has been undertaken in the last four semesters to address the question of the variability of the fundamental constants.
We here briefly describe the programme
and show some of its first
results.
\begin{table}
\caption{QSO targets of the Large Programme.\label{targets}}
\centering
\begin{tabular}{lllll}
\hline
QSO & $\alpha$(2000) & $\delta$(2000) & $V$ &\\
\hline
HE 0002--4214 & 00 04 48.20&--41 57 28.0&17.2\\
HE 0027--1836 & 00 30 23.63&--18 19 56.0&17.9\\
QSO J0120+2133 & 01 20 17.26& +21 33 46.4&16.1\\
PKS 0237--23 & 02 40 08.17&--23 09 15.75&16.6\\
QSO J0407--4410 & 04 07 17.99&--44 10 13.4&17.6\\
QSO J0455--4216 & 04 55 22.90&--42 16 16.9&17.1\\
HE 0940--1050 & 09 42 53.49&--11 04 25.9&16.6\\
QSO J1215--0034 & 12 15 49.81&--00 34 32.2&17.5\\
QSO J1333+1649 & 13 33 35.78& +16 49 04.0&16.7\\
HE 1341--1020 & 13 44 27.10&--10 35 42.0&17.1\\
HE 1347--2457 & 13 50 38.88&--25 12 16.7&16.3\\
QSO J1549+1911 & 15 51 52.48& +19 11 04.2&15.8\\
QSO J2136--4308 & 21 36 06.04&--43 08 18.1&17.7\\
HE 2217--2818 & 22 20 06.77&--28 03 23.4&16.0\\
QSO J2208--1944 & 22 08 52.07&--19 44 00.0&17.3\\
\hline
\end{tabular}
\end{table}
\section{The UVES Large Programme}
The main drawback of the sample assembled by
\citet{webb} is that the observations were
mainly acquired with scientific objectives other
than the measurement of $\Delta\alpha/\alpha$;
thus systematic errors were not
monitored and minimised.
In 2010 our Large Program
of optical observations dedicated to
measuring $\alpha$ and $\mu$ in distant galaxies was approved
by the ESO Observing Programmes Committee.
The Large Program was
granted 42 nights beginning mid 2010 at UVES at the ESO VLT to obtain
a high-quality sample of quasar spectra, calibrated specifically for the purpose
of constraining variations in $\alpha$ and $\mu$
to the ultimate precision allowed by current technology.
For the first time the spectra were observed primarily
for this purpose, with the explicit aim to keep
calibration errors under control.
The fundamental physical questions being addressed demand a level of rigour in quasar absorption
studies well beyond the norm
previously adopted.
The signal-to-noise ratio of quasar spectra
is one of the main factors in the error budget. This, in turn,
limits one's ability to track systematic errors.
However, by careful selection of targets our Large Program focuses on
\begin{itemize}
\item 15 among the brightest known quasars showing a suitable absorber
\item a relatively large number of absorbers along their sight-lines: 22 in total.
\end{itemize}
The coordinates and magnitude of the target QSOs are given in Table \ref{targets}.
This means we have observed
each absorber for more than
three nights on average, which allowed us to obtain, for many absorbers, a much higher
signal-to-noise ratio
than achieved in all previous studies except the
two ``test case'' absorbers studied in \citet{Molaro:2008:173}. In these
cases the photon statistical noise was reduced well below that from systematic errors.
Our Large Programme
achieved this for all relevant absorbers.
For each absorber we have a high enough signal-to-noise ratio
to convincingly detect, model and remove any remaining systematic
errors down to the level of few ppm,
thereby allowing a convincing detection of any variation
in $\alpha$ at the level seen
in the Keck spectra \citep{Murphy:2003:609}.
The measurements rely on detecting a pattern of small relative wavelength shifts between different
transitions spread throughout the spectrum. Normally, quasar spectra are calibrated by comparison with spectra
of a hollow cathode lamp (normally thorium) rich in unresolved spectral lines.
However since the lamp is located inside the
spectrograph, the calibration light
traverses a slightly different optical path with respect
to the quasar light, so the comparison
is not perfect. The Large Program adopts several
innovations to ensure that we achieve the ultimate precision available:
\begin{itemize}
\item we systematically observed bright asteroids, whose reflected sunlight
spectra contain many spectral features,
typically narrower and sharper than QSO absorption lines.
These observations allow us to generate a transfer function for correcting the comparison-lamp
wavelength scale. This technique was recently pioneered by us \citep{Molaro:2008:559};
\item we observed bright stars through an iodine gas
absorption cell, as done for extrasolar planet searches, providing an even more precise
transfer function for part of the wavelength range, important for varying constants;
\item we took a series of lamp exposures bracketing
the quasar exposures to ensure the best possible starting
point for this transfer function.
\end{itemize}
Previous estimates of wavelength calibration errors in varying-constant measurements are at
the 3 - 5 ppm level \cite{Molaro:2008:173}. With the three innovative approaches above, we
expect to suppress/remove them below the 1 ppm level for individual quasar absorbers.
\subsection{Systematic effects in the wavelength calibration}
A major step forward towards the understanding of the systematic
effects that limit the precision of wavelength calibration
has been achieved by the use of a Laser Frequency
Comb (LFC) on the HARPS spectrograph \citep{Wilken:2010:L16,Wilken}.
These observations were capable of highlighting the presence
of tiny differences in the pixel sizes of the CCD detectors,
that are due to the manufacturing process. Quite interestingly, the
list of wavelengths of the Th-Ar lamp normally used to
calibrate HARPS \citep{Lovis:2007:1115} has these detector-induced
inaccuracies of $\pm$40\,\ensuremath{\textrm{m\,s}^{-1}}\ folded in; thus, when
this line list is used as a reference, local errors
of this order of magnitude should be expected.
No experiment with an LFC has been carried out so far on the
spectrographs on 8m class telescopes, such as UVES or HIRES,
yet pixel size differences of the same order as those
found in HARPS should be expected for these detectors too.
\citet{Griest:2010:158} and \citet{Whitmore:2010:89}
compared the calibration obtained with the Th-Ar lamp with
that obtained from an absorption cell of molecular iodine,
for HIRES and UVES, respectively. In both cases they were able
to highlight distortions of the wavelength scale with a
jig-saw pattern and peak-to-peak amplitude of several hundreds \ensuremath{\textrm{m\,s}^{-1}}\
along the echelle orders.
Although the origin of these distortions is not completely
elucidated, it is likely a combination of inhomogeneities
in the pixel size of the detectors and errors in the reference
line positions of the Th-Ar lamps.
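To put these velocity-scale distortions in perspective, a velocity offset maps onto a relative wavelength error as $\Delta\lambda/\lambda = v/c$. The sketch below shows only this first-order conversion; relating a wavelength error to \ensuremath{\Delta\alpha/\alpha}\ or $\Delta \mu/\mu$\ additionally involves the sensitivity coefficients of the transitions.

```python
# Convert a velocity-scale calibration distortion into a relative
# wavelength error, to relate the distortions discussed above to the
# ppm-level precision targeted for Delta alpha/alpha.
C_LIGHT = 299_792_458.0  # speed of light in m/s

def velocity_to_ppm(dv):
    """Relative wavelength error (in ppm) for a velocity offset dv in m/s."""
    return dv / C_LIGHT * 1e6

# A peak-to-peak intra-order distortion of ~300 m/s corresponds to ~1 ppm,
# while the +/-40 m/s Th-Ar line-list inaccuracy corresponds to ~0.13 ppm.
print(velocity_to_ppm(300.0))
print(velocity_to_ppm(40.0))
```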
\subsection{Solar-asteroid comparison}\label{sol-ast}
Comparison of different calibration laboratory sources, like Th-Ar, LFC and
$\mathrm I_2$ cell helps to better characterize
the systematics of our wavelength scale. A complementary and
very interesting technique is the use of the spectrum of
an astronomical object.
This has the advantage that the light follows exactly the
same optical path as the scientific target.
Very attractive astronomical sources for wavelength
calibration are asteroids \citep{Zwitter}, which reflect
the solar spectrum, imprinting on it only minor signatures,
mainly broad and shallow absorptions, and whose
radial velocities are known, from their orbital
solutions, to an accuracy of a few \ensuremath{\textrm{m\,s}^{-1}}.
To test the accuracy of our wavelength scale
one possibility is to compare the measured
line positions in an asteroid spectrum
with those from a solar atlas obtained
with a different instrument.
A frequently used solar atlas for this
purpose is the Kurucz solar flux spectrum
\citep{Kurucz05}\footnote{
\url{http://kurucz.harvard.edu/sun/fluxatlas2005}}.
The
wavelength scale
of this atlas is corrected for the gravitational redshift ($\sim$ 0.63 \ensuremath{\textrm{km\,s}^{-1}}).
The claimed accuracy
of the absolute wavelength scale
is $\sim 100$ \ensuremath{\textrm{m\,s}^{-1}}\ \citep{Kurucz05}. However, this
should be taken as an average value, since comparison
of individual lines with synthetic spectra computed from
hydrodynamical models of the stellar photosphere, that take
into account the convective shifts, may show deviations
as large as several hundreds \ensuremath{\textrm{m\,s}^{-1}}\ \citep[see e.g.][]{caffau}.
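The $\sim$0.63 \ensuremath{\textrm{km\,s}^{-1}}\ correction quoted above can be checked with a back-of-the-envelope computation of the solar gravitational redshift as observed from Earth, using standard constants (a simplified estimate that ignores convective blueshifts):

```python
# The shift observed at Earth is GM_sun/(R_sun c) minus the much smaller
# potential terms at the Earth's orbit and at the Earth's surface.
C = 2.99792458e8            # speed of light, m/s
GM_SUN = 1.32712440018e20   # m^3 s^-2
GM_EARTH = 3.986004418e14   # m^3 s^-2
R_SUN, AU, R_EARTH = 6.957e8, 1.495978707e11, 6.371e6   # m

v_grav = GM_SUN / (R_SUN * C) - GM_SUN / (AU * C) - GM_EARTH / (R_EARTH * C)
print(v_grav)   # ~633 m/s, i.e. ~0.63 km/s
```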
Following this approach, \citet{Molaro08}
compared the positions of individual lines measured
in the spectra of asteroids and in the solar atlas.
This is not possible in the near ultra-violet,
where the line blending in the solar spectrum
is so high that positions of individual lines
cannot be measured.
An alternative way to perform this comparison
has been explored by our group in \citet{Rahmani2013}
where we cross-correlated the spectra of asteroids, corrected
for their radial velocity,
observed with UVES over several years with the
solar atlas. This technique allows us to highlight
the presence of wavelength-dependent velocity offsets
between the asteroid spectrum and the solar atlas.
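The cross-correlation idea can be illustrated with a toy sketch; all spectra and numbers below are synthetic, not our data. On a grid uniform in $\ln\lambda$, a Doppler shift is a constant translation ($\mathrm{d}\ln\lambda = v/c$), so the velocity offset can be read off the peak of the cross-correlation.

```python
import numpy as np

C = 299_792.458  # speed of light in km/s

# Uniform grid in ln(lambda) over a small illustrative window near 5000 A.
loglam = np.linspace(np.log(5000.0), np.log(5010.0), 4000)

def spectrum(loglam, v_shift):
    """Toy absorption spectrum: Gaussian lines, Doppler-shifted by v_shift (km/s)."""
    shifted = loglam - v_shift / C
    flux = np.ones_like(loglam)
    for center in np.log([5002.0, 5005.5, 5008.0]):
        flux -= 0.4 * np.exp(-0.5 * ((shifted - center) / 2e-5) ** 2)
    return flux

template = spectrum(loglam, 0.0)    # stands in for the solar atlas
observed = spectrum(loglam, 0.35)   # "asteroid" spectrum, offset by 350 m/s

# Cross-correlate over trial velocities and take the maximum.
trial_v = np.linspace(-2.0, 2.0, 801)   # km/s
cc = [np.sum((1 - observed) * (1 - spectrum(loglam, v))) for v in trial_v]
v_best = trial_v[int(np.argmax(cc))]
print(v_best)   # recovers the injected ~0.35 km/s offset
```

In practice this is done in wavelength chunks, so that a wavelength-dependent offset (rather than a single global one) can be mapped.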
We show in
Fig.~\ref{fig_sol_ast}
the results of the analysis of \citet{Rahmani2013}, where
different spectra of the same asteroid observed at
different epochs are shown with
different symbols.
It is clear from the figure that the offsets increase
with wavelength, but the slope is not the same at all
epochs, being larger for the asteroids observed in
2012.
From our point of view it is important to assess
the effect of these offsets on a measurement
of the variation of a constant, such as \ensuremath{\Delta\alpha/\alpha}\ or
$\Delta \mu/\mu$, assuming that the offsets seen in the
asteroid spectra are the same in the QSO spectra.
In Fig. \ref{fig_sol_ast}, taken from \citet{Rahmani2013},
an intrinsic $\Delta \mu/\mu$ = 0 is assumed and the H$_2$ lines are
assumed to be imprinted on the spectrum, displaying the
measured velocity offset.
The estimated $\Delta \mu/\mu$, obtained by assuming all the velocity
differences to be due to a variation in $\mu$, is then
given in each panel.
It is striking that in at least two cases one
would conclude that $\mu$ varies
at the 4.5$\sigma$ level.
It is thus crucial to detect and remove such offsets
before analysing the QSO data, to avoid spurious detections.
\citet{Molaro:2011:167} and \citet{Whitmore2013} compared solar features observed both with
HARPS and UVES and found such `intra-order distortions' in the UVES
spectrum. In HARPS the
offsets were measured up to 50 \ensuremath{\textrm{m\,s}^{-1}}\ within one order; in UVES, where
the pixel size is a factor of three larger, the offsets are found to be
a factor of three
larger.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=90]{fig_1.ps}}
\caption{The velocity shift measurements using cross-correlation analysis between solar and
asteroid spectra. The solid line in each panel shows the line fitted
to the velocities. The $\Delta \mu/\mu$\ corresponding to the slope of the fitted straight line is
also given in each panel.
}
\label{fig_sol_ast}
\end{figure}
\section{\ensuremath{\Delta\alpha/\alpha} ~ towards HE\,2217$-$2818 }
The first result of our Large Programme
is the analysis of \ensuremath{\Delta\alpha/\alpha}\ in the absorption
systems towards
HE\,2217$-$2818, presented
in \citet{mol13}. We refer the reader to that paper
for all the details of this analysis,
which we summarize here.
Of the five potentially useful absorption systems
only the one at $z_{abs}=1.6919$ provides
a tight bound on \ensuremath{\Delta\alpha/\alpha}.
In spite of the fact that the system is complex,
consisting of several sub-components that span
about 250 \ensuremath{\textrm{km\,s}^{-1}}, each sub-component is narrow
enough to allow a precise determination of
its wavelength.
A matter of concern are the telluric absorptions, which
are imprinted on the spectrum and can seriously
affect the measured wavelengths of the intergalactic
absorptions. The telluric lines were identified with the
help of the spectrum of a hot, fast-rotating star.
No attempt was made to remove the telluric absorptions;
instead, two different approaches were adopted to deal with them.
In the first case any
intergalactic absorption affected by telluric lines
was removed
from the analysis; in the second case
the affected portions of the spectra were masked and
not considered in the analysis.
In Fig.\ref{fig:sys169}, reproduced from \citet{mol13},
the six ``clean'' lines that are used in the first case
are shown together with the best fitting (in the $\chi^2$ sense)
model.
The best fitting model shown includes as many as 32 sub-components
for each transition.
The number of components was determined by iteratively fitting the profiles with an increasing number of components, until a minimum in the reduced $\chi^2_{\nu}$ was obtained.
The best fit provides
\ensuremath{\Delta\alpha/\alpha}\ = $+1.3\pm 2.4_{\rm stat} \pm 1.0_{\rm sys}$ ppm.
In the second approach, in which a larger number of transitions
is considered, acceptable fits can be obtained with a slightly
smaller number of components, thirty, rather than thirty-two.
It is nevertheless reassuring that the two
approaches yield consistent results, within
our estimated statistical error, supporting
the robustness of our analysis:
$ \ensuremath{\Delta\alpha/\alpha}=
\ReportStatisticalError{-3.8}
{2.1}\,{\rm ppm}
$ for the second approach.
One matter of concern is the use of different ions,
given that ionization effects may introduce a systematic effect
in the \ensuremath{\Delta\alpha/\alpha}\ measurements
\citep[e.g.][]{Levshakov:2005:827}. In this system we can use as many as six \ion{Fe}{ii}
transitions, which have different $q$ coefficients, making it feasible to perform an analysis of \ensuremath{\Delta\alpha/\alpha}\
based on this ion only.
Within the second approach this leads to
$\ensuremath{\Delta\alpha/\alpha} =
\ReportStatisticalError{+1.14}
{2.58}$\,ppm,
which is, again, statistically consistent with the
other two analyses.
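The leverage provided by different $q$ coefficients can be quantified with a small sketch. In the standard formalism $\omega = \omega_0 + qx$ with $x = (\alpha/\alpha_0)^2 - 1 \simeq 2\,\Delta\alpha/\alpha$, so the induced velocity shift is $\Delta v/c \simeq -2q\,(\Delta\alpha/\alpha)/\omega_0$. The $q$ value below is only illustrative of a sensitive \ion{Fe}{ii} transition; the actual coefficients are tabulated in the literature.

```python
# Velocity shift induced in a single transition by a change in alpha,
# using the q-coefficient formalism. Numerical values are illustrative.
C = 2.99792458e8   # speed of light, m/s

def velocity_shift(q_cm, lambda_A, da_over_a):
    """Shift in m/s for rest wavelength lambda_A (Angstrom), q in cm^-1."""
    omega0 = 1.0 / (lambda_A * 1e-8)   # wavenumber in cm^-1
    return -2.0 * q_cm * da_over_a / omega0 * C

# A 1 ppm change in alpha moves a q ~ 1500 cm^-1 line near 2382 A
# by roughly -20 m/s, which sets the precision scale of the experiment:
print(velocity_shift(1500.0, 2382.0, 1e-6))
```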
\begin{figure*}
\begin{center}
\includegraphics[width=0.75\textwidth]{aa21351-13-fig2.ps}
\caption{Transitions in absorption system at $\ensuremath{z_{\rm abs}}=1.6919$ used to
derive \ensuremath{\Delta\alpha/\alpha}\ in our second analysis approach. The Voigt profile
model (green line) is plotted over the data (blue histogram). The
velocity of each fitted component is marked with a vertical line and
the residuals between the data and model, normalised by the error
spectrum, are shown above each transition. The top panel shows the
composite residual spectrum -- the mean spectrum of the normalised
residuals for all transitions shown -- in units of $\sigma$.
Credit: Molaro et al. A\&A 555, 68, 2013
reproduced
with permission, \copyright ESO}
\label{fig:sys169}
\end{center}
\end{figure*}
\subsection{Implications for the spatial dipole in \ensuremath{\Delta\alpha/\alpha}}
Our results are consistent with no variation
in $\alpha$ along the line of sight to
HE\,2217$-$2818: the system at $z_{abs}= 1.6919$
provides a very stringent bound, while the other
systems at lower redshifts are consistent with
this conclusion.
It is interesting to compare this null result with
the prediction of the dipole model for the
spatial variation of \ensuremath{\Delta\alpha/\alpha} .
We consider the model proposed
by \citet{King:2012:3370}, that stems
from the analysis of nearly 200 measurements
obtained both with UVES at VLT and HIRES at Keck.
The
combined data, with a mean redshift
of $\sim$1.8, suggest a spatial variation of \ensuremath{\Delta\alpha/\alpha}\
that can be described by the sum of
a monopole and a dipole
in the direction with equatorial coordinates
$\mathrm 17.3h\pm1.0h$, $-61^\circ\pm10^\circ$
\citep{King:2012:3370}.
This can also be described by a simpler model,
with only the dipole term
in the direction
$\mathrm 17.4h\pm0.9h$, $-58^\circ\pm9^\circ$ \citep{King:2012:3370}.
For the line of sight towards
HE\,2217$-$2818
the simple dipole-only model
predicts $\ensuremath{\Delta\alpha/\alpha}=+5.4\pm1.7$\,ppm.
Thus our measurement
differs from the simple dipole prediction
by 1.3\,$\sigma$. The corresponding
prediction for the monopole plus dipole model
is $\ensuremath{\Delta\alpha/\alpha}=+3.2\pm1.7$\,ppm.
Our null result does not support the existence of
the dipole, yet it is not stringent enough
to rule it out.
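The 1.3\,$\sigma$ figure quoted above follows from combining our statistical and systematic errors in quadrature with the prediction uncertainty (a common simplifying assumption):

```python
import math

# Measurement towards HE 2217-2818 vs the dipole-only model prediction,
# with all quoted uncertainties (in ppm) added in quadrature.
meas, stat, sys_err = 1.3, 2.4, 1.0   # measured value, stat and sys errors
pred, pred_err = 5.4, 1.7             # dipole-only prediction and its error

sigma = abs(pred - meas) / math.sqrt(stat**2 + sys_err**2 + pred_err**2)
print(round(sigma, 1))   # 1.3
```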
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=90,clip=true]{Fig_3.ps}}
\caption{Absorption profiles of H$_2$ transitions of the $J$ = 3 level and the best-fit
Voigt profile for the combined spectrum. The normalized residual
(i.e. ([data]$-$[model])/[error]) for each
fit is also shown at the top of each panel, along with the 1$\sigma$ lines.
Clean absorption lines are identified by
the letter ``C'' at the bottom right of these transitions. The vertical ticks mark the
positions of the fitted contaminations. Reproduced from \citet{Rahmani2013}, with permission.}
\label{fig_J3}
\end{figure}
\section{$\Delta \mu/\mu$\ towards HE 0027-1836}
The DLA at $z=2.4018$ towards HE 0027$-$1836 shows an H$_2$ cloud with over 100 H$_{2}$
lines in the observed wavelength range of
3330 \AA\ to 3800 \AA, which can effectively be used to probe $\mu$.
The detected lines are from different
rotational states ($0 \le J \le 6$) and have
a wide range of oscillator strengths, thus
allowing a very accurate
modeling of the molecular cloud.
The analysis of this system has been
reported by \citet{Rahmani2013} and we refer
the reader to that paper for all the details;
here we summarize the main results.
From all the detected
absorptions, 71 strong and relatively unblended H$_2$ lines
were selected for the analysis.
To derive $\Delta \mu/\mu$\
we used either: i) a linear regression analysis between the line redshift $z_{\rm red}$ and its sensitivity coefficient $K_i$
\citep[cf.][]{wm10,wm11}, or ii) a detailed modeling of the lines with $\Delta \mu/\mu$\ as an
additional free parameter \citep[cf.][]{King11}.
For the former method we obtained the redshifts of individual transitions from
\textsc{vpfit}. An example of the fitted
Voigt profile for $J$ = 3 transitions is provided in Figure \ref{fig_J3}.
Since differences
in the excitation temperatures and broadening
between high and low $J$-levels
may be due to different
phases in the absorbing gas, we allowed the redshifts of absorptions from different $J$-levels
to be different.
In Fig.~\ref{fig_zvsk_int} we plot the reduced redshift vs $K$
for different transitions. The different $J$-levels are
marked with different symbols and the fitted line for different $J$-levels are shown with different
line styles. The slope (i.e. $\Delta \mu/\mu$) of
these lines is forced to be the same.
The velocity drift shown in Fig.~\ref{fig_sol_ast} was corrected for, giving our final value:
$\Delta \mu/\mu$\ = $+15.0\pm9.3$ ppm.
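The $z$-vs-$K$ regression can be sketched on synthetic data. The sketch below uses a single $J$-level and illustrative numbers (a plausible range of H$_2$ sensitivity coefficients and an assumed per-line redshift uncertainty), whereas the actual analysis fits all $J$-levels with a common slope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic z-vs-K experiment: each line redshift obeys
#   z_i = z0 + (1 + z0) * K_i * (dmu/mu),
# so the reduced redshift zeta_i = (z_i - z0)/(1 + z0) is linear in K_i
# with slope dmu/mu. All numbers are illustrative.
z0 = 2.4018
dmu_true = 15e-6                        # 15 ppm, of the order of the measured value
K = rng.uniform(-0.01, 0.05, 71)        # assumed spread of sensitivity coefficients
z = z0 + (1 + z0) * K * dmu_true + rng.normal(0.0, 2e-7, K.size)

zeta = (z - z0) / (1 + z0)
slope, intercept = np.polyfit(K, zeta, 1)
print(slope * 1e6)   # recovered dmu/mu in ppm, close to the injected 15 ppm
```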
In the second approach we used both a single and a two-component model.
The single component model provides
$\Delta \mu/\mu$\ = $+15.6\pm6.9$ ppm. This is consistent with what we have
found above using $z$-vs-$K$ analysis.
The model with two components accounts better for the multi-phase nature of the absorbing gas. In this case $z$ and
$b$ of the two components are forced to be the same for different $J$-levels.
The best-fit value is
$\Delta\mu/\mu = (-7.6 \pm 8.1_{\rm stat} \pm 6.3_{\rm sys})$ ppm,
after correction for the velocity drift.
The reduced $\chi^2$ is 1.177, which is slightly lower than that of the corresponding
single-component fit. The two-component model is marginally favoured by the statistical indicators with respect to the single-component one and provides our favoured value for $\Delta \mu/\mu$.
\begin{table*}
\caption{Selected values of $\Delta \mu/\mu$\ from the literature\label{dmtab}}
\centering
\begin{tabular}{lllll}
\hline
$\Delta \mu/\mu$\ & Ref. & absorber & $z$ & QSO\\
$10^{-6}$\\
$4.3\pm7.2$ & \citet{wm12} &$\mathrm H_2$ & 3.025 & Q0347$-$383\\
$0.3\pm3.2_{\rm stat}\pm1.9_{\rm sys}$ & \citet{King11} & $\mathrm H_2$ and HD & 2.811 & Q0528$-$250\\
$8.5\pm4.2$ & \citet{vw11} & $\mathrm H_2$ and HD & 2.059 & J2123$-$005\\
$10.9\pm7.1$ & \citet{King08} &$\mathrm H_2$ & 2.595 & Q0405$-$443\\
$3.7\pm14 $ & \citet{Thompson09apj}&$\mathrm H_2$ & 2.595 & Q0405$-$443\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth,angle=90]{fig_4.ps}
\caption{Reduced redshift vs $K_i$ for all the fitted H$_2$ lines.
Lines
from different $J$-levels are plotted with different symbols. The best-fit
straight line for each $J$-level, with the constraint that the slope
be the same for all levels, is also shown.
This analysis provides $\Delta \mu/\mu$\ = $+20.0\pm9.3$ ppm, without any correction
for the velocity drift.
}
\label{fig_zvsk_int}
\end{figure*}
Our measurement
is consistent with a constant $\mu$ over the last $\approx$ 11 Gyr within one part in 10$^{5}$.
This is consistent with $\Delta \mu/\mu$\ measurements in the literature, as reported in Table
\ref{dmtab}.
The measurement towards Q0528$-$250
has the smallest estimated error;
there is, however, some concern about this measurement, due
to the fact that \citet{King11} and \citet{Noterdaeme08}
derive molecular hydrogen column densities that differ
by a factor of 50. Further investigation of this system
is highly desirable.
We note that three out of four UVES based measurements show positive values
of $\Delta \mu/\mu$.
However, since
a wavelength-dependent drift, such as the one we observed,
could bias $\Delta \mu/\mu$\ measurements towards positive values, this cannot be taken
as evidence of a variation until the origin of the UVES velocity drifts is fully elucidated.
\balance
\section{Conclusions and future prospects}
The analysis of the first two lines of sight of the ESO Large Program dedicated to the study of the variability of the fundamental constants provided results which are consistent with a null variation of the fine structure constant $\alpha$ and of the proton-to-electron mass ratio $\mu$. Namely:
\\
\centerline{
\ensuremath{\Delta\alpha/\alpha}\ = $+1.3\pm 2.4_{\rm stat} \pm 1.0_{\rm sys}$ ppm }
and
\centerline{
$\Delta \mu/\mu$\ = $-7.6 \pm 8.1_{\rm stat} \pm 6.3_{\rm sys}$ ppm }
\bigskip
The analysis of the other absorption systems towards the remaining lines of sight is in progress.
With the first analysis we have confirmed the importance of an accurate observational strategy designed to minimize systematics. In particular, the use of the solar spectrum obtained by regular asteroid observations proved to be crucial to check the wavelength accuracy of the UVES spectrograph. This analysis revealed a systematic effect in the UVES wavelength scale, with intra-order distortions which may have an impact on a possible signal of variability in $\alpha$ and $\mu$. A full characterization of these distortions is required in order to make a significant advance in the accuracy of these measurements.
\acknowledgements
P.B. acknowledges support from the Conseil Scientifique
de l'Observatoire de Paris.
S.A.L.'s work is supported by the grant DFG
Sonderforschungsbereich SFB 676 Teilprojekt C4.
P.M. and C.J.M. acknowledge the financial support of grant
PTDC/FIS/111725/2009 from FCT (Portugal). C.J.M. is also supported by an FCT Research Professorship, contract reference IF/00064/2012.
M.T.M. thanks the Australian Research Council for funding under the Discovery Projects scheme (DP110100866).
\bibliographystyle{an}
\section{Introduction}
Controlling collisions and scattering has always played an essential role in physics. Thanks to model experiments ranging from collisions of alpha particles with gold foils, conducted more than a century ago \cite{rutherford1911lxxix,geiger1909diffuse}, to high energy collisions between hadrons at the LHC \cite{aad2008atlas}, a wealth of intimate information was revealed about the nature of atoms and elementary particles as well as their interactions. In this framework, the most fundamental description of the interaction of a beam of particles with a scatterer is the famous Rutherford formula, describing the dependence of the differential cross section on the scattering angle, the energy of the incident beam, and the potential shape of the scatterer \cite{friedrich2013scattering}. Collisions are also ubiquitous in solid state physics, in particular when considering charge transport. Charge carriers indeed scatter on a large variety of ``defects'': lattice vacancies, phonons, potentials of remote ionized impurities, etc. Due to this complexity, it is almost impossible to reach the same degree of control in charge transport scattering experiments as in the case of collisions involving beams of elementary charged particles propagating in vacuum.
However, in the ballistic regime of charge transport, the bulk carrier mean free path becomes larger than the device size, and transport properties can be tailored by tuning the device geometry \cite{datta1997electronic}. This is of course achieved most favorably in nanodevices, which are probably the most adequate systems in which to attempt ``ideal'' scattering experiments with electrons in solids and their associated quasiparticles. Nevertheless, even in the ballistic regime, a full treatment of scattering in solid-state devices requires taking into account complex many-body interactions with the Fermi sea \cite{Saraga2004,Saraga2005}.
The archetypal ballistic device is the so-called quantum point contact (QPC). Thanks to a metallic split gate deposited on top of a semiconductor heterostructure hosting a high mobility two-dimensional electron gas (2DEG), one can create a constriction whose width can be varied at will with gate voltage. The resulting smooth potential ensures adiabaticity, which leads to a quantized conductance of the QPC \cite{van1988quantized,Wharam1988}.
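The conductance quantization can be illustrated with a simple hard-wall mode-counting estimate (real QPCs have smooth saddle-point potentials, so this is only a sketch, and the 140 nm width below is illustrative):

```python
import math

G0 = 7.748091729e-5   # conductance quantum 2e^2/h in siemens (CODATA value)

def open_modes(width_nm, lambda_f_nm):
    """Number of propagating transverse modes in a hard-wall constriction.

    Modes have k_n = n*pi/W and propagate while k_n < k_F,
    i.e. for n < 2*W/lambda_F.
    """
    return math.floor(2.0 * width_nm / lambda_f_nm)

# With lambda_F = 25 nm, an illustrative 140 nm-wide constriction carries
# 11 open modes, each contributing one conductance quantum:
n = open_modes(140.0, 25.0)
print(n, n * G0)
```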
This canonical realization of ballistic transport allowed researchers to go one step further, in particular when combining transport measurements with a local electrostatic perturbation by a scanning probe. This method led to the exploration of deviations from this perfect picture of QPCs, such as the observation of branched electron flow in the leads or of rich many-body physics \cite{Thomas1996,cronenwett2002low,brun2014wigner,brun2016electron}.
In other studies, geometric scatterers with an asymmetric shape were designed to act as mirrors redirecting electrons towards a particular lead through specular reflection \cite{song1998nonlinear}, leading to a rectifying behavior similar to that of diode bridges. Such devices could yield applications at high frequency, given the short electron transit time in the ballistic regime \cite{song2001room,bednarz2005nonlinear}. In addition, the magnetic field is a particularly useful knob to focus electrons at desired locations through the so-called ``magnetic focusing'' effect \cite{aidala2007imaging,bhandari2016imaging}. Surprisingly, to our knowledge, there are far fewer examples where fine tuning of the electrostatic potential is used for similar lensing purposes \cite{poltl2016classical}.
\begin{figure}[h!]
\centering
\includegraphics[width=8 cm]{Figure_1_v3.pdf}
\caption{\label{Figure_1}a) Illustration of the ring-like geometry (orange, not on scale), with the superimposed potentials used to tailor the potential landscape next to the edge of the ring antidot. b) and c) Tight binding simulations of the current density modulus (\textit{J}) and iso-current density lines (black with arrows) for depletion (red potential) or accumulation (blue potential), respectively. $G_0$ is the total conductance without any perturbation potential.}
\end{figure}
Here, we study the geometry presented in Fig. 1, where specular electron reflection on the hard-wall facing the entrance of a quantum ring is either enhanced or reduced by tailoring the local electrostatic potential in the vicinity of the wall. The idea is that a Rutherford-like scattering effect - induced by an attractive/repulsive potential - should deflect electron trajectories and hence ease or hinder electron injection into the QR arms. Using simulations, we indeed show that even small changes in the electrostatic potential at a specific location in the device have strong impacts on ballistic charge transmission, and hence on the device conductance. Experiments fully reproduce the simulated behavior by applying positive or negative potentials on a scanning metallic tip positioned over the hard-wall. Counter-intuitively, the highest conductance is observed for a depleting tip potential, and vice versa.
\section{Results and discussion}
Quantum transport simulation results were obtained using the KWANT package \cite{groth2014kwant} for the ring-like geometry depicted in Fig. 1, where device boundaries are defined by infinitely sharp hard-walls. We focus here on the two T-shaped junctions located next to its leads, as this is where ballistic trajectories will be tuned. Note that the central branch connecting the two circular arms plays no role in this work.
The colored regions in Fig. 1a correspond to either raised (red) or lowered (blue) potential with respect to the otherwise flat background potential (disorder will be introduced later in the paper). This color convention will be followed throughout the paper: red meaning depleting perturbation (raised potential) and blue meaning accumulating perturbation (lowered potential).
Figs. 1b and c, corresponding to the simulated current density distribution in the potential landscapes of Fig. 1a, visually illustrate the impact of reversing the added potential experienced by electrons impinging on the T-junction. In Fig. 1b, the current injected through the left contact is favorably redirected towards the lateral branches of the device. In contrast, Fig. 1c reveals that current lines are focused on the hard-wall, which enhances reflection back to the entrance lead.
At this point it seems that current redirection might yield a strong signature in the device conductance $G$, which may look counter-intuitive at first sight: simulations indeed predict that a repulsive perturbation should increase $G$ while an attractive potential should degrade it. Furthermore, one can wonder how sensitive this peculiar focusing/defocusing behavior is with respect to the amplitude, spatial extension and location of the potential perturbation introduced in Fig. 1, as well as to the disorder in the background potential. The effect of all these parameters will be simulated in detail later in the paper, where transmission through the device - converted into conductance - will be computed. In addition, it is tempting to test these predictions by measuring the conductance of a real-world device.
We thus carved out a ring-like structure from an InGaAs/InAlAs heterostructure hosting a 2DEG. The device geometry shown in Fig. 2a is lithographically very comparable to the one simulated above (the layer structure is similar to the one described in Ref. \cite{liu2015formation}, except for the doped substrate). The 2DEG density and mobility can be tuned thanks to an applied electrostatic back-gate potential ($V_{BG}$). The following data were measured at the maximal accessible charge carrier density ($\sim 10^{16}~\mathrm{m^{-2}}$) and mobility ($\sim 10~\mathrm{m^{2}/Vs}$), corresponding to $V_{BG}=4~\mathrm{V}$. The Fermi energy is thus $E_{F}=55~\mathrm{meV}$ and the Fermi wavelength is $\lambda_F=25~\mathrm{nm}$. The four-contact conductance measurements were performed at a temperature $T=40$ mK using a standard lock-in technique with a polarization that remained comparable to $\frac{k_{B}T}{e}$. It is important to note one difference with the simulation results: since the conductance is measured using an alternating current, it is averaged over two different current signs, contrary to simulations where current flows only from one side to the other. The physical characteristics of the host heterostructure allowed the modeling of a fixed disorder potential, represented in Fig. 2b, that will be used in the forthcoming simulations.
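The quoted 2DEG parameters can be cross-checked: for a spin-degenerate 2DEG, $k_F=\sqrt{2\pi n}$ and $\lambda_F=2\pi/k_F=\sqrt{2\pi/n}$. The effective mass used below (0.044 $m_e$, typical for InGaAs) is an assumed value, not taken from the text.

```python
import math

# lambda_F from the sheet density (spin-degenerate 2DEG).
n = 1.0e16                         # m^-2, density quoted in the text
k_f = math.sqrt(2.0 * math.pi * n)
lambda_f = 2.0 * math.pi / k_f     # = sqrt(2*pi/n)
print(lambda_f * 1e9)              # ~25 nm, as quoted

# E_F = hbar^2 k_F^2 / (2 m*); m* = 0.044 m_e is an assumed, typical
# InGaAs value, chosen here only to show consistency with E_F ~ 55 meV.
HBAR, M_E, EV = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
E_f_meV = (HBAR * k_f) ** 2 / (2.0 * 0.044 * M_E) / EV * 1e3
print(E_f_meV)                     # ~54 meV, close to the quoted 55 meV
```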
\begin{figure}[h!]
\centering
\includegraphics[width=8 cm]{Figure_2.pdf}
\caption{\label{Figure_3}a) Scanning electron micrograph of the fabricated sample in an InGaAs/InAlAs heterostructure. b) Computed real-space disorder potential ($\varphi_{d}$) at the level of the 2DEG that will be used in the forthcoming simulations. The disorder standard deviation ($S_{d}$) is 4.78 meV, calculated taking into account a distribution of Si ionized dopants located 20 nm above the 2DEG (\textit{i.e.} thickness of the InAlAs spacer). The inset to b) shows a map of the autocorrelation as the correlation lag becomes a vector in the $x-y$ plane.}
\end{figure}
Experimentally, a convenient way to generate the kind of perturbation potential used in the simulations presented above is by approaching an electrically biased nanoscale tip ($V_{tip}$) at a distance $d_{tip}$ above the patterned quantum ring (as illustrated in Fig. 3a). The tip can then be scanned along the transport direction, i.e. along the dashed line in Fig. 2b. In order to achieve a large effect, we brought the tip to a distance $d_{tip}=60~\mathrm{nm}$ above the sample surface, and polarized the tip with large positive and negative voltages up to $|V_{tip}|=14~\mathrm{V}$.
\begin{figure}[h!]
\centering
\includegraphics[width=8 cm]{Figure_3.pdf}
\caption{\label{Figure_2}a) Illustration of the potential used for the simulations. It is composed of a disorder potential $\varphi_{d}$ and a Lorentzian-shaped perturbation potential $\varphi_{p}$ caused by a polarized conductive AFM tip located above the 2DEG. b) Simulated conductance profiles as $\mathit{\varphi}_{p}$ is swept along the dashed line in a). Simulation parameters are as follows (same conditions as in Fig. 1): the red profile corresponds to $\varphi_{p}^{max}$=$0.9~E_{F}$ (depletion) and $R_{p}$=$150~\mathrm{nm}$; the blue profile corresponds to a reversed perturbation potential (accumulation; $\varphi_{p}^{max}$=$-0.9~E_{F}$). These profiles are extracted from the conductance mapping obtained when $\varphi_{p}$ is swept in the $(x,y)$ plane. They are presented in c) ($\varphi_{p}>0$) and d) ($\varphi_{p}<0$). The vertical dashed lines correspond to the locations of the hard-walls along the scanned line in a).}
\end{figure}
The presence of the polarized conductive AFM tip is numerically modeled using a Lorentzian-shape perturbation potential $\mathit{\varphi_{p}}(x,y)$ - illustrated in Fig. 1a - parametrized by the position of its center ($x_{tip},y_{tip}$), height $\mathit{\varphi_{p}}^{max}$, and width $\mathit{R}_{p}$ which is half the potential FWHM.
The superposition of $\mathit{\varphi_{p}}(x,y)$ on the modeled disordered potential $\varphi_{d}$, together with the hard-wall boundaries that mark the edges of the nanodevice, define the potential landscape used in the simulations.
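A minimal sketch of such a perturbation is given below. The exact functional form used in our simulations is not spelled out in the text, so this 2D Lorentzian is one plausible reading, chosen to be consistent with $R_p$ being half the FWHM:

```python
# Lorentzian tip-perturbation potential, phi_p(r) = phi_max/(1 + (r/R_p)^2),
# centred on (x_tip, y_tip). At r = R_p the potential falls to phi_max/2,
# so R_p is half the FWHM, matching the parametrization described above.
def phi_p(x, y, x_tip, y_tip, phi_max, r_p):
    r2 = (x - x_tip) ** 2 + (y - y_tip) ** 2
    return phi_max / (1.0 + r2 / r_p ** 2)

phi_max, r_p = 0.9, 150.0   # units of E_F and nm, as in the simulations
print(phi_p(150.0, 0.0, 0.0, 0.0, phi_max, r_p))   # phi_max/2 at r = R_p
```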
In Fig. 3b, the conductance is computed as $\mathit{\varphi_{p}}$ moves along the axis joining the entrance and the exit contacts (dashed line in Figs. 2a and 3a). Remarkably, when the tip stands near the location of the hard-wall (vertical dashed lines), the conductance significantly deviates from its value in the absence of perturbation ($\sim 11~\times\frac{2e^{2}}{h}$), e.g. when the tip stands at the center of the device. Beyond fluctuations originating from the presence of the random disorder, the effect is symmetric, as positioning the tip near either T-junction gives the same result. In other words, the Lorentzian potential has a similar effect on conductance when it modifies either the entry or the exit conditions.
More importantly, this behavior is somewhat counter-intuitive: while a repulsive potential close to both T-junctions actually helps electrons to cross the overall structure (enhanced conductance), an attractive perturbation reduces their ability to pass through the device.
Looking further into the simulation results, we observe that reversing the sign of $\varphi_{p}$ essentially reverses the change in conductance. Surprisingly, the back-scattering to the leads, due to current focusing on the hard-wall potential of the antidot (described in Figs. 1a and c), is similar in amplitude to the enhanced transmission due to defocusing (Figs. 1a and b). On the other hand, we observe that the symmetry naturally breaks when the tip is located above the leads. In that case, while depleting the lead strongly reduces the conductance, accumulating electrons naturally has a much weaker effect. Finally, when moving the perturbation from the T-junction area towards the center of the device, the effect on $G$ vanishes over a distance corresponding roughly to $R_{p}$ (Fig. 3b).
Besides moving the tip along the device axis, one can also wonder how sensitive the $G$-variations are to the perturbation position in the $(x,y)$ plane. This aspect is examined in the $G$ maps plotted in Figs. 3c and d, obtained for locally raised or lowered moving potentials, respectively. The main contrast is observed over the T-junctions, as well as over the device leads for a depleting potential. In both the $x$ and $y$ directions, this contrast fades away over distances comparable to $R_{p}$. When the perturbation potential is positioned over the device arms and their vicinities, the $G$ map is decorated with fluctuations of short characteristic length scale, similar to those reported in previous works \cite{martins2007imaging,crook2003imaging,hackens2006imaging}. This weaker-amplitude contrast was attributed to the perturbation of resonant states in the local density of states (LDOS) by the moving potential \cite{martins2007imaging,crook2003imaging}, as well as to the electrostatic Aharonov-Bohm effect \cite{hackens2006imaging}. Note that here the mapping conditions are not suitable for imaging the LDOS because the moving potential is in the strong perturbation regime, not in the linear regime discussed in Ref. \onlinecite{martins2007imaging}. In this framework, we are not using the scanning gate for imaging purposes.
It is now time to compare these predictions with experimental results on the sample described above. Figure 4 summarizes the data in a way that eases the comparison with the simulations. We first scanned the biased tip along a line linking the device leads, for two opposite polarities. Figure 4a shows, like the simulations in Fig. 3b, that a depleting (red) potential near the border of the inner quantum dot eases electron injection, while an accumulating potential (blue) located at the same place tends to reduce electron transmission through the device.
\begin{figure}[h!]
\centering
\includegraphics[width=8 cm]{Figure_4.pdf}
\caption{\label{Figure_4}a) Experimental conductance profile as a voltage-biased tip is scanned along the dashed line presented in Fig. 2a. The tip is scanned at a distance of 60~nm from the sample surface with $V_{tip}$=$-14~\mathrm{V}$ (red curve) or $+8~\mathrm{V}$ (blue curve). A qualitative scenario is also illustrated for the peculiar electron forth-scattering (red) and back-scattering (blue). b) Conductance map as the polarized tip ($V_{tip}$=$-14~\mathrm{V}$ - depletion) is scanned in a plane at the same constant distance from the sample surface. c) Same map as the one presented in b) but with $V_{tip}=+8~\mathrm{V}$ (accumulation). Note that Fig. S4 presents the same data as c), with an enhanced contrast.}
\end{figure}
For a strongly depleting potential ($V_{tip}=-14~\mathrm{V}$ - red curve in Fig. 4a), corresponding roughly to $\varphi_{p}^{max}\sim 0.4~E_{F}$ (see Fig. S2), $G$ exhibits local maxima when the tip is located above the limit of the etched area in front of the entrance and exit leads (dashed lines in Fig. 4a). As expected, the conductance is increasingly reduced as the tip decreases the 2DEG density over the leads. But, counter-intuitively, a strongly accumulating potential ($V_{tip}=+8~\mathrm{V}$) brings $G$ to a minimum. Moreover, the effect is essentially symmetric when the tip moves from one T-branch to the other.
The qualitative match with the curves presented in Fig. 3b (obtained for $\varphi_{p}^{max}=\pm 0.9~E_{F}$) is striking, and the experimental conductance maps presented in Figs. 4b and c compare well with the simulations presented in Figs. 3c and d. We observe a remarkable coincidence of the simulated and experimental positions and lateral extensions of the peaks and dips located around the hard-walls in the T-junctions.
Resonant features along the ring circumference are also observed in all cases, but the smallest features visible in the simulated $G$ maps in Figs. 3c and d, in particular those with a concentric shape observed mostly outside the device area, are absent in the experimental data. This is most probably related to thermal averaging, which is not taken into account in the simulations.
At this stage, we can conclude that the experiments confirm, at least qualitatively, that focusing/defocusing can be induced by a Lorentzian perturbation combined with a hard-wall potential in a ballistic device. While defocusing (Fig. 4a, red) is clearly reminiscent of Rutherford scattering (here in 2D), focusing on the hard-wall induces a peculiar back-scattering mechanism, as the lensing is combined with the specular reflection illustrated in Fig. 4a (blue).
At first sight, the weaker absolute value of the voltage applied on the tip in accumulation (blue in Fig. 4a) could explain why the effect on the conductance is weaker than in depletion (red in Fig. 4a). However, we need to dig deeper into the simulations to test the quantitative correspondence between experiments and predictions.
Figure 5 shows the evolution of the conductance when $\varphi_{p}$ travels along the axis of the quantum ring, and when either $\varphi_{p}^{max}$ or $R_{p}$ is varied, the other parameters remaining constant (a similar map with a varying disorder amplitude $S_{d}$ is shown in the supplementary materials - Fig. S1). We focus our attention on the two regions near the edge of the inner QR, i.e. $x\sim\pm 800$ nm (dashed lines in Figs. 5a and b). We first observe no obvious threshold when $|\varphi_{p}^{max}|$ increases (Fig. 5a): $G$ undergoes a smooth evolution at least up to $2~E_{F}$. However, on the depletion side ($\varphi_{p}>0$), the positions of the local $G$ maxima gradually shift towards the center of the device as $\varphi_{p}^{max}$ is made more positive. This reflects the fact that roughly identical potential perturbation conditions are found in the T-junctions both for a weakly perturbing potential ($\varphi_{p}^{max}<E_{F}$) centered close to the hard-wall, and for a strongly perturbing potential ($\varphi_{p}^{max}>E_{F}$) centered further away from the hard-wall. On the accumulation side ($\varphi_{p}<0$), the positions of the dips' centers remain essentially unaffected: charge accumulation in the T-junctions does not modify their effective geometry.
\begin{figure}[h!]
\centering
\includegraphics[width=8 cm]{Figure_5.pdf}
\caption{\label{Figure_3} Simulated conductance profiles as the potential perturbation is swept along the black dashed line in Fig. 2a for several values of: (a) $\varphi_{p}^{max}$ with $R_{p}$=$150~\mathrm{nm}$ and disorder strength $S_{d}$=$4.78~\mathrm{meV}$, (b) $R_{p}$ with $\varphi_{p}^{max}$=$-0.9~E_{F}$ and $S_{d}$=$4.78~\mathrm{meV}$, (c) $R_{p}$ with $\varphi_{p}^{max}$=$0.9~E_{F}$ and $S_{d}$=$4.78~\mathrm{meV}$. The vertical dashed lines correspond to the locations of the hard-walls along the scanned line.}
\end{figure}
Varying $R_{p}$ has an interesting effect on the conductance peaks and dips. Beyond a few tens of nm, and up to 200 nm, where the arms themselves start to be narrowed, varying $R_{p}$ has essentially no effect on the amplitude of the conductance extrema, either for negative (Fig. 5b) or positive (Fig. 5c) perturbation potentials. Indeed, the amplitude of the conductance peaks and dips saturates for $R_{p}\geq\lambda_F=25$ nm, i.e. in the classical regime (Fig. S3).
On the other hand, the evolution of the width of the conductance extrema (Figs. 5b and c) is smoother and allows us to determine the value of $R_{p}^{exp}$ that characterizes our experimental configuration. Based on the FWHM of the strongest (red) conductance peaks in Fig. 4a, we obtain $R_{p}^{exp}\sim 135~\mathrm{nm}$. This value is well within the range investigated in the simulations and indeed consistent with the data discussed in the supplementary information.
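The FWHM extraction used in such an estimate can be sketched as follows. This is an illustrative helper (zero baseline assumed), not the actual analysis code, which proceeds by comparison with simulated profiles.

```python
import numpy as np

def peak_fwhm(x, y):
    """FWHM of the dominant peak in y(x), by linear interpolation of the
    half-maximum crossings on either side of the peak (zero baseline)."""
    i0 = int(np.argmax(y))
    half = y[i0] / 2.0
    left = i0
    while left > 0 and y[left] > half:
        left -= 1
    right = i0
    while right < len(y) - 1 and y[right] > half:
        right += 1
    # linear interpolation at the two crossings (np.interp needs ascending xp)
    x_left = np.interp(half, [y[left], y[left + 1]], [x[left], x[left + 1]])
    x_right = np.interp(half, [y[right], y[right - 1]], [x[right], x[right - 1]])
    return x_right - x_left

# Sanity check on a Lorentzian of known width: FWHM = 2 x 150 nm
x = np.linspace(-1000.0, 1000.0, 2001)
y = 1.0 / (1.0 + (x / 150.0) ** 2)
width = peak_fwhm(x, y)   # ~300 nm
```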
Finally, our results show that increasing the disorder dampens the effect, but no qualitative change is observed even when multiplying the initial disorder (Fig. 2b) by a factor of four (see supplementary materials, Fig. S1). This robustness is a clear signature that distinguishes the present effect from universal conductance fluctuations \cite{lee1985universal}, which are sensitive even to a change of the potential amplitude on a single tight-binding site.
To go beyond the good qualitative correspondence between Figs. 3 and 4, we now need to question the experimental data more quantitatively. This is the purpose of Fig. 6, which addresses the effect of the density, or Fermi energy, and finally provides a quantitative comparison between experiments and simulations.
\begin{figure}[h!]
\centering
\includegraphics[width=8 cm]{Figure_6_v3.pdf}
\caption{\label{Figure_6}a) and b) Relative variation of conductance ($\Delta G/G_{0}$) with respect to that at $x=0$. c) Relative conductance variation - averaged over the range 110 meV $<E_{F}<$ 11 meV - at $x=800~\mathrm{nm}$ (dark red/blue for depletion/accumulation), and at $x=-800~\mathrm{nm}$ (pale red/blue for depletion/accumulation). The two experimental data points are indicated in c) in the form of two vertical bars.
}
\end{figure}
The variation of the conductance with $E_{F}$, while keeping the absolute value of the ratio $\frac{\varphi_{p}^{max}}{E_{F}}$ fixed at 0.9, is presented in Figs. 6a and b. Since $G$ increases with $E_{F}$, it makes sense to examine the relative change of conductance $\Delta G/G_{0}=(G-G_{0})/G_{0}$, where $G_0$ is the conductance of the device when the tip is above the device center ($x=0$). It is immediately apparent that $\Delta G/G_0$ is insensitive to $E_{F}$. In other words, the efficiencies of both focusing and defocusing are not sensitive to $E_{F}$ alone but, as Fig. 6c clearly reveals, to the ratio $\frac{\varphi_{p}^{max}}{E_{F}}$.
More precisely, Fig. 6c shows a linear dependence of $\Delta G/G_0$ as a function of $\frac{\varphi_{p}^{max}}{E_{F}}$ up to $\frac{\varphi_{p}^{max}}{E_{F}}\simeq 1$ in the depletion regime. For $\frac{\varphi_{p}^{max}}{E_{F}}\geq 1$, the symmetry of defocusing with respect to entry and exit breaks down, and defocusing becomes less efficient as the arms themselves start to shrink. No such deviation from either linearity or symmetry is observed in the case of accumulation (blue lines in Fig. 6c): the counter-intuitive entry/exit symmetry persists over the whole range investigated, and the linearity with respect to $\frac{\varphi_{p}^{max}}{E_{F}}$ is preserved.
How can we understand this linear dependence, at least in the depletion regime? The defocusing of ballistic electrons facing a Lorentzian-shaped repulsive potential is clearly reminiscent of Rutherford scattering. The original Rutherford formalism provides an expression for the differential cross-section in three dimensions (3D) for a scattering potential $\frac{C}{r}$ - where $C$ is the amplitude and $r$ the distance from the scattering center - as a function of the energy of the incident particles $E$ and of the scattering angle $\theta$. Since the arms of the quantum ring capture electrons from the leads within a finite angular range, one can consider the differential cross-section at a given angle as related to the conductance of our ballistic device.
Remarkably, in the 3D case, the Rutherford formula is the same whether the particles are treated classically or quantum mechanically \cite{friedrich2013scattering}. In 2D, this elegant result is no longer valid in general. In the 2D quantum regime, one has to find an analytical expression for the differential cross-section ($\frac{d\lambda}{d\theta}$) by solving the 2D version of the Lippmann-Schwinger equation \cite{lippmann1950variational} with a Lorentzian-shaped potential distribution, which is far beyond the scope of the present work.
In the 2D classical regime, however, an equivalent formula was derived \cite{barton1983rutherford,friedrich2013scattering}. For the same $\frac{C}{r}$ potential:
\begin{equation}
\frac{d\lambda}{d\theta}=\frac{|C|}{4E\sin^{2}(\theta/2)}
\label{equ_1}
\end{equation}
One readily finds from Equ. (1) that, for a given angle $\theta$, the scattering amplitude is fully determined by the ratio between the amplitude $C$ of the perturbative potential and the energy of the particles. In the case of our device, this ratio corresponds to $\frac{\varphi_{p}^{max}}{E_{F}}$. The linear response of $\Delta G/G_0$ to changes in $\frac{\varphi_{p}^{max}}{E_{F}}$ revealed in Fig. 6c is thus reminiscent of 2D Rutherford scattering in the classical regime (note that Equ. (1) is also valid in the accumulation regime, but in our QR geometry, specular reflection on the hard-wall must also be taken into account). Although Equ. (1) is probably not strictly applicable to our Lorentzian-shaped potential, the Rutherford analogy helps visualize the observed ballistic defocusing.
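A short numerical check of Equ. (1) makes the point explicit: at fixed $\theta$, scaling $C$ and $E$ by the same factor leaves the cross-section unchanged. The function name is ours.

```python
import numpy as np

def rutherford_2d(theta, C, E):
    """Classical 2D Rutherford differential cross-section d(lambda)/d(theta)
    for a C/r potential, Equ. (1)."""
    return abs(C) / (4.0 * E * np.sin(theta / 2.0) ** 2)

theta = np.pi / 3
a = rutherford_2d(theta, 1.0, 2.0)
b = rutherford_2d(theta, 5.0, 10.0)   # same ratio C/E -> same cross-section
```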
We finally turn to what is probably the most important information presented in Fig. 6c: the quantitative comparison between experiments and simulations. To reach that point, we first need to evaluate the amplitude of the perturbation potential induced by the tip. A direct view of the shape of the tip-induced potential experienced by electrons inside the device is obtained by mapping the conductance of a narrow channel in a similar device (whose width is comparable to that of the leads of the device) close to pinch-off as a function of the electron density, with the tip scanning along a line perpendicular to the channel axis (see supplementary materials, Fig. S2 - this second device is located on the same sample). Following this procedure, we determined that $\varphi_{p}^{max}=3.9~\mathrm{meV}$ for $V_{tip}=-4$ V and $d_{tip}=80~\mathrm{nm}$, and scaled this value taking into account the parameters used in Fig. 3a. Knowing the values of $\varphi_{p}^{max}$ for both the depletion and accumulation data in Fig. 4a, we were able to plot the experimental $\Delta G/G_0$ vs $\frac{\varphi_{p}^{max}}{E_{F}}$ in Fig. 6c. The good agreement between experiments and simulations reveals the global consistency of our study and shows that it is indeed possible to strongly enhance or reduce the injection of ballistic electrons into a device by tuning the shape of the potential faced by the electrons.
It also means that the simple tight-binding model used here captures the essential physics of the phenomenon. In the experiment, a conductance change of up to $\sim$10\% relative to the unperturbed device conductance is observed, which is relatively large compared to, e.g., coherent effects at this temperature (40 mK). The phenomenon also appears particularly robust with respect to disorder. This may seem surprising at first sight if its origin is a "ballistic redirecting effect" induced by the tip potential. However, high-contrast magnetic focusing effects were observed in semiconductor heterostructures with comparable or lower mobilities \cite{hackens2002long}. This common robustness further reinforces the idea that ballistic focusing is at the heart of the observed phenomenon.
\section{Conclusion}
In conclusion, we have evidenced surprising ballistic electron focusing and defocusing behaviors governed by a local electrostatic potential. The phenomenology is similar to 2D Rutherford scattering assuming classical electron dynamics. The applicability of this relatively simple classical formalism to a 2DEG-based device was not expected. Indeed, the scattering amplitude for the interaction between charged particles and a sharp electrostatic potential should in principle be governed by complex interactions related to the presence of the many-particle background of the Fermi sea \cite{Saraga2005}. Other unexpected results of this work reside in two symmetries. The first symmetry concerns the effect of the scattering potential with respect to incoming and outgoing electrons in the T-junctions (i.e. the left-right symmetry in the simulated results). While it is quite straightforward to understand the focusing or defocusing effect of a locally accumulating or depleting potential for incoming electrons, one could not anticipate that a similar effect would be visible for outgoing electrons (i.e. electrons not impinging on the hard wall close to normal incidence), in particular in the case of an accumulating potential. A second unexpected symmetry was revealed between the amplitude of the Rutherford defocusing effect (when a depleting potential is applied) and that of reflective focusing, as experienced by electrons scattered by an accumulating potential in front of a hard wall. All these puzzling fundamental questions will require additional scrutiny and will probably foster further experimental and theoretical work.
In a broader context, our observations help in the understanding of charge carrier injection in ballistic devices, as they show that fine-tuning the potential in the vicinity of the entrance and exit leads can have huge effects on transmission through the whole device. In turn, this work provides useful tools in the perspective of building 'electron optics' devices, where a local modulation of the electrostatic potential inside a device redirects the electron flow in a similar way as an optical lens bends light rays \cite{boggild2017two}. In this framework, scanning gate microscopy can play an important role, as pointed out in various theoretical proposals where scattering is investigated by tuning the electrostatic potential at the local scale using a charged metallic tip \cite{Saraga2005,Braun2008,Cserti2007}.
Although the description of scattering in two spatial dimensions was considered a curiosity up to the early eighties \cite{barton1983rutherford}, nowadays high-mobility two-dimensional charge systems give this fundamental question full relevance, and the possibility of testing this description, even with relativistic Dirac particles \cite{wu2014scattering,russo2008observation,cabosart2014imaging,cabosart2017recurrent}, opens new directions of research.
\section*{Acknowledgements}
This work was funded by the Fonds de la Recherche Scientifique FRS-FNRS (Grants No. J.0067.13, T.0172.13, 326 U.N025.14, J.0009.16, and 2450312F) and by the Communaut\'e Fran\c{c}aise de Belgique (ARC Grant No. 11/16-037, Stresstronics Project and ARC Grant No. 16/21-077, NATURIST Project). S.T. is funded by a Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture FRIA fellowship. B.H. is FRS-FNRS research associate. Computational resources have been provided by the supercomputing facilities of the Universit\'e catholique de Louvain (CISM/UCL) and the Consortium des Equipements de Calcul Intensif en F\'ed\'eration Wallonie Bruxelles (CECI) funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS). S. T. addresses a special thank to D. Fran\c{c}ois for his valuable help concerning parallel computing.
\section{Introduction}
Nonequilibrium transport in spin-degenerate superconductors has been investigated intensely in the 1970s and 80s \cite{langenberg1986}. In the spin-degenerate case, the nonequilibrium distribution function is characterized by the two-fold particle-hole degree of freedom, described by a ``longitudinal'' energy and a ``transverse'' charge mode. Quasiparticles are coupled to the superconducting condensate, and one of the most striking implications is a conversion between energy and charge modes induced by a supercurrent \cite{schmid1975,schmid1979,pethick1979b,clarke1980}. The energy-charge conversion can be understood in terms of the Doppler shift of the quasiparticle spectrum due to the superfluid velocity. Experimentally, the conversion was observed by applying a temperature gradient and a supercurrent simultaneously to a superconducting wire \cite{clarke1979,fjordboge1981,heidel1981}.
Recently, the field of nonequilibrium superconductivity has been reinvigorated by the investigation of spin-polarized quasiparticle transport \cite{johnson1994,poli2008,yang2010,wakamura2015,beckmann2016}. The two-fold spin degree of freedom leads to additional spin and spin-energy nonequilibrium modes \cite{morten2004}.
Part of the motivation for these investigations comes from the idea of using spin to implement electronic functionality in the context of superconducting spintronics \cite{linder2015,eschrig2015}, either via spin-polarized supercurrents or nonequilibrium quasiparticles. For example, spin-polarized quasiparticles can control spin-polarized supercurrents \cite{bobkova2010,bobkov2011}, and supercurrents can control spin-polarized distributions \cite{amundsen2020}.
In addition to spin-dependent distribution functions, thin superconducting films in high magnetic fields have spin-dependent spectral properties \cite{meservey1970}. The spin splitting of the density of states leads to long-range spin transport \cite{huebler2012b,quay2013,silaev2015,krishtop2015,bobkova2015a,beckmann2016,bergeret2018,heikkila2019,kuzmanovic2020} and large spin-dependent thermoelectric effects \cite{machon2013,ozaeta2014,kolenda2016,heidrich2019}. Recently, it has been predicted that the spin-dependence of the spectral supercurrent creates an additional coupling term between supercurrent and quasiparticles in high-field superconductors, leading to conversion between spin-degenerate and spin-polarized nonequilibrium modes \cite{aikebaier2018}.
Here, we report the experimental observation of this additional coupling term via the conversion of energy nonequilibrium to charge and spin-energy modes in high-field superconducting aluminum wires.
\section{Experiment}
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.png}
\caption{False-color scanning electron microscopy image of sample B. All samples consist of an aluminum strip with several copper wires attached by tunnel contacts. Contact $\alpha$ at the center of the strip is used as the injector, the remaining ones as detectors. The ends of the aluminum strip are split so that a supercurrent $I_\mathrm{S}$ can be applied without affecting the conductance measurement. The image is shortened; the total length of the aluminum strip between the splits is about \SI{24}{\micro\meter}.}
\label{fig:sampleSEM}
\end{figure}
Nonequilibrium quasiparticle transport was investigated in two samples of similar design (labeled A and B). Figure \ref{fig:sampleSEM} shows a false-color scanning electron microscopy (SEM) image of sample B along with the measurement scheme. All samples were fabricated by electron beam lithography and shadow evaporation. The substrates are pieces of silicon wafer with a \SI{1}{\micro\meter} silicon oxide layer. The structures consist of a long (24 - \SI{50}{\micro\meter}), thin (12 - \SI{17}{\nano\meter}) aluminum strip with split end sections and tunnel contacts with aluminum oxide barriers and copper electrodes. The normal-state tunnel resistances are in the range of 1.5 - \SI{4}{\kilo\ohm}.
The tunnel contacts are arranged with one in the center of the strip ($\alpha$) for injection and the others to one side as detectors. The small copper artifact left in the split region is separated from the superconductor by the same aluminum oxide barrier as the contacts, and therefore does not affect the measurement.
The measurement setup consists of the local circuit ($I_\mathrm{inj}$), the nonlocal circuit ($I_\mathrm{det}$), and the supercurrent circuit ($I_\mathrm{S}$). The local (injector) circuit was used to measure the tunnel conductance spectra via low-frequency lock-in detection, with a small ac excitation $V_\mathrm{ex}$ superimposed on a dc voltage $V_\mathrm{bias}$. The local conductance was measured in a three-point configuration due to limitations of the cryostat wiring, and the effect of the injector lead resistance was corrected during data analysis. The nonlocal current $I_\mathrm{det}$ due to nonequilibrium quasiparticle injection was measured simultaneously with the local conductance. In addition, a supercurrent $I_\mathrm{S}$ could be passed through the wire using the split end sections without disturbing the ac conductance measurements.
Similar results were obtained on both samples. All data shown here were taken on sample A, except where otherwise noted.
The measurements were performed with excitation voltage rms amplitudes of about \SI{15}{\micro\volt} (sample A) and \SI{6}{\micro\volt} (sample B), at \SI{100}{\milli\kelvin} on sample A and at \SI{20}{\milli\kelvin} on sample B unless stated otherwise. A magnetic field could be applied in plane along the direction of the copper electrodes.
\section{Theory}
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.png}
\caption{Overview of the model. (a) Sketch of the sample geometry. A one-dimensional superconducting wire is placed between two equilibrium reservoirs. Quasiparticles are injected via a tunnel junction at $x=0$, and detected via a second junction at $x=x_\mathrm{det}$. In addition, a supercurrent $I_\mathrm{S}$ can be passed through the wire. (b) Nonequilibrium modes without supercurrent. The $f_\mathrm{L}$ mode falls linearly, all other modes decay rapidly. (c) Nonequilibrium modes with supercurrent. Coupling to the supercurrent generates $f_\mathrm{T}$ and $f_\mathrm{L3}$ proportional to the constant gradient of $f_\mathrm{L}$. (d) Nonlocal conductance contributions. Without supercurrent, the injected $f_\mathrm{T}$ mode creates a symmetric contribution (dotted line). The supercurrent coupling terms $j_\mathrm{E}$ and $j_\mathrm{Es}$ generate antisymmetric contributions. These contributions have equal/opposite sign in the lower/upper Zeeman band.}
\label{fig:sampleModel}
\end{figure}
In this section, we give a simplified description of the theory of our experiment, focusing on the features relevant to understanding the experimental results. The full model used for the numerical simulations is given in the appendix.
We are mostly interested in the behavior at high magnetic fields, where the quasiparticle energies acquire a Zeeman splitting $2\mu_\mathrm{B}B$. As a consequence, all spectral properties are spin-dependent, and can be conveniently decomposed into a spin-symmetric and spin-antisymmetric part. For example, the spin-resolved density of states $N_\downarrow(E)$ and $N_\uparrow(E)$ can be decomposed into $N_\pm(E)=(N_\downarrow\pm N_\uparrow)/2$, where $E$ is the quasiparticle energy.
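As a minimal illustration of this decomposition (not the full Usadel treatment used in the appendix), one can take an ideal BCS density of states rigidly Zeeman-shifted by the field. All names and the spin sign convention below are ours.

```python
import numpy as np

def bcs_dos(E, Delta):
    """Normalized BCS density of states (zero depairing/broadening)."""
    E = np.asarray(E, dtype=complex)
    return np.abs(np.real(E / np.sqrt(E ** 2 - Delta ** 2)))

def spin_decomposed_dos(E, Delta, h):
    """Spin-resolved DOS of a Zeeman-split superconductor and its
    decomposition N_pm = (N_down pm N_up)/2, for Zeeman energy h = mu_B B
    (sign convention chosen for illustration)."""
    N_up = bcs_dos(E + h, Delta)
    N_down = bcs_dos(E - h, Delta)
    return (N_down + N_up) / 2.0, (N_down - N_up) / 2.0
```

At $h=0$ the antisymmetric part $N_-$ vanishes identically, and $N_+$ reduces to the usual BCS density of states.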
Figure \ref{fig:sampleModel}(a) shows a sketch of the model geometry. The sample is modeled as a quasi-onedimensional wire along the $x$-axis, terminated at both ends by equilibrium reservoirs. An injector junction is placed at the center ($x=0$), and a detector junction is placed at $x=x_\mathrm{det}$. The total length of the wire is $2L$. The magnetic field $B$ is applied in-plane, perpendicular to the wire.
The nonequilibrium state of a spin-split dirty-limit superconductor can be described by four distribution functions, the nonequilibrium modes $f_\mathrm{L}$, $f_\mathrm{T3}$, $f_\mathrm{T}$ and $f_\mathrm{L3}$. They describe energy, spin, charge, and spin-energy imbalance of the quasiparticle excitations, respectively \cite{morten2004,silaev2015,bobkova2015a}. Nonequilibrium in our experiment is driven by tunnel injection. Figure \ref{fig:sampleModel}(b) shows the qualitative behavior of the four nonequilibrium modes without applied supercurrent. The charge and spin-dependent modes $f_\mathrm{T}$, $f_\mathrm{T3}$ and $f_\mathrm{L3}$ decay relatively fast due to charge relaxation or spin flips. The charge relaxation length is a few $\mathrm{\mu m}$ at zero field in our structures, but drops very quickly upon increasing the magnetic field \cite{huebler2010,huebler2012b,wolf2013} due to orbital depairing. The spin relaxation length is typically a few hundred nm in our structures \cite{huebler2012b,wolf2014c}, smaller than the contact spacing of the present experiment. The $f_\mathrm{L}$ mode relaxes only via inelastic scattering, which is weak at the low temperatures of our experiment. The electron-phonon relaxation length is typically a few hundred $\mathrm{\mu m}$ in metal wires at temperatures far below $1~\mathrm{K}$ \cite{giazotto2006}, much larger than the length of our wires. Electron-electron scattering does not relax energy, but leads to a thermalization of the nonequilibrium distribution. Previous comparisons of theory to similar experiments on high-field superconductors have shown that neglecting inelastic scattering is a reasonable assumption \cite{silaev2015,heidrich2019}, with at most small deviations due to thermalization by electron-electron scattering \cite{heidrich2019}. We therefore neglect inelastic scattering in the model.
Due to the weak relaxation, $f_\mathrm{L}$ is the dominant mode created by tunnel injection. Neglecting all other modes, $f_\mathrm{L}$ is given by
\begin{equation}
f_\mathrm{L}(x)=G_\mathrm{inj}R\frac{N_+f_\mathrm{L}^\mathrm{inj}(V_\mathrm{inj})}{D_\mathrm{L}+G_\mathrm{inj}RN_+}\left(1-\frac{x}{L}\right),
\label{eqn:fL}
\end{equation}
where $G_\mathrm{inj}$ is the normal-state injector conductance, $R$ is the normal-state resistance of the left and right branches of the superconducting wire in parallel, $D_\mathrm{L}$ is the spectral diffusion constant of the $f_\mathrm{L}$ mode, and $f_\mathrm{L}^\mathrm{inj}(V_\mathrm{inj})$ is the injector distribution function. $f_\mathrm{L}$ falls linearly towards the ends of the aluminum strip. Note that Fig.~\ref{fig:sampleModel}(b) is not to scale, but only illustrates the spatial dependence qualitatively. In fact, $f_\mathrm{L}$ is orders of magnitude larger than the other modes.
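Eq.~(\ref{eqn:fL}) is straightforward to evaluate numerically. The sketch below (our helper names) extends it symmetrically to both branches via $|x|$, an assumption consistent with the injector sitting at $x=0$ between two identical reservoirs.

```python
import numpy as np

def f_L(x, L, G_inj, R, N_plus, D_L, f_L_inj):
    """Energy-mode distribution from the injection balance: a bias- and
    energy-dependent amplitude times a linear decay towards x = +/- L."""
    amp = G_inj * R * N_plus * f_L_inj / (D_L + G_inj * R * N_plus)
    return amp * (1.0 - np.abs(x) / L)

# Example: with G_inj*R*N_plus = D_L, the amplitude at the injector is f_L_inj/2
x = np.linspace(-1.0, 1.0, 5)
profile = f_L(x, L=1.0, G_inj=1.0, R=1.0, N_plus=1.0, D_L=1.0, f_L_inj=1.0)
```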
When a supercurrent is applied to the wire, transport of all nonequilibrium modes is coupled. Far from the injector, the part of the kinetic equation relevant for our experiment is
\begin{equation}
\begin{pmatrix}
R_\mathrm{T} & R_\mathrm{L3}\\
R_\mathrm{L3} & R_\mathrm{T}+S_\mathrm{L3}
\end{pmatrix}
\begin{pmatrix}f_\mathrm{T}\\ f_\mathrm{L3}\end{pmatrix}
=
\begin{pmatrix} j_\mathrm{E}\nabla\phi\\ j_\mathrm{Es}\nabla\phi\end{pmatrix} \nabla f_\mathrm{L}.
\end{equation}
$j_\mathrm{E}$ and $j_\mathrm{Es}$ are the spin-symmetric and antisymmetric parts of the spectral supercurrent, and $\nabla\phi$ is the superconducting phase gradient. $R_\mathrm{T}$ and $R_\mathrm{L3}$ describe charge relaxation, and $S_\mathrm{L3}$ is the spin relaxation rate. The historical experiments correspond to $B=0$, where $j_\mathrm{Es}$ and $R_\mathrm{L3}$ are zero and only $f_\mathrm{T}$ is generated. Thus, we arrive at the qualitative picture that tunnel injection drives $f_\mathrm{L}$, and the gradient of $f_\mathrm{L}$ in combination with the supercurrent then generates $f_\mathrm{T}$ and $f_\mathrm{L3}$ along the wire. The generation is balanced by charge and spin relaxation. The qualitative behavior is shown in Fig.~\ref{fig:sampleModel}(c). For the constant $\nabla f_\mathrm{L}$ following from the linear decay in Eq.~(\ref{eqn:fL}), the generated modes are independent of position. We would also like to note that the generated modes are proportional to $\nabla \phi$, and therefore odd functions of the supercurrent. This property will be used later in the data analysis.
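At a fixed energy, the balance above is just a 2x2 linear system. The sketch below (our names) solves it and makes two stated properties explicit: the generated modes are odd in $\nabla\phi$, and at $B=0$ (where $j_\mathrm{Es}=R_\mathrm{L3}=0$) only $f_\mathrm{T}$ is generated.

```python
import numpy as np

def generated_modes(R_T, R_L3, S_L3, j_E, j_Es, grad_phi, grad_fL):
    """Solve the 2x2 kinetic balance for the supercurrent-generated charge
    (f_T) and spin-energy (f_L3) modes at a single energy."""
    M = np.array([[R_T, R_L3],
                  [R_L3, R_T + S_L3]])
    rhs = np.array([j_E, j_Es]) * grad_phi * grad_fL
    f_T, f_L3 = np.linalg.solve(M, rhs)
    return f_T, f_L3

# Flipping grad_phi (the supercurrent direction) flips the sign of both modes.
```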
The nonlocal differential conductance $g_\mathrm{nl}=dI_\mathrm{det}/dV_\mathrm{inj}$ depends on the $f_\mathrm{T}$ and $f_\mathrm{L3}$ modes at the detector contact via
\begin{equation}
I_{\mathrm{det}}=-\frac{G_\mathrm{det}}{2e}\int^{\infty}_{-\infty}dE(N_{+}f_\mathrm{T}+N_{-}f_\mathrm{L3}).
\end{equation}
Here, $G_\mathrm{det}$ is the normal-state detector conductance, and $e$ is the elementary charge. $f_\mathrm{L}$ and $f_\mathrm{T3}$ do not contribute since we use spin-degenerate junctions.
Figure \ref{fig:sampleModel}(d) qualitatively shows the different contributions to the nonlocal conductance. Without supercurrent, the signal comes mainly from the injected $f_\mathrm{T}$ mode (and a small $f_\mathrm{L3}$ contribution). The injected contribution is even in bias, and decays quickly as a function of contact distance and increasing magnetic field. $j_\mathrm{E}$ and $j_\mathrm{Es}$ generate contributions which are odd in bias. These contributions have the same sign in the lower Zeeman band, but opposite signs at higher energy. In the remainder of the paper, we will refer to the even and odd contributions as the ``injected'' and ``generated'' signals.
\section{Results}
\begin{figure}
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{Sample characterization: (a) Differential conductance $g$ of one of the tunnel contacts as a function of bias voltage $V_\mathrm{bias}$ for different in-plane magnetic fields $B$. (b) Phase diagram of the superconducting wire as a function of magnetic field $B$ and temperature $T$. Symbols are obtained by measuring the resistive transition, the line is a fit explained in the text.}
\label{fig:characterization}
\end{figure}
The sample parameters were obtained from the characterization measurements shown in Fig. \ref{fig:characterization}. Figure \ref{fig:characterization}(a) shows the differential conductance of the injector contact. The spectra were fitted to the standard model of the tunnel conductance, with an additional series resistance to account for the three-probe measurement.
The resistance of the superconducting wire was measured in a four-probe geometry using the split ends to obtain the residual resistance $R_\mathrm{4K}$ at liquid helium temperature. The critical temperature $T_\mathrm{c}$ and critical field $B_\mathrm{c}(T)$ shown in Fig. \ref{fig:characterization}(b) were then measured by sweeping the temperature or magnetic field and taking the mid-point of the resistive transition. The pair potential was calculated using the BCS relation $\Delta_0=1.764k_\mathrm{B}T_\mathrm{c}$, where $k_\mathrm{B}$ is the Boltzmann constant. The width and length of the aluminum strip as well as the detector distances were extracted from the SEM images. The nominal thickness of the aluminum wire obtained from a quartz microbalance during fabrication does not reflect the metallic cross-section relevant for the transport properties due to the unknown oxide thickness and surface roughness. Since the orbital depairing rate, and therefore the critical field, depends strongly on film thickness, we have instead determined the effective thickness by fitting the temperature dependence of the critical field (see appendix for details). The sample parameters extracted from the characterization measurements are summarized in Table \ref{tab:parameters} in the appendix.
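As a quick cross-check, the weak-coupling BCS relation with the measured $T_\mathrm{c}$ reproduces the pair potential listed in Table~\ref{tab:parameters}:

```python
k_B = 8.617e-5                  # Boltzmann constant in eV/K
T_c = 1.50                      # measured critical temperature in K
Delta_0 = 1.764 * k_B * T_c     # weak-coupling BCS gap
print(round(Delta_0 * 1e6))     # -> 228 (ueV)
```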
All input parameters for the numerical simulations of the nonlocal conductance are thus obtained from these characterization measurements, together with the applied supercurrent.
\begin{figure}
\includegraphics[width=\columnwidth]{fig4.eps}
\caption{Overview of the nonlocal differential conductance $g_\mathrm{nl}$ as a function of bias voltage. Data taken on sample A. Signals in (c) and (d) are offset vertically for better visibility. (a) $g_\mathrm{nl}$ for different detector distances without supercurrent. The signal is caused by injected charge imbalance and decays with detector distance. (b) $g_\mathrm{nl}$ for different supercurrents at fixed distance and magnetic field. (c) $g_\mathrm{nl}$ for different detector distances. The signal from injected charge imbalance decays with distance, the supercurrent-induced part is independent of distance. (d) $g_\mathrm{nl}$ for different magnetic fields.}
\label{fig:overviewRaw}
\end{figure}
Figure \ref{fig:overviewRaw} is an overview of the nonlocal differential conductance $g_\mathrm{nl}$ measured on sample A (symbols) and the corresponding numerical simulations (lines). Measurements were performed for $B=0$ to $0.8~\mathrm{T}$, below the field where the energy gap closes, and for supercurrents up to about half of the theoretical critical current of the samples, above which superconductivity starts to collapse due to quasiparticle injection and noise of the applied current.
Figure \ref{fig:overviewRaw}(a) shows the signal as a function of bias voltage without supercurrent for different detectors at fixed magnetic field. The signal is even in bias, and falls with increasing detector distance as the injected charge imbalance relaxes. Figure \ref{fig:overviewRaw}(b) shows the effect of supercurrent on the signal. Above the gap an additional contribution appears, which is odd in both bias and supercurrent. At higher bias, the additional signal disappears. Figure \ref{fig:overviewRaw}(c) shows the nonlocal conductance for different detectors with applied supercurrent, corresponding to the data shown in Fig.~\ref{fig:overviewRaw}(a). As in Fig.~\ref{fig:overviewRaw}(a), the even contribution from injected charge imbalance falls with detector distance while the odd contribution generated by the supercurrent is nearly independent of distance, indicating continuous creation of imbalance along the superconducting strip. Figure \ref{fig:overviewRaw}(d) shows the evolution of the signal with increasing magnetic field for a fixed detector distance. At zero field, the odd contribution consists of relatively sharp peaks at the gap. With increasing field, the signal broadens and develops a Zeeman splitting (better resolved in the upper plot of Fig.~\ref{fig:overviewRaw}(c), where the data for $B=0.8~\mathrm{T}$ are shown on a different scale). Both the even and odd contributions decrease with increasing field due to the increased charge relaxation rate.
\begin{figure}
\includegraphics[width=\columnwidth]{fig5.eps}
\caption{Overview of the antisymmetric part $g_\mathrm{a}=(g_\mathrm{nl}(I_\mathrm{S})-g_\mathrm{nl}(-I_\mathrm{S}))/2$ of the nonlocal conductance caused by quasiparticle supercurrent coupling. The signals are offset vertically for better visibility. (a) and (b) The signal at different detector distances. The signal is nearly independent of distance, indicating a continuous generation along the aluminum strip. (c) and (d) dependence of the signal on the applied supercurrent.}
\label{fig:overviewAnalysis}
\end{figure}
To further analyze the supercurrent coupling, we use this symmetry to extract the supercurrent-induced part of the signal, {\em i.e.}, we calculate the antisymmetric part of the conductance $g_\mathrm{a}=(g_\mathrm{nl}(I_\mathrm{S})-g_\mathrm{nl}(-I_\mathrm{S}))/2$. We also normalize the signal by $G_\mathrm{det}$ to eliminate small variations of the detector conductances. Figure \ref{fig:overviewAnalysis} is an overview of the measured and simulated $g_\mathrm{a}$.
Figure \ref{fig:overviewAnalysis}(a) and (b) show the nonlocal signal for different detector distances at $B=0$ and $B=\SI{0.8}{\tesla}$, respectively. In both cases, the signals are nearly independent of contact distance which confirms that nonequilibrium is generated continuously along the superconducting strip by supercurrent-quasiparticle coupling.
Figure \ref{fig:overviewAnalysis}(c) shows the signal at $B=0$ for three different supercurrents.
The signal is sharply peaked near the gap edge, with increasing broadening as the supercurrent is increased. This reflects the increasing depairing by the supercurrent. Intuitively, one would expect the peak height to grow with supercurrent, but for the differential signal shown here this is compensated by the increased broadening. The integrated signal, however, increases monotonically with supercurrent, as expected (not shown). Figure \ref{fig:overviewAnalysis}(d) shows the signal at $B=\SI{0.8}{\tesla}$ for different supercurrents. Here, the depairing due to the magnetic field is much larger than the depairing by the supercurrent, and the signal simply increases with supercurrent as expected.
\begin{figure}
\includegraphics[width=\columnwidth]{fig6.png}
\caption{Comparison of $g_\mathrm{a}$ to the simulation for sample A (a) and sample B (b). The solid lines are the measurement results, the dashed lines are the full simulations, and the dotted lines are the simulations with $j_\mathrm{Es}$ set to zero. The shaded areas are the maximum error estimate for the simulation, corresponding to the errors of the parameters.}
\label{fig:result}
\end{figure}
Figure \ref{fig:result} shows a detailed comparison of the nonlocal signal to the numerical simulations for samples A and B. Note that all parameters were determined independently, with no free parameters left to fit. The shaded regions indicate the maximum errors determined by propagating the estimated uncertainty in sample geometry, resistances, supercurrent, $B_\mathrm{c}$ and $T_\mathrm{c}$ to the simulation result. The measured signal (solid lines) agrees with the simulation (dashed lines) within the error bars.
The slight shift of the signal to lower bias voltage compared to the model can be explained by the reduction of the energy gap by quasiparticle injection, which is not included in the simulation. To test the effect of $j_\mathrm{Es}$, we have repeated the simulation with $j_\mathrm{Es}$ set to zero (dotted lines). These simulations do not match the data, with a downward deviation in the lower Zeeman band, and an upward deviation in the upper Zeeman band, as expected from the schematic view of signal contributions in Fig.~\ref{fig:sampleModel}(d).
\section{Discussion}
First, we would like to discuss the results in zero field, {\em i.e.}, $j_\mathrm{Es}=0$ without Zeeman splitting. The supercurrent coupling described by the term $j_\mathrm{E}\nabla\phi$ has been predicted \cite{schmid1975,schmid1979,pethick1979b,clarke1980} and experimentally confirmed \cite{clarke1979,fjordboge1981,heidel1981} in the 1970s and 80s. The historic experiments were made on cm-sized structures, much larger than the inelastic relaxation length. In this case, the $f_\mathrm{L}$ mode is given by a local equilibrium distribution with a local temperature $T(x)$, and $\nabla f_\mathrm{L}\propto \nabla T$ was created by heating one end of the wire. Also, the historic experiments were focused mainly on the temperature range close to the critical temperature, where the charge relaxation time diverges.
In contrast, the present experiments were performed on $\mathrm{\mu m}$-sized structures with tunnel injection at low temperature, where the Fermi distribution has a relatively sharp edge and inelastic scattering can be mostly neglected. As a consequence, our experiments have spectral resolution, and provide an indirect measurement of the spectral supercurrent $j_\mathrm{E}$, which is otherwise not easily accessible to experiments. The spectral supercurrent in SNS Josephson junctions has been probed by controlling the distribution function \cite{baselmans1999}, but we are not aware of similar experiments in bulk superconducting wires. For weak depairing, $j_\mathrm{E}$ is sharply peaked above the gap, and quickly drops to zero at higher energy. This behavior is reflected in the signal shown in Fig.~\ref{fig:overviewAnalysis}(c).
The generated signals are nearly independent of contact distance, as expected for a constant gradient of $f_\mathrm{L}$. The constant gradient of $f_\mathrm{L}$ in the model is the result of neglecting inelastic scattering. Inelastic scattering will eventually lead to a relaxation of $f_\mathrm{L}$, with a relaxation length of $\lambda_\mathrm{L}\approx5-10~\mathrm{\mu m}$ found in our previous experiments on similar structures \cite{huebler2012b,wolf2013,heidrich2019}. In the present experiments, the contact distances were $x_\mathrm{det}\lesssim\lambda_\mathrm{L}$, so that neglecting inelastic relaxation is justified. Also, in previous comparisons of our experiments to the model the signals could be adequately described neglecting inelastic scattering \cite{silaev2015,heidrich2019}.
At high fields, the nonlocal signals broaden due to depairing, and a double-step structure is visible due to the Zeeman splitting. Since the contributions generated by both $j_\mathrm{E}$ and $j_\mathrm{Es}$ are odd functions of supercurrent and bias, they cannot be distinguished by symmetry. Instead, we have compared the signals to simulations including and excluding $j_\mathrm{Es}$, and found that neglecting $j_\mathrm{Es}$ does not describe the signal within the error bars. In particular, the relative signal weight in the upper and lower Zeeman bands requires inclusion of $j_\mathrm{Es}$. This follows from the opposite sign of the contribution of $j_\mathrm{Es}$ in the upper and lower Zeeman bands, as shown schematically in Fig.~\ref{fig:sampleModel}(d), and is independent of any small errors in overall signal magnitude due to inaccuracies of model parameters.
To conclude, we have experimentally investigated supercurrent-induced coupling of nonequilibrium modes in high-field superconductors, and found evidence for the recently predicted spin-dependent coupling term $j_\mathrm{Es}\nabla\phi$ \cite{aikebaier2018}. The interplay of spin-dependent supercurrents and quasiparticles may find applications in superconducting spintronics.
\section*{Appendix}
The samples are modeled using the quasiclassical model for dirty superconductors, with two additional approximations. First, spectral properties are calculated for a homogeneous wire in equilibrium, and only the kinetic equations contain gradients and nonequilibrium distributions. Second, inelastic scattering is neglected, as explained in the main text.
In the following, all energies are in units of the pair potential $\Delta_0$ at zero temperature and zero field, and lengths are measured in units of the dirty-limit coherence length $\xi=\sqrt{\hbar D_\mathrm{N}/\Delta_0}$, where $D_\mathrm{N}$ is the diffusion constant in the normal state. The spin index is $\sigma=\pm 1$, and $\sigma=+1$ corresponds to spin down $(\downarrow)$, {\em i.e.}, magnetic moment parallel to the applied magnetic field.
\paragraph{Spectral properties.} The model for the spectral properties used is based on \cite{maki1964}. The Usadel equation for a homogeneous superconductor has the form
\begin{equation}
\Delta G_\sigma + i(\varepsilon+\sigma\varepsilon_\mathrm{z})F_\sigma + \Sigma_\zeta + \Sigma_\mathrm{so} = 0,
\label{eqn:usadel}
\end{equation}
where $G_\sigma$ and $F_\sigma$ are the normal and anomalous Green's functions for spin $\sigma$, $\varepsilon$ is the normalized energy, and $\varepsilon_\mathrm{z}$ is the normalized spin splitting. $\Delta$ is the normalized pair potential, which has to be determined self-consistently, as explained below. The Green's functions are normalized by $F_\sigma^2+G_\sigma^2=1$, which we satisfy using the parametrization $F_\sigma=\sin\left(\theta_\sigma\right)$ and $G_\sigma=\cos\left(\theta_\sigma\right)$ with the complex pairing angle $\theta_\sigma$.
The self-energy due to orbital pair breaking is given by
\begin{equation}
\Sigma_\zeta = -\zeta F_\sigma G_\sigma,
\end{equation}
where the pair-breaking parameter $\zeta$ has two contributions,
\begin{equation}
\zeta=\frac{1}{2}\left(\frac{B}{B_\mathrm{c,orb}}\right)^2+\frac{1}{2}\left(\nabla\phi\right)^2.
\end{equation}
The first term is due to the applied in-plane magnetic field \cite{maki1964}, and the second term is due to the phase gradient $\nabla\phi$ induced by the supercurrent \cite{anthore2003}.
The effect of the magnetic field is conveniently parametrized by the ``orbital'' critical field $B_\mathrm{c,orb}$, which is related to sample parameters by \cite{maki1969}
\begin{equation}
\frac{D_\mathrm{N} e^2 B_\mathrm{c,orb}^2 t^2}{6 \hbar\Delta_0}y\left(\frac{\pi l}{t}\right) = \frac{1}{2},
\label{eqn:Bcorb}
\end{equation}
where $t$ is the film thickness, $l$ is the mean free path, and
\begin{equation}
y(z)=\frac{3}{2}\frac{(1+z^2)\mathrm{arctan}(z)-z}{z^3}
\end{equation}
is a correction due to nonlocal electrodynamics. Note that the actual critical field $B_\mathrm{c}$ is smaller than $B_\mathrm{c,orb}$ due to additional depairing by the Zeeman splitting.
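The correction function and the resulting closed form for $B_\mathrm{c,orb}$ can be sketched as follows; the length in the argument of $y$ is taken to be the film thickness $t$, and the parameter values are illustrative assumptions, not the fitted sample values:

```python
import numpy as np

def y(z):
    """Nonlocal-electrodynamics correction; y(z -> 0) = 1."""
    return 1.5 * ((1 + z**2) * np.arctan(z) - z) / z**3

def B_c_orb(D_N, t, l, Delta_0, hbar=1.0546e-34, e=1.602e-19):
    """Orbital critical field (T) solving D e^2 B^2 t^2 y / (6 hbar Delta_0) = 1/2.
    All arguments in SI units; Delta_0 in joules."""
    return np.sqrt(3 * hbar * Delta_0 / (D_N * e**2 * t**2 * y(np.pi * l / t)))

# illustrative aluminum thin-film parameters (assumed)
B_est = B_c_orb(D_N=5e-3, t=10e-9, l=10e-9, Delta_0=228e-6 * 1.602e-19)
```

For thin aluminum films this gives fields of the order of a tesla; since $B_\mathrm{c,orb}\propto 1/(t\sqrt{y})$, fitting the measured critical field fixes the effective thickness.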
The self-energy due to spin-orbit scattering is
\begin{equation}
\Sigma_\mathrm{so}=-\sigma b_\mathrm{so}(F_\uparrow G_\downarrow-F_\downarrow G_\uparrow),
\end{equation}
where the spin-orbit scattering parameter is
\begin{equation}
b_\mathrm{so}=\frac{\hbar}{3\tau_\mathrm{so}\Delta},
\end{equation}
and $\tau_\mathrm{so}$ is the spin-orbit scattering time. $b_\mathrm{so}$ cannot be determined accurately from our tunnel conductance measurements. We have therefore assumed $b_\mathrm{so}=0.02$, similar to the values obtained from earlier nonlocal spin-valve experiments \cite{huebler2012b,wolf2014c} on our aluminum films.
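The pairing angle can be obtained energy by energy with a complex Newton iteration, seeded by the closed-form solution $\theta=\arctan(i\Delta/\varepsilon)$ of the depairing-free equation. A minimal sketch, neglecting the spin-orbit self-energy and using illustrative depairing and Zeeman parameters:

```python
import numpy as np

def pairing_angle(eps, delta=1.0, zeta=0.02, eps_z=0.0, sigma=+1,
                  eta=1e-4, n_iter=40):
    """Solve delta*cos(th) + i*(eps + sigma*eps_z)*sin(th)
       - zeta*sin(th)*cos(th) = 0
    for the complex pairing angle th (spin-orbit term neglected);
    energies in units of Delta_0."""
    e = eps + sigma * eps_z + 1j * eta      # small imaginary part for stability
    th = np.arctan(1j * delta / e)          # zeta = 0 closed-form starting guess
    for _ in range(n_iter):                 # Newton refinement for zeta > 0
        f = delta * np.cos(th) + 1j * e * np.sin(th) \
            - zeta * np.sin(th) * np.cos(th)
        fp = -delta * np.sin(th) + 1j * e * np.cos(th) - zeta * np.cos(2 * th)
        th = th - f / fp
    return th

# spin-down density of states N = Re(G) = Re(cos(th)) with a Zeeman shift
eps = np.linspace(-3, 3, 601)
N_dn = np.cos(pairing_angle(eps, eps_z=0.3, sigma=+1)).real
```

The resulting $N_\sigma$ shows the Zeeman-shifted, depairing-broadened BCS peaks that enter the kinetic coefficients below.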
The model is completed by the self-consistency equation for the pair potential $\Delta$ and the Zeeman splitting $\varepsilon_\mathrm{z}$ including Fermi-liquid renormalization \cite{alexander1985},
\begin{equation}
\mathrm{ln}\left(\frac{T}{T_\mathrm{c}}\right)=\frac{\omega}{\Delta}\sum_{\omega_n}\left(F_\mathrm{s}(i\omega_n)-\frac{\Delta}{\omega_n}\right),
\label{sc1}
\end{equation}
\begin{equation}
\varepsilon_\mathrm{z} - (1-A^a_0)\frac{\mu_\mathrm{B}B}{\Delta_0}=A^a_0\omega_1\sum_{\omega_n}i G_\mathrm{t}(i\omega_n),
\label{sc2}
\end{equation}
where
\begin{equation}
A^a_0=\frac{G_0}{G_0+1}.
\end{equation}
$G_0$ is the Fermi-liquid parameter, and $\omega_n=(2n-1)\pi k_\mathrm{B} T/\Delta_0$ is the $n$-th Matsubara frequency. $F_\mathrm{s}=(F_\downarrow+F_\uparrow)/2$ and $G_\mathrm{t}=(G_\downarrow-G_\uparrow)/2$ are the singlet anomalous and triplet normal Green's function, respectively. Literature values for $G_0$ range from 0.16 to 0.3 \cite{catelani2008,alexander1985}, and we have assumed $G_0=0.2$. The phase transition to the normal state is always second order in our samples due to the effect of orbital depairing, and for the fits of the critical field, we have used the usual approximation of the self-consistency equations for $\Delta\rightarrow 0$ (Eq.~(85) of Ref. \onlinecite{alexander1985}).
\begin{table}[t]
\centering
\begin{tabular}{c|ccccccc}
Sample & $T_\mathrm{c}$ & $B_\mathrm{c,orb}$ & $\Delta_0$ & $I_0$ & $\xi$ & $G_\mathrm{inj}$ & $G_\mathrm{det}$ \\
 & (K) & (T) & ($\mathrm{\mu eV}$) & ($\mathrm{\mu A}$) & (nm) & ($\mathrm{\mu S}$) & ($\mathrm{\mu S}$) \\ \hline
A & 1.50 & 1.66 & 228 & 121 & 122 & 294 & 559--599 \\
B & 1.50 & 1.63 & 228 & 185 & 120 & 766 & 731--742
\end{tabular}\\
\caption{Overview of sample parameters. Critical temperature $T_\mathrm{c}$, orbital critical field $B_\mathrm{c,orb}$, pair potential $\Delta_0$, characteristic current $I_0$, coherence length $\xi$, injector conductance $G_\mathrm{inj}$ and detector conductance $G_\mathrm{det}$.}
\label{tab:parameters}
\end{table}
\paragraph{Kinetic equations.} Quasiparticle transport is described by four distribution functions $f_\mathrm{L}$, $f_\mathrm{T3}$, $f_\mathrm{T}$ and $f_\mathrm{L3}$, corresponding to energy, spin, charge and spin-energy currents $j_\mathrm{e}$, $j_\mathrm{s}$, $j_\mathrm{c}$ and $j_\mathrm{se}$, respectively. In equilibrium, only $f_\mathrm{L}$ is nonzero and given by
\begin{equation}
f_0(\varepsilon)=\tanh{\left(\frac{\varepsilon}{2t}\right)},
\end{equation}
where $t=k_\mathrm{B}T/\Delta_0$ is the normalized temperature.
In the following, we will only consider the deviation from equilibrium, {\em i.e.}, we implicitly subtract $f_0$ from $f_\mathrm{L}$.
The distribution functions and currents are related by \cite{silaev2015,bobkova2016,aikebaier2018}
\begin{equation}
\begin{pmatrix}
j_\mathrm{e}\\j_\mathrm{s}\\j_\mathrm{c}\\j_\mathrm{se}
\end{pmatrix}
=
\begin{pmatrix}
D_\mathrm{L}\nabla & D_\mathrm{T3}\nabla & j_\mathrm{E}\nabla\phi & j_\mathrm{Es}\nabla\phi \\
D_\mathrm{T3}\nabla & D_\mathrm{L}\nabla & j_\mathrm{Es}\nabla\phi & j_\mathrm{E}\nabla\phi \\
j_\mathrm{E}\nabla\phi & j_\mathrm{Es}\nabla\phi & D_\mathrm{T}\nabla & D_\mathrm{L3}\nabla \\
j_\mathrm{Es}\nabla\phi & j_\mathrm{E}\nabla\phi & D_\mathrm{L3}\nabla & D_\mathrm{T}\nabla
\end{pmatrix}
\begin{pmatrix}
f_\mathrm{L} \\ f_\mathrm{T3} \\ f_\mathrm{T} \\ f_\mathrm{L3}
\end{pmatrix}.
\label{eqn:transport1}
\end{equation}
Here, the $D_m$ are the spectral diffusion coefficients for mode $m$. The spectral supercurrent densities are
\begin{eqnarray}
j_\mathrm{E} & = & \frac{1}{2}\mathrm{Im}\left(F_\downarrow^2+F_\uparrow^2\right), \\
j_\mathrm{Es} & = & \frac{1}{2}\mathrm{Im}\left(F_\downarrow^2-F_\uparrow^2\right).
\end{eqnarray}
Current densities are either driven by gradients of the distribution functions, or through their coupling to the supercurrent. Measuring the off-diagonal coupling due to $j_\mathrm{Es}$ is the goal of this paper.
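In Eq.~(\ref{eqn:transport1}) the diffusion entries act on the gradients of the distribution functions, while the supercurrent entries multiply the distribution functions themselves. This structure can be encoded directly; the coefficients below are illustrative assumptions, not fitted values:

```python
import numpy as np

def current_densities(f, grad_f, D, j_E, j_Es, grad_phi):
    """Spectral currents (j_e, j_s, j_c, j_se) for f = (f_L, f_T3, f_T, f_L3);
    D = (D_L, D_T3, D_T, D_L3). Diffusion acts on grad_f, supercurrent on f."""
    D_L, D_T3, D_T, D_L3 = D
    diff = np.array([[D_L,  D_T3, 0.0,  0.0 ],
                     [D_T3, D_L,  0.0,  0.0 ],
                     [0.0,  0.0,  D_T,  D_L3],
                     [0.0,  0.0,  D_L3, D_T ]])
    sup = grad_phi * np.array([[0.0,  0.0,  j_E,  j_Es],
                               [0.0,  0.0,  j_Es, j_E ],
                               [j_E,  j_Es, 0.0,  0.0 ],
                               [j_Es, j_E,  0.0,  0.0 ]])
    return diff @ grad_f + sup @ f

# a pure f_L mode with zero gradients drives only charge and spin-energy currents
j = current_densities(f=np.array([0.1, 0.0, 0.0, 0.0]), grad_f=np.zeros(4),
                      D=(1.0, 0.1, 0.8, 0.05), j_E=0.8, j_Es=0.2, grad_phi=0.1)
```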
Relaxation of the nonequilibrium currents is given by
\begin{equation}
\begin{split}
\nabla j_\mathrm{e} & = 0\\
\nabla j_\mathrm{s} & = S_\mathrm{T3}f_\mathrm{T3}\\
\nabla j_\mathrm{c} & = R_\mathrm{T} f_\mathrm{T} + R_\mathrm{L3}f_\mathrm{L3}\\
\nabla j_\mathrm{se} & = (R_\mathrm{T}+S_\mathrm{L3})f_\mathrm{L3} + R_\mathrm{L3}f_\mathrm{T}
\end{split}
\label{transport2}
\end{equation}
$S_\mathrm{T3}$ and $S_\mathrm{L3}$ are spin relaxation rates due to spin-orbit scattering (we neglect spin flips by magnetic impurities). The coefficients $R_\mathrm{T}$ and $R_\mathrm{L3}$ describe charge relaxation by coupling to the superconducting condensate. As described above, we neglect inelastic scattering, and therefore the relaxation rate of $j_\mathrm{e}$ is zero.
The differential equations are supplemented by boundary conditions. For a spin-degenerate injector at $x=0$, these read
\begin{equation}
\begin{pmatrix}
\left[j_\mathrm{e}\right]\\\left[j_\mathrm{s}\right]\\\left[j_\mathrm{c}\right]\\\left[j_\mathrm{se}\right]
\end{pmatrix}
= \kappa_\mathrm{I}
\begin{pmatrix}
N_+ & N_- & 0 & 0 \\
N_- & N_+ & 0 & 0 \\
0 & 0 & N_+ & N_- \\
0 & 0 & N_- & N_+
\end{pmatrix}
\begin{pmatrix}
\left[f_\mathrm{L}\right] \\ \left[f_\mathrm{T3}\right] \\ \left[f_\mathrm{T}\right] \\ \left[f_\mathrm{L3}\right]
\end{pmatrix},
\label{eqn:bc}
\end{equation}
where $\left[j_m\right]=j_m(x=0+)-j_m(x=0-)$ and $\left[f_m\right]=f_m(x=0)-f_m^{\mathrm{inj}}$. The effective injection rate is given by
\begin{equation}
\kappa_\mathrm{I}=G_{\mathrm{inj}}\frac{\rho_\mathrm{N}\xi}{A},
\end{equation}
where $A$ is the cross-section of the wire, and $\rho_\mathrm{N}$ is the normal-state resistivity.
The distribution functions of the injector are given by
\begin{eqnarray}
f_{L}^{\mathrm{inj}} & = & \frac{1}{2}\left(f_0\left(\varepsilon+\mu\right)+f_0\left(\varepsilon-\mu\right)\right)-f_0\left(\varepsilon\right), \\
f_{T}^{\mathrm{inj}} & = & \frac{1}{2}\left(f_0\left(\varepsilon+\mu\right)-f_0\left(\varepsilon-\mu\right)\right),
\end{eqnarray}
where $\mu=-eV_\mathrm{inj}/\Delta_0$ is the electrochemical potential of the injector. The T3 and L3 modes are zero in the injector.
The boundary conditions at the ends of the wire are $f_m(x=\pm L)=0$.
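The injector distribution functions can be evaluated directly; a short sketch with an illustrative bias and temperature (energies in units of $\Delta_0$):

```python
import numpy as np

def f0(eps, t):
    """Equilibrium distribution in L-mode form."""
    return np.tanh(eps / (2.0 * t))

def injector_distributions(eps, mu, t):
    """Injected L and T modes for bias mu = -e V_inj / Delta_0
    (the equilibrium f0 is already subtracted from f_L)."""
    fL = 0.5 * (f0(eps + mu, t) + f0(eps - mu, t)) - f0(eps, t)
    fT = 0.5 * (f0(eps + mu, t) - f0(eps - mu, t))
    return fL, fT

eps = np.linspace(-4, 4, 801)
fL_inj, fT_inj = injector_distributions(eps, mu=1.5, t=0.02)
```

$f_\mathrm{L}^\mathrm{inj}$ is odd and $f_\mathrm{T}^\mathrm{inj}$ even in energy, and at low temperature $f_\mathrm{T}^\mathrm{inj}\approx 1$ in the bias window $|\varepsilon|<\mu$.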
\begin{figure}
\includegraphics[width=0.7\columnwidth]{fig7.eps}
\caption{Comparison of the measured (symbols) and theoretical (line) critical current of sample A at $T=100~\mathrm{mK}$.}
\label{fig:Ic}
\end{figure}
\paragraph{Observables.}
The spin-resolved density of states is $N_\sigma=\mathrm{Re}(G_\sigma)$, which gives the spin-symmetric and antisymmetric parts
\begin{equation}
N_\pm = N_\downarrow\pm N_\uparrow.
\end{equation}
The differential tunnel conductance of the injector is
\begin{equation}
g = \frac{G_\mathrm{inj}}{2}\int_{-\infty}^{\infty}N_+\frac{\partial f_0(\varepsilon-\mu)}{\partial \mu}d\varepsilon.
\end{equation}
The supercurrent is given by
\begin{equation}
I_\mathrm{S} = I_0\frac{\nabla\phi}{2}\int_{-\infty}^\infty f_0\left(\varepsilon\right)j_\mathrm{E} d\varepsilon
\end{equation}
with the characteristic current
\begin{equation}
I_0 = \frac{\Delta_0 A}{e \rho_\mathrm{N} \xi}.
\end{equation}
The theoretical critical current at $T=0$ and $B=0$ is about $0.75I_\mathrm{0}$.
Fig.~\ref{fig:Ic} compares the measured critical current for sample A to the model prediction. The measured critical current is slightly smaller than predicted, probably as a result of premature escape due to noise.
For the simulations, first the spectral properties were calculated self-consistently for the applied field and temperature, including $\nabla\phi$ determined self-consistently from the applied supercurrent. Then the kinetic equations were solved numerically on an equidistant position and energy grid with spacings $\delta x = L/80$ and $\delta \varepsilon=1/20$, respectively.
The discovery in recent years of many isolated neutron stars (NSs) at
the centers of supernova remnants (SNRs) confirms the long-held notion
that these ultra-dense stellar remnants are born in supernova
explosions \cite{baa34}. Most NSs are identified as
pulsars, whose emission derives either from rotational energy loss, as
for the rapidly spinning pulsars in the Crab ($P = 33$~ms) and Vela
($P = 89$~ms) remnants, or from magnetic field decay, as posited for
the highly magnetized $(10^{14} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} B_s \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 10^{15}$~G)
anomalous X-ray pulsars (AXPs) and
soft gamma-ray repeaters (SGRs). However, the
nature of a significant fraction of the young ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 10^4$~yrs)
NSs in SNRs is uncertain. These so-called central compact objects
(CCOs) are seemingly isolated NSs, distinguished by their steady flux,
predominantly thermal X-ray emission, lack of optical or radio counterparts,
and absence of a pulsar wind nebula (see review by
Pavlov et al.\ \cite{pav04}).
The properties of eight confirmed and proposed CCOs are summarized in Table~1.
(We omit the unique source 1E 161348$-$5055 at the center of
RCW 103 because its large-amplitude variability \cite{got99,del06}
violates the adopted definition.)
CCO luminosities are typically $10^{33}$
erg s$^{-1}$, similar to the younger pulsars; however, their spectra
are best characterized as hot blackbody emission of $kT_{BB} \sim
0.4$~keV, or two such components. This is significantly
hotter than the surfaces of radio pulsars or other radio-quiet NSs
and corresponds to only a small fraction of the NS surface
area, with $R_{BB} \sim 0.7$~km.
Here we summarize new results on two unique pulsars that were initially
identified as CCOs. Their properties, listed in Table 2, are
strikingly different from both the traditional radio pulsars
and the magnetars. The lack of a measurable period derivative
requires that they have:
\begin{itemize}
\item small natal magnetic fields ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 10^{11}$~G),
\item slow initial spin periods very near to their present values
($>0.1$~s), and
\item X-ray luminosities powered by internal cooling, and possibly by accretion
of supernova (SN) debris.
\end{itemize}
\noindent
We predict that these properties will come to
characterize the CCO class of young neutron stars in general.
This can be tested with more sensitive searches for pulsations
from those objects listed in Table~1.
\begin{table}
\begin{tabular}{llcccrrl}
\hline\hline
{\hfil CCO \hfil} & {\hfil SNR \hfil} &
{Age} & {$d$} & {$P$} & {\hfil $f_p$\tablenote{Upper limits on pulsed fraction
are for a search down to $P=12$~ms or smaller.}\hfil }
& {\hfil $L_x$ \hfil} & {\hfil References \hfil} \cr
{ } & { } & {(kyr)} & {(kpc)} & {(ms)} & (\%) & {(erg s$^{-1}$)} & \\
\hline
RX~J0822.0$-$4300 & Puppis~A & 3.7 & 2 & \dots\ & $<5$ & $5 \times 10^{33}$ & \cite{hui06}\cr
CXOU~J085201.4$-$461753 & G266.1$-$1.2 & 1 & 1 & \dots\ & $<13$ & $2.5\times 10^{32}$ & \cite{sla01,kar02,bam05,iyu05}\cr
1E 1207.4$-$5209 & PKS~1209$-$51/52& 7 & 2 & 424 & 9 & $2.1 \times 10^{33}$ & \cite{zav00,mer02a,big03,del04}\cr
CXOU~J160103.1$-$513353 & G330.2$+$1.0 & $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 3$ & 5 & \dots\ & \dots\ & $1.0 \times 10^{33}$ & \cite{par06}\cr
1WGA~J1713.4$-$3949 & G347.3$-$0.5 & 1.6 & 1.3 & \dots\ & $< 25$ & $1\times 10^{33}$ & \cite{laz03,cas04} \cr
CXOU~J181852.0$-$150213 & G15.9$+$0.2 & $1-3$ & (8.5) & \dots\ & \dots\ & $1 \times 10^{33}$ & \cite{rey06}\cr
CXOU~J185238.6$+$004020 & Kes~79 & $7$ & 7 & 105 & 80 & $3.3 \times 10^{33}$ & \cite{sew03,got05,hal07}\cr
CXOU~J232327.9$+$584842 & Cas~A & 0.33 & 3.4 & \dots\ & $<27$ & $2.6\times 10^{33}$ & \cite{pav00,cha01,mur02,mer02b}\cr
\hline
\end{tabular}\\
\caption{Central Compact Objects in Supernova Remnants}
\end{table}
\subsection{{PSR~J1210$-$5226}\ in SNR {PKS~1209$-$51/52}}
The central source {1E~1207.4$-$5209}\ in the SNR {PKS~1209$-$51/52}\ (Fig.~1)
is the most intensively
studied of the CCOs. It acquired special importance as
the first CCO detected as a pulsar \cite{zav00,pav02}.
It was distinguished again as the first isolated NS to display strong
absorption lines in its X-ray spectrum \cite{san02,mer02a,big03}.
We have made a comprehensive
study of all X-ray timing data on {1E~1207.4$-$5209}\ ({PSR~J1210$-$5226}) \cite{got07},
showing that its rotation is highly stable (Fig. 2), with $P =
424.130749(4)$~ms and $\dot P = (6.6 \pm 9.0) \times 10^{-17}$
($1\sigma$ errors), superseding previous claims of large period
changes in the same data \cite{zav04,woo07}. In
the dipole spin-down formalism, the $2\sigma$ upper limit on $\dot P$
implies a spin-down luminosity $\dot E \equiv -I\Omega\dot \Omega < 1.3 \times
10^{32}$~erg~s$^{-1}$, surface magnetic field strength $B_s < 3.3
\times 10^{11}$~G, and characteristic age $\tau_c \equiv P/2\dot P>
27$~Myr. This lower limit on $\tau_c$ exceeds the SNR age by at least
$3$ orders of magnitude, requiring that the pulsar was born spinning
at its present period to three significant digits. The X-ray
luminosity of {1E~1207.4$-$5209}, $L_{\rm bol} \approx 2.1 \times 10^{33}\,(d/2\ {\rm
kpc})^2$ erg~s$^{-1}$, exceeds its $\dot E$, implying that $L_{\rm
bol}$ derives from residual cooling, and perhaps partly from accreting
SN debris.
\subsection{{PSR J1852$+$0040}\ in SNR Kes 79}
We also discovered pulsations from a second CCO, in the
SNR Kes~79 \cite{got05} (Fig.~3).
Our follow-up program to time
{PSR J1852$+$0040}\ produced a remarkable result: no change in its 105~ms
period over 2.4 yr \cite{hal07}. From the data
shown in Figure~4, we derived a
$2\sigma$ upper limit of $\dot P < 2.0 \times 10^{-16}$, which
leads to a spin-down luminosity $\dot E < 7 \times 10^{33}$~erg~s$^{-1}$,
surface magnetic field strength $B_s < 1.5 \times 10^{11}$~G,
and characteristic age $\tau_c > 8$~Myr.
Again, this implies that the pulsar was born spinning
at its current period, with
a weaker $B$-field than that of any other young pulsar.
These are the only two NSs whose initial rotation periods are so precisely
inferred. They are longer than what was once thought typical,
but in fact are consistent with recent statistical
analyses of the radio pulsar population, e.g.,
Faucher-Gigu\`ere \& Kaspi \cite{fau06} who
favor a wide distribution of birth periods
(Gaussian mean $P \sim 300$~ms and $\sigma_P \sim 150$~ms).
The absence of spin-down for both CCO pulsars then
highlights the difficulty of existing theories to
explain the high luminosities and temperatures of CCO thermal X-ray
spectra. Their
X-ray luminosities are a large fraction of their $\dot E$, challenging the
rotation-powered assumption, and greater than their reservoirs of
$B$-field energy, refuting the magnetar hypothesis.
\begin{table}
\begin{tabular}{lcc}
\hline\hline
{\hfil Parameter \hfil} & {PSR J1210$-$5226} & {PSR J1852$+$0040} \cr
\hline
$P$ (ms) & 424.1307 & 104.9126 \cr
$\dot P$ ($10^{-17}$ s s$^{-1}$) & $6.6 \pm 9.0$ & $-34 \pm 27$ \cr
$\dot E$ ($10^{32}$ erg s$^{-1}$)\tablenote{$2\sigma$ limit assuming
magnetic dipole spin-down.}
& $<1.3$ & $<70$ \cr
$L({\rm bol})/\dot E$
& $> 15$ & $>0.5$ \cr
$B_s$ ($10^{11}$ G)$^*$ & $<3.3$ & $<1.5$ \cr
$\tau_c$ (Myr)$^*$ & $>27$ & $>8$ \cr
SNR age (kyr) & $\sim 7$ & $\sim 7$ \cr
Reference & \cite{got07} & \cite{hal07} \cr
\hline
\end{tabular}\\
\caption{Spin Parameters of CCO Pulsars}
\end{table}
\section{Accreting and/or cooling}
Instead, the high blackbody temperatures ($kT_{BB} \sim 0.4$~keV)
and small blackbody radii ($R_{BB} \sim 0.7$~km) of CCOs
may be evidence of accretion onto a polar cap, possibly
from a fallback disk of SN debris. Interior cooling models, even
with anisotropic conduction, do not make such a concentrated hot spot.
We \cite{hal07} proposed this as the origin of CCOs:
Magnetic field is generated by a turbulent dynamo
whose strength depends on the rotation rate of the
proto-neutron star \cite{tho93,bon06},
so the magnetic field strength would be inversely correlated with
the initial period. If $P$ is large and $B_s$ is small,
accretion of SN debris is possible. Accretion excludes
neutron stars born with both $B_s < 10^{11}$~G
and $P > 0.1$~s from radio pulsar surveys,
where $B_s < 10^{11}$~G is not found except in very old
($\tau_c > 40$~Myr) or recycled pulsars.
\begin{figure}
\includegraphics[height=0.32\textheight]{pks_pspc_bw.ps}
\caption{
Greyscale {\it ROSAT}\ X-ray image of the CCO pulsar {PSR~J1210$-$5226}\
in the SNR {PKS~1209$-$51/52}. The pulsar is offset from the center of the
barrel-shaped thermal remnant.}
\end{figure}
\begin{figure}
\includegraphics[width=0.33\textheight]{1e_1207_timing.ps}
\caption{Timing of {PSR~J1210$-$5226}\ \cite{got07}.
{\it Filled circles} are from \xmm, {\it open circles} are from \chandra,
and the {\it open triangle} is from {\it ROSAT}.}
\end{figure}
Another clue to the nature of CCOs is the soft X-ray
spectrum of {1E~1207.4$-$5209}. It has broad absorption lines
centered at $0.7$~keV and $1.4$~keV \cite{san02,mer02a,big03,del04}.
Our upper limit, $B_s < 3.3 \times 10^{11}$~G, favors the electron
cyclotron model \cite{big03,del04},
for at least one of the lines, over all others that
require stronger fields. The basic cyclotron prediction, $B_s = 8 \times
10^{10}$~G, assumes that 0.7~keV is the fundamental energy $E_c =
1.16(B_s/10^{11}\,{\rm G})/(1+z)$ keV, where $z$ is the gravitational
redshift. Another solution postulates hydrogenic oxygen for the
0.7~keV line, while the 1.4~keV line is the cyclotron fundamental
\cite{hai02,mor06}. As the authors of the latter hypothesis pointed out,
abundant oxygen may be accreted from SN debris.
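The basic cyclotron number is easy to verify (our own arithmetic; only the result is quoted in the literature): inverting $E_c = 1.16(B_s/10^{11}\,{\rm G})/(1+z)$ keV for the 0.7~keV line, with an assumed, typical NS gravitational redshift $z \approx 0.3$, roughly recovers the quoted field.

```python
# Invert the electron-cyclotron relation for the 0.7 keV fundamental,
# assuming a typical NS gravitational redshift z ~ 0.3 (our assumption).
E_c = 0.7                          # line energy (keV)
z = 0.3                            # assumed gravitational redshift
Bs = E_c * (1 + z) / 1.16 * 1e11   # surface field (G)
print(f"B_s ~ {Bs:.1e} G")         # close to the quoted 8e10 G
```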
\begin{figure}
\includegraphics[height=0.32\textheight]{kes79_xmm_bw.ps}
\caption{
Greyscale \xmm\ X-ray image of the CCO pulsar {PSR J1852$+$0040}\ in the SNR Kes~79.
The image is centered on the pulsar within the shell-type thermal remnant.}
\end{figure}
\begin{figure}
\includegraphics[width=0.33\textheight]{kes_79_timing.ps}
\caption{ Timing of {PSR J1852$+$0040}\ \cite{hal07}.
{\it Filled circles} are from \xmm\ and the {\it open circle}
is from \chandra.}
\end{figure}
Accretion from a fallback disk of SN debris in the propeller
regime has been considered for CCOs by several authors
\cite{alp01,kar02,shi03,zav04,eks05,liu06}.
We propose that this may be the first
phase in the life of those neutron stars born rotating slowly
with weak magnetic fields.
The X-ray luminosity of CCOs, or possibly just their hot spots,
can be powered
by accretion of $\dot m \approx 3 \times 10^{13}$ g~s$^{-1}$,
or only $\approx 0.1$ lunar masses of supernova debris over
their $\sim 7$~kyr lifetimes.
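The quoted mass budget is a back-of-the-envelope check (our arithmetic, taking a lunar mass of $7.35\times 10^{25}$~g):

```python
# Total mass accreted at mdot ~ 3e13 g/s over the ~7 kyr SNR age,
# expressed in lunar masses (M_moon = 7.35e25 g, our assumed value).
mdot = 3e13                 # accretion rate (g/s)
t = 7e3 * 3.156e7           # 7 kyr in seconds
M_moon = 7.35e25            # lunar mass (g)
M_acc = mdot * t            # total accreted mass (g)
ratio = M_acc / M_moon
print(f"{M_acc:.1e} g ~ {ratio:.2f} lunar masses")
```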
The main barrier to disk accretion is the speed-of-light cylinder,
of radius $r_{\ell} = cP/2\pi$.
If an accretion disk cannot penetrate the light cylinder, the NS
cannot interact with the disk, and it behaves as an isolated pulsar.
But if $B_s$ is as small as $10^{10}$~G, accretion at a rate
$\dot M \geq 10^{13}$~g~s$^{-1}$ can penetrate the light cylinder
to the magnetospheric radius $r_m$, since $r_m < r_{\ell}$
in this case.
If so, the system is in the propeller regime,
in which matter flung out from
$r_m$ at a rate $\dot M$ takes angular momentum from the NS,
causing it to spin down.
In the case of {PSR J1852$+$0040}, we \cite{hal07}
estimated the propeller spin-down rate as
\begin{displaymath}
\dot P \approx 2.2 \times 10^{-16}\,\mu_{28}^{8/7}\,\dot M_{13}^{3/7}
\left({M \over M_{\odot}}\right)^{-2/7}
\end{displaymath}
\begin{equation}
\times \ I_{45}^{-1}\left({P \over 0.105\ {\rm s}}\right)
\left(1- {P \over P_{\rm eq}}\right)
\end{equation}
using the prescription of Menou et al.\ \cite{men99}.
Here $I \approx 10^{45}$ g~cm$^2$
is the NS moment of inertia,
$\mu = B_s\,R^3 \approx 10^{28}$ G~cm$^3$,
and $P_{\rm eq}$ is the equilibrium, or minimum
period for disk accretion.
Appropriate scaling for the 0.424~s period of {1E~1207.4$-$5209}\ can be
substituted in equation (1). These predictions for accretion are close to
the observed upper limits on $\dot P$ (Table~2).
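For concreteness, equation (1) can be evaluated at the fiducial parameter values appearing in the text ($\mu_{28} = 1$, $\dot M_{13} = 1$, $M = M_\odot$, $I_{45} = 1$, $P = 0.105$~s) in the limit $P \ll P_{\rm eq}$; the snippet below is our own illustration of the comparison with the $2\sigma$ limit $\dot P < 2.0\times 10^{-16}$.

```python
# Propeller spin-down rate of equation (1), in the limit P << P_eq.
def propeller_pdot(mu28=1.0, mdot13=1.0, m_over_msun=1.0,
                   i45=1.0, P=0.105, P_over_Peq=0.0):
    return (2.2e-16 * mu28**(8 / 7) * mdot13**(3 / 7)
            * m_over_msun**(-2 / 7) / i45
            * (P / 0.105) * (1.0 - P_over_Peq))

# Fiducial values give ~2.2e-16, comparable to the 2-sigma limit 2.0e-16.
print(f"Pdot ~ {propeller_pdot():.1e}")
```

The scaling for the 0.424~s period of {1E~1207.4$-$5209}\ follows by passing `P=0.424` to the same function.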
We distinguish here between $\dot M$, the matter expelled that is
responsible for the torque on the NS,
and $\dot m \approx 3 \times 10^{13}$~g~s$^{-1}$,
the matter accreted, which is responsible for the X-ray emission
from the surface, presumably at a magnetic pole of the NS.
For the propeller model to be self-consistent,
$\dot M$ must be $> \dot m$, which is possible according to
equation (1) as long as $B_s < 10^{10}$~G.
In X-ray binaries, the propeller effect does not necessarily
preclude accretion onto the NS surface.
The disposition of material leaving the inner
edge of the accretion disk is not well understood,
and many authors, e.g., Rappaport et al.\ \cite{rap04}, consider that
accretion onto the NS can proceed even in the propeller (fast pulsar)
regime. We adopt that point of view here.
Accreting X-ray binary pulsars are often found in near equilibrium
states where $\dot P$ changes sign without changing luminosity.
Equation (1) may then represent the typical magnitude of $\dot P$,
independent of sign, albeit at an accretion rate 4--5 orders of magnitude
less than in binaries.
Whether CCO pulsars are accreting or not,
the conclusion that they have weak magnetic fields is
unavoidable. The hypothesis of dipole spin-down can be
quantified by measuring $\dot P$ using phase-coherent timing,
the only method that is effective in a reasonable time span
for detecting such small $B$-fields. A steady,
positive $\dot P$ will yield $B_s$ via the dipole spin-down formula,
$B_s = 3.2 \times 10^{19}(P \dot P)^{1/2}$. But if the pulsars
show episodes of fluctuating or negative $\dot P$, that
will be evidence of accretion, i.e., torque noise.
Existing data cannot (yet) be used to accomplish these tests
because they were not densely spaced enough to span gaps of
several months with a coherent timing solution free of cycle
count uncertainties. But a carefully planned sequence of
new observations can be used to fit a coherent
ephemeris that will bootstrap
the historical data into a phase-linked timing solution
spanning 4.5 years for {PSR J1852$+$0040}, and 10 years for {PSR~J1210$-$5226},
which will improve the sensitivity to $\dot P$ by 3 orders of magnitude
over the limits in Table~2, reaching limits of $B_s \sim 10^{10}$~G.
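The claimed gain is consistent with the scaling $B_s \propto \sqrt{\dot P}$, as the following check of our own shows:

```python
# Improving the Pdot limit by 3 orders of magnitude improves the
# dipole-field sensitivity by sqrt(1000) ~ 32, reaching a few 1e9 G,
# i.e. of order 1e10 G (our arithmetic, using the formula in the text).
import math
P = 0.105
Pdot_new = 2.0e-16 / 1e3                 # limit improved 1000-fold
Bs = 3.2e19 * math.sqrt(P * Pdot_new)    # dipole surface field (G)
print(f"B_s sensitivity ~ {Bs:.1e} G")
```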
\section{Are There Younger CCOs?}
CCOs represent a significant fraction of young NSs.
Here, we propose that their birth properties
are common enough to be a likely explanation
for the inconspicuous nature of the two youngest
NSs, the CCO in Cas~A, and an undetected pulsar
in the SN 1987A remnant.
\subsection{Cas A: Magnetar or Anti-Magnetar?}
The Cas~A CCO is the youngest
known NS (327 yr); the SN was probably
seen by Flamsteed in 1680, which agrees with the measured
convergent date of the SNR ejecta \cite{tho01}.
The simple argument that CCOs
are born spinning at their current periods, and with their
current magnetic field strengths, implies that their
luminosities need not have decreased substantially since
birth. Only their interior cooling would reduce
their soft X-ray emission.
The Cas~A CCO is characterized, like the others,
by its steady X-ray luminosity of $\approx 2.6 \times 10^{33}$ erg~s$^{-1}$,
compared to $\sim 10^{35}$ erg~s$^{-1}$ for magnetars.
Nevertheless, despite the strong evidence that CCO pulsars
have weak magnetic fields, there is circumstantial
evidence that Cas A may host a magnetar.
Rapidly moving IR features detected by the
{\it Spitzer Space Telescope} outside Cas~A were interpreted as a light echo
from a beamed, high-energy flare some 55 years ago that is heating the
surrounding interstellar dust \cite{kra05}.
An SGR-like outburst of $\sim 2 \times 10^{46}$~erg (isotropic equivalent)
would have been required to explain the reprocessed IR luminosity.
Until recently, a quiescent magnetar was a favored
hypothesis for the nature of the Cas~A point source.
However, the magnetar hypothesis implies that the present
rotation period of the 327~yr~old pulsar is
$P \approx 0.45(B_s/10^{14}\ {\rm G})$~s,
assuming that its initial period $P_0 \ll P$, which also requires
that $\dot E \approx 9.5 \times 10^{36}(B_s/10^{14}\ {\rm G})^{-2}$
erg~s$^{-1}$. Cas~A could therefore have a spin-down
luminosity that exceeds the typical $10^{35}$ erg~s$^{-1}$ X-ray
luminosity of all other magnetars.
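These two relations follow from standard dipole braking with $P_0 \ll P$, as the following check of our own confirms:

```python
# Magnetar hypothesis for the Cas A CCO: dipole braking with P_0 << P
# gives P^2 ~ 2 (B_s/3.2e19)^2 t, Pdot = P/(2t), Edot = 4 pi^2 I Pdot/P^3.
import math
t = 327 * 3.156e7    # age (s)
I = 1e45             # moment of inertia (g cm^2)
B = 1e14             # assumed magnetar-strength field (G)

P = math.sqrt(2.0 * (B / 3.2e19)**2 * t)
Pdot = P / (2.0 * t)
Edot = 4 * math.pi**2 * I * Pdot / P**3
print(f"P ~ {P:.2f} s, Edot ~ {Edot:.1e} erg/s")   # ~0.45 s, ~9.5e36
```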
Such a large spin-down luminosity is almost always
accompanied by substantial non-thermal X-ray luminosity, of order
$10^{-3} \dot E$, including a resolvable
pulsar wind nebula. The absence of any X-ray evidence of
such energetic spin-down argues against the presence of
a typical magnetar $B$-field in the Cas~A point source.
Thus, Cas~A is the
focus of sharply contradictory evidence about the birth properties
of CCOs: their natal magnetic
field strengths and initial spin periods.
Apart from
waiting for an SGR or AXP-like outburst (which may never occur),
the only way to resolve whether Cas~A hosts
a transient magnetar or an ``anti-magnetar''
is by direct measurement of its spin properties.
A pulsar in Cas~A will reveal the spin period and dipole
$B$-field of a NS at an age that is only a few percent of the
known AXPs' and CCOs' ages,
providing a correspondingly more secure representation
of their initial values. If the magnetar hypothesis is not confirmed,
it may imply that the high-energy flare was powered by a one-time event
in Cas A, a first-order phase transition (corequake)
\cite{zdu07}, e.g., to a pion or kaon condensate or deconfined quarks.
That would be a remarkable outcome.
\subsection{Is There a CCO in SN~1987A?}
It has long been known that the non-detection of a pulsar in SN 1987A
can be explained if the NS was born with a weak $B$-field or a long
rotation period \cite{oge04,man07}. Now, we have established that a
CCO can have both, which gives it essentially the same spin-down
luminosity at birth that it has at an age of $10^4$~yr. This means
that a CCO can emit less than the observed limits from SN~1987A even
if 100\% of its spin-down power is reprocessed into IR emission by
dust in the surrounding SN ejecta. The observed luminosity limits for
a point source inside the ring of SN~1987A are -- in X-rays:
$L_x(2-10\,{\rm keV}) < 1.5 \times 10^{34}$ erg~s$^{-1}$ corrected for
extinction \cite{par04}, in the visible range: $L(2900-9650$~\AA) $<
8 \times 10^{33}$ erg~s$^{-1}$ corrected for dust absorption
\cite{gra05}, and at mid-IR wavelengths: $L(10\,\mu{\rm m}) < 1.5
\times 10^{36}$ erg~s$^{-1}$ for dust emitting at $T \approx 90-100$~K
\cite{bou04}. This mid-IR luminosity can be accounted for by
radioactive decay of $^{44}$Ti, and therefore represents a very
conservative upper limit on the spin-down power of an embedded pulsar.
At an age of $10-20$~yr, a cooling NS need emit only $\approx 3 \times
10^{34}$ erg~s$^{-1}$ of soft X-rays at a temperature of $2.5 \times
10^6$~K \cite{yak02}, some of which is absorbed by SN ejecta or ISM in
the LMC. So we conclude that a CCO is a promising model for an unseen
NS in SN~1987A. Given the lack of any other NS signatures from CCOs,
a continuing search for surface thermal X-ray emission as the ejecta
thin out is perhaps the best hope of detecting the NS in SN~1987A,
even though it is becoming more difficult as the SNR brightens
dramatically.
\begin{theacknowledgments}
This work uses data obtained with the \xmm\ and
\chandra\ observatories and funded by NASA grants NNX06AH95G and SAO
G06-7048X. EVG thanks the conference organizers for financial support.
\end{theacknowledgments}
\bibliographystyle{aipprocl}
\section{Concluding remarks}
\label{chap:conclusion}
In this paper, we propose a decentralized online algorithm for optimizing convex and monotone continuous DR-submodular functions, with regret and $\parenthese{1-\frac{1}{e}}$-regret bounds of $O(T^{4/5})$. The extension of the algorithm to the bandit setting ensures a $\parenthese{1-\frac{1}{e}}$-regret bound of $O(T^{8/9})$. A detailed analysis is given when the constraint set is either a general convex set or a downward-closed convex set, under full-information and bandit settings, respectively. In addition, experimental results on a real-life movie recommendation problem demonstrate the benefit of the proposed algorithm for learning in decentralized settings.
\section{Full Information Setting}
\label[section]{chap:formulation}
This section thoroughly describes the algorithm for both convex and DR-submodular optimization. Recall that each agent receives a function $f_t^i$ at every time $t \in \bracket{T}$. We partition the time steps into $Q$ blocks, each of size $K$, so that $T = QK$. For each block $q \in \bracket{Q}$, we define $f^i_q$ as the average of the $K$ functions within the block. Additionally, each agent $1 \leq i \leq n$ maintains $K$ online linear optimization oracles $\mathcal{O}_{i,1}, \ldots, \mathcal{O}_{i,K}$. Let $\sigma_q \in \mathfrak{S}_K$ be a random permutation of the function indices, used by all agents.
At a high level, in each block $q$, agent $i$ performs $K$ steps of the Frank-Wolfe algorithm, where the update vector combines the oracles' outputs with the aggregate of its neighbors' current decisions. The final decision $\vect{x}^i_q$ for block $q$ is revealed at the end of the $K$ steps, and agent $i$ plays this same decision $\vect{x}^i_q$ at every time step in the block.
More specifically, following the Frank-Wolfe steps, agent $i$ performs $K$ gradient updates using the estimators $f^i_{\sigma_q(k)}$. It calculates the stochastic gradient of the permuted function $f^i_{\sigma_q (k)}$ evaluated at the corresponding decision vector $\vect{x}^i_{q,k}$ and thereafter exchanges information with its neighbors. It then computes a variance reduction version $\widetilde{\vect{a}}^{i}_{q,k}$ of the vector $\widetilde{\vect{d}}^{i}_{q,k}$ and returns $\langle \widetilde{\vect{a}}^i_{q,k}, \cdot \rangle$ as the cost function at time $\sigma^{-1}_q (k)$ to the oracle $\mathcal{O}^i_{k}$. The vectors $\widetilde{\vect{d}}^i_{q,k}$ are subtly constructed to capture progressively more information on the accumulating cost functions.
Note that the use of random permutation $\sigma_q$ is crucial here. By that, all the permuted functions $f^i_{\sigma_q(k)}$ become an estimation of $f^i_q$, i.e.,
$\ensuremath{\mathbb{E}}[f^i_{\sigma_q(k)}] = f^i_q$. Therefore, the gradient of $f^i_{\sigma_q(k)}$ is likewise an estimate of the gradient of $f^i_q$. One can think of $f^i_q$ as an artificial objective function for which we have access to gradient estimates, each estimate being one gradient evaluation of a function within the block. As a result, conducting $K$ gradient updates of $f^i_q$ amounts to executing one gradient update for each of the $K$ functions. Using this approach, introduced in \cite{Zhang:2019}, we effectively reduce the number of gradient evaluations to one per arriving function $f^i_t$.
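The unbiasedness can be illustrated numerically (our toy example, not from the paper): at a fixed point, averaging the gradient of $f_{\sigma_q(k)}$ over all permutations $\sigma_q$ recovers the gradient of the block-average function.

```python
# Toy check: for a uniformly random permutation, position k is equally
# likely to hold any of the K functions, so E_sigma[grad f_{sigma(k)}(x)]
# equals the gradient of the block-average function at x.
import itertools
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 3
G = rng.standard_normal((K, d))    # G[j] = gradient of f_j at a fixed x

k = 1                              # any fixed position within the block
perms = list(itertools.permutations(range(K)))
est = np.mean([G[p[k]] for p in perms], axis=0)   # average over sigma
avg = G.mean(axis=0)               # gradient of (1/K) sum_j f_j at x
print(bool(np.allclose(est, avg))) # True
```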
Since we handle both convex and submodular objectives, the algorithm requires modifications to adapt to each kind of optimization problem. The online optimization oracle's objective function should be minimized for convex optimization and maximized for submodular optimization. The decision update for convex problems is a convex combination of the aggregated neighbors' decisions $\vect{y}^i_{q,k}$ and the oracle's output $\vect{v}^i_{q,k}$, i.e.,
\begin{align}
\label{eq:covex-update}
\vect{x}^i_{q,k+1} = (1-\eta_k)\vect{y}^i_{q,k} + \eta_k \vect{v}^i_{q,k}, \quad \eta_k \in \bracket{0,1}
\end{align}
whereas the update for the submodular optimization problem is achieved by shifting the aggregated decisions towards the direction of the oracle's output by a step-size $\eta_k$, i.e.,
\begin{align}
\label{eq:submodular-update}
\vect{x}^i_{q,k+1} = \vect{y}^i_{q,k} + \eta_k \vect{v}^i_{q,k}, \quad \eta_k \in \bracket{0,1}
\end{align}
For convex functions, the initialization $\vect{x}^i_{q,1}$ can be any random point inside the constraint set; for submodular functions, however, it should be set to $0$.
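A minimal sketch of the two update rules (not the authors' code; the function names are ours):

```python
# One Frank-Wolfe decision update: y is the weighted aggregate of the
# neighbors' decisions, v the linear oracle's output, eta the step size.
import numpy as np

def convex_update(y, v, eta):
    # convex combination: the iterate stays inside the constraint set
    return (1.0 - eta) * y + eta * v

def submodular_update(y, v, eta):
    # shift towards the oracle direction; with x_{q,1} = 0 and eta = 1/K,
    # the iterate after K steps remains feasible
    return y + eta * v

y = np.array([0.2, 0.4])
v = np.array([1.0, 0.0])
print(convex_update(y, v, 0.5), submodular_update(y, v, 0.1))
```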
The formal description is given in Algorithm~\ref{algo:online-dist-FW}. The proof of the following lemmas and theorems can be found in \Cref{chap:analysis}.
\begin{algorithm}[ht!]
\begin{flushleft}
\textbf{Input}: A convex set $\mathcal{K}$,
a time horizon $T$, a block size $K$, online linear optimization oracles $\mathcal{O}_{i,1}, \ldots, \mathcal{O}_{i,K}$ for each agent $1 \leq i \leq n$,
step sizes $\eta_k \in (0, 1)$ for all $1 \leq k \leq K$, number of blocks $Q=T/K$
\end{flushleft}
\begin{algorithmic}[1]
\STATE Initialize linear optimizing oracle $\mathcal{O}^i_k$ for all $1 \leq k \leq K$
\FOR {$q = 1$ to $Q$}
\FOR{every agent $1 \leq i \leq n$} %
\STATE Initialize $\vect{x}^i_{q,1}$ and set $\widetilde{\vect{a}}^i_{q,0} \gets 0$
\FOR{$1 \leq k \leq K$}
\STATE Let $\vect{v}^{i}_{q,k}$ be the output of oracle $\mathcal{O}^i_{k}$ at phase $q$. \label{online-oracle}
\STATE Send $\vect{x}^{i}_{q,k}$ to all neighbours $N(i)$
\STATE \label{alg:y}
Once receiving $\vect{x}^{j}_{q,k}$ from all neighbours $j \in N(i)$,
set $\vect{y}^{i}_{q,k} \gets \sum_{j} W_{ij} \vect{x}^{j}_{q,k}$.
\STATE \label{alg:x} Update $\vect{x}^i_{q,k+1}$ as (\ref{eq:covex-update}) or (\ref{eq:submodular-update}). \label{update-x}
\ENDFOR
%
\STATE Choose $\vect{x}^{i}_{q} \gets \vect{x}^{i}_{q,K+1}$ and agent $i$ plays the same $\vect{x}^i_{q}$ for every time $t$ in phase $q$.
\STATE Let $\sigma_q$ be a random permutation of $1, \ldots, K$ --- times in phase $q$.
\FOR{$1 \leq k \leq K$}
\STATE Let $s = \sigma_q^{-1}(k)$
\STATE Query the values of
$\widetilde{\nabla} f^{i}_{k}(\vect{x}^i_{q,s})$
\ENDFOR
\STATE Set $\widetilde{\vect{g}}^{i}_{q,1} \gets \widetilde{\nabla} f^{i}_{\sigma_q (1)}(\vect{x}^{i}_{q,1})$
\FOR{$1 \leq k \leq K$}
\STATE Send $\widetilde{\vect{g}}^{i}_{q,k}$ to all neighbours $N(i)$.
\STATE After receiving $\widetilde{\vect{g}}^{j}_{q,k}$ from all neighbours $j \in N(i)$, compute
$\widetilde{\vect{d}}^{i}_{q,k} \gets \sum_{j \in N(i)} W_{ij} \widetilde{\vect{g}}^{j}_{q,k}$
and
$\widetilde{\vect{g}}^{i}_{q,k + 1} \gets \bigl( \widetilde{\nabla} f^{i}_{\sigma_q (k+1)}(\vect{x}^i_{q,k+1})
- \widetilde{\nabla} f^{i}_{\sigma_q(k)}(\vect{x}^{i}_{q,k}) \bigr) + \widetilde{\vect{d}}^{i}_{q,k}$ \label{alg:d-update}
\STATE $\widetilde{\vect{a}}^i_{q,k} \gets (1 - \rho_k) \cdot \widetilde{\vect{a}}^i_{q, k-1} + \rho_k \cdot \widetilde{\vect{d}}^{i}_{q, k}$.
\STATE Feedback function $\langle \widetilde{\vect{a}}^{i}_{q,k}, \cdot \rangle$
to oracles $\mathcal{O}^i_k$. (The cost of the oracle $\mathcal{O}^i_k$ at block $q$ is
$\langle \widetilde{\vect{a}}^{i}_{q,k}, \vect{v}^{i}_{q,k} \rangle$.)
\ENDFOR
%
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{Monode Frank-Wolfe}
\label{algo:online-dist-FW}
\end{algorithm}
\begin{lemma}
\label[lemma]{lmm:bound_d}
Let $V_d = 2nG \parenthese{\frac{\lambda_2}{1-\lambda_2}+1}$. The local gradient is uniformly upper-bounded, i.e., for all $i \in \bracket{n}$ and $k \in \bracket{K}$, $\norm{\vect{d}^i_{q,k}} \leq V_d$.
\end{lemma}
\begin{lemma}
\label[lemma]{lmm:stoch_variance}
Under \Cref{assum:stoch_grad}, let $\sigma_1^2 = 4n \bracket{\parenthese{\frac{G+G_0}{\frac{1}{\lambda_2}-1}}^2 + 2\sigma_0^2}$. For $i \in \bracket{n}$ and $k \in \bracket{K}$, the variance of the local stochastic gradient is uniformly bounded, i.e.,
$ \ensuremath{\mathbb{E}} \bracket{ \norm{
\vect{d}^i_{q,k} - \widetilde{\vect{d}}^i_{q,k}
}^2} \leq \sigma_1^2$
\end{lemma}
\begin{theorem}
\label[theorem]{thm:convex}
Given a convex set $\mathcal{K}$ and assume that $F_t$ is a convex function.
Setting $Q=T^{2/5}, K=T^{3/5}, T=QK$ and step-size $\eta_k = \frac{1}{k}$. Let $\rho_k = \frac{2}{\parenthese{k+3}^{2/3}}$ and $\rho_k = \frac{1.5}{\parenthese{K-k+2}^{2/3}}$ when $k \in \bracket{1,\frac{K}{2}}$ and $k \in \bracket{\frac{K}{2} +1,K}$ respectively. Then, the expected regret of \Cref{algo:online-dist-FW} is at most
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T}
\leq \parenthese{GD + 2\beta D^2}T^{2/5}
+ \parenthese{C + 6D\parenthese{N + \sqrt{M}}}T^{4/5}
+ \frac{3}{5}\beta D^2 T^{2/5}\log (T)
\end{align}
where $N = k_0 \cdot nG\max \{\lambda_2 \parenthese{1 + \frac{2}{1-\lambda_2}}, 2\}$
and $M = \max\{M_1, M_2\}$ where
$ M_0 = 4 \parenthese{V^2_{\vect{d}} + \sigma_1^2} + 128 V^2_{\vect{d}}$,
$ M_1 = \max \curlybracket{5^{2/3} \parenthese{V_{\vect{d}} + \frac{2}{4^{2/3}} G_0}^2 , M_0}$
and
$ M_2 = 2.55\parenthese{V^2_{\vect{d}} + \sigma^2_1} + \dfrac{28 V^2_{\vect{d}}}{3}$
\end{theorem}
\begin{theorem}
\label[theorem]{thm:submod}
Given a convex set $\mathcal{K}$, assume that the function $F_t$ is monotone continuous DR-submodular. Set $Q=T^{2/5}$, $K=T^{3/5}$, $T=QK$ and step-size $\eta_k = \frac{1}{K}$. Let $\rho_k = \frac{2}{\parenthese{k+3}^{2/3}}$ when $1 \leq k \leq \frac{K}{2}$ and $\rho_k = \frac{1.5}{\parenthese{K-k+2}^{2/3}}$ when $\frac{K}{2}+1 \leq k \leq K$. Then, the expected $\parenthese{1-\frac{1}{e}}$-regret is at most
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T} \leq \frac{3}{2}\beta D^2 T^{2/5} + \parenthese{C + 3D\parenthese{N+\sqrt{M}}}T^{4/5}
\end{align}
where the constants are defined in \Cref{thm:convex}.
\end{theorem}
As stated in the preceding paragraph, the distinction between convex and submodular optimization can be found in line \ref{update-x} of \Cref{algo:online-dist-FW} and in the oracle optimization subroutine. To achieve the regret bound mentioned in \Cref{thm:convex,thm:submod}, we use follow-the-perturbed-leader as the oracle with regret $\mathcal{R}_T = C\sqrt{T}$. In the case of convex optimization, one may use online gradient descent to obtain the same outcome, but this method is more computationally intensive because it involves a projection step onto the constraint set.
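As an illustration, a follow-the-perturbed-leader oracle for linear costs can be sketched as follows (our simplified version over a finite vertex set; the class name and perturbation scheme are our own choices, not the paper's):

```python
# Follow-the-perturbed-leader for online linear optimization: play the
# vertex minimizing the perturbed cumulative linear cost.
import numpy as np

class FTPL:
    def __init__(self, vertices, eta, seed=0):
        self.V = np.asarray(vertices, dtype=float)  # vertex set of K
        self.eta = eta                              # perturbation scale
        self.cum = np.zeros(self.V.shape[1])        # cumulative cost vector
        self.rng = np.random.default_rng(seed)

    def play(self):
        noise = self.rng.uniform(0.0, self.eta, size=self.cum.shape)
        return self.V[np.argmin(self.V @ (self.cum - noise))]

    def feedback(self, a):
        # receive the linear cost function <a, .>
        self.cum += a

oracle = FTPL(vertices=np.eye(3), eta=10.0)
x = oracle.play()
oracle.feedback(np.array([1.0, 0.0, 0.5]))
print(x)
```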
\subsection{Bandit}
\label{chap:bandit_analysis}
Let $f^{\delta}_{t} (\vect{x}) = \ensuremath{\mathbb{E}}_{\vect{v} \in \mathbb{B}^d} \bracket{f_t \parenthese{\vect{x} + \delta \vect{v}}}$ and recall its gradient $\nabla f^{\delta}_t (\vect{x}) = \ensuremath{\mathbb{E}}_{\vect{u} \in \mathbb{S}^{d-1}} \bracket{\frac{d}{\delta}f_t \parenthese{\vect{x} + \delta \vect{u}}\vect{u}}$. We define the average function
\begin{align}
\label{def:average_error_Fx}
\Bar{F}_{q,k}^{\delta} (\vect{x}) = \frac{1}{L-k} \sum_{\ell=k+1}^L F_{\sigma_q (\ell)}^{\delta} (\vect{x})
= \frac{1}{L-k} \sum_{\ell=k+1}^L \frac{1}{n} \sum_{i=1}^n f^{i,\delta}_{\sigma_q (\ell)} (\vect{x})
\end{align}
and the average of the remaining \emph{$(L-k)$} functions of $f^{i,\delta}_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell}$) over $n$ agents as
\begin{align}
\label{def:average_error_F}
\hat{F}^{\delta}_{q,k} = \frac{1}{n}\sum_{i=1}^n \hat{f}^{i,\delta}_{q,k} = \frac{1}{L-k} \sum_{\ell=k+1}^L \frac{1}{n}\sum_{i=1}^n f^{i,\delta}_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})
\end{align}
where $F_{\sigma_q (\ell)}^{\delta} (\vect{x}) = \frac{1}{n} \sum_{i=1}^n f^{i,\delta}_{\sigma_q (\ell)} (\vect{x}) $ and $\hat{f}^{i,\delta}_{q,k} = \frac{1}{L-k} \sum_{\ell=k+1}^L f^{i,\delta}_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})$.
The one-point gradients $\nabla \Bar{F}^{\delta}_{q,k}$ and $\nabla \hat{F}^{\delta}_{q,k}$ then follow naturally from the above definitions.
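A small numerical sketch of the one-point estimator (our illustration; the quadratic test function and sample counts are arbitrary choices):

```python
# One-point gradient estimator: (d/delta) f(x + delta u) u with u uniform
# on the unit sphere is an unbiased estimate of grad f^delta(x); for a
# quadratic f it is unbiased for grad f(x) itself.
import numpy as np

def one_point_grad(f, x, delta, rng):
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                  # uniform direction on S^{d-1}
    return (x.size / delta) * f(x + delta * u) * u

rng = np.random.default_rng(0)
f = lambda z: float(z @ z)                  # f(x) = ||x||^2, grad = 2x
x = np.array([1.0, -0.5, 0.25])
g = np.mean([one_point_grad(f, x, 0.25, rng) for _ in range(200000)], axis=0)
print(np.round(g, 1))                       # approximately 2x = [2, -1, 0.5]
```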
Let $\mathcal{H}_{q,1} \subset \dots \subset \mathcal{H}_{q,k}$ be the $\sigma$-fields generated by the randomness of the stochastic gradient estimate up to time $k$.
\begin{align}
\label{def:one-shot-exp}
\vect{g}^{i, \delta}_{q,k} = \ensuremath{\mathbb{E}} \bracket{\widetilde{\vect{g}}_{q,k}^i \vert \mathcal{H}_{q,k-1}},
\quad \vect{d}^{i, \delta}_{q,k} = \ensuremath{\mathbb{E}} \bracket{\widetilde{\vect{d}}_{q,k}^i \vert \mathcal{H}_{q,k-1}},
\quad \nabla f^{i,\delta}_{\sigma_q (k)} (\vect{x}^i_{q,k}) = \ensuremath{\mathbb{E}} \bracket{\widetilde{\vect{h}}^i_{q,k}}
\end{align}
and
\begin{align}
\label{def:one-shot-exp-sigma}
\hat{\vect{g}}^{i, \delta}_{q,k} = \frac{1}{L-k}\sum_{\ell=k+1}^L \vect{g}^{i, \delta}_{q,\ell},
\quad \hat{\vect{d}}^{i, \delta}_{q,k} = \frac{1}{L-k}\sum_{\ell=k+1}^L \vect{d}^{i, \delta}_{q,\ell},
\end{align}
\setcounter{theorem}{13}
\begin{lemma}
\label[lemma]{lmm:bound_d_bandit}
For $i \in \bracket{n}$ and $k \in \bracket{K}$, let $V^{\delta}_d = 2n\frac{d}{\delta}B \parenthese{\frac{\lambda_2}{1-\lambda_2}+1}$. The local gradient is upper-bounded, i.e., $\norm{\vect{d}^{i,\delta}_{q,k}} \leq V^{\delta}_d$
\end{lemma}
\setcounter{theorem}{14}
\begin{lemma}
\label[lemma]{lmm:stoch_variance_bandit}
Under \Cref{assum:max_bound_f}, the variance of the local gradient estimate is uniformly bounded, i.e.,
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{d}^{i,\delta}_{q,k} - \widetilde{\vect{d}}^{i, \delta}_{q,k}}^2 } \leq 4n\parenthese{\frac{d}{\delta}B}^2 \bracket{\frac{1}{\parenthese{\frac{1}{\lambda_2}-1}^2} + 2}
\end{align}
\end{lemma}
\begin{proof}
By \Cref{assum:max_bound_f}, we have
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{\norm{\nabla f^{cat}_{\sigma_q(\tau)}- \widetilde{\vect{h}}^{cat}_{q,\tau} }^2 }
= \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{\nabla f^{i,\delta}_{\sigma_q(\tau)} \parenthese{\vect{x}^i_{q,\tau}} - \widetilde{\vect{h}}^i_{q,\tau}}^2}
\leq n\parenthese{\frac{d}{\delta}B}^2
\end{align}
Following the same analysis in \cref{eq:bound_stoch_d_proof}, we have
\begin{align}
&\ensuremath{\mathbb{E}} \bracket{\norm{
\vect{d}^{cat}_{q,k} - \widetilde{\vect{d}}^{cat}_{q,k}}^2} \nonumber \\
\leq & \ensuremath{\mathbb{E}} \bracket{\parenthese{\sum_{\tau = 1}^{k-1}
\norm{\mathbf{W}^{k-\tau} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}
\norm{
\nabla f^{cat}_{\sigma_q(\tau+1)}
- \widetilde{\vect{h}}^{cat}_{q,\tau+1}
+ \widetilde{\vect{h}}^{cat}_{q,\tau}
- \nabla f^{cat}_{\sigma_q(\tau)}
}}^2} \nonumber \\
& \quad
+ 4\parenthese{\ensuremath{\mathbb{E}} \bracket{\norm{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}^2
\norm{
\nabla f^{cat}_{\sigma_q(1)}
- \widetilde{\vect{h}}^{cat}_{q,1}
}^2}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}^2
\norm{
\nabla f^{cat}_{\sigma_q(k)}
- \widetilde{\vect{h}}^{cat}_{q,k}
}^2}
} \nonumber \\
\leq & 4n\parenthese{\frac{d}{\delta}B}^2 \parenthese{\sum_{\tau = 1}^{k-1} \lambda_2^{k-\tau} }^2
+ 4n\parenthese{\frac{d}{\delta}B}^2 \parenthese{\lambda_2^{2k} + 1} \nonumber \\
\leq & 4n\parenthese{\frac{d}{\delta}B}^2 \parenthese{\frac{\lambda_2}{1-\lambda_2}}^2 + 4n\parenthese{\frac{d}{\delta}B}^2(\lambda_2 + 1)
\leq 4n\parenthese{\frac{d}{\delta}B}^2 \bracket{\frac{1}{\parenthese{\frac{1}{\lambda_2}-1}^2} + 2}
\end{align}
The lemma follows by remarking that $\ensuremath{\mathbb{E}} \bracket{\norm{\vect{d}^{i,\delta}_{q,k} - \widetilde{\vect{d}}^{i}_{q,k}}^2} \leq \ensuremath{\mathbb{E}} \bracket{\norm{
\vect{d}^{cat}_{q,k} - \widetilde{\vect{d}}^{cat}_{q,k}}^2}$
\end{proof}
Let $\vect{x}^* = \argmax_{\vect{x} \in \mathcal{K}} \sum_{t=1}^T f_{t} (\vect{x})$ and $\vect{x}^*_{\delta} = \argmax_{\vect{x} \in \mathcal{K}'} \sum_{t=1}^T f_t (\vect{x})$.
Let $\vect{z}^i_{q,k} = \vect{x}^i_{q,k} + \delta \vect{u}^i_{q,k}$, we define $\overline{\vect{z}}_{q,k} = \frac{1}{n}\sum_{i=1}^n \vect{z}^i_{q,k}$ for $1 \leq k \leq K$.
\begin{lemma}
\label[lemma]{lmm:bound_local_global_d_bandit}
Under \Cref{assum:assum_1} and \Cref{assum:max_bound_f}, let $N = k_0\cdot n B \frac{d}{\delta}\max \curlybracket{\lambda_2\parenthese{1 + \frac{2}{1-\lambda_2}}, 2}$. Then, for $k \in \bracket{K}$, we have
\begin{align}
\max_{i \in \bracket{1,n}} \ensuremath{\mathbb{E}} \bracket{ \norm{
\hat{\vect{d}}^{i,\delta}_{q,k} - \nabla \hat{F}^{\delta}_{q,k}}
} \leq \frac{N}{k}
\end{align}
\end{lemma}
\begin{proof}
The proof is essentially based on that of \Cref{lmm:bound_d_avg_consensus}. Note that we keep the same notation, with a superscript $\delta$ to indicate the smoothed version of $f$ and related variables. By definition of the one-point gradient estimator and \Cref{assum:max_bound_f}, \cref{eq:diff_k} becomes
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat, \delta}_{q,k}}^2}
&= \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{\delta ^{i,\delta}_{q,k}}^2}
= \sum_{i=1}^n \ensuremath{\mathbb{E}}
\bracket{
\norm{
\nabla \hat{f}^{i,\delta}_{q,k} - \nabla \hat{f}^{i,\delta}_{q,k-1}
}^2
} \nonumber\\
& \quad = \sum_{i=1}^n \ensuremath{\mathbb{E}}
\bracket{
\ensuremath{\mathbb{E}}
\bracket{
\norm{
\nabla \hat{f}^{i,\delta}_{q,k} - \nabla \hat{f}^{i,\delta}_{q,k-1}
}^2
\bigm\vert \mathcal{F}_{q,k-1}
}
} \nonumber\\
& \quad = \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\ensuremath{\mathbb{E}}
\bracket{
\norm{
\frac{\sum_{\ell=k+1}^L \nabla f^{i,\delta}_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})}{L-k}
- \frac{\sum_{\ell=k}^L \nabla f^{i,\delta}_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})}{L-k+1}
}^2
\bigm\vert \mathcal{F}_{q,k-1}
}
} \nonumber \\
& \quad = \sum_{i=1}^n\ensuremath{\mathbb{E}} \bracket{
\ensuremath{\mathbb{E}} \bracket{ \norm{
\frac{\sum_{\ell=k+1}^L \nabla f^{i,\delta}_{\sigma_q (\ell)} (\vect{x}^{i}_{q,\ell})}{(L-k)(L-k+1)}
- \frac{\nabla f^{i,\delta}_{\sigma_q (k)} (\vect{x}^i_{q,k})}{{L-k+1}}}^2
\bigm\vert \mathcal{F}_{q,k-1} }
}
\nonumber\\
& \quad \leq
n \left(
\frac{2B\frac{d}{\delta}}{L-k+1}
\right)^2 \label{eq:diff_k_bandit}
\end{align}
By Jensen's inequality, we deduce that
\begin{align}
\label{eq:bound_delta_cat_bandit}
\ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat,\delta}_{q,k}}}
\leq \sqrt{\ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat,\delta}_{q,k}}^2}}
\leq \frac{2\sqrt{n}B\frac{d}{\delta}}{L-k+1}
\end{align}
When $k=1$, following the same derivation in \cref{eq:bound_cat_k1}, we have
\begin{align}
\ensuremath{\mathbb{E}}& \bracket{\norm{\vect{\hat{d}}^{cat,\delta}_{q,1} - \nabla \hat{F}^{cat,\delta}_{q,1}}^2}
\leq \lambda_2^2 \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{
\vect{\hat{g}}^{i, \delta}_{q,1} - \nabla \hat{F}^{\delta}_{q,1}}^2
} \nonumber
\leq n\lambda_2^2 \frac{d^2}{\delta^2}B^2
\end{align}
Let $k \in \bracket{2,k_0}$, from \cref{eq:recurrence_d_F} and \cref{eq:bound_delta_cat_bandit}
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat, \delta}_{q,k} - \nabla \hat{F}^{cat, \delta}_{q,k}}}
& \leq \lambda_2 \parenthese{
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat,\delta}_{q,k-1} - \nabla \hat{F}^{cat,\delta}_{q,k-1}}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat,\delta}_{q,k}}}
} \nonumber \\
& \leq \lambda_2^{k-1}\sqrt{n}\frac{d}{\delta}B + 2 \sum_{\tau=1}^{k} \lambda_2^{\tau} \sqrt{n}\frac{d}{\delta}B \nonumber\\
& \leq \lambda_2\sqrt{n}\frac{d}{\delta}B + 2\frac{\lambda_2}{1-\lambda_2}\sqrt{n}\frac{d}{\delta}B \nonumber \\
&= \lambda_2 \sqrt{n}\frac{d}{\delta}B \parenthese{1 + \frac{2}{1-\lambda_2}}
\end{align}
Let $N_0 = k_0\cdot\sqrt{n}\max \curlybracket{\lambda_2 B\frac{d}{\delta} \parenthese{1 + \frac{2}{1-\lambda_2}}, 2B\frac{d}{\delta}}$. We claim that $\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat, \delta}_{q,k} - \nabla \hat{F}^{cat, \delta}_{q,k}}} \leq \frac{N_0}{k}$ when $k \in \bracket{k_0, K}$. Let $L \geq 2K$; then $\frac{1}{L-k+1} \leq \frac{1}{2K-k+1} \leq \frac{1}{K+1} \leq \frac{1}{k+1}$. Thus, using the induction hypothesis, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat, \delta}_{q,k} - \nabla \hat{F}^{cat, \delta}_{q,k}}}
& \leq \lambda_2 \parenthese{
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat,\delta}_{q,k-1} - \nabla \hat{F}^{cat,\delta}_{q,k-1}}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat,\delta}_{q,k}}}
} \nonumber \\
& \leq \lambda_2 \parenthese{\frac{N_0}{k-1} + \frac{2\sqrt{n}B\frac{d}{\delta}}{L-k+1}} \nonumber\\
& \leq \lambda_2 \parenthese{\frac{N_0}{k-1} + \frac{2\sqrt{n}B\frac{d}{\delta}}{k+1}} \nonumber \\
& \leq \lambda_2 \left(
N_0\frac{k_0 + 1}{k_0 (k-1)}
\right) \nonumber \\
& \leq \frac{N_0}{k} \label{eq:boundksmall_bandit}
\end{align}
Combining the inequality in \cref{eq:bound_sum_sqrt} with the above result proves the lemma.
\end{proof}
\begin{lemma}[Lemma 10, Lemma 11 \cite{Zhang:2019}]
\label[lemma]{lmm:lmm:bound_d_var_red_bandit}
Under the assumptions of \Cref{lmm:bound_d_bandit} and \cref{lmm:stoch_variance_bandit}, and setting $\rho_k = \frac{2}{\parenthese{k+2}^{2/3}}$, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{
\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}}
\leq \frac{\sqrt{M_0}}{\parenthese{k+3}^{1/3}}, \qquad k \in \bracket{K}
\end{align}
where $M_0 = 4^{2/3}\frac{d^2}{\delta^2}B^2 \bracket{24n^2 \parenthese{\frac{1}{\frac{1}{\lambda_2}-1} + 1}^2 + 8n\parenthese{\frac{1}{\parenthese{\frac{1}{\lambda_2}-1}^2}+2}}$
\end{lemma}
\begin{proof}
The proof follows the same idea as Lemma 10 and Lemma 11 of \cite{Zhang:2019} with some changes in the constant values; we give the details below. Following the same decomposition as in the proof of \Cref{lmm:bound_d_var_red}, we have
\begin{align*}
\ensuremath{\mathbb{E}} &\bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
= \ensuremath{\mathbb{E}} \bracket{\norm{
\hat{\vect{d}}^{i,\delta}_{q,k-1}
- (1-\rho_k) \widetilde{\vect{a}}^i_{q,k-1}
- \rho_k \widetilde{\vect{d}}^{i,\delta}_{q,k}
}^2
}\\
& = \rho_k^2 \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1}-\widetilde{\vect{d}}^{i,\delta}_{q,k}}^2}
+ (1-\rho_k)^2 \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1} - \hat{\vect{d}}^{i,\delta}_{q,k-2}}^2} \\
& \quad + (1-\rho_k)^2
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1} }^2} \\
& \quad + 2\rho_k (1-\rho_k)
\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{d}}^{i,\delta}_{q,k}, \hat{\vect{d}}^{i,\delta}_{q,k-1} - \hat{\vect{d}}^{i,\delta}_{q,k-2}
}} \\
& \quad + 2\rho_k (1-\rho_k)
\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{d}}^{i,\delta}_{q,k},
\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}} \\
& \quad + 2(1-\rho_k)^2
\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \hat{\vect{d}}^{i,\delta}_{q,k-2},
\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}}
\end{align*}
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{
\norm{
\hat{\vect{d}}^{i,\delta}_{q,k-1}-\widetilde{\vect{d}}^{i,\delta}_{q,k}
}^2
}
= \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}} \bracket{
\norm{
\hat{\vect{d}}^{i,\delta}_{q,k-1}-\widetilde{\vect{d}}^{i,\delta}_{q,k}
}^2 \bigm\vert \mathcal{F}_{q,k-1}
}} \nonumber\\
& \leq \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}} \bracket{
\norm{
\hat{\vect{d}}^{i,\delta}_{q,k-1}
- \vect{d}^{i,\delta}_{q,k}}^2
+ \norm{
\vect{d}^{i,\delta}_{q,k}
-\widetilde{\vect{d}}^{i,\delta}_{q,k}}^2
+ 2 \langle
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \vect{d}^{i,\delta}_{q,k},
\vect{d}^{i,\delta}_{q,k} -\widetilde{\vect{d}}^{i,\delta}_{q,k}
\rangle
\bigm\vert \mathcal{F}_{q,k-1}
}} \label{eq:bound_d0_bandit}
\end{align}
By the definition in \cref{def:one-shot-exp-sigma}, we have $\ensuremath{\mathbb{E}}\bracket{\vect{d}^{i,\delta}_{q,k} \vert \mathcal{F}_{q,k-1}} = \hat{\vect{d}}^{i,\delta}_{q,k-1}$. Using \Cref{lmm:bound_d_bandit}, we obtain
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}} \bracket{
\norm{
\hat{\vect{d}}^{i,\delta}_{q,k-1}
- \vect{d}^{i,\delta}_{q,k}}^2
\bigm\vert \mathcal{F}_{q,k-1}
}}
& \leq (V^{\delta}_{\vect{d}})^2
\end{align}
Invoking \Cref{lmm:stoch_variance_bandit}, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{ \norm{
\vect{d}^{i,\delta}_{q,k} -\widetilde{\vect{d}}^{i,\delta}_{q,k}
}^2
}
\leq \sigma^2_2 \label{eq:bound_d2_bandit}
\end{align}
and
\begin{align}
&\ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}} \bracket{
\scalarproduct{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \vect{d}^{i,\delta}_{q,k},
\vect{d}^{i,\delta}_{q,k} -\widetilde{\vect{d}}^{i,\delta}_{q,k}
}
\bigm\vert \mathcal{F}_{q,k-1}
}} = 0 \nonumber
\end{align}
by following the same analysis in \cref{eq:bound_d3}. We now claim that \cref{eq:bound_d0_bandit} is bounded above by
\begin{align}
\label{eq:bound_d_hat_tilde_bandit}
\ensuremath{\mathbb{E}} &\bracket{
\norm{
\hat{\vect{d}}^{i,\delta}_{q,k-1}-\widetilde{\vect{d}}^{i,\delta}_{q,k}
}^2
} \leq (V^{\delta}_{\vect{d}})^2 + \sigma^2_2 \triangleq V^{\delta}
\end{align}
Moreover, adapting the arguments of \cref{eq:bound_d_hat_diff,eq:bound_d4,eq:bound_d5,eq:bound_d6}, we have
\begin{align}
\label{eq:bound_d_hat_diff_bandit}
\ensuremath{\mathbb{E}} \bracket{\norm{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \hat{\vect{d}}^{i,\delta}_{q,k-2}
}^2}
\leq \frac{4(V^{\delta}_{\vect{d}})^2}{\left(L-k+2\right)^2}
\triangleq \frac{L^{\delta}}{\left(L-k+2\right)^2}
\end{align}
\begin{align}
\label{eq:bound_d4_bandit}
&\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{d}}^{i,\delta}_{q,k}, \hat{\vect{d}}^{i,\delta}_{q,k-1} - \hat{\vect{d}}^{i,\delta}_{q,k-2}
}} = 0
\end{align}
\begin{align}
\label{eq:bound_d5_bandit}
&\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{d}}^{i,\delta}_{q,k},
\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}}
= 0
\end{align}
and
\begin{align}
\label{eq:bound_d6_bandit}
\ensuremath{\mathbb{E}} &\bracket{\scalarproduct{
\hat{\vect{d}}^{i,\delta}_{q,k-1} - \hat{\vect{d}}^{i,\delta}_{q,k-2},
\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}} \leq \frac{L^{\delta}}{2\alpha_k (L-k+2)^2}
+ \frac{\alpha_k}{2} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\end{align}
by using Young's inequality. Setting $\alpha_k = \frac{\rho_k}{2}$ similarly to \Cref{lmm:bound_d_var_red}, we have
\begin{align}
\label{eq:recurrent_psi_bandit}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
& \leq \rho_k^2 V^{\delta}
+ \parenthese{1+\frac{2}{\rho_k}} \frac{L^{\delta}}{\parenthese{L-k+2}^2}
+ \parenthese{1-\rho_k} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\end{align}
Setting $L \geq 2K$ and $\rho_k = \frac{2}{(k+2)^{2/3}}$, we then have $\frac{1}{L-k+2} \leq \frac{1}{2K-k+2} \leq \frac{1}{K+2} \leq \frac{1}{k+2}$. Following the derivation from Lemma 11 of \cite{Zhang:2019}, \cref{eq:recurrent_psi_bandit} can be bounded above by
\begin{align}
\label{eq:recurrent_psi_bandit2}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
& \leq \rho_k^2 V^{\delta}
+ \parenthese{1+\frac{2}{\rho_k}} \frac{L^{\delta}}{\parenthese{k+2}^2}
+ \parenthese{1-\rho_k}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \leq \frac{4^{2/3}\parenthese{2V^{\delta} + L^{\delta}}}{(k+2)^{4/3}}
+ \parenthese{1-\frac{2}{(k+2)^{2/3}}}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \triangleq \frac{M_0}{(k+2)^{4/3}} + \parenthese{1-\frac{2}{(k+2)^{2/3}}}\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\end{align}
We now show by induction that $\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2} \leq \frac{M_0}{(k+3)^{2/3}}$ for all $k \in \bracket{K}$. When $k=1$, by the definitions of $\widetilde{\vect{a}}^{i}_{q,1}$ and $\hat{\vect{d}}^{i,\delta}_{q,0}$, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,0} - \widetilde{\vect{a}}^i_{q,1}}^2}
\leq \parenthese{
V^{\delta}_{\vect{d}} + \frac{2}{3^{2/3}} \frac{d}{\delta}B
}^2
\end{align}
Thus, since $\sigma_2 \geq \frac{2}{3^{2/3}} \frac{d}{\delta}B$, one can observe that
\begin{align}
\frac{M_0}{(1+3)^{2/3}} = 2V^{\delta} + L^{\delta} \geq 2V^{\delta} = 2\parenthese{(V^{\delta}_{\vect{d}})^2 + \sigma^2_2} \geq \parenthese{V^{\delta}_{\vect{d}} + \sigma_2}^2 \geq \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,0} - \widetilde{\vect{a}}^i_{q,1}}^2}
\end{align}
Suppose that the induction hypothesis holds for $k-1$; it can then easily be verified for $k$, since
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
& \leq \frac{M_0}{(k+2)^{4/3}} + \parenthese{1-\frac{2}{(k+2)^{2/3}}}\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^{i,\delta}_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \leq \frac{M_0}{(k+2)^{4/3}} + \parenthese{1-\frac{2}{(k+2)^{2/3}}}
\frac{M_0}{(k+2)^{2/3}} \nonumber \\
& \leq M_0\frac{(k+2)^{2/3} - 1}{(k+2)^{4/3}} \nonumber \\
& \leq \frac{M_0}{(k+3)^{2/3}}
\end{align}
\end{proof}
\begin{claim}
\label{clm:bandit_regret1}
Under the assumptions of \Cref{clm:f_hat_f_bar}, \Cref{lmm:bound_local_global_d_bandit} and \Cref{lmm:lmm:bound_d_var_red_bandit}, we have
\begin{align}
\sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \Bar{F}^{\delta}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}}} \leq \beta D + \frac{3}{2}\parenthese{N + \sqrt{M_0}}K^{2/3}
\end{align}
\end{claim}
\begin{claimproof}
\begin{align}
\label{eq:bandit_bound_with_K}
\sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \Bar{F}^{\delta}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \vect{\Tilde{a}}_{q,k}^i}} \nonumber
& \leq \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{ \norm{\nabla \Bar{F}^{\delta}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \nabla \hat{F}^{\delta}_{q,k-1}}}
+ \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \hat{F}^{\delta}_{q,k-1} - \vect{\widetilde{a}}_{q,k}^i }} \nonumber \\
& \leq \beta D
+ \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \hat{F}^{\delta}_{q,k-1} - \hat{\vect{d}}^{i,\delta}_{q,k-1}}}
+ \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{ \hat{\vect{d}}^{i,\delta}_{q,k-1} - \vect{\widetilde{a}}_{q,k}^i}} \nonumber \\
& \leq \beta D
+ \sum_{k=1}^K \frac{N}{k}
+ \sum_{k=1}^K \frac{\sqrt{M_0}}{\parenthese{k+3}^{1/3}} \nonumber \\
& \leq \beta D + \parenthese{N + \sqrt{M_0}}\sum_{k=1}^K \frac{1}{(k+3)^{1/3}} \nonumber \\
& \leq \beta D + \frac{3}{2}\parenthese{N + \sqrt{M_0}}K^{2/3}
\end{align}
where \Cref{clm:f_hat_f_bar} remains valid in the second inequality since $f^{i,\delta}_{\sigma_q(\ell)}$ is $\beta$-smooth, and the third inequality follows from \Cref{lmm:bound_local_global_d_bandit} and \Cref{lmm:lmm:bound_d_var_red_bandit}.
\end{claimproof}
\begin{claim}
\label{clm:oneshot_regret}
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\sum_{q=1}^Q \sum_{\ell=1}^L \parenthese{1-\frac{1}{e}}F^{\delta}_{\sigma_q(\ell)}(\vect{x}^*_{\delta}) - F^{\delta}_{\sigma_q(\ell)}(\overline{\vect{x}}_{q})}
\leq \frac{L \beta D^2}{K} + \frac{3LD\parenthese{N + \sqrt{M_0}}}{2K^{1/3}} + LC\sqrt{Q} + \frac{\beta QLD^2}{2K}
\end{align}
\end{claim}
\begin{claimproof}
Using \Cref{lmm:submod_basic} with $F_t = \Bar{F}^{\delta}_{q,k-1}$, $\vect{x}_{t,k} = \overline{\vect{x}}_{q,k}$ and $\vect{d}_{t,k} = \frac{1}{n}\sum_{i=1}^n \widetilde{\vect{a}}^i_{q,k}$, we have
\begin{align}
\label{eq:submod_upperbound_bandit}
\Bar{F}^{\delta}_{q,k-1}&(\vect{x}^*_{\delta}) - \Bar{F}^{\delta}_{q,k-1}(\overline{\vect{x}}_{q,k+1})
\leq \parenthese{1-\frac{1}{K}}\bracket{\Bar{F}^{\delta}_{q,k-1}(\vect{x}^*_{\delta}) - \Bar{F}^{\delta}_{q,k-1}(\overline{\vect{x}}_{q,k})} \nonumber\\
& + \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n
\left[\norm{\nabla \Bar{F}^{\delta}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}}D
+ \scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{x}^*_{\delta} - \vect{v}^i_{q,k}} \right]
+ \frac{\beta}{2} \frac{D^2}{K^2}
\end{align}
Similarly to the proof of \Cref{thm:submod} and using \Cref{clm:bandit_regret1}, we note
\begin{align}
&\ensuremath{\mathbb{E}} \bracket{\parenthese{1-\frac{1}{e}}\Bar{F}^{\delta}_{q,0}(\vect{x}^*_{\delta}) - \Bar{F}^{\delta}_{q,0}(\overline{\vect{x}}_{q})} \nonumber \\
& \leq \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \Bar{F}^{\delta}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}}D}
+ \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{x}^*_{\delta} - \vect{v}^i_{q,k}}}
+ \frac{\beta}{2} \frac{D^2}{K} \nonumber \\
& \leq \frac{D}{K} \parenthese{\beta D + \frac{3}{2}\parenthese{N + \sqrt{M_0}}K^{2/3}}
+ \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{x}^*_{\delta} - \vect{v}^i_{q,k}}}
+ \frac{\beta}{2} \frac{D^2}{K} \nonumber
\end{align}
Thus, we can write
\begin{align}
&\ensuremath{\mathbb{E}} \bracket{\sum_{q=1}^Q \sum_{\ell=1}^L \parenthese{1-\frac{1}{e}}F^{\delta}_{\sigma_q(\ell)}(\vect{x}^*_{\delta}) - F^{\delta}_{\sigma_q(\ell)}(\overline{\vect{x}}_{q})}
= \ensuremath{\mathbb{E}} \bracket{\sum_{q=1}^Q L \parenthese{\parenthese{1-\frac{1}{e}}\Bar{F}^{\delta}_{q,0}(\vect{x}^*_{\delta}) - \Bar{F}^{\delta}_{q,0}(\overline{\vect{x}}_{q})}} \nonumber \\
& \leq \frac{LD}{K} \parenthese{\beta D + \frac{3}{2}\parenthese{N + \sqrt{M_0}}K^{2/3}}
+ LC\sqrt{Q}
+ \frac{\beta}{2} \frac{QLD^2}{K} \nonumber \\
& \leq \frac{L \beta D^2}{K} + \frac{3LD\parenthese{N + \sqrt{M_0}}}{2K^{1/3}} + LC\sqrt{Q} + \frac{\beta QLD^2}{2K}
\end{align}
\end{claimproof}
\setcounter{theorem}{6}
\begin{theorem}
Let $\mathcal{K}$ be a down-closed convex and compact set, and let $\mathcal{K}'$ be its $\delta$-interior as in \Cref{lmm:discrepancy}. Let $Q = T^{2/9}$, $L = T^{7/9}$, $K = T^{2/3}$, $\delta = \frac{r}{\sqrt{d}+2}T^{-1/9}$, $\rho_k = \frac{2}{(k+2)^{2/3}}$ and $\eta_k = \frac{1}{K}$. Then the expected $\parenthese{1-\frac{1}{e}}$-regret is upper bounded by
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T} \leq ZT^{8/9} + \beta D^2 T^{1/9}
+ \frac{3}{2} D \frac{d \parenthese{\sqrt{d}+2}}{r} P_{n,\lambda_2} T^{6/9} + \frac{\beta D^2}{2} T^{3/9}
\end{align}
where we note
$ Z = \parenthese{1-\frac{1}{e}} \parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} G \frac{r}{\sqrt{d}+2}
+ \parenthese{2-\frac{1}{e}}G \frac{r}{\sqrt{d}+2} + 2B + C$
and
$
P_{n,\lambda_2} = B \bracket{k_0 \cdot n \max \curlybracket{\lambda_2\parenthese{1 + \frac{2}{1-\lambda_2}}, 2}
+ 4^{1/3}\parenthese{24n^2 \parenthese{\frac{1}{\frac{1}{\lambda_2}-1} + 1}^2 + 8n\parenthese{\frac{1}{\parenthese{\frac{1}{\lambda_2}-1}^2}+2}}^{1/2}}
$
\end{theorem}
\begin{proof}
Recall the values of $N$ and $M_0$ from \Cref{lmm:bound_local_global_d_bandit} and \Cref{lmm:lmm:bound_d_var_red_bandit}:
\begin{align*}N = k_0\cdot n B \frac{d}{\delta}\max \curlybracket{\lambda_2\parenthese{1 + \frac{2}{1-\lambda_2}}, 2}
\end{align*}
\begin{align*}M_0 = 4^{2/3}\frac{d^2}{\delta^2}B^2 \bracket{24n^2 \parenthese{\frac{1}{\frac{1}{\lambda_2}-1} + 1}^2 + 8n\parenthese{\frac{1}{\parenthese{\frac{1}{\lambda_2}-1}^2}+2}}
\end{align*}
Let $P_{n,\lambda_2} = B \bracket{k_0\cdot n\max \curlybracket{\lambda_2\parenthese{1 + \frac{2}{1-\lambda_2}}, 2}
+ 4^{1/3}\parenthese{24n^2 \parenthese{\frac{1}{\frac{1}{\lambda_2}-1} + 1}^2 + 8n\parenthese{\frac{1}{\parenthese{\frac{1}{\lambda_2}-1}^2}+2}}^{1/2}}$. Then
one can check that $N + \sqrt{M_0} = \frac{d}{\delta} P_{n, \lambda_2}$,
where $P_{n,\lambda_2}$ is a constant depending on $n$ and $\lambda_2$. For the next step, we set $\delta = \frac{r}{\sqrt{d}+2}T^{-1/9}$, so that $\frac{d}{\delta} = \frac{d\parenthese{\sqrt{d}+2}}{r} T^{1/9}$, and $Q = T^{2/9}$, $L = T^{7/9}$, $K = T^{2/3}$. From the analysis in Theorem 4 of \cite{Zhang:2019}, \Cref{lmm:discrepancy} and \Cref{clm:oneshot_regret}, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T}
&\leq \parenthese{1-\frac{1}{e}}\parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} GT \delta^{\gamma}
+ \parenthese{2-\frac{1}{e}}GT\delta
+ 2BQK \nonumber \\
& \quad + \sum_{q=1}^Q \sum_{\ell=1}^L \parenthese{1-\frac{1}{e}}F^{\delta}_{\sigma_q(\ell)}(\vect{x}^*_{\delta}) - F^{\delta}_{\sigma_q(\ell)}(\overline{\vect{x}}_{q}) \nonumber \\
&\leq \parenthese{1-\frac{1}{e}} \parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} GT \delta^{\gamma}
+ \parenthese{2-\frac{1}{e}}GT\delta
+ 2BQK \nonumber \\
& \quad + \frac{L \beta D^2}{K} + \frac{3LD\parenthese{N + \sqrt{M_0}}}{2K^{1/3}} + LC\sqrt{Q} + \frac{\beta QLD^2}{2K} \nonumber \\
&\leq \parenthese{1-\frac{1}{e}} \parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} GT \delta^{\gamma}
+ \parenthese{2-\frac{1}{e}}GT\delta
+ 2BQK \nonumber \\
& \quad + \frac{L \beta D^2}{K} + \frac{3LD\frac{d}{\delta}P_{n,\lambda_2}}{2K^{1/3}} + LC\sqrt{Q} + \frac{\beta QLD^2}{2K} \nonumber \\
&\leq \parenthese{1-\frac{1}{e}} \parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} G T \frac{r}{\sqrt{d}+2} T^{-1/9}
+ \parenthese{2-\frac{1}{e}}G T \frac{r}{\sqrt{d}+2}T^{-1/9} \nonumber \\
&\quad + 2B T^{2/9}T^{2/3}
+ T^{7/9}\beta D^2 T^{-2/3}
+ \frac{3}{2} T^{7/9} D \frac{d\parenthese{\sqrt{d}+2}}{r} T^{1/9} P_{n, \lambda_2} T^{-2/9} \nonumber \\
&\quad + T^{7/9}C T^{1/9}
+ \frac{\beta}{2} T^{2/9} T^{7/9} D^2 T^{-2/3} \nonumber \\
&\leq \bracket{
\parenthese{1-\frac{1}{e}} \parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} G \frac{r}{\sqrt{d}+2}
+ \parenthese{2-\frac{1}{e}}G \frac{r}{\sqrt{d}+2} + 2B + C} T^{8/9} \nonumber \\
&\quad + \beta D^2 T^{1/9}
+ \frac{3}{2} D \frac{d \parenthese{\sqrt{d}+2}}{r} P_{n,\lambda_2} T^{6/9} + \frac{\beta D^2}{2} T^{3/9}
\end{align}
\end{proof}
\section{Introduction}
\label{chap:introduction}
Learning over data generated by sensors and mobile devices has gained considerable interest in recent years due to the continual interaction with users and the environment. The patterns related to users' behavior, preferences, and surrounding stochastic events make such data a promising source for increasingly reliable machine learning applications. However, collecting such data in a centralized location has become problematic due to privacy concerns and the high cost of data transfer over the network. Consequently, learning methods that leave the data locally while efficiently exploiting data patterns, such as decentralized learning, are emerging as an alternative to traditional centralized learning.
Under the optimization scheme, learning in a decentralized manner consists of multiple interconnected agents cooperating to optimize a global objective function, where each agent holds only partial information about the function of interest. Several works \cite{Deori:2016, Reisizadeh:2019, Yuan:2016, Duchi:2012, Zheng:2018} have considered this setting for convex and strongly convex functions. \cite{WaiLafond17:Decentralized-Frank--Wolfe} also studies the problem when the objective function is generally non-convex, whereas \cite{MokhtariHK18, xie19b} propose decentralized algorithms to maximize monotone submodular functions over both continuous and discrete domains. However, these works only consider the offline setting, which is unrealistic since data constantly evolve in many real-world applications. In this paper, we study decentralized online algorithms for optimizing both convex and submodular functions.
\paragraph{Problem definition.}
Formally, we are given a convex set $\mathcal{K} \subseteq \mathbb{R}^d$ (w.l.o.g one can assume that $\mathcal{K} \subseteq [0,1]^d$)
and a set of agents connected over a network represented by a graph $G = (V, E)$
where $n = |V|$ is the number of agents.
At every time $1 \leq t \leq T$, each agent $i \in V$ can communicate with (and only with) its immediate neighbors, i.e., adjacent agents in $G$ and
takes a decision $\vect{x}^{i}_{t} \in \mathcal{K}$.
Subsequently, a cost/reward function $f^{i}_{t}: \mathcal{K} \rightarrow \mathbb{R}$
is revealed adversarially and locally to agent $i$. Note that in the \emph{bandit} setting, agent $i$ observes only the value $f^{i}_{t}(\vect{x}^{i}_{t})$
instead of the whole function $f^{i}_{t}$. Although each agent $i$ observes only function $f^{i}_{t}$ (or the value $f^{i}_{t}(\vect{x}^{i}_{t})$ in the bandit setting),
agent $i$ is interested in the cumulative cost/reward $F_{t}(\cdot)$ where
$F_{t}(\cdot) := \frac{1}{n} \sum_{j=1}^{n} f^{j}_{t}(\cdot)$. In particular, at time $t$,
the cost/reward of agent $i$ with its chosen decision $\vect{x}^{i}_{t}$ is $F_{t}(\vect{x}^{i}_{t})$.
In the context of convex minimization, the functions $f_{t}^{i}$'s are convex and
the goal of each agent $i$ is to minimize the total cumulative cost $\sum_{t=1}^{T} F_{t}(\vect{x}^{i}_{t})$
via local communication with its immediate neighbors.
Our objective is to design an algorithm with small regret.
An online algorithm achieves regret $\mathcal{R}_T$ if for every agent $1 \leq i \leq n$,
\begin{align*}
\sum_{t = 1}^T F_t(\vect{x}_t^i) - \min_{\vect{x} \in \mathcal{K}} \sum_{t=1}^T F_t(\vect{x}) \leq \mathcal{R}_T
\end{align*}
In the context of monotone DR-submodular maximization, the functions $f_{t}^{i}$'s are monotone DR-submodular.
Roughly speaking, a bounded differentiable and non-negative function $F:[0,1]^d \rightarrow \mathbb{R}_+$
is \emph{DR-submodular} if for every $\vect{x}, \vect{y} \in [0,1]^d$ satisfying $x_{i} \leq y_i, \forall i \in [d]$,
we have $\nabla F(\vect{x}) \geq \nabla F(\vect{y})$.
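As a toy illustration of this definition (not part of the analysis; the function below is a standard example), the coverage-type function $F(\vect{x}) = 1 - \prod_{j=1}^d (1-x_j)$ is monotone DR-submodular on $[0,1]^d$, and the entrywise anti-monotonicity of its gradient can be checked numerically:

```python
import numpy as np

def grad_coverage(x):
    # F(x) = 1 - prod_j (1 - x_j); dF/dx_i = prod_{j != i} (1 - x_j)
    one_minus = 1.0 - x
    return np.array([np.prod(np.delete(one_minus, i)) for i in range(len(x))])

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.5, size=4)
y = x + rng.uniform(0.0, 0.5, size=4)   # x <= y coordinate-wise, still in [0, 1]^4
gx, gy = grad_coverage(x), grad_coverage(y)
assert np.all(gx >= gy - 1e-12)          # diminishing returns: grad F(x) >= grad F(y)
```

Each partial derivative $\prod_{j \neq i}(1-x_j)$ shrinks as any coordinate grows, which is exactly the diminishing-returns property.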
The goal of each agent $i$ is to maximize the total cumulative reward $\sum_{t=1}^{T} F_{t}(\vect{x}_{t}^{i})$, again
via local communication with its immediate neighbors.
Our objective is to design an algorithm with an approximation ratio as close to 1 as possible, together with a small regret.
An online algorithm has a \emph{$\rho$-regret} of $\mathcal{R}_T$ if for every agent $1 \leq i \leq n$,
\begin{align*}
\rho \cdot \max_{\vect{x} \in \mathcal{K}} \sum_{t=1}^T F_t(\vect{x}) - \sum_{t = 1}^T F_t(\vect{x}_t^i) \leq \mathcal{R}_T
\end{align*}
\subsection{Our contribution}
The challenge in designing robust and efficient algorithms for these problems is to simultaneously address the following issues:
\begin{itemize}
\item Uncertainty (online setting, agents observe their loss functions only after selecting their decisions).
\item Partial information (decentralized setting: each agent knows only its local loss functions while attempting to minimize the cumulative cost).
\item Low computation/communication resources of agents (it is thus desirable that each agent performs a small number of gradient computations and communications).
\item Additionally, in the bandit setting, one has only limited feedback (agents can only observe the function value of their decisions).
\end{itemize}
We present performance-guaranteed algorithms for solving constrained convex and continuous DR-submodular optimization problems in the decentralized online setting with \emph{only} one gradient evaluation and low communication per agent per time step on average. Specifically,
our algorithms achieve regret and $\parenthese{1-\frac{1}{e}}$-regret bounds of $O(T^{4/5})$ for convex and monotone continuous DR-submodular functions, respectively.
Using a one-point gradient estimator \cite{FlaxmanKalai05:Online-convex}, we extend the algorithms to the bandit setting in which the gradient is unavailable to the agents.
We obtain the $\parenthese{1-\frac{1}{e}}$-regret bound of $O(T^{8/9})$ for the bandit setting.
It should be noted that the $\parenthese{1-\frac{1}{e}}$-regret bounds of $O(T^{4/5})$ and $O(T^{8/9})$ match the regret guarantees of the centralized online settings.
Besides, one can convert the algorithm to be projection-free (by selecting suitable oracles).
This property allows the algorithm to be implemented in various contexts based on the computing capacity of local devices.
We demonstrate the practical application of our algorithm on a Movie Recommendation problem and present a thorough analysis of different aspects of the performance guarantee, the effects of network topology, and decentralization, which are predictably explained by our theoretical results.
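To make the bandit extension concrete, here is a minimal sketch of the one-point gradient estimator of \cite{FlaxmanKalai05:Online-convex} (the helper name and the test function are our own illustration): querying $f$ at a single randomly perturbed point $\vect{x} + \delta \vect{u}$, with $\vect{u}$ uniform on the unit sphere, yields an unbiased estimate of the gradient of the $\delta$-smoothed version of $f$.

```python
import numpy as np

def one_point_gradient(f, x, delta, rng):
    # Single-query estimate (d/delta) * f(x + delta*u) * u,
    # where u is drawn uniformly from the unit sphere.
    u = rng.standard_normal(x.shape[0])
    u /= np.linalg.norm(u)
    return (x.shape[0] / delta) * f(x + delta * u) * u

# For a linear f(x) = a.x + b, the delta-smoothed gradient equals a exactly,
# so averaging many one-point estimates should recover a.
rng = np.random.default_rng(0)
a = np.array([1.0, -2.0, 0.5])
f = lambda z: a @ z + 0.5
x0 = np.array([0.2, 0.1, 0.4])
est = np.mean([one_point_gradient(f, x0, 0.1, rng) for _ in range(50000)], axis=0)
assert np.allclose(est, a, atol=0.3)
```

The $\frac{d}{\delta}$ scaling of each estimate is the source of the $\frac{d}{\delta}B$ factors appearing throughout the bandit analysis.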
\begin{table}[ht!]
\centering
\begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} ccccc}
\toprule
Algorithm & Stochastic & $(1-1/e)$-Regret & Communications & Gradient \\
& Gradient & & & Evaluations \\
\midrule
DMFW & Yes & $\mathcal{O}(T^{1/2})$ & $2 \cdot T^{3/2}$ & $T^{3/2}$ \\
\textbf{Monode-FW} & Yes & $\mathcal{O}(T^{4/5})$ & $2 \cdot$ \#neighbors & 1\\
\textbf{Bandit Monode-FW} & - & $\mathcal{O}(T^{8/9})$ & $2 \cdot$ \#neighbors & - \\
\bottomrule
\end{tabular*}
\caption{Comparison of previous work on \emph{adversarial} decentralized online monotone DR-submodular maximization (DMFW \cite{submod-timevarying}) and our proposed algorithms (in bold). The communications and gradient evaluations are measured per agent per time step.}
\hspace{0.8cm}
\end{table}
\section{Theoretical Analysis for \Cref{algo:online-dist-FW}}
\label{chap:analysis}
In the analysis, we write $\sigma_q (k)$ for the index assigned to $k$ by the permutation at phase $q$. We define the average function of the remaining \emph{$(K-k)$} functions as
\begin{align}
\bar{F}_{q,k} (\vect{x}) = \frac{1}{K-k} \sum_{\ell=k+1}^K F_{\sigma_q (\ell)} (\vect{x}) = \frac{1}{K-k} \sum_{\ell=k+1}^K \frac{1}{n} \sum_{i=1}^n f^i_{\sigma_q (\ell)} (\vect{x}) \nonumber
\end{align}
where $F_{\sigma_q (\ell)} (\vect{x}) = \frac{1}{n} \sum_{i=1}^n f^i_{\sigma_q (\ell)} (\vect{x})$.
We also define
\begin{align}
\hat{f}^i_{q,k} = \frac{1}{K-k} \sum_{\ell=k+1}^K f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell}),
\quad \nabla \hat{f}^i_{q,k} = \frac{1}{K-k} \sum_{\ell=k+1}^K \nabla f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})
\end{align}
as the averages of the remaining \emph{$(K-k)$} function values and stochastic gradients of $f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})$, respectively. We then write
\begin{align}
\hat{F}_{q,k} = \frac{1}{n}\sum_{i=1}^n \hat{f}^i_{q,k},
\quad \nabla \hat{F}_{q,k} = \frac{1}{n}\sum_{i=1}^n \nabla \hat{f}^i_{q,k},
\end{align}
In the same spirit as $\hat{f}^i_{q,k}$, we define
\begin{align}
\hat{\vect{g}}^i_{q,k} = \frac{1}{K-k} \sum_{\ell=k+1}^K \vect{g}^i_{q,\ell},
\quad \hat{\vect{d}}^i_{q,k} = \frac{1}{K-k} \sum_{\ell=k+1}^K \vect{d}^i_{q,\ell}
\end{align}
In the rest of the analysis, we let $\mathcal{F}_{q,1} \subset \dots \subset \mathcal{F}_{q,k}$ be the $\sigma$-fields generated by the permutation up to time $k$, and $\mathcal{H}_{q,1} \subset \dots \subset \mathcal{H}_{q,k}$ the $\sigma$-fields generated by the randomness of the stochastic gradient estimates up to time $k$.
\begin{assumption}
\label{assum:assum_d_var_red}
Let $\curlybracket{\widetilde{\vect{d}}_{t}}_{1}^T$ be a sequence such that $\ensuremath{\mathbb{E}}\bracket{\widetilde{\vect{d}}_{t}\bigm\vert \mathcal{H}_{t-1}} = \vect{d}_{t}$ where $\mathcal{H}_{t-1}$ is the filtration of the stochastic estimate up to $t-1$.
\end{assumption}
\begin{lemma}[Fact 1, \cite{WaiLafond17:Decentralized-Frank--Wolfe}]
\label[lemma]{lmm:average_consensus}
Let $\vect{x}^1, \ldots, \vect{x}^n \in \mathbb{R}^d$ be a set of vectors and $\vect{x}_{avg} := \frac{1}{n} \sum_{i=1}^n \vect{x}^i$ their average. Let $\mathbf{W}$ be a non-negative doubly stochastic matrix. The output of one round of the average consensus update $\Bar{\vect{x}}^i = \sum_{j=1}^n W_{ij} \vect{x}^j$ satisfies:
\begin{align*}
\sqrt{\sum_{i=1}^n \norm{\Bar{\vect{x}}^i - \vect{x}_{avg}}^2}
\leq |\lambda_2 (\mathbf{W})| \cdot \sqrt{\sum_{i=1}^n \norm{\vect{x}^i - \vect{x}_{avg}}^2}
\end{align*}
where $\lambda_2 (\mathbf{W})$ is the second largest eigenvalue of $\mathbf{W}$.
\end{lemma}
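The contraction can be checked numerically. The sketch below (a toy ring network; all names are illustrative) builds a symmetric doubly stochastic $\mathbf{W}$ and verifies both that the average is preserved and that the deviation from it shrinks by a factor $|\lambda_2(\mathbf{W})|$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 3
P = np.roll(np.eye(n), 1, axis=1)        # cyclic shift: ring of n agents
W = 0.5 * np.eye(n) + 0.25 * (P + P.T)   # symmetric, doubly stochastic
X = rng.standard_normal((n, d))          # row i plays the role of x^i
avg = X.mean(axis=0)

Xb = W @ X                                # one round of average consensus
lam2 = sorted(np.abs(np.linalg.eigvalsh(W)))[-2]

assert np.allclose(Xb.mean(axis=0), avg)  # the average is preserved
lhs = np.sqrt(((Xb - avg) ** 2).sum())
rhs = lam2 * np.sqrt(((X - avg) ** 2).sum())
assert lhs <= rhs + 1e-12                 # deviation contracts by |lambda_2(W)|
```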
\begin{lemma}
\label[lemma]{lmm:x_iteration}
For all $1 \leq q \leq Q$ and $1 \leq k \leq K$, we have
\begin{align}
\overline{\vect{x}}_{q,k+1} = \overline{\vect{x}}_{q,k} + \eta_k \parenthese{\frac{1}{n}\sum_{i=1}^n \vect{v}^i_{q,k} - \overline{\vect{x}}_{q,k}}
\end{align}
for convex case, and
\begin{align}
\overline{\vect{x}}_{q,k+1} = \overline{\vect{x}}_{q,k} + \frac{1}{K} \parenthese{\frac{1}{n}\sum_{i=1}^n \vect{v}^i_{q,k}}
\end{align}
for submodular case.
\end{lemma}
\begin{proof}
\begin{align*}
\overline{\vect{x}}_{q,k+1}
&= \frac{1}{n} \sum_{i=1}^{n} \vect{x}^{i}_{q,k + 1} \tag{Definition of $\overline{\vect{x}}_{q,k+1}$}
\\
&= \frac{1}{n} \sum_{i=1}^{n} \left ((1-\eta_{k}) \vect{y}^{i}_{q,k} + \eta_{k} \vect{v}_{q,k}^i \right ) \tag{Definition of $\vect{x}_{q,k}^i$}
\\
&= \frac{1}{n} \sum_{i=1}^{n} \left [ (1-\eta_{k}) \left ( \sum_{j=1}^n \vect{W}_{ij} \vect{x}_{q,k}^j \right ) + \eta_{k} \vect{v}_{q,k}^i \right ] \tag{Definition of $\vect{y}_{q,k}^i$}
\\
&= (1 - \eta_{k}) \frac{1}{n} \sum_{i=1}^n \left [ \sum_{j=1}^n \vect{W}_{ij}\vect{x}_{q,k}^j \right ] + \frac{1}{n}\eta_{k} \sum_{i=1}^n \vect{v}_{q,k}^i
\\
&= (1 - \eta_{k}) \frac{1}{n} \sum_{j=1}^n \left [ \vect{x}_{q,k}^j \sum_{i=1}^n \vect{W}_{ij} \right ] + \frac{1}{n} \eta_{k} \sum_{i=1}^n \vect{v}_{q,k}^i
\\
&= (1 - \eta_{k}) \frac{1}{n} \sum_{j=1}^n \vect{x}_{q,k}^j + \frac{1}{n} \eta_{k} \sum_{i=1}^n \vect{v}_{q,k}^i \tag{$\sum_{i=1}^n W_{ij} =1$ for every $j$}
\\
&= (1 - \eta_{k}) \overline{\vect{x}}_{q,k} + \frac{1}{n} \eta_{k} \sum_{i=1}^n \vect{v}_{q,k}^i
\\
&= \overline{\vect{x}}_{q,k} + \eta_{k} \left ( \frac{1}{n} \sum_{i=1}^n \vect{v}_{q,k}^i - \overline{\vect{x}}_{q,k} \right )
\end{align*}
where we use the fact that $\mathbf{W}$ is doubly stochastic, i.e., $\sum_{i = 1}^{n} W_{ij} = 1$ for every $j$. The proof of the second equation is similar.
\end{proof}
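A quick numerical check of the identity above on random data (an illustrative sketch, not part of the proof): the decentralized update with a doubly stochastic $\mathbf{W}$ leaves the network average following the centralized Frank-Wolfe recursion.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, eta = 5, 3, 0.3
P = np.roll(np.eye(n), 1, axis=1)
W = 0.5 * np.eye(n) + 0.25 * (P + P.T)   # doubly stochastic mixing matrix
X = rng.standard_normal((n, d))          # rows: x^i_{q,k}
V = rng.standard_normal((n, d))          # rows: v^i_{q,k}

# Decentralized update: x^i_{q,k+1} = (1 - eta) * sum_j W_ij x^j_{q,k} + eta * v^i_{q,k}
X_next = (1 - eta) * (W @ X) + eta * V

# The averages follow the centralized recursion of the lemma.
lhs = X_next.mean(axis=0)
rhs = X.mean(axis=0) + eta * (V.mean(axis=0) - X.mean(axis=0))
assert np.allclose(lhs, rhs)
```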
\begin{lemma}
\label[lemma]{lmm:condition_n_equal_hat}
For all $k \in \curlybracket{1, \ldots, K}$, it holds that
\begin{align}
\nabla \hat{F}_{q,k}
= \frac{1}{n} \sum_{i=1}^n \nabla \hat{f}^i_{q,k}
= \frac{1}{n} \sum_{i=1}^n \hat{\vect{g}}^i_{q,k}
= \frac{1}{n} \sum_{i=1}^n \hat{\vect{d}}^i_{q,k}
\end{align}
\end{lemma}
\begin{proof}
First, we verify that $\forall \ell \in \curlybracket{1, \ldots, K}$
\begin{align}
\label{eq:condition_n_equal}
\frac{1}{n} \sum_{i=1}^n \nabla f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})
= \frac{1}{n} \sum_{i=1}^n \vect{g}^i_{q,\ell}
= \frac{1}{n} \sum_{i=1}^n \vect{d}^i_{q,\ell}
\end{align}
For the base step $\ell=1$, we have $\vect{g}^i_{q,1} = \nabla f^i_{\sigma_q (1)} (\vect{x}^i_{q,1})$. Averaging over the $n$ agents yields
\begin{align}
\frac{1}{n} \sum_{i=1}^n \vect{g}^i_{q,1}
= \frac{1}{n} \sum_{i=1}^n \nabla f^i_{\sigma_q (1)} (\vect{x}^i_{q,1}) \nonumber
\end{align}
Since $\mathbf{W}$ is doubly stochastic, we have
\begin{align}
\label{eq:doubly_stoc}
\frac{1}{n} \sum_{i=1}^n \vect{d}^i_{q,1}
= \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n W_{ij} \vect{g}^j_{q,1}
= \frac{1}{n} \sum_{j=1}^n\vect{g}^j_{q,1} \sum_{i=1}^n W_{ij}
= \frac{1}{n} \sum_{j=1}^n\vect{g}^j_{q,1}
\end{align}
For the inductive step, recall the definition of $\vect{g}^i_{q,\ell}$:
\begin{align*}
\vect{g}^i_{q, \ell}
= \nabla f^i_{\sigma_q(\ell)} (\vect{x}^i_{q,\ell})
- \nabla f^i_{\sigma_q(\ell-1)} (\vect{x}^i_{q,\ell-1})
+ \vect{d}^i_{q,\ell-1}
\end{align*}
Averaging over $n$ and using the recurrence hypothesis $\frac{1}{n}\sum_{i=1}^n\nabla f^i_{\sigma_q (\ell-1)} (\vect{x}^i_{q,\ell-1}) = \frac{1}{n}\sum_{i=1}^n\vect{d}^i_{q,\ell-1}$, we deduce that
\begin{align}
\frac{1}{n} \sum_{i=1}^n \vect{g}^i_{q,\ell}
= \frac{1}{n} \sum_{i=1}^n \nabla f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})
\end{align}
Also, using the same technique as in \cref{eq:doubly_stoc} for $\vect{d}^i_{q,\ell}$, we complete the verification of \cref{eq:condition_n_equal}. The proof of \Cref{lmm:condition_n_equal_hat} can then be deduced from \cref{eq:condition_n_equal} by averaging over $\ell \in \curlybracket{k+1, \ldots, K}$.
\end{proof}
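This invariant can also be observed numerically. The sketch below (a toy setup with assumed quadratic local losses; not the paper's experiments) runs the gradient-tracking recursion and checks that the average of the trackers always matches the average of the current local gradients:

```python
# Gradient tracking on n nodes with a doubly stochastic W:
#   d_i <- sum_j W_ij (g_j_new - g_j_old + d_j_old)
# The network average of d stays equal to the average local gradient.
import random

random.seed(0)
n = 4
W = [[1.0 / n] * n for _ in range(n)]  # complete averaging, doubly stochastic

# Toy local gradients grad_i(x) = a_i * x - b_i (from quadratic losses).
a = [random.uniform(0.5, 2.0) for _ in range(n)]
b = [random.uniform(-1.0, 1.0) for _ in range(n)]

def grad(i, x):
    return a[i] * x - b[i]

x = [random.random() for _ in range(n)]
g_old = [grad(i, x[i]) for i in range(n)]
d = [sum(W[i][j] * g_old[j] for j in range(n)) for i in range(n)]

for _ in range(10):
    x = [xi - 0.1 * di for xi, di in zip(x, d)]  # any local update works
    g_new = [grad(i, x[i]) for i in range(n)]
    inner = [g_new[j] - g_old[j] + d[j] for j in range(n)]
    d = [sum(W[i][j] * inner[j] for j in range(n)) for i in range(n)]
    g_old = g_new
    assert abs(sum(d) / n - sum(g_old) / n) < 1e-9
```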
\begin{lemma}
\label[lemma]{lmm:bound_d_avg_consensus}
Suppose that each of $f^i_{\sigma_q (k)}$ is \emph{$\beta$}-smooth. Using the Frank-Wolfe update of $\vect{x}^i_{q,k}$, the average of the remaining $(K-k)$ gradient approximation $\hat{\vect{d}}^i_{q,k}$ satisfies
\begin{align*}
\max_{i \in \bracket{1,n}}\ensuremath{\mathbb{E}} \bracket{ \norm{
\hat{\vect{d}}^i_{q,k} - \nabla \hat{F}_{q,k}
}}
\leq \begin{dcases}
\dfrac{N}{k} \qquad k \in \bracket{1,\dfrac{K}{2}} \\
\dfrac{N}{K-k+1} \qquad k \in \bracket{\dfrac{K}{2}+1,K}
\end{dcases}
\end{align*}
where $N = nGk_0 \max \{\lambda_2 \parenthese{1 + \frac{2}{1-\lambda_2}}, 2\}$.
\end{lemma}
\begin{proof}
We prove the lemma by induction, following the idea of Lemma 2 in \cite{WaiLafond17:Decentralized-Frank--Wolfe}. Let us define the following concatenated variables
\begin{align}
\label{def:cat_vec}
\vect{\hat{d}}^{cat}_{q,k} = \bracket{\vect{\hat{d}}^{1 \top}_{q,k}, \dots, \vect{\hat{d}}^{n \top}_{q,k} }^{\top},
\quad \vect{\hat{g}}^{cat}_{q,k} = \bracket{\vect{\hat{g}}^{1 \top}_{q,k}, \dots, \vect{\hat{g}}^{n \top}_{q,k} }^\top,
\quad \nabla \hat{F}^{cat}_{q,k} = \bracket{\nabla \hat{F}^{\top}_{q,k}, \dots, \nabla \hat{F}^{\top}_{q,k} }^{\top}
\end{align}
and define the slack variables
\begin{align}
\delta^i_{q,k} := \nabla \hat{f}^i_{q,k} - \nabla \hat{f}^i_{q,k-1},
\quad \bar{\delta}_{q,k} := \frac{1}{n} \sum_{i=1}^n \parenthese{\nabla \hat{f}^i_{q,k} - \nabla \hat{f}^i_{q,k-1}} = \nabla\hat{F}_{q,k} - \nabla\hat{F}_{q,k-1}
\end{align}
Then, following the definition in \cref{def:cat_vec}, we write
\begin{align*}
\delta^{cat}_{q,k} = \bracket{\delta^{1 \top}_{q,k}, \dots, \delta^{n \top}_{q,k}}^{\top},
\quad \bar{\delta}^{cat}_{q,k} = \bracket{\bar{\delta}^{\top}_{q,k}, \dots, \bar{\delta}^{\top}_{q,k}}^{\top}
\end{align*}
By \Cref{lmm:average_consensus}, we have
\begin{align}
\label{eq:gradient_consensus}
\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}^2
&= \sum_{i=1}^n \norm{\vect{\hat{d}}^{i}_{q,k} - \nabla \hat{F}_{q,k}}^2 \nonumber \\
& \leq \lambda_2^2 \sum_{i=1}^n \norm{\vect{\hat{g}}^{i}_{q,k} - \nabla \hat{F}_{q,k}}^2 \nonumber \\
& = \lambda_2^2 \norm{\vect{\hat{g}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}^2
\end{align}
We can deduce that
\begin{align}
\label{eq:recurrence_d_F}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}}
&\leq \lambda_2 \ensuremath{\mathbb{E}} \bracket{\norm{
\vect{\hat{g}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}
} }\nonumber \\
&= \lambda_2 \ensuremath{\mathbb{E}} \bracket{\norm{
\delta^{cat}_{q,k} + \vect{\hat{d}}^{cat}_{q,k-1}
- \nabla \hat{F}^{cat}_{q,k} + \nabla \hat{F}^{cat}_{q,k-1} - \nabla \hat{F}^{cat}_{q,k-1}
}} \nonumber \\
& \leq \lambda_2 \parenthese{
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k-1} - \nabla \hat{F}^{cat}_{q,k-1}}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k} - \bar{\delta}^{cat}_{q,k}}}
} \nonumber \\
& \leq \lambda_2 \parenthese{
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k-1} - \nabla \hat{F}^{cat}_{q,k-1}}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k}}}
}
\end{align}
since
\begin{align}
\norm{\delta^{cat}_{q,k} - \bar{\delta}^{cat}_{q,k}}^2
= \sum_{i=1}^n \norm{\delta^{i}_{q,k} - \bar{\delta}_{q,k}}^2
\leq \sum_{i=1}^n \norm{\delta^{i}_{q,k}}^2 - n\norm{\bar{\delta}_{q,k}}^2
\leq \sum_{i=1}^n \norm{\delta^{i}_{q,k}}^2
= \norm{\delta^{cat}_{q,k}}^2
\end{align}
Notice that we can bound the expected value of $\delta^{cat}$ by
\begin{align}
\label{eq:diff_k}
\ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k}}^2}
&= \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{\delta ^i_{q,k}}^2}
= \sum_{i=1}^n \ensuremath{\mathbb{E}}
\bracket{
\norm{
\nabla \hat{f}^i_{q,k} - \nabla \hat{f}^i_{q,k-1}
}^2
} \nonumber\\
& \quad = \sum_{i=1}^n \ensuremath{\mathbb{E}}
\bracket{
\ensuremath{\mathbb{E}}
\bracket{
\norm{
\nabla \hat{f}^i_{q,k} - \nabla \hat{f}^i_{q,k-1}
}^2
\bigm\vert \mathcal{F}_{q,k-1}
}
} \nonumber\\
& \quad = \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\ensuremath{\mathbb{E}}
\bracket{
\norm{
\frac{\sum_{\ell=k+1}^K \nabla f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})}{K-k}
- \frac{\sum_{\ell=k}^K \nabla f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})}{K-k+1}
}^2
\bigm\vert \mathcal{F}_{q,k-1}
}
} \nonumber \\
& \quad = \sum_{i=1}^n\ensuremath{\mathbb{E}} \bracket{
\ensuremath{\mathbb{E}} \bracket{ \norm{
\frac{\sum_{\ell=k+1}^K \nabla f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell})}{(K-k)(K-k+1)}
- \frac{\nabla f^i_{\sigma_q (k)} (\vect{x}^i_{q,k})}{{K-k+1}}}^2
\bigm\vert \mathcal{F}_{q,k-1} }
}
\nonumber\\
& \quad \leq n
\parenthese{
\frac{2G}{K-k+1}
}^2
\end{align}
Using Jensen's inequality, we deduce that
\begin{align}
\label{eq:bound_delta_cat}
\ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k}}}
\leq \sqrt{\ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k}}^2}}
\leq \frac{2\sqrt{n}G}{K-k+1}
\end{align}
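The bound in the last display has a simple scalar analogue: for any sequence bounded in magnitude by $G$, consecutive tail averages differ by at most $2G/(K-k+1)$. A small numerical check on random toy data (assumed, for illustration only):

```python
# For scalars |v_l| <= G, the gap between the averages over
# {k+1,...,K} and {k,...,K} is at most 2G/(K-k+1).
import random

random.seed(1)
G, K = 3.0, 20
v = [0.0] + [random.uniform(-G, G) for _ in range(K)]  # v[1..K] used

for k in range(1, K):
    tail_next = sum(v[k + 1:K + 1]) / (K - k)
    tail_curr = sum(v[k:K + 1]) / (K - k + 1)
    assert abs(tail_next - tail_curr) <= 2 * G / (K - k + 1) + 1e-12
```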
We now prove the lemma by induction. For the base case $k=1$, we have
\begin{align}
\label{eq:bound_cat_k1}
\ensuremath{\mathbb{E}}& \bracket{\norm{\vect{\hat{d}}^{cat}_{q,1} - \nabla \hat{F}^{cat}_{q,1}}^2}
= \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{
\vect{\hat{d}}^{i}_{q,1} - \nabla \hat{F}_{q,1}}^2
} \nonumber
\leq \lambda_2^2 \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{
\vect{\hat{g}}^{i}_{q,1} - \nabla \hat{F}_{q,1}}^2
} \nonumber \\
& \leq \lambda_2^2 \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{
\nabla \hat{f}^{i}_{q,1} - \nabla \hat{F}_{q,1}}^2
} \nonumber
\leq \lambda_2^2 \ensuremath{\mathbb{E}} \bracket{
\sum_{i=1}^n \norm{\nabla \hat{f}^{i}_{q,1}}^2
}
\leq n\lambda_2^2 G^2
\end{align}
where we have used the $G$-Lipschitzness of the local functions in the last inequality. Now suppose that $1 \leq k \leq k_0$; from \cref{eq:recurrence_d_F,eq:bound_delta_cat} we obtain
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}}
& \leq \lambda_2 \parenthese{
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k-1} - \nabla \hat{F}^{cat}_{q,k-1}}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k}}}
} \nonumber \\
& \leq \lambda_2^{k-1}\sqrt{n}G + 2 \sum_{\tau=1}^{k} \lambda_2^{\tau} \sqrt{n}G \nonumber\\
& \leq \lambda_2\sqrt{n}G + 2\frac{\lambda_2}{1-\lambda_2}\sqrt{n}G \nonumber \\
&= \lambda_2 \sqrt{n}G \parenthese{1 + \frac{2}{1-\lambda_2}}
\end{align}
Set $N_0 = k_0\sqrt{n}G\max \{\lambda_2 \parenthese{1 + \frac{2}{1-\lambda_2}}, 2\}$. We claim that $\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}} \leq \dfrac{N_0}{k}$ for $k \in \bracket{k_0,\frac{K}{2}+1}$. Recalling that $K-k+1 \geq k-1$ in this range, by \cref{eq:recurrence_d_F,eq:bound_delta_cat} and the induction hypothesis, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}}
& \leq \lambda_2 \parenthese{
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k-1} - \nabla \hat{F}^{cat}_{q,k-1}}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k}}}
} \nonumber \\
& \leq \lambda_2 \left(
\frac{N_0}{k-1} + \frac{2\sqrt{n}G}{K-k+1}
\right) \nonumber \\
& \leq \lambda_2 \left(
\frac{N_0}{k-1} + \frac{2\sqrt{n}G}{k-1}
\right) \nonumber \\
& \leq \lambda_2 \left(
\frac{N_0 + 2\sqrt{n}G}{k-1}
\right)\nonumber \\
& \leq \lambda_2 \left(
N_0\frac{k_0 + 1}{k_0 (k-1)}
\right) \nonumber \\
& \leq \frac{N_0}{k} \label{eq:boundksmall}
\end{align}
where we have used the fact that $\lambda_2(\mathbf{W}) \dfrac{k_0+1}{k_0(k-1)} \leq \dfrac{1}{k}$ in the last inequality.
When $k \in \bracket{\frac{K}{2}+1, K}$, we claim that $\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}} \leq \dfrac{N_0}{K-k+1}$. The base case $k = \frac{K}{2}+1$ is verified by \cref{eq:boundksmall}:
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}}
\leq \frac{N_0}{\frac{K}{2}+1}
\leq \frac{N_0}{\frac{K}{2}}
\leq \frac{N_0}{K-(\frac{K}{2}+1) + 1}
\end{align}
For $k \geq \frac{K}{2}+2$, using \cref{eq:recurrence_d_F,eq:bound_delta_cat} and the induction hypothesis, we have
\begin{align}
\label{eq:bound_delta_big}
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}}
& \leq \lambda_2 \parenthese{
\ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k-1} - \nabla \hat{F}^{cat}_{q,k-1}}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\delta^{cat}_{q,k}}}
} \nonumber \\
& \leq \lambda_2 \parenthese{
\frac{N_0}{K-k+2}
+ \frac{2\sqrt{n}G}{K-k+1}
} \nonumber \\
& \leq \lambda_2\left(
\frac{N_0 + 2\sqrt{n}G}{K-k+1}
\right) \nonumber\\
& \leq \lambda_2
\parenthese{N_0 \frac{k_0 + 1}{k_0 (K-k+1)}}
\nonumber \\
& \leq \frac{N_0}{K-k+1}
\end{align}
Recall that
\begin{align}
\label{eq:bound_sum_sqrt}
\frac{1}{\sqrt{n}} \ensuremath{\mathbb{E}} \bracket{\sum_{i=1}^n \norm{\vect{\hat{d}}^{i}_{q,k} - \nabla \hat{F}_{q,k}}}
\leq \ensuremath{\mathbb{E}} \bracket{\parenthese{\sum_{i=1}^n \norm{\vect{\hat{d}}^{i}_{q,k} - \nabla \hat{F}_{q,k}}^2}^{1/2}}
= \ensuremath{\mathbb{E}} \bracket{\norm{\vect{\hat{d}}^{cat}_{q,k} - \nabla \hat{F}^{cat}_{q,k}}}
\end{align}
The desired result then follows from \cref{eq:boundksmall,eq:bound_delta_big,eq:bound_sum_sqrt} with $N = \sqrt{n}N_0$:
\begin{align}
\max_{i \in \bracket{1,n}} \ensuremath{\mathbb{E}} \bracket{ \norm{
\hat{\vect{d}}^i_{q,k} - \nabla \hat{F}_{q,k}}
}
\leq \begin{dcases}
\dfrac{N}{k} \qquad k \in \bracket{1, \frac{K}{2}} \\
\dfrac{N}{K-k+1} \qquad k \in \bracket{\frac{K}{2}+1, K}
\end{dcases}
\end{align}
\end{proof}
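The induction can be cross-checked by simulating the scalar recursion $a_1 = \lambda_2\sqrt{n}G$, $a_k = \lambda_2\parenthese{a_{k-1} + 2\sqrt{n}G/(K-k+1)}$ that drives the bounds above. The parameters below are illustrative choices (assumed) satisfying $\lambda_2 (k_0+1)/k_0 \leq 1$:

```python
# Simulate the scalar recursion upper-bounding E||d_hat^cat - grad F_hat^cat||
# and check the claimed regimes N0/k (first half) and N0/(K-k+1) (second half).
import math

lam, n, G, K, k0 = 0.5, 4, 1.0, 20, 4
N0 = k0 * math.sqrt(n) * G * max(lam * (1 + 2 / (1 - lam)), 2)

a = lam * math.sqrt(n) * G  # base case k = 1
for k in range(2, K + 1):
    a = lam * (a + 2 * math.sqrt(n) * G / (K - k + 1))
    if k0 <= k <= K // 2:
        assert a <= N0 / k          # first regime
    elif k > K // 2:
        assert a <= N0 / (K - k + 1)  # second regime
```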
\setcounter{theorem}{0}
\begin{lemma}
For all $i \in \bracket{n}$ and $k \in \bracket{K}$, the local gradient approximation is uniformly bounded: $\norm{\vect{d}^i_{q,k}} \leq V_{\vect{d}}$, where $V_{\vect{d}} = 2nG \parenthese{\frac{\lambda_2}{1-\lambda_2}+1}$.
\end{lemma}
\begin{proof}
We use the same notation as introduced in \cref{def:cat_vec}, and further define
\begin{align}
\vect{d}^{cat}_{q,k} = \bracket{\vect{d}^{1 \top}_{q,k}, \dots, \vect{d}^{n \top}_{q,k}}^{\top} \in \mathbb{R}^{nd},
\quad \nabla f^{cat}_{\sigma_q(k)} = \bracket{\nabla f^{1 }_{\sigma_q (k)}(\vect{x}^1_{q,k})^\top, \dots, \nabla f^{n }_{\sigma_q (k)}(\vect{x}^n_{q,k})^\top}^{\top} \in \mathbb{R}^{nd}
\end{align}
and
\begin{align}
\nabla F^{cat}_{\sigma_q(k)} = \bracket{\nabla F_{\sigma_q(k)}^\top, \dots, \nabla F_{\sigma_q(k)}^\top}^\top
= \bracket{\frac{1}{n} \sum_{i=1}^n \nabla f^i_{\sigma_q(k)} (\vect{x}^i_{q,k})^\top, \dots, \frac{1}{n} \sum_{i=1}^n \nabla f^i_{\sigma_q(k)} (\vect{x}^i_{q,k})^\top}^\top
\end{align}
Using the local gradient update, we have
\begin{align}
\label{eq:bound_dk_step1}
\vect{d}^{cat}_{q,k}
&= \parenthese{\mathbf{W} \otimes I_d} \parenthese{\nabla f^{cat}_{\sigma_q(k)} - \nabla f^{cat}_{\sigma_q(k-1)} +\vect{d}^{cat}_{q,k-1}} \nonumber \\
&= \parenthese{\mathbf{W} \otimes I_d} \parenthese{\nabla f^{cat}_{\sigma_q(k)} - \nabla f^{cat}_{\sigma_q(k-1)}}
+ \parenthese{\mathbf{W} \otimes I_d}^2 \parenthese{\nabla f^{cat}_{\sigma_q(k-1)} - \nabla f^{cat}_{\sigma_q(k-2)} +\vect{d}^{cat}_{q,k-2}} \nonumber \\
&= \sum_{\tau = 1}^{k-1} \parenthese{\mathbf{W} \otimes I_d}^{k-\tau} \parenthese{\nabla f^{cat}_{\sigma_q(\tau+1)} - \nabla f^{cat}_{\sigma_q(\tau)}} + \parenthese{\mathbf{W} \otimes I_d}^{k} \nabla f^{cat}_{\sigma_q(1)} \nonumber \\
& = \sum_{\tau = 1}^{k-1} \parenthese{\mathbf{W} \otimes I_d}^{k-\tau}
\parenthese{\nabla f^{cat}_{\sigma_q(\tau+1)} - \nabla f^{cat}_{\sigma_q(\tau)}}
+ \parenthese{\mathbf{W} \otimes I_d}^{k} \nabla f^{cat}_{\sigma_q(1)} \nonumber \\
&\quad - \sum_{\tau=1}^{k-1} \parenthese{\nabla F^{cat}_{\sigma_q(\tau+1)} - \nabla F^{cat}_{\sigma_q(\tau)}} - \nabla F^{cat}_{\sigma_q(1)} + \nabla F^{cat}_{\sigma_q(k)} \nonumber \\
& = \sum_{\tau = 1}^{k-1} \bracket{\parenthese{\mathbf{W}^{k-\tau} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}\otimes I_d} \parenthese{\nabla f^{cat}_{\sigma_q(\tau+1)} - \nabla f^{cat}_{\sigma_q(\tau)}} \nonumber \\
& \quad + \bracket{\parenthese{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}\otimes I_d}
\nabla f^{cat}_{\sigma_q(1)}
+ \parenthese{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T \otimes I_d}\nabla f^{cat}_{\sigma_q(k)}
\end{align}
where the fourth equality holds since $\nabla F^{cat}_{\sigma_q(k)} - \sum_{\tau=1}^{k-1} \parenthese{\nabla F^{cat}_{\sigma_q(\tau+1)} - \nabla F^{cat}_{\sigma_q(\tau)}} - \nabla F^{cat}_{\sigma_q(1)} = 0$. The fifth equality can be deduced using $\nabla F^{cat}_{\sigma_q(k)} = \parenthese{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T \otimes I_d} \nabla f^{cat}_{\sigma_q(k)}$ and $\parenthese{\mathbf{W} \otimes I_d}^{k} = \parenthese{\mathbf{W}^k \otimes I_d}$. Recall that $\norm{\mathbf{W}\otimes I_d} = \norm{\mathbf{W}}$. Taking the norm on \cref{eq:bound_dk_step1}, we have
\begin{align}
\norm{\vect{d}^{cat}_{q,k}}
\leq 2\sqrt{n}G \sum_{\tau = 1}^{k-1} \lambda_2^{k-\tau}
+ \sqrt{n}G\parenthese{\lambda_2^k + 1}
\leq 2\sqrt{n}G \parenthese{\frac{\lambda_2}{1-\lambda_2} + 1}
\end{align}
where we have used $\norm{\nabla f^{cat}_{\sigma_q(\tau+1)} - \nabla f^{cat}_{\sigma_q(\tau)}} \leq 2\sqrt{n}G$, $\norm{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T} \leq \lambda_2^{k}$, and $\norm{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T} \leq 1$ in the first inequality. Since for all $i \in \bracket{n}$
\begin{align}
\label{eq:bound_sum_useful}
\norm{\vect{d}^i_{q,k}} \leq \sum_{i=1}^n \norm{\vect{d}^i_{q,k}} \leq \sqrt{n} \parenthese{\sum_{i=1}^n \norm{\vect{d}^i_{q,k}}^2}^{1/2} = \sqrt{n}\norm{\vect{d}^{cat}_{q,k}}
\end{align}
the desired result follows.
\end{proof}
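The contraction fact $\norm{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T} \leq \lambda_2^{k}$ used above can be illustrated on the lazy-averaging matrix $\mathbf{W} = (1-c)I + \frac{c}{n}\mathbf{1}_n\mathbf{1}_n^T$ (a toy choice, assumed), for which $\mathbf{W}^k - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^T = (1-c)^k\parenthese{I - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^T}$ holds exactly, with $\lambda_2 = 1-c$:

```python
# Deviation from consensus decays geometrically at rate lambda_2 = 1 - c
# for the lazy-averaging matrix W = (1-c) I + (c/n) 11^T.
n, c, k = 4, 0.6, 5
P = [[1.0 / n] * n for _ in range(n)]  # (1/n) 11^T
W = [[(1 - c) * (1.0 if i == j else 0.0) + c / n for j in range(n)]
     for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

Wk = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
for _ in range(k):
    Wk = matmul(Wk, W)

lam2 = 1 - c
for i in range(n):
    for j in range(n):
        # W^k = P + lam2^k (I - P), entrywise.
        expected = P[i][j] + lam2 ** k * ((1.0 if i == j else 0.0) - P[i][j])
        assert abs(Wk[i][j] - expected) < 1e-12
```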
\begin{lemma}
Under \Cref{assum:stoch_grad}, let $\sigma_1^2 = 4n \bracket{\parenthese{\frac{G+G_0}{\frac{1}{\lambda_2}-1}}^2 + 2\sigma_0^2}$. For all $i \in \bracket{n}$ and $k \in \bracket{K}$, the variance of the local stochastic gradient is uniformly bounded:
\begin{align*}
\ensuremath{\mathbb{E}} \bracket{ \norm{
\vect{d}^i_{q,k} - \widetilde{\vect{d}}^i_{q,k}
}^2} \leq\sigma_1^2
\end{align*}
\end{lemma}
\begin{proof}
We denote by $\widetilde{\vect{d}}^{cat}$ the stochastic version of $\vect{d}^{cat}$. Following \cref{eq:bound_dk_step1}, we have
\begin{align}
\widetilde{\vect{d}}^{cat}_{q,k}
& = \sum_{\tau = 1}^{k-1}
\bracket{\parenthese{\mathbf{W}^{k-\tau} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}\otimes I_d}
\parenthese{
\widetilde{\nabla}f ^{cat}_{\sigma_q(\tau+1)} - \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau)}
} \nonumber \\
& \quad + \bracket{\parenthese{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}\otimes I_d}
\widetilde{\nabla}f ^{cat}_{\sigma_q(1)}
+ \parenthese{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T \otimes I_d}\widetilde{\nabla}f ^{cat}_{\sigma_q(k)}
\end{align}
Then, we have
\begin{align}
\label{eq:surrogate_grad_diff}
\vect{d}^{cat}_{q,k} - \widetilde{\vect{d}}^{cat}_{q,k}
&= \sum_{\tau = 1}^{k-1}
\bracket{\parenthese{\mathbf{W}^{k-\tau} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}\otimes I_d}
\parenthese{
\nabla f^{cat}_{\sigma_q(\tau+1)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau+1)}
+ \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau)}
- \nabla f^{cat}_{\sigma_q(\tau)}
} \nonumber \\
& \quad + \bracket{\parenthese{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}\otimes I_d}
\parenthese{
\nabla f^{cat}_{\sigma_q(1)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(1)}
} \nonumber \\
& \qquad+ \parenthese{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T \otimes I_d}
\parenthese{
\nabla f^{cat}_{\sigma_q(k)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(k)}
}
\end{align}
By \Cref{assum:stoch_grad}, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\nabla f^{cat}_{\sigma_q(\tau)}- \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau)}}^2}
= \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{\norm{\nabla f^{i}_{\sigma_q(\tau)} \parenthese{\vect{x}^i_{q,\tau}} - \widetilde{\nabla}f ^{i}_{\sigma_q(\tau)} \parenthese{\vect{x}^i_{q,\tau}}}^2} \leq n \sigma_0^2
\end{align}
The second moment of \cref{eq:surrogate_grad_diff} is written as
\begin{align}
\label{eq:bound_stoch_d_proof}
&\ensuremath{\mathbb{E}} \bracket{\norm{
\vect{d}^{cat}_{q,k} - \widetilde{\vect{d}}^{cat}_{q,k}
}^2} \nonumber \\
\leq & \ensuremath{\mathbb{E}} \bracket{\parenthese{\sum_{\tau = 1}^{k-1}
\norm{\mathbf{W}^{k-\tau} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}
\norm{
\nabla f^{cat}_{\sigma_q(\tau+1)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau+1)}
+ \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau)}
- \nabla f^{cat}_{\sigma_q(\tau)}
}}^2} \nonumber \\
& \quad + \ensuremath{\mathbb{E}} \bracket{\norm{\parenthese{{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}\otimes I_d}
\parenthese{
\nabla f^{cat}_{\sigma_q(1)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(1)}
}
+ \parenthese{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T \otimes I_d}
\parenthese{
\nabla f^{cat}_{\sigma_q(k)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(k)}
}
}
^2} \nonumber \\
\leq & \ensuremath{\mathbb{E}} \bracket{\parenthese{\sum_{\tau = 1}^{k-1}
\norm{\mathbf{W}^{k-\tau} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}
\norm{
\nabla f^{cat}_{\sigma_q(\tau+1)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau+1)}
+ \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau)}
- \nabla f^{cat}_{\sigma_q(\tau)}
}}^2} \nonumber \\
& \quad
+ 4\parenthese{\ensuremath{\mathbb{E}} \bracket{\norm{\mathbf{W}^{k} - \frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}^2
\norm{
\nabla f^{cat}_{\sigma_q(1)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(1)}
}^2}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\frac{1}{n}\mathbf{1}_n \mathbf{1}_n^T}^2
\norm{
\nabla f^{cat}_{\sigma_q(k)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(k)}
}^2}
} \nonumber \\
\leq & 4n\parenthese{G + G_0}^2 \parenthese{\sum_{\tau = 1}^{k-1} \lambda_2^{k-\tau} }^2
+ 4n\sigma_0^2 \parenthese{\lambda_2^{2k} + 1} \nonumber \\
\leq & 4n\parenthese{G + G_0}^2\parenthese{\frac{\lambda_2}{1-\lambda_2}}^2 + 4n\sigma_0^2(\lambda_2 + 1)
\leq 4n \bracket{\parenthese{\frac{G+G_0}{\frac{1}{\lambda_2}-1}}^2 + 2\sigma_0^2}
\end{align}
where the first inequality holds since $\ensuremath{\mathbb{E}} \bracket{\nabla f^{cat}_{\sigma_q(\tau+1)}
- \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau+1)}
+ \widetilde{\nabla}f ^{cat}_{\sigma_q(\tau)}
- \nabla f^{cat}_{\sigma_q(\tau)} } = 0$.
The second inequality follows from the fact that $\norm{a + b}^2 \leq 4\parenthese{\norm{a}^2 + \norm{b}^2}$. The third inequality comes from \Cref{assum:stoch_grad} and the analysis in \Cref{lmm:bound_d}. Finally, one can obtain the desired result by noticing that
$\ensuremath{\mathbb{E}} \bracket{\norm{\vect{d}^i_{q,k} - \widetilde{\vect{d}}^i_{q,k}}^2} \leq \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{\norm{\vect{d}^i_{q,k} - \widetilde{\vect{d}}^i_{q,k}}^2} = \ensuremath{\mathbb{E}} \bracket{\norm{\vect{d}^{cat}_{q,k} - \widetilde{\vect{d}}^{cat}_{q,k}}^2}$.
\end{proof}
\setcounter{theorem}{11}
\begin{lemma}[Lemma 6, \cite{Zhang:2019}]
\label[lemma]{lmm:bound_d_var_red}
Under \Cref{assum:assum_d_var_red}, \Cref{lmm:bound_d}, \Cref{lmm:stoch_variance} and setting $\rho_k = \frac{2}{\parenthese{k+3}^{2/3}}$ and $\rho_k = \frac{1.5}{\parenthese{K-k+2}^{2/3}}$ for $k \in \bracket{\frac{K}{2}}$ and $k \in \bracket{\frac{K}{2}+1,K}$ respectively, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{
\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}}
\leq \begin{dcases}
\frac{\sqrt{M}}{\parenthese{k+4}^{1/3}} \qquad k \in \bracket{\frac{K}{2}} \\
\frac{\sqrt{M}}{\parenthese{K-k+1}^{1/3}} \qquad k \in \bracket{\frac{K}{2} + 1,K}
\end{dcases}
\end{align}
where $M = \max\{M_1, M_2\}$ with $M_1 = \max \{5^{2/3} \parenthese{V_{\vect{d}} + L_0}^2 , M_0 \}$, $M_0 = 4\parenthese{V^2_{\vect{d}} + \sigma^2} + 32\sqrt{2}V_{\vect{d}}$, $M_2 = 2.55\parenthese{V^2_{\vect{d}} + \sigma^2} + \dfrac{7\sqrt{2}V_{\vect{d}}}{3}$, and $L_0 = \frac{2}{4^{2/3}} \norm{\widetilde{\vect{d}}^i_{q,1}}$.
\end{lemma}
\begin{proof}
To prove the lemma, it suffices to bound $\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}$. Following the decomposition in \cite{Zhang:2019}, we have
\begin{align}
\label{eq:var_red_develop}
\ensuremath{\mathbb{E}} &\bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
= \ensuremath{\mathbb{E}} \bracket{\norm{
\hat{\vect{d}}^i_{q,k-1}
- (1-\rho_k) \widetilde{\vect{a}}^i_{q,k-1}
- \rho_k \widetilde{\vect{d}}^i_{q,k}
}^2
} \nonumber \\
& = \rho_k^2 \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1}-\widetilde{\vect{d}}^i_{q,k}}^2}
+ (1-\rho_k)^2 \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}}^2} \nonumber \\
& \quad + (1-\rho_k)^2
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1} }^2} \nonumber \\
& \quad + 2\rho_k (1-\rho_k)
\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k}, \hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}
}} \nonumber\\
& \quad + 2\rho_k (1-\rho_k)
\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k},
\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}} \nonumber\\
& \quad + 2(1-\rho_k)^2
\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2},
\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}}
\end{align}
The first term of the above expansion satisfies
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{
\norm{
\hat{\vect{d}}^i_{q,k-1}-\widetilde{\vect{d}}^i_{q,k}
}^2
}
= \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\hat{\vect{d}}^i_{q,k-1}-\widetilde{\vect{d}}^i_{q,k}
}^2 \bigm\vert \mathcal{F}_{q,k-1}
}} \nonumber\\
& = \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\hat{\vect{d}}^i_{q,k-1}
- \vect{d}^i_{q,k}
+ \vect{d}^i_{q,k}
-\widetilde{\vect{d}}^i_{q,k}
}^2 \bigm\vert \mathcal{F}_{q,k-1}
}} \nonumber\\
& \leq \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\hat{\vect{d}}^i_{q,k-1}
- \vect{d}^i_{q,k}}^2
+ \norm{
\vect{d}^i_{q,k}
-\widetilde{\vect{d}}^i_{q,k}}^2
+ 2 \langle
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k},
\vect{d}^i_{q,k} -\widetilde{\vect{d}}^i_{q,k}
\rangle
\bigm\vert \mathcal{F}_{q,k-1}
}} \label{eq:bound_d0}
\end{align}
Using the definition of $\hat{\vect{d}}^i_{q,k-1}$, \Cref{lmm:bound_d}, \Cref{lmm:stoch_variance}, and the law of total expectation, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\hat{\vect{d}}^i_{q,k-1}
- \vect{d}^i_{q,k}}^2
\bigm\vert \mathcal{F}_{q,k-1}
}}
= \ensuremath{\mathbb{E}} \bracket{\operatorname{Var}_{\sigma}\left(
\vect{d}^i_{q,k} \bigm\vert \mathcal{F}_{q,k-1}
\right)
}
\leq \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{\vect{d}^i_{q,k}}^2 \bigm\vert \mathcal{F}_{q,k-1}
}}
\leq V_{\vect{d}}^2 \label{eq:bound_d1}
\end{align}
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{ \norm{
\vect{d}^i_{q,k} -\widetilde{\vect{d}}^i_{q,k}
}^2
\bigm\vert \mathcal{F}_{q,k-1}
}}
\leq \sigma^2_1 \label{eq:bound_d2}
\end{align}
Recall that $\mathcal{H}_{q,k}$ is the filtration capturing the randomness of $\vect{\widetilde{d}}^i_{q,k}$, and that both $\hat{\vect{d}}^i_{q,k-1}$ and $\vect{d}^i_{q,k}$ are $\mathcal{F}_{q,k}$-measurable. Then one can write
\begin{align}
&\ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k},
\vect{d}^i_{q,k} -\widetilde{\vect{d}}^i_{q,k}
}
\bigm\vert \mathcal{F}_{q,k-1}
}} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k},
\vect{d}^i_{q,k} -\widetilde{\vect{d}}^i_{q,k}
}
} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k},
\vect{d}^i_{q,k} -\vect{\widetilde{d}}^i_{q,k}
}
\bigm\vert \mathcal{F}_{q,k}
}} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k},
\ensuremath{\mathbb{E}}_{\sigma} \bracket{ \vect{d}^i_{q,k} -\vect{\widetilde{d}}^i_{q,k}
\bigm\vert \mathcal{F}_{q,k}
}}
} \tag{by $\mathcal{F}_{q,k}$-measurability} \nonumber\\
=& \ensuremath{\mathbb{E}} \bracket{ \ensuremath{\mathbb{E}}_{\vect{\widetilde{d}}} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k},
\ensuremath{\mathbb{E}}_{\sigma} \bracket{ \vect{d}^i_{q,k} -\vect{\widetilde{d}}^i_{q,k}
\bigm\vert \mathcal{F}_{q,k}
}}
}\bigm\vert \mathcal{H}_{q,k-1}} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k}, \ensuremath{\mathbb{E}}_{\vect{\widetilde{d}}} \bracket{
\ensuremath{\mathbb{E}}_{\sigma} \bracket{ \vect{d}^i_{q,k} -\vect{\widetilde{d}}^i_{q,k}
\bigm\vert \mathcal{F}_{q,k}
} \bigm\vert \mathcal{H}_{q,k-1}}
}} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \vect{d}^i_{q,k}, \ensuremath{\mathbb{E}}_{\sigma} \bracket{
\ensuremath{\mathbb{E}}_{\vect{\widetilde{d}}} \bracket{ \vect{d}^i_{q,k} -\vect{\widetilde{d}}^i_{q,k}
\bigm\vert \mathcal{H}_{q,k-1}
} \bigm\vert \mathcal{F}_{q,k}}
}} \nonumber \tag{by Fubini's theorem} \\
=& 0 \label{eq:bound_d3}
\end{align}
where the last equality holds since $\ensuremath{\mathbb{E}}_{\vect{\widetilde{d}}}\bracket{\vect{\widetilde{d}}^i_{q,k}\bigm\vert \mathcal{H}_{q,k-1}} = \vect{d}^i_{q,k}$. Combining \cref{eq:bound_d1,eq:bound_d2,eq:bound_d3}, \cref{eq:bound_d0} is upper bounded by
\begin{align}
\label{eq:bound_d_hat_tilde}
\ensuremath{\mathbb{E}} &\bracket{
\norm{
\hat{\vect{d}}^i_{q,k-1}-\vect{\widetilde{d}}^i_{q,k}
}^2
} \leq V_{\vect{d}}^2 + \sigma^2_1 \triangleq V
\end{align}
We are now bounding $\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}}^2} $, using the definition of $\hat{\vect{d}}^i_{q,k}$ and \Cref{lmm:bound_d}. We have
\begin{align}
\label{eq:bound_d_hat_diff}
&\ensuremath{\mathbb{E}} \bracket{\norm{
\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}
}^2} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}
}^2
\bigm\vert \mathcal{F}_{q,k-2}
}} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{ \ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\frac{\sum_{\ell=k}^K \vect{d}^i_{q,\ell}}{K-k+1}
- \frac{\sum_{\ell=k-1}^K \vect{d}^i_{q,\ell}}{K-k+2}
}^2
\bigm\vert \mathcal{F}_{q,k-2}
}} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{ \ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\frac{\sum_{\ell=k}^K \vect{d}^i_{q,\ell}}{K-k+1}
- \frac{\sum_{\ell=k}^K \vect{d}^i_{q,\ell}}{K-k+2}
- \frac{\vect{d}^i_{q,k-1}}{K-k+2}
}^2
\bigm\vert \mathcal{F}_{q,k-2}
}} \nonumber \\
=& \ensuremath{\mathbb{E}} \bracket{ \ensuremath{\mathbb{E}}_{\sigma} \bracket{
\norm{
\frac{\sum_{\ell=k}^K \vect{d}^i_{q,\ell}}{(K-k+1)(K-k+2)}
- \frac{\vect{d}^i_{q,k-1}}{K-k+2}
}^2
\bigm\vert \mathcal{F}_{q,k-2}
}} \nonumber \\
\leq& \ensuremath{\mathbb{E}} \bracket{ \ensuremath{\mathbb{E}}_{\sigma} \bracket{
\left(
\frac{\sum_{\ell=k}^K \norm{\vect{d}^i_{q,\ell}}}{(K-k+1)(K-k+2)}
+ \frac{\norm{\vect{d}^i_{q,k-1}}}{K-k+2}
\right)^2
\bigm\vert \mathcal{F}_{q,k-2}
}} \nonumber \\
\leq& \frac{4V^2_{\vect{d}}}{\left(K-k+2\right)^2}
\triangleq \frac{L}{\left(K-k+2\right)^2}
\end{align}
Moreover, we have
\begin{align}
\label{eq:bound_d4}
&\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k}, \hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}
}} \nonumber\\
=& \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma, \vect{\widetilde{d}}} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k}, \hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}
} \bigm\vert \mathcal{F}_{q,k-1}, \mathcal{H}_{q,k-1}
}} \nonumber\\
=& \ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\ensuremath{\mathbb{E}}_{\sigma, \vect{\widetilde{d}}} \bracket{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k} \bigm\vert \mathcal{F}_{q,k-1}, \mathcal{H}_{q,k-1}},
\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}
}} \nonumber\\
=& 0
\end{align}
since $\ensuremath{\mathbb{E}}_{\widetilde{\vect{d}}} \bracket{\vect{\widetilde{d}}^i_{q,k} \bigm\vert \mathcal{H}_{q,k-1}} = \vect{d}^i_{q,k}$ and $\ensuremath{\mathbb{E}}_{\sigma} \bracket{\vect{d}^i_{q,k} \bigm\vert \mathcal{F}_{q,k-1} } = \hat{\vect{d}}^i_{q,k-1}$. Using the same argument, we can deduce
\begin{align}
\label{eq:bound_d5}
&\ensuremath{\mathbb{E}} \bracket{\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k},
\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}} \nonumber\\
=& \ensuremath{\mathbb{E}} \bracket{\ensuremath{\mathbb{E}}_{\sigma, \vect{\widetilde{d}}} \bracket{
\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k},
\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
} \bigm\vert \mathcal{F}_{q,k-1}, \mathcal{H}_{q,k-1}
}} \nonumber\\
=& \ensuremath{\mathbb{E}} \bracket{
\scalarproduct{ \ensuremath{\mathbb{E}}_{\sigma, \vect{\widetilde{d}}} \bracket{
\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{d}}^i_{q,k} \bigm\vert \mathcal{F}_{q,k-1}, \mathcal{H}_{q,k-1}},
\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}} \nonumber\\
=& 0
\end{align}
where we have used the law of total expectation and the conditional unbiasedness of $\widetilde{\vect{d}}^{i}_{q,k}$.
Using Young's inequality and \cref{eq:bound_d_hat_diff}, one can write
\begin{align}
\label{eq:bound_d6}
\ensuremath{\mathbb{E}} &\bracket{\scalarproduct{
\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2},
\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}
}} \nonumber\\
& \quad \leq \ensuremath{\mathbb{E}} \bracket{
\frac{1}{2\alpha_k} \norm{\hat{\vect{d}}^i_{q,k-1} - \hat{\vect{d}}^i_{q,k-2}}^2
+ \frac{\alpha_k}{2} \norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2
} \nonumber \\
& \quad \leq \frac{L}{2\alpha_k (K-k+2)^2}
+ \frac{\alpha_k}{2} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\end{align}
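The cross-term split above relies on Young's inequality $\scalarproduct{u, v} \leq \frac{1}{2\alpha}\norm{u}^2 + \frac{\alpha}{2}\norm{v}^2$ for any $\alpha > 0$; a quick randomized spot-check (toy vectors, assumed):

```python
# Young's inequality <u, v> <= ||u||^2/(2*alpha) + alpha*||v||^2/2
# holds for any alpha > 0; verify on random vectors.
import random

random.seed(2)
for _ in range(100):
    u = [random.uniform(-1, 1) for _ in range(5)]
    v = [random.uniform(-1, 1) for _ in range(5)]
    alpha = random.uniform(0.1, 5.0)
    inner = sum(ui * vi for ui, vi in zip(u, v))
    bound = sum(ui * ui for ui in u) / (2 * alpha) \
        + alpha * sum(vi * vi for vi in v) / 2
    assert inner <= bound + 1e-12
```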
With the above analysis, we can deduce that
\begin{align}
\label{eq:bound_d_atilde_recur}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
&\leq \rho_k^2 V
+ (1-\rho_k)^2 \frac{L}{(K-k+2)^2}
+ (1-\rho_k)^2 \ensuremath{\mathbb{E}} \bracket{
\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2
} \nonumber\\
& \quad + (1-\rho_k)^2 \left(
\frac{L}{\alpha_k (K-k+2)^2}
+ \alpha_k \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\right)
\end{align}
Setting $\alpha_k = \frac{\rho_k}{2}$, we have
\begin{align}
\label{eq:recurrent_psi}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
& \leq \rho_k^2 V
+ \parenthese{1-\rho_k}^2 \parenthese{1+\frac{2}{\rho_k}} \frac{L}{\parenthese{K-k+2}^2} \nonumber \\
& \quad + \parenthese{1-\rho_k}^2 \parenthese{1+\frac{\rho_k}{2}} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \leq \rho_k^2 V
+ \parenthese{1+\frac{2}{\rho_k}} \frac{L}{\parenthese{K-k+2}^2}
+ \parenthese{1-\rho_k} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\end{align}
For $k \leq \frac{K}{2}+1$, we set $\rho_k = \frac{2}{\parenthese{k+3}^{2/3}}$. Recalling that $K-k+2 \geq k$, \cref{eq:recurrent_psi} becomes:
\begin{align}
\ensuremath{\mathbb{E}}& \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2} \nonumber \\
&\leq \frac{4}{\parenthese{k+3}^{4/3}}V
+ \parenthese{1 + \parenthese{k+3}^{2/3}} \frac{L}{k^2}
+ \parenthese{1 - \frac{2}{\parenthese{k+3}^{2/3}}} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \leq \frac{4}{\parenthese{k+3}^{4/3}}V
+ \parenthese{1 + \parenthese{k+3}^{2/3}} \frac{16L}{\parenthese{k+3}^2}
+ \parenthese{1 - \frac{2}{\parenthese{k+3}^{2/3}}} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \leq \frac{4}{\parenthese{k+3}^{4/3}}V
+ \frac{16L}{\parenthese{k+3}^{4/3}}
+\frac{16L}{\parenthese{k+3}^{4/3}}
+ \parenthese{1 - \frac{2}{\parenthese{k+3}^{2/3}}} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \leq \frac{4V + 32L}{\parenthese{k+3}^{4/3}}
+ \parenthese{1 - \frac{2}{\parenthese{k+3}^{2/3}}} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \nonumber \\
& \triangleq \frac{M_0}{\parenthese{k+3}^{4/3}}
+ \parenthese{1 - \frac{2}{\parenthese{k+3}^{2/3}}} \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\end{align}
We consider the base step where $k=1$,
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,0} - \widetilde{\vect{a}}^i_{q,1}}^2}
&= \ensuremath{\mathbb{E}} \bracket{\norm{\frac{1}{K} \sum_{\ell=1}^K \vect{d}^i_{q,\ell}
- \frac{2}{4^{2/3}} \widetilde{\vect{d}}^i_{q,1}}^2} \nonumber \\
& \leq \parenthese{
V_{\vect{d}} + \frac{2}{4^{2/3}} \norm{\widetilde{\vect{d}}^i_{q,1}}
}^2 \nonumber \\
& \leq \parenthese{
V_{\vect{d}} + \frac{2}{4^{2/3}} G_0
}^2 \nonumber \\
& \triangleq \parenthese{
V_{\vect{d}} + L_0
}^2
\end{align}
Set $M_1 = \max \curlybracket{5^{2/3} \parenthese{V_{\vect{d}} + L_0}^2 , M_0}$. For $k \in \bracket{\frac{K}{2}+1}$, we claim that $\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2} \leq \dfrac{M_1}{(k+4)^{2/3}}$. Supposing the claim holds for $k-1$, we have
\begin{align}
\label{eq:bound_d_k_small}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
& \leq \frac{M_1}{(k+3)^{4/3}} + \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2}
\parenthese{1 - \frac{2}{\parenthese{k+3}^{2/3}}} \nonumber \\
& \leq \frac{M_1}{(k+3)^{4/3}} + \frac{M_1}{(k+3)^{2/3}} \cdot
\frac{\parenthese{k+3}^{2/3} - 2}{\parenthese{k+3}^{2/3}} \nonumber \\
& \leq \frac{M_1 \parenthese{(k+3)^{2/3} - 1}}{(k+3)^{4/3}} \nonumber \\
& \leq \frac{M_1}{(k+4)^{2/3}}
\end{align}
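The last step above relies on the elementary inequality $\frac{(k+3)^{2/3}-1}{(k+3)^{4/3}} \leq \frac{1}{(k+4)^{2/3}}$; it can be sanity-checked numerically with a short sketch (the function name is ours):

```python
# Numerical sanity check of the induction-step inequality
# ((k+3)^(2/3) - 1) / (k+3)^(4/3) <= 1 / (k+4)^(2/3).
def induction_step_holds(k: int) -> bool:
    lhs = ((k + 3) ** (2 / 3) - 1) / (k + 3) ** (4 / 3)
    rhs = 1 / (k + 4) ** (2 / 3)
    return lhs <= rhs

# The inequality holds over the whole range used by the proof.
assert all(induction_step_holds(k) for k in range(1, 10_001))
```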
since $\dfrac{(k+3)^{2/3} - 1}{(k+3)^{4/3}} \leq \dfrac{1}{(k+4)^{2/3}}$. For $k \in \bracket{\frac{K}{2}+1,K}$, we set $\rho_k = \dfrac{1.5}{(K-k+2)^{2/3}}$, thus
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
& \leq \frac{2.25 V}{(K-k+2)^{4/3}}
+ \parenthese{1 + \frac{4}{3} \parenthese{K-k+2}^{2/3}} \frac{L}{\parenthese{K-k+2}^2} \nonumber \\
& \quad + \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \parenthese{1 - \frac{1.5}{\parenthese{K-k+2}^{2/3}}} \nonumber \\
& \leq \frac{2.25 V}{(K-k+2)^{4/3}}
+ \frac{L}{\parenthese{K-k+2}^{4/3}}
+ \frac{4}{3} \frac{L}{\parenthese{K-k+2}^{4/3}} \nonumber \\
& \quad + \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \parenthese{1 - \frac{1.5}{\parenthese{K-k+2}^{2/3}}} \nonumber \\
& \leq \frac{2.25 V + 7L/3}{(K-k+2)^{4/3}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \parenthese{1 - \frac{1.5}{\parenthese{K-k+2}^{2/3}}} \nonumber \\
& \triangleq \frac{M_2}{(K-k+2)^{4/3}}
+ \ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-2} - \widetilde{\vect{a}}^i_{q,k-1}}^2} \parenthese{1 - \frac{1.5}{\parenthese{K-k+2}^{2/3}}}
\end{align}
Let $M = \max \curlybracket{M_1, M_2}$ and $k \in \bracket{\frac{K}{2} + 1,K}$, we claim that $\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2} \leq \dfrac{M}{\parenthese{K-k+1}^{2/3}}$. The base step is verified by \cref{eq:bound_d_k_small}. We now suppose the claim holds for $k-1$ and prove it for $k$.
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
\label{eq:bound_d_k_big}
& \leq \frac{M}{\parenthese{K-k+2}^{4/3}} + \frac{M}{\parenthese{K-k+2}^{2/3}} \cdot \frac{\parenthese{K-k+2}^{2/3} - 1.5}{\parenthese{K-k+2}^{2/3}} \nonumber\\
& = \frac{M \parenthese{\parenthese{K-k+2}^{2/3} - 0.5}}{\parenthese{K-k+2}^{4/3}} \nonumber\\
& \leq \frac{M}{\parenthese{K-k+1}^{2/3}}
\end{align}
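Similarly, the last inequality of \cref{eq:bound_d_k_big} uses $\frac{m^{2/3}-0.5}{m^{4/3}} \leq \frac{1}{(m-1)^{2/3}}$ with $m = K-k+2 \geq 2$; a small numerical sketch (names are ours):

```python
# Check (m^(2/3) - 0.5) / m^(4/3) <= 1 / (m-1)^(2/3) for m >= 2,
# where m stands for K - k + 2 in the proof.
def second_regime_holds(m: int) -> bool:
    lhs = (m ** (2 / 3) - 0.5) / m ** (4 / 3)
    rhs = 1 / (m - 1) ** (2 / 3)
    return lhs <= rhs

assert all(second_regime_holds(m) for m in range(2, 10_001))
```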
Thus, from \cref{eq:bound_d_k_small} and \cref{eq:bound_d_k_big}, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}
\leq \begin{dcases}
\frac{M}{\parenthese{k+4}^{2/3}} \qquad k \in \bracket{1,\frac{K}{2}} \\
\frac{M}{\parenthese{K-k+1}^{2/3}} \qquad k \in \bracket{\frac{K}{2}+1,K}
\end{dcases}
\end{align}
Thus, using Jensen's inequality, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}}
&\leq \sqrt{\ensuremath{\mathbb{E}} \bracket{\norm{\hat{\vect{d}}^i_{q,k-1} - \widetilde{\vect{a}}^i_{q,k}}^2}} \nonumber \\
& \leq \begin{dcases}
\frac{\sqrt{M}}{\parenthese{k+4}^{1/3}} \qquad k \in \bracket{1,\frac{K}{2}} \\
\frac{\sqrt{M}}{\parenthese{K-k+1}^{1/3}} \qquad k \in \bracket{\frac{K}{2}+1,K}
\end{dcases}
\end{align}
\end{proof}
\begin{claim}
\label{clm:f_hat_f_bar}
\begin{align}
\label{eq:F_bar_hat}
\ensuremath{\mathbb{E}} \bracket{\norm{\nabla \overline{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \nabla \hat{F}_{q,k-1}}}
\leq \beta D
\end{align}
\end{claim}
\begin{claimproof}
Recall the definition of $\overline{F}_{q,k-1}$ and $\hat{F}_{q,k-1}$,
\begin{align}
\overline{F}_{q,k-1} (\overline{\vect{x}}_{q,k}) = \frac{1}{K-k+1} \sum_{\ell=k}^K \frac{1}{ n}\sum_{i=1}^n f^i_{\sigma_q (\ell)} (\overline{\vect{x}}_{q,k}) \nonumber\\
\hat{F}_{q,k-1} = \frac{1}{K-k+1} \sum_{\ell=k}^K \frac{1}{ n}\sum_{i=1}^n f^i_{\sigma_q (\ell)} (\vect{x}^i_{q,\ell}) \nonumber
\end{align}
we have,
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{\norm{\nabla \overline{F}_{q,k-1}(\overline{\vect{x}}_q^k) - \nabla \hat{F}_{q,k-1}}} \nonumber \\
&= \ensuremath{\mathbb{E}} \bracket{\norm{
\frac{1}{K-k+1} \cdot \frac{1}{n} \sum_{\ell=k}^K \sum_{i=1}^n
\parenthese{
\nabla f^i_{\sigma_{q} (\ell)} \parenthese{\overline{\vect{x}}_{q,k}}
- \nabla f^i_{\sigma_{q} (\ell)} \parenthese{\vect{x}^i_{q,\ell}}
}
}} \nonumber \\
& \leq \ensuremath{\mathbb{E}} \bracket{
\frac{1}{K-k+1} \cdot \frac{1}{n} \sum_{\ell=k}^K \sum_{i=1}^n
\norm{
\nabla f^i_{\sigma_{q} (\ell)} \parenthese{\overline{\vect{x}}_{q,k}}
- \nabla f^i_{\sigma_{q} (\ell)} \parenthese{\vect{x}^i_{q,\ell}}
}
} \nonumber \\
& \leq \ensuremath{\mathbb{E}} \bracket{
\frac{1}{K-k+1} \cdot \frac{1}{n} \sum_{\ell=k}^K \sum_{i=1}^n
\beta \norm{
\overline{\vect{x}}_{q,k} - \vect{x}^i_{q,\ell}
}
} \nonumber \tag{by $\beta$-smoothness} \\
& \leq \beta D
\end{align}
\end{claimproof}
\begin{claim}
\label{clm:bound_F_K}
\begin{equation}
\sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \overline{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}_{q,k}^i}} \leq \beta D + \parenthese{N + \sqrt{M}} 3K^{2/3}
\end{equation}
\end{claim}
\begin{claimproof}
\begin{align}
\label{eq:bound_with_K}
\sum_{k=1}^K & \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \overline{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \vect{\Tilde{a}}_{q,k}^i}} \nonumber \\
& \leq \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{ \norm{\nabla \overline{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \nabla \hat{F}_{q,k-1}}}
+ \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \hat{F}_{q,k-1} - \vect{\widetilde{a}}_{q,k}^i }} \nonumber \\
& \leq \beta D
+ \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \hat{F}_{q,k-1} - \hat{\vect{d}}^i_{q,k-1}}}
+ \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{ \hat{\vect{d}}^i_{q,k-1} - \vect{\widetilde{a}}_{q,k}^i}}
\end{align}
where we have used \Cref{clm:f_hat_f_bar} and the triangle inequality in the last step. Using \Cref{lmm:bound_d_avg_consensus}, we have
\begin{align}
\label{eq:bound_value_d_avg_consensus}
\sum_{k=1}^K &\ensuremath{\mathbb{E}} \bracket{\norm{\nabla \hat{F}_{q,k-1} - \hat{\vect{d}}^i_{q,k-1}}} \nonumber \\
&= \sum_{k=1}^{K/2} \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \hat{F}_{q,k-1} - \hat{\vect{d}}^i_{q,k-1}}}
+ \sum_{k=K/2 + 1}^{K}\ensuremath{\mathbb{E}}\bracket{\norm{\nabla \hat{F}_{q,k-1} - \hat{\vect{d}}^i_{q,k-1}}} \nonumber \\
& \leq \sum_{k=1}^{K/2}\frac{N}{k} + \sum_{k=K/2 + 1}^{K}\frac{N}{K-k+1}
\end{align}
By \Cref{lmm:bound_d_var_red}, we also have
\begin{align}
\label{eq:bound_value_d_var_red}
\sum_{k=1}^K &\ensuremath{\mathbb{E}} \bracket{\norm{ \hat{\vect{d}}^i_{q,k-1} - \vect{\widetilde{a}}_{q,k}^i}} \nonumber \\
&= \sum_{k=1}^{K/2}\ensuremath{\mathbb{E}} \bracket{\norm{ \hat{\vect{d}}^i_{q,k-1} - \vect{\widetilde{a}}_{q,k}^i}}
+ \sum_{k=K/2 + 1}^{K}\ensuremath{\mathbb{E}} \bracket{\norm{ \hat{\vect{d}}^i_{q,k-1} - \vect{\widetilde{a}}_{q,k}^i}} \nonumber \\
& \leq \sum_{k=1}^{K/2} \frac{\sqrt{M}}{\parenthese{k+4}^{1/3}}
+ \sum_{k=K/2 + 1}^{K}\frac{\sqrt{M}}{\parenthese{K-k+1}^{1/3}}
\end{align}
Combining \cref{eq:bound_value_d_avg_consensus} and \cref{eq:bound_value_d_var_red}, \cref{eq:bound_with_K} becomes
\begin{align}
\sum_{k=1}^K & \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \overline{F}_{q,k-1}(\overline{\vect{x}}_q^k) - \vect{\Tilde{a}}_{q,k}^i}} \nonumber \\
&\leq \beta D
+ \sum_{k=1}^{K/2} \parenthese{\frac{N}{k} + \frac{\sqrt{M}}{\parenthese{k+4}^{1/3}}}
+ \sum_{k=K/2 + 1}^{K} \parenthese{\frac{N}{K-k+1} + \frac{\sqrt{M}}{\parenthese{K-k+1}^{1/3}}} \nonumber \\
& \leq \beta D
+ \parenthese{N + \sqrt{M}} \sum_{k=1}^{K/2} \frac{1}{\parenthese{k+4}^{1/3}}
+ \parenthese{N + \sqrt{M}} \sum_{k=K/2 + 1}^{K} \frac{1}{\parenthese{K-k+1}^{1/3}} \nonumber \\
& \leq \beta D
+ \parenthese{N + \sqrt{M}} \sum_{k=1}^{K/2} \frac{1}{k^{1/3}}
+ \parenthese{N + \sqrt{M}} \sum_{l=1}^{K/2} \frac{1}{l^{1/3}} \nonumber \\
& \leq \beta D
+ 2\parenthese{N + \sqrt{M}} \int_{0}^{K/2} \frac{1}{s^{1/3}}ds \nonumber \\
& \leq \beta D + 2\parenthese{N + \sqrt{M}} \frac{3}{2}\parenthese{\frac{K}{2}}^{2/3} \nonumber \\
& \leq \beta D + \parenthese{N + \sqrt{M}} 3K^{2/3}
\end{align}
\end{claimproof}
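The integral comparison used in the final chain, $\sum_{k=1}^{n} k^{-1/3} \leq \int_0^{n} s^{-1/3}\,\mathrm{d}s = \frac{3}{2} n^{2/3}$, can also be verified numerically; a minimal sketch (names are ours):

```python
# Partial sums of k^(-1/3) are dominated by the integral bound (3/2) n^(2/3).
def sum_inverse_cube_root(n: int) -> float:
    return sum(k ** (-1 / 3) for k in range(1, n + 1))

for n in (1, 5, 50, 500, 5000):
    assert sum_inverse_cube_root(n) <= 1.5 * n ** (2 / 3)
```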
\section{Experiments}
\label{chap:experiment}
We run the algorithm on a movie recommendation problem, with the goal of identifying a set of $k$ movies that satisfy all users. Our setting is closely related to the ones in \cite{MokhtariHK18} and \cite{xie19b}. We use the MovieLens dataset, which contains one million ratings, ranging from 1 to 5, from 6000 users on 3883 movies. We divide the data set into $T$ batches $B_1, \dots, B_T$, with each batch $B_t$ containing ratings from 50 users. We choose Complete, Line, Grid, and Erdos-Renyi graphs with link probability $0.2$, and set the number of nodes/agents to $10$, $25$, and $50$. At each iteration $t$, agent $i$ receives a subset of ratings $B^i_t \subset B_t$. Let $\mathcal{M}$ be the set of movies and $\mathcal{U}$ the set of users; we write $r(u,m)$ for the rating of user $u \in \mathcal{U}$ for movie $m \in \mathcal{M}$. Let $S \subset \mathcal{M}$ be a collection of movies such that $|S| = k$; the facility location function associated with each agent $i$ is defined as
\begin{align}
\label{pb:facility_location}
f(B^i_t, S) = f^i_t(S) = \frac{1}{|B^i_t|} \sum_{u \in B^i_t} \max_{m \in S} r(u,m)
\end{align}
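As an illustration, the facility location objective above can be sketched in a few lines of Python (function and variable names are ours; treating unrated movies as rating $0$ is our assumption for the sketch, not part of the MovieLens pipeline):

```python
def facility_location(batch_ratings, S):
    """f(B, S) = (1/|B|) * sum over users u in B of max_{m in S} r(u, m).

    batch_ratings: dict mapping user -> {movie: rating};
    movies not rated by a user contribute a rating of 0.
    """
    movies = list(S)
    total = sum(max(user_ratings.get(m, 0) for m in movies)
                for user_ratings in batch_ratings.values())
    return total / len(batch_ratings)
```

Adding a movie to $S$ can only increase the value, reflecting the monotonicity of the objective.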
We define the constraint set $\mathcal{K} = \curlybracket{\vect{x} \in \bracket{0,1}^d \vert \sum_{j=1}^d \vect{x}_j = k}$. The multilinear extension of $f^i_t$ is defined as
\begin{align}
F^i_t (\vect{x}) = \sum_{S \subset \mathcal{M}} f^i_t (S) \prod_{j \in S} \vect{x}_j \prod_{\ell \not\in S} \parenthese{1-\vect{x}_\ell}, \quad \forall \vect{x} \in \mathcal{K}
\end{align}
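For tiny ground sets, the multilinear extension can be evaluated exactly by enumerating all subsets; the sketch below (exponential in the ground-set size, for illustration only; names are ours) matches the definition above term by term:

```python
from itertools import combinations

def multilinear_extension(f, ground, x):
    """F(x) = sum_S f(S) * prod_{j in S} x_j * prod_{l not in S} (1 - x_l)."""
    total = 0.0
    for r in range(len(ground) + 1):
        for subset in combinations(ground, r):
            S = set(subset)
            p = 1.0
            for j in ground:
                p *= x[j] if j in S else 1 - x[j]
            total += f(S) * p
    return total
```

For a modular function $f(S) = \sum_{j \in S} w_j$ the extension reduces to $\sum_j w_j \vect{x}_j$, which gives a quick correctness check.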
The goal is to maximize the global objective function $F_t(\vect{x}) = \frac{1}{n} \sum_{i=1}^n F^i_t (\vect{x})$, subject to $\vect{x} \in \mathcal{K}$, while using only local communication and partial information about each local function.
\begin{figure}[htb!]
\centering
\subfloat[$(1-1/e)$-Regret on Complete Graph]{\label{fig:regret-k}
\includegraphics[width=.45\textwidth]{chapters/img/k20/regret_K.png}}
\subfloat[Ratio on Complete graph]{\label{fig:ratio-k}
\includegraphics[width=.45\textwidth]{chapters/img/k20/ratio_K.png}}
\caption{Algorithm performance on complete graph. The number of nodes is 10, 25 and 50. }
\label{fig:regret}
\end{figure}
\Cref{fig:regret-k} shows the $\parenthese{1-\frac{1}{e}}$-regret of the algorithm for $k=20$ on a complete graph with different node configurations. We observe that increasing the network size leads to an increase in the regret value, which is expected in a decentralized setting because information distributed across a larger set of nodes makes reaching consensus more difficult. Recall that the algorithm uses the same value for each function $f_t$ in block $q$. If we set $K = 17$ and $Q = 6$, we can expect a stepwise-like curve, since the objective function's value changes significantly at every round $t$ that is a multiple of $17$. In a small graph configuration, this value change is more pronounced, bringing the cumulative sum of the objective function closer to the $\parenthese{1-\frac{1}{e}}$-optimal value.
\Cref{fig:ratio-k} depicts the ratio of our algorithm's objective value on a complete graph to that of an offline centralized Frank-Wolfe. As $t$ increases, the ratio approaches one, demonstrating that our algorithm's performance is comparable to that of the offline setting when the algorithm runs for many rounds, particularly in the $10$-node configuration. Thus, the results validate our theoretical analysis in the previous section.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.4]{chapters/img/avg_3.png}
\caption{Average objective value over $T$ rounds as a function of the cardinality constraint.}
\label{fig:vary-k}
\end{figure}
\Cref{fig:vary-k} shows the average value of the objective function over $T$ rounds for all graph types when the number of movie suggestions $k$ is varied in a $50$-node configuration. The average degree for Erdos-Renyi, Complete, Grid, and Line is $5.8, 51, 5.4$, and $4$, respectively. As a result, we observe lower performance on less connected graphs when compared to other graph settings. We also notice that increasing the value of $k$ is equivalent to relaxing the cardinality constraint, which results in better performance on the objective function.
\section{Bandit Setting}
\label{chap:bandit}
This section describes a bandit algorithm for decentralized submodular maximization. We let $\mathcal{K}$ be a down-closed convex set. A major difference from the previous algorithm is that the function value $f^i_{t}(\vect{x}^i_t)$ is the only information provided to the agent: it does not know the value it would have incurred had it chosen another decision in the constraint set. As a consequence, this setting makes access to the gradient impossible for the agent. To circumvent this limitation, we use the one-point gradient estimate \cite{FlaxmanKalai05:Online-convex} and adapt the biphasic bandit setting \cite{Zhang:2019} to our decentralized algorithm.
We recall that a function $f_t$ defined on $\mathcal{K} \subset \mathbb{R}^d$ admits a $\delta$-smoothed version for any $\delta > 0$, given as
\begin{align*}
\hat{f}_{t,\delta} (\vect{x}_t) = \ensuremath{\mathbb{E}}_{\vect{v} \sim \mathbb{B}_d} \bracket{f_t(\vect{x}_t + \delta \vect{v})}
\end{align*}
where $\vect{v}$ is drawn uniformly at random from the $d$-dimensional unit ball. The value of $\hat{f}_{t,\delta}$ at a point $\vect{x}$ is the average of $f_t$ over the $d$-dimensional ball of radius $\delta$ centered at $\vect{x}$. This function inherits various functional properties from $f_t$ and is therefore a suitable approximation of $f_t$, as shown in the following lemma.
\begin{lemma}[Lemma 2 \cite{ChenZHK20}, Lemma 6.6 \cite{Hazanothers16:Introduction-to-online}]
\label[lemma]{lmm:one-point-grad}
Let $f$ be a monotone continuous DR-submodular function. If $f$ is $\beta$-smooth and $G$-Lipschitz, then so is $\hat{f}_\delta$, and we have $\norm{\hat{f}_{\delta}(\vect{x}) - f (\vect{x})} \leq \delta G$. Moreover, if we choose $\vect{u}$ uniformly from the unit sphere $\mathbb{S}^{d-1}$, the following equation holds
\begin{align}
\nabla \hat{f}_{t,\delta} (\vect{x}) = \ensuremath{\mathbb{E}}_{\vect{u} \sim \mathbb{S}_{d-1}}\bracket{\frac{d}{\delta} f_t (\vect{x} + \delta \vect{u}) \vect{u}} \label{eq:grad-one-point}
\end{align}
\end{lemma}
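A minimal sketch of the estimator in \cref{eq:grad-one-point} (function names are ours; averaging many independent samples approximates $\nabla \hat{f}_{t,\delta}(\vect{x})$):

```python
import math
import random

def sample_sphere(d, rng):
    """Draw u uniformly from the unit sphere S^{d-1} via normalized Gaussians."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def one_point_gradient(f, x, delta, rng):
    """Single-sample estimate (d/delta) * f(x + delta * u) * u."""
    d = len(x)
    u = sample_sphere(d, rng)
    value = f([xi + delta * ui for xi, ui in zip(x, u)])
    return [(d / delta) * value * ui for ui in u]
```

For a linear $f$, the smoothed gradient equals the true gradient, so the empirical mean of many one-point estimates converges to it.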
\Cref{lmm:one-point-grad} shows that a decision maximizing $\hat{f}_{t,\delta}$ also approximately maximizes $f_t$. The $\delta$-smoothed version additionally provides a one-point gradient estimate that can be used to estimate the gradient of $f_t$ by evaluating the function at a random point on the $(d-1)$-dimensional sphere of radius $\delta$. It is important to note that the point $\vect{x} + \delta \vect{u}$ may lie outside the set when $\vect{x}$ is close to the constraint set's boundary. For this reason, we let $\mathcal{K}' \subset \mathcal{K}$ be the $\delta$-interior of $\mathcal{K}$, which satisfies $\mathbb{B}(\vect{x}, \delta) \subset \mathcal{K}$ for all $\vect{x} \in \mathcal{K}'$, and solve the optimization problem on the new set $\mathcal{K}'$. By shrinking the constraint set down to $\mathcal{K}'$, we ensure that the point $\vect{x}+\delta \vect{u}$ is in $\mathcal{K}$ for any point $\vect{x}$ in $\mathcal{K}'$. Moreover, if the distance $d(\mathcal{K'}, \mathcal{K})$ between $\mathcal{K}'$ and $\mathcal{K}$ is small enough, we can approximately recover the optimal regret bound on the original constraint set $\mathcal{K}$ by running the bandit algorithm on $\mathcal{K}'$. The details of the construction of $\mathcal{K}'$ are given in \Cref{lmm:discrepancy}.
The biphasic setting consists of partitioning the $T$ rounds into $Q$ blocks of size $L$, with each block consisting of two phases: exploration and exploitation.
Each agent $i$ performs $K < L$ steps of exploration by updating the decision vector $\vect{x}^i_{q,k}$ using \cref{eq:submodular-update}.
During the exploration phase, rather than playing the final decision as in \Cref{algo:online-dist-FW}, the agent draws a random vector $\vect{u}^i_{q,k}$ uniformly from $\mathbb{S}^{d-1}$ and plays $\vect{x}^i_{q,k} + \delta \vect{u}^i_{q,k}$ for the function $f^i_{\sigma_q (k)}$, as it can only estimate the gradient at the point it plays. The gradient estimate $\widetilde{\vect{h}}^i_{q,k}$ is then computed using \cref{eq:grad-one-point}, followed by a local aggregation and a variance reduction step; the final step consists of feeding the variance-reduced vector $\widetilde{\vect{a}}^i_{q,k}$ back to the oracle $\mathcal{O}^i_k$. The remaining $L-K$ iterations are used for exploitation, where each agent plays the final decision $\vect{x}^i_q$ to obtain a high reward. We give the details of every step in \Cref{algo:online-bandit}. \Cref{chap:bandit_analysis} contains the analysis of \Cref{thm:regret_bandit}.
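The local aggregation and variance-reduction updates described above can be sketched as follows (a toy implementation on plain lists; names are ours, and $W$ is assumed doubly stochastic):

```python
def gossip_and_variance_reduce(W, g, a_prev, rho):
    """One aggregation round: d_i = sum_j W[i][j] * g_j, then
    a_i = (1 - rho) * a_prev_i + rho * d_i (variance reduction)."""
    n, dim = len(g), len(g[0])
    d = [[sum(W[i][j] * g[j][c] for j in range(n)) for c in range(dim)]
         for i in range(n)]
    a = [[(1 - rho) * a_prev[i][c] + rho * d[i][c] for c in range(dim)]
         for i in range(n)]
    return d, a
```

With a doubly stochastic $W$, the mixing step preserves the network average of the local estimates, which is what the consensus analysis exploits.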
\begin{algorithm}[ht!]
\begin{flushleft}
\textbf{Input}: Smoothing radius $\delta$, $\delta$-interior $\mathcal{K}'$ with lower bound $\underline{u}$,
a time horizon $T$, a block size $L$, a number of exploration steps $K$, online linear optimization oracles $\mathcal{O}_{i,1}, \ldots, \mathcal{O}_{i,K}$ for each player $1 \leq i \leq n$,
step sizes $\eta_k, \rho_k \in (0, 1)$ for all $1 \leq k \leq K$, and the number of blocks $Q=T/L$
\end{flushleft}
\begin{algorithmic}[1]
\STATE Initialize linear optimizing oracle $\mathcal{O}^i_k$ for all $1 \leq k \leq K$
\FOR {$q = 1$ to $Q$}
\FOR{every agent $1 \leq i \leq n$} %
\STATE Initialize $\vect{x}^i_{q,1} \gets \underline{u}$ and set $\widetilde{\vect{a}}^i_{q,0} \gets 0$
\STATE Update $\vect{x}^i_{q,k}$ using line 5 to 10 of \Cref{algo:online-dist-FW}. Choose $\vect{x}^{i}_{q} \gets \vect{x}^{i}_{q,K+1}$
%
\STATE Let $\sigma_q$ be a random permutation of $1, \ldots, L$, the time steps in phase $q$.
\FOR{$1 \leq \ell \leq L$}
\STATE Let $s = \sigma_q^{-1}(\ell)$
\IF{$\ell \leq K$}
\STATE play $f^i_{q,\ell} \parenthese{\vect{x}^i_{q,s} + \delta \vect{u}^i_{q,s}}$ where $\vect{u}^i_{q,s} \in \mathbb{S}^{d-1}$. - \textit{Exploration}
\ELSE
\STATE play $f^i_{q,\ell} \parenthese{\vect{x}^i_{q}}$. - \textit{Exploitation}
\ENDIF
\ENDFOR
\STATE Set $\widetilde{\vect{g}}^{i}_{q,1} \gets \frac{d}{\delta} f^i_{\sigma_q(1)} \parenthese{\vect{x}^i_{q,1} + \delta \vect{u}^i_{q,1}}\vect{u}^i_{q,1}$
\FOR{$1 \leq k \leq K$}
\STATE Let $\widetilde{\vect{h}}^i_{q,k} = \frac{d}{\delta} f^i_{\sigma_q (k)} \parenthese{\vect{x}^i_{q,k} + \delta \vect{u}^i_{q,k}} \vect{u}^i_{q,k}$
\STATE Send $\widetilde{\vect{g}}^{i}_{q,k}$ to all neighbours $N(i)$.
\STATE After receiving $\widetilde{\vect{g}}^{j}_{q,k}$ from all neighbours $j \in N(i)$, compute
$\widetilde{\vect{d}}^{i}_{q,k} \gets \sum_{j \in N(i)} W_{ij} \widetilde{\vect{g}}^{j}_{q,k}$
\STATE $\widetilde{\vect{g}}^{i}_{q,k + 1} \gets \widetilde{\vect{h}}^i_{q,k+1} - \widetilde{\vect{h}}^i_{q,k} + \widetilde{\vect{d}}^{i}_{q,k}$
\STATE $\widetilde{\vect{a}}^i_{q,k} \gets (1 - \rho_k) \cdot \widetilde{\vect{a}}^i_{q, k-1} + \rho_k \cdot \widetilde{\vect{d}}^{i}_{q, k}$.
\STATE Feedback function $\langle \widetilde{\vect{a}}^{i}_{q,k}, \cdot \rangle$
to oracles $\mathcal{O}^i_k$. (The cost of the oracle $\mathcal{O}^i_k$ at block $q$ is
$\langle \widetilde{\vect{a}}^{i}_{q,k}, \vect{v}^{i}_{q,k} \rangle$.)
\ENDFOR
%
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{Bandit Monode Frank-Wolfe}
\label{algo:online-bandit}
\end{algorithm}
\begin{lemma}[Lemma 1, \cite{Zhang:2019}]
\label[lemma]{lmm:discrepancy}
Let $\mathcal{K}$ be a down-closed convex set and let $\delta$ be sufficiently small such that $\alpha = \frac{\sqrt{d}+1}{r}\delta < 1$. The set $\mathcal{K}' = (1-\alpha)\mathcal{K} + \delta \mathbf{1}$ is a convex, compact and down-closed $\delta$-interior of $\mathcal{K}$ that satisfies $d(\mathcal{K}, \mathcal{K}') \leq \parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} \delta$.
\end{lemma}
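A minimal sketch of this shrinking map for the special case $\mathcal{K} = [0,1]^d$, which contains a ball of radius $r = 1/2$ (the containment check is coordinate-wise and all names are ours):

```python
import math

def shrink_to_interior(x, delta, r, d):
    """Map x in K to (1 - alpha) * x + delta * 1, alpha = (sqrt(d) + 1) * delta / r.

    Sketch for K = [0, 1]^d; r is the radius of a ball contained in K.
    """
    alpha = (math.sqrt(d) + 1) * delta / r
    assert alpha < 1, "delta is too large for this r"
    return [(1 - alpha) * xi + delta for xi in x]

# Every coordinate of the image lies in [delta, 1 - delta], so the
# delta-ball around the image stays inside the unit box.
corner = shrink_to_interior([1.0, 1.0, 1.0], 0.02, 0.5, 3)
assert all(0.02 <= v <= 1 - 0.02 for v in corner)
```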
\begin{theorem}
\label{thm:regret_bandit}
Let $\mathcal{K}$ be a down-closed convex and compact set. We suppose the $\delta$-interior $\mathcal{K}'$ satisfies \Cref{lmm:discrepancy}. Let $Q = T^{2/9}, L = T^{7/9}, K = T^{2/3}$, $\delta = \frac{r}{\sqrt{d}+2}T^{-1/9}$, $\rho_k = \frac{2}{(k+2)^{2/3}}$ and $\eta_k = \frac{1}{K}$. Then the expected $\parenthese{1-\frac{1}{e}}$-regret is upper bounded by
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T} \leq ZT^{8/9} + \frac{\beta D^2}{2}T^{1/9}
+ \frac{3}{2} D \frac{d \parenthese{\sqrt{d}+2}}{r} P_{n,\lambda_2} T^{2/9} + \beta D^2 T^{3/9}
\end{align}
where we note
$ Z = \parenthese{1-\frac{1}{e}} \parenthese{\sqrt{d}\parenthese{\frac{R}{e} +1} + \frac{R}{r}} G \frac{r}{\sqrt{d}+2}
+ \parenthese{2-\frac{1}{e}}G \frac{r}{\sqrt{d}+2}+ 2\beta + C$
and
\begin{align*}
P_{n,\lambda_2} = {} & k_0 \cdot n B\max \curlybracket{\lambda_2\parenthese{1 + \frac{2}{1-\lambda_2}}, 2} \\
& + 4^{1/3}\parenthese{24n^2 \parenthese{\frac{1}{\frac{1}{\lambda_2}-1} + 1}^2 + 8n\parenthese{\frac{1}{\parenthese{\frac{1}{\lambda_2}-1}^2}+2}}^{1/2}
\end{align*}
\end{theorem}
\subsection{Proof of \Cref{thm:submod}}
\label{chap:submodular_analysis}
\setcounter{theorem}{12}
\begin{lemma}
\label[lemma]{lmm:submod_basic}
If $F_t$ is monotone continuous DR-submodular and $\beta$-smooth, and $\vect{x}_{t,k+1} = \vect{x}_{t,k} + \frac{1}{K} \vect{v}_{t,k}$ for $k \in \curlybracket{1, \ldots, K}$, then
\begin{align}
F_t(\vect{x}^*) - F_t(\vect{x}_{t,k+1})
&\leq \parenthese{1 - 1/K} \bracket{F_t (\vect{x}^*) - F_t(\vect{x}_{t,k})} \\ \nonumber
&- \frac{1}{K}\bracket{
-\norm{
\nabla F_t(\vect{x}_{t,k}) - \vect{d}_{t,k}
}D
+ \scalarproduct{\vect{d}_{t,k}, \vect{v}_{t,k} - \vect{x}^*}
} + \frac{\beta D^2}{2K^2}
\end{align}
\end{lemma}
\begin{proof}
The proof is essentially based on the analysis of \cite{ChenHarshaw18:Projection-Free-Online}. By $\beta$-smoothness of $F_t$,
\begin{align}
F_{t} &(\vect{x}_{t,k+1})
\geq F_{t}(\vect{x}_{t,k})
+ \scalarproduct{\nabla F_{t}(\vect{x}_{t,k}),\vect{x}_{t,k+1} - \vect{x}_{t,k}}
- \frac{\beta}{2} \norm{\vect{x}_{t,k+1} - \vect{x}_{t,k}}^2 \nonumber \\
& \geq F_{t}(\vect{x}_{t,k})
+ \frac{1}{K} \scalarproduct{\nabla F_{t}(\vect{x}_{t,k}), \vect{v}_{t,k}}
- \frac{\beta}{2} \frac{D^2}{K^2} \tag{since $\norm{\vect{v}_{t,k}} \leq D$} \\
& \geq F_{t}(\vect{x}_{t,k})
+ \frac{1}{K}
\left[\scalarproduct{\nabla F_{t}(\vect{x}_{t,k}) - \vect{d}_{t,k}, \vect{v}_{t,k} - \vect{x}^*}
+ \scalarproduct{\nabla F_{t}(\vect{x}_{t,k}), \vect{x}^*}
+ \scalarproduct{\vect{d}_{t,k}, \vect{v}_{t,k} - \vect{x}^*} \right]
- \frac{\beta}{2} \frac{D^2}{K^2} \label{eq:submod_smooth}
\end{align}
By the Cauchy-Schwarz inequality, note that
$$ \scalarproduct{\nabla F_{t}(\vect{x}_{t,k}) - \vect{d}_{t,k}, \vect{v}_{t,k} - \vect{x}^*} \geq -\norm{\nabla F_{t}(\vect{x}_{t,k}) - \vect{d}_{t,k}} D$$
Using concavity along non-negative direction and monotonicity of $F_t$, we have,
\begin{align}
F_t(\vect{x}^*) - F_t(\vect{x}_{t,k})
&\leq F_t(\vect{x}^* \vee \vect{x}_{t,k}) - F_t(\vect{x}_{t,k}) \nonumber\\
& \leq \scalarproduct{\nabla F_t(\vect{x}_{t,k}), (\vect{x}^* \vee \vect{x}_{t,k}) -\vect{x}_{t,k}} \nonumber\\
& = \scalarproduct{\nabla F_t(\vect{x}_{t,k}), (\vect{x}^* - \vect{x}_{t,k}) \vee 0} \nonumber\\
& \leq \scalarproduct{\nabla F_t(\vect{x}_{t,k}), \vect{x}^*}
\end{align}
then, \cref{eq:submod_smooth} becomes
\begin{align}
F_{t} &(\vect{x}_{t,k+1})
\geq F_{t}(\vect{x}_{t,k})
+ \scalarproduct{\nabla F_{t}(\vect{x}_{t,k}),\vect{x}_{t,k+1} - \vect{x}_{t,k}}
- \frac{\beta}{2} \norm{\vect{x}_{t,k+1} - \vect{x}_{t,k}}^2 \nonumber \\
&\geq F_{t}(\vect{x}_{t,k})
+ \frac{1}{K}
\bracket{-\norm{\nabla F_{t}(\vect{x}_{t,k}) - \vect{d}_{t,k}} D
+ F_t(\vect{x}^*) - F_t(\vect{x}_{t,k})
+ \scalarproduct{\vect{d}_{t,k}, \vect{v}_{t,k} - \vect{x}^*}}
- \frac{\beta}{2} \frac{D^2}{K^2}
\end{align}
Adding and subtracting $F_t(\vect{x}^*)$ and multiplying both sides by $-1$ yields \Cref{lmm:submod_basic}.
\end{proof}
\setcounter{theorem}{3}
\begin{theorem}
Given a convex set $\mathcal{K}$ with diameter $D$. Assume that the functions $F_t$ are monotone continuous DR-submodular, $\beta$-smooth and $G$-Lipschitz. Set $Q=T^{2/5}$, $K=T^{3/5}$, $T=QK$ and step size $\eta_k = \frac{1}{K}$. Let $\rho_k = \frac{2}{\parenthese{k+3}^{2/3}}$ when $1 \leq k \leq \frac{K}{2}+1$ and $\rho_k = \frac{1.5}{\parenthese{K-k+2}^{2/3}}$ when $\frac{K}{2} +1 \leq k \leq K$. Then, the expected $\parenthese{1-\frac{1}{e}}$-regret is at most
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T} \leq \frac{3}{2}\beta D^2 T^{2/5} + \parenthese{C + 3D\parenthese{N+\sqrt{M}}}T^{4/5}
\end{align}
where the constants are defined in \Cref{thm:convex}.
\end{theorem}
\begin{proof}
We apply \Cref{lmm:submod_basic} with $F_t = \Bar{F}_{q,k-1}$, $\vect{x}_{t,k} = \overline{\vect{x}}_{q,k}$ and $\vect{d}_{t,k} = \frac{1}{n}\sum_{i=1}^n \widetilde{\vect{a}}^i_{q,k}$; we have
\begin{align}
\label{eq:submod_upperbound}
\Bar{F}_{q,k-1}&(\vect{x}^*) - \Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k+1})
\leq \parenthese{1-\frac{1}{K}}\bracket{\Bar{F}_{q,k-1}(\vect{x}^*) - \Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k})} \nonumber\\
& + \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n
\bracket{\norm{\nabla \Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}}D
+ \scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{x}^* - \vect{v}^i_{q,k}}}
+ \frac{\beta}{2} \frac{D^2}{K^2}
\end{align}
As $\ensuremath{\mathbb{E}} \bracket{\Bar{F}_{q,k-1}(\vect{x}^*) - \Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k})} = \ensuremath{\mathbb{E}} \bracket{\Bar{F}_{q,k-2}(\vect{x}^*) - \Bar{F}_{q,k-2}(\overline{\vect{x}}_{q,k})}$, we can apply \cref{eq:submod_upperbound} recursively for $k \in \{1, \ldots, K\}$, thus
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{\Bar{F}_{q,0}(\vect{x}^*) - \Bar{F}_{q,0}(\overline{\vect{x}}_{q})}
\leq \parenthese{1-\frac{1}{K}}^K \ensuremath{\mathbb{E}} \bracket{\Bar{F}_{q,0}(\vect{x}^*) - \Bar{F}_{q,0}(\overline{\vect{x}}_{q,1})} \nonumber\\
& + \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K
\ensuremath{\mathbb{E}} \bracket{\norm{\nabla \Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}}D}
+ \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{x}^* - \vect{v}^i_{q,k}}}
+ \frac{\beta}{2} \frac{D^2}{K}
\end{align}
Note that $\parenthese{1 - \dfrac{1}{K}}^K \leq \dfrac{1}{e}$ and $\Bar{F}_{q,0}(\overline{\vect{x}}_{q,1}) \geq 0$, we have
\begin{align}
\ensuremath{\mathbb{E}}&\bracket{\parenthese{1-\frac{1}{e}}\Bar{F}_{q,0}(\vect{x}^*) - \Bar{F}_{q,0}(\overline{\vect{x}}_{q})}
\leq \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\norm{\nabla \Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}}D} \nonumber \\
& \quad + \frac{1}{K} \cdot \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{x}^* - \vect{v}^i_{q,k}}}
+ \frac{\beta}{2} \frac{D^2}{K}
\end{align}
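The bound $\parenthese{1-\frac{1}{K}}^K \leq \frac{1}{e}$ used above holds for every integer $K \geq 1$; a short numerical sanity check (names are ours):

```python
import math

# (1 - 1/K)^K increases toward 1/e but never exceeds it.
def contraction_factor(K: int) -> float:
    return (1 - 1 / K) ** K

for K in (1, 2, 10, 100, 10_000):
    assert contraction_factor(K) <= 1 / math.e
```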
Let $T=QK$. Using \Cref{clm:bound_F_K} and noting that the oracle has regret $\mathcal{R}_Q \leq C \sqrt{Q}$, we have
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{\mathcal{R}_T}
= \ensuremath{\mathbb{E}} \bracket{
\sum_{q=1}^Q K \bracket{\parenthese{1-\frac{1}{e}}\Bar{F}_{q,0}(\vect{x}^*) - \Bar{F}_{q,0}(\overline{\vect{x}}_{q})}
} \nonumber \\
& \leq \frac{D}{n} \sum_{q=1}^Q \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{ \norm{\nabla \Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}}}
+ \frac{1}{n} \sum_{q=1}^Q \sum_{i=1}^n \sum_{k=1}^K \ensuremath{\mathbb{E}} \bracket{\scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{x}^* - \vect{v}^i_{q,k}}}
+ \frac{\beta}{2} QD^2 \nonumber \\
& \leq QD\parenthese{\beta D + \parenthese{N + \sqrt{M}} 3K^{2/3}}
+ KC\sqrt{Q} + \frac{\beta QD^2}{2}
\end{align}
Setting $Q = T^{2/5}$ and $K=T^{3/5}$, the expected regret of the algorithm is upper bounded by
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T}
&\leq T^{2/5}\parenthese{\beta D^2 + 3D\parenthese{N + \sqrt{M}} T^{2/5}} + CT^{4/5} + \frac{\beta D^2 T^{2/5}}{2} \nonumber\\
& \leq \frac{3}{2}\beta D^2 T^{2/5} + \parenthese{C + 3D\parenthese{N+\sqrt{M}}}T^{4/5}
\end{align}
\end{proof}
\subsection{Related Works}
\label{chap:relatedworks}
\paragraph{Decentralized Online Optimization.} %
\cite{Yan:2013} introduced decentralized online projected subgradient descent and showed vanishing regret for convex and strongly convex functions.
In contrast,~\cite{Hosseini:2013} extended distributed dual averaging technique to the online setting, using a general regularized projection for both unconstrained and constrained optimization.
A distributed variant of online conditional gradient~\cite{Hazanothers16:Introduction-to-online} was designed and analyzed in~\cite{Zhang:2017} that requires linear minimizers and uses exact gradients.
However, computing exact gradients may be prohibitively expensive for moderately sized data and intractable when a closed form does not exist. \cite{submod-timevarying} proposes a decentralized online algorithm for maximizing monotone submodular functions on a time-varying network using stochastic gradient estimates and multiple optimization oracles. This work achieves the optimal regret bound of $O(T^{1/2})$ but requires $T^{3/2}$ gradient evaluations and communications per function. In this work, we go further and design a distributed algorithm that uses stochastic gradient estimates and requires only one gradient evaluation per function.
\paragraph{Monotone DR-submodular Maximization.}
The maximization of monotone DR-sub\-modular functions has been investigated in both offline and online settings. For the offline case, \cite{BianMirzasoleiman17:Guaranteed-Non-convex} examined the problem where the constraint set is a down-closed convex set and demonstrated that the greedy method \cite{CalinescuChekuri11:Maximizing-a-monotone}, a variation of the Frank-Wolfe algorithm, ensures a $(1-1/e)$-approximation. \cite{HassaniSoltanolkotabi17:Gradient-methods} demonstrated the limitations of the greedy method in a stochastic environment where only unbiased gradient estimates are available. Later, \cite{MokhtariHassani18:Conditional-Gradient} introduced an algorithm for maximizing monotone DR-submodular functions over general convex sets using new variance reduction techniques to accomplish a $(1-1/e)$-approximation in a stochastic setting. \cite{ChenHarshaw18:Projection-Free-Online} suggested a method that achieves $(1-1/e, O(\sqrt{T}))$-regret for maximizing monotone DR-submodular functions over a general convex set in an online setting. Subsequently, \cite{Zhang:2019} introduced an approach that reduces the number of per-function gradient evaluations from $T^{3/2}$ to 1, while maintaining the same approximation ratio of $(1-1/e)$. They also presented a bandit approach that achieves an expected $(1-1/e)$-approximation ratio with regret $T^{8/9}$ for the same problem.
\subsection{Proof of \Cref{thm:convex}}
\label{chap:convex_analysis}
\setcounter{theorem}{2}
\begin{theorem}
Let $\mathcal{K}$ be a convex set with diameter $D$. Assume that the functions $F_t$ are convex, \emph{$\beta$-smooth} and satisfy $\norm{\nabla F_t} \leq G$ for $t \in \bracket{T}$. Set $Q=T^{2/5}$, $K=T^{3/5}$, $T=QK$ and step size $\eta_k = \frac{1}{k}$. Let $\rho_k = \frac{2}{\parenthese{k+3}^{2/3}}$ and $\rho_k = \frac{1.5}{\parenthese{K-k+2}^{2/3}}$ when $k \in \bracket{1,\frac{K}{2}}$ and $k \in \bracket{\frac{K}{2} +1,K}$ respectively. Then, the expected regret of \Cref{algo:online-dist-FW} is at most
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T}
\leq \parenthese{GD + 2\beta D^2}T^{2/5}
+ \parenthese{C + 6D\parenthese{N + \sqrt{M}}}T^{4/5}
+ \frac{3}{5}\beta D^2 T^{2/5}\log (T)
\end{align}
where $N = k_0 \cdot nG\max \{\lambda_2 \parenthese{1 + \frac{2}{1-\lambda_2}}, 2\}$
and $M = \max\{M_1, M_2\}$ where
$ M_0 = 4 \parenthese{V^2_{\vect{d}} + \sigma_1^2} + 128 V^2_{\vect{d}}$,
$ M_1 = \max \curlybracket{5^{2/3} \parenthese{V_{\vect{d}} + \frac{2}{4^{2/3}} G_0}^2 , M_0}$
and
$ M_2 = 2.55\parenthese{V^2_{\vect{d}} + \sigma^2_1} + \dfrac{28 V^2_{\vect{d}}}{3}$. All the constants are defined in \Cref{lmm:bound_d}, \Cref{lmm:stoch_variance}, \Cref{lmm:bound_d_avg_consensus} and \Cref{lmm:bound_d_var_red}.
\end{theorem}
\begin{proof}
\begin{align}
\label{eq:main_lemma}
\ensuremath{\mathbb{E}} &\bracket{
\Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k+1}) - \Bar{F}_{q,k-1}(\vect{x}^*)
} \nonumber \\
& \leq (1-\eta_k) \ensuremath{\mathbb{E}} \bracket{
\Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \Bar{F}_{q,k-1}(\vect{x}^*)
}
+ \frac{\eta_k}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\scalarproduct{\widetilde{\vect{a}}^i_{q,k}, \vect{v}^i_{q,k} - \vect{x}^*}
} \nonumber \\
& \quad + \frac{\eta_k}{n} D \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\norm{
\nabla \Bar{F}_{q,k-1} (\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}
}
}
+ \frac{\beta}{2} \eta_k^2 D^2
\end{align}
As
$\ensuremath{\mathbb{E}} \bracket{\Bar{F}_{q,k-1}(\overline{\vect{x}}_{q,k}) - \Bar{F}_{q,k-1}(\vect{x}^*)} = \ensuremath{\mathbb{E}} \bracket{\Bar{F}_{q,k-2}(\overline{\vect{x}}_{q,k}) - \Bar{F}_{q,k-2}(\vect{x}^*)}$, we can apply \cref{eq:main_lemma} recursively for $k \in \{1, \ldots, K\}$, thus
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{
\Bar{F}_{q,0}(\overline{\vect{x}}_{q}) - \Bar{F}_{q,0}(\vect{x}^*)
} \nonumber \\
& \leq \prod_{k=1}^K (1-\eta_k) \ensuremath{\mathbb{E}} \bracket{
\Bar{F}_{q,0}(\overline{\vect{x}}_{q,1}) - \Bar{F}_{q,0}(\vect{x}^*)
}
+ \sum_{k=1}^K \prod_{k'=k+1}^K (1-\eta_{k'}) \frac{\eta_k}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\langle \widetilde{\vect{a}}^i_{q,k}, \vect{v}^i_{q,k} - \vect{x}^*\rangle
} \nonumber \\
& \qquad + \sum_{k=1}^K \prod_{k'=k+1}^K (1-\eta_{k'}) \frac{\eta_k}{n} D \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\norm{
\nabla \Bar{F}_{q,k-1} (\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}
}
}
+ \frac{\beta}{2}D^2 \sum_{k=1}^K \prod_{k'=k+1}^K (1-\eta_{k'}) \eta_k^2
\end{align}
Choosing $\eta_k = \frac{1}{k}$, we have
$$ \prod_{k=r}^K (1-\eta_k) \leq \exp\left(-\sum_{k=r}^K \frac{1}{k}\right) \leq \frac{r}{K} $$
We have then,
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{
\Bar{F}_{q,0}(\overline{\vect{x}}_{q}) - \Bar{F}_{q,0}(\vect{x}^*)
} \nonumber \\
& \leq \frac{1}{K} \ensuremath{\mathbb{E}} \bracket{
\Bar{F}_{q,0}(\overline{\vect{x}}_{q,1}) - \Bar{F}_{q,0}(\vect{x}^*)
}
+ \sum_{k=1}^K \frac{k+1}{K} \cdot \frac{1}{k} \cdot \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\langle \widetilde{\vect{a}}^i_{q,k}, \vect{v}^i_{q,k} - \vect{x}^*\rangle
} \nonumber \\
& \qquad + \sum_{k=1}^K \frac{k+1}{K}\cdot \frac{1}{k} \cdot\frac{1}{n} D \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\norm{
\nabla \Bar{F}_{q,k-1} (\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}
}
}
+ \frac{\beta}{2}D^2 \sum_{k=1}^K \frac{k+1}{K} \cdot \frac{1}{k^2}
\end{align}
This may be simplified by using $\frac{k+1}{K} \cdot \frac{1}{k} \leq \frac{2}{K}$:
\begin{align}
\ensuremath{\mathbb{E}} &\bracket{
\Bar{F}_{q,0}(\overline{\vect{x}}_{q}) - \Bar{F}_{q,0}(\vect{x}^*)
} \nonumber \\
& \leq \frac{1}{K} \ensuremath{\mathbb{E}} \bracket{
\Bar{F}_{q,0}(\overline{\vect{x}}_{q,1}) - \Bar{F}_{q,0}(\vect{x}^*)
}
+ \frac{2}{K} \cdot \frac{1}{n}\sum_{k=1}^K \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\langle \widetilde{\vect{a}}^i_{q,k}, \vect{v}^i_{q,k} - \vect{x}^*\rangle
} \nonumber \\
& \qquad + \frac{2}{K} \cdot \frac{1}{n} D \sum_{k=1}^K \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\norm{
\nabla \Bar{F}_{q,k-1} (\overline{\vect{x}}_{q,k}) - \widetilde{\vect{a}}^i_{q,k}
}
}
+ \frac{\beta D^2}{2} \frac{2}{K}\sum_{k=1}^K \frac{1}{k} \nonumber \\
& \leq \frac{GD}{K}
+ \frac{2}{K} \cdot \frac{1}{n}\sum_{k=1}^K \sum_{i=1}^n \ensuremath{\mathbb{E}} \bracket{
\langle \widetilde{\vect{a}}^i_{q,k}, \vect{v}^i_{q,k} - \vect{x}^*\rangle
} \nonumber \\
& \qquad + \frac{2}{K} \cdot D\parenthese{\beta D + \parenthese{N + \sqrt{M}} 3K^{2/3}}
+ \frac{\beta D^2}{K}\log K
\end{align}
where we have used \Cref{clm:bound_F_K}, the \emph{G}-Lipschitz property of $\Bar{F}_{q,0}$ and the boundedness of $\mathcal{K}$. Since $T = QK$, assume that the oracle at round $k$ has a regret of order $\mathcal{O}\parenthese{\sqrt{Q}}$, i.e.,
\begin{align*}
\ensuremath{\mathbb{E}} \bracket{\sum_{q=1}^Q
\langle \widetilde{\vect{a}}^i_{q,k}, \vect{v}^i_{q,k} - \vect{x}^*\rangle
} \leq C\sqrt{Q}
\end{align*}
then the expected regret of the algorithm is upper bounded by
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T}
&= \ensuremath{\mathbb{E}} \bracket{
\sum_{q=1}^Q K \parenthese{\Bar{F}_{q,0}(\overline{\vect{x}}_{q}) - \Bar{F}_{q,0}(\vect{x}^*)}
} \nonumber \\
& \leq QGD + CKQ^{1/2} + 2QD\parenthese{\beta D + \parenthese{N + \sqrt{M}} 3K^{2/3}} + Q\beta D^2 \log K \nonumber \\
& \leq QGD + CKQ^{1/2} + 2Q\beta D^2 + 6D\parenthese{N + \sqrt{M}} QK^{2/3} + Q\beta D^2 \log K \nonumber \\
& \leq \parenthese{GD + 2\beta D^2}Q + CKQ^{1/2} + 6D\parenthese{N + \sqrt{M}} QK^{2/3} + Q\beta D^2\log K
\end{align}
Setting $Q = T^{2/5}$ and $K=T^{3/5}$, we have
\begin{align}
\ensuremath{\mathbb{E}} \bracket{\mathcal{R}_T}
\leq \parenthese{GD + 2\beta D^2}T^{2/5}
+ \parenthese{C + 6D\parenthese{N + \sqrt{M}}}T^{4/5}
+ \frac{3}{5}\beta D^2 T^{2/5}\log (T)
\end{align}
\end{proof}
\section{Preliminaries and Notations}
\label{chap:preliminaries}
\label{sec:math_notation}
We begin by explaining the notations and concepts that will be used throughout the paper. Given an undirected graph $G = (V, E)$, the set of neighbors of an agent $i \in V$ is denoted $N(i) := \{j \in V: (i,j) \in E\}$. Consider the following symmetric matrix $\mathbf{W} \in \mathbb{R}_{+}^{n \times n}$ with entry $W_{ij}$ defined as follows.
\begin{align*}
W_{ij} = \begin{cases}
\dfrac{1}{1+\max\{d_i, d_j\}} & \text{if $(i,j) \in E$}\\
0 & \text{if $(i,j) \not\in E$,$i \neq j$}\\
1 - \sum_{j \in N(i)} W_{ij} & \text{if $i=j$}
\end{cases}
\end{align*}
where $d_i = |N(i)|$, the degree of vertex $i$.
In fact, the matrix $\mathbf{W}$ is doubly stochastic, i.e., $\mathbf{W} \vect{1} = \mathbf{W}^T \vect{1} = \vect{1}$, and so it inherits several useful properties of doubly stochastic matrices.
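As an illustration (a hypothetical sketch, not code from the paper), the following snippet constructs $\mathbf{W}$ from the adjacency matrix of a small graph and verifies symmetry and double stochasticity numerically:

```python
import numpy as np

def metropolis_weights(adj):
    """Build W from a 0/1 symmetric adjacency matrix (no self-loops)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)                        # d_i = |N(i)|
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()               # diagonal absorbs the rest
    return W

adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])                   # a 4-cycle
W = metropolis_weights(adj)
assert np.allclose(W, W.T)                       # symmetric
assert np.allclose(W.sum(axis=1), 1.0)           # W 1 = 1
assert np.allclose(W.sum(axis=0), 1.0)           # W^T 1 = 1
```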
We use boldface letters, e.g., $\vect{x}$, to represent vectors. We denote by $\vect{x}^i_{q,k}$ the decision vector of agent $i$ at time step $k$ of phase $q$. Unless specified otherwise, we suppose that the constraint set $\mathcal{K}$ is a bounded convex set with diameter $D = \sup_{\vect{x}, \vect{y} \in \mathcal{K}} \| \vect{x} - \vect{y} \|$ and radius $R = \sup_{\vect{x} \in \mathcal{K}} \norm{\vect{x}}$. For two vectors $\vect{x}, \vect{y} \in \mathbb{R}^d$, we write $\vect{x} \leq \vect{y}$ if $x_i \leq y_i$ for all $i$. We denote by $\mathbb{B}_d$ and $\mathbb{S}_d$ the $d$-dimensional unit ball and the unit sphere, respectively.
A continuous function $F : [0,1]^d \rightarrow \mathbb{R}_+$ is \emph{DR-submodular} if for any vectors $\vect{x}, \vect{y} \in [0,1]^d$ with $\vect{x} \leq \vect{y}$, any constant $\alpha > 0$ and any basis vector $\vect{e}_i = \parenthese{0, \dots, 1, \dots, 0}$ such that $\vect{x} + \alpha \vect{e}_i \in [0,1]^d$ and $\vect{y} + \alpha \vect{e}_i \in [0,1]^d$, it holds that
\begin{align}
F(\vect{x} + \alpha \vect{e}_i) - F(\vect{x}) \geq F(\vect{y} + \alpha \vect{e}_i) - F(\vect{y})
\end{align}
For a differentiable function, the DR-property is equivalent to $\nabla F(\vect{x}) \geq \nabla F(\vect{y})$ for all $\vect{x} \leq \vect{y} \in [0,1]^d$. Moreover, if $F$ is twice-differentiable, the DR-property is equivalent to all entries of the Hessian matrix being non-positive, i.e., $\frac{\partial^2 F}{\partial \vect{x}_i \partial \vect{x}_j } \leq 0$ for all $1 \leq i,j \leq d$. A function $F$ is \emph{monotone} if $F(\vect{x}) \leq F(\vect{y})$ for all $\vect{x} \leq \vect{y} \in [0,1]^d$.
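For a concrete (illustrative) instance, the coverage-like function $F(\vect{x}) = 1 - \prod_i (1-x_i)$ is monotone DR-submodular: its partial derivatives $\partial F/\partial x_i = \prod_{l\neq i}(1-x_l)$ are non-negative and entrywise non-increasing in $\vect{x}$. A short numerical check of both properties:

```python
import numpy as np

def F(x):
    # F(x) = 1 - prod_i (1 - x_i), a standard monotone DR-submodular example
    return 1.0 - np.prod(1.0 - x)

def grad_F(x):
    # dF/dx_i = prod_{l != i} (1 - x_l)
    d = len(x)
    return np.array([np.prod(np.delete(1.0 - x, i)) for i in range(d)])

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(0, 1, 4)
    y = np.minimum(x + rng.uniform(0, 1, 4) * (1 - x), 1.0)  # y >= x in [0,1]^4
    assert np.all(grad_F(x) >= grad_F(y) - 1e-12)  # gradient antitone: DR-property
    assert F(y) >= F(x) - 1e-12                    # monotonicity
```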
A function $F$ is \emph{$\beta$-smooth} if for all $\vect{x}, \vect{y} \in \mathcal{K}$ :
\begin{align*}
F(\vect{y}) \leq F(\vect{x}) + \langle \nabla F(\vect{x}), \vect{y}-\vect{x} \rangle + \frac{\beta}{2}\|\vect{y}-\vect{x}\|^2
\end{align*} or equivalently $\| \nabla F(\vect{x}) - \nabla F(\vect{y})\| \leq \beta \| \vect{x} - \vect{y} \|$. Also, we say a function $F$ is \emph{$G$-Lipschitz} if for all $\vect{x}, \vect{y} \in \mathcal{K}$
\begin{align*}
|F(\vect{x}) - F(\vect{y}) | \leq G \|\vect{x} - \vect{y}\|
\end{align*}
In this paper, we employ linear optimization oracles in our algorithm to solve an online linear optimization problem given a feedback function and a constraint set. In particular, in the online linear optimization problem, one must choose $\vect{u}^{t} \in \mathcal{K}$ at every time step $1 \leq t \leq T$. The adversary then discloses a vector $\vect{d}^{t}$ and reveals the cost function $\langle \cdot , \vect{d}^t \rangle$; the goal is to minimize the regret with respect to the linear objective.
Several algorithms \cite{Hazanothers16:Introduction-to-online}, including the projection-free follow-the-perturbed-leader algorithm, achieve the optimal regret bound of $\mathcal{R}_T = O(\sqrt{T})$ for the online linear optimization problem. Any of these methods can serve as the oracle in our algorithm.
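The following is a minimal, hypothetical sketch of such an oracle for the special case $\mathcal{K} = [0,1]^d$, where linear minimization decomposes coordinate-wise; the perturbation scale $\sqrt{t}$ is an illustrative choice and not taken from the paper:

```python
import numpy as np

def ftpl_box(loss_vectors, seed=0):
    """Follow-the-perturbed-leader for linear losses <u, d_t> over [0,1]^d."""
    rng = np.random.default_rng(seed)
    dim = loss_vectors[0].shape[0]
    cum = np.zeros(dim)                      # cumulative loss vector seen so far
    decisions = []
    for t, d_t in enumerate(loss_vectors, start=1):
        noise = rng.uniform(0.0, np.sqrt(t), dim)     # fresh perturbation
        # argmin over the box: set u_i = 1 exactly where the perturbed cost < 0
        decisions.append((cum - noise < 0).astype(float))
        cum += d_t
    return decisions

# On a constant loss sequence the oracle locks onto the best fixed point.
losses = [np.array([1.0, -1.0])] * 200
dec = ftpl_box(losses)
assert np.array_equal(dec[-1], np.array([0.0, 1.0]))
```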
In practice, it may not be possible to use a full gradient due to the vast quantity of data and processing restrictions. To address this issue, our approach utilizes an unbiased stochastic gradient in place of the gradient and proposes a variance reduction technique for distributed optimization based on a rigorous analysis that may be applied to problems of independent interest. We make the following assumptions for the next two sections.
\begin{assumption}
\label[assumption]{assum:assum_1}
We let $k_0$ be the smallest integer such that
$\lambda_2(\mathbf{W}) \leq \parenthese{\frac{k_0}{k_0 +1}}^2$.
\end{assumption}
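\Cref{assum:assum_1} pins down $k_0$ as a function of the second-largest eigenvalue of $\mathbf{W}$: solving $\lambda_2 \leq \parenthese{\frac{k_0}{k_0+1}}^2$ for the smallest positive integer gives $k_0 = \max\curlybracket{1, \lceil \sqrt{\lambda_2}/(1-\sqrt{\lambda_2}) \rceil}$ for $0 < \lambda_2 < 1$. A small (illustrative) check of this closed form:

```python
import math

def k0(lambda2):
    """Smallest positive integer k with lambda2 <= (k/(k+1))**2."""
    s = math.sqrt(lambda2)
    return max(1, math.ceil(s / (1.0 - s)))

for lam in [0.1, 0.25, 0.5, 0.9]:
    k = k0(lam)
    assert lam <= (k / (k + 1)) ** 2           # k_0 satisfies the assumption
    assert k == 1 or lam > ((k - 1) / k) ** 2  # and is the smallest such integer
```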
\begin{assumption}
\label[assumption]{assum:stoch_grad}
The function $f_t$ is $G$-Lipschitz and $\beta$-smooth. Its stochastic gradient $\widetilde{\nabla} f_{t} (\vect{x})$ is unbiased, uniformly upper-bounded and has a bounded variance, i.e., $\ensuremath{\mathbb{E}} \bracket{\widetilde{\nabla} f_{t} (\vect{x})} = \nabla f_{t} (\vect{x})$, $\norm{\widetilde{\nabla} f_t(\vect{x})} \leq G_0$, and $\ensuremath{\mathbb{E}} \bracket{\norm{\widetilde{\nabla} f_{t} (\vect{x}) - \nabla f_{t}(\vect{x})}^2} \leq \sigma^2_0$.
\end{assumption}
\begin{assumption}
\label[assumption]{assum:max_bound_f}
For all $t \in \bracket{T}$ and $i \in \bracket{n}$, $\sup_{\vect{x} \in \mathcal{K}} | f_t^i (\vect{x}) | \leq B$.
\end{assumption}
\begin{assumption}
\label[assumption]{assum:ball_r}
There exists a number $r \geq 0$ such that $r \mathbb{B}_d \subseteq \mathcal{K}$.
\end{assumption}
\section*{Acknowledgments}
\bibliographystyle{unsrt}
\section{Introduction}
\begin{definition}
Let $M^n$ and $P^{n+k}$ be smooth manifolds and $f:M^n \to P^{n+k}$ a smooth ($\mathcal C^\infty$-) map. $f$ is called a \emph{corank $1$ map} if $\rk df_x \geq n-1$ for all $x \in M^n$. A stable corank $1$ map is called a \emph{Morin map}.
\end{definition}
\begin{definition}
Given a Morin map $f$ we say that $x\in M^n$ is a \emph{$\Sigma^{1_r,0}$-point} if there exists a regular curve $\gamma: (\R,0) \to (M,x)$ going through $x$ that has $\frac{\partial^r(f\circ \gamma)}{\partial t^r}(0)=0$, but no such regular curve satisfies $\frac{\partial^{r+1}(f\circ \gamma)}{\partial t^{r+1}}(0)=0$.
\end{definition}
Morin \cite{Morin} showed that for a fixed $r$ all $\Sigma^{1_r,0}$ germs are left-right equivalent ($\mathcal A$-equivalent) and that for $r \neq s$ the $\Sigma^{1_r,0}$ germs are not equivalent to $\Sigma^{1_s,0}$ germs.
\begin{definition}
Given a Morin map $f: M^n \to P^{n+k}$ we denote by $\Sigma(f)$ the set of its singular points and we denote by $\Sigma^{1_r,0}(f)$ the set of its $\Sigma^{1_r,0}$-points.
\end{definition}
\begin{definition}
A Morin map is called a \emph{$\Sig{r}$-map} if it has no $\Sigma^{1_s,0}$-points with $s>r$.
\end{definition}
\begin{definition}
A corank $1$ map $f:M^n \to P^{n+k}$ equipped with a trivialization of its kernel line bundle is called a \emph{prim} map.
\end{definition}
Note that a prim map is the composition of an immersion $g:M^n \looparrowright P^{n+k} \times \R^1$ with the standard projection $pr:P^{n+k} \times \R^1 \to P^{n+k}$.\footnote{The word \emph{prim} is an abbreviation of ``\emph{pr}ojection of \emph{im}mersion''.}
We denote by $\Prim r(P)$ the cobordism group of prim $\Sig{r}$-maps in a fixed target manifold $P$, and we denote by $\Cob r(P)$ the cobordism group of all $\Sig{r}$-maps in a fixed target manifold $P$. For the standard definitions of these groups see \cite{GT}. Analogous groups can be defined for the case of cooriented maps or maps with a quaternionic normal structure; we denote them by $\PrimSO\Sig r(P)$, $\CobSO\Sig r(P)$ and $\PrimQ\Sig r (P)$, $\CobQ\Sig r(P)$, respectively.
Whenever there is a (prim) $\Sig{r}$-map $f:M\to P$ and a smooth map $g:P' \to P$ a standard pullback diagram arises:
$$
\xymatrix{
M' \ar@{-->}[r]^{g^*f} \ar@{-->}[d] & P' \ar[d]^{g} \\
M \ar[r]_f & P
}
$$
If the map $g$ is transverse to all the submanifolds $f\left(\Sigma^{1_s,0}(f)\right)$ for $s=0,\dots,r$, then the map $g^*f$ is a (prim) $\Sig{r}$-map as well. If $g$ is not transverse to these submanifolds, one can still choose a transverse approximating map $\tilde g$ close to $g$ and obtain a (prim) $\Sigma^{1_r}$-map $\tilde g^*f$; this map is not unique, but any two choices of such approximating maps --- $\tilde g_1$ and $\tilde g_2$, say --- can be deformed into one another by a homotopy $G$ that is itself transverse to the submanifolds $f\left(\Sigma^{1_s,0}(f)\right)$, hence $\tilde g_1^*f$ and $\tilde g_2^*f$ are connected by the cobordism $G^*f$ and represent the same element in the (prim) $\Sigma^{1_r}$-cobordism group of $P$. It is easy to check that the assignment $f\mapsto \tilde g^*f$, for a suitable approximation $\tilde g$ of $g$, makes $\Cob r(\cdot)$ a contravariant functor from the category of smooth manifolds and smooth maps to the category of groups and homomorphisms. A similar functor can be defined using $\Prim r(\cdot)$ instead of $\Cob r(\cdot)$.
There exist (homotopically unique) spaces $X_r$ and $\overline{X}_r$ that represent the functors
\[
\aligned
P &\longrightarrow \Cob r(P) \ \text{ and}\\
P &\longrightarrow \Prim r(P)
\endaligned
\]
in the sense of Brown representability theorem\footnote{In order to apply Brown's theorem directly, one has to extend these functors to arbitrary simplicial complexes (not only manifolds). This is done in \cite{GT}.} (see \cite{Switzer}), in particular
\[
\aligned
\Cob r(P) &= [P, X_r]\ \text{ and}\\
\Prim r(P) &= [P, \overline X_r]
\endaligned
\]
for any compact manifold $P$.
We call the spaces $X_r$ and $\overline X_r$ the \emph{classifying spaces} for $\Sigma^{1_r}$-maps and prim $\Sigma^{1_r}$-maps, respectively. This type of classifying spaces in a more general setup has been explicitly constructed and investigated earlier, see \cite{Sbornik}, \cite{LNM}; in \cite{RSz} and \cite{GT} two significantly different explicit descriptions of the classifying space for a more general class of singular maps are given, and in \cite{gluing} a homotopy theoretic connection between those constructions is established. Again, analogues for oriented maps and quaternionic maps can be defined, we denote them by $X^{\rm SO}_r$, $\overline X^{\rm SO}_r$, $X^{\rm Sp}_r$ and $\overline X^{\rm Sp}_r$; in what follows, we will omit the distinguishing upper indices when the argument works for each case.
In the present paper we give a simple construction for the spaces $\overline X_r$ and prove structure theorems for them. As a byproduct we get an explicit description of some elements of stable homotopy groups of spheres via local forms of Morin maps.
\section{Construction of $\overline{X_r}$, the classifying space of co\-bor\-disms of prim $\Sigma^{1_r}$-maps}
{\bf Notation:} Let $\gamma^{\rm O}_{k+1} \to BO(k+1)$ denote the universal $(k+1)$-dimensional vector bundle, let
$\gamma^{\rm SO}_{k+1} \to BSO(k+1)$ denote the universal $(k+1)$-dimensio\-nal oriented vector bundle and denote by
$\gamma^{\rm Sp}_{k+1} \to BSp(k+1)$ the universal $4(k+1)$-dimensional quaternionic vector bundle. We denote by $\gamma_{k+1}$ one of these bundles, with the implication that our arguments apply to each case. Let $S=S\left((r+1)\gamma_{k+1}\right)$ be the sphere bundle of $(r+1)\gamma_{k+1} = \overbrace{\gamma_{k+1} \oplus \cdots \oplus \gamma_{k+1}}^{r+1}$, with $pr_S:S \to B(k+1)$ denoting the projection on $B(k+1)$, which is either $BO(k+1)$, $BSO(k+1)$, or $BSp(k+1)$. Define the bundle $\zeta_S$ to be the pullback $pr_S^*\gamma_{k+1}$. The Thom space of $\zeta_S$ will be denoted by $T\zeta_S$, and $\Omega\Gamma T\zeta_S$ is the space $\Omega^{\infty+1}S^\infty T\zeta_S = \underset{q\to\infty}{\lim} \Omega^{q+1}S^q T\zeta_S$.
\begin{theorem}\label{thm:classic}
$\overline{X_r}=\Omega \Gamma T\zeta_S$.
\end{theorem}
From here onward, the symbol $\cong_\Q$ stands for rational homotopy equivalence.
\begin{theorem}\label{thm:fibration}
\begin{enumerate}[a)]
\item There is a fibration $\overline{X_r} \xrightarrow{\Omega^2\Gamma T\left((r+2)\gamma_{k+1}\right)} \Omega\Gamma T\gamma_{k+1}$.
\item If $k$ is odd, then
$$\Omega\Gamma T\gamma^{\rm SO}_{k+1} \cong_\Q \overline X^{\rm SO}_r \times \Omega\Gamma T\left( (r+2)\gamma^{\rm SO}_{k+1}\right)$$
\item If $k$ is even, then
$$\overline X^{\rm SO}_r \cong_\Q \Omega^2\Gamma T\left((r+2)\gamma^{\rm SO}_{k+1}\right) \times \Omega\Gamma T\gamma^{\rm SO}_{k+1}$$
\end{enumerate}
\end{theorem}
The proofs are postponed to Section \ref{section:proofs}.
An important application of Theorem \ref{thm:fibration} is that it allows the calculation of the ranks of the groups $\PrimSO\Sig r(P) \cong [P,\overline X^{\rm SO}_r]$ for arbitrary target manifolds $P$. First recall that the $H$-space $\overline X^{\rm SO}_r$ rationally splits as a product of Eilenberg-MacLane spaces
$$
\overline X^{\rm SO}_r \cong_\Q \prod_{j=1}^\infty K(\pi_j(\overline X^{\rm SO}_r)\otimes \Q;j).
$$
This implies that for every $P$ we have
\begin{equation}\label{eq:cobSplit}
\begin{aligned}
[P,\overline X^{\rm SO}_r] \otimes \Q & \cong \bigoplus_{j=1}^\infty [P,K(\pi_j(\overline X^{\rm SO}_r)\otimes \Q;j)] \cong\\
& \cong \bigoplus_{j=1}^\infty H_j(P;\Z) \otimes \pi_j(\overline X^{\rm SO}_r)\otimes \Q
\end{aligned}
\end{equation}
and we only need to calculate the ranks of the homotopy groups of $\overline X^{\rm SO}_r$.
When $k$ is odd, Theorem \ref{thm:fibration} yields
$$
\begin{aligned}
\rk \pi_j(\overline X^{\rm SO}_r) &= \rk \pi_j(\Omega\Gamma T\gamma^{\rm SO}_{k+1}) - \rk \pi_j\left(\Omega\Gamma T\left( (r+2)\gamma^{\rm SO}_{k+1}\right)\right) =\\
& = \rk \pi^s_{j+1}(T\gamma^{\rm SO}_{k+1}) - \rk \pi^s_{j+1}\left(T\left( (r+2)\gamma^{\rm SO}_{k+1}\right)\right)=\\
&= \rk H_{j+1}(T\gamma^{\rm SO}_{k+1};\Z) - \rk H_{j+1}\left(T\left( (r+2)\gamma^{\rm SO}_{k+1}\right);\Z\right)=\\
&= \rk H_{j-k}(BSO(k+1);\Z) - \\
&\qquad\qquad - \rk H_{j+1-(r+2)(k+1)}(BSO(k+1);\Z);
\end{aligned}
$$
and when $k$ is even we get
$$
\begin{aligned}
\rk \pi_j(\overline X^{\rm SO}_r) &= \rk \pi_j\left(\Omega^2\Gamma T\left( (r+2)\gamma^{\rm SO}_{k+1}\right)\right) + \rk \pi_j(\Omega\Gamma T\gamma^{\rm SO}_{k+1}) =\\
&= \rk \pi^s_{j+2}\left(T\left( (r+2)\gamma^{\rm SO}_{k+1}\right)\right) + \rk \pi^s_{j+1}( T\gamma^{\rm SO}_{k+1}) =\\
&= \rk H_{j+2}\left(T\left( (r+2)\gamma^{\rm SO}_{k+1}\right);\Z\right) + \rk H_{j+1}( T\gamma^{\rm SO}_{k+1};\Z) =\\
&= \rk H_{j+2-(r+2)(k+1)}(BSO(k+1);\Z) + \\
&\qquad\qquad + \rk H_{j-k}(BSO(k+1);\Z).
\end{aligned}
$$
Substituting the ranks of the homology groups ${H_*(BSO(k+1);\Z)}$ into the formula \eqref{eq:cobSplit} we obtain the following expressions:
\begin{cor}
Denote by $p_{\leq t}(m)$ the number of partitions of $m$ into positive integers between $1$ and $t$ (in particular, $p_{\leq t}(m)=0$ whenever $m$ is not a nonnegative integer).
\begin{enumerate}[a)]
\item If $k$ is odd,
$$
\begin{aligned}
& \rk \PrimSO\Sig r(P) = \sum_{j=1}^\infty \rk H_j(P,\Z) \times \\
&\times \left( p_{\leq \frac{k-1}{2}}\left(\frac{j-k}{4}\right)+ p_{\leq \frac{k-1}{2}}\left(\frac{j-2k-1}{4}\right) - \right.\\
&\left. - p_{\leq \frac{k-1}{2}}\left(\frac{j+1-(r+2)(k+1)}{4}\right)+ p_{\leq \frac{k-1}{2}}\left(\frac{j-k-(r+2)(k+1)}{4}\right)\right).
\end{aligned}
$$
\item If $k$ is even,
$$
\begin{aligned}
\rk \PrimSO\Sig r(P) &= \sum_{j=1}^\infty \rk H_j(P,\Z) \times \\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\times \left( p_{\leq \frac{k}{2}}\left(\frac{j-k}{4}\right) + p_{\leq \frac{k}{2}}\left(\frac{j+2-(r+2)(k+1)}{4}\right) \right).
\end{aligned}
$$
\end{enumerate}
\end{cor}
Analogous computations can be performed to obtain the ranks of the prim $\Sig r$ cobordism groups in the non-oriented and in the quaternionic cases as well.
\subsection{Geometric interpretation of Theorem \ref{thm:fibration}}
Denote by $\overline X_\infty$ the classifying space of all prim maps (of codimension $k$). Clearly $\overline X_\infty = \underset{r \to \infty}{\lim} \overline X_r$. By considering the immersion lifts of the involved maps it is easy to see that $\overline X_\infty = \Omega \Gamma T\gamma_{k+1}$.
\begin{lemma}
The homotopy exact sequence of the fibration in Theorem \ref{thm:fibration} $a)$ can be identified with the homotopy exact sequence of the pair $(\overline X_\infty,\overline X_r)$.
\end{lemma}
\begin{proof}
We first construct a map
\begin{align*}
\alpha : \pi_{n+k+1}(\overline X_\infty,\overline X_r) \to & \pi_{n+k}\left(\Omega^2\Gamma T \left((r+2)\gamma_{k+1}\right)\right) =\\
&\qquad = \pi_{n+k+1}\left(\Omega\Gamma T \left((r+2)\gamma_{k+1}\right)\right).
\end{align*}
The relative homotopy group $\pi_{n+k+1}(\overline X_\infty,\overline X_r)$ can clearly be identified with the relative cobordism group of those prim maps $F:(N^{n+1},\partial N) \to (\R^{n+k+1}_+,\R^{n+k})$ that possess the property of $F|_{\partial N} : \partial N \to \R^{n+k}$ being a $\Sigma^{1_r}$-map. The map $\alpha$ associates to the cobordism class of $F$ the cobordism class of its $\Sigma^{1_{r+1}}$-points equipped with the normal structure of its immersion lift, which is a splitting of the normal bundle into $(r+2)$ isomorphic bundles. This kind of maps is classified by the space $\Omega\Gamma T\left((r+2)\gamma_{k+1}\right)$. Thus we obtain a chain map of the homotopy exact sequence of the pair $(\overline X_\infty,\overline X_r)$ to that of the fibration of Theorem \ref{thm:fibration} $a)$. The five lemma implies that $\alpha$ is an isomorphism.
\end{proof}
Theorem \ref{thm:fibration} $b)$ and $c)$ states that this exact sequence splits rationally.
To elaborate, if $k$ is odd then the sequence
$$
0 \to \pi_{n+k}(\overline X_r)\to\pi_{n+k}(\overline X_\infty) \to \pi_{n+k}(\overline X_\infty,\overline X_r) \to 0
$$
is exact at the middle term and has finite homology everywhere else. Geometrically this means that a prim map is rationally cobordant to a $\Sigma^{1_r}$-map exactly if its $\Sigma^{1_{r+1}}$-singularity stratum is rationally null-cobordant.
If $k$ is even, then the same is true for the sequence
$$
0 \to \pi_{n+k+1}(\overline X_\infty,\overline X_r)\to\pi_{n+k}(\overline X_r) \to \pi_{n+k}(\overline X_\infty) \to 0
$$
and it geometrically means that any prim map is rationally cobordant to a $\Sigma^{1_r}$-map. Furthermore a $\Sigma^{1_r}$-map that is rationally null-cobordant as an arbitrary prim map is determined up to rational cobordism by the rational cobordism class of the $\Sigma^{1_{r+1}}$-stratum of any prim map that it bounds rationally.
\subsection{Quaternionic prim maps}\label{section:quaternion}
\bigskip
Given an $(n+3)$-dimensional manifold $P^{n+3}$, let us consider immersions into $P \times \R^1$ of closed cooriented $n$-dimensional manifolds with a quaternionic structure on their normal bundles. This means that each normal fibre is provided with a quaternionic structure, and the gluing maps respect this structure. If this structure group preserves the norms of the normal vectors as well (which can always be assumed), then this structure group is the symplectic group $Sp(1) \cong Spin(3) \cong S^3$, the group of unit quaternions.
The cobordism group of such immersions is in one-to-one correspondence with the set of homotopy classes $\left[SP,\Gamma \HH P^\infty\right]$; in particular taking $P=S^{n+3}$ yields a group isomorphic to
$$\pi_{n+4}(\Gamma \HH P^\infty) = \pi^s_{n+4}(\HH P^\infty).$$
Completely analogously to the codimension $2$ oriented case (when a complex structure can be defined on the normal bundle, see \cite{sing2}) we have that if the hyperplane projection of such an immersion is a $\Sig{r}$-map (i.e., it has no singularity $\Sig{i}$ for $i > r$), then the normal bundle of the immersion can be pulled back from $\HH P^r$. The converse is also true up to regular homotopy: if the immersion has its normal bundle pulled back from the canonical quaternionic line bundle over $\HH P^r$, then it can be deformed by a regular homotopy into an immersion whose hyperplane projection is a $\Sig{r}$-map. We shall call such prim maps \emph{quaternionic $\Sig{r}$-prim maps}.
The cobordism group of such maps into $P$ can be defined in a standard way and will be denoted by $\PrimQ\Sig{r}(P)$.
Let $\bar X^{Sp}_r$ denote the classifying space of these cobordism groups, so that it satisfies
$$\PrimQ\Sig{r}(P) = [\dot P,\bar X^{Sp}_r]_*$$
Here $\dot P$ denotes the one-point compactification of $P$ (if $P$ itself is compact, then this is the disjoint union of $P$ and an extra point); $[-,-]_*$ denotes the set of pointed homotopy classes.
Finally, in analogy with the complex (codimension $2$) case of \cite{sing2} we obtain that the classifying space $\bar X^{Sp}_r$ admits the representation
$$\bar X^{Sp}_r = \Omega \Gamma \HH P^{r+1}.$$
The so-called singularity spectral sequence (see \cite{sing2} for details) in homotopy groups that arises from the sequence of fibrations
$$
\bar X^{Sp}_{r-1} \subset \bar X^{Sp}_r \subset \bar X^{Sp}_{r+1} \subset \dots
$$
coincides (after a shift of the indices) with the spectral sequence in stable homotopy groups of the filtration
$$
\HH P^0 \subset \HH P^1 \subset \dots \subset \HH P^r\subset \dots \subset \HH P^\infty
$$
The first page of this spectral sequence is
$$
E^1_{p,q} = \pi^s_{p+q} (\HH P^p/\HH P^{p-1}) = \pi^s_{p+q}(S^{4p}) = \pi^s(q-3p)
$$
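As a concrete check of this formula (an added verification, consistent with the diagram below), take $(p,q) = (1,6)$:

```latex
E^1_{1,6} \;=\; \pi^s_{7}\bigl(\HH P^{1}/\HH P^{0}\bigr)
\;=\; \pi^s_{7}(S^{4})
\;=\; \pi^s(6 - 3\cdot 1)
\;=\; \pi^s(3) \;\cong\; \Z_{24}.
```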
\bigskip
\vbox{
$$
\xymatrix{
q=10 & \pi^s(7) \cong \Z_{240} & \pi^s(4)=0 & \Z_2 \\
q=9 & \pi^s(6) \cong \Z_2\langle \nu^2\rangle & \pi^s(3) \cong \Z_{24} \ar@{->>}[l] & \Z \ar[l] \ar[ull] \\
q=8 & 0 & 0 & \\
q=7 & 0 & 0 & \\
q=6 & \pi^s(3) \cong \Z_{24} & \Z \ar@{->>}[l] & \\
q=5 & \pi^s(2) \cong \Z_2 & & \\
q=4 & \pi^s(1) \cong \Z_2 & & \\
q=3 & \Z & \hspace*{60pt} & \hspace*{0pt}\\
& p=1 & p=2 & p=3 \\
}
$$
\vspace*{-128mm}
$$
\hspace*{18pt}\begin{array}{c||c|c|c|}
\cline{2-4}
\hspace*{60pt} & \hspace*{84pt} & \hspace*{84pt} & \hspace*{48pt} \\[27pt]
\cline{2-4}
& & & \\[27pt]
\cline{2-4}
& & & \\[27pt]
\cline{2-4}
& & & \\[27pt]
\cline{2-4}
& & & \\[27pt]
\cline{2-4}
& & & \\[27pt]
\cline{2-4}
& & & \\[27pt]
\cline{2-4}
& & & \\[27pt]
\hhline{~||=|=|=|}
\end{array}
\hspace*{70pt}
$$
}
\bigskip
The only non-finite groups among the groups $E^1_{p,q}$ are hence those on the line $q= 3p$; these are all isomorphic to $\Z$.
\begin{lemma}
The group $E^\infty_{p,3p} \cong \Z$ considered as a subgroup of $E^1_{p,3p}\cong \Z $ has index equal to the order of the cokernel of the stable Hurewicz homomorphism $ \pi^s_{p+q} (\HH P^\infty) \to H_{p+q}(\HH P^\infty)$.
\end{lemma}
\begin{proof}
Consider the degenerate homological spectral sequence starting with ${\bf {H}} E^1_{p,q} = H_{p+q} (\HH P^p/\HH P^{p-1})$ (which is $\Z$ if $q=3p$ and $0$ otherwise) for the same filtration and the map from the spectral sequence $E^*_{p,q}$ into it induced by the stable Hurewicz homomorphisms on each page. On the first page we have isomorphism of the groups $E^1_{p,3p} \cong {\bf H} E^1_{p,3p} $, both isomorphic to $\Z$. On the final page we have $E^\infty _{p,3p} = \Z$ identified with the free part of $\pi^s_{4p}(\HH P^\infty)$, and it is mapped into ${\bf {H}} E^\infty_{p,3p} = H_{4p}(\HH P^\infty) = \Z$, the image of this homomorphism being the same as the image of the stable Hurewicz homomorphism.
\end{proof}
\begin{cor}
The product of the orders of the images of the differentials $d^r_{p,3p}: \Z = E^r_{p,3p} \to E^r_{p-r, 3p+r-1}$
(taken for all $r$ from $1$ to infinity) is equal to the index of the image of the stable Hurewicz homomorphism
$\pi^s_{4p}(\HH P^\infty) \to H_{4p}(\HH P^\infty)$.
\end{cor}
Segal \cite{Segal} has determined this index:
\begin{theorem}{\cite[Theorem 1.1.]{Segal}}
The image of $\pi^s_{4p}(\HH P^\infty) \to H_{4p}(\HH P^\infty)$ is $h(p)\cdot \Z$, where
\begin{itemize}
\item $h(p) = (2p)!$ for $p$ even and
\item $h(p) = (2p)!/2$ for $p$ odd.
\end{itemize}
\end{theorem}
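As a quick numeric sanity check (a side computation of ours, not part of Segal's argument; the function name \verb|h| simply mirrors the theorem's notation), the first few values of $h(p)$ can be tabulated:

```python
from math import factorial

def h(p):
    """Order h(p) from Segal's theorem: the image of the stable Hurewicz
    homomorphism in degree 4p is h(p)*Z, with h(p) = (2p)! for p even
    and (2p)!/2 for p odd."""
    return factorial(2 * p) if p % 2 == 0 else factorial(2 * p) // 2

# p = 1, 2, 3, 4  ->  1, 24, 360, 40320
print([h(p) for p in range(1, 5)])
```

Note that $h(2)=24$ equals the order of $\pi^s(3)\cong\Z_{24}$, consistent with the epimorphy of the differential $d^1_{2,6}$ discussed below.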
It follows that all the differentials of the fragment of the spectral sequence seen on the diagram are epimorphic modulo the $2$-primary torsion part, and the differential $d^1_{2,6}:E^1_{2,6} \to E^1_{1,6}$ is (truly) epimorphic. In particular we obtain that the boundaries of the normal forms of the Morin maps of type $\Sigma^{1,0}$ and $\Sigma^{1,1,0}$ in codimension $3$ give a generator of the stable homotopy group of spheres $\pi^s(3)$ and a generator of the odd torsion of the stable homotopy group of spheres $\pi^s(7)$, respectively. In the Appendix we describe the first of these classes in more detail.
We also get that the torsion parts of $\pi^s_i(\HH P^{\infty})$ in the range $i \le 11$ are $2$-primary groups, and hence so are those of the cobordism groups $\PrimQ\Sig{3}(\R^{n+3})$ for $n\le 7$.
These cobordism groups are finitely generated, and are infinite precisely when $n \equiv 0$ mod $4$. In fact we can prove the following more general theorem that determines the rational homotopy type of the classifying space $\overline X^{Sp}_r$:
\begin{theorem}
$$\overline X^{Sp}_r \cong_\Q S^3 \times S^7 \times \dots \times S^{4r+3}.$$
\end{theorem}
\begin{proof}
We repeat the argument that determines the rational homotopy type of $\C P^m$ (which we learned from D. Crowley); we show by induction that the stable rational homotopy type of $\HH P^m$ is that of $S^4 \vee S^8 \vee \dots \vee S^{4m}$. By the induction hypothesis $\HH P^{m-1} \cong_\Q S^4 \vee S^8 \vee\dots \vee S^{4m-4}$ and the stable homotopy class of the attaching map of the top dimensional cell of $\HH P^m$ is defined by stable maps from $S^{4m-1}$ to $S^{4i}$ for $i=1, 2, \dots, m-1$, and all these stable maps are rationally trivial (they have finite order).
Consequently $\overline X^{Sp}_r = \Omega \Gamma \HH P^{r+1} \cong_\Q \Omega \Gamma (S^4\vee S^8 \vee \dots \vee S^{4r+4}) \cong \Gamma S^3 \times \Gamma S^7 \times \dots \times \Gamma S^{4r+3} \cong_\Q S^3 \times S^7 \times \dots \times S^{4r+3}$ (we used that $\Omega \Gamma S = \Gamma$; $\Gamma (A \vee B) = \Gamma A \times \Gamma B$; and $\Gamma S^i \cong_\Q S^i$).
\end{proof}
\begin{remark}
In \cite[Lemma 4]{sing2} it is shown that when $k=2$, we have $\overline{X_r}=\Gamma \C P^{r+1}$. This is a special case of Theorem \ref{thm:classic} as it follows from the next lemma. Recall that if $B(k+1)=BSO(2)$, then $\zeta_S=\pi_r^*\gamma_2^{\rm SO}$, where $\gamma_2^{\rm SO}$ is the universal oriented vector bundle of rank $2$ and $\pi_r: S\left((r+1)\gamma_2^{\rm SO}\right) \to BSO(2)$ is the sphere bundle of the vector bundle $\gamma_2^{\rm SO}\oplus \dots\oplus \gamma_2^{\rm SO}$ (with $(r+1)$ summands). We denote by $\gamma_1^\C$ the universal complex line bundle (over $\C P^\infty$).
\end{remark}
\begin{lemma}
The vector bundle $\zeta_S \to S\left((r+1)\gamma_2^{\rm SO}\right)$ and the complex line bundle $\gamma^\C_1|_{\C P^r} \to \C P^r$ are homotopically equivalent in the sense that there is a homotopy equivalence $f:\C P^r \to S((r+1)\gamma_2^{\rm SO})$ such that $f^*\zeta_S$ is isomorphic as an oriented rank $2$ real vector bundle to the tautological complex line bundle over $\C P^r$.
\end{lemma}
\begin{proof}
It is well-known that $\gamma^\C_1$ can be identified with $\gamma^{\rm SO}_2$, and the tautological complex line bundle over $\C P^r$ is the restriction $\gamma_1^\C|_{\C P^r}$.
Consider the space $S^\infty \times S(\C^{r+1}) \times \C$ and the natural diagonal $S^1$-action on it: for $g\in S^1$ and $(x,y,z) \in S^\infty \times S(\C^{r+1}) \times \C$ set $g(x,y,z)=(gx,gy,gz)$.\footnote{Here $S(\C^{r+1})$ is considered to be the space of unit length complex vectors in $\C^{r+1}$.} The subspace $S^\infty \times S(\C^{r+1}) \times \{0\}$ is invariant under this action; the corresponding orbit space is $S^\infty \underset{S^1}{\times} S(\C^{r+1})$. Regarding this orbit space as an $S(\C^{r+1})$-bundle over $S^\infty/S^1$ we identify it with $S\left((r+1)\gamma_1^\C\right)$. Regarding the same orbit space as an $S^\infty$-bundle over $S(\C^{r+1})/S^1 = \C P^r$ we get that it is homotopically equivalent to $\C P^r$. The obtained homotopy equivalence between $\C P^r$ and $S\left((r+1)\gamma_1^\C\right)$ takes the tautological complex line bundle to (pullback of) the tautological complex line bundle since it extends to the entire orbit space $(S^\infty \times S(\C^{r+1}) \times \C)/S^1$, which is the total space of these bundles; this finishes the proof of the lemma.
\end{proof}
Similarly $\zeta^{Sp}_S \to S\left((r+1)\gamma^{Sp}_4\right)$ is homotopically equivalent to the vector bundle $\gamma_1^\HH|_{\HH P^r} \to \HH P^r$. This combined with Theorem \ref{thm:classic} gives that $\overline X_r^{Sp}$ is $\Omega \Gamma \HH P^{r+1}$.
\section{Proofs of Theorem \ref{thm:classic} and Theorem \ref{thm:fibration}}\label{section:proofs}
\begin{proof}[Proof of Theorem \ref{thm:classic}]
Given a generic immersion $g : M^n \looparrowright P^{n+k} \times \R^1$ we first produce sections $s_1$, $s_2$, $\dots$ of the normal bundle $\nu_g$ such that $\Sigma^{1_j}(f) = \cap_{i=1}^j s_i^{-1}(0)$, where $f=pr \circ g$, the map $pr:P\times \R^1 \to P$ being the projection.
The (positive) basis vector of $\R^1$ defines a constant vector field on $P\times \R^1$ that we call (upward directed) vertical and denote by $\up$. Project $\up$ into the normal bundle $\nu_g$ (considered as the quotient bundle $T(P\times \R^1)/dg(TM)$), and denote the obtained section by $s_1$. Note that the singularity set $\Sigma(f)$ of $f$ is precisely $s^{-1}_1(0)$, the zero set of the section $s_1$.
For a generic map $f$ the set $\Sigma(f)$ is a manifold of codimension $k+1$. Denote by $\nu_2$ the normal bundle of $\Sigma(f)$ in $M$. Note that $\nu_2 \cong \nu_g|_{\Sigma(f)}$: the tangent space of the section $s_1$ at the points of $\Sigma(f)$ is the graph of a linear isomorphism $\beta_2: \nu_2 \to \nu_g|_{\Sigma(f)}$.
In order to produce the section $s_2$ we first define a section $z_2$ of $\nu_2$ by projecting $\up$ into $\nu_2$ at the points of $\Sigma(f)$ (where $\up \in dg(TM)$, hence the definition makes sense). Applying the isomorphism $\beta_2$ we get a section $s_2' = \beta_2 \circ z_2$ of $\nu_g|_{\Sigma(f)}$. The section $s_2$ is defined as an arbitrary (continuous) extension of $s_2'$ to the entire $\nu_g$. Clearly $\Sigma^{1_2}(f)$ is the zero set of $z_2$, hence $\Sigma^{1_2}(f)=s_1^{-1}(0) \cap s_2^{-1}(0)$. We continue in the same fashion, producing sections $s_3$, $\dots$, $s_{r+1}$ such that $\Sig{j}(f)=\cap_{i=1}^{j} s_i^{-1}(0)$. In particular if $f$ is a $\Sig{r}$-map, then $\cap_{i=1}^{r+1}s_i^{-1}(0)=\emptyset$.
Note that the sections $s_2$, $\dots$, $s_r$ are not unique, but each one is chosen uniquely up to a contractible choice. The difference of any two possible choices of $s_2$ is an arbitrary section of $\nu_g$ that vanishes on $s_1^{-1}(0)$. The difference of any two possible choices of $s_3$ is an arbitrary section of the normal bundle $\nu_g$ that vanishes on $s_1^{-1}(0) \cap s_2^{-1}(0)$, etc. Hence the collection of these sections defines a homotopically unique section $\alpha$ of the sphere bundle $p_r^S(g): S((r+1)\nu_g) \to M$.
Let $G_{k+1}$ denote the infinite Grassmann manifold of all $(k+1)$-planes in $\R^\infty$ and let $\varphi: M \to G_{k+1}$ be the map that induces $\nu_g$ from the universal bundle $\gamma_{k+1}$, furthermore let $\Phi: \nu_g \to \gamma_{k+1}$ be the corresponding fiberwise isomorphism. $\Phi$ induces a map $\Phi_S^r : S((r+1)\nu_g) \to S((r+1)\gamma_{k+1})$. Consider the following diagram:
\begin{equation}\tag{$*_r$}\label{eq:inductionCD}\begin{split}
\xymatrix{
\zeta_g \ar[rrr] \ar@/_/[ddd] \ar[dr] & & & \zeta_S \ar[ddd] \ar[dl] \\
& S((r+1)\nu_g) \ar[r]^{\Phi^r_S} \ar@/_/[d]_{p_r^S(g)} & S((r+1)\gamma_{k+1}) \ar[d] & \\
& M \ar[ur]_{\ \wt{\alpha_r}=\Phi^r_S\circ\alpha} \ar[r]_\varphi \ar@/_/[u]_{\alpha}& G_{k+1} & \\
\nu_g \ar[ur] \ar[rrr]^{\Phi} \ar@/_/[uuu]_{A} & & & \gamma_{k+1} \ar[ul] \\
}
\end{split}
\end{equation}
(since $\zeta_g = p_r^S(g)^*(\nu_g)$ and $\alpha$ is a section of the bundle map $p_r^S(g)$, the map $\alpha$ can be lifted into a vector bundle map $A: \nu_g \to \zeta_g$).
Its commutativity gives us that $\wt{\alpha_r}^*\zeta_S = \nu_g$. Thus we have obtained the proof of the following lemma:
\begin{lemma}
If $g$ is a generic immersion such that $f=pr\circ g$ is a $\Sig{r}$-map, then the normal bundle $\nu_g$ can be induced by $\wt{\alpha_r}$ from $\zeta_S$, which in turn is the pullback of $\gamma_{k+1} \to G_{k+1}$ to $S((r+1)\gamma_{k+1})$ by the projection $S((r+1)\gamma_{k+1}) \to G_{k+1}$. \qed
\end{lemma}
By the remark above about the contractible choices of the sections $s_2$, $\dots$, $s_{r+1}$ we have seen that $\alpha$ is homotopically unique and therefore the map $\wt{\alpha_r}$ is also homotopically unique. Hence $\nu_g$ can be pulled back from $\zeta_S$ in a homotopically well-defined way.
Applying the Pontryagin-Thom construction to the diagram \eqref{eq:inductionCD} we construct a map
\begin{equation}\tag{$**$}\label{eq:PrimPontryagin}
\Prim{r}(P) \to [P,\Omega\Gamma T\zeta_S]
\end{equation}
This map arises as follows: the cobordism class of an immersion $g: M \looparrowright P \times \R^1$ gives a map $SP \to \Gamma T\zeta_S$. Hence the map $f=pr\circ g$ gives a map $P \to \Omega \Gamma T\zeta_S$; this map is homotopically unique and its homotopy class is the same for any representative of the cobordism class of $g$; hence also for that of $f$. Let us denote the classifying space for prim $\Sig{r}$-cobordism by $\overline X_r$. The map \eqref{eq:PrimPontryagin} is induced by a (homotopically unique) map $\theta_r:\overline X_r \to \Omega\Gamma T\zeta_S$ between the classifying spaces. In \cite{GT} we have shown that there is a fibration $\overline X_{r-1} \to \overline X_r \overset{p_r}{\to} \Gamma S^r T\left((r+1)\gamma_k\right)$ and the map $p_r$ induces the forgetful map that sends the cobordism class of a $\Sig{r}$-map to the cobordism class of the immersed top singularity stratum. Note that the base space can be rewritten as $\Gamma S^r T\left((r+1)\gamma_k\right) = \Omega\Gamma T \left( (r+1) (\gamma_k \oplus \varepsilon^1)\right)$.
Hence there is a long exact sequence
\begin{align*}
\dots &\to \Prim{r-1}(SP) \to \Prim{r}(SP) \to [SP,\Omega\Gamma T(r+1)\gamma_{k+1}] \to\\
&\to \Prim{r-1}(P) \to \Prim{r}(P) \to [P,\Omega\Gamma T(r+1)\gamma_{k+1}] \to 0
\end{align*}
We now try to obtain an analogous exact sequence for the functor on the right-hand side of \eqref{eq:PrimPontryagin}.
\begin{claim}\label{claim:cofibration}
There is a cofibration $$
T\zeta_S|_{S(r\gamma_{k+1})} \subset T\zeta_S \to T(r+1)(\gamma_k\oplus \varepsilon^1).
$$
\end{claim}
To see that Claim \ref{claim:cofibration} holds, we utilize a trivial lemma:
\begin{lemma}\label{lemma:cofibration}
Let $N$ be a manifold, $A\subset N$ be a submanifold with a tubular neighbourhood $V$ and normal bundle $\nu$, and let $\xi$ be a vector bundle over $N$. Denote by $\xi_A$ and $\xi_{N\setminus V}$ the restrictions of $\xi$ to $A$ and $N\setminus V$, respectively. Then there is a cofibration of Thom spaces
$$
T\xi_{N\setminus V} \to T\xi \to T(\nu \oplus \xi_A).
$$\qed
\end{lemma}
Claim \ref{claim:cofibration} follows by applying Lemma \ref{lemma:cofibration} to $N=S\left( (r+1)\gamma_{k+1} \right)$, $A=S(\gamma_{k+1})$, $\xi=\zeta_S$ with $V=S\left( (r+1)\gamma_{k+1} \right) \setminus S(r\gamma_{k+1})$ and $\nu=r(\gamma_k \oplus \varepsilon^1)$.
Applying the functor $\Omega\Gamma$ to the cofibration of Claim \ref{claim:cofibration} we obtain a fibration
$$
\Omega\Gamma T\zeta_S \xrightarrow{\Omega\Gamma T \zeta_S|_{S(r\gamma_{k+1})}} \Omega\Gamma T\left( (r+1)(\gamma_k\oplus \varepsilon^1) \right).
$$
There are also natural maps
$$\theta_{r-1}: \overline X_{r-1} \to \Omega\Gamma T \zeta_S|_{S(r\gamma_{k+1})}\text{ and}$$
$$\theta_r: \overline X_r \to \Omega\Gamma T\zeta_S$$
that correspond to the natural transformation of functors
$$\Prim{r-1}(\cdot) \to [\cdot, \Omega\Gamma T \zeta_S|_{S(r\gamma_{k+1})}]\text{ and}$$
$$\Prim r(\cdot) \to [\cdot,\Omega\Gamma T\zeta_S]$$
given by \eqref{eq:PrimPontryagin}. We hence obtain the following map of fibrations:
$$
\xymatrix{
\overline X_{r-1} \ar[r]^{\theta_{r-1}} \ar[d] & \Omega \Gamma T\zeta_S|_{S(r\gamma_{k+1})} \ar[d]\\
\overline X_r \ar[r]^{\theta_r} \ar[d] & \Omega\Gamma T\zeta_S \ar[d]\\
\Omega\Gamma T\left((r+1)(\gamma_k\oplus \varepsilon^1)\right) \ar[r]^{\cong} & \Omega\Gamma T\left((r+1)(\gamma_k\oplus\varepsilon^1)\right)\\
}
$$
We show that this diagram commutes homotopically by proving that the diagram of the induced natural maps between the corresponding functors commutes. The commutativity of the top square is obvious. In the bottom square, the normal structure encoded in $(r+1)(\gamma_k\oplus\varepsilon^1)$ on the left is that of the splitting of the normal bundle of $\Sigma^{1_r,0}$ into $r+1$ bundles canonically isomorphic to the restriction of the normal bundle of the immersion lift, while the normal structure on the right encodes the common zero sets of the $r+1$ sections $s_j$. While the structure on the right may be twisted with respect to that on the left by isomorphisms provided by the sections themselves, one can still recover one normal structure from the other uniquely up to homotopy.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:fibration}]
{\it a)} Consider the vector bundle $\gamma_{k+1}$ pulled back to the disc bundle $D\left((r+1)\gamma_{k+1}\right)$ by the projection. Note that the Thom space of this bundle is homotopy equivalent to $T\gamma_{k+1}$, while the total space of its disc bundle is that of the disc bundle $D\left((r+2)\gamma_{k+1}\right)$. Recall that after identifying $\gamma_{k+1}$ with this pullback to the disc bundle the vector bundle $\zeta_S$ is the restriction $\gamma_{k+1}|_{S((r+1)\gamma_{k+1})}$, and notice that $T\zeta_S \to T\gamma_{k+1} \to T\left((r+2)\gamma_{k+1}\right)$ is a cofibration by Lemma \ref{lemma:cofibration}. By applying the functor $\Omega\Gamma$ to it, we obtain the fibration
$$
\Omega\Gamma T\zeta_S \to \Omega\Gamma T\gamma_{k+1} \to \Omega\Gamma T\left((r+2)\gamma_{k+1}\right).
$$
It is well-known (see e.g. \cite{MosherTangora}) that when one turns the inclusion $\Omega\Gamma T\zeta_S \to \Omega\Gamma T\gamma_{k+1}$ of the fiber into a fibration, its fiber will be the loop space of the base, $\Omega^2\Gamma T\left((r+2)\gamma_{k+1}\right)$. This fibration $\Omega\Gamma T\zeta_S = \overline{X_r} \xrightarrow{\Omega^2\Gamma T\left((r+2)\gamma_{k+1}\right)} \Omega\Gamma T\gamma_{k+1}$ is the one stated by Theorem \ref{thm:fibration} $a)$.
\par
{\it b)} When $k$ is odd and we are in the oriented setting, the vector bundle $\gamma_{k+1}^{SO}$ has a nonvanishing Euler class and so does the vector bundle $(r+1)\gamma_{k+1}^{SO}$ as well, hence (using the Gysin sequence) we obtain that the projection $pr: S((r+1)\gamma^{SO}_{k+1}) \to BSO(k+1)$ induces an epimorphism in cohomology with rational coefficients (here we use the fact that $H^*(BSO(k+1))$ has no zero divisors as a subring of the polynomial ring $H^*(BT)$ with $T$ the maximal torus of $BSO(k+1)$). Consequently so does the induced map $T\zeta_S \to T\gamma^{SO}_{k+1}$ of Thom spaces, therefore the cofibration $T\zeta_S \to T\gamma^{SO}_{k+1} \to T((r+2)\gamma^{SO}_{k+1})$ splits homologically and so the long exact sequence of the fibration $\Gamma T\zeta_S \to \Gamma T\gamma^{SO}_{k+1} \to \Gamma T((r+2)\gamma^{SO}_{k+1})$ in homotopy -- which coincides with the long exact sequence of the original cofibration in stable homotopy -- splits rationally as well. Since all the involved spaces are H-spaces and thus rationally products of Eilenberg-MacLane spaces, this implies that $\Omega\Gamma T\gamma^{SO}_{k+1} \cong_\Q \Omega\Gamma T\zeta_S \times \Omega\Gamma T\left((r+2)\gamma^{SO}_{k+1}\right) =\overline X^{SO}_r \times \Gamma T\left((r+2)\gamma^{SO}_{k+1}\right)$ as claimed.
\par
{\it c)} When $k$ is even (and we are still in the oriented setting), the vector bundle $\gamma_{k+1}^{SO}$ has vanishing Euler class and so does the vector bundle $(r+1)\gamma_{k+1}^{SO}$ as well, hence the projection $pr: S((r+1)\gamma^{SO}_{k+1}) \to BSO(k+1)$ induces a monomorphism in cohomology with rational coefficients. Consequently so does the induced map $T\zeta_S \to T\gamma^{SO}_{k+1}$ of Thom spaces. Extending the cofibration $T\zeta_S \to T\gamma^{SO}_{k+1} \to T((r+2)\gamma^{SO}_{k+1})$ to form the Puppe sequence
\begin{align*}
T\zeta_S \to T\gamma^{SO}_{k+1} &\to T((r+2)\gamma^{SO}_{k+1}) \to\\
&\to ST\zeta_S \to ST\gamma^{SO}_{k+1} \to ST((r+2)\gamma^{SO}_{k+1}) \to \dots
\end{align*}
we observe that the induced map $H^*\left(ST\left((r+2)\gamma^{SO}_{k+1}\right)\right) \to H^*\left( ST\gamma^{SO}_{k+1} \right)$ is also zero, therefore the cofibration $T\left((r+2)\gamma^{SO}_{k+1}\right) \to ST\zeta_S \to ST\gamma^{SO}_{k+1}$ splits homologically and so the long exact sequence of the fibration
$$\Omega^2\Gamma T\left((r+2)\gamma^{SO}_{k+1}\right) \to \Omega^2\Gamma ST\zeta_S \to \Omega^2\Gamma ST\gamma^{SO}_{k+1}$$
in homotopy splits rationally as well. But since $\Omega^2\Gamma ST\zeta_S = \Omega\Gamma T\zeta_S$ and $\Omega^2\Gamma ST\gamma^{SO}_{k+1} = \Omega\Gamma T\gamma_{k+1}$ are H-spaces, we have
$$\overline X_r =\Omega\Gamma T\zeta_S \cong_\Q \Omega \Gamma T\gamma_{k+1} \times \Omega^2\Gamma T\left((r+2)\gamma_{k+1}\right)$$
as claimed.
\end{proof}
\section{Appendix: The generator of $\pi^s(3)$ via singularity theory}
Consider the normal form of a Whitney umbrella map $U:\R^4 \to \R^7$ given by the coordinate functions
\begin{align*}
(x,t_1,t_2,t_3) &\mapsto (y_1,y_2, y_3,z_1,z_2,z_3,z_4)\\
y_m &= t_m \qquad m=1,2,3\\
z_m &= t_mx \qquad m=1,2,3\\
z_4 &= x^2
\end{align*}
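As a side sanity check on this normal form (our computation, not part of the original argument; it assumes the SymPy library), one can verify symbolically that the Jacobian of $U$ has full rank $4$ at a generic point and drops rank exactly at the origin, i.e. that the origin is the only singular point of the Whitney umbrella:

```python
import sympy as sp

x, t1, t2, t3 = sp.symbols('x t1 t2 t3')
# Coordinate functions of the normal form U : R^4 -> R^7 given above.
U = sp.Matrix([t1, t2, t3, t1*x, t2*x, t3*x, x**2])
J = U.jacobian([x, t1, t2, t3])

print(J.subs({x: 1, t1: 1, t2: 1, t3: 1}).rank())   # 4: immersive at a generic point
print(J.subs({x: 0, t1: 0, t2: 0, t3: 0}).rank())   # 3: singular at the origin
```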
By adding the eighth coordinate function $z_5=x$ we lift this map to an embedding $\tilde U:\R^4 \to \R^8$. Consider the restriction of $\tilde U$ to $\tilde D^4 = U^{-1}(D^7)$ and denote by $\tilde S^3$ the boundary of $\tilde D^4$. Note that $\tilde D^4$ is diffeomorphic to the standard ball $D^4$. The map $\tilde U$ is an embedding, hence $\tilde U(\tilde D^4)$ is an embedded ball in $\R^8$. Its normal bundle admits a homotopically unique quaternionic line bundle structure. The direction of the added $8$th coordinate line $(z_5)$ is not tangent to $\tilde D^4$ at the points of $\tilde S^3$ (actually at any point of $\tilde D^4$ except for the origin). Hence the classifying map sends this subset to a ball neighbourhood of $\HH P^0 \subset \HH P^1$ and induces the normal bundle of $\tilde U$ from $\gamma^H_1$ in the following manner. Let $v$ denote the image of the upward-pointing vector $\partial/\partial z_5$ in the normal bundle of $\tilde D^4$ under the natural projection $T\R^8 \to T\R^8/d\tilde U(T\tilde D^4) = \nu_{\tilde U}$ . The classifying map sends $v$ to a fixed section $s$ of $\gamma^H_1$ (that does not vanish on the given neighbourhood) and extends this bundle map to respect the quaternionic structure (see the proof of the second Claim in \cite[Section 3]{sing2}). In particular, the trivialization $(s,is,js,ks)$ of $\gamma^H_1$ over $\HH P^0$ that gives the generator of $\pi^s(0) = \Z$ is pulled back to the trivialization $(v,iv,jv,kv)$ of the normal bundle of $\tilde U$, and the element in $\pi^s(3) \approx \Z_{24}$ represented by this framed manifold $\tilde S^3$ is the image of the differential $d^1_{2,6}: E^1_{2,6} \cong \Z \to E^1_{1,6} = \pi^s(3) \cong \Z_{24}$ in the spectral sequence of subsection \ref{section:quaternion} evaluated on a generator of $E^1_{2,6} \cong \Z$. Since the differential $d^1_{2,6}$ is surjective, it sends the generator of $E^1_{2,6}$ to a generator of $E^1_{1,6} = \pi^s(3) \cong \Z_{24}$.
\section{Introduction}
Felix Klein's Ping Pong Lemma is a widely used criterion for determining if a collection of group elements generate a non-abelian free subgroup and more generally, for constructing subgroups which are non-trivial free products.
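A classical illustration of the lemma (a standard textbook example, not taken from this paper): Sanov's matrices $\left(\begin{smallmatrix}1&2\\0&1\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}1&0\\2&1\end{smallmatrix}\right)$ satisfy the ping pong hypotheses on the sets $\{|x|>|y|\}$ and $\{|y|>|x|\}$ in $\R^2$, hence generate a free subgroup of $SL(2,\Z)$; in particular no nonempty reduced word in them is the identity. A quick computational spot check:

```python
import numpy as np

# Sanov's generators: nonzero powers of A map {|y|>|x|} into {|x|>|y|},
# and nonzero powers of B do the reverse, so ping pong applies.
A = np.array([[1, 2], [0, 1]])
B = np.array([[1, 0], [2, 1]])
A_inv = np.array([[1, -2], [0, 1]])
B_inv = np.array([[1, 0], [-2, 1]])

# Freeness predicts the reduced word A B A^{-1} B^{-1} is not the identity.
w = A @ B @ A_inv @ B_inv
print(w)
```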
In this paper, we employ the ping pong lemma in the setting of groups acting on \cat cube complexes to construct subgroups which split as non-trivial free products, as described below in the Main Theorem.
An action of a group $G$ on a \cat cube complex $X$ is said to be \emph{essential} if for any given orbit of $G$, there are orbit points which are arbitrarily deep inside any half space of $X$. A collection of groups $G_1, \ldots, G_k$ acting on $X$ are said to be \emph{simultaneously inessential} if there is a half space $\h$ and a vertex $v \in X$ such that $\cup_i G_i(v) \subset \h$.
A large class of examples of simultaneously inessential subgroups arise when $G$ is Gromov hyperbolic and acts properly cocompactly on $X$ and the $G_i$'s are a finite collection of infinite index quasi-convex subgroups of $G$ (see Proposition \ref{QuasiconvexInessential}). Our goal is the following.
\begin{mnthm} Let $X$ be a finite dimensional, irreducible, non-Euclidean \cat cube complex and let $G$ be a group acting essentially and properly on $X$, without a global fixed point at infinity. Assume further that $G$ has no finite normal subgroup. Let $A_1,\ldots, A_n$ be a collection of simultaneously inessential subgroups of $G$. Then there exists $g\in G$ of infinite order, such that for each $i$, $ \langle g, A_i\rangle \cong \langle g\rangle * A_i.$
\label{main}
\end{mnthm}
If $H$ is an infinite index quasi-convex subgroup of a non-elementary hyperbolic group $G$, then the conclusion of Theorem \ref{main} holds: there exists $g \in G$ such that the subgroup generated by $g$ and $H$ is the free product $\langle g\rangle*H$; this was proved by Arzhantseva in \cite{GA}.
The key step in the proof of the Main Theorem that allows us to play ping pong is Proposition \ref{GoodHyperplane}, which says that for any collection $A_1,\ldots, A_n$ of simultaneously inessential subgroups, one can find a half space $\h$ in $X$ such that $a\h$ is contained in the complement of $\h$, for all nontrivial $a \in \cup_i A_i$.
In the process of proving the Proposition, we construct a new ultrafilter boundary $S(X)$ built out of \emph{strongly separated} ultrafilters of $X$. The strongly separated ultrafilters have nice properties. For example, the median of three strongly separated ultrafilters is a vertex of $X$. We use this to show that the fixed set of every non-trivial element of the group has empty interior on the boundary. We summarize this into the proposition below, which is proved at the end of Section \ref{IrreducibleCase}.
\begin{proposition} \label{bound}
Let $X$ be a non-Euclidean irreducible CAT(0) cube complex. Suppose a group $G$ acts essentially on $X$ without a global fixed point at infinity. Then, the compact $G$-space $S(X)$ is minimal and strongly proximal and hence, a $G$-boundary. Moreover, if the action of $G$ on $X$ is proper and $G$ has no non-trivial finite normal subgroups, then the action of $G$ on $S(X)$ is topologically free.
\end{proposition}
When $X$ splits into irreducible direct factors $X_1\times \ldots \times X_n$ and each factor $X_i$ is non-Euclidean, $S(X)$ decomposes as a direct product of the $S(X_i)$ and Proposition \ref{bound} naturally extends to the reducible case. A similar ultrafilter boundary was studied by Fernos in \cite{fernos}.
\subsection*{An Application of the Main Theorem : property $P_{naive}$} We use the Main Theorem to study property $P_{naive}$ for groups acting on \cat cube complexes.
Property $P_{naive}$ was introduced by Bekka, Cowling and de la Harpe \cite{BCH} to study the ideal structure of group $C^*$-algebras. We give a brief introduction to $P_{naive}$ in section \ref{naive}.
\begin{definition} A group $G$ has property $P_{naive}$ if for every finite subset $F \subset G$ there exists an element $y \in G$ of infinite order such that given $g \in F$, the subgroup $\langle g, y \rangle$ is canonically isomorphic to the free product $\langle g\rangle * \langle y\rangle$.
\end{definition}
\begin{corollary}
Suppose a group $G$ is acting properly and cocompactly on a finite dimensional non-Euclidean CAT(0) cube complex. If $G$ has no non-trivial finite normal subgroups then $G$ has property $P_{naive}$.
\end{corollary}
In the irreducible case, property $P_{naive}$ is a direct consequence of the Main Theorem, as given by Corollary \ref{naivecor}. When the underlying \cat cube complex is reducible, we prove property $P_{naive}$ for lattices in Aut$(X)$, where $X$ is locally finite, co-compact and has no Euclidean factor (see Theorem \ref{Products}). Examples of groups satisfying the hypotheses of Theorem \ref{Products}, which were not known up to now to satisfy $P_{naive}$, are the Burger-Mozes simple groups \cite{BurgerMozes2000}, which arise as lattices in products of trees.
The study of property $P_{naive}$ was initiated by Bekka, Cowling and de la Harpe as a means to establish $C^*$-simplicity of group $C^*$-algebras. Here, we use property $P_{naive}$ from the above Corollary and several previously known results to provide necessary and sufficient conditions for the reduced $C^*$-algebra of a \cat cube complex group to be simple. This last property is commonly referred to as $C^*$-simplicity.
\begin{corollary}\label{equiv}
The following are equivalent for a group $G$ acting properly and co-compactly on a finite dimensional \cat cube complex $X$.
\begin{enumerate}
\item $G$ has property $P_{naive}$.
\item $G$ is $C^*$-simple.
\item Every non-trivial conjugacy class of $G$ is infinite.
\item The amenable radical of $G$ is trivial.
\item The $G$-action is faithful and $X$ is non-Euclidean.
\end{enumerate}
\end{corollary}
Recent research has yielded more sophisticated techniques for establishing $C^*$-simplicity. Kalantar and Kennedy \cite{KK} have brought in dynamical techniques showing that a group $G$ is $C^*$-simple if and only if there exists a $G$-boundary on which the $G$-action is topologically free. Using Proposition \ref{bound}, we get an application of their theorem to groups acting properly (not necessarily co-compactly) on \cat cube complexes without a global fixed point at infinity.
Kalantar and Kennedy's methods were developed further by Breuillard, Kalantar, Kennedy and Ozawa \cite{BKKO}. Recall \cite[Theorem 3.1]{BKKO}, which says that if a discrete group $G$ has countably many amenable subgroups, then $G$ is $C^*$-simple if and only if the amenable radical is trivial. In \cite{SageevWise}, Sageev and Wise showed that groups acting on finite dimensional \cat cube complexes satisfy the Tits Alternative so long as one knows the action is proper and there is a bound on the size of the finite subgroups. Their proof works equally well if the existence of a bound on the size of finite subgroups is replaced by the weaker condition that every locally finite subgroup is finite. Therefore, if $G$ is acting properly on a finite dimensional CAT(0) cube complex and every locally finite subgroup of $G$ is finite, then the Tits Alternative for $G$ implies that every amenable subgroup is finitely generated virtually abelian. Consequently, if $G$ is countable, then $G$ can have only countably many amenable subgroups. We get the following interesting application of \cite[Theorem 3.1]{BKKO}.
\begin{proposition}
Let $G$ be a countable discrete group such that every locally finite subgroup is finite. Suppose $G$ acts properly on a finite dimensional CAT(0) cube complex. Then, $G$ is $C^*$-simple if and only if its amenable radical is trivial.
\end{proposition}
This generalizes Le Boudec's Proposition 3.2 from \cite{Adrien}, which deals with the case when $X$ is a product of trees. When the locally finite subgroups are not necessarily finite, groups acting properly on finite dimensional \cat cube complexes can have uncountably many amenable subgroups. For instance, one can make a direct sum of infinitely many copies of a finite cyclic group act properly on a tree.
\subsection*{Acknowledgements}
We would like to thank Moose, Luna and Shurjo, without whom this paper would have been possible. We would like to thank Emmanuel Breuillard, Pierre de la Harpe and the anonymous referee for their comments and suggestions for improving the paper.
\section{Preliminaries}
In this section, we collect some relevant notions and results on \cat cube complexes, and introduce a few new ones. We refer the reader to \cite{CapraceSageev2011}, \cite{NevoSageev2013} and \cite{Sageev2014} for details on the relevant background material. In particular, we will assume familiarity with hyperplanes and halfspaces. We will always assume that $X$ is a finite dimensional \cat cube complex.
We will use $\h$ (and other gothic letters) to refer to a halfspace, $\h^*$ to refer to the complementary halfspace and $\hh$ to refer to a hyperplane.
\subsection{Essentiality}
A \cat cube complex is called \emph{essential} if every halfspace $\h$ contains arbitrarily large metric balls. This is the same as saying that every halfspace contains arbitrarily deep points: points arbitrarily far away from its bounding hyperplane.
If $\Aut(X)$ acts on $X$ without a global fixed point either in $X$ or at infinity (the visual boundary), then $X$ contains an $\Aut(X)$-invariant essential core. Thus, it is reasonable to discuss only essential \cat cube complexes, and we shall assume this from now on.
An action of a group $G$ on $X$ is said to be an \emph{essential action} if for any given orbit, there are orbit points arbitrarily deep inside every halfspace. When $X$ is essential and the action is inessential there exists a halfspace $\h$ and a vertex $v$ such that $G(v)\subset \h$. A collection of subgroups $G_1,...,G_n < \Aut(X)$ are said to be \emph{simultaneously inessential} if there exists a halfspace $\h$ and a vertex $v$ in $X$ such that $\cup_i G_i(v)\subset \h$.
A large class of examples of simultaneously inessential subgroups arises in the context of hyperbolic groups.
\begin{proposition}
Let $G$ be a hyperbolic group which acts properly, cocompactly and essentially on a \cat cube complex $X$. Let $G_1,\ldots, G_n$ be a finite collection of infinite index quasiconvex subgroups. Then $G_1,\ldots, G_n$ are simultaneously inessential.
\label{QuasiconvexInessential}
\end{proposition}
We delay the proof of Proposition \ref{QuasiconvexInessential} until Section \ref{IrreducibleCase}.
\subsection{Products}
We say that $X$ is reducible if it admits a decomposition as a product of two non-trivial \cat cube complexes. A finite dimensional \cat cube complex always admits a canonical decomposition as a product of irreducible complexes.
If $X$ is essential then each irreducible factor of $X$ is also essential. Those irreducible factors that are not quasi-isometric to a real line are called \emph{non-Euclidean} factors. More explicitly, an irreducible, essential \cat cube complex is called non-Euclidean if it is not quasi-isometric to a real line. A (possibly reducible) essential \cat cube complex is called non-Euclidean if all of its factors are non-Euclidean. Essential, irreducible, non-Euclidean complexes will be the subject of Section \ref{IrreducibleCase}.
\subsection{Facing triples and strongly separated hyperplanes} The notion of a non-Euclidean \cat cube complex can be characterized in terms of facing triples of hyperplanes. By a facing triple of hyperplanes we mean a pairwise disjoint triple of hyperplanes that bound halfspaces which are also pairwise disjoint. Equivalently, no hyperplane of the triple separates the other two from one another. We then have the following lemma.
\begin{lemma}[Facing Triples]
Let $X$ be an essential, non-Euclidean \cat cube complex such that $\Aut(X)$ acts with no global fixed point at infinity. Then for every halfspace $\h$, there exists a facing triple $\hh, \hk,\hm$ with $\hk, \hm\subset \h$.
\label{FacingTriples}
\end{lemma}
An important lemma for us regarding irreducible cube complexes involves strongly separated pairs. A pair of disjoint hyperplanes $\hh$ and $\hk$ are called \emph{strongly separated} if there are no hyperplanes that intersect both $\hh$ and $\hk$. We will also refer to the corresponding nested pair of halfspaces $\h\subset\k$ as being strongly separated. We then have the following lemma.
\begin{lemma}[Strongly Separated Pairs]
Let $X$ be an essential non-Euclidean \cat cube complex such that $\Aut(X)$ acts without a global fixed point at infinity. Then for every halfspace $\h$ there exists a halfspace $\k\subset\h$ such that $\hh$ and $\hk$ are strongly separated.
\label{StronglySeparated}
\end{lemma}
\subsection{Skewering} A halfspace $\h$ is said to be \emph{skewered} by an automorphism $g\in\Aut(X)$ if $g\h\subset\h$. We say that $g$ skewers the hyperplane $\hh$ if $g$ skewers $\h$ or $\h^*$. The relevant lemma for us regarding skewering is the following.
\begin{lemma}[Double Skewering]
Let $X$ be essential and $G$ act on $X$ either cocompactly or without a global fixed point at infinity.
Then for every pair of halfspaces $\h\subset\k$, there exists $g\in G$ such that $g\k\subset\h$.
\label{DoubleSkewering}
\end{lemma}
As a corollary of the Double Skewering Lemma, we have that every halfspace is skewered by some element. Indeed, given a halfspace $\h$, essentiality provides a halfspace $\k$ with $\h\subset\k$, and the element $g$ furnished by the Double Skewering Lemma satisfies $g\h\subset g\k\subset\h$, so that $g$ skewers $\h$.
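This skewering has a standard consequence, recorded here as an aside: an element $g$ that skewers a halfspace $\h$ has infinite order, since the translates of $\h$ form a properly nested chain
\[
\cdots \subsetneq g^{2}\h \subsetneq g\h \subsetneq \h \subsetneq g^{-1}\h \subsetneq \cdots,
\]
so that $g^{n}\h \neq \h$, and hence $g^{n}\neq 1$, for all $n\neq 0$.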
In fact, a generalization of this for products can be established. More precisely (Theorem C of \cite{CapraceSageev2011}), one can show the following.
\begin{theorem}
Let $X=X_1\times\ldots\times X_n$ be a product of infinite, locally compact \cat cube complexes such that $\Aut(X_i)$ acts cocompactly on $X_i$ for each $i$. Suppose that $G$ is a lattice in $\Aut(X)$. Suppose that $\h_i\subset\k_i$ are nested halfspaces in each factor $X_i$. Then there exists $g\in G$ which simultaneously double skewers these hyperplanes. That is to say, for each $i$, $g\k_i\subset\h_i$.
\label{SimultaneousDoubleSkewering}
\end{theorem}
\subsection{The Roller Boundary}
As before, let $X$ be essential. We will consider here a certain part of the Roller boundary which will be useful to us (see \cite{Sageev2014} for basics on ultrafilters and the Roller boundary). Let $\cH$ denote the collection of halfspaces of $X$. Recall that an ultrafilter on $\cH$ is a subset $\alpha\subset \cH$ satisfying
\begin{enumerate}
\item (Choice) For each pair $\{\h,\h^*\}$, exactly one of $\h$ or $\h^*$ is in $\alpha$.
\item (Consistency) If $\h\subset\k$ and $\h\in\alpha$ then $\k\in\alpha$.
\end{enumerate}
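As a simple illustration (the notation $\alpha_v$ is ours), every vertex $v$ of $X$ determines a principal ultrafilter
\[
\alpha_v=\{\h\in\cH \mid v\in\h\}:
\]
choice holds because $v$ lies in exactly one of $\h$, $\h^*$, and consistency holds because $v\in\h$ and $\h\subset\k$ imply $v\in\k$.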
The collection of all ultrafilters ${\mathcal U}(X)$ has a natural topology induced by the Tychonoff topology on $2^\cH$. This has as a sub-basis the collection of \emph{halfspace neighborhoods}, where a halfspace neighborhood is a subset of ${\mathcal U}(X)$ of the form
\[U_\h\equiv\{\alpha\in{\mathcal U}(X)\vert \h\in\alpha\}.\]
One can show that the collection of ultrafilters is then closed in $2^\cH$.
The vertices of $X$ correspond to those ultrafilters satisfying the descending chain condition (DCC).
The Roller Boundary is defined to be the complement in ${\mathcal U}(X)$ of the DCC ultrafilters. It is closed in ${\mathcal U}(X)$ as well and is therefore compact.
On the opposite side of the spectrum for ultrafilters, we have what we call \emph{strongly separated} ultrafilters.
\begin{definition}
An ultrafilter $\alpha$ is \emph{strongly separated} if there exists an infinite nested sequence of halfspaces $\h_1\supset\h_2\ldots\in\alpha$ such that $\h_i$ and $\h_{i+1}$ are strongly separated. We call such a sequence of halfspaces a \emph{strongly separated sequence} of halfspaces.
\end{definition}
It is easy to see that there are strongly separated sequences of halfspaces, since by Lemma \ref{StronglySeparated}, any halfspace $\h$ contains a halfspace strongly separated from it. In fact, by employing the Facing Triple Lemma, there exist uncountably many strongly separated sequences. A key observation is that a strongly separated sequence uniquely determines an ultrafilter.
\begin{lemma}
For every strongly separated sequence of halfspaces $\h_1\supset\h_2\ldots$, there exists a unique ultrafilter $\alpha$ such that
$\h_i\in\alpha$.
\label{filter}
\end{lemma}
\begin{proof}
We define an ultrafilter as follows.
\[\alpha=\{\h\vert \h_i\subset\h \text{ for infinitely many } i \}\]
By definition $\h_i\in\alpha$ for each $i$. We are left to check that $\alpha$ satisfies the two conditions necessary for an ultrafilter (choice and consistency) and then that it is unique. Any given hyperplane $\hh$ may intersect at most one of the $\hh_i$'s. It follows that exactly one of the halfspaces $\h$, $\h^*$ contains infinitely many $\h_i$'s, thus precisely one of $\h,\h^*$ is in $\alpha$. The consistency condition is immediate since if infinitely many $\h_i$ satisfy $\h_i\subset\h$ and $\h\subset\k$ then $\h_i\subset\k$ for infinitely many $i$.
To see uniqueness, let $\beta$ be an ultrafilter such that $\h_i\in\beta$ for all $i$. Then for any $\h\in\beta$, observe that $\hh$ may intersect at most one $\hh_i$. Consequently, either $\h$ contains infinitely many $\h_i$'s or $\h^*$ contains infinitely many $\h_i$'s. Choose one such $\h_i$. Since $\h,\h_i\in\beta$, by the consistency condition we have that $\h_i\subset\h$ (and not $\h_i\subset\h^*$). This means that $\h\in\alpha$. So $\alpha$ and $\beta$ make the same choices for each pair $\h,\h^*$ and hence $\alpha=\beta$.
\end{proof}
We define $S(X)$ to be the closure in ${\mathcal U}(X)$ of the collection of strongly separated ultrafilters. It is a compact subspace of the Roller Boundary.
Next we see that strongly separated ultrafilters behave nicely with respect to medians. Recall that given three ultrafilters $\alpha, \beta, \gamma$, the \emph{median} of $\alpha, \beta$ and $\gamma$ is defined as \[med(\alpha,\beta,\gamma)\equiv(\alpha\cap\beta)\cup(\beta\cap\gamma)\cup(\gamma\cap\alpha).\]
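It is worth noting (a routine check, included here for convenience) that the median of three ultrafilters is again an ultrafilter; indeed, unwinding the definition,
\[
\h\in med(\alpha,\beta,\gamma) \iff \h \text{ lies in at least two of } \alpha,\beta,\gamma.
\]
Choice then follows by pigeonhole, since at least two of the three ultrafilters make the same choice from each pair $\{\h,\h^*\}$, and consistency follows because if $\h\subset\k$ and $\h$ lies in two of the three ultrafilters, then $\k$ lies in those two as well.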
\begin{lemma}
Let $\alpha,\beta,\gamma$ be distinct strongly separated ultrafilters. Then $med(\alpha,\beta,\gamma)$ satisfies the DCC and hence is a vertex of $X$.
\label{MedianInSpace}
\end{lemma}
\begin{proof}
We need to show that $\mu=med(\alpha,\beta,\gamma)$ satisfies the descending chain condition (see Figure \ref{Fig:MedianInSpace}). Suppose that $\h_1\supset\h_2\ldots$ is an infinite sequence of halfspaces such that $\h_i\in\mu$. Then after passing to a subsequence, we may assume that $\h_i\in\alpha\cap\beta$ for all $i$. Since $\alpha$ and $\beta$ are distinct strongly separated ultrafilters, there exist $\h\in\alpha$ and $\k\in\beta$ such that $\h\cap\k=\emptyset$ and $\hh$ and $\hk$ are strongly separated. Since $\h_i\in\alpha$ and $\h_i\in\beta$, we have that $\h_i\cap\h\not=\emptyset$ and $\h_i\cap\k\not=\emptyset$. But if $\{\h_i\}$ is an infinite descending sequence of halfspaces, we must have that for $i$ sufficiently large, $\h_i\subset\h$ or $\hh_i\cap\hh\not=\emptyset$.
Similarly, for $i$ sufficiently large, we must have $\h_i\subset\k$ or $\hh_i\cap\hk\not=\emptyset$.
Since $\h\cap\k=\emptyset$, the containment $\h_i\subset\h$ is ruled out by $\h_i\cap\k\not=\emptyset$, and $\h_i\subset\k$ is ruled out by $\h_i\cap\h\not=\emptyset$, so $\hh_i$ crosses both $\hh$ and $\hk$. But this contradicts the fact that $\hh$ and $\hk$ are strongly separated.
\begin{figure}[h]
\includegraphics[width=.70\textwidth]{MedianInSpace}
\caption{The median of strongly separated ultrafilters satisfies DCC.}
\label{Fig:MedianInSpace}
\end{figure}
\end{proof}
We will also need the following lemma telling us that halfspace neighborhoods form a basic collection of open neighborhoods for the strongly separated ultrafilters.
\begin{lemma}
Let $U\subset S(X)$ be an open neighborhood of $\alpha\in S(X)$, where $\alpha$ is a strongly separated ultrafilter. Then there exists a halfspace $\h$ such that $\alpha\in (U_\h\cap S(X))\subset U$.
\label{HalfSpaceNbhds}
\end{lemma}
\begin{proof}
Since the halfspace neighborhoods $U_\h$ serve as a collection of sub-basic open sets for the topology on ${\mathcal U}(X)$ and hence of $S(X)$, it suffices to prove this when $U$ is a finite intersection of halfspace neighborhoods of $\alpha$. That is, we assume that there exist halfspaces $\h_1,\ldots,\h_n$ such that $U=\cap U_{\h_i} \cap S(X)$. Since $\alpha$ is a strongly separated ultrafilter, there exists a strongly separated sequence $\k_1\supset\k_2\ldots$ with $\k_i\in\alpha$. For each $\h_i$, we then know that there exists a tail of the strongly separated sequence contained in $\h_i$. Consequently, there exists a single $\k_j$ such that $\k_j\subset\h_i$ for all $i$. We then have that $\alpha\in U_{\k_j}\cap S(X)\subset U$ as required.
\end{proof}
\subsection{Ping Pong}
We will use the following version of the Ping-Pong Lemma.
\begin{lemma}[Ping-Pong Lemma]
Let $S$ be a set and let $G$ be a group acting on $S$. Let $H,K<G$ be subgroups of $G$, at least one of which has more than two elements. Suppose that there exist two disjoint non-empty subsets $U,V\subset S$ such that for all $1\not=h\in H$ we have
$hU\subset V$, and for all $1\not=k\in K$, $kV\subset U$. Then $\langle H,K\rangle\cong H*K$.
\label{PingPong}
\end{lemma}
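The classical example of Sanov's subgroup of $SL_2(\mathbb{Z})$ (not needed in the sequel, but a useful model to keep in mind) fits this lemma: take
\[
h=\begin{pmatrix}1&2\\0&1\end{pmatrix},\qquad
k=\begin{pmatrix}1&0\\2&1\end{pmatrix},
\]
acting linearly on $S=\mathbb{R}^{2}$, with $U=\{(x,y) : |x|<|y|\}$ and $V=\{(x,y) : |x|>|y|\}$. For $n\neq 0$ we have $h^{n}(x,y)=(x+2ny,\,y)$, and $|x|<|y|$ gives $|x+2ny|\geq 2|y|-|x|>|y|$, so $h^{n}U\subset V$; symmetrically $k^{n}V\subset U$, and the lemma yields $\langle h,k\rangle\cong \mathbb{Z}*\mathbb{Z}$, a free group of rank two.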
\section{Irreducible complexes}
\label{IrreducibleCase}
In all that follows, we will assume that $X$ is a finite dimensional, irreducible, essential, non-Euclidean \cat cube complex, and that $G$ is a group acting on $X$ essentially, properly, and without a global fixed point at infinity. We also assume that $G$ has no finite normal subgroup.
\begin{theorem}[Main Theorem]
Let $A_1,\ldots, A_n$ be a collection of simultaneously inessential subgroups of $G$. Then there exists $g\in G$ of infinite order, such that for each $i$,
$$ \langle g, A_i\rangle \cong \langle g\rangle * A_i $$
\label{MainTheorem}
\end{theorem}
\begin{corollary}
\label{naivecor}
Suppose that a group $G$ is acting on a finite-dimensional irreducible non-Euclidean \cat cube complex $X$. If the action of $G$ on $X$ is essential, proper and has no global fixed point at infinity, and $G$ has no non-trivial finite normal subgroups, then $G$ has property $P_{naive}$.
\end{corollary}
First of all, we will need the following lemma.
\begin{lemma}
Suppose that $a\in G$ is nontrivial. Then $\Fix(a)\subset S(X)$ has empty interior.
\label{EmptyInterior}
\end{lemma}
\begin{proof}
Suppose that $a$ is non-trivial and fixes pointwise an open subset $U\subset S(X)$. By Lemma \ref{HalfSpaceNbhds},
there exists a halfspace $\h$ such that the halfspace neighborhood satisfies $U_\h\cap S(X)\subset U$.
Consider three distinct strongly separated ultrafilters in $U_\h\cap S(X)$ and let $v$ denote their median. By Lemma \ref{MedianInSpace}, the ultrafilter $v$ is a vertex in $X$.
Since the action is essential, there exists $g\in G$ such that $g$ skewers $\h$, so that $g\h\subset \h$.
By the Lemmas \ref{StronglySeparated} and \ref{DoubleSkewering}, we may further assume that $g\hh$ and $\hh$ are strongly separated.
We now consider the elements $a_n \equiv g^{-n} a g^n$. Let $\h_n = g^{-n}\h$. Note that by our choice of $g$ above, the sequence $\{\h_n^*\}$ is a strongly separated sequence of halfspaces.
Note that $a_n$ fixes $U_n\equiv U_{\h_n}$. Since $v\in\h\subset g^{-n}\h$, it follows that $v$ is the median of three points contained in $U_n$, and therefore $a_n v= v$. By the properness of the action, there are only finitely many possibilities for $a_n$, so we may pass to a subsequence of $\{a_n\}$ such that $a_n=b$ for all $n$. We then have $\bigcup_n U_n\subset \Fix(b)=\{y\in S(X) \vert by=y\}$. Because the action is proper, the kernel of the action on $S(X)$ is a finite normal subgroup, and because $G$ has no finite normal subgroup, we have $\Fix(b)\not= S(X)$. But $\Fix(b)$ is closed, so there exists a halfspace $\k$ such that $U_\k\subset S(X)-\Fix(b)$. Consequently, we have that $\k\subset \h_n^*$ for all sufficiently large $n$. But this is a contradiction, since
$\{\h_n^*\}$ is a strongly separated sequence of halfspaces.
\end{proof}
The key to proving the main theorem is the following proposition, which will allow us to play ping-pong.
\begin{proposition}
Let $A_1,\ldots, A_n$ be a collection of simultaneously inessential subgroups of $G$. Then there exists a halfspace $\k$ in $X$ such that
$a\k\subset\k^*$ for all non-trivial $a\in \bigcup_i A_i$.
\label{GoodHyperplane}
\end{proposition}
\begin{proof}
To avoid writing indices, we will first give the proof for the case of a single subgroup $A$ and then later explain how this is done for finitely many subgroups.
We will construct a combinatorial convex hull for $A(v)$, where $v$ is some vertex of $X$. For a halfspace $\h$, let $C(\h)$ denote the carrier of $\h$, namely the union of cubes that intersect $\h$ non-trivially. It is easy to see that $C(\h)$ is a convex subcomplex of $X$ (see \cite{Haglund2008}). Now, given a halfspace $\h$ such that $A(v)\subset\h$, we define \[C_\h=\bigcap_{a\in A} C(a\h).\]
The inessentiality assumption tells us that there exists such an $\h$, and since $C_\h$ is the intersection of convex subcomplexes, it is convex. Also, $C_\h$ is invariant under $A$.
\noindent{\bf Remark.} It is convex in both the usual CAT(0) sense but also in the $\ell_1$ sense: every combinatorial edge-geodesic between vertices in $C_\h$ remains in $C_\h$.
Choose some halfspace $\k_1\subset\h^*$ such that $\hh$ and $\hk_1$ are strongly separated.
We observe that every hyperplane which intersects $C_\h$ does not intersect $\hk_1$, since $C_\h\subset\h$ and $\hh$ and $\hk_1$ are strongly separated.
\begin{figure}[h]
\includegraphics[width=.90\textwidth]{Hull1}
\caption{A convex hull for $A(v)$.}
\label{Fig:Hull1}
\end{figure}
Now we consider the natural combinatorial projection of $\hk_1$ onto $C_\h$. Namely, consider all the hyperplanes intersecting $C_\h$. As observed, for every such hyperplane $\hm$, we have $\hk_1\subset \m$ or $\hk_1\subset\m^*$. This thus defines an ultrafilter on the collection of hyperplanes meeting $C_\h$, since it is a choice of halfspaces which satisfies the standard consistency conditions necessary for an ultrafilter. It also satisfies the DCC condition (see, for example, \cite{Sageev2014}). Thus, it determines a vertex $w$ in $C_\h$. This is the unique vertex of $C_\h$ that can be joined by a path to $\hk_1$ without crossing any hyperplane that meets $C_\h$.
Note that for any $a\in A$, $a\hk_1$ does not intersect any hyperplane that intersects $C_\h$. This is because if it did, say $a\hk_1\cap\hm\not=\emptyset$, then by applying $a^{-1}$, we find that $\hk_1\cap a^{-1}(\hm)\not=\emptyset$. But by invariance of $C_\h$ under $A$, we have that $a^{-1}(\hm)\cap C_\h\not=\emptyset$, contradicting the strong separation of $\hk_1$ and $\hh$.
Thus $a\hk_1$ projects to a vertex in $C_\h$, just as $\hk_1$ does.
Now by the naturality of this construction, we have that for each $a\in A$, the translate $a\hk_1$ projects to $aw$. But if $a\hk_1\cap \hk_1\not=\emptyset$ it must project to $w$ as well.
\begin{figure}[h]
\includegraphics[width=.90\textwidth]{Hull2}
\caption{The projection of $\hk_1$ onto $C_\h$.}
\label{Fig:Hull2}
\end{figure}
This tells us that
$$S = \{a\in A\vert a\hk_1\cap\hk_1\not=\emptyset\}\subset \Stab(w)$$
By the properness of the action, we get that $S$ is finite. For all elements $a\not\in S$, we have $a\k_1\subset\k_1^*$, as required. We are thus left to prove the proposition for the elements of $S$.
Let $U_{\k_1}$ denote the open subset of $S(X)$ determined by $\k_1$. By Lemma \ref{EmptyInterior}, we can find a point $b\in U_{\k_1}$ which is not fixed by any element of $S$. Since $S$ is finite, there exists a neighborhood $U\subset U_{\k_1}$ of $b$ such that $U\cap aU=\emptyset$ for any $a\in S$. Since every open neighborhood contains a halfspace neighborhood, we have a halfspace $\k_2\subset\k_1$ such that $aU_{\k_2}\cap U_{\k_2} = \emptyset$ for all $a\in S$. Thus, we have that
$a\k_2\subset \k_2^*$ for any $a\in S$. Since this is already true for $\k_1$ for all other elements of $A$, the hyperplane $\hk_2$ is the desired hyperplane.
To show the proposition for the case of finitely many subgroups $A_1,\ldots,A_n$ which are simultaneously inessential, we start with a hyperplane $\h$ such that $\bigcup _i A_i(v)\subset\h$. Taking $\hk_1$ as above
we see that the set of elements $S$ of $\bigcup_i A_i$ which carry $\hk_1$ to a hyperplane meeting $\hk_1$ is finite.
We then construct, as in the previous paragraph a halfspace $\k_2\subset\k_1$ such that $a\k_2\subset\k_2^*$ for any element of $S$. This $\hk_2$ is the desired hyperplane.
\end{proof}
\begin{proof}[Proof of Theorem \ref{MainTheorem}]
By Proposition \ref{GoodHyperplane}, there exists a halfspace $\k$ such that $a\k\subset\k^*$ for all non-trivial $a\in \bigcup_i A_i$.
We need to find our $g\in G$ which plays ping-pong with every $A_i$.
By the Facing Triples Lemma, there exists a pair of disjoint halfspaces $\m$ and $\n$ with $\m\cup \n\subset \k$.
By the Double Skewering Lemma, there exists $g\in G$ such that $g\m^*\subset\n$. Since $\n\subset\m^*$, the element $g$ skewers $\m^*$ and hence has infinite order.
We now construct two disjoint subsets $U$ and $V$ of $X$, such that $aV\subset U$ for all non-trivial $a\in A_i$ and $g^n U\subset V$ for all $n\not=0$ and for all $i$. This will then give the result by the Ping Pong Lemma. For each $i$, we define
$$U=\bigcup_{a\not=1\in A_i} a\k \text{\ \ and\ \ } V = \m\cup g\m^*$$
\begin{figure}[h]
\includegraphics[width=.90\textwidth]{MainArgument}
\caption{The construction of the ping pong pair.}
\label{Fig:MainArgument}
\end{figure}
Note that by construction, for each $a\not=1\in A_i$, we have $a\k\subset U$ and $V\subset\m\cup\n\subset\k$, so we have $aV\subset U$. For each $n\leq -1$, we obtain
$g^n(g\m)\subset\m$, while for each $n\geq 1$, we obtain $g^n\m^*\subset g\m^*$. Since $U\subset\m^*$ and $U\subset g\m$, we have that $g^n U\subset V$, as required.
\end{proof}
We now prove Proposition \ref{QuasiconvexInessential}.
\begin{proof}
We prove the proposition by induction on $n$. Let $v$ be a vertex of $X$. In \cite{SageevWise2015} (Proposition 3.3), it is shown that if $H$ is a quasiconvex subgroup of $G$, there exists a number $C>0$ and a universal number $D>0$ (depending only on the dimension of the complex), such that if $w$ is a vertex with $d(w, H(v))>C$, then there exists a hyperplane $\hh$ separating $w$ and $H(v)$ and $d(w,\hh)<D$.
Since points arbitrarily far away from $H(v)$ are guaranteed to exist when $H$ is of infinite index, this implies the proposition in the case $n=1$.
We now assume that $G_1,\ldots,G_{n-1}$ are simultaneously inessential. Let $\h$ be a halfspace such that $G_i(v)\subset \h^*$ for $i=1,\ldots,n-1$. Note that since $v\in \h^*$, the orbit $G_n(v)$ is not entirely contained in $\h$. We consider a halfspace $\k\subset\h$ such that $\hk$ is strongly separated from $\hh$. Since $G_n$ is of infinite index, there exists a vertex $w\in\k$ such that $d(w, G_n(v))>C$. We also choose $w$ such that $d(w,\hk)>D$. Now we apply the above again and conclude that there exists a hyperplane $\hm$ separating $w$ and $G_n(v)$ and such that $d(w,\hm)<D$. Let $\m$ be the halfspace associated to $\hm$ such that $w\in\m$ and $G_n(v)\subset\m^*$. Since $d(w,\hm)<D$ and $d(w,\hk)>D$, we have that $\hm\cap\k\not=\emptyset$. Since $\hk$ and $\hh$ are strongly separated, we thus have that $\hm\cap\hh=\emptyset$. Moreover, since $G_n(v)\cap\h^*\not=\emptyset$, we must have $\m\subset\h$ and not $\m^*\subset\h$. It follows
that $G_i(v)\subset\m^*$ for all $i=1,\ldots,n$, as required.
\end{proof}
We complete the section with a proof of Proposition \ref{bound}, which says that $S(X)$ is a $G$-boundary, on which, if conditions are favourable, $G$ acts topologically freely.
\begin{proof}[Proof of Proposition \ref{bound}]
Let $X$ be a non-Euclidean irreducible \cat cube complex. Suppose $G$ is acting essentially on $X$ without a global fixed point at infinity. Then, we claim that $S(X)$ is a $G$-boundary. We first show that the compact $G$-space $S(X)$ is minimal: given $\alpha \in S(X)$ and $U \subset S(X)$ open, there exists $g\in G$ such that $g\alpha \in U$. By Lemma \ref{HalfSpaceNbhds}, there exists some halfspace $\h$ such that $(U_\h\cap S(X))\subset U$. If $\h \in \alpha$ then we can take $g=1$. Suppose then that $\h \notin \alpha$. By the Flipping Lemma \cite{CapraceSageev2011}, there exists $g \in G$ such that $g\h^* \subset \h$, and for this $g$, $\h \in g\alpha$. This implies $g\alpha \in (U_\h\cap S(X))\subset U$.
We now show that $S(X)$ is proximal: for any pair $\alpha, \beta \in S(X)$ of points, there exists a point $\gamma \in S(X)$ such that for every open neighbourhood $U$ of $\gamma$ there exists $g \in G$ such that $g\alpha, g\beta \in U$. Choose a strongly separated ultrafilter $\gamma$ which is distinct from both $\alpha$ and $\beta$. Let $U$ be any open set containing $\gamma$. Note that $S(X)$ is Hausdorff, and so we can find an open set $V$ that contains $\gamma$ but contains neither $\alpha$ nor $\beta$. The open set $U\cap V$ contains $\gamma$ and, by Lemma \ref{HalfSpaceNbhds}, contains a halfspace neighbourhood $U_\h$. Now, $\h^* \in \alpha, \beta$, so we use the Flipping Lemma to find $g \in G$ such that $g\h^* \subset \h$. Then, $\h \in g\alpha, g\beta$ and therefore $g\alpha, g\beta \in U$.
To ensure that the proximal minimal $G$-space $S(X)$ is strongly proximal, we need to check that $S(X)$ has contractible neighbourhoods. Let $\alpha$ be a strongly separated ultrafilter and let $\h$ be a halfspace contained in $\alpha$. We claim that the open neighbourhood $V:=U_\h \cap S(X)$ of $\alpha$ is contractible, i.e. there exists $\beta \in S(X)$ such that every open neighbourhood of $\beta$ contains a translate of $V$. Choose $\beta$ to be any strongly separated ultrafilter containing $\h^*$ (in particular $\beta$ is distinct from $\alpha$) and let $U$ be an open set containing $\beta$. As before, choose a halfspace $\k$ such that $\k \in \beta$, $\k \subset \h^*$ and $U_\k \cap S(X) \subset U$. Use the Flipping Lemma to choose $g\in G$ such that $g \k^* \subset \k$. Then, $gV \subset U$.
This shows that $S(X)$ is a minimal and strongly proximal compact $G$-space. Lemma \ref{EmptyInterior} verifies that the action is topologically free whenever the action of $G$ on $X$ is proper and $G$ has no non-trivial finite normal subgroups.
\end{proof}
\section{Property $P_{naive}$ and $C^*$-simplicity}\label{naive}
Recall that a group $G$ has property $P_{naive}$ if for every finite subset $F \subset G$ there exists an element $y \in G$ of infinite order such that given $g \in F$, the subgroup $\langle g, y \rangle$ is isomorphic to the free product $\langle g\rangle * \langle y\rangle$.
The simplest example of a group possessing property $P_{naive}$ is a non-abelian free group $F_n$. Property $P_{naive}$ was introduced by Bekka, Cowling and de la Harpe as part of their programme to study simplicity of group $C^*$-algebras \cite{BCH}. Non-elementary hyperbolic groups have property $P_{naive}$; this was proved for torsion-free groups by de la Harpe, and further generalized to relatively hyperbolic groups in \cite{ArzhantsevaMinasyan}. In \cite{BCH}, the authors established $P_{naive}$ for Zariski dense subgroups of connected simple Lie groups with $\mathbb{R}$-rank 1 and trivial center. More recently, property $P_{naive}$ was studied by Tal Poznansky in the context of linear groups: he proved that every Zariski-dense subgroup of a semisimple algebraic group (over any field) satisfies a weak version of property $P_{naive}$ \cite[Lemma 2.3]{Poznansky}.
Here, we study conditions under which groups acting on \cat cube complexes have property $P_{naive}$. When the underlying complex is irreducible, property $P_{naive}$ follows from the Main Theorem and is recorded as Corollary \ref{naivecor} above.
\subsection{Products}
In the case of products, we prove a result in a more restricted setting, namely that of lattices in $\Aut(X)$, where $X$ is a locally finite, cocompact \cat cube complex.
\begin{theorem}\label{Products}
Let $X$ be a locally finite, cocompact \cat cube complex with no Euclidean factors; let $G$ be a lattice in $\Aut(X)$ with no non-trivial finite normal subgroup. Then $G$ satisfies $P_{naive}$.
\end{theorem}
\begin{proof}
Let $X=\prod_k X_k$ be the decomposition of $X$ into irreducible factors. Let $g_1,\ldots, g_n$ denote a finite collection of elements of $G$.
We first observe that for each $i$, the action of $\langle g_i\rangle$ on each irreducible factor of $X$ is inessential. This is simply because $\langle g_i\rangle$ is cyclic and each factor is non-Euclidean.
Secondly, we observe that since the action of $\langle g_i\rangle$ on $X$ is proper, there exists a factor of $X$ on which the action of $\langle g_i\rangle$ is proper. Otherwise, for each factor $X_k$ there exists an integer $n_k$ such that $g_i^{n_k}$ fixes the ball of radius $R$ in $X_k$. Taking $N=\prod_k n_k$, we obtain an $N$ such that $\langle g_i^N\rangle$ fixes the ball of radius $R$ in each $X_k$, which, in the case that $g_i$ has infinite order, would contradict the properness of the action of $\langle g_i\rangle$ on $X$.
For each factor $X_k$ on which $\langle g_i\rangle$ acts properly, Proposition \ref{GoodHyperplane} ensures that there exists a halfspace $\h_k$ such that $a\h_k\subset \h_k^*$ for all non-trivial
$a\in \langle g_i\rangle$. (If for some $k$, there are no such $g_i$'s, we choose $\h_k$ arbitrarily.) Following the proof of Theorem \ref{MainTheorem}, for each such $k$, we then choose disjoint halfspaces $\m_k$ and $\n_k$, so that $\m_k\cup \n_k\subset \h_k$.
Now we apply Theorem \ref{SimultaneousDoubleSkewering} to conclude that there exists $g\in G$ such that $g\m_k^*\subset\n_k$ simultaneously for all $k$. The construction now of $U$ and $V$ for the application of the Ping Pong Lemma proceeds as in the proof of Theorem \ref{MainTheorem}.
More precisely, we need to show that $\langle g,g_i\rangle \cong \langle g\rangle * \langle g_i\rangle$. Given such an $i$, let $X_k$ denote an irreducible factor on which $\langle g_i\rangle$ acts properly. Then set
$$U=\bigcup_{1\not=a\in \langle g_i\rangle} a\h_k \text{\ \ and\ \ } V = \m_k\cup\n_k$$
Then we obtain $aV\subset U$ for any $1\not=a\in\langle g_i\rangle$ and $g^nU\subset V$ for any $n\not=0$, as required.
\end{proof}
\subsection{Infinite conjugacy classes}
Corollary \ref{naivecor} and Theorem \ref{Products} allow us to determine necessary and sufficient conditions for a \cat cube complex group to be $C^*$-simple. $C^*$-simple groups are often \textit{icc}: a group is icc if the conjugacy class of every non-identity element is infinite. We will first identify the collection of \cat cubical groups which are icc.
\begin{proposition}\label{icc}
If a group $G$ acts properly and co-compactly on a \cat cube complex then $G$ is icc if and only if no finite index subgroup of $G$ contains a non-trivial virtually abelian normal subgroup.
\end{proposition}
\begin{proof}[Proof of Proposition] Suppose that $G$ is not icc. Let $H$ be the collection of all elements $g \in G$ such that the conjugacy class of $g$ is finite. It is easy to check that $H$ is a characteristic subgroup of $G$. Let $L$ be a subgroup generated by finitely many elements $x_1, \ldots, x_k$ of $H$. For each $i$, the centralizer of $x_i$ in $L$ is a subgroup of finite index in $L$. Consequently, the centre of $L$, which is the intersection of the centralizers of the $x_i$'s, has finite index in $L$. This implies that each finitely generated subgroup of $H$ is virtually abelian. As every virtually abelian subgroup must stabilize a flat and the dimension of flats in $X$ is bounded, $H$ is forced to be virtually abelian. This shows that if $G$ has a non-trivial finite conjugacy class, then $G$ contains a non-trivial virtually abelian normal subgroup.
Suppose now that a finite index subgroup $\Gamma$ of $G$ contains a non-trivial virtually abelian normal subgroup $K$. If $K$ is finite and $g$ is a non-trivial element of $K$, then the conjugacy class $\{xgx^{-1}\ |\ x \in G\}$ of $g$ is contained in $\cup_{t \in G/\Gamma} tKt^{-1}$. Evidently, the conjugacy class of every element of $K$ is finite and so, $G$ cannot be icc. If $K$ is infinite, then replace $K$ by a characteristic subgroup $K'$ which is free abelian of finite rank. The action of $\Gamma$ on $K$ by conjugation preserves $K'$ and so, $\Gamma$ normalizes $K'$. The homomorphism from $\Gamma$ to $\Aut(K')\cong GL(n,\mathbb{Z})$ has finite image (in fact, it lies inside $O(n)\cap GL(n,\mathbb{Z})$) and so, a finite index subgroup of $\Gamma$ (and hence, of $G$) centralizes $K'$. Clearly, the conjugacy class of every element of $K'$ in $G$ is finite and $G$ cannot be icc.
\end{proof}
The amenable radical of a group $G$, written $A_G$, is the largest amenable normal subgroup of $G$. As amenability is closed under extensions, the amenable radical exists and is easily shown to be a characteristic subgroup of $G$. Suppose a group $G$ has a finite index subgroup that contains a non-trivial normal virtually abelian subgroup $K$. Then, passing to a normal finite index subgroup, we can assume that $G$ has a normal subgroup $H$ of finite index such that $A_H \neq 1$. As $A_H$ is characteristic in $H$, it is normal in $G$, and it follows that $A_G \neq 1$. Hence, the triviality of $A_G$ implies that $G$ has no finite index subgroups containing non-trivial normal virtually abelian subgroups.
In groups acting geometrically on \cat cube complexes the converse is true: the amenable radical is trivial if no finite index subgroup of $G$ has a non-trivial normal virtually abelian subgroup. This is because \cat cubical groups satisfy the Tits Alternative \cite[Main Theorem]{SageevWise} and the amenable radical is virtually abelian. Therefore $A_G \neq 1$ implies that $G$ has a non-trivial normal virtually abelian subgroup. To summarise, we have the following equivalence.
\begin{lemma} \label{radical}
Suppose that a group $G$ is acting properly on a \cat cube complex and $G$ has a bound on the size of its finite subgroups. Then the amenable radical $A_G$ is trivial if and only if $G$ has no finite index subgroups with non-trivial normal virtually abelian subgroups.
\end{lemma}
The presence of virtually abelian subgroups inside finite index subgroups of $G$ is directly related to the existence of Euclidean factors in the Cartan decomposition of the underlying space.
\begin{lemma} Suppose a group $G$ is acting geometrically and faithfully on a CAT(0) cube complex $X$. If $X$ has a Euclidean factor, then some finite index subgroup of $G$ contains a non-trivial virtually abelian normal subgroup. In particular, the amenable radical of $G$ is non-trivial.
\label{euclidean}
\end{lemma}
\begin{proof}
As $G$ is acting geometrically, $X$ is finite dimensional and moreover, by passing to an essential core, we may assume that the $G$-action on $X$ is essential. Now, if $X$ is irreducible, then $X$ is Euclidean, meaning it is quasi-isometric to the real line. In this case $G$ itself is virtually infinite cyclic. If $X$ is reducible, then it has a Cartan decomposition into irreducible factors. We have $X \cong X_P \times X_E$, where $X_E$ is the Euclidean part of $X$. Then, by Corollary 2.8 from \cite{NevoSageev2013}, there is a finite index subgroup $H$ of $G$ such that $H= H_E \times H_P$, where $H_E$ acts properly and co-compactly on $X_E$. This implies that $H_E$ is virtually abelian, and so a finite index subgroup of $G$ contains a non-trivial virtually abelian normal subgroup.
\end{proof}
\subsection*{$C^*$-simple groups}
Let $G$ be a countable discrete group and let $\ell^2 G$ be the Hilbert space of square-summable functions on $G$. The group $G$ acts on $\ell^2G$ via its left regular representation as follows. $$\lambda_g(f)(h)= f(g^{-1}h),\ \forall g,h\in G.$$
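Each $\lambda_g$ is unitary, as it permutes the orthonormal basis $\{\delta_s\}_{s\in G}$ of $\ell^2 G$, and $g\mapsto\lambda_g$ is a homomorphism by the one-line computation
\[
(\lambda_g\lambda_h f)(x)=(\lambda_h f)(g^{-1}x)=f(h^{-1}g^{-1}x)=f((gh)^{-1}x)=(\lambda_{gh}f)(x).
\]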
The map $g \mapsto \lambda_g$ gives an injection of $G$ into the space of bounded linear operators $\mathcal{B}(\ell^2 G)$. The closure of the linear span of the image $\{\lambda_g\ :\ g \in G\}$ in the operator norm is called the reduced $C^*$-algebra of $G$ and written $C^*_r(G)$.
A countable group is said to be $C^*$-simple if $C^*_r(G)$ is a simple algebra, i.e. $C^*_r(G)$ has no non-trivial two-sided ideals. The reduced $C^*$-algebra carries information about the representation theory of the group. One can show that simplicity of the algebra $C^*_r(G)$ is equivalent to the following restriction on the representation theory of $G$: every unitary representation of $G$ which is weakly contained in the left regular representation of $G$ is actually equivalent to it. This means that a group which is both amenable and $C^*$-simple must be the trivial group. This statement in turn generalizes to the fact that a $C^*$-simple group cannot have non-trivial normal amenable subgroups.
Many geometric classes of groups have been shown to be $C^*$-simple. These include all free products (except the infinite dihedral group), non-soluble subgroups of $\mathrm{PSL}_2(\mathbb{R})$, torsion-free non-elementary hyperbolic groups and mapping class groups of surfaces. More generally, acylindrically hyperbolic groups are $C^*$-simple \cite{DGO}.
A group acting geometrically on an irreducible \cat cube complex has enough rank one elements to make it acylindrically hyperbolic, using results from \cite{sisto}. So groups acting geometrically on irreducible \cat cube complexes are $C^*$-simple. However, a group acting geometrically on a non-trivial product of irreducibles is not acylindrically hyperbolic (for example, irreducible lattices in products of trees). Here, we apply our theorems on property $P_{naive}$ to characterize exactly when a group $G$ acting properly and co-compactly on a \cat cube complex is $C^*$-simple. We summarize this as follows.
\begin{theorem}[Corollary \ref{equiv}] Suppose that a group $G$ is acting properly and co-compactly on a \cat cube complex $X$. The following are equivalent.
\begin{enumerate}
\item $G$ is $C^*$-simple.
\item $G$ is icc.
\item No finite index subgroup of $G$ has a non-trivial virtually abelian normal subgroup.
\item The amenable radical of $G$ is trivial.
\item The $G$-action is faithful and $X$ is non-Euclidean.
\item $G$ has property $P_{naive}$.
\end{enumerate}
\end{theorem}
\proof The implications (1)$\Rightarrow$(2) and (6)$\Rightarrow$(1) are well-known, see \cite{BCH}. Proposition \ref{icc} establishes the equivalence of (2) and (3). Lemma \ref{radical} shows (3) and (4) are equivalent. That (4) implies (5) follows from Lemma \ref{euclidean}.
The hypothesis that $G$ acts properly and co-compactly implies that $G$ is finitely presented and moreover, $X$ is finite-dimensional. The kernel of the action is finite whenever the action is proper and so if the amenable radical is trivial, the action is faithful. Now, to deduce (6) from (5), after passing to an essential core if needed, we apply Corollary \ref{naivecor} and Theorem \ref{Products}.
\endproof
\small{A. Kar, University of Southampton, a.kar@soton.ac.uk \\
M. Sageev, Technion - Israel Institute of Technology, sageevm@tx.technion.ac.il }
\section{Introduction} \label{sec1}
Let $\mathbb{K}$ be a field and $S = \mathbb{K}[x_1,\ldots,x_n]$ be the
polynomial ring in $n$ variables over $\mathbb{K}$. Computing and finding bounds for the depth (or equivalently, the projective dimension) of homogeneous ideals of $S$ and their powers have been studied by several authors (see e.g., \cite{b'}, \cite{chhktt}, \cite{ds}, \cite{fm}, \cite{htt}, \cite{hh''}, \cite{hktt}, \cite{nt}).
In \cite{fhm}, Fouli, H${\rm \grave{a}}$ and Morey introduced the notion of an {\it initially regular sequence}. Using this notion, they provided a method for estimating the depth of a homogeneous ideal. To be more precise, let $I\subset S$ be a homogeneous ideal and let $\{b_{i,j} \mid 1\leq i\leq q, 0\leq j\leq t_i\}$ be a subset of distinct variables of $S$. Suppose ${\rm in}_<(I)$ is the initial ideal of $I$ with respect to a fixed monomial order $<$ and assume that $G({\rm in}_<(I))=\{u_1, \ldots, u_m\}$ is the set of minimal monomial generators of ${\rm in}_<(I)$. It is shown in \cite[Theorem 3.11]{fhm} that $\depth S/I\geq q$, provided that the following conditions hold.
\begin{itemize}
\item[(i)] The monomials $u_1, u_2, \ldots, u_m$ are not divisible by $b_{i,j}^2$ for $1\leq i\leq q$ and $1\leq j\leq t_i$.
\item[(ii)] For $i=1, 2, \ldots, q$, if a monomial in $\{u_1, \ldots, u_m\}$ is divisible by $b_{i,0}$, then it is also divisible by $b_{i,j}$, for some integer $1\leq j\leq t_i$.
\end{itemize}
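The two conditions above are purely combinatorial, so they are easy to test by machine. Below is a small sketch of ours (the function name and encoding are not from \cite{fhm}), with monomials stored as exponent dictionaries; applied to the edge ideal of the star $K_{1,3}$, it certifies the lower bound $q=1$.

```python
# Sketch of a checker for conditions (i) and (ii) above (the encoding and
# function are ours, not from the cited paper).  A monomial is an exponent
# dictionary; B maps each index i to the list [b_{i,0}, b_{i,1}, ..., b_{i,t_i}].
def depth_lower_bound(gens, B):
    """Return q = len(B) if conditions (i) and (ii) hold, else None."""
    for row in B.values():
        b0, rest = row[0], row[1:]
        for u in gens:
            # condition (i): no b_{i,j} with j >= 1 appears with exponent > 1
            if any(u.get(b, 0) > 1 for b in rest):
                return None
            # condition (ii): divisibility by b_{i,0} forces some b_{i,j}, j >= 1
            if u.get(b0, 0) > 0 and not any(u.get(b, 0) > 0 for b in rest):
                return None
    return len(B)

# edge ideal of the star K_{1,3}: generators x0x1, x0x2, x0x3; here q = 1
gens = [{'x0': 1, 'x1': 1}, {'x0': 1, 'x2': 1}, {'x0': 1, 'x3': 1}]
bound = depth_lower_bound(gens, {1: ['x0', 'x1', 'x2', 'x3']})   # -> 1
```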
In Section \ref{sec2}, we provide an alternative proof for this result (see Proposition \ref{depin}). Our proof is based on a short exact sequence argument, while in \cite{fhm}, the authors construct an initially regular sequence to prove their result.
Fouli, H${\rm \grave{a}}$ and Morey \cite{fhm1} observed that the above result provides a combinatorial lower bound for the depth of edge ideals of graphs. Indeed, for every graph $G$ with edge ideal $I(G)$, we have$$\depth S/I(G)\geq \alpha_2(G),$$where $\alpha_2(G)$ denotes the so-called star packing number of $G$ (see Section \ref{sec2} for the definition of star packing number and see Corollary \ref{spn} for more details about the above inequality). It is proven in \cite[Theorem 3.7]{fhm1} that the above inequality can be extended to powers of $I(G)$ when $G$ is a forest. More precisely, for every forest $G$ and for every integer $s\geq 1$, the inequality
\[
\begin{array}{rl}
\depth S/I(G)^s \geq \alpha_2(G)-s+1
\end{array} \tag{$\dag$} \label{dag}
\]
holds. On the other hand, we know from \cite[Theorem 5.9]{svv} that for every forest $G$, the $s$-th ordinary and symbolic powers of $I(G)$ coincide. Hence, inequality (\ref{dag}) essentially says that for every forest $G$ and any positive integer $s$,
\[
\begin{array}{rl}
\depth S/I(G)^{(s)} \geq \alpha_2(G)-s+1.
\end{array} \tag{$\ddag$} \label{ddag}
\]
In Theorem \ref{main}, we generalize \cite[Theorem 3.7]{fhm1} by proving inequality (\ref{ddag}) for any chordal graph. Moreover, we show that inequality (\ref{ddag}) is true for $s=2$ and for any graph $G$ (see Theorem \ref{2power}).
\section{Preliminaries and known results} \label{sec2}
In this section, we provide the definitions and the known results which will be used in the next sections.
Let $G$ be a simple graph with vertex set $V(G)=\big\{x_1, \ldots,
x_n\big\}$ and edge set $E(G)$. For a vertex $x_i$, the {\it neighbor set} of $x_i$ is $N_G(x_i)=\{x_j\mid x_ix_j\in E(G)\}$. We set $N_G[x_i]=N_G(x_i)\cup \{x_i\}$ and call it the {\it closed neighborhood} of $x_i$. The cardinality of $N_G(x_i)$ is the {\it degree} of $x_i$ and will be denoted by ${\rm deg}_G(x_i)$. For every subset $U\subset V(G)$, the graph $G\setminus U$ has vertex set $V(G\setminus U)=V(G)\setminus U$ and edge set $E(G\setminus U)=\{e\in E(G)\mid e\cap U=\emptyset\}$. A subgraph $H$ of $G$ is called {\it induced} provided that two vertices of $H$ are adjacent if and only if they are adjacent in $G$. A graph $G$ is called {\it chordal} if it has no induced cycle of length at least four. A subset $W$ of $V(G)$ is a {\it clique} of $G$ if every two distinct vertices of $W$ are adjacent in $G$. A vertex $x$ of $G$ is a {\it simplicial vertex} if $N_G(x)$ is a clique. It is well-known that every chordal graph has a simplicial vertex. A subset $C$ of $V(G)$ is a {\it vertex cover} of $G$ if every edge of $G$ is incident to at least one vertex of $C$. A vertex cover $C$ is a {\it minimal vertex cover} if no proper subset of $C$ is a vertex cover of $G$. The set of minimal vertex covers of $G$ will be denoted by $\mathcal{C}(G)$. A subset $A$ of $V(G)$ is called an {\it independent subset} of $G$ if there are no edges among the vertices of $A$. Obviously, $A$ is independent if and only if $V(G)\setminus A$ is a vertex cover of $G$.
The {\it edge ideal} of a graph $G$ is defined as$$I(G)=\big(x_ix_j \, |\, x_ix_j\in E(G)\big).$$For a subset $C$ of $\big\{x_1, \ldots, x_n\big\}$, we denote by $\mathfrak{p}_C$, the monomial prime ideal which is generated by the variables belonging to $C$. It is well-known that for every graph $G$,$$I(G)=\bigcap_{C\in \mathcal{C}(G)}\mathfrak{p}_C.$$
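For a small illustration of this primary decomposition (our own check, not taken from the paper), take the path $P_4$. The script below enumerates the minimal vertex covers as complements of maximal independent sets and verifies, over a finite box of exponent vectors, that a monomial lies in every $\mathfrak{p}_C$ exactly when some edge divides it.

```python
from itertools import combinations, product

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4)]          # the path P4

def independent(A):
    return not any(i in A and j in A for i, j in E)

# minimal vertex covers = complements of maximal independent sets
indep = [set(A) for r in range(len(V) + 1)
         for A in combinations(V, r) if independent(A)]
maximal = [A for A in indep if not any(A < B for B in indep)]
min_covers = [set(V) - A for A in maximal]

def in_prime(expo, C):
    # a monomial lies in p_C iff some variable of C divides it
    return any(expo[i - 1] > 0 for i in C)

def in_edge_ideal(expo):
    # ... and in I(G) iff some edge x_i x_j divides it
    return any(expo[i - 1] > 0 and expo[j - 1] > 0 for i, j in E)

# I(G) and the intersection of the p_C agree on a box of exponent vectors
agree = all(in_edge_ideal(e) == all(in_prime(e, C) for C in min_covers)
            for e in product(range(3), repeat=4))
```

For $P_4$ the minimal covers found are $\{x_1,x_3\}$, $\{x_2,x_3\}$ and $\{x_2,x_4\}$.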
Let $I$ be an ideal of $S$ and let ${\rm Min}(I)$ denote the set of minimal primes of $I$. For every integer $s\geq 1$, the $s$-th {\it symbolic power} of $I$,
denoted by $I^{(s)}$, is defined to be$$I^{(s)}=\bigcap_{\frak{p}\in {\rm Min}(I)} {\rm Ker}(S\rightarrow (S/I^s)_{\frak{p}}).$$Let $I$ be a squarefree monomial ideal in $S$ and suppose that $I$ has the irredundant
primary decomposition $$I=\frak{p}_1\cap\ldots\cap\frak{p}_r,$$ where every
$\frak{p}_i$ is an ideal generated by a subset of the variables of
$S$. It follows from \cite[Proposition 1.4.4]{hh} that for every integer $s\geq 1$, $$I^{(s)}=\frak{p}_1^s\cap\ldots\cap
\frak{p}_r^s.$$We set $I^{(s)}=S$, for any integer $s\leq 0$.
It is clear that for any graph $G$ and every integer $s\geq 1$,$$I(G)^{(s)}=\bigcap_{C\in \mathcal{C}(G)}\mathfrak{p}_C^s.$$
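This description makes membership in $I(G)^{(s)}$ easy to test: a monomial lies in $\mathfrak{p}_C^s$ if and only if its exponents over $C$ sum to at least $s$. The sketch below (our own example, not from the paper) uses this to confirm the classical fact that for the triangle, $xyz\in I(G)^{(2)}\setminus I(G)^2$, so symbolic and ordinary powers can differ.

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]     # triangle: I = (xy, yz, xz)
covers = [{0, 1}, {1, 2}, {0, 2}]    # its minimal vertex covers

def in_symbolic(expo, s):
    # m in I^(s) = intersection of the p_C^s
    return all(sum(expo[v] for v in C) >= s for C in covers)

def in_square(expo):
    # m in I^2 iff the product of some two edges divides m
    for e1, e2 in product(edges, repeat=2):
        need = [0, 0, 0]
        for v in e1: need[v] += 1
        for v in e2: need[v] += 1
        if all(expo[v] >= need[v] for v in range(3)):
            return True
    return False

in_sym = in_symbolic((1, 1, 1), 2)   # xyz lies in I^(2) ...
in_ord = in_square((1, 1, 1))        # ... but not in I^2 (degree too small)
```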
As mentioned in the introduction, Fouli, H${\rm \grave{a}}$ and Morey \cite{fhm} devised a method to bound the depth of a homogeneous ideal. We provide an alternative proof for their result. Recall that for every monomial $u$ and for every variable $x_i$, the degree of $u$ with respect to $x_i$ is denoted by ${\rm deg}_{x_i}(u)$.
\begin{prop} [\cite{fhm}, Theorem 3.11] \label{depin}
Let $I$ be a proper homogeneous ideal of $S$ and let $<$ be a monomial order. Assume that $B=\{b_{i,j} \mid 1\leq i\leq q, 0\leq j\leq t_i\}$ is a subset of distinct variables of $S$, such that the following conditions are satisfied.
\begin{itemize}
\item[(i)] For every pair of integers $1\leq i\leq q$, $1\leq j\leq t_i$ and for every $u\in G({\rm in}_<(I))$, we have ${\rm deg}_{b_{i,j}}(u)\leq 1$.
\item[(ii)] For $i=1, 2, \ldots, q$, if a monomial $u\in G({\rm in}_<(I))$ is divisible by $b_{i,0}$, then it is also divisible by $b_{i,j}$, for some integer $1\leq j\leq t_i$.
\end{itemize}
Then $\depth S/I\geq q$.
\end{prop}
\begin{proof}
It is known that $\depth S/I\geq \depth S/{\rm in}_<(I)$ (see e.g., \cite[Theorem 3.3.4]{hh}). Hence, replacing $I$ by ${\rm in}_<(I)$, we may suppose that $I$ is a monomial ideal. We use induction on $|B|$. There is nothing to prove for $|B|=0$, as in this case $q=0$. Therefore, assume that $|B|\geq 1$. If $t_i=0$, for every $i=1, 2, \ldots, q$, then it follows from condition (ii) that $b_{1,0}, \ldots, b_{q,0}$ do not divide the minimal monomial generators of $I$. In particular, they form a regular sequence on $S/I$ and the assertion follows. Thus, suppose that $t_i\geq 1$, for some $i$ with $1\leq i\leq q$. Without loss of generality, suppose $i=1$. Consider the following short exact sequence.
\begin{align*}
0\longrightarrow S/(I:b_{1,t_1})\longrightarrow S/I\longrightarrow S/(I,b_{1,t_1})\longrightarrow 0
\end{align*}
This yields that
\[
\begin{array}{rl}
\depth S/I \geq \min \big\{\depth S/(I:b_{1,t_1}), \depth S/(I,b_{1,t_1})\big\}.
\end{array} \tag{1} \label{ast}
\]
By condition (i), the variable $b_{1,t_1}$ does not appear in the minimal monomial generators of $(I:b_{1,t_1})$. In particular, $b_{1,t_1}$ is a regular element on $S/(I:b_{1,t_1})$. Let $S'$ be the polynomial ring obtained from $S$ by deleting the variable $b_{1,t_1}$ (in other words, $S'\cong S/(b_{1,t_1})$). Set $I':=(I:b_{1,t_1})\cap S'$. It follows that$$\depth S/(I:b_{1,t_1})=\depth S/((I:b_{1,t_1}), b_{1,t_1})+1=\depth S'/I'+1.$$Clearly, $I'$ satisfies the assumptions with respect to the set $\{b_{i,j} \mid 2\leq i\leq q, 0\leq j\leq t_i\}$ of variables. Thus, the induction hypothesis implies that $\depth S'/I'\geq q-1$. Hence, we deduce from the above equalities that$$\depth S/(I:b_{1,t_1})\geq q.$$
Using inequality (\ref{ast}), it suffices to prove that $\depth S/(I,b_{1,t_1})\geq q$. Set $I'':=I\cap S'$. Then $S/(I,b_{1,t_1})\cong S'/I''$. Put $t_1':=t_1-1$ and $t_i':=t_i$, for $i=2, \ldots, q$. Obviously, $I''$ satisfies the assumptions with respect to the set $\{b_{i,j} \mid 1\leq i\leq q, 0\leq j\leq t_i'\}$ of variables. Therefore, we conclude from the induction hypothesis that$$\depth S/(I,b_{1,t_1})=\depth S'/I''\geq q.$$
\end{proof}
Let $G$ be a graph and $x$ be a vertex of $G$. The subgraph ${\rm St}(x)$ of $G$ with vertex set $N_G[x]$ and edge set $\{xy\, |\, y\in N_G(x)\}$ is called a {\it star with center $x$}. A {\it star packing} of $G$ is a family $S$ of stars in $G$ which are pairwise disjoint, i.e., $V({\rm St}(x))\cap V({\rm St}(x'))=\emptyset$, for distinct ${\rm St}(x), {\rm St}(x')\in S$. The quantity$$\max\big\{|S|\, |\, S \ {\rm is \ a \ star \ packing \ of} \ G\big\}$$ is called the {\it star packing number} of $G$. Following \cite{fhm1}, we denote the star packing number of $G$ by $\alpha_2(G)$.
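Since $V({\rm St}(x))=N_G[x]$, a star packing is simply a set of centers with pairwise disjoint closed neighborhoods, so $\alpha_2$ of a small graph can be computed by brute force. A minimal sketch (the helper below is ours, not from \cite{fhm1}):

```python
from itertools import combinations

def alpha2(V, E):
    """Brute-force star packing number: the largest set of vertices whose
    closed neighborhoods are pairwise disjoint (our own helper)."""
    nbhd = {v: {v} | {u for e in E for u in e if v in e and u != v} for v in V}
    for r in range(len(V), 0, -1):
        for centers in combinations(V, r):
            if all(nbhd[a].isdisjoint(nbhd[b])
                   for a, b in combinations(centers, 2)):
                return r
    return 0

a_path = alpha2([1, 2, 3, 4, 5], [(1, 2), (2, 3), (3, 4), (4, 5)])  # -> 2
```

For the path $P_5$ the maximum is attained by the stars centered at $x_1$ and $x_4$, so $\alpha_2(P_5)=2$.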
The following corollary is an immediate consequence of Proposition \ref{depin}, and it was indeed observed in \cite{fhm1}.
\begin{cor} [\cite{fhm1}] \label{spn}
For every graph $G$, we have$$\depth S/I(G)\geq \alpha_2(G).$$
\end{cor}
\begin{proof}
Let $b_{1,0}, \ldots, b_{q,0}$ be the centers of stars in a largest star packing of $G$. Moreover, for $1\leq i\leq q$, assume that $N_G(b_{i,0})=\{b_{i,1}, \ldots, b_{i,t_i}\}$. Then the assumptions of Proposition \ref{depin} are satisfied and it follows that$$\depth S/I(G)\geq q=\alpha_2(G).$$
\end{proof}
\section{Symbolic powers of edge ideals of chordal graphs} \label{sec3}
In this section, we prove the first main result of this paper, Theorem \ref{main}, which states that inequality (\ref{ddag}) holds for every chordal graph $G$ and every integer $s\geq 1$. In order to prove this result, we first need to estimate the star packing number of the graph obtained from $G$ by deleting a certain subset of its vertices. This will be done in the following two lemmas.
\begin{lem} \label{packdel1}
Let $G$ be a graph and let $W$ be a subset of $V(G)$. Then for every $A\subseteq \bigcup_{x\in W}N_G[x]$, we have$$\alpha_2(G\setminus A)\geq \alpha_2(G)-|W|.$$
\end{lem}
\begin{proof}
Let $\mathcal{S}$ be the set of the centers of stars in a largest star packing of $G$. In particular, $|\mathcal{S}|=\alpha_2(G)$. Since every vertex in $A$ belongs to the closed neighborhood of a vertex in $W$, it follows from the definition of star packing that $|\mathcal{S}\cap A|\leq | W|$. Then the stars in $G\setminus A$ centered at the vertices in $\mathcal{S}\setminus A$ form a star packing in $G\setminus A$ of size at least $\alpha_2(G)-| W|$. Therefore, $\alpha_2(G\setminus A)\geq \alpha_2(G)-|W|$.
\end{proof}
\begin{lem} \label{packdel}
Assume that $G$ is a graph and $W=\{x_1, \ldots, x_d\}$ is a clique of $G$. Let $A$ be a subset of $V(G)$ such that
\begin{itemize}
\item [(i)] $A\subseteq \bigcup_{i=1}^dN_G(x_i)$,
\item [(ii)] $N_G(x_1)\setminus \{x_2, \ldots, x_d\} \subseteq A$, and
\item [(iii)] $x_1\notin A$.
\end{itemize}
Then $\alpha_2(G\setminus A)\geq \alpha_2(G)-d+1$.
\end{lem}
\begin{proof}
Let $\mathcal{S}$ be the set of the centers of stars in a largest star packing of $G$. As in the proof of Lemma \ref{packdel1}, we have $|\mathcal{S}\cap A|\leq d$. If $|\mathcal{S}\cap A|\leq d-1$, then the stars in $G\setminus A$ centered at the vertices in $\mathcal{S}\setminus A$ form a star packing in $G\setminus A$ of size at least $\alpha_2(G)-d+1$. Thus, the assertion follows in this case. Therefore, suppose $|\mathcal{S}\cap A|=d$. In this case, we have$$x_1, \ldots, x_d\in \bigcup_{x\in \mathcal{S}\cap A}N_G(x).$$It again follows from the definition of star packing that$$x_1, \ldots, x_d\notin \bigcup_{x\in \mathcal{S}\setminus A}N_{G\setminus A}(x).$$Therefore, we conclude from condition (ii) that
\begin{align*}
N_{G\setminus A}[x_1]\cap \big(\bigcup_{x\in \mathcal{S}\setminus A}N_{G\setminus A}(x)\big)\subseteq \{x_1, \ldots, x_d\}\cap \big(\bigcup_{x\in \mathcal{S}\setminus A}N_{G\setminus A}(x)\big)=\emptyset.
\end{align*}
As a consequence, the stars in $G\setminus A$ centered at the vertices in $(\mathcal{S}\setminus A)\cup\{x_1\}$ form a star packing in $G\setminus A$ of size $\alpha_2(G)-d+1$. This completes the proof of the lemma.
\end{proof}
We are now ready to prove that inequality (\ref{ddag}) holds for any chordal graph. Indeed, we are able to prove the following stronger result.
\begin{prop} \label{sum1}
Let $G$ be a chordal graph. Suppose $H$ and $H'$ are subgraphs of $G$ with$$E(H)\cap E(H')=\emptyset \quad \text{and} \quad E(H)\cup E(H')=E(G).$$Assume further that $H$ is a chordal graph. Then for every integer $s\geq 1$,$$\depth S/(I(H)^{(s)}+I(H'))\geq \alpha_2(G)-s+1.$$
\end{prop}
\begin{proof}
As the isolated vertices have no effect on edge ideals, we assume that $V(H)=V(H')=V(G)$ (i.e., we extend the vertex sets of $H$ and $H'$ to $V(G)$). We use induction on $s+|E(H)|$. For $s=1$, we have $I(H)^{(s)}+I(H')=I(G)$ and the assertion follows from Corollary \ref{spn}. Therefore, suppose $s\geq 2$. If $E(H)=\emptyset$, then $I(H')=I(G)$ and again we have the required inequality by Corollary \ref{spn}. Hence, we assume $|E(H)|\geq 1$.
To simplify notation, we set $I:=I(H)^{(s)}+I(H')$. Since $H$ is a chordal graph, it has a simplicial vertex, say $x_1$, of nonzero degree. Without loss of generality, suppose $N_H(x_1)=\big\{x_2, \ldots, x_d\big\}$, for some integer $d\geq 2$. Consider the following short exact sequence.
\begin{align*}
0\longrightarrow \frac{S}{(I:x_1\ldots x_d)}\longrightarrow \frac{S}{I}\longrightarrow \frac{S}{(I,x_1\ldots x_d)}\longrightarrow 0
\end{align*}
Using the depth lemma \cite[Proposition 1.2.9]{bh}, we have
\[
\begin{array}{rl}
\depth S/I \geq \min \big\{\depth S/(I:x_1\ldots x_d), \depth S/(I,x_1\ldots x_d)\big\}.
\end{array} \tag{2} \label{1}
\]
Since $x_1$ is a simplicial vertex of $H$, for every pair of integers $i\neq j$ with $1\leq i,j\leq d$, we have $x_ix_j\in E(H)$. Therefore, $x_ix_j$ is not an edge of $H'$. Set$$U:=\bigcup_{i=1}^dN_{H'}[x_i]$$and$$U':=\bigcup_{i=1}^dN_{H'}(x_i).$$Then using \cite[Lemma 2]{s9}, we have
\begin{align*}
& (I:x_1\ldots x_d)=\big((I(H)^{(s)}+I(H')):x_1\ldots x_d\big)\\ & =I(H)^{(s-d+1)}+I(H'\setminus U) +\big({\rm the \ ideal\ generated\ by}\ U'\big)\\ & =I(H\setminus U')^{(s-d+1)}+I(H'\setminus U)+\big({\rm the \ ideal\ generated\ by}\ U'\big).
\end{align*}
This yields that$$\depth S/(I:x_1\ldots x_d)=\depth S'/(I(H\setminus U')^{(s-d+1)}+I(H'\setminus U)),$$where $S'=\mathbb{K}[x_i: 1\leq i\leq n, i\notin U']$. Let $G'$ be the union of $H\setminus U'$ and $H'\setminus U$. In fact, $G'$ is the induced subgraph of $G$ on $V(G)\setminus U'$. Clearly, $N_G(x_1)\setminus \{x_2, \ldots, x_d\}$ is contained in $U'$. Then the above equality together with Lemma \ref{packdel} and the induction hypothesis implies that
\[
\begin{array}{rl}
\depth S/(I:x_1\ldots x_d)\geq \alpha_2(G\setminus U')-(s-d+1)+1\geq \alpha_2(G)-s+1.
\end{array} \tag{3} \label{2}
\]
Using inequalities (\ref{1}) and (\ref{2}), it is enough to prove that$$\depth S/(I,x_1\ldots x_d)\geq \alpha_2(G)-s+1.$$For every integer $k$ with $1\leq k\leq d-1$, let $J_k$ be the ideal generated by all the squarefree monomials of degree $k$ on variables $x_2, \ldots, x_d$. We continue in the following steps.
\vspace{0.3cm}
{\bf Step 1.} Let $1\leq k\leq d-2$ be a fixed integer and assume that $\{u_1, \ldots, u_t\}$ is the set of minimal monomial generators of $x_1J_k$. In particular, every $u_j$ is divisible by $x_1$ and ${\rm deg}(u_j)=k+1$. For every integer $j$ with $1\leq j\leq t$, we prove that
\begin{align*}
& \depth S/(I+x_1J_{k+1}+(u_1, \ldots, u_{j-1}))\\ & \geq\min\big\{\depth S/(I+x_1J_{k+1}+(u_1, \ldots, u_j)), \alpha_2(G)-s+1\big\}.
\end{align*}
(Note that for $j=1$, we have $I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})=I+x_1J_{k+1}$.)
Consider the following short exact sequence.
\begin{align*}
0 & \longrightarrow \frac{S}{(I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j}\longrightarrow \frac{S}{I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})}\\ & \longrightarrow \frac{S}{I+x_1J_{k+1}+(u_1, \ldots, u_j)}\longrightarrow 0
\end{align*}
As a consequence,
\begin{align*}
& \depth S/(I+x_1J_{k+1}+(u_1, \ldots, u_{j-1}))\geq\\& \min\big\{\depth S/((I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j), \depth S/(I+x_1J_{k+1}+(u_1, \ldots, u_j))\big\}.
\end{align*}
Therefore, to complete this step, we need to show that$$\depth S/((I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j)\geq \alpha_2(G)-s+1.$$
Set$$U_j:=\{x_i\mid 1\leq i \leq d \ {\rm and} \ x_i\ {\rm does\ not\ divide\ }u_j\}.$$For any $x_i\in U_j$, the monomial $x_iu_j$ is a squarefree monomial of degree $k+2$. Hence, $x_iu_j$ belongs to $x_1J_{k+1}$. This shows that$$({\rm the\ ideal\ generated\ by}\ U_j)\subseteq \big((x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big).$$We show the reverse inclusion holds too.
Since $x_1J_{k+1}+(u_1, \ldots, u_{j-1})$ is a squarefree monomial ideal, it follows that$$\big((x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big)$$is also a squarefree monomial ideal. On the other hand, $\{x_1, \ldots, x_d\}$ is the set of variables appearing in the set of minimal monomial generators of $x_1J_{k+1}+(u_1, \ldots, u_{j-1})$. This implies that every monomial generator of $\big((x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big)$ is a squarefree monomial over the variables $x_1, \ldots, x_d$. Assume that $v$ is a minimal generator of $\big((x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big)$. If $v$ is not equal to any of the variables belonging to $U_j$, then by definition of $U_j$, every variable dividing $v$ also divides $u_j$. As $v$ is a squarefree monomial, we have $v\mid u_j$. Since$$u_jv\in x_1J_{k+1}+(u_1, \ldots, u_{j-1}),$$we deduce that$$u_j^2\in x_1J_{k+1}+(u_1, \ldots, u_{j-1}),$$ which implies that$$u_j\in x_1J_{k+1}+(u_1, \ldots, u_{j-1}),$$because $x_1J_{k+1}+(u_1, \ldots, u_{j-1})$ is a squarefree monomial ideal. This is a contradiction, as the degree of $u_j$ is strictly less than the degree of any monomial in $x_1J_{k+1}$ and moreover none of the monomials $u_1, \ldots, u_{j-1}$ is equal to $u_j$. Hence,
\[
\begin{array}{rl}
\big((x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big)=({\rm the\ ideal\ generated\ by}\ U_j).
\end{array} \tag{4} \label{3}
\]
Let $W_j$ be the set of variables dividing $u_j$. In other words, $W_j=\{x_1, \ldots, x_d\}\setminus U_j$. Recall that for any pair of integers $1\leq i, j\leq d$, the vertices $x_i$ and $x_j$ are not adjacent in $H'$. Set$$U_j':=\bigcup_{x_i\in W_j}N_{H'}[x_i]$$and$$U_j'':=\bigcup_{x_i\in W_j}N_{H'}(x_i).$$Using equality (\ref{3}), we conclude that
\begin{align*}
& \big((I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big)=\\ & \big((I(H)^{(s)}+I(H')+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big)=\\ & (I(H)^{(s)}:u_j)+I(H'\setminus U_j')+({\rm the\ ideal\ generated\ by}\ U_j\cup U_j'')\\ & =\big(I\big(H\setminus (U_j\cup U_j'')\big)^{(s)}:u_j\big)+I\big(H'\setminus (U_j\cup U_j')\big)\\ & +({\rm the\ ideal\ generated\ by}\ U_j\cup U_j'').
\end{align*}
Set $H_j:=H\setminus (U_j\cup U_j'')$ and $H_j':=H'\setminus (U_j\cup U_j')$. Then $H_j$ is a chordal graph, and $x_1$ is a simplicial vertex of $H_j$. It is also clear that $N_{H_j}[x_1]$ is the set of variables dividing $u_j$. It thus follows from \cite[Lemma 2]{s9} and the above equalities that
\begin{align*}
& \big((I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j\big)=I(H_j)^{(s-k)}+I(H_j')\\ & +({\rm the\ ideal\ generated\ by}\ U_j\cup U_j'').
\end{align*}
This yields that$$\depth S/((I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j)=\depth S_j/(I(H_j)^{(s-k)}+I(H_j')),$$where $S_j=\mathbb{K}[x_i: 1\leq i\leq n, i\notin U_j\cup U_j'']$. Let $G_j$ be the union of $H_j$ and $H_j'$. Then $G_j$ is the induced subgraph of $G$ on $V(G)\setminus (U_j\cup U_j'')$. We conclude from Lemma \ref{packdel} (by considering the clique $W_j$) that$$\alpha_2(G_j)\geq \alpha_2(G)-| W_j| +1=\alpha_2(G)-k,$$where the last equality follows from the fact that ${\rm deg}(u_j)=k+1$. Hence, the induction hypothesis implies that
\begin{align*}
& \depth S/((I+x_1J_{k+1}+(u_1, \ldots, u_{j-1})):u_j)=\depth S_j/(I(H_j)^{(s-k)}+I(H_j'))\\ & \geq \alpha_2(G_j)-(s-k)+1\geq \alpha_2(G)-s+1,
\end{align*}
and this step is complete.
\vspace{0.3cm}
{\bf Step 2.} Let $1\leq k\leq d-2$ be a fixed integer. By a repeated use of Step 1, we have
\begin{align*}
& \depth S/(I+x_1J_{k+1}) \geq\min\big\{\depth S/(I+x_1J_{k+1}+(u_1, \ldots, u_t)), \alpha_2(G)-s+1\big\}\\ & =\min\big\{\depth S/(I+x_1J_k), \alpha_2(G)-s+1\big\}.
\end{align*}
\vspace{0.3cm}
{\bf Step 3.} It follows from Step 2 that
\begin{align*}
& \depth S/(I,x_1\ldots x_d)=\depth S/(I+x_1J_{d-1})\\ & \geq\min\big\{\depth S/(I+x_1J_{d-2}), \alpha_2(G)-s+1\big\}\\ & \geq\min\big\{\depth S/(I+x_1J_{d-3}), \alpha_2(G)-s+1\big\}\\ & \geq \cdots \geq\min\big\{\depth S/(I+x_1J_1), \alpha_2(G)-s+1\big\}.
\end{align*}
In particular,
\[
\begin{array}{rl}
\depth S/(I,x_1\ldots x_d)\geq\min\big\{\depth S/\big(I+(x_1x_2, x_1x_3, \ldots, x_1x_d)\big), \alpha_2(G)-s+1\big\}.
\end{array} \tag{5} \label{4}
\]
\vspace{0.3cm}
{\bf Step 4.} Let $L$ be the graph obtained from $H$, by deleting the edges $x_1x_2, \ldots, x_1x_d$. Then $L$ is the disjoint union of $H\setminus x_1$ and the isolated vertex $x_1$. In particular, $L$ is a chordal graph. Also, let $L'$ be the graph obtained from $H'$, by adding the edges $x_1x_2, \ldots, x_1x_d$. Then$$E(L)\cap E(L')=\emptyset \quad \text{and} \quad E(L)\cup E(L')=E(G).$$It follows from \cite[Lemma 3.2]{s8} and the induction hypothesis that
\begin{align*}
& \depth S/(I+(x_1x_2, x_1x_3, \ldots, x_1x_d))=\\ & \depth S/\big(I(H)^{(s)}+I(H')+(x_1x_2, x_1x_3, \ldots, x_1x_d)\big)=\\ & \depth S/(I(L)^{(s)}+I(L'))\geq \alpha_2(G)-s+1.
\end{align*}
Finally, inequality (\ref{4}) implies that
\[
\begin{array}{rl}
\depth S/(I,x_1\ldots x_d)\geq \alpha_2(G)-s+1.
\end{array} \tag{6} \label{5}
\]
Now, inequalities (\ref{1}), (\ref{2}) and (\ref{5}) complete the proof of the proposition.
\end{proof}
The following theorem is the main result of this section and follows easily from Proposition \ref{sum1}.
\begin{thm} \label{main}
Let $G$ be a chordal graph. Then for every integer $s\geq 1$, we have$$\depth S/(I(G)^{(s)})\geq\alpha_2(G)-s+1.$$
\end{thm}
\begin{proof}
The assertion follows from Proposition \ref{sum1} by substituting $H=G$ and $H'=\emptyset$.
\end{proof}
\section{Second symbolic power of edge ideals} \label{sec4}
The aim of this section is to show that inequality (\ref{ddag}) is true for $s=2$ (Theorem \ref{2power}). To prove this result, we need to bound the depth of ideals of the form $\big(I(G)^{(k)}:xy\big)$, where $xy$ is an edge of $G$. To achieve this goal, we will use the following lemma in the case of $k=2$.
\begin{lem} \label{intsec}
Let $G$ be a graph and $xy$ be an edge of $G$. Then for any integer $k\geq 2$, we have$$\big(I(G)^{(k)}:xy\big)=\big(I(G)^{(k-1)}:x\big)\cap \big(I(G)^{(k-1)}:y\big).$$
\end{lem}
\begin{proof}
Let $u$ be a monomial in $\big(I(G)^{(k)}:xy\big)$. Then $uxy\in I(G)^{(k)}$. Clearly, this implies that $ux\in I(G)^{(k-1)}$. Therefore, $u\in \big(I(G)^{(k-1)}:x\big)$. Similarly, $u$ belongs to $\big(I(G)^{(k-1)}:y\big)$. Hence,$$\big(I(G)^{(k)}:xy\big)\subseteq\big(I(G)^{(k-1)}:x\big)\cap \big(I(G)^{(k-1)}:y\big).$$
To prove the reverse inclusion, let $v$ be a monomial in$$\big(I(G)^{(k-1)}:x\big)\cap \big(I(G)^{(k-1)}:y\big).$$We must show that $vxy\in I(G)^{(k)}$. It is enough to prove that for any minimal vertex cover $C$ of $G$, we have $vxy\in \mathfrak{p}_C^k$. So, let $C$ be a minimal vertex cover of $G$. It follows from $xy\in E(G)$ that $C$ contains at least one of the vertices $x$ and $y$. Without loss of generality, suppose $x\in C$. Since $v\in \big(I(G)^{(k-1)}:y\big)$, we have $vy\in I(G)^{(k-1)}\subseteq \mathfrak{p}_C^{k-1}$. This together with $x\in \mathfrak{p}_C$ implies that $vxy\in \mathfrak{p}_C^k$.
\end{proof}
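Since both sides of the lemma can be tested through the cover description of symbolic powers, the identity is easy to verify by machine in small cases. The following sketch (our own check, not from the paper) takes $G$ to be the triangle with edge $xy$ and confirms the equality for $k=2,3$ on all monomials in a finite exponent box.

```python
from itertools import product

covers = [{0, 1}, {1, 2}, {0, 2}]   # minimal vertex covers of the triangle

def in_symb(expo, s):
    # m in I(G)^(s) iff, for every cover C, the exponents over C sum to >= s
    return all(sum(expo[v] for v in C) >= s for C in covers)

def times(expo, v):
    # multiply the monomial by the variable x_v
    e = list(expo); e[v] += 1; return tuple(e)

x, y = 0, 1                          # the edge xy of the triangle
ok = all(
    in_symb(times(times(m, x), y), k)                    # m in (I^(k) : xy)
    == (in_symb(times(m, x), k - 1)                      # m in (I^(k-1) : x)
        and in_symb(times(m, y), k - 1))                 #   and in (... : y)
    for k in (2, 3)
    for m in product(range(4), repeat=3)
)
```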
The following theorem is the second main result of this paper.
\begin{thm} \label{2power}
For any graph $G$, we have$$\depth S/I(G)^{(2)}\geq \alpha_2(G)-1.$$
\end{thm}
\begin{proof}
Set $I:=I(G)$ and let $G(I)=\{u_1, \ldots, u_m\}$ be the set of minimal monomial generators of $I$. Using \cite[Theorem 4.12]{b}, we may assume that for every pair of integers $1\leq j< i\leq m$, one of the following conditions holds.
\begin{itemize}
\item [(i)] $(u_j:u_i) \subseteq (I^2:u_i)\subseteq (I^{(2)}:u_i)$; or
\item [(ii)] there exists an integer $k\leq i-1$ such that $(u_k:u_i)$ is generated by a subset of variables, and $(u_j:u_i)\subseteq (u_k:u_i)$.
\end{itemize}
For every integer $i$ with $1\leq i\leq m$ consider the short exact sequence
\begin{align*}
0 & \longrightarrow \frac{S}{(I^{(2)}+(u_1, \ldots, u_{i-1})):u_i}\longrightarrow \frac{S}{I^{(2)}+(u_1, \ldots, u_{i-1})}\\ & \longrightarrow \frac{S}{I^{(2)}+(u_1, \ldots, u_i)}\longrightarrow 0.
\end{align*}
It follows from the depth lemma \cite[Proposition 1.2.9]{bh} that
\begin{align*}
& \depth S/(I^{(2)}+(u_1, \ldots, u_{i-1}))\\ & \geq \min\big\{\depth S/((I^{(2)}+(u_1, \ldots, u_{i-1})):u_i), \depth S/(I^{(2)}+(u_1, \ldots, u_i))\big\}.
\end{align*}
Consequently,
\begin{align*}
& \depth S/I^{(2)}\geq\\ & \min\big\{\depth S/(I^{(2)}+I), \depth S/((I^{(2)}+(u_1, \ldots, u_{i-1})):u_i)\mid 1\leq i\leq m\big\}=\\ & \min\big\{\depth S/I, \depth S/((I^{(2)}+(u_1, \ldots, u_{i-1})):u_i)\mid 1\leq i\leq m\big\} \geq\\ & \min\big\{\alpha_2(G), \depth S/((I^{(2)}+(u_1, \ldots, u_{i-1})):u_i)\mid 1\leq i\leq m\big\},
\end{align*}
where the last inequality follows from Corollary \ref{spn}. Hence, it is enough to show that$$\depth S/((I^{(2)}+(u_1, \ldots, u_{i-1})):u_i)\geq \alpha_2(G)-1,$$for every integer $i$ with $1\leq i\leq m$.
Fix an integer $i$ with $1\leq i\leq m$ and assume that $u_i=xy$. We know from (i) and (ii) above that
\[
\begin{array}{rl}
\big((I^{(2)}, u_1, \ldots, u_{i-1}):u_i\big)=(I^{(2)}:u_i)+({\rm an\ ideal\ generated\ by\ variables}).
\end{array} \tag{7} \label{6}
\]
Let $A$ be the set of variables appearing in $\big((I^{(2)}, u_1, \ldots, u_{i-1}):u_i\big)$. Assume that $x\in A$. This means that $x^2y$ belongs to the ideal $(I^{(2)}, u_1, \ldots, u_{i-1})$. Since $u_1, \ldots, u_{i-1}$ do not divide $x^2y$ (the only edge of $G$ dividing $x^2y$ is $u_i=xy$), we deduce that $x^2y\in I(G)^{(2)}$. But this is a contradiction, as $C:=V(G)\setminus \{x\}$ is a vertex cover of $G$ with $x^2y\notin \mathfrak{p}_C^2$. Therefore, $x\notin A$. Similarly, $y\notin A$. It follows from $x,y\notin A$ and equality (\ref{6}) that
\begin{align*}
& \big((I^{(2)}, u_1, \ldots, u_{i-1}):u_i\big)=\big(I^{(2)}:u_i\big)+({\rm the\ ideal\ generated\ by}\ A)\\ & =\big((I^{(2)}+ ({\rm the\ ideal\ generated\ by}\ A)):u_i\big)\\ & =\big((I(G\setminus A)^{(2)}+({\rm the\ ideal\ generated\ by}\ A)):u_i\big)\\ & =\big(I(G\setminus A)^{(2)}:u_i\big)+({\rm the\ ideal\ generated\ by}\ A).
\end{align*}
Therefore,
\[
\begin{array}{rl}
\depth S/((I^{(2)},u_1, \ldots, u_{i-1}):u_i)=\depth S_A/(I(G\setminus A)^{(2)}:u_i),
\end{array} \tag{8} \label{7}
\]
where $S_A=\mathbb{K}[x_i: 1\leq i\leq n, x_i\notin A]$. It follows from Lemma \ref{intsec} that$$(I(G\setminus A)^{(2)}:u_i)=(I(G\setminus A):x)\cap(I(G\setminus A):y).$$Consider the following short exact sequence.
\begin{align*}
0 & \longrightarrow \frac{S_A}{(I(G\setminus A)^{(2)}:u_i)}\longrightarrow \frac{S_A}{(I(G\setminus A):x)}\oplus\frac{S_A}{(I(G\setminus A):y)}\\ & \longrightarrow \frac{S_A}{(I(G\setminus A):x)+(I(G\setminus A):y)}\longrightarrow 0
\end{align*}
Applying the depth lemma \cite[Proposition 1.2.9]{bh} to the above exact sequence, it suffices to prove that
\begin{itemize}
\item [(a)] $\depth S_A/(I(G\setminus A):x) \geq \alpha_2(G)-1$,
\item [(b)] $\depth S_A/(I(G\setminus A):y) \geq \alpha_2(G)-1$, and
\item [(c)] $\depth S_A/((I(G\setminus A):x)+(I(G\setminus A):y))\geq \alpha_2(G)-2$.
\end{itemize}
To prove (a), note that
\begin{align*}
& (I(G\setminus A):x)=I(G\setminus(A\cup N_{G\setminus A}[x]))+({\rm the\ ideal\ generated\ by}\ N_{G\setminus A}(x))\\ & =I(G\setminus(A\cup N_G[x]))+({\rm the\ ideal\ generated\ by}\ N_{G\setminus A}(x)).
\end{align*}
Hence,
\[
\begin{array}{rl}
\depth S_A/(I(G\setminus A):x)=\depth S'/I(G\setminus(A\cup N_G[x])),
\end{array} \tag{9} \label{8}
\]
where $S'=\mathbb{K}[x_i: 1\leq i\leq n, x_i\notin A\cup N_G(x)]$. Obviously, $x$ is a regular element of $S'/I(G\setminus(A\cup N_G[x]))$. Therefore, Corollary \ref{spn} implies that
\[
\begin{array}{rl}
\depth S'/I(G\setminus(A\cup N_G[x]))\geq \alpha_2(G\setminus(A\cup N_G[x]))+1.
\end{array} \tag{10} \label{9}
\]
Assume for the moment that $A\subseteq N_G(x)\cup N_G(y)$. It then follows from Lemma \ref{packdel1} that$$\alpha_2(G\setminus(A\cup N_G[x]))\geq \alpha_2(G)-2.$$ Hence, we conclude from equality (\ref{8}) and inequality (\ref{9}) that$$\depth S_A/(I(G\setminus A):x)\geq \alpha_2(G)-1.$$Thus, to complete the proof of (a), we only need to show that $A\subseteq N_G(x)\cup N_G(y)$.
Let $z$ be an arbitrary variable in $A$ and suppose $z\notin N_G(x)\cup N_G(y)$. Then the only edge dividing $zu_i=zxy$ is $u_i$. In particular,
\[
\begin{array}{rl}
z\notin \big((u_1, \ldots, u_{i-1}):u_i).
\end{array} \tag{11} \label{10}
\]
Moreover, since $\{z,x\}$ is an independent subset of vertices of $G$, we conclude that $C'=V(G)\setminus \{z,x\}$ is a vertex cover of $G$ with $zu_i=zxy\notin \mathfrak{p}_{C'}^2$. Thus, $zu_i\notin I^{(2)}$. This means that $z\notin (I^{(2)}:u_i)$. This, together with (\ref{10}), implies that$$z\notin\big((I^{(2)}, u_1, \ldots, u_{i-1}):u_i\big),$$which is a contradiction. Therefore, $z\in N_G(x)\cup N_G(y)$. Hence, $A\subseteq N_G(x)\cup N_G(y)$ and this completes the proof of (a). The proof of (b) is similar to the proof of (a). We now prove (c).
Note that
\begin{align*}
& (I(G\setminus A):x)+(I(G\setminus A):y)=\\ & I(G\setminus(A\cup N_{G\setminus A}[x]\cup N_{G\setminus A}[y]))+({\rm the\ ideal\ generated\ by}\ N_{G\setminus A}[x]\cup N_{G\setminus A}[y])=\\ & I(G\setminus(A\cup N_G[x]\cup N_G[y]))+({\rm the\ ideal\ generated\ by}\ N_{G\setminus A}[x]\cup N_{G\setminus A}[y])=\\ & I(G\setminus(N_G[x]\cup N_G[y]))+({\rm the\ ideal\ generated\ by}\ N_{G\setminus A}[x]\cup N_{G\setminus A}[y]),
\end{align*}
where the last equality follows from $A\subseteq N_G(x)\cup N_G(y)$. We conclude that
\[
\begin{array}{rl}
\depth S_A/((I(G\setminus A):x)+(I(G\setminus A):y))=\depth S''/I(G\setminus(N_G[x]\cup N_G[y])),
\end{array} \tag{12} \label{11}
\]
where $S''=\mathbb{K}\big[x_i: 1\leq i\leq n, x_i\notin N_G[x]\cup N_G[y]\big]$. Using Corollary \ref{spn} and Lemma \ref{packdel1}, we deduce that$$\depth S''/I(G\setminus(N_G[x]\cup N_G[y]))\geq \alpha_2(G\setminus(N_G[x]\cup N_G[y]))\geq \alpha_2(G)-2.$$Finally, the assertion of (c) follows from equality (\ref{11}) and the above inequality. This completes the proof of the theorem.
\end{proof}
In this work, we study the following class of fractional magnetic Schr\"{o}dinger equations
\begin{equation}\label{problem}
(-\Delta)_{A}^{s}u+V(x)u=g(\vert u\vert^{2})u+\lambda\vert u\vert^{q-2}u, \quad \mbox{in } \mathbb{R}^{N}, \tag{$P_{\lambda,A}$}
\end{equation}
where $\lambda$ is a nonnegative parameter, $s\in (0,1)$, $N>2s$, $A :\mathbb{R}^N \rightarrow \mathbb{R}^N$ is the magnetic potential, $V:\mathbb{R}^N \rightarrow \mathbb{R}$ is a continuous and nonnegative potential, $u:\mathbb{R}^N \rightarrow \C$, $g:\mathbb{R}_{+} \rightarrow \mathbb{R}$ is a continuous function and the exponent satisfies $q\geq 2^*_s:=2N/(N-2s)$. The number $2^*_s$ is known as the fractional critical Sobolev exponent. The \textit{fractional magnetic Laplacian} $(-\Delta)_{A}^{s}$ is defined as follows
\begin{eqnarray}\label{1.1}
(-\Delta)_{A}^{s}u(x):=C_{N,s}\lim_{\varepsilon \rightarrow 0}\int_{B^{c}_{\varepsilon}(x)}\frac{u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}}{|x-y|^{N+2s}}\mathrm{d}y, \quad C_{N,s}=\frac{4^s\Gamma(\frac{N+2s}{2})}{\pi^{N/2}|\Gamma(-s)|}.
\end{eqnarray}
This nonlocal operator was introduced in \cite{AveniaSq,Ichinose2} and it can be seen as a fractional extension of the magnetic pseudorelativistic operator, or Weyl pseudodifferential operator with mid-point prescription. For details on the consistency of definition \eqref{1.1} and a more complete discussion on this subject, we refer the readers to \cite{Ichinose,AveniaSq,Squassina,Ichinose2}. In \cite{Squassina}, the authors have shown that when $A$ is sufficiently smooth, $(-\Delta)_{A}^{s}$ can be viewed as a fractional counterpart of the magnetic Laplacian operator $-\Delta_{A}$, which is defined as follows
\[
-\Delta_{A}u:=\left(\frac{1}{i} \nabla-A\right)^2u=-\Delta u-\frac{2}{i}A(x) \cdot\nabla u + |A(x)|^2u-\frac{1}{i}u\, \textrm{div} A(x),
\]
see \cite{avron,SULEM} for more information on this operator. Magnetic nonlinear Schr\"{o}dinger equations arise in the study of standing wave solutions for the following time-dependent Schr\"{o}dinger equation with magnetic field
\begin{equation}\label{OPSCH}
i \frac{\partial \Psi}{\partial t}= \left(\frac{1}{i} \nabla-A(x)\right)^2\Psi+W(x)\Psi-f(|\Psi|^2)\Psi, \quad (x,t) \in \mathbb{R}^N\times\mathbb{R}_{+},
\end{equation}
where $W(x)$ is an electric potential, $f$ is the nonlinear coupling and $\Psi$ is the wave function representing the state of the particle, see for instance \cite{anto,SULEM,ReedSimon} for a physical background.
A function of the form $\Psi(x,t):=u(x)e^{-i Et}$, with $E \in \mathbb{R}$, is a standing wave solution of \eqref{OPSCH} if and only if $u$ satisfies the following stationary equation
\begin{equation}\label{PA}
-\Delta_{A}u +V(x)u=f(|u|^2)u, \quad\mbox{in \ }\mathbb{R}^N,
\end{equation}
where $V(x)=W(x)-E$.
For problems involving the magnetic Laplacian operator, we refer the readers to \cite{ AlvesFigFurt, ariole,chao,lions} and the references therein.
Regarding the nonlocal magnetic equation \eqref{problem}, if the magnetic field $A \equiv 0$ and $s \in (0,1)$, then $(-\Delta)_{A}^{s}$ reduces to the fractional Laplacian operator $(-\Delta)^{s}$. In particular, Problem \eqref{problem} boils down to the fractional Schrödinger equation
\begin{equation}\label{fract1}
(-\Delta)^{s}u+V(x)u=g(u)+\lambda\vert u\vert^{q-2}u, \quad \mbox{in } \mathbb{R}^{N}.
\end{equation}
The fractional Laplacian operator has been widely studied due to its vast fields of applications, such as, obstacle problems, flame propagation, minimal surfaces, conservation laws, financial
market, optimization, crystal dislocation and phase transition, see \cite{silvetre, guia,BucurVald,bisci} for more details. From the mathematical point of view, there is a huge literature related to fractional Schr\"{o}dinger equations like \eqref{fract1} under various classes of assumptions on the potential and nonlinear terms, see for instance \cite{bisci,UB,Secchi} and references therein.
We are concerned with equations involving potentials that may vanish at infinity. In \cite{alvessouto}, the authors studied the existence of solutions to the local case ($s=1$) of equation \eqref{fract1} when $\lambda=0$, $g$ is subcritical and the potential $V(x)$ satisfies the decay condition
\begin{equation}\label{alves}
\frac{1}{R^{4}}\inf_{|x|\geq R}|x|^{4}V(x) \geq\Lambda>0.
\end{equation}
This work was extended in several directions, for instance, we cite \cite{plaplacian,jm1} for quasilinear problems, \cite{alves} for a Choquard-type equation, \cite{jm2} for a Kirchhoff-type equation, \cite{chao} for the magnetic equation \eqref{PA} involving vanishing potential and \cite{UB} for the fractional equation \eqref{fract1} involving vanishing potential. In these works, the decay condition \eqref{alves} was adapted to the respective class of problems. Inspired by \cite{alvessouto}, the existence of solutions is obtained by applying variational methods jointly with the penalization method in the spirit of \cite{delpinofelmer}. Although there has been relevant progress in the theory of fractional magnetic Schrödinger equations, see for instance \cite{AveniaSq, ambro3, ambro4,QKXw} and the references therein, as far as we know, nothing has been done for fractional magnetic equations involving vanishing potentials.
Motivated by the above discussion and inspired by \cite{alvessouto,UB,chao}, we study the existence of solutions for Problem \eqref{problem}.
The presence of the fractional magnetic Laplacian operator brings additional difficulties. Firstly, we must consider this problem for complex valued functions and we need more delicate estimates, which can be obtained with the aid of diamagnetic inequalities, see \cite{AveniaSq,Lieb}. Secondly, in the local magnetic case, one can apply arguments involving the following Kato’s inequality (see \cite{kato})
\begin{equation*}
-\Delta|u|\leq {\bf \Re}\Big(sign(u)\left(-\Delta_{A} u \right)\Big),
\end{equation*}
where $\Re(z)$ denotes the real part of $z \in \C$. However, in the nonlocal magnetic framework, it is believed that a Kato’s inequality holds for $(-\Delta)^{s}_{A}$, but we are only able to prove it for rough functions which are bounded from below and above, see \cite{ambro3}. For this reason, several arguments used in \cite{chao} are not adaptable to our case. Finally, it is worth mentioning that we are also dealing with a supercritical perturbation, which provokes a further lack of compactness. In order to overcome such difficulties, we introduce two auxiliary problems to recover some compactness and we control the parameters $\lambda$ and $\Lambda$ to relate the solution of the auxiliary problem with the original problem \eqref{problem}. Our approach is based on an adapted version of the penalization method jointly with $L^{\infty}$--estimates.
\vspace{0,5cm}
Throughout this work we assume that $A\in C(\mathbb{R}^{N},\mathbb{R}^{N})$. The potential $V\in C(\mathbb{R}^{N},\mathbb{R})$ satisfies the following assumptions:
\begin{itemize}
\item [$(V_1)$] $ V(x) \geq 0$, $ \forall x \in \mathbb{R}^N$;
\item [$(V_2)$] $V(x)\leq V_{\infty}$, $\forall x \in B_1(0)$ for some constant $V_{\infty}>0$;
\item [$(V_3)$] There exist $\Lambda>0$ and $R_0 > 1$ such that
\begin{equation*}
\frac{1}{R_0^{4s}}\inf_{|x|\geq R_0}|x|^{4s}V(x) \geq\Lambda.
\end{equation*}
\end{itemize}
The nonlinearity $g\in C(\mathbb{R_{+}},\mathbb{R})$ satisfies the following hypotheses:
\begin{itemize}
\item [$(g_1)$] $\displaystyle\limsup_{t \rightarrow 0^+}\frac{g(t)}{t^{\frac{2^*_{s}-2}{2}}}< +\infty$;
\item [$(g_2)$] There exists $p \in (2, 2^*_{s})$ such that
\begin{equation*}
\limsup_{t \rightarrow +\infty}\frac{g(t)}{t^{\frac{p-2}{2}}}< +\infty;
\end{equation*}
\item [$(g_3)$] There exists $\theta \in (2, p]$ such that
\begin{equation*}
0<\frac{\theta}{2} G(t)\leq tg(t), \quad \forall t>0,
\end{equation*}
where $G(t):=\int_{0}^{t}g(\tau)d\tau$.
\end{itemize}
\vspace{0,2cm}
Let
\begin{equation*}
E:=\Bigg\{ u \in \mathcal{D}_{A}^{s,2}(\mathbb{R}^N,\C): \int_{\mathbb{R}^N}V(x)|u|^2\,\mathrm{d}x <\infty\Bigg\}.
\end{equation*}
We say that a function $u\in E$ is a weak solution of Problem \eqref{problem}, if there holds
\begin{eqnarray*}\label{S_0}
\Re\left(\int \! \!\! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(\phi(x)-\phi(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right) \nonumber\\
+\,\,\Re\left( \int_{\mathbb{R}^{N}}V(x)u\overline{\phi} \,\mathrm{d}x-
\int_{\mathbb{R}^{N}}g(|u|^2)u\overline{\phi}\,\mathrm{d}x - \lambda\int_{\mathbb{R}^{N}}|u|^{q-2}u \overline{\phi}\,\mathrm{d}x \right)=0, \quad \forall\, \phi \in E,
\end{eqnarray*}
see Section \ref{q} for more details.
\vspace{0,2cm}
The main result of this paper can be stated as follows:
\begin{thm}\label{theorem}
Assume that $(V_1 )$-$(V_3)$ and $(g_1)$-$(g_3)$ are satisfied. Then, there are $\lambda_0,\Lambda_0 > 0$ such that,
for each $\lambda \in [0, \lambda_0)$ and $\Lambda \geq \Lambda_0$, the problem \eqref{problem} has a nontrivial weak solution.
\end{thm}
\begin{remark}
We emphasize that our main result extends and complements \cite{alvessouto,chao,UB}. Precisely, if $\lambda=0$, $A\equiv0$ and $s=1$, then \eqref{problem} boils down to the problem studied in \cite{alvessouto}. If $\lambda \neq0$ and $A \not\equiv 0$, then our main result extends \cite{chao} for the fractional magnetic setting with supercritical perturbation. Furthermore, for the case $A\not\equiv0$ it extends \cite{UB} for the fractional magnetic setting.
\end{remark}
\begin{remark}
An example of potential that satisfies $(V_{1})$--$(V_{3})$ is given by
\begin{equation*}
V(x) = \left\{
\begin{array}{ccl}
\varrho_{1},& \mbox{if} & |x|<R_0-\varrho_{2},\\
|x| -R_0 + \varrho_{1} +\varrho_{2}, & \mbox{if} & R_{0}-\varrho_{2} \leq|x|< R_0,\\
\dsp\frac{R_{0}^{4s}}{|x|^{4s}}(\varrho_{1}+\varrho_{2}), & \mbox{if} & |x|\geq R_0,
\end{array}
\right.
\end{equation*}
where $\varrho_{1}\geq 0$ and $0< \varrho_{2}<R_0$. A function $g$ that satisfies $(g_{1})$--$(g_{3})$ is given by
\begin{equation*}
g(t) = \left\{
\begin{array}{ccl}
0,& \mbox{if} & t=0,\\
\sigma t^{\frac{2^*_{s}-2}{2}}, & \mbox{if} & 0< t< 1,\\
\sigma t^{\frac{p-2}{2}}, & \mbox{if} & t\geq 1,
\end{array}
\right.
\end{equation*}
where $\sigma >0$.
\end{remark}
\vspace{0,3cm}
\noindent {\bf Notations.} In what follows $C$, $C_i$ denote positive constants, $B_R$ denote the open ball centered at the origin with radius $R>0$, $o_{n}(1)$ denotes a sequence which converges to $0$ as $n\rightarrow\infty$, $\Re(z)$ denotes the real part of $z \in \C$ and $\overline{z}$ denotes its complex conjugate.
\vspace{0,3cm}
\noindent {\bf Outline.} The remainder of this paper is organized as follows: in the forthcoming section we introduce some preliminary results which will be useful in the remainder of the work. In Sections \ref{3} and \ref{4}, we show the existence of a nontrivial solution for an auxiliary problem associated with \eqref{problem}. Section \ref{5} is devoted to obtaining a suitable $L^{\infty}$--estimate of the solution of the auxiliary problem. In the final Section \ref{6}, we prove that the solution of the auxiliary problem is in fact a solution of problem \eqref{problem}.
\section{Preliminary results}\label{q}
In this section we collect some preliminary concepts and definitions which will be used throughout the work. Initially, we collect some facts about the fractional magnetic Sobolev space. For $s\in (0, 1)$ and a magnetic field $A \in C(\mathbb{R}^N,\mathbb{R}^N)$, we define $\mathcal{D}_{A}^{s,2}(\mathbb{R}^N,\C)$ as the completion of $C_0^{\infty}(\mathbb{R}^N,\C)$ with respect to the so-called magnetic Gagliardo semi-norm
\begin{equation*}
[u]_{s,A}:=\left(\frac{C_{N,s}}{2}\int\!\!\!\int_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)e^{i A\big(\frac{x+y}{2} \big).(x-y)}|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{2}}.
\end{equation*}
The space $\mathcal{D}_{A}^{s,2}(\mathbb{R}^N,\C)$ can be characterized as
\begin{equation*}
\mathcal{D}_{A}^{s,2}(\mathbb{R}^N,\C):= \Bigg\{ u \in L^{2^*_{s}}(\mathbb{R}^N,\C): [u]_{s,A} <\infty \Bigg \},
\end{equation*}
and it is a Hilbert space with respect to the inner product
\begin{eqnarray*}
\langle u,v\rangle_{s,A}:= \frac{C_{N,s}}{2}\Re\left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(v(x)-v(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \right).
\end{eqnarray*}
Note that for magnetic field $A \equiv 0$, we recover the classical definition of
$$
\mathcal{D}^{s,2}(\mathbb{R}^N, \C):= \Bigg\{ u \in L^{2^*_{s}}(\mathbb{R}^N,\C): [u]_{s} <\infty \Bigg\},
$$
where
\begin{equation*}
[u]_{s}=\Bigg(\frac{C_{N,s}}{2}\int\!\!\!\int_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\Bigg)^{\frac{1}{2}}
\end{equation*}
denotes the Gagliardo semi-norm of a function $u$. For a more complete discussion on fractional Sobolev spaces, we refer the readers to \cite{guia}. Henceforth, we omit the normalization constant $\frac{C_{N,s}}{2}$.
Arguing as in \cite[Lemma 3.1]{AveniaSq}, we see that the following result holds:
\begin{lem}(Diamagnetic inequality)
If $u \in\mathcal{D}_{A}^{s,2}(\mathbb{R}^N,\C)$, then $|u| \in \mathcal{D}^{s,2}(\mathbb{R}^N,\mathbb{R})$ and we have
\begin{equation} \label{ime_1}
[|u|]_s \leq [u]_{s,A}.
\end{equation}
We also have the following pointwise diamagnetic inequality
\begin{equation*}
\big||u(x)|-|u(y)|\big|\leq \Big|u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big|.
\end{equation*}
\end{lem}
Due to the presence of $V(x)$, we introduce the subspace of $\mathcal{D}_{A}^{s,2}(\mathbb{R}^N,\C)$
\begin{equation*}
E:=\Bigg\{ u \in \mathcal{D}_{A}^{s,2}(\mathbb{R}^N,\C): \int_{\mathbb{R}^N}V(x)|u|^2\,\mathrm{d}x <\infty\Bigg\}
\end{equation*}
which is a Hilbert space when endowed with the inner product,
\begin{eqnarray*}
\langle u,v\rangle&:=&
\Re \left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(v(x)-v(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \right) \\
&& +\,\, \Re \left( \int_{\mathbb{R}^N}V(x)u\overline{v}\,\mathrm{d}x \right)
\end{eqnarray*}
and its correspondent norm, $\|u\|:=\sqrt{\langle u,u \rangle}$.
We recall the following embeddings of the fractional Sobolev spaces into Lebesgue spaces, see \cite[Theorem 6.5]{guia}.
\begin{lem}\label{im2.1} Let $s \in (0, 1)$ and suppose that $N > 2s$. Then, there exists a positive constant $S = S(N, s)$ such that, for any $w \in \mathcal{D}^{s,2}(\mathbb{R}^N,\mathbb{R})$, we have
\begin{equation}\label{ime_2}
\|w\|_{2^*_{s}}^2\leq S^{-1}[w]^2_{s}.
\end{equation}
\end{lem}
By combining \eqref{ime_1} with \eqref{ime_2}, we can deduce
\begin{equation}\label{im_1.0}
\||u|\|_{2^*_{s}}^2\leq S^{-1}[|u|]^2_s \leq S^{-1}[u]_{s,A}^2.
\end{equation}
Consequently, the following result holds:
\begin{lem}\label{lem2.2} The space $E$ is continuously embedded into $L^{2^*_{s}}(\mathbb{R}^N,\C)$ and compactly embedded into $ L_{loc}^{\alpha}(\mathbb{R}^N,\C)$ for any $\alpha \in (2,2^*_{s})$.
\end{lem}
The energy functional $\mathcal{F}_{\lambda}: E \rightarrow \mathbb{R}$ associated with Problem \eqref{problem} is given by
\begin{equation}
\mathcal{F}_{\lambda}(u)=\frac{1}{2}\|u\|^2-\frac{1}{2}\int_{\mathbb{R}^{N}}G(|u|^2)\,\mathrm{d}x-\frac{\lambda}{q}\int_{\mathbb{R}^{N}}|u|^q\,\mathrm{d}x.
\end{equation}
In view of $(g_1)$ and $(g_2)$, there exists $C_0 > 0$ such that
\begin{equation}\label{eq1.1}
|t^2g(t^2)|\leq C_0|t|^{2^*_{s}} \quad \mbox{ and } \quad |t^2g(t^2)|\leq C_0|t|^{p}, \quad \forall\, t \geq 0,
\end{equation}
and, by $(g_3)$, there exist constants $C_1, C_2>0$ such that
\begin{eqnarray}\label{eq1.2}
G(t^2)\geq C_1t^{\theta}-C_2, \quad \forall\, t \geq 0.
\end{eqnarray}
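For the reader's convenience, we sketch the proofs of these elementary estimates. Concerning \eqref{eq1.1}, since $2<p<2^*_{s}$, we have $t^{\frac{p-2}{2}}\leq t^{\frac{2^*_{s}-2}{2}}$ for $t\geq 1$ and the reverse inequality for $0<t\leq 1$; hence, the bound near the origin provided by $(g_1)$, the bound at infinity provided by $(g_2)$ and the continuity of $g$ yield, after enlarging the constant if necessary,
\begin{equation*}
|g(t)|\leq C_0\min\Big\{t^{\frac{2^*_{s}-2}{2}},\, t^{\frac{p-2}{2}}\Big\}, \quad \forall\, t> 0,
\end{equation*}
which gives \eqref{eq1.1} after substituting $t\mapsto t^2$ and multiplying by $t^2$. As for \eqref{eq1.2}, condition $(g_3)$ ensures that $G(t)>0$ for $t>0$ and that
\begin{equation*}
\frac{g(t)}{G(t)}\geq \frac{\theta}{2t}, \quad \forall\, t>0;
\end{equation*}
integrating over $[1,t]$ we obtain $G(t)\geq G(1)t^{\frac{\theta}{2}}$ for all $t\geq 1$. Thus, \eqref{eq1.2} holds with $C_1=C_2=G(1)$, since $C_1t^{\theta}-C_2\leq 0\leq G(t^2)$ whenever $0\leq t\leq 1$.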
Hence, \eqref{eq1.1} and Lemma \ref{lem2.2} imply that $\mathcal{F}_{\lambda}$ is well defined in $E$ if and only if $q = 2^*_{s}$. Thus, when $q>2^*_{s}$, we are not able to apply variational methods directly. To overcome this difficulty, we introduce an adequate modification of the nonlinearity $g(|u|^2)u + \lambda |u|^{q-2}u$, which will be presented in the next section.
\section{Auxiliary problems}\label{section_3}\label{3}
In this section, in order to apply minimax methods to obtain a solution for \eqref{problem}, we consider two auxiliary problems. We start by introducing a new nonlinearity. For a given $k\in \mathbb{N}$, we define the function $f_{\lambda,k}:\mathbb{R}_{+}\rightarrow \mathbb{R}$ by
\begin{equation}\label{flambdak}
f_{\lambda, k}(t) = \left\{
\begin{array}{ccl}
g(t)+\lambda t^{\frac{q-2}{2}}, & \mbox{if} & t\leq k,\\
g(t) +\lambda k^{\frac{q-p}{2}}t^{\frac{p-2}{2}}, & \mbox{if} & t\geq k.
\end{array}
\right.
\end{equation}
By using $(g_1)$ and $(g_2)$, it is not hard to check that $f_{\lambda,k}$ admits the following properties:
\begin{itemize}
\item [$(f_1)$] $|f_{\lambda,k}(t)|\leq C_0(1+\lambda k^{\frac{q-p}{2}})t^\frac{p-2}{2}$, \, $\forall \, t \geq 0$;
\item [$(f_2)$] $|f_{\lambda,k}(t)|\leq C_0(1+\lambda k^{\frac{q-p}{2}})t^\frac{2^*_{s}-2}{2}$, \, $\forall \, t \geq 0$.
\end{itemize}
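Let us briefly verify $(f_1)$; the verification of $(f_2)$ is analogous. We may assume that the constant $C_0$ in \eqref{eq1.1} satisfies $C_0\geq 1$. If $0\leq t\leq k$, then, since $q\geq 2^*_{s}>p$,
\begin{equation*}
\lambda t^{\frac{q-2}{2}}=\lambda t^{\frac{q-p}{2}}\,t^{\frac{p-2}{2}}\leq \lambda k^{\frac{q-p}{2}}t^{\frac{p-2}{2}},
\end{equation*}
while for $t\geq k$ the perturbation term in \eqref{flambdak} is already $\lambda k^{\frac{q-p}{2}}t^{\frac{p-2}{2}}$. In both cases, adding the estimate $|g(t)|\leq C_0t^{\frac{p-2}{2}}$, which follows from \eqref{eq1.1}, we arrive at $(f_1)$.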
Moreover, denoting $F_{\lambda,k}(t)=\int_{0}^{t}f_{\lambda,k}(\tau)\,\mathrm{d}\tau$, there holds
\begin{equation}\label{Flambdak}
F_{\lambda, k}(t) = \left\{
\begin{array}{ccl}
G(t)+\frac{2\lambda}{q} t^{\frac{q}{2}}, & \mbox{if} & t\leq k,\\
G(t) +\frac{2\lambda}{p} k^{\frac{q-p}{2}}t^{\frac{p}{2}} +2\lambda(\frac{1}{p}-\frac{1}{q})k^{\frac{q}{2}}, & \mbox{if} & t\geq k.
\end{array}
\right.
\end{equation}
Now, by condition $(g_3)$ and combining \eqref{flambdak} with \eqref{Flambdak}, a direct computation shows that
\begin{equation}\label{Fflambdak}
0 \leq f_{\lambda,k}(t^2)t^2-\frac{\theta}{2}F_{\lambda, k}(t^2) = \left\{
\begin{array}{ccl}
g(t^2)t^2-\frac{\theta}{2}G(t^2)+\Big(\frac{q-\theta}{q}\Big) \lambda t^q, & \mbox{if} & t\leq k,\\
g(t^2)t^2-\frac{\theta}{2}G(t^2) +\lambda k^{\frac{q-p}{2}}t^p\Big(\frac{p-\theta}{p}\Big) +\theta \lambda k^{\frac{q}{2}}\Big( \frac{q-p}{qp}\Big) , & \mbox{if} & t\geq k.
\end{array}
\right.
\end{equation}
Using \eqref{eq1.2} and \eqref{Flambdak}, we deduce
\begin{equation}\label{Fflambdak_1}
F_{\lambda,k}(t^2)\geq C_1|t|^\theta-C_2, \quad t \geq 0.
\end{equation}
Now, related with $f_{\lambda,k}$, we shall consider the auxiliary problem
\begin{equation}\label{aux1}
(-\Delta)_{A}^{s}u+V(x)u=f_{\lambda,k}(\vert u\vert^{2})u, \quad \mbox{in } \mathbb{R}^{N}. \tag{$A_{\lambda,k}$}
\end{equation}
We say that a function $u \in E$ is a weak solution of the problem \eqref{aux1}, if
\begin{eqnarray*}\label{S_00}
&&\Re\left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(\phi(x)-\phi(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)\\
&& +\, \Re\left(\int_{\mathbb{R}^{N}}V(x)u\overline{\phi} \,\mathrm{d}x
- \int_{\mathbb{R}^{N}}f_{\lambda,k}(|u|^2)u\overline{\phi}\,\mathrm{d}x \right)=0,\quad \forall\, \phi \in E.
\end{eqnarray*}
Note that if $u$ is a weak solution of the problem \eqref{aux1} and satisfies $|u(x)|^2 \leq k$ for all $x \in \mathbb{R}^N$, then $u$ is a weak solution of the problem \eqref{problem}. The energy functional $I_{\lambda,k}:E\rightarrow\mathbb{R}$ associated with Problem \eqref{aux1} is given by
\begin{equation*}
I_{\lambda,k}(u)=\frac{1}{2}\|u\|^2-\frac{1}{2}\int_{\mathbb{R}^{N}}F_{\lambda,k}(|u|^2)\,\mathrm{d}x.
\end{equation*}
In view of $(f_1)$, $(f_2)$ and Lemma \ref{lem2.2}, the functional $I_{\lambda,k}$ is well defined and the weak solutions of \eqref{aux1} correspond to critical points of $I_{\lambda,k}$. In order to apply Critical Point Theory we need to recover some compactness; however, we are not able to prove the Palais-Smale condition. To overcome this problem we use the penalization method introduced in \cite{delpinofelmer} and adapted in \cite{alvessouto,UB}. For this purpose, we introduce some definitions.
We fix $\nu=2\theta/(\theta-2)$, where $\theta$ is given in $(g_3)$, and we define the function $\hat{f}_{\lambda,k}: \mathbb{R}^N\times\mathbb{R}_{+}\rightarrow \mathbb{R}$ by
\begin{equation}
\hat{f}_{\lambda, k}(x,t) = \left\{
\begin{array}{rcl}
f_{\lambda,k}(t),& \mbox{if} & \nu f_{\lambda,k}(t)\leq V(x),\\
\dfrac{V(x)}{\nu}, & \mbox{if} & \nu f_{\lambda,k}(t) > V(x).\\
\end{array}
\right.
\end{equation}
Furthermore, considering $R_0 > 1$ given in condition $(V_3 )$, we define
\begin{equation}\label{hlambdak}
h_{\lambda, k}(x,t) = \left\{
\begin{array}{rcl}
\!f_{\lambda,k}(t),& \mbox{if}& |x|\leq R_0,\\
\hat{f}_{\lambda,k}(x,t),& \mbox{if}& |x| > R_0.\\
\end{array}
\right.
\end{equation}
We introduce the second auxiliary problem
\begin{equation}\label{aux2}
(-\Delta)_{A}^{s}u+V(x)u=h_{\lambda,k}(x,\vert u\vert^{2})u, \quad \mbox{in }\mathbb{R}^{N}, \tag{$B_{\lambda,k}$}
\end{equation}
We set $H_{\lambda,k}(x,t)=\int_{0}^{t}h_{\lambda,k}(x,\tau)\,\mathrm{d}\tau$. A direct computation shows that, for all $t \geq 0$, the following inequalities hold:
\begin{eqnarray}
&h_{\lambda,k}(x,t)\leq f_{\lambda,k}(t),&\forall\, x \in \mathbb{R}^N, \label{del_1}\\
&H_{\lambda,k}(x,t)\leq F_{\lambda,k}(t),&\forall\, x \in \mathbb{R}^N,\label{del_2}\\
&\!\!\!\!h_{\lambda,k}(x,t)\leq \dsp\frac{V(x)}{\nu},& \mbox{if}\, |x|>R_0,\label{del_3}\\
&\!H_{\lambda,k}(x,t)\leq \dsp\frac{V(x)}{\nu}t,& \mbox{if}\,|x|>R_0,\label{del_4}\\
&\!\!h_{\lambda,k}(x,t)=f_{\lambda,k}(t),&\mbox{if}\, |x|\leq R_0.\label{del_4.0}
\end{eqnarray}
Moreover, combining \eqref{del_4.0} with \eqref{Fflambdak} we obtain
\begin{equation}\label{Hhlambdak}
h_{\lambda,k}(x,t^2)t^2-\frac{\theta}{2}H_{\lambda,k}(x,t^2)\geq 0, \quad\forall\, |x|\leq R_0 \mbox{ \ and \ } \forall\, t\geq 0 .
\end{equation}
We say that a function $u \in E$ is a weak solution of the problem \eqref{aux2}, if satisfies
\begin{eqnarray}\label{S_1}
&&\Re\left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(\phi(x)-\phi(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)\nonumber\\ &&+\, \Re\left(\int_{\mathbb{R}^{N}}V(x)u\overline{\phi} \,\mathrm{d}x
- \int_{\mathbb{R}^{N}}h_{\lambda,k}(x,|u|^2)u\overline{\phi}\,\mathrm{d}x \right)=0, \quad \forall\, \phi \in E.
\end{eqnarray}
Note that if $u$ is a weak solution of the problem \eqref{aux2} and satisfies the estimate
\begin{equation*}
\nu f_{\lambda,k}(|u(x)|^2) \leq V(x),\quad \forall\,|x|>R_0,
\end{equation*}
then $h_{\lambda,k}(x,|u|^2)u=f_{\lambda,k}(|u|^2)u$ and $u$ is indeed a solution of the problem \eqref{aux1}. The Euler–Lagrange functional associated with Problem \eqref{aux2} is given by
\begin{equation*}
J_{\lambda,k}(u)=\frac{1}{2}\|u\|^2-\frac{1}{2}\int_{\mathbb{R}^{N}}H_{\lambda,k}(x,|u|^2)\,\mathrm{d}x.
\end{equation*}
A direct computation shows that $J_{\lambda,k}\in C^1(E,\mathbb{R})$ with
\begin{eqnarray*}\label{J}
J'_{\lambda,k}(u)\phi &=&\Re\left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(\phi(x)-\phi(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \right)\\
&& +\,\Re\left(\int_{\mathbb{R}^{N}}V(x)u\overline{\phi} \,\mathrm{d}x
- \int_{\mathbb{R}^{N}}h_{\lambda,k}(x,|u|^2)u\overline{\phi}\,\mathrm{d}x \right), \quad \forall \,u, \phi \in E.
\end{eqnarray*}
Thus the weak solutions of \eqref{aux2} are precisely the critical points of $J_{\lambda,k}$.
\section{Existence of solutions for Problem \eqref{aux2}}\label{4}
In this section, we verify the mountain pass geometry for the energy functional associated with problem \eqref{aux2}. We recall that a functional $I \in C^{1}(X,\mathbb{R})$, where $X$ is a Banach space, satisfies the Palais–Smale condition at level $c \in \mathbb{R}$ ($(PS)_c$-condition for short) if any sequence $(u_n) \subset X$ such that $I(u_n) \rightarrow c \mbox{ \ and \ } I'(u_n) \rightarrow 0$ as $n \rightarrow \infty $ has a convergent subsequence in $X$. A sequence $(u_n) \subset X$ satisfying the previous convergences is called a Palais–Smale sequence for $I$ at level $c \in \mathbb{R}$ ($(PS)_c$-sequence for short). First, we show that $J_{\lambda,k}$ has the mountain pass geometry.
\begin{lem}\label{lem4.1}
The functional $J_{\lambda,k}$ satisfies the following conditions:
\begin{itemize}
\item [$(i)$] $J_{\lambda,k}(0)=0$;
\item [$(ii)$] there exist $\delta,\rho>0$ such that $J_{\lambda,k}(v)\geq \delta$ if $\|v\|=\rho$;
\item [$(iii)$] there exists $e \in E$ such that $\|e\|>\rho$ and $J_{\lambda,k}(e)<0$.
\end{itemize}
\end{lem}
\begin{proof}
It follows directly from the definition of $J_{\lambda,k}$ that $(i)$ holds. In order to prove $(ii)$, note that, in view of $(f_2)$, \eqref{del_2} and the continuous Sobolev embedding of $E$ into $L^{2^*_{s}}(\mathbb{R}^N,\mathbb{C})$, we have
\begin{eqnarray*}
J_{\lambda,k}(u) &\geq& \frac{1}{2}\|u\|^2-\frac{2C_0}{2^*_{s}}\big(1+\lambda k^{\frac{q-p}{2}}\big)\frac{1}{2}\int_{\mathbb{R}^N}|u|^{2^*_{s}}\,\mathrm{d}x\nonumber\\
&\geq& \frac{1}{2}\|u\|^2-C\|u\|^{2^*_{s}},
\end{eqnarray*}
where $C:=C(\lambda,k)>0$. Therefore, since $2^*_{s}>2$, we may choose $\rho$ small enough such that $J_{\lambda,k}(u)\geq \delta >0$ for all $u$ in $E$ with $\|u\|=\rho$, that is, $(ii)$ holds.
In order to prove $(iii)$, fix $\varphi \in C^{\infty}_0(\mathbb{R}^N)\backslash\{0 \}$ with $supp(\varphi) \subset B_1(0)$ and note that \eqref{del_4.0} implies
\begin{equation*}
H_{\lambda,k}(x,|\varphi|^2)=F_{\lambda,k}(|\varphi|^2), \quad \mbox{if \ } x \in supp(\varphi).
\end{equation*}
Thus, using \eqref{Fflambdak_1} we obtain
\begin{eqnarray*}
J_{\lambda,k}(t\varphi)&=&\frac{t^2}{2}\|\varphi\|^2-\frac{1}{2}\int_{supp(\varphi)}F_{\lambda,k}(|t\varphi|^2)\,\mathrm{d}x\nonumber\\
&\leq& \frac{t^2}{2}\|\varphi\|^2 - \frac{C_1}{2}t^\theta\int_{supp(\varphi)}|\varphi|^\theta \,\mathrm{d}x+\frac{C_2}{2}|supp(\varphi)|,
\end{eqnarray*}
which implies that $J_{\lambda,k}(t\varphi) \rightarrow -\infty$ as $t \rightarrow \infty$, since $\theta >2$. Finally, assertion $(iii)$ follows for $e= t\varphi$ with $t$ large enough.
\end{proof}
Applying a version of the Mountain Pass Theorem without the $(PS)$ condition (see \cite{w}), we obtain a Palais–Smale sequence $(u_n) \subset E$ such that
\begin{equation}\label{PSc}
J_{\lambda,k}(u_n) \rightarrow c_{\lambda,k} \mbox{ \ and \ } J'_{\lambda,k}(u_n) \rightarrow 0,
\end{equation}
where $c_{\lambda,k}$ is the mountain pass level characterized by
\begin{equation*}
0<c_{\lambda,k}:=\inf_{\gamma \in \Gamma_{\lambda,k}}\max_{t \in [0,1]}J_{\lambda,k}(\gamma(t))
\end{equation*}
where
\begin{equation*}
\Gamma_{\lambda,k}:=\Big\{ \gamma \in C([0, 1], E) : \gamma(0) = 0 \mbox{ \ and \ } J_{\lambda,k}(\gamma(1)) < 0 \Big\}.
\end{equation*}
Now, we introduce the functional $I_0: E\rightarrow \mathbb{R}$ given by
\begin{equation*}
I_0(u)=\frac{1}{2}\int \! \! \! \int_{\mathbb{R}^{2N}}\frac{|(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)})|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y +\frac{1}{2}\int_{\mathbb{R}^N}V(x)|u|^2\,\mathrm{d}x-\frac{1}{2}\int_{\mathbb{R}^N}G(|u|^2)\,\mathrm{d}x.
\end{equation*}
This functional satisfies conditions $(i)$, $(ii)$ and $(iii)$ of Lemma \ref{lem4.1}. Hence, we may define the minimax level
\begin{equation}\label{minimax_1}
c_0:=\inf_{\gamma\in \Gamma_0}\max_{t \in [0,1]} I_0(\gamma(t))
\end{equation}
where
\begin{equation*}
\Gamma_0=\{\gamma \in C([0,1], E): \gamma(0)=0 \mbox{ \ and \ } I_0(\gamma(1)) < 0\}.
\end{equation*}
In view of the definition of $f_{\lambda,k}$ in \eqref{flambdak} and of $h_{\lambda,k}$ in \eqref{hlambdak}, we have $J_{\lambda,k}(u) \leq I_0(u)$ for all $u \in E$. Hence, from the definition of the levels $c_{\lambda,k}$ and $c_0$, we obtain $c_{\lambda,k}\leq c_0$. It is important to point out that the level $c_0$ does not depend on $\lambda$, $k$ or $R_0$.
\begin{lem}\label{lemma4.2}
If $(u_n)$ is a $(PS)_{c_{\lambda,k}}-$sequence for $J_{\lambda,k}$, then it is bounded in $E$.
\end{lem}
\begin{proof}
Let $(u_n) \subset E$ be a $(PS)_{c_{\lambda,k}}-$sequence. Then, using \eqref{Hhlambdak}, the nonnegativity of $h_{\lambda,k}$ and \eqref{del_4}, respectively, we obtain
\begin{eqnarray}\label{stimative_1}
J_{\lambda,k}(u_n)-\frac{1}{\theta}J'_{\lambda,k}(u_n)u_n&=&\Big(\frac{1}{2}-\frac{1}{\theta}\Big)\|u_n\|^2+\frac{1}{\theta}\int_{\mathbb{R}^{N}}\Big[h_{\lambda,k}(x,|u_n|^2)|u_n|^2-\frac{\theta}{2} H_{\lambda,k}(x,|u_n|^2)\Big]\,\mathrm{d}x \nonumber\\
&\geq & \Big(\frac{1}{2}-\frac{1}{\theta}\Big)\|u_n\|^2+\frac{1}{\theta}\int_{ B_{R_0}^{c}(0)}\Big[h_{\lambda,k}(x,|u_n|^2)|u_n|^2-\frac{\theta}{2} H_{\lambda,k}(x,|u_n|^2)\Big]\,\mathrm{d}x\nonumber\\
&\geq & \Big(\frac{1}{2}-\frac{1}{\theta}\Big)\|u_n\|^2-\frac{1}{2} \int_{ B_{R_0}^{c}(0)}H_{\lambda,k}(x,|u_n|^2)\,\mathrm{d}x\nonumber\\
&\geq& \Big(\frac{1}{2}-\frac{1}{\theta}\Big)\|u_n\|^2-\frac{1}{2\nu} \int_{ B^c_{R_0}(0)}V(x)|u_n|^2\,\mathrm{d}x\nonumber\\
&\geq& \frac{\theta -2}{4\theta}\|u_n\|^2.
\end{eqnarray}
From this, since $J_{\lambda,k}(u_n)\rightarrow c_{\lambda,k}$ and $J'_{\lambda,k}(u_n)\rightarrow 0$, there exist $C_1$, $C_2 > 0$ such that
\begin{equation*}
C_1+C_2 \|u_n\| \geq \frac{\theta-2}{4\theta}\|u_n\|^2, \quad \forall n \in \mathbb{N},
\end{equation*}
which finishes the proof.
\end{proof}
\begin{lem}\label{lemmaPS}
Any $(PS)_{c_{\lambda,k}}-$sequence $(u_n)$ for $J_{\lambda,k}$ satisfies the following property: for each $\varepsilon >0$ there exists $r=r(\varepsilon)>R_0$ verifying
\begin{equation}\label{PS}
\limsup_{n \rightarrow +\infty} \int_{B^c_{r}(0)} \! \! \! \! \ \int_{\mathbb{R}^{N} }\frac{\big|u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y +\int_{B^c_{r}(0)}V(x)|u_n|^2\,\mathrm{d}x < \varepsilon.
\end{equation}
\end{lem}
\begin{proof}
Fix $r > 2R_0$ and set $\eta_{r}\in C^{\infty}(\mathbb{R}^N,\mathbb{R})$ such that $0\leq \eta_r \leq 1$, $\eta_r=0$ in $B_{\frac{r}{2}}(0)$, $\eta_r=1$ in $B^c_r(0)$ and $|\nabla \eta_r(x)|\leq \frac{C}{r}$, for some $C > 0$ independent of $r$. Since $J'_{\lambda,k}(u_n)(\eta_ru_n)=o_n(1)$, it follows from \eqref{del_3} that
\begin{eqnarray}\label{PS_1}
& &\Re\left(\int\!\!\! \int_{\mathbb{R}^{2N}}\frac{\Big(u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u_n(x)\eta_r(x)-u_n(y)\eta_r(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right) \nonumber\\
& &+\int_{\mathbb{R}^{N}}V(x)|u_n|^2\eta_r \,\mathrm{d}x =\int_{\mathbb{R}^{N}}h_{\lambda,k}(x,|u_n|^2)|u_n|^2\eta_r \,\mathrm{d}x+o_n(1)\nonumber\\
&\leq& \frac{1}{\nu}\int_{\mathbb{R}^N}V(x)|u_n|^2\eta_r \,\mathrm{d}x + o_n(1).
\end{eqnarray}
Next, using $\overline{z_1+z_2}=\overline{z_1}+\overline{z_2}$ for all $z_1,z_2 \in \C$ and $\overline{e^{i t}}=e^{-i t}$ for all $t \in \mathbb{R}$, let us note that
\begin{eqnarray}\label{PS_2}
&&\!\!\!\!\Re\Bigg(\Big(u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u_n(x)\eta_r(x)-u_n(y)\eta_r(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}\Bigg)\nonumber\\
&=&\!\!\!\!\! \Re \Bigg(\Big(u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u_n(x)\eta_r(x)-u_n(y)\eta_r(x)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}\nonumber\\
& & + \Big(u_n(x)-u_n(y)e^{iA\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u_n(y)\eta_r(x)-u_n(y)\eta_r(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}\Bigg)\nonumber\\
&=&\!\!\!\!\!\Re\Bigg(\overline{u_n(y)}e^{-i A\big(\frac{x+y}{2}\big).(x-y)}\Big(u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\Big(\eta_r(x)-\eta_r(y)\Big)\Bigg)\nonumber\\
& &+\eta_r(x)\Big|u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big|^2.
\end{eqnarray}
Now, since $|\Re(z)|\leq |z|$ for all $z \in \C$, $|e^{it}|=1$ for all $t \in \mathbb{R}$ and $(u_n)$ is bounded in $E$, H\"{o}lder's inequality shows that
\begin{equation*}
\frac{\overline{u_n(y)}e^{-i A\big(\frac{x+y}{2}\big).(x-y)}\Big(u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\Big(\eta_r(x)-\eta_r(y)\Big)}{|x-y|^{N+2s}} \in L^1(\mathbb{R}^{2N},\C)
\end{equation*}
and
\begin{eqnarray}\label{PS_3}
&&\Bigg|\Re\left(\int\!\!\! \int_{\mathbb{R}^{2N}}\frac{\overline{u_n(y)}e^{-i A\big(\frac{x+y}{2}\big).(x-y)}\Big(u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\Big(\eta_r(x)-\eta_r(y)\Big)}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)\Bigg|\nonumber\\
&\leq&\left(\int\!\!\!\int_{\mathbb{R}^{2N}}\frac{\big|u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)^\frac{1}{2} \left(\int\!\!\!\int_{\mathbb{R}^{2N}}|\overline{u_n(y)}|^2\frac{\big|\eta_r(x)-\eta_r(y)\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \right)^{\frac{1}{2}}\nonumber\\
&\leq& C\left(\int\!\!\!\int_{\mathbb{R}^{2N}}|u_n(y)|^2\frac{\big|\eta_r(x)-\eta_r(y)\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \right)^{\frac{1}{2}}.
\end{eqnarray}
By combining \eqref{PS_1}, \eqref{PS_2} and \eqref{PS_3}, we write
\begin{eqnarray}\label{PS_5}
&&\int\!\!\!\int_{\mathbb{R}^{2N}}\frac{\eta_r(x)\big|u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y +\left(1-\frac{1}{\nu}\right)\int_{\mathbb{R}^{N}}V(x)|u_n|^2\eta_r \,\mathrm{d}x \nonumber\\
&\leq&-\Re\left(\int\!\!\! \int_{\mathbb{R}^{2N}}\frac{\overline{u_n(y)}e^{-i A\big(\frac{x+y}{2}\big).(x-y)}\Big(u_n(x)-u_n(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\Big(\eta_r(x)-\eta_r(y)\Big)}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \right)+o_n(1)\nonumber\\
&\leq& C\left(\int\!\!\!\int_{\mathbb{R}^{2N}}|u_n(y)|^2\frac{\big|\eta_r(x)-\eta_r(y)\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{2}} + o_n(1).
\end{eqnarray}
Arguing as in \cite[Lemma 2.4]{ambro3}, one may prove that
\begin{eqnarray}\label{PSS}
\lim_{r \rightarrow \infty}\limsup_{n \rightarrow \infty}\int\!\!\!\int_{\mathbb{R}^{2N}}|u_n(y)|^2\frac{\big|\eta_r(x)-\eta_r(y)\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y=0.
\end{eqnarray}
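For the reader's convenience, we indicate the computation behind \eqref{PSS} under the additional assumption that $\sup_{n}\|u_n\|_{L^2(\mathbb{R}^N)}<\infty$: since $|\eta_r(x)-\eta_r(y)|\leq \min\big\{2,\frac{C}{r}|x-y|\big\}$, splitting the inner integral into the regions $|x-y|\leq r$ and $|x-y|> r$ and integrating in polar coordinates, we obtain
\begin{equation*}
\int\!\!\!\int_{\mathbb{R}^{2N}}|u_n(y)|^2\frac{\big|\eta_r(x)-\eta_r(y)\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \leq \omega_{N-1}\Big(\frac{C^2}{2-2s}+\frac{4}{2s}\Big)\frac{1}{r^{2s}}\,\sup_{n}\|u_n\|^2_{L^2(\mathbb{R}^N)},
\end{equation*}
which tends to $0$ as $r \rightarrow \infty$; the general case requires the more delicate argument of \cite[Lemma 2.4]{ambro3}.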
Therefore, it follows from \eqref{PS_5} and \eqref{PSS} that \eqref{PS} holds.
\end{proof}
In the next lemma, we prove the Palais–Smale condition for the functional $J_{\lambda,k}$.
\begin{lem}\label{lemma4.3}
The functional $J_{\lambda,k}$ satisfies the
$(PS)_{c_{\lambda,k}}-$condition.
\end{lem}
\begin{proof}
Let $(u_n)\subset E$ be a $(PS)_{c_{\lambda,k}}-$sequence for $J_{\lambda,k}$. In view of Lemma \ref{lemma4.2}, the sequence $(u_n)$ is bounded in $E$ and, up to a subsequence, $u_n \rightharpoonup u$ weakly in $E$. Thus,
\begin{eqnarray}\label{LemmaPS_1}
o_n(1)&=&J'_{\lambda,k}(u_n)(u_n-u)\nonumber\\
&=& \|u_n\|^2-\|u\|^2+o_n(1)- \dsp\int_{\mathbb{R}^N}h_{\lambda,k}(x,|u_n|^2)u_n(\overline{u_n-u})\,\mathrm{d}x\nonumber
\\
&=& \|u_n-u\|^2-\dsp\int_{\mathbb{R}^N}h_{\lambda,k}(x,|u_n|^2)u_n(\overline{u_n-u})\,\mathrm{d}x+o_n(1).
\end{eqnarray}
In light of the compact Sobolev embedding $E\hookrightarrow L^{\alpha}_{loc}(\mathbb{R}^N,\mathbb{C})$, for $\alpha \in (2, 2^*_{s})$, together with $(f_2)$, \eqref{del_1} and H\"{o}lder's inequality, we obtain
\begin{equation}\label{PS_6}
\int_{B_r(0)}h_{\lambda,k}(x,|u_n|^2)u_n(\overline{u_n-u})\,\mathrm{d}x=o_n(1),
\end{equation}
for each $r>0$. Furthermore, taking $r$ sufficiently large, using $(f_2)$, \eqref{del_1} and H\"{o}lder’s inequality, we have
\begin{equation}\label{PS_7}
\int_{B^c_r(0)}h_{\lambda,k}(x,|u_n|^2)u_n\overline{u}\,\mathrm{d}x=o_n(1).
\end{equation}
Hence, it follows from \eqref{PS_6} and \eqref{PS_7}, for $r>R_0$ sufficiently large, that
\begin{eqnarray}\label{j1}
\int_{\mathbb{R}^N}h_{\lambda,k}(x,|u_n|^2)u_n(\overline{u_n-u})\,\mathrm{d}x&=&\int_{B_r^c(0)}h_{\lambda,k}(x,|u_n|^2)|u_n|^2\,\mathrm{d}x+o_n(1)\nonumber\\
&\leq&\frac{1}{\nu}\int_{B^c_r(0)}V(x)|u_n|^2\,\mathrm{d}x +o_n(1).
\end{eqnarray}
Therefore, Lemma \ref{lemmaPS}, \eqref{LemmaPS_1} and \eqref{j1} imply that
$$
\|u_n-u\|\rightarrow 0 \mbox{ \ as } n \rightarrow \infty,
$$
which finishes the proof.
\end{proof}
In view of Lemmas \ref{lem4.1} and \ref{lemma4.3} we have the following result:
\begin{lem}\label{lemma4.4}
For each $\lambda > 0$ and $k \in \mathbb{N}$, Problem \eqref{aux2} has at least one weak solution $u_{\lambda,k} \in E$ such that $J_{\lambda,k}(u_{\lambda,k}) = c_{\lambda,k}$.
\end{lem}
\section{$L^{\infty}$--estimates}\label{5}
We start this section by proving a uniform estimate for the magnetic Gagliardo semi-norm of the solution $u_{\lambda,k}$ of problem \eqref{aux2},
obtained in Lemma \ref{lemma4.4}.
\begin{lem}\label{lemma5.1}
Let $u_{\lambda,k}$ be the solution obtained in Lemma \ref{lemma4.4}. Then, there exists a constant $M_0$, which depends
only on $N, \theta, s, p$ \big(independent of $\lambda$, $k$ and $R_0$\big), such that
$$
[u_{\lambda,k}]_{s,A}^2=\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{|u_{\lambda,k}(x)-u_{\lambda,k}(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \leq M_0.
$$
\end{lem}
\begin{proof}
In view of the estimate \eqref{stimative_1} obtained in the proof of Lemma \ref{lemma4.2} and recalling that $c_{\lambda,k}\leq c_0$, we have
\begin{eqnarray}
c_0\geq c_{\lambda,k}=J_{\lambda,k}(u_{\lambda,k})-\frac{1}{\theta}J'_{\lambda,k}(u_{\lambda,k}).u_{\lambda,k} \geq \frac{\theta-2}{4\theta}\|u_{\lambda,k}\|^2,
\end{eqnarray}
which implies
\begin{equation*}
\|u_{\lambda,k}\|^2 \leq \Big(\frac{4\theta}{\theta-2}\Big)c_0=:M_0.
\end{equation*}
Therefore,
\begin{equation*}
[u_{\lambda,k}]^2_{s,A} \leq M_0,
\end{equation*}
which completes the proof.
\end{proof}
The next lemma is crucial in our arguments, since it establishes an important estimate involving the $L^{\infty}-$norm of the solution $u_{\lambda,k}$. For this purpose, we shall use the Moser iteration method.
\begin{lem}\label{lemma5.2} For each $\lambda> 0$ and $k \in \mathbb{N}$, $|u_{\lambda, k}| \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$, and there exists $C > 0$ that depends only on
$N$, $s$, $\theta$, $p$ and $C_0$, such that
\begin{equation}\label{Eq Lemm5.2}
\big\||u_{\lambda,k} |\big\|_{\infty} \leq C\big(1+\lambda k^{\frac{q-p}{2}}\big)^{\gamma} \big\||u_{\lambda,k}|\big\|_{2^*_{s}},
\end{equation}
where $\gamma ={\frac{1}{2(\beta_1-1)}}$ and $\beta_1=\frac{2^*_{s}-p+2}{2}$.
\end{lem}
\begin{proof}
The proof follows some ideas from \cite[Lemma 2.8]{ambro3}. For simplicity, we write $u\!=\!u_{\lambda,k}$. For $L > 0$, we define $u_L=\min \left\lbrace |u|,L \right\rbrace$. Taking $\phi =u u_L^{2(\beta -1)}$ as a test function in \eqref{S_1}, where $\beta >1$ will be chosen later, we deduce
\begin{eqnarray}\label{M_1}
&&\Re\left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u u_L^{2(\beta -1)}(x)-u u_L^{2(\beta -1)}(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)\nonumber \\
& & +\int_{\mathbb{R}^{N}}V(x)|u|^2u_L^{2(\beta -1)} \,\mathrm{d}x- \int_{\mathbb{R}^{N}}h_{\lambda,k}(x,|u|^2)|u|^2 u_L^{2(\beta -1)}\,\mathrm{d}x=0.
\end{eqnarray}
Now, using the fact that $\Re(z) \leq |z|$, for all $z$ in $\mathbb{C}$ and $|e^{i t}|=1$ for all $t$ in $\mathbb{R}$, we obtain
\begin{eqnarray*}\label{M_2}
& & \Re\left( \Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u u_L^{2(\beta -1)}(x)-u u_L^{2(\beta -1)}(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)} \right) \nonumber \\
& & \geq\Big(|u(x)|-|u(y)|\Big)\Big(|u(x)| u_L^{2(\beta -1)}(x)-|u(y)| u_L^{2(\beta -1)}(y)\Big),
\end{eqnarray*}
which implies that
\begin{eqnarray}\label{M_4}
& & \Re\left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u u_L^{2(\beta -1)}(x)-u u_L^{2(\beta -1)}(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\,\mathrm{d}x\mathrm{d}y\right) \nonumber \\
&& \geq \int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(|u(x)|-|u(y)|\Big)\Big(|u(x)| u_L^{2(\beta -1)}(x)-|u(y)|u_L^{2(\beta -1)}(y)\Big)}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y.
\end{eqnarray}
For any $t \geq0$, let us define the functions
$$
\alpha(t):=\alpha_{L,\beta}(t)=t\,t_L^{2(\beta -1)}, \quad \mbox{where} \ \ t_L=\min\{|t|,L\},
$$
$$
\Lambda(t)=\frac{|t|^2}{2} \quad \mbox{and} \quad \Gamma(t)=\int_{0}^{t}(\alpha'(s))^{\frac{1}{2}}\,\mathrm{d}s.
$$
The following estimates hold:
\begin{equation}\label{af1}
\Lambda'(a-b)(\alpha(a)-\alpha(b))\geq |\Gamma(a)-\Gamma(b)|^2, \quad \forall \, a,b \in \mathbb{R}
\end{equation}
and
\begin{equation}\label{af2}
\Gamma(|t|)\geq \frac{1}{\beta}|t|t_L^{\beta -1}.
\end{equation}
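For the reader's convenience, we observe that \eqref{af1} is a direct consequence of the Cauchy--Schwarz inequality: since $\alpha$ is non-decreasing (extended as an odd function to negative arguments) and $\Gamma(a)-\Gamma(b)=\int_{b}^{a}(\alpha'(s))^{\frac{1}{2}}\,\mathrm{d}s$, we have
\begin{equation*}
|\Gamma(a)-\Gamma(b)|^2=\Bigg|\int_{b}^{a}(\alpha'(s))^{\frac{1}{2}}\,\mathrm{d}s\Bigg|^2\leq |a-b|\,\Bigg|\int_{b}^{a}\alpha'(s)\,\mathrm{d}s\Bigg|=\Lambda'(a-b)\big(\alpha(a)-\alpha(b)\big),
\end{equation*}
where the last equality uses $\Lambda'(t)=t$ and the monotonicity of $\alpha$.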
By using \eqref{af1}, we have
\begin{equation}\label{M_5}
|\Gamma(|u(x)|)-\Gamma(|u(y)|)|^2 \leq (|u(x)|-|u(y)|)\Big(|u(x)|u_L^{2(\beta -1)}(x) - |u(y)|u_L^{2(\beta -1)}(y) \Big).
\end{equation}
Next, combining \eqref{M_4} with \eqref{M_5}, we obtain
\begin{eqnarray}\label{M_6}
&&\Re\left(\int \! \! \! \int_{\mathbb{R}^{2N} }\frac{\Big(u(x)-u(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(u u_L^{2(\beta -1)}(x)-u u_L^{2(\beta -1)}(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)\nonumber \\
& & \geq \int \! \! \! \int_{\mathbb{R}^{2N} }\frac{|\Gamma(|u(x)|)-\Gamma(|u(y)|)|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y=\big[\Gamma(|u|) \big]_{s}^2,
\end{eqnarray}
and using Lemma \ref{ime_1} and \eqref{af2}, we note
\begin{eqnarray}\label{M_7}
[\Gamma(|u|)]_{s}^2\geq S\|\Gamma(|u|) \|^2_{2^*_{s}} \geq S\frac{1}{\beta^2}\||u|u_L^{\beta -1} \|^2_{2^*_{s}}.
\end{eqnarray}
In view of \eqref{M_1}, \eqref{M_6} and \eqref{M_7}, we infer
\begin{equation*}\label{M_8}
S\frac{1}{\beta^2}\big\||u|u_L^{\beta -1} \big\|^2_{2^*_{s}}+\int_{\mathbb{R}^{N}}V(x)|u|^2u_L^{2(\beta -1)}\,\mathrm{d}x\leq\int_{\mathbb{R}^{N} }h_{\lambda,k}(x,|u|^2)|u|^2u_L^{2(\beta -1)}\,\mathrm{d}x.
\end{equation*}
Since $V\geq 0$ and $0 \leq u_L \leq |u|$, using the above estimate jointly with \eqref{del_1} and $(f_1)$, we reach
\begin{eqnarray}
S\frac{1}{\beta^2}\big\||u|u_L^{\beta -1} \big\|^2_{2^*_{s}}\leq C_0(1+\lambda k^{\frac{q-p}{2}})\int_{\mathbb{R}^{N}}|u|^{2\beta}|u|^{p-2}\,\mathrm{d}x.
\end{eqnarray}
By H\"{o}lder’s inequality with exponents $2^*_{s}/(p-2)$ and $2^*_{s}/(2^*_{s}-p+2)$, we see
\begin{eqnarray}\label{M_9}
\big\||u|u_L^{\beta -1} \big\|^2_{2^*_{s}} &\leq& S^{-1}\beta^2C_0(1+\lambda k^{\frac{q-p}{2}})\Bigg(\int_{\mathbb{R}^{N}}|u|^{2^*_{s}}\,\mathrm{d}x\Bigg)^{\frac{p-2}{2^*_{s}}}\Bigg(\int_{\mathbb{R}^{N} }|u|^{\frac{2\beta 2^*_{s}}{2^*_{s}-p+2}}\,\mathrm{d}x\Bigg)^{\frac{2^*_{s}-p+2}{2^*_{s}}}\nonumber\\
&=& S^{-1}\beta^2C_0(1+\lambda k^{\frac{q-p}{2}})\big\||u|\big\|^{p-2}_{2^*_{s}}\Bigg(\int_{\mathbb{R}^{N} }|u|^{\frac{2\beta 2^*_{s}}{2^*_{s}-p+2}}\,\mathrm{d}x\Bigg)^{\frac{2^*_{s}-p+2}{2^*_{s}}}.
\end{eqnarray}
Moreover, recalling \eqref{im_1.0} and Lemma \ref{lemma5.1}, we have
\begin{equation}\label{eq5.15}
\big\||u|\big\|^{2}_{2^*_{s}}\leq S^{-1}M_0.
\end{equation}
Now, we observe that if $|u|\in L^{\frac{2^*_{s}2\beta}{2^*_{s}-p+2}}(\mathbb{R}^N,\mathbb{R})$, then combining \eqref{eq5.15} with \eqref{M_9} yields
\begin{equation}\label{M_10}
\big\||u|u_L^{\beta -1} \big\|^2_{2^*_{s}}\leq C_1(1+\lambda k^{\frac{q-p}{2}})\beta^2\Bigg(\int_{\mathbb{R}^{N} }|u|^{\frac{2\beta 2^*_{s}}{2^*_{s}-p+2}}\,\mathrm{d}x\Bigg)^{\frac{2^*_{s}-p+2}{2^*_{s}}} < \infty,
\end{equation}
where $C_1=C_1(N,s,\theta,p,C_0)$. Hence, since $ u_L \rightarrow |u|$ almost everywhere as $L\rightarrow \infty$, using Fatou’s Lemma in \eqref{M_10}, we conclude that
\begin{equation*}
\big\||u|^{\beta} \big\|^2_{2^*_{s}}\leq C_1(1+\lambda k^{\frac{q-p}{2}})\beta^2\Bigg(\int_{\mathbb{R}^{N} }|u|^{\frac{2\beta 2^*_{s}}{2^*_{s}-p+2}}\,\mathrm{d}x\Bigg)^{\frac{2^*_{s}-p+2}{2^*_{s}}}
\end{equation*}
from which we deduce that
\begin{equation*}
\big\||u|^{\beta} \big\|^{\frac{1}{\beta}}_{L^{2^*_{s}}(\mathbb{R}^N,\mathbb{R})}\leq [C_1(1+\lambda k^{\frac{q-p}{2}})]^{\frac{1}{2\beta}}\beta^{\frac{1}{\beta}}\Bigg(\int_{\mathbb{R}^{N} }|u|^{\frac{2\beta 2^*_{s}}{2^*_{s}-p+2}}\,\mathrm{d}x\Bigg)^{\frac{2^*_{s}-p+2}{2^*_{s}2\beta}}\!\!,
\end{equation*}
that is,
\begin{equation}\label{I_1}
\big\||u| \big\|_{2^*_{s}\beta}\leq [C_1(1+\lambda k^{\frac{q-p}{2}})]^{\frac{1}{2\beta}}\beta^{\frac{1}{\beta}}\Bigg(\int_{\mathbb{R}^{N} }|u|^{\frac{2\beta 2^*_{s}}{2^*_{s}-p+2}}\,\mathrm{d}x\Bigg)^{\frac{2^*_{s}-p+2}{2^*_{s}2\beta}},
\end{equation}
and thus $|u| \in L^{2^*_{s}\beta}(\mathbb{R}^N,\mathbb{R})$.
Let us use inequality \eqref{I_1} in order to obtain the desired $L^{\infty}-$estimate through the Moser iteration method. To this end, setting $\beta=\beta_1:=(2^*_{s}-p+2)/2$ in \eqref{I_1}, there holds
\begin{equation}\label{I_2}
\big\||u| \big\|_{2^*_{s}\beta_1}\leq [C_1(1+\lambda k^{\frac{q-p}{2}})]^{\frac{1}{2\beta_1}}\beta_1^{\frac{1}{\beta_1}}\big\||u| \big\|_{2^*_{s}}.
\end{equation}
Now, when $\beta =\beta_2:=\beta_1^2$ in \eqref{I_1}, we have $2\beta_2 2^*_{s}/(2^*_{s}-p+2)=2^*_{s}\beta_1$ and we deduce that
\begin{eqnarray*}
\big\||u| \big\|_{2^*_{s}\beta_2}&\leq& [C_1(1+\lambda k^{\frac{q-p}{2}})]^{\frac{1}{2\beta_2}}\beta_2^{\frac{1}{\beta_2}}\big\||u| \big\|_{2^*_{s}\beta_1}\\
&\leq& [C_1(1+\lambda k^{\frac{q-p}{2}})]^{\frac{1}{2\beta_1}+\frac{1}{2\beta_2}}\beta_1^{\frac{1}{\beta_1}}\beta_2^{\frac{1}{\beta_2}}\big\||u| \big\|_{2^*_{s}},
\end{eqnarray*}
where the last inequality follows from \eqref{I_2}.
Arguing by iteration $m$ times for $m \geq 2$, with $\beta\!\!=\!\!\beta_m\!\!:=\!\!\beta_{m-1}\beta_1\!\!=\!\!\beta_1^{m}$ in \eqref{I_1} and using that $2\beta_m2^*_{s}/(2^*_{s}-p+2)=2^*_{s}\beta_{m-1}$, we deduce
\begin{equation}\label{I_3}
\big\||u| \big\|_{2^*_{s}\beta_m}\leq [C_1(1+\lambda k^{\frac{q-p}{2}})]^{\frac{1}{2\beta_1}+\frac{1}{2\beta_2}+\cdots +\frac{1}{2\beta_m}}\beta_1^{\frac{1}{\beta_1}}\beta_2^{\frac{1}{\beta_2}}\cdots \beta_m^{\frac{1}{\beta_m}}\big\||u| \big\|_{2^*_{s}},
\end{equation}
which implies that $|u|\in L^{2^*_{s}\beta_m}(\mathbb{R}^N,\mathbb{R})$, for all $m \geq 2$.
Since
\begin{equation*}
\frac{1}{2}\sum_{j=1}^{m}\frac{1}{\beta_j}\leq\frac{1}{2}\sum_{j=1}^{\infty}\Big(\frac{1}{\beta_1}\Big)^j=\frac{1}{2(\beta_1-1)}=\gamma
\end{equation*}
and
\begin{equation*}
\beta_1^{\frac{1}{\beta_1}}\beta_2^{\frac{1}{\beta_2}}\cdots \beta_m^{\frac{1}{\beta_m}}\leq \beta_1^{\sum_{j=1}^{\infty}\frac{j}{\beta_1^{j}}}=\beta_1^{\tilde{\gamma}}, \quad \tilde{\gamma}=\frac{\beta_1}{(\beta_1-1)^2}
\end{equation*}
it follows from \eqref{I_3} that
\begin{equation*}
\big\||u| \big\|_{2^*_{s}\beta_m}\leq [C_1(1+\lambda k^{\frac{q-p}{2}})]^{\gamma}\beta_1^{\tilde{\gamma}}\big\||u| \big\|_{2^*_{s}}, \quad \forall \, m \geq 2.
\end{equation*}
Since $2^*_{s}\beta_m=2^*_{s}\beta_1^{m}\rightarrow \infty$ as $m \rightarrow \infty$, we conclude that $|u| \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$ and that the lemma is valid for
$
C=C_1^{\gamma}\beta_1^{\tilde{\gamma}}
$
with
$
\beta_1=\frac{2^*_{s}-p+2}{2}.
$
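For the reader's convenience, we recall that both series bounds used above come from geometric series: since $\beta_1>1$,
\begin{equation*}
\sum_{j=1}^{\infty}\Big(\frac{1}{\beta_1}\Big)^{j}=\frac{1/\beta_1}{1-1/\beta_1}=\frac{1}{\beta_1-1}
\quad \mbox{and} \quad
\sum_{j=1}^{\infty}\frac{j}{\beta_1^{j}}=\frac{1/\beta_1}{(1-1/\beta_1)^{2}}=\frac{\beta_1}{(\beta_1-1)^{2}},
\end{equation*}
the second identity being obtained by differentiating $\sum_{j\geq 0}x^{j}=(1-x)^{-1}$, $|x|<1$, multiplying by $x$ and evaluating at $x=1/\beta_1$.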
\end{proof}
In view of Lemma \ref{lemma5.2}, since \eqref{Eq Lemm5.2} and \eqref{eq5.15} hold, we are able to find suitable values of $\lambda$ and $k$ for which the following inequality holds:
\begin{equation*}
\big\||u_{\lambda,k}|\big\|_{\infty} \leq C_1^{\gamma}\beta_1^{\tilde{\gamma}}\big(1+\lambda k^{\frac{q-p}{2}}\big)^{\gamma}C_2 < k,
\end{equation*}
where $C_2=(S^{-1}M_0)^{\frac{1}{2}}$. In fact, we shall verify that
\begin{equation*}
C_1^{\gamma}\beta_1^{\tilde{\gamma}}\big(1+\lambda k^{\frac{q-p}{2}}\big)^{\gamma}C_2 < k,
\end{equation*}
or equivalently,
\begin{equation*}
\lambda k^{\frac{q-p}{2}}\leq \frac{1}{C_1\beta_{1}^{\frac{\tilde{\gamma}}{\gamma}}}(C_2^{-1}k)^{\frac{1}{\gamma}}-1.
\end{equation*}
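Indeed, since $t \mapsto t^{\frac{1}{\gamma}}$ is increasing on $(0,+\infty)$, raising both sides of the first inequality to the power $\frac{1}{\gamma}$ and using $\big(C_1^{\gamma}\beta_1^{\tilde{\gamma}}C_2\big)^{\frac{1}{\gamma}}=C_1\beta_{1}^{\frac{\tilde{\gamma}}{\gamma}}C_2^{\frac{1}{\gamma}}$, we obtain
\begin{equation*}
1+\lambda k^{\frac{q-p}{2}} \leq \Big(\frac{k}{C_1^{\gamma}\beta_1^{\tilde{\gamma}}C_2}\Big)^{\frac{1}{\gamma}}=\frac{1}{C_1\beta_{1}^{\frac{\tilde{\gamma}}{\gamma}}}\big(C_2^{-1}k\big)^{\frac{1}{\gamma}},
\end{equation*}
which is precisely the second inequality.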
Consider $k>0$ such that
\begin{equation*}
\frac{1}{C_1\beta_{1}^{\frac{\tilde{\gamma}}{\gamma}}}(C_2^{-1}k)^{\frac{1}{\gamma}}-1>0
\end{equation*}
and fix $\lambda_0>0$ such that
\begin{equation*}
\lambda <\lambda_{0} \leq \Bigg(\frac{1}{C_1\beta_{1}^{\frac{\tilde{\gamma}}{\gamma}}}(C_2^{-1}k)^{\frac{1}{\gamma}}-1\Bigg)\frac{1}{k^{\frac{q-p}{2}}}.
\end{equation*}
Thus, by taking $k_0>C_3:= C_1^{\gamma}\beta_1^{\tilde{\gamma}}C_2$ we obtain $\lambda_0>0$ such that
\begin{equation}\label{eq}
\big\||u_{\lambda,k_0}|\big\|_{\infty} \leq k_0,\quad \forall \, \lambda \in [0,\lambda_{0}).
\end{equation}
\section{Proof of Theorem \ref{theorem}}\label{6}
In light of Lemma \ref{lemma4.4}, for each $\lambda >0$ and $k \in \N$, the auxiliary problem \eqref{aux2} admits a solution $u_{\lambda,k}$ in $E$. Thereby, in order to prove the existence of a solution for the original problem \eqref{problem}, since \eqref{eq} holds, it is sufficient to prove the following inequality:
\begin{equation*}
f_{\lambda,k_0}(|u_{\lambda,k_0}(x)|^2) \leq \frac{V(x)}{\nu},\quad \forall \, |x|>R_0 \mbox{ \ and \ } \forall\, \lambda \in [0, \lambda_0).
\end{equation*}
\begin{lem}\label{lem5.3}
For each $\lambda> 0$ and $k \in \mathbb{N}$, let $u_{\lambda,k}$ be the solution of the auxiliary problem \eqref{aux2} such that $J_{\lambda,k}(u_{\lambda,k})=c_{\lambda,k}$. Then,
$$
|u_{\lambda ,k}| \leq \frac{R_0^{N-2s}}{|x|^{N-2s}}\big \||u_{\lambda,k}|\big\|_\infty, \quad \forall\, |x|\geq R_0.
$$
\end{lem}
\begin{proof}
For the sake of simplicity, we denote $u = u_{\lambda, k}$. Let $v$ be the $C^{\infty}(\mathbb{R}^N\backslash \{0\},\mathbb{R})$ function
\begin{equation}\label{Harmonic}
v(x)=\frac{R_0^{N-2s}\big\||u|\big\|_{\infty}}{|x|^{N-2s}}, \quad x\not =0.
\end{equation}
Since $1/|x|^{N-2s}$ is $s$-harmonic (see for instance \cite{BucurVald}), it follows that $(-\Delta)^{s}v(x)=0$ in $\mathbb{R}^N\backslash \{0\}$. Note that
$$
|u| \leq \big \||u|\big\|_{\infty} \leq \frac{R_0^{N-2s}}{|x|^{N-2s}} \big \||u|\big\|_{\infty}, \quad \forall\, |x|\leq R_0.
$$
Let us introduce the function $w\in \mathcal{D}^{s,2}(\mathbb{R}^N, \mathbb{R})$ defined by
$$
w(x) = \left\{
\begin{array}{ccl}
(|u|-v)^+(x),& \mbox{if} & |x|\geq R_0,\\
0, & \mbox{if} & |x|\leq R_0.\\
\end{array}
\right.
$$
\vspace{0.2cm}
It is worth mentioning that at this point one could try to argue as in \cite{chao} and apply Kato's inequality to the function $\psi:=\frac{u}{|u|}w$. However, as pointed out in \cite{ambro3}, we are not able to use $\psi$ as a test function, and a Kato-type inequality is not available for the fractional magnetic Laplacian. Thus the arguments in \cite{chao} cannot be applied directly to our situation. We overcome this difficulty arguing similarly to \cite{ambro3}, by introducing the function
\[
\psi_{\delta}:=\dsp\frac{u}{u_{\delta}}w, \quad \mbox{where} \hspace{0.3cm} u_{\delta}=\sqrt{|u|^2 +\delta^2}, \hspace{0.3cm} \delta>0.
\]
Thus, applying $\psi_{\delta}$ as a test function in \eqref{S_1}, letting $\delta \rightarrow 0$ and using the Dominated Convergence Theorem, we obtain
\begin{equation}\label{Ftest_10}
\int \! \! \! \int_{\mathbb{R}^{2N}}\frac{\big(|u(x)|-|u(y)|\big)\big(w(x)- w(y)\big)}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \leq \int_{\mathbb{R}^N}\left(-V(x)+h_{\lambda,k}(x,|u|^2)\right)|u|w\,\mathrm{d}x.
\end{equation}
Now, using the fact that $v$ is $s$-harmonic and that $\big[(|u|-v)(x)-(|u|-v)(y)\big]\big(w(x)-w(y)\big) \geq \big|w(x)-w(y)\big|^2$ for all $x,y \in \mathbb{R}^N$, we can obtain
\begin{eqnarray}\label{Ftest_11}
\int \! \! \! \int_{\mathbb{R}^{2N}}\frac{\big(|u(x)|-|u(y)|\big)\big(w(x)- w(y)\big)}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \!\!\!&=&\!\!\!\!\int \! \! \! \int_{\mathbb{R}^{2N}}\frac{\big((|u|-v)(x)-(|u|-v)(y)\big)\big(w(x)- w(y)\big)}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \nonumber\\
&\geq& \!\!\!\! \int \! \! \! \int_{\mathbb{R}^{2N}}\frac{\big(w(x)- w(y)\big)^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y.
\end{eqnarray}
Now, if we write $\mathbb{R}^N=( B^c_{R_0}\cap \Theta) \cup (B^c_{R_0} \cap \Theta^{c}) \cup B_{R_0}$ where $\Theta:=\{x \in \mathbb{R}^N:|u(x)|\geq v(x)\}$
and since $w=0$ in $B^c_{R_0}\cap \Theta^c$ and $w=0$ in $B_{R_0}$, then we deduce
\begin{equation}\label{Ftest_12}
\int_{\mathbb{R}^N}\big(-V(x)+h_{\lambda,k}(x,|u|^2)\big)|u|w\,\mathrm{d}x=\int_{B^c_{R_0}\cap \Theta}\big(-V(x)+h_{\lambda,k}(x,|u|^2)\big)|u|w\,\mathrm{d}x.
\end{equation}
Finally, by \eqref{Ftest_11}, \eqref{Ftest_12} and since $h_{\lambda,k}(x,|u|^2)\leq \frac{1}{\nu}V(x)$, \eqref{Ftest_10} becomes
\begin{equation*}
\int \! \! \! \int_{\mathbb{R}^{2N}}\frac{\big|w(x)- w(y)\big|^2}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y \leq \Big(\frac{1}{\nu}-1\Big)\int_{B^c_{R_0}\cap \Theta}V(x)|u|w\,\mathrm{d}x \leq 0,
\end{equation*}
showing that $w\equiv 0$. Therefore, $|u| \leq v$ for $|x|\geq R_0$, and this finishes the proof.
\end{proof}
\vspace{0.3cm}
\noindent {\bf Proof of Theorem \ref{theorem}:} Using the definition of $f_{\lambda,k}$, $(f_2)$ and combining Lemma \ref{lem5.3} with \eqref{eq} and $(V_3)$, it follows that
\begin{eqnarray}\label{ineq_4}
\nu f_{\lambda,k_0}(|u_{\lambda,k_0}(x)|^2)&\leq& \nu f_{\lambda_0,k_0}(|u_{\lambda,k_0}(x)|^2)\nonumber \\
&\leq&\nu \big(1+\lambda_0k_0^{\frac{q-p}{2}}\big)|u_{\lambda,k_0}(x)|^{2^*_s-2} \nonumber\\
&\leq& \nu \big(1+\lambda_0k_0^{\frac{q-p}{2}}\big)\Big( \frac{R_0^{N-2s}\||u_{\lambda,k_0}|\|_{\infty}}{|x|^{N-2s}}\Big)^{2^*_s-2} \nonumber\\
&\leq& \nu \big(1+\lambda_0k_0^{\frac{q-p}{2}}\big)\frac{R_0^{4s}}{|x|^{4s}}\||u_{\lambda,k_0}|\|_{\infty}^{\frac{4s}{N-2s}}\nonumber\\
&\leq& \nu \big(1+\lambda_0k_0^{\frac{q-p}{2}}\big)\frac{V(x)}{\Lambda}k_0^{\frac{4s}{N-2s}}, \quad \forall \, |x|>R_0.
\end{eqnarray}
Now, if $\Lambda\geq \Lambda_0:=\nu\big(1+\lambda_0k_0^{\frac{q-p}{2}}\big)k_0^{\frac{4s}{N-2s}}$ and $\lambda \in [0,\lambda_0)$, then \eqref{ineq_4} implies that
\begin{equation*}
\nu f_{\lambda,k_0}(|u_{\lambda,k_0}(x)|^2) \leq V(x),\quad \forall \, |x|>R_{0} \mbox{ \ and \ } \forall\, \lambda \in [0, \lambda_0).
\end{equation*}
Consequently, by \eqref{flambdak} and \eqref{hlambdak}, we deduce
\begin{eqnarray}\label{ineq_5}
h_{\lambda,k_0}(x,|u_{\lambda,k_0}(x)|^2)&=&\hat{f}_{\lambda,k_0}(|u_{\lambda,k_0}(x)|^2)\nonumber\\
&=&f_{\lambda,k_0}(|u_{\lambda,k_0}(x)|^2)\nonumber\\
&=&g(|u_{\lambda,k_0}(x)|^2) + \lambda |u_{\lambda,k_0}(x)|^{q-2}, \quad \mbox{a.e. in } \mathbb{R}^N,
\end{eqnarray}
for all $\lambda$ in $[0,\lambda_0)$ and $\Lambda \geq \Lambda_0$. Finally, by \eqref{ineq_5} and since $u_{\lambda,k_0}$ is a critical point of $J_{\lambda,k_0}$, we reach
\begin{eqnarray*}
0&=&\Re\left(\int \!\!\!\int_{\mathbb{R}^{2N} }\frac{\Big(u_{\lambda,k_0}(x)-u_{\lambda,k_0}(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(\phi(x)-\phi(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right) \nonumber\\
&&+\,\,\Re\left( \int_{\mathbb{R}^{N}}V(x)u_{\lambda,k_0}\overline{\phi}\,\mathrm{d}x -\int_{\mathbb{R}^{N}}h_{\lambda,k_0}(x,|u_{\lambda,k_0}|^2)u_{\lambda,k_0}\overline{\phi}\,\,\mathrm{d}x \right)\\
& =& \Re\left(\int \!\!\!\int_{\mathbb{R}^{2N} }\frac{\Big(u_{\lambda,k_0}(x)-u_{\lambda,k_0}(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)\overline{\Big(\phi(x)-\phi(y)e^{i A\big(\frac{x+y}{2}\big).(x-y)}\Big)}}{|x-y|^{N+2s}}\,\mathrm{d}x\mathrm{d}y\right)\\
&&+\,\,\Re\left( \int_{\mathbb{R}^{N}}V(x)u_{\lambda,k_0}\overline{\phi}\,\mathrm{d}x-\int_{\mathbb{R}^{N}}g(|u_{\lambda,k_0}|^2)u_{\lambda,k_0}\overline{\phi}\,\mathrm{d}x - \lambda \int_{\mathbb{R}^N}|u_{\lambda,k_0}|^{q-2}u_{\lambda,k_0}\overline{\phi}\,\mathrm{d}x\right),
\end{eqnarray*}
for all $\phi \in E$. Therefore, we conclude that $u_{\lambda,k_0}$ is a solution of problem \eqref{problem} for $\lambda \in [0, \lambda_0)$ and $\Lambda \geq \Lambda_0$. This finishes the proof of Theorem \ref{theorem}.$\blacksquare$\\
\begin{remark}\label{regular}
Let us denote by $u = u_{\lambda,k}$ the solution of \eqref{problem}. Note that if $V \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$, then
$$
g(|u|^2)u-V(x)u+\lambda |u|^{q-2}u \in L^{\infty}(\mathbb{R}^N,\C),
$$
since $|u| \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$, as obtained by the Moser iteration method.
Therefore, in view of the regularity results established for the fractional Laplacian
(see \cite{bisci} or \cite{silvetre}), one expects that, under suitable regularity assumptions on $A$, it is
possible to obtain more regularity for solutions to \eqref{problem}. Next, following the arguments in \cite[Remark 2.1]{ambro4} and using fractional regularity theory, we show that $u \in C^{0,\alpha}(\mathbb{R}^N,\C)$. To this end, assume that $A \in
L^{\infty}(\mathbb{R}^N,\mathbb{R}^N)$ and $s \in (0,\frac{1}{2})$, and write $u := v+i w$, with $v,w$ real valued. Since $u$ solves \eqref{problem}, we have that
$u\in E$ satisfies the equation
\begin{equation}\label{Delta_2}
(-\Delta)_{A}^{s}u+V(x)u =g(|u|^2)u +\lambda |u|^{q-2}u, \quad \mbox{\ in \ } \mathbb{R}^N.
\end{equation}
Thus, we may deduce that $v$ and $w$ solve, respectively,
\begin{equation}\label{Delta_0}
(-\Delta)^{s}v+V(x)v=g(|u|^2)v+\lambda|u|^{q-2}v-C_{A}(u,v)
\end{equation}
and
\begin{equation}\label{Delta_00}
(-\Delta)^{s}w+V(x)w=g(|u|^2)w+\lambda|u|^{q-2}w-D_{A}(u,w)
\end{equation}
where
\begin{equation*}
C_A(u,v)(x):=\int_{\mathbb{R}^N}\frac{v(y)\big[1-\cos\big(A\big(\frac{x+y}{2}\big).(x-y)\big)\big]+w(y)\sin\big (A\big(\frac{x+y}{2}\big).(x-y)\big)}{|x-y|^{N+2s}}\,\mathrm{d}y
\end{equation*}
and
\begin{equation*}
D_A(u,w)(x):=\int_{\mathbb{R}^N}\frac{w(y)\big[1-\cos\big(A\big(\frac{x+y}{2}\big).(x-y)\big)\big]-v(y)\sin\big (A\big(\frac{x+y}{2}\big).(x-y)\big)}{|x-y|^{N+2s}}\,\mathrm{d}y.
\end{equation*}
\noindent \textit{Claim.} $C_{A}(u,v), D_{A}(u,w) \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$.\\
Using the facts that $|u|\in L^{\infty}(\mathbb{R}^N,\mathbb{R})$, $A \in L^{\infty}(\mathbb{R}^N,\mathbb{R}^N)$, $|\sin(t)|, |\cos(t)|\leq 1$, $|\sin(t)|\leq |t|$ and $|1-\cos(t)|\leq \frac{t^2}{2}$ for all $t\in \mathbb{R}$, together with $s \in(0, \frac{1}{2})$ and the coarea formula, we deduce, for each $x \in \mathbb{R}^N$,
\begin{eqnarray}
|C_{A}(u,v)(x)| &\leq& \int_{|x-y|>1}\frac{2\|v\|_{\infty}+\|w\|_{\infty}}{|x-y|^{N+2s}}\mathrm{d}y \nonumber \\
& & + \int_{|x-y|<1}\frac{\|A\|_{\infty}^2\|v\|_{\infty}}{2|x-y|^{N+2s-2}}\mathrm{d}y + \int_{|x-y|<1}\frac{\|A\|_{\infty}\|w\|_{\infty}}{|x-y|^{N+2s-1}}\mathrm{d}y \nonumber \\
&\leq& \omega_{N-1}2\|v\|_{\infty} \int_{1}^{+\infty}\frac{r^{N-1}}{r^{N+2s}}\,\mathrm{d}r+\omega_{N-1}\|w\|_{\infty} \int_{1}^{+\infty}\frac{r^{N-1}}{r^{N+2s}}\,\mathrm{d}r \nonumber \\
& & +\frac{\omega_{N-1}}{2}\|A\|_{\infty}^2\|v\|_{\infty} \int_{0}^{1}\frac{r^{N-1}}{r^{N+2s-2}}\,\mathrm{d}r + \omega_{N-1}\|A\|_{\infty}\|w\|_{\infty}\int_{0}^{1}\frac{r^{N-1}}{r^{N+2s-1}}\,\mathrm{d}r \nonumber \\
&<& M \nonumber,
\end{eqnarray}
for some $M>0$ independent of $x$, that is, $C_{A}(u,v) \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$. Similarly, one verifies that $D_{A}(u,w) \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$.
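Let us also record the values of the radial integrals used above, which make explicit where the restriction $s\in(0,\frac{1}{2})$ enters:
\begin{equation*}
\int_{1}^{+\infty}\frac{r^{N-1}}{r^{N+2s}}\,\mathrm{d}r=\frac{1}{2s}, \qquad \int_{0}^{1}\frac{r^{N-1}}{r^{N+2s-2}}\,\mathrm{d}r=\frac{1}{2-2s}, \qquad \int_{0}^{1}\frac{r^{N-1}}{r^{N+2s-1}}\,\mathrm{d}r=\frac{1}{1-2s},
\end{equation*}
the last integral being finite precisely because $2s<1$.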
Since the claim is true and $V \in L^{\infty}(\mathbb{R}^N,\mathbb{R})$, it follows from \eqref{Delta_0} and \eqref{Delta_00} that
\begin{equation}\label{C}
(-\Delta)^{s}v \in L^{\infty}(\mathbb{R}^N,\mathbb{R})
\end{equation}
and
\begin{equation}\label{D}
(-\Delta)^{s}w \in L^{\infty}(\mathbb{R}^N,\mathbb{R}).
\end{equation}
Therefore, we may invoke Proposition 2.1.9 in \cite{silvetre} to obtain that $v,w \in C^{0,\alpha}(\mathbb{R}^N,\mathbb{R})$ for any $\alpha<2s$, that is, $u\in C^{0,\alpha}(\mathbb{R}^N,\C)$.
\end{remark}
\begin{remark}\label{decay}
We can also prove that if $u_{\lambda,k}$ is a solution of \eqref{problem}, then $|u_{\lambda,k}|$ decays
at infinity. Indeed, in view of Remark \ref{regular} and since $u_{\lambda,k} \in \mathcal{D}^{s,2}_{A}(\mathbb{R}^N,\C)$, it follows from \cite{stein} that $|u_{\lambda,k}(x)| \rightarrow 0$ as $|x| \rightarrow +\infty$.
\end{remark}
\section{Introduction} \label{sec:introd}
Continuous cohomology ${\rm H}_{\rm c}^{\scriptstyle \bullet}$ is an invariant of topological groups, defined in an analogous fashion to the ordinary group cohomology, but with the additional assumption that cochains---with values in a topological group-module as coefficient space---are continuous. A basic reference in the subject is the book by Borel--Wallach \cite{BW}, while a concise survey by Stasheff \cite{Stas} provides an overview of the theory, its relations to other cohomology theories for topological groups, and some interpretations of algebraic nature for low-degree cohomology classes. \vspace{4pt}
In the case of connected, (semi)simple Lie groups and trivial real coefficients, continuous cohomology is also known to successfully detect geometric information. For example, the situation for degree two is well understood: Let $G$ be a connected, non-compact, simple Lie group with finite center. Then ${\rm H}_{\rm c}^2(G;\bbR) \neq 0$ if and only if $G$ is of Hermitian type, i.e. if its associated symmetric space of non-compact type admits a $G$-invariant complex structure. If that is the case, ${\rm H}_{\rm c}^2(G;\bbR)$ is one-dimensional, and explicit continuous 2-cocycles were produced by Guichardet--Wigner in \cite{GW} as an obstruction to extending to $G$ a homomorphism $K \to S^1$, where $K < G$ is a maximal compact subgroup. \vspace{4pt}
The goal of this note is to clarify a similar geometric interpretation of the third-degree continuous cohomology of simple Lie groups. Recall that a complex structure on a Lie algebra $\mathfrak{g}$ is a linear map $J\in {\rm End}(\mathfrak{g})$ that satisfies the identity $J^2 = -\id$ and that commutes with the adjoint representation of $\mathfrak{g}$.
\begin{thmx} \label{thm:simplefc}
Let $G$ be a connected, simple Lie group with finite center, and let $\mathfrak{g}$ be its Lie algebra. Then the following are equivalent:
\begin{enumerate}[label=\emph{(\arabic*)}]
\item ${\rm H}_{\rm c}^3(G;\bbR)\neq 0$. \vspace{1pt}
\item $\dim {\rm H}_{\rm c}^3(G;\bbR) = 1.$
\item $\mathfrak{g}$ admits a complex structure.
\item $G$ admits the structure of a complex Lie group.
\end{enumerate}
\end{thmx}
\noindent Removing the hypothesis of finite center results in the addition of only one Lie group to this collection.
\begin{thmx} \label{thm:simpleic}
For a connected, simple Lie group $G$ of infinite center, ${\rm H}_{\rm c}^3(G;\bbR) \neq 0$ if and only if $G$ is isomorphic to $\widetilde{\SL(2,\bbR)}$, the universal cover of $\SL(2,\bbR)$. In that case, $\dim {\rm H}_{\rm c}^3(G;\bbR) = 1$.
\end{thmx}
Before proceeding to the proofs of the theorems, we comment on the question of explicit 3-cocycles in the setting of \autoref{thm:simplefc}. Thus, fix $G$ as in \autoref{thm:simplefc}, and assume that the equivalent conditions hold. Let $J$ be a complex structure on the Lie algebra $\mathfrak{g}$ of $G$, and regard it as a complex Lie algebra. Let
\begin{itemize}
\item $\mathfrak{k} \subset \mathfrak{g}$ be a compact real form of $\mathfrak{g}$,
\item $B_\mathfrak{g}$ be the Killing form of $\mathfrak{g}$, and
\item $K$ the connected subgroup of $G$ with Lie algebra $\mathfrak{k}$.
\end{itemize}
Then $\mathfrak{g} = \mathfrak{k} \oplus J\!\mathfrak{k}$ is a Cartan decomposition, and $\mathfrak{k}$ is simple; see \autoref{thm:cartandecJ} below for a reference. The subgroup $K$ is maximal compact in $G$, and $G/K$ is a symmetric space of non-compact type with ${\rm T}_K(G/K) \cong \mathfrak{g}/\mathfrak{k} \cong J\!\mathfrak{k}$. Let $(\Lambda^{\scriptstyle \bullet}(\mathfrak{g}/\mathfrak{k})^\ast)^\mathfrak{k}$ denote the complex of $\mathfrak{k}$-invariant, alternating, multilinear forms on $\mathfrak{g}/\mathfrak{k}$, and $\Omega^{\scriptstyle \bullet}(G/K)^G$ be the complex of $G$-invariant differential forms on $G/K$. Left-translation of an element of the former gives rise to an element of the latter, and this assignment is an isomorphism. \vspace{4pt}
It is a consequence of van Est's theorem that there is an isomorphism ${\rm H}_{\rm c}^3(G;\bbR) \cong (\Lambda^3(J\!\mathfrak{k})^\ast)^\mathfrak{k}$; use \autoref{thm:fc} with $\mathfrak{p} = J\!\mathfrak{k}$. The formula
\begin{equation} \label{eq:cocycle}
\omega(X,Y,Z) := B_\mathfrak{g}(X,J[Y,Z]), \qquad X,Y,Z \in J\!\mathfrak{k},
\end{equation}
defines a non-zero element $\omega \in (\Lambda^3(J\!\mathfrak{k})^\ast)^\mathfrak{k}$. Let $\tilde{\omega} \in \Omega^3(G/K)^G$ be the corresponding 3-form. Integrating it over ``geodesic 3-simplices'' produces a non-trivial, $G$-invariant continuous 3-cocycle $I_{\omega}: G^4 \to \bbR$. \vspace{4pt}
More precisely: Fix a base point $o \in G/K$. For any $(k+1)$-tuple $(g_0,\ldots,g_k) \in G^{k+1}$, consider the \emph{geodesic $k$-simplex} $\Delta(g_0,\ldots,g_k) \subset G/K$, defined inductively as follows: let $\Delta(g_0) := \{g_0 \cdot o\}$, and for $k > 0$, set $\Delta(g_0,\ldots,g_k)$ to be the union of the geodesics connecting $g_k \cdot o$ to each point in $\Delta(g_0,\ldots,g_{k-1})$. It is not hard to verify that the expression
\begin{equation} \label{eq:integral}
I_{\omega}(g_0,\ldots,g_3) := \int_{\Delta(g_0,\ldots,g_3)} \tilde{\omega}
\end{equation}
is a well-defined $G$-invariant continuous 3-cocycle. Finally, its non-triviality follows from the fact, by Dupont \cite{Dup}, that integration over simplices realizes van Est's isomorphism at the level of cochains. \vspace{4pt}
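As a numerical sanity check of \eqref{eq:cocycle} (an aside that is not part of the argument; we take $B$ only proportional to the Killing form of the realification, which suffices since invariance and non-triviality are scale-independent), consider $\mathfrak{g} = \fsl(2,\mathbb{C})$ with $\mathfrak{k} = \fsu(2)$ and $J$ given by multiplication by $i$, so that $J\!\mathfrak{k}$ is the space of traceless Hermitian matrices, spanned by the Pauli matrices:

```python
import itertools
import numpy as np

# Pauli matrices: a basis of Jk = i*su(2), the traceless Hermitian matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Jk = [s1, s2, s3]
k = [1j * s for s in Jk]          # a basis of the compact real form su(2)

def bracket(A, B):
    return A @ B - B @ A

def omega(X, Y, Z):
    # omega(X,Y,Z) = B(X, J[Y,Z]) with J = multiplication by i and
    # B(X,Y) = Re tr(XY), proportional to the Killing form.
    return np.trace(X @ (1j * bracket(Y, Z))).real

# omega is alternating ...
for X, Y, Z in itertools.product(Jk, repeat=3):
    assert abs(omega(X, Y, Z) + omega(Y, X, Z)) < 1e-12
    assert abs(omega(X, Y, Z) + omega(X, Z, Y)) < 1e-12

# ... and k-invariant: omega([W,X],Y,Z) + omega(X,[W,Y],Z) + omega(X,Y,[W,Z]) = 0.
for W in k:
    for X, Y, Z in itertools.product(Jk, repeat=3):
        inv = (omega(bracket(W, X), Y, Z) + omega(X, bracket(W, Y), Z)
               + omega(X, Y, bracket(W, Z)))
        assert abs(inv) < 1e-12

print(omega(s1, s2, s3))          # -4.0: the form is non-zero
```

The printed value $-4.0$ confirms that $\omega$ does not vanish, in agreement with the one-dimensionality of ${\rm H}_{\rm c}^3(\SL(2,\mathbb{C});\bbR)$. \vspace{4pt}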
\begin{rem}
Concerning \autoref{thm:simpleic}, the group $G := \widetilde{\SL(2,\bbR)}$ has trivial maximal compact subgroup $M$. Thus, van Est's theorem (\autoref{thm:vanest} below) yields an isomorphism ${\rm H}_{\rm c}^3(G;\bbR) \cong \HH^3(\Omega^{\scriptstyle \bullet}(G)^G)$. An obvious $G$-invariant 3-form of $G$ is its volume form. Arguing as with \eqref{eq:integral}, we conclude that the volume of 3-simplices is a non-trivial $G$-invariant continuous 3-cocycle of $G$.
\end{rem}
We point out that the expression obtained by removing the $J$ in \eqref{eq:cocycle} is known to define a generating class of $\HH^3(\mathfrak{k};\bbR)$ and of the de Rham cohomology group $\HH^3(K;\bbR)$; see the reference \cite{GHV}. However, we have found no account in the literature of the formula \eqref{eq:cocycle} nor of the statements of our two theorems. This was in fact the main motivation for writing this note. \vspace{4pt}
The rest of this paper contains proofs of Theorems \ref{thm:simplefc} and \ref{thm:simpleic}. They do not rely on the classification of simple Lie groups. \vspace{7pt}
\noindent {\bf Acknowledgments:} The author expresses his gratitude to his PhD advisor Marc Burger for suggesting the question of the characterization that led to \autoref{thm:simplefc}, and for many helpful discussions. He further thanks Tobias Hartnick and Maria Beatrice Pozzetti for their valuable comments and suggestions.
\section{Notation and background}
\subsection{Notation} Whenever there is no explicit mention of coefficients when using any notion of cohomology, it should be understood for the rest of this paper that they are trivial $\bbR$-coefficients. The functor ${\rm H}_{\rm c}^{\scriptstyle \bullet}$ refers to the continuous cohomology of topological groups. On the other hand, when applied to manifolds (including Lie groups), $\HH^{\scriptstyle \bullet}$ denotes their cohomology as spaces. \vspace{4pt}
If $\mathfrak{g}$ is a Lie algebra and $\mathfrak{m} \subset \mathfrak{g}$ is any subalgebra, then $(\Lambda^{\scriptstyle \bullet}(\mathfrak{g}/\mathfrak{m})^\ast)^\mathfrak{m}$ will denote the complex of $\mathfrak{m}$-invariant, alternating, multilinear forms on $\mathfrak{g}/\mathfrak{m}$; for the definition of the $\mathfrak{m}$-action on $\mathfrak{g}/\mathfrak{m}$ and of the coboundary operator, we refer the reader to Chapter 1 of \cite{BW}. The cohomology of this complex, denoted by $\HH^{\scriptstyle \bullet}(\mathfrak{g},\mathfrak{m})$ is known as the \emph{Lie algebra cohomology of $\mathfrak{g}$ relative to $\mathfrak{m}$}. If $\mathfrak{m}$ is trivial, we will write $\HH^{\scriptstyle \bullet}(\mathfrak{g})$ instead of $\HH^{\scriptstyle \bullet}(\mathfrak{g},\mathfrak{m})$. \vspace{4pt}
If $G$ is a connected Lie group and $M < G$ is a closed subgroup, then $\Omega^{\scriptstyle \bullet}(G/M)^G$ denotes the complex of $G$-invariant differential forms on $G/M$, where the coboundary operator is the usual differential for forms, and the $G$-action on forms on $G/M$ is by left-translation. We denote the cohomology of this complex by $\HH^{\scriptstyle \bullet}_G(G/M)$; it is known as the \emph{$G$-invariant de Rham cohomology of $G/M$}. \vspace{4pt}
\subsection{Background on continuous cohomology of Lie groups} A powerful tool for computing the continuous cohomology of a connected Lie group $G$ is \emph{van Est's theorem}; see the original references by van Est \cite{vEst} and Hochschild--Mostow \cite{HM}. We quote it from \cite[Corollary IX.5.6]{BW}.
\begin{thm}[van Est, Hochschild--Mostow] \label{thm:vanest}
Let $G$ be a connected Lie group with Lie algebra $\mathfrak{g}$, and $M$ be a maximal compact (connected) subgroup of $G$ with Lie algebra $\mathfrak{m} \subset \mathfrak{g}$. Then
\begin{equation} \label{eq:vanest}
{\rm H}_{\rm c}^{\scriptstyle \bullet}(G) \cong \HH^{\scriptstyle \bullet}_G(G/M) \cong \HH^{\scriptstyle \bullet}(\mathfrak{g},\mathfrak{m}). \vspace{10pt}
\end{equation}
\end{thm}
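\noindent To illustrate the theorem with a standard example (recorded here only for convenience): for $G = \SL(2,\bbR)$ and $M = {\rm SO}(2,\bbR)$, the quotient $G/M$ is the hyperbolic plane. An $M$-invariant alternating form on $\mathfrak{g}/\mathfrak{m} \cong \bbR^2$ exists only in degrees $0$ and $2$, since a rotation by an angle that is not a multiple of $\pi$ fixes no non-zero covector, while it acts trivially on $\Lambda^2(\bbR^2)^\ast$. Hence $\Omega^{\scriptstyle \bullet}(G/M)^G$ is spanned by the constants and the hyperbolic area form, and \eqref{eq:vanest} yields ${\rm H}_{\rm c}^n(\SL(2,\bbR)) \cong \bbR$ for $n \in \{0,2\}$ and $0$ otherwise, in accordance with the Hermitian-type criterion recalled in the introduction. \vspace{4pt}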
For the rest of this section, let $G$ be a non-compact, connected, semisimple\footnote{For background in semisimple Lie groups and Lie algebras, we refer the reader to \cite{Hel}.} Lie group with Lie algebra $\mathfrak{g}$, and let $M < G$ be a maximal compact subgroup with Lie algebra $\mathfrak{m}$. Let us fix:
\begin{itemize}
\item a maximal compactly embedded subalgebra $\mathfrak{k} \subset \mathfrak{g}$ such that $\mathfrak{m} \subset \mathfrak{k}$,
\item an associated Cartan decomposition $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ of $\mathfrak{g}$,
\item the connected subgroup $K$ of $G$ with Lie algebra $\mathfrak{k}$,
\item the compact dual $\mathfrak{g}_u := \mathfrak{k} \oplus i\mathfrak{p}$ of $\mathfrak{g}$,
\item the \emph{compact dual} $G_u$ of $G$, i.e. the 1-connected, compact, semisimple Lie group with Lie algebra $\mathfrak{g}_u$,
\item the connected subgroup $M_u$ of $G_u$ with Lie algebra $\mathfrak{m}$, and
\item the connected subgroup $K_u$ of $G_u$ with Lie algebra $\mathfrak{k}$.
\end{itemize}
\begin{rem} \label{rem:Kclosed}
Any subgroup $K' < G$ with Lie algebra $\mathfrak{k}$ is automatically connected and closed in $G$, and contains the center of $G$. Furthermore, $K'$ is compact if and only if the center of $G$ is finite. If that is the case, $K'$ is a maximal compact subgroup of $G$, and $G/K'$ is a symmetric space of non-compact type. This is the content of Theorem VI.1.1 in \cite{Hel}.
\end{rem}
\begin{rem} \label{rem:Kuclosed}
The connected subgroup $K_u < G_u$ above is necessarily closed in $G_u$. This and the 1-connectedness of $G_u$ imply that $G_u/K_u$ is a symmetric space of compact type. This follows from Proposition IV.3.6 in \cite{Hel}. \vspace{4pt}
\end{rem}
A second computational tool is the next theorem of Chevalley--Eilenberg \cite{ChEil}, whose main ideas they attribute to Cartan. \vspace{-1pt}
\begin{thm}[Cartan, Chevalley--Eilenberg] \label{thm:cartan}
If $M_u < G_u$ is closed, then $\HH^{\scriptstyle \bullet}(\mathfrak{g}_u,\mathfrak{m}) \cong \HH^{\scriptstyle \bullet}(G_u/M_u)$.
\end{thm}
\noindent It is not hard to observe that there is an isomorphism $\HH^{\scriptstyle \bullet}(\mathfrak{g},\mathfrak{m}) \cong \HH^{\scriptstyle \bullet}(\mathfrak{g}_u,\mathfrak{m})$. Combining it with \eqref{eq:vanest}, we obtain:
\begin{cor} \label{thm:corollary_cartan}
If $M_u < G_u$ is closed, then ${\rm H}_{\rm c}^{\scriptstyle \bullet}(G) \cong \HH^{\scriptstyle \bullet}(G_u/M_u)$. \vspace{5pt}
\end{cor}
\noindent \textbf{The case of finite center.} We impose now the additional assumption that $G$ has a finite center. Then, by \autoref{rem:Kclosed}, the subgroup $K<G$ fixed above is maximal compact. In particular,
\begin{equation} \label{eq:fc}
M=K, \quad \mathfrak{m}=\mathfrak{k} \quad \mathrm{and} \quad M_u = K_u.
\end{equation}
Moreover, by the same remark, $G/K$ is a symmetric space of non-compact type. It is a theorem by Cartan that invariant differential forms on a symmetric space are automatically closed. Hence:
\begin{cor} \label{thm:fc}
Under the assumption of finite center of $G$, there exist isomorphisms
\[
{\rm H}_{\rm c}^{\scriptstyle \bullet}(G) \cong \Omega^{\scriptstyle \bullet}(G/K)^G \cong (\Lambda^{\scriptstyle \bullet} \mathfrak{p}^\ast)^\mathfrak{k} \cong (\Lambda^{\scriptstyle \bullet} (i\mathfrak{p})^\ast)^\mathfrak{k} \cong \HH^{\scriptstyle \bullet}(G_u/K_u). \vspace{8pt}
\]
\end{cor}
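\noindent For instance (a standard computation, recorded here only as an illustration), take $G = \SL(2,\mathbb{C})$ with $K = \SU(2)$. Then $\mathfrak{g}_u \cong \fsu(2) \oplus \fsu(2)$, the compact dual $G_u$ is $\SU(2) \times \SU(2)$, and $K_u$ is the diagonally embedded copy of $\SU(2)$, so that $G_u/K_u$ is diffeomorphic to $\SU(2) \cong S^3$. \autoref{thm:fc} then gives ${\rm H}_{\rm c}^n(\SL(2,\mathbb{C})) \cong \HH^n(S^3)$, which is one-dimensional for $n \in \{0,3\}$ and vanishes otherwise, consistently with \autoref{thm:simplefc}. \vspace{4pt}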
\section{Proof of \autoref{thm:simplefc}}
The equivalence (3) $\Leftrightarrow$ (4) is a classical fact that holds for any connected Lie group, and goes back to Newlander--Nirenberg. Thus, we will only prove the equivalence (1) $\Leftrightarrow$ (2) $\Leftrightarrow$ (3). A first reduction is provided by the following lemma.
\begin{lem}
If $G$ is a compact, connected, simple Lie group (hence with finite center), then none of the statements \emph{(1)-(4)} in \autoref{thm:simplefc} hold.
\end{lem}
\begin{proof}
If $G$ satisfied (4), then by connectedness and compactness, $G$ would be isomorphic to a quotient $\mathbb{C}^n/\Lambda$, where $\Lambda < \mathbb{C}^n$ is a lattice. In particular, $G$ would be Abelian, contradicting the assumption of simplicity. On the other hand, properties (1) and (2) fail because by \autoref{thm:vanest}, the continuous cohomology of a compact Lie group vanishes in any positive degree.
\end{proof}
Hence, from now on in this section, let $G$ be a \emph{non-compact}, connected, simple Lie group with finite center, and let $\mathfrak{g}$ be its Lie algebra. We argue now in the order (3) $\Rightarrow$ (2) $\Rightarrow$ (1) $\Rightarrow$ (3), where (2) $\Rightarrow$ (1) is evident.
\subsection{Proof of (3) $\Rightarrow$ (2)}
Assume that $\mathfrak{g}$ admits a complex structure $J$. Then:
\begin{thm} \label{thm:cartandecJ}
Any compact real form $\mathfrak{k}$ of $\mathfrak{g}$ is simple, maximal compactly embedded in $\mathfrak{g}$, and
\begin{equation} \label{eq:cartandec}
\mathfrak{g} = \mathfrak{k} \oplus J\!\mathfrak{k}
\end{equation}
is a Cartan decomposition of $\mathfrak{g}$.
\end{thm}
\begin{proof}[About the proof]
The fact that $\mathfrak{k}$ is maximal compactly embedded and that the decomposition above is a Cartan decomposition is Corollary III.7.5 of \cite{Hel}. The simplicity of $\mathfrak{k}$ is part of the proof of Theorem VIII.5.4 in \cite{Hel}.
\end{proof}
Fix a compact real form $\mathfrak{k}$ of $\mathfrak{g}$. By \autoref{thm:fc} (with $\mathfrak{p} = J\!\mathfrak{k}$), there exist isomorphisms ${\rm H}_{\rm c}^3(G) \cong (\Lambda^3 (J\!\mathfrak{k})^\ast)^\mathfrak{k} \cong (\Lambda^3 \mathfrak{k}^\ast)^\mathfrak{k}$. Let $({\rm V}^2 \mathfrak{k}^\ast)^\mathfrak{k}$ denote the space of $\mathfrak{k}$-invariant, symmetric bilinear forms on $\mathfrak{k}$. We make use of the following fact:
\begin{prop}
The assignment $\Phi: ({\rm V}^2 \mathfrak{k}^\ast)^\mathfrak{k} \to (\Lambda^3 \mathfrak{k}^\ast)^\mathfrak{k}$, defined by
\[
\Phi_B(X,Y,Z) = B(X,[Y,Z]), \quad \mbox{for } B \in ({\rm V}^2 \mathfrak{k}^\ast)^\mathfrak{k} \mbox{ and } X,Y,Z \in \mathfrak{k},
\]
is a linear isomorphism.
\end{prop}
\begin{proof}[About the proof]
This is Proposition I in Section 5.7 of \cite{GHV}. The statement holds if $\mathfrak{k}$ is any compact, simple Lie algebra.
\end{proof}
\vspace{0.2cm}
We conclude the proof of the implication by pointing out that the space $({\rm V}^2 \mathfrak{k}^\ast)^\mathfrak{k}$ has dimension one because of the simplicity of $\mathfrak{k}$; a generator of it is the Killing form of $\mathfrak{k}$. A proof of this fact can be found, for example, in the discussion at the beginning of VIII.\S 5 of \cite{Hel}.
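This one-dimensionality can also be confirmed by a direct numerical computation for $\mathfrak{k} = \fsu(2)$; the following snippet (an illustration only, not part of the argument) computes the dimension of the space of invariant symmetric bilinear forms as the nullity of a linear system:

```python
import numpy as np

# ad-matrices of k = su(2) in a basis e_1, e_2, e_3 with [e_1,e_2] = e_3 (cyclically).
ad = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float),
      np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float),
      np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)]

# Basis of the 6-dimensional space of symmetric bilinear forms on k.
sym = []
for p in range(3):
    for q in range(p, 3):
        E = np.zeros((3, 3))
        E[p, q] = E[q, p] = 1.0
        sym.append(E)

# Invariance B([W,X],Y) + B(X,[W,Y]) = 0 reads B ad_W + ad_W^T B = 0,
# i.e. [B, ad_W] = 0, since each ad_W is antisymmetric.
M = np.vstack([np.array([(E @ A - A @ E).ravel() for E in sym]).T for A in ad])
null_dim = len(sym) - np.linalg.matrix_rank(M)
print(null_dim)   # 1: invariant symmetric forms on su(2) are the multiples of the Killing form
```

\vspace{4pt}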
\vspace{0.6cm}
\subsection{Proof of (1) $\Rightarrow$ (3)}
Assume now that $\mathfrak{g}$ does not admit a complex structure. We adopt the same notation of \eqref{eq:fc} and \autoref{thm:fc}, and fix:
\begin{itemize}
\item a maximal compact subgroup $K < G$,
\item its Lie algebra $\mathfrak{k} \subset \mathfrak{g}$,
\item an associated Cartan decomposition $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ of $\mathfrak{g}$,
\item the compact dual $\mathfrak{g}_u := \mathfrak{k} \oplus i\mathfrak{p}$ of $\mathfrak{g}$,
\item the compact dual $G_u$ of $G$, and
\item the connected subgroup $K_u$ of $G_u$ with Lie algebra $\mathfrak{k}$. \vspace{10pt}
\end{itemize}
The starting point for the proof of this implication is the following theorem.
\begin{thm} \label{thm:gu_simple}
\hspace{-0.7cm} \begin{minipage}[t]{12.9cm}
\vspace{-8pt} \begin{enumerate}[label=\emph{(\roman*)}]
\item The compact dual $\mathfrak{g}_u$ is simple.
\item The symmetric space of compact type $G_u/K_u$ is irreducible. In particular, the action of ${\rm Ad}(K_u)$ on the vector space $i\mathfrak{p}$ is irreducible.
\end{enumerate}
\end{minipage}
\end{thm}
\begin{proof}[About the proof]
Part (i) is a combination of Theorem VIII.5.3 and Theorem V.2.4 of \cite{Hel}. Part (ii) is a consequence of Theorem VIII.5.3 of \cite{Hel}; see also the Definition at the beginning of VIII.\S 5 of \cite{Hel}.
\end{proof}
\begin{rem}
Note that (i) does not hold for Lie algebras that do admit a complex structure. For example, the compact dual of $\fsl(2,\mathbb{C})$ is the Lie algebra $\fsu(2) \oplus \fsu(2) \cong \fso(4,\bbR)$. \vspace{4pt}
\end{rem}
By \autoref{thm:fc}, we have ${\rm H}_{\rm c}^3(G) \cong \HH^3(G_u/K_u)$. Thus it suffices to show that $\HH^3(G_u/K_u)$ vanishes. We distinguish two cases: \vspace{6pt}
\noindent \textbf{Case 1: $\mathfrak{k}$ is Abelian.} The following proposition establishes the claim.
\begin{prop} \label{thm:abelian}
The symmetric space $G_u/K_u$ is diffeomorphic to the 2-dimensional sphere $S^2$.
\end{prop}
\begin{proof}
The group $K_u$ is Abelian; because it is a non-trivial torus, it contains an element $j$ of order four. The image of $j$ under the adjoint representation ${\rm Ad}_{G_u}(j)|_{i\mathfrak{p}}$ is a complex structure on the real vector space $i\mathfrak{p}$. Thus, we regard $i\mathfrak{p}$ now as a $\mathbb{C}$-vector space. By the commutativity of $K_u$, the ${\rm Ad}(K_u)$-action on $i\mathfrak{p}$ is $\mathbb{C}$-linear. By (ii) of \autoref{thm:gu_simple} and Schur's lemma, we obtain that $\dim_\mathbb{C} i\mathfrak{p} = 1$. In consequence,
\[
\dim(G_u/K_u) = \dim_\bbR i\mathfrak{p}= 2 \dim_\mathbb{C} i\mathfrak{p} = 2.
\]
Since $G_u$ is 1-connected and $K_u$ is connected, the space $G_u/K_u$ is 1-connected, and we conclude by the classification of surfaces.
\end{proof}
\vspace{0.2cm}
\noindent \textbf{Case 2: $\mathfrak{k}$ is non-Abelian.} The claim follows from the next proposition with $U = G_u$ and $L = K_u$.
\begin{prop} \label{thm:nonabelian}
Let $U$ be a 1-connected, compact, simple Lie group, and $L < U$ be a connected, closed, non-Abelian subgroup. Then $\HH^3(U/L) = 0$.
\end{prop}
\noindent We point out that $U/L$ need not be a symmetric space. We give a proof of this proposition in the next subsection. \vspace{0.4cm}
\subsection{Proof of \autoref{thm:nonabelian}} We will need three facts about the topology of Lie groups.
\begin{thm}[Weyl] \label{thm:weyl}
The universal covering group of a compact semisimple Lie group is compact. In particular, any Lie group with a compact, semisimple Lie algebra is compact.
\end{thm}
\begin{proof}[About the proof]
A proof of this theorem is found in \cite{Hel} as Theorem II.6.9.
\end{proof}
\vspace{0.1cm}
\begin{thm}[Bott] \label{thm:pi23}
For any connected Lie group $H$, the second homotopy group $\pi_2(H)$ vanishes. If, moreover, $H$ is compact and simple, then $\pi_3(H) \cong \bbZ$.
\end{thm}
\begin{proof}[About the proof]
It is a consequence of the fact, due to Bott, that the loop space of a compact, 1-connected Lie group has the homotopy type of a CW-complex with no odd-dimensional cells and finitely many cells of each even dimension. A proof of this fact is completely Morse-theoretical; see Theorem 21.6 in \cite{Mil}. Based on it, one can conclude as indicated in \cite{MO}.
\end{proof}
\vspace{0.2cm}
Before quoting the third fact, we give the definition of the \emph{Dynkin index} of a homomorphism between two compact, connected, simple Lie groups, as found in \cite{Oni}:
\begin{defn*}
Let $G_1$ and $G_2$ be compact, connected, simple Lie groups with Lie algebras $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively, and let $\alpha_i \in\mathfrak{h}_i^\ast$ be a root of maximal length of $\mathfrak{g}_i$ with respect to a Cartan subalgebra $\mathfrak{h}_i$ ($i=1,2$). Moreover, let $B_i \in ({\rm V}^2 \mathfrak{g}_i^\ast)^{\mathfrak{g}_i}$ be a negative-definite, $\mathfrak{g}_i$-invariant bilinear form\footnote{As mentioned in the proof of the implication (3) $\Rightarrow$ (2) of \autoref{thm:simplefc}, $\dim ({\rm V}^2 \mathfrak{g}_i^\ast)^{\mathfrak{g}_i} = 1$, so the forms $B_i$ are uniquely determined up to a positive constant.} on $\mathfrak{g}_i$ normalized in such a way that the square of the length of the root $\alpha_i$ with respect to the associated inner product on $\mathfrak{h}_i^\ast$ equals two. If $\varphi: G_1 \to G_2$ is a homomorphism, then there exists a non-negative real number $j_\varphi$, called the \emph{Dynkin index} of $\varphi$, such that $B_2(\mathrm{d}\varphi \, X, \mathrm{d}\varphi \, Y) = j_\varphi \cdot B_1(X,Y)$ for all $X, Y \in \mathfrak{g}_1$.
\end{defn*}
\begin{thm} \label{thm:dynkinindex}
Let $\varphi: G_1 \to G_2$ be a homomorphism between two compact, connected, simple Lie groups $G_1$ and $G_2$. Then:
\begin{enumerate}[label=(\roman*)]
\item The Dynkin index $j_\varphi$ is a non-negative integer. It is equal to zero if and only if $\varphi$ is the homomorphism that maps every element to the identity.
\item If $\pi_3(G_i) = \langle \epsilon_i \rangle$ for $i=1,2$ (see \autoref{thm:pi23}), then $\varphi_\# \, \epsilon_1 = \pm j_\varphi \, \epsilon_2$.
\end{enumerate}
\end{thm}
\begin{proof}[About the proof]
Part (i) is Proposition 11, \S 3 of \cite{Oni}, and (ii) is Theorem 2, \S 17 of the same reference.
\end{proof}
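As a concrete instance of part (i) (a numerical aside; for $\fsu(n)$, the trace form ${\rm Re}\,{\rm tr}(XY)$ carries the normalization of the definition above), consider the homomorphism $\SU(2) \to \SU(3)$ integrating the three-dimensional adjoint representation of $\fsu(2)$, which lands in ${\rm SO}(3,\bbR) < \SU(3)$. Its Dynkin index is the non-zero integer $4$:

```python
import numpy as np

# A basis e_1, e_2, e_3 of su(2) with [e_1,e_2] = e_3 (cyclically).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e = [-0.5j * s for s in (s1, s2, s3)]

# Its image f_k = ad(e_k) under the adjoint (spin-1) representation,
# a copy of so(3) inside su(3).
f = [np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=complex),
     np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex),
     np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)]

def bracket(A, B):
    return A @ B - B @ A

# e_k -> f_k is a Lie algebra homomorphism: both triples satisfy the
# same cyclic bracket relations.
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(bracket(e[a], e[b]), e[c])
    assert np.allclose(bracket(f[a], f[b]), f[c])

# Dynkin index: ratio of the pulled-back trace form to the trace form.
B1 = np.array([[np.trace(x @ y).real for y in e] for x in e])
B2 = np.array([[np.trace(x @ y).real for y in f] for x in f])
j = B2[0, 0] / B1[0, 0]
assert np.allclose(B2, j * B1)
print(j)   # 4.0
```

\vspace{4pt}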
\begin{proof}[Proof of \autoref{thm:nonabelian}]
We will prove that the integral homology group $\HH_3(U/L;\bbZ)$ is finite; by the universal coefficient theorem, this yields $\HH^3(U/L)=0$. Because $U$ is 1-connected and $L$ is connected, the quotient $U/L$ is 1-connected. Therefore, the Hurewicz homomorphism $h:\pi_3(U/L) \rightarrow \HH_3(U/L;\bbZ)$ in degree three is surjective. The claim follows after proving finiteness of $\pi_3(U/L)$. \vspace{4pt}
Consider now the long exact sequence in homotopy
\[
\cdots \: \rightarrow \pi_3(L) \xrightarrow{\iota_{\#}} \pi_3(U) \xrightarrow{\pi_{\#}} \pi_3(U/L) \rightarrow \pi_2 (L) \rightarrow \cdots
\]
of the fibration $L \overset{\iota}{\hookrightarrow} U \overset{\pi}{\twoheadrightarrow} U/L$. By \autoref{thm:pi23} and exactness, the homomorphism $\pi_{\#}$ is surjective and $\pi_3(U/L) \cong \pi_3(U)/{\rm im} \, \iota_{\#} \cong \bbZ/{\rm im} \, \iota_\#$. Therefore, $\pi_3(U/L)$ is finite if and only if the image of $\iota_{\#}$ is non-trivial. \vspace{4pt}
Let $\mathfrak{l}$ be the Lie algebra of $L$, hence compact and non-Abelian. In particular, $\mathfrak{l}$ splits as the direct sum of its center and a non-trivial semisimple ideal. Thus, let $\mathfrak{l}_1$ be a simple ideal of $\mathfrak{l}$, and let $L_1$ be the 1-connected, simple Lie group with Lie algebra $\mathfrak{l}_1$, which is compact by \autoref{thm:weyl}. Now let $\phi: L_1 \rightarrow L$ be the unique Lie group homomorphism whose derivative is the inclusion $\mathfrak{l}_1 \hookrightarrow \mathfrak{l}$. Again by \autoref{thm:pi23}, we know that $\pi_3(L_1) \cong \bbZ$. We obtain the following diagram in homotopy: \vspace{-3pt}
\[
\xymatrix{\pi_3(L) \ar[r]^{\iota_{\#}} & \pi_3(U) \cong \bbZ \\
\bbZ \cong \pi_3(L_1) \qquad \ar[u]^{\phi_{\#}} \ar@{-->}[ur]_{\iota_{\#} \circ \phi_{\#} =: \psi_\#} & & & \:}
\]
Set $\psi:=\iota \circ \phi$. Obviously, the image of $\psi_{\#}=\iota_{\#} \circ \phi_{\#}$ is contained in the image of $\iota_{\#}$. Hence, it suffices to show that the former one is non-trivial in order to prove that so is the latter. \vspace{4pt}
To conclude, note that $\psi$ is an immersion, since its derivative is injective. In particular, it is not the homomorphism that maps every element of $L_1$ to the identity of $U$. Thus, the Dynkin index $j_\psi$ of $\psi$ is not zero by \autoref{thm:dynkinindex} (i). Furthermore, by \autoref{thm:dynkinindex} (ii), the generator $\epsilon_{L_1}$ gets mapped by $\psi_{\#}$ to $\pm j_\psi \, \epsilon_U$, so ${\rm im} \, \psi_{\#} \cong j_{\psi} \bbZ \neq \{0\}$.
\end{proof}
\vspace{0.1cm}
\section{Proof of \autoref{thm:simpleic}}
Let $G$ be a connected, simple Lie group with infinite center $Z$ and Lie algebra $\mathfrak{g}$. We set $G_0 := G/Z$, which is a connected, center-free Lie group with Lie algebra $\mathfrak{g}$, and let $p: G \twoheadrightarrow G_0$ be the canonical projection. In addition, let
\begin{itemize}
\item $\mathfrak{k}$ be a maximal compactly embedded subalgebra of $\mathfrak{g}$, which splits as the direct sum $\mathfrak{k} = \mathfrak{z}(\mathfrak{k}) \oplus \mathfrak{m}$ of its center $\mathfrak{z}(\mathfrak{k})$ and a semisimple or trivial ideal $\mathfrak{m}$;
\item $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ be a Cartan decomposition of $\mathfrak{g}$,
\item $\mathfrak{g}_u = \mathfrak{k} \oplus i\mathfrak{p}$ be the compact dual of $\mathfrak{g}$,
\item $K_0 < G_0$ be the connected subgroup with Lie algebra $\mathfrak{k}$,
\item $K := p^{-1}(K_0)$,
\item $G_{0,u}$ denote the compact dual of $G_0$,
\item $K_{0,u}$ be the connected subgroup of $G_{0,u}$ with Lie algebra $\mathfrak{k}$, and
\item $G_u$ be the compact dual of $G$.
\end{itemize}
Note first that $\mathfrak{g}$ does not admit a complex structure, because a connected, simple, complex Lie group has finite center. In consequence, by \autoref{thm:gu_simple}, the Lie algebra $\mathfrak{g}_u$ and the compact dual $G_{0,u}$ are simple. Furthermore, being a cover of $K_0$, the Lie group $K$ also has Lie algebra $\mathfrak{k}$. By \autoref{rem:Kclosed}, $K$ is connected, closed and non-compact in $G$. The non-compactness of $K$ implies that $\mathfrak{z}(\mathfrak{k})$ cannot be trivial: if that were the case, then $\mathfrak{k} = \mathfrak{m} \neq 0$, and $K$ would be semisimple, contradicting \autoref{thm:weyl}. \vspace{4pt}
We distinguish again two cases: $\mathfrak{k}$ Abelian and $\mathfrak{k}$ non-Abelian. We will show that the former assumption corresponds to the situation in which $G$ is isomorphic to $\widetilde{\SL(2,\bbR)}$, and then show that ${\rm H}_{\rm c}^3(G)$ is one-dimensional in that case. Then, we show that the latter implies vanishing of ${\rm H}_{\rm c}^3(G)$. \vspace{0.4cm}
\noindent \textbf{Case 1: $\mathfrak{k}$ is Abelian.} We are exactly in the situation of \autoref{thm:abelian}. Thus, the symmetric space of compact type $G_{0,u}/K_{0,u}$ is diffeomorphic to $S^2$. Then, via duality, $G_0/K_0$ is diffeomorphic to the hyperbolic plane $H^2$, $G_0 \cong \PSL(2,\bbR)$, and $G \cong \widetilde{\SL(2,\bbR)}$, being the only infinite cover of $G_0$. \vspace{4pt}
Before showing that the dimension of ${\rm H}_{\rm c}^3(G)$ equals one, we prove the following lemma, which will also be useful in the case of $\mathfrak{k}$ non-Abelian.
\begin{lem} \label{thm:Mmaxcpt} If $\dim \mathfrak{z}(\mathfrak{k}) = 1$, then the connected subgroup $M < G$ with Lie algebra $\mathfrak{m}$ is maximal compact in $G$.
\end{lem}
\begin{proof}
By the semisimplicity of $\mathfrak{m}$ and \autoref{thm:weyl}, the Lie subgroup $M < G$ is compact. Since $K$ is non-compact, the connected Lie subgroup $R < K$ with Lie algebra $\mathfrak{z}(\mathfrak{k})$ must be non-compact as well. The dimension assumption on $\mathfrak{z}(\mathfrak{k})$ implies that $R$ is isomorphic to $\bbR$. \vspace{4pt}
Note that the intersection $R \cap M$ is a compact subgroup of $R \cong \bbR$, hence trivial. From the decomposition $\mathfrak{k} = \mathfrak{z}(\mathfrak{k}) \oplus \mathfrak{m}$ and the previous fact, we obtain an isomorphism $K \cong R \times M$. This implies that $M$ is the unique maximal compact subgroup of $K$. It is in fact a maximal compact subgroup of $G$: If $L <G$ is a compact subgroup containing $M$, then there exists an element $g \in G$ such that $gLg^{-1} < K$. Consequently, $gMg^{-1} < gLg^{-1} < M$. Both inclusions are in fact equalities: $gMg^{-1}$ and $M$ are connected subgroups of $K$ with the same Lie algebra, so $gMg^{-1} = M$, hence $gLg^{-1} = M$ and $L = g^{-1}Mg = M$.
\end{proof}
Note that in our case $\mathfrak{z}(\mathfrak{k}) = \mathfrak{k} \cong \fso(2,\bbR)$, which is one-dimensional, and $\mathfrak{m}$ is trivial. Hence, the connected subgroup $M$ of $G$ with Lie algebra $\mathfrak{m}$ is trivial, and by the previous lemma, maximal compact in $G$. Moreover, it is well known that the compact dual of $\mathfrak{g}$ is $\mathfrak{g}_u=\fsu(2)$. Thus, $G$ has as compact dual the Lie group $G_u = \SU(2) \cong S^3$. From \autoref{thm:corollary_cartan} with $M_u$ trivial, we have an isomorphism
\[
{\rm H}_{\rm c}^3(G) \cong \HH^3(G_u) = \HH^3(S^3),
\]
and the last one is clearly one-dimensional. \vspace{10pt}
\noindent \textbf{Case 2: $\mathfrak{k}$ is non-Abelian.} This assumption means that both $\mathfrak{z}(\mathfrak{k})$ and $\mathfrak{m}$ are non-trivial. By the non-triviality of $\mathfrak{z}(\mathfrak{k})$ and the simplicity of $G_0$, it follows from Theorems VIII.6.1 and VIII.6.2 of \cite{Hel} that $G_0/K_0$ is an irreducible Hermitian symmetric space of non-compact type, and that the center $Z(K_0)$ of $K_0$ is isomorphic to $S^1$. In particular, $\dim \mathfrak{z}(\mathfrak{k}) = 1$. \vspace{4pt}
The connected subgroup $M < G$ with Lie algebra $\mathfrak{m} \subset \mathfrak{g}$ is non-trivial and semisimple. By \autoref{thm:Mmaxcpt}, it is also a maximal compact subgroup. Thus, by \autoref{thm:vanest}, ${\rm H}_{\rm c}^3(G) \cong \HH^3(\mathfrak{g},\mathfrak{m})$. On the other hand, by \autoref{thm:weyl}, the connected Lie subgroup $M_u < G_u$ with Lie algebra $\mathfrak{m}$ is closed. We conclude now by \autoref{thm:corollary_cartan} and \autoref{thm:nonabelian}:
\[
{\rm H}_{\rm c}^3(G) \cong \HH^3(\mathfrak{g},\mathfrak{m}) \cong \HH^3(G_u/M_u) = 0. \vspace{8pt}
\]
\section*{\refname
\@mkboth{\MakeUppercase{\refname}}{\MakeUppercase{\refname}}}
}
\makeatother
\newcommand{\algrule}[1][.2pt]{\par\vskip.5\baselineskip\hrule height #1\par\vskip.5\baselineskip}
\renewcommand{\paragraph}[1]{\vspace*{1em} \noindent {\sc #1.}}
\newcommand{\mathit{D}}{\mathit{D}}
\newcommand{\mathit{d}}{\mathit{d}}
\newcommand{\ensuremath{\mathbb{R}}}{\mathbb{R}}
\newcommand{{random projection tree}}{{random projection tree}}
\newcommand{{random projection trees}}{{random projection trees}}
\newcommand{\it {random projection tree}}{\it {random projection tree}}
\newcommand{RP tree}{RP tree}
\newcommand{{\bf DP-RPtree}}{{\bf DP-RPtree}}
\newcommand{\mathit{S}}{\mathit{S}}
\newenvironment{prf}{\begin{proof}}{\end{proof}}
\newcommand{\hspace*{\fill}\rule{1ex}{1.4ex}}{\hspace*{\fill}\rule{1ex}{1.4ex}}
\makeatletter
\def\newproof#1{\@nprf{#1}}
\def\@nprf#1#2{\expandafter\@ifdefinable\csname #1\endcsname
\global\@namedef{#1}{\@prf{#1}{#2}}\global\@namedef{end#1}{\@endproof}}
\def\@prf#1#2{\@beginproof{#2}{\csname the#1\endcsname}\ignorespaces}
\def\@beginproof#1{\rm \trivlist \item[\hskip \labelsep{\bf #1: }]}
\def\@endproof{\hspace*{\fill}\rule{1ex}{1.4ex} \endtrivlist}
\newcommand{\ensuremath{\mathcal{X}}}{\ensuremath{\mathcal{X}}}
\renewcommand{\H}{\ensuremath{\mathcal{H}}}
\newproof{prftheorem}{Proof of Theorem~\ref{centraltheoremtechnical}}
\newtheorem{gprop1}{Proposition}
\newcommand{{\bf Algorithm}}{{\bf Algorithm}}
\newcommand{{\vspace*{1em}\bf \em Subroutine}}{{\vspace*{1em}\bf \em Subroutine}}
\newcommand{\it {MinSize}}{\it {MinSize}}
\newcommand{\it {Leaf}}{\it {Leaf}}
\newcommand{\ell}{\ell}
\newcommand{{\nu}}{{\nu}}
\newcommand{{\em diam}}{{\em diam}}
\newcommand{{\em rad}}{{\em rad}}
\newcommand{{\em global sensitivity}}{{\em global sensitivity}}
\newcommand{{\em $\epsilon$-differential privacy}}{{\em $\epsilon$-differential privacy}}
\newcommand{{$\epsilon$-differential privacy}}{{$\epsilon$-differential privacy}}
\newcommand{{$\epsilon k$-differential privacy}}{{$\epsilon k$-differential privacy}}
\newcommand{{\em differential privacy}}{{\em differential privacy}}
\newcommand{{differential privacy}}{{differential privacy}}
\newcommand{\ensuremath{\epsilon}}{\ensuremath{\epsilon}}
\newcommand{\ensuremath{\mathit X}}{\ensuremath{\mathit X}}
\newcommand{\ensuremath{\mathit f}}{\ensuremath{\mathit f}}
\newcommand{\ensuremath{\mathcal{M}}}{\ensuremath{\mathcal{M}}}
\newcommand{\ensuremath{\mathit n}}{\ensuremath{\mathit n}}
\newcommand{\ensuremath{\mathit D_1}}{\ensuremath{\mathit D_1}}
\newcommand{\ensuremath{\mathit D_2}}{\ensuremath{\mathit D_2}}
\newcommand{\ensuremath{\mathit m}}{\ensuremath{\mathit m}}
\def\operator@font{\mathgroup\symoperators}
\def\mathop{\operator@font Lap}{\mathop{\operator@font Lap}}
\newenvironment{tightitemize}{%
\begin{list}{$\bullet$}{\labelwidth=5pt
\labelsep=5pt \leftmargin=10pt \rightmargin=0pt \topsep=5pt
\listparindent=10pt \setlength{\itemsep}{-2pt} }}{\end{list}}
\newcommand{\myparagraph}[1]{\noindent {\sc #1.}}
\newcounter{mycount}
\newcounter{nmycount}
\newenvironment{tightenumerate}
{\setcounter{mycount}{1}%
\begin{list}{{\bf \themycount.} {\addtocounter{mycount}{1}}}{\labelwidth=0pt
\labelsep=0pt \leftmargin=3pt \rightmargin=0pt \topsep=5pt
\listparindent=10pt \setlength{\itemsep}{-1pt}
}}{\end{list}}
\newenvironment{nestedtightenumerate}
{\setcounter{nmycount}{1}%
\begin{list}{{(\alph{nmycount})} {\addtocounter{nmycount}{1}}}{\labelwidth=0pt
\labelsep=0pt \leftmargin=3pt \rightmargin=0pt \topsep=5pt
\listparindent=10pt \setlength{\itemsep}{-1pt}
}}{\end{list}}
\renewcommand{\baselinestretch}{0.98}
\begin{document}
\title{Efficient data hashing with structured binary embeddings}
\titlerunning{Efficient data hashing with structured binary embeddings}
\author{Krzysztof Choromanski \\ Google Research}
\authorrunning{}
\institute{}
\maketitle
\begin{abstract}
We present here new mechanisms for hashing data via binary embeddings. Contrary to most of the techniques presented before, the embedding matrix of our mechanism is highly structured.
This enables us to perform hashing more efficiently and with less memory. What is crucial and nonintuitive is the fact that imposing a structured mechanism does not affect the quality of the produced hash. To the best of our knowledge, we are the first to give strong theoretical guarantees for the proposed binary hashing method, proving the efficiency of the mechanism for several classes of structured projection matrices. As a corollary, we obtain binary hashing mechanisms with strong concentration results for circulant and Toeplitz matrices. Our approach is, however, much more general.
\end{abstract}
\section{Hashing mechanism}
\label{sec:has_mech}
In this section we explain in detail the proposed hashing mechanism for initial dimensionality reduction
that is used to preprocess data before it is given as an input to the autoencoder. As mentioned earlier,
the mechanism is of independent interest.
We introduce first the aforementioned family of $\Psi$-regular matrices $\mathcal{P}$ that is
a key ingredient of the method.
Assume that $k$ is the size of the hash and $n$ is the dimensionality of the data.
Let $t$ be the size of the pool of independent random gaussian variables $\{g_{1},...,g_{t}\}$,
where each $g_{i} \sim \mathcal{N}(0,1)$. Assume that $k \leq n \leq t \leq kn$.
We say that a random matrix $\mathcal{P}$ is $\Psi$-regular if $\mathcal{P}$ is of the form:
\begin{equation}
\left( \begin{array}{ccccc}
\sum_{l \in \mathcal{S}_{1,1}}g_{l} & ... & \sum_{l \in \mathcal{S}_{1,j}}g_{l} & ... & \sum_{l \in \mathcal{S}_{1,n}}g_{l}\\
... & ... & ... & ... & ... \\
\sum_{l \in \mathcal{S}_{i,1}}g_{l} & ... & \sum_{l \in \mathcal{S}_{i,j}}g_{l} & ... & \sum_{l \in \mathcal{S}_{i,n}}g_{l}\\
... & ... & ... & ... & ... \\
\sum_{l \in \mathcal{S}_{k,1}}g_{l} & ... & \sum_{l \in \mathcal{S}_{k,j}}g_{l} & ... & \sum_{l \in \mathcal{S}_{k,n}}g_{l}
\end{array} \right)
\end{equation}
where $S_{i,j} \subseteq \{1,...,t\}$ for $i \in \{1,...,k\}$, $j \in \{1,...,n\}$, $|S_{i,1}|=...=|S_{i,n}|$ for $i=1,...,k$, $S_{i,j} \cap S_{i,u} = \emptyset$
for $i \in \{1,...,k\}$, $\{j,u\} \subseteq \{1,...,n\}$, $j \neq u$ and furthermore the following holds:
\begin{itemize}
\item for every column of $\mathcal{P}$, every $g_{l}$ appears in at most $\Psi+1$ entries of that column (equivalently, each $g_{l}$ is repeated at most $\Psi$ times within a column).
\end{itemize}
Notice that all structured matrices that we mentioned in the abstract are special cases of $0$-regular matrices.
Indeed, each Toeplitz matrix is clearly $0$-regular, with the subsets $\mathcal{S}_{i,j}$ being singletons.
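To make the definition concrete, the sketch below (pure Python; the sizes $k$, $n$ and the seed are ours, and we read ``$0$-regular'' as: singleton, within-row disjoint index sets with no pool element repeated inside a column) builds a Toeplitz projection matrix from a pool of $t=n+k-1$ gaussian variables and verifies these structural conditions:

```python
import random

# Illustrative sizes (our choice): hash size k, data dimensionality n,
# pool size t = n + k - 1 (one gaussian per diagonal of the matrix).
k, n = 4, 6
t = n + k - 1
random.seed(0)
g = [random.gauss(0.0, 1.0) for _ in range(t)]

# Index sets S[i][j]: singletons arranged so that P is Toeplitz
# (entry (i, j) uses pool element (j - i) + (k - 1)).
S = [[{(j - i) + (k - 1)} for j in range(n)] for i in range(k)]
P = [[sum(g[l] for l in S[i][j]) for j in range(n)] for i in range(k)]

# Toeplitz check: entries are constant along diagonals.
assert all(P[i][j] == P[i + 1][j + 1]
           for i in range(k - 1) for j in range(n - 1))

# Regularity checks: equal-sized, pairwise disjoint sets within each row...
for i in range(k):
    assert {len(S[i][j]) for j in range(n)} == {1}
    union = set().union(*S[i])
    assert len(union) == sum(len(S[i][j]) for j in range(n))
# ...and no pool element repeated within any column (our 0-regular reading).
for j in range(n):
    col = [l for i in range(k) for l in S[i][j]]
    assert len(col) == len(set(col))
```

The same scaffolding works for any $\Psi$-regular matrix: only the choice of the index sets $\mathcal{S}_{i,j}$ changes.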
Let $\phi$ be a function satisfying $\lim_{x \rightarrow \infty} \phi(x) = 1$ and $\lim_{x \rightarrow -\infty} \phi(x) = -1$.
We will consider two hashing methods. The first one, which we call \textit{extended $\Psi$-regular hashing}, first applies a random diagonal matrix $\mathcal{R}$ to the datapoint $x$,
then the $L_{2}$-normalized Hadamard matrix $\mathcal{H}$, next another random diagonal matrix $\mathcal{D}$, then the $\Psi$-regular projection matrix $\mathcal{P}_{\Psi}$, and finally the function $\phi$ (the latter applied pointwise).
The overall scheme is presented below:
\begin{equation}
x \xrightarrow {\mathcal{R}} x_{\mathcal{R}} \xrightarrow {\mathcal{H}} x_{\mathcal{H}} \xrightarrow {\mathcal{D}} x_{\mathcal{D}} \xrightarrow {\mathcal{P}_{\Psi}} x_{\mathcal{P}_{\Psi}} \xrightarrow {\phi} h(x) \in \mathbb{R}^{k}.
\end{equation}
The diagonal entries of matrices $\mathcal{R}$ and $\mathcal{D}$ are chosen independently from the binary set $\{-1,1\}$,
each value being chosen with probability $\frac{1}{2}$.
We also propose a shorter pipeline, which we call \textit{short $\Psi$-regular hashing}, where we skip the random matrix $\mathcal{R}$ and the Hadamard matrix $\mathcal{H}$, i.e. the overall pipeline is of the form:
\begin{equation}
x \xrightarrow {\mathcal{D}} x_{\mathcal{D}} \xrightarrow {\mathcal{P}_{\Psi}} x_{\mathcal{P}_{\Psi}} \xrightarrow {\phi} h(x) \in \mathbb{R}^{k}.
\end{equation}
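A minimal sketch of the short pipeline, assuming a circulant (hence Toeplitz-structured) choice of $\mathcal{P}_{\Psi}$ and the \textit{sign} function for $\phi$; the sizes and the seed are illustrative:

```python
import random
import math

random.seed(1)
k, n = 8, 16
g = [random.gauss(0.0, 1.0) for _ in range(n)]              # gaussian pool
P = [[g[(j - i) % n] for j in range(n)] for i in range(k)]  # circulant rows
D = [random.choice((-1, 1)) for _ in range(n)]              # random diagonal

def h(x):
    """Short Psi-regular hash: x -> D x -> P (D x) -> sign."""
    xD = [D[j] * x[j] for j in range(n)]
    return [1 if sum(P[i][j] * xD[j] for j in range(n)) >= 0 else -1
            for i in range(k)]

x = [1.0 / math.sqrt(n)] * n   # an L2-normalized input
print(h(x))                    # a k-dimensional binary hash
```

Since $\mathcal{P}$ here is circulant, each hash coordinate reuses the same pool of $n$ gaussians, which is the source of the memory savings.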
The goal is to compute a good approximation of the angular distance between given $L_{2}$-normalized vectors $p,r$, given their
compact hashed versions $h(p), h(r)$.
To achieve this goal we consider the $L_{1}$-distance in the $k$-dimensional space of hashes.
Let $\theta_{p,r}$ denote the angle between vectors $p$ and $r$. We define the \textit{normalized approximate angle between $p$ and $r$} as:
\begin{equation}
\tilde{\theta}_{p,r}^{n} = \frac{1}{2k}\|h(p)-h(r)\|_{1}
\end{equation}
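As a quick numerical sanity check of this estimator (with a fully i.i.d. gaussian $\mathcal{P}$, the simplest $\Psi$-regular special case, and parameters of our choosing), one can hash two vectors at a known angle and compare $\tilde{\theta}_{p,r}^{n}$ with $\theta_{p,r}/\Pi$:

```python
import random
import math

random.seed(2)
k, n = 4000, 8
theta = math.pi / 3
# p and r are L2-normalized with angular distance theta.
p = [1.0] + [0.0] * (n - 1)
r = [math.cos(theta), math.sin(theta)] + [0.0] * (n - 2)

# Fully i.i.d. gaussian projection: singleton, all-distinct index sets.
P = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(k)]

def h(x):
    return [1 if sum(P[i][j] * x[j] for j in range(n)) >= 0 else -1
            for i in range(k)]

# Normalized approximate angle: (1 / 2k) * ||h(p) - h(r)||_1.
est = sum(abs(a - b) for a, b in zip(h(p), h(r))) / (2.0 * k)
print(est, theta / math.pi)   # estimator vs true normalized angle
```

Each disagreeing hash coordinate contributes $2$ to the $L_{1}$-distance, so the estimator is simply the fraction of rows whose random hyperplane separates $p$ from $r$.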
In the next section we will show that the normalized approximate angle between vectors $p$ and $r$ is a very precise estimate of the actual angle if the chosen parameter $\Psi$ is not too large. Furthermore, we show an intriguing connection between the theoretical guarantees regarding the quality of the produced hash and the chromatic number of a specific undirected graph encoding the structure of $\mathcal{P}$. For many of the structured matrices under consideration this graph is induced by an algebraic group operation defining the structure of $\mathcal{P}$ (for instance, for the circulant matrix the group is generated by a single shift and the underlying graph is a collection of pairwise disjoint cycles and trees, thus its chromatic number is at most $3$).
\section{Theoretical results}
\label{sec:the}
\subsection{Introduction}
We are ready to provide theoretical guarantees regarding the quality of the produced hash. Our guarantees will be given for the \textit{sign} function, i.e. for $\phi$ defined as: $\phi(x) = 1$ for $x \geq 0$, $\phi(x) = -1$ for $x < 0$. However, we should emphasize that empirical results showed that other functions (that are often used as nonlinear maps in deep neural networks), such as the sigmoid function, also work well. It is not hard to show that $\tilde{\theta}_{p,r}^{n}$ is an unbiased estimator of $\frac{\theta_{p,r}}{\Pi}$, i.e. $E(\tilde{\theta}_{p,r}^{n}) = \frac{\theta_{p,r}}{\Pi}$. What we will focus on is the concentration of the random variable $\tilde{\theta}_{p,r}^{n}$ around its mean $\frac{\theta_{p,r}}{\Pi}$. We will prove strong exponential concentration results for the extended $\Psi$-regular hashing method. Interestingly, the application of the Hadamard mechanism is not necessary and it is possible to obtain concentration results, yet weaker than in the former case, also for short $\Psi$-regular hashing.
As a warm up, let us prove the following.
\begin{lemma}
\label{mean_lemma}
Let $\mathcal{M}$ be a $\Psi$-regular hashing model (either extended or short). Then $\tilde{\theta}_{p,r}^{n}$ is an unbiased estimator of $\frac{\theta_{p,r}}{\Pi}$, i.e.
$$E(\tilde{\theta}_{p,r}^{n}) = \frac{\theta_{p,r}}{\Pi}.$$
\end{lemma}
\begin{proof}
Notice first that the $i$th row of the matrix $\mathcal{P}$, call it $g^{i}$, is an $n$-dimensional gaussian vector with mean $0$ in which each element has standard deviation $\sigma_{i}=\sqrt{|\mathcal{S}_{i,1}|}=...=\sqrt{|\mathcal{S}_{i,n}|}$ ($i=1,...,k$).
Thus, after applying the matrix $\mathcal{D}$, the new vector $g^{i}_{\mathcal{D}}$ is still gaussian with the same distribution.
Let us consider first the short $\Psi$-regular hashing model.
Fix some $L_{2}$-normalized vectors $p,r$ (without loss of generality we may assume that they are not collinear) and denote by $H_{p,r}$ the $2$-dimensional hyperplane spanned by $\{p,r\}$. Denote by $g^{i}_{\mathcal{D},H}$ the projection of
$g^{i}_{\mathcal{D}}$ into $H$ and by $g^{i}_{\mathcal{D},H,\perp}$ the line in $H$ perpendicular to $g^{i}_{\mathcal{D},H}$. Let $\phi$ be a \textit{sign} function. Notice that the contribution to the $L_{1}$-sum $\|h(p)-h(r)\|_{1}$ comes from those $g^{i}$ for which
$g^{i}_{\mathcal{D},H,\perp}$ divides the angle between $p$ and $r$, i.e. from those $g^{i}$ for which $g^{i}_{\mathcal{D},H}$
is inside the union $\mathcal{U}_{p,r}$ of the two $2$-dimensional cones bounded by the two lines in $H$ perpendicular to $p$ and $r$,
respectively.
Observe that, from what we have just said, we can conclude that $\tilde{\theta}_{p,r}^{n} = \frac{X_{1} + ... + X_{k}}{k}$, where:
\begin{equation}
X_{i} =
\left\{
\begin{array}{ll}
1 & \mbox{if } g^{i}_{\mathcal{D},H} \in \mathcal{U}_{p,r}, \\
0 & \mbox{otherwise.}
\end{array}
\right.
\end{equation}
Now it suffices to notice that the vector $g^{i}_{\mathcal{D},H}$ is a gaussian random variable and thus its direction is uniformly distributed over all directions. Thus each $X_{i}$ is nonzero with probability exactly $\frac{\theta_{p,r}}{\Pi}$ and the lemma follows.
For the extended $\Psi$-regular hashing model the analysis is very similar.
The only difference is that data is preprocessed by applying $\mathcal{H}\mathcal{R}$ linear mapping first. Both $\mathcal{H}$ and $\mathcal{R}$ are matrices of rotations though, thus their product is also a rotation matrix. Since rotations do not change angular distance, the former analysis can be applied again and yields the proof.
\end{proof}
\subsection{The $\mathcal{P}$-chromatic number}
As we have already mentioned, the highly well organized structure of the projection matrix $\mathcal{P}$ gives rise
to the underlying undirected graph that encodes dependencies between different entries of $\mathcal{P}$.
More formally, let us fix two rows of $\mathcal{P}$ of indices $1 \leq k_{1} < k_{2} \leq k$.
We define a graph $\mathcal{G}_{\mathcal{P}}(k_{1},k_{2})$ as follows:
\begin{itemize}
\item $V(\mathcal{G}_{\mathcal{P}}(k_{1},k_{2})) = \{\{j_{1},j_{2}\}: \exists l \in \{1,...,t\} \mbox{ s.t. } l \in \mathcal{S}_{k_{1},j_{1}} \cap \mathcal{S}_{k_{2},j_{2}}, j_{1} \neq j_{2}\}$,
\item there exists an edge between vertices $\{j_{1},j_{2}\}$ and $\{j_{3},j_{4}\}$ iff $\{j_{1},j_{2}\} \cap \{j_{3},j_{4}\} \neq \emptyset$.
\end{itemize}
The chromatic number $\chi(\mathcal{G})$ of the graph $\mathcal{G}$ is the minimal number of colors that can be used to color the vertices of the graph in such a way that no two adjacent vertices have the same color.
\begin{definition}
Let $\mathcal{P}$ be a $\Psi$-regular matrix. We define the $\mathcal{P}$-chromatic number $\chi(\mathcal{P})$
as:
$$\chi(\mathcal{P}) = \max_{1 \leq k_{1} < k_{2} \leq k} \chi(\mathcal{G}_{\mathcal{P}}(k_{1},k_{2})).$$
\end{definition}
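To illustrate, the sketch below builds $\mathcal{G}_{\mathcal{P}}(k_{1},k_{2})$ for a circulant matrix with singleton index sets (entry $(i,j)$ uses $g_{(j-i) \bmod n}$; the concrete sizes and the $0$-based indexing are ours). The vertices are the pairs $\{j,\, j+(k_{2}-k_{1}) \bmod n\}$, every vertex meets at most two others, so a greedy coloring needs at most $3$ colors:

```python
# Circulant structure: S_{i,j} = {(j - i) mod n}, so a pool element lies in
# both S_{k1,j1} and S_{k2,j2} exactly when j2 = j1 + (k2 - k1) mod n.
n = 12
k1, k2 = 2, 7
d = (k2 - k1) % n

vertices = [frozenset({j, (j + d) % n}) for j in range(n)]
vertices = list(dict.fromkeys(vertices))   # deduplicate, keep order

def adjacent(u, v):
    # Edge iff the two pairs share an element.
    return u != v and bool(u & v)

# Greedy coloring: every vertex has degree at most 2 here,
# so greedy uses at most 3 colors.
color = {}
for v in vertices:
    used = {color[u] for u in vertices if u in color and adjacent(u, v)}
    color[v] = min(c for c in range(len(vertices)) if c not in used)

chromatic_upper = max(color.values()) + 1
print(chromatic_upper)   # an upper bound on the chromatic number
```

The union of cycles visible here is exactly the structure invoked for circulant matrices in the previous section.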
\subsection{Concentration inequalities for structured hashing with \textit{sign} function}
We present now our main theoretical results. Let us consider first the extended $\Psi$-regular hashing model.
The following is true.
\begin{theorem}
\label{ext_technical_theorem}
Take the extended $\Psi$-regular hashing model $\mathcal{M}$ with $t$ independent gaussian random variables: $g_{1},...,g_{t}$, each of distribution $\mathcal{N}(0,1)$.
Let $N$ be the size of the dataset. Denote by $k$ the size of the hash and by $n$ the dimensionality of the data. Let $f(n)$ be arbitrary positive function.
Let $p, r$ be two fixed vectors $p,r \in \mathbb{R}^{n}$ with angular distance $\theta_{p,r}$ between them.
Then for every $a,\epsilon>0$ the following is true:
$$
\mathbb{P}(|\tilde{\theta}^{n}_{p,r} - \frac{\theta_{p,r}}{\Pi}| \leq \epsilon) \geq (1-4{N \choose 2}e^{-\frac{f^{2}(n)}{2}}-4\chi(\mathcal{P}){k \choose 2}e^{-\frac{2a^{2}n}{f^{4}(n)}})(1-\Lambda),
$$
where $\Lambda = \frac{1}{\Pi} \sum_{j = \frac{\epsilon k}{2}}^{k}\frac{1}{\sqrt{j}}(\frac{ke}{j})^{j}\mu^{j}(1-\mu)^{k-j}+2e^{-\frac{\epsilon^{2}k}{2}}$ and $\mu=\frac{8k(a\chi(\mathcal{P}) + \Psi\frac{f^{2}(n)}{n})}{\theta_{p,r}}$.
\end{theorem}
Notice how the upper bound on the probability of failure depends on the $\mathcal{P}$-chromatic number. The theorem above guarantees strong concentration of $\tilde{\theta}^{n}_{p,r}$ around its mean and therefore justifies theoretically the effectiveness of the structured hashing method. This will become clearer below.
As a corollary, we obtain the following result:
\begin{theorem}
\label{ext_theorem}
Take the extended $\Psi$-regular hashing model $\mathcal{M}$ and assume that the projection matrix $\mathcal{P}$ is Toeplitz.
Let $N$ be the size of the dataset. Denote by $k$ the size of the hash and by $n$ the dimensionality of the data. Let $f(n)$ be an arbitrary positive function.
Let $p, r$ be two vectors $p,r \in \mathbb{R}^{n}$ with angular distance $\theta_{p,r}$ between them.
Then the following is true:
$$\mathbb{P}(|\tilde{\theta}^{n}_{p,r} - \frac{\theta_{p,r}}{\Pi}| \leq k^{-\frac{1}{3}}) \geq (1-O(\frac{N^{2}}{n^{4.5}})-O(k^{2}
e^{-\Omega(\frac{n^{\frac{1}{3}}}{\log^{2}(n)})}))(1-(\frac{k^{7}}{n})^{\frac{1}{3}}).$$
\end{theorem}
Theorem \ref{ext_theorem} follows from Theorem \ref{ext_technical_theorem} by taking:
$a=n^{-\frac{1}{3}}$, $\epsilon = k^{-\frac{1}{3}}$, $f(n)=3\sqrt{\log(n)}$ and noticing that every Toeplitz matrix is $0$-regular and the corresponding $\mathcal{P}$-chromatic number $\chi(\mathcal{P})$ is at most $3$.
Let us now switch to the short $\Psi$-regular hashing model. The theorem presented below is an application of
Chebyshev's inequality, preceded by a careful analysis of the variance $Var(\tilde{\theta}^{n}_{p,r})$.
\begin{theorem}
\label{short_theorem}
Take the short $\Psi$-regular hashing model $\mathcal{M}$, where $\mathcal{P}$ is a Toeplitz matrix.
Let $N$ be the size of the dataset. Denote by $k$ the size of the hash and by $n$ the dimensionality of the data.
Let $p, r$ be two vectors $p,r \in \mathbb{R}^{n}$ with angular distance $\theta_{p,r}$ between them.
Then the following is true for any $c>0$:
$$\mathbb{P}(|\tilde{\theta}^{n}_{p,r} - \frac{\theta_{p,r}}{\Pi}| \geq c (\frac{\sqrt{\log(k)}}{k})^{\frac{1}{3}}) = O(\frac{1}{c^{2}}).$$
\end{theorem}
The proofs of Theorem \ref{ext_technical_theorem} and Theorem \ref{short_theorem} will be given in the Appendix.
\section{Appendix}
In this section we prove Theorem \ref{ext_technical_theorem} and Theorem \ref{short_theorem}.
We will use notation from Lemma \ref{mean_lemma}.
\subsection{Proof of Theorem \ref{ext_technical_theorem}}
We start with the following technical lemma:
\begin{lemma}
\label{first_lemma}
Let $\{Z_{1},...,Z_{k}\}$ be a set of $k$ independent random variables defined on $\Omega$ such that each $Z_{i}$ has the same distribution and $Z_{i} \in \{0,1\}$. Let $\{\mathcal{F}_{1},...,\mathcal{F}_{k}\}$ be a set of events, where each $\mathcal{F}_{i}$ is in the $\sigma$-field defined by $Z_{i}$ (in particular, $\mathcal{F}_{i}$ does not depend on the $\sigma$-field $\sigma(Z_{1},...,Z_{i-1},Z_{i+1},...,Z_{k})$). Assume that there exists $\mu < \frac{1}{2}$ such that $\mathbb{P}(\mathcal{F}_{i}) \leq \mu$ for $i=1,...,k$.
Let $\{U_{1},...,U_{k}\}$ be the set of $k$ random variables such that $U_{i} \in \{0,1\}$ and
$U_{i} | \mathcal{F}_{i} = Z_{i} |\mathcal{F}_{i}$ for $i=1,...,k$, where $X|\mathcal{F}$ stands
for the random variable $X$ truncated to the event $\mathcal{F}$. Assume furthermore that $E(U_{i})=E(Z_{i})$ for $i=1,...,k$.
Denote $Y = \frac{U_{1}+...+U_{k}}{k}$. Then the following is true.
\begin{equation}
\mathbb{P}(|Y-EY| > a) \leq \frac{1}{\Pi} \sum_{r=\frac{ak}{2}}^{k}\frac{1}{\sqrt{r}}(\frac{ke}{r})^{r}\mu^{r}(1-\mu)^{k-r} + 2e^{-\frac{a^{2}k}{2}}.
\end{equation}
\end{lemma}
\begin{proof}
Let us consider the event $\mathcal{F}_{bad}$ = $\mathcal{F}_{1} \cup ... \cup \mathcal{F}_{k}$.
Notice that $\mathcal{F}_{bad}$ may be represented as the union of the so-called $r$-blocks, i.e.
\begin{equation}\mathcal{F}_{bad} = \bigcup_{\emptyset \neq Q \subseteq \{1,...,k\}} (\bigcap_{q \in Q} \mathcal{F}_{q} \bigcap_{q \in \{1,...,k\} \setminus Q} \mathcal{F}^{c}_{q}), \end{equation}
where $\mathcal{F}^{c}$ stands for the complement of event $\mathcal{F}$.
Let us fix now some $Q \subseteq \{1,...,k\}$. Denote \begin{equation}\mathcal{F}_{Q} = \bigcap_{q \in Q} \mathcal{F}_{q} \bigcap_{q \in \{1,...,k\} \setminus Q} \mathcal{F}^{c}_{q}. \end{equation}
Notice that $\mathbb{P}(\mathcal{F}_{Q}) \leq \mu^{r}(1-\mu)^{k-r}$, where $r=|Q|$. This follows directly from the Bernoulli scheme.
Denote $X = \frac{X_{1}+...+X_{k}}{k}$.
From what we have just said and from the definition of $\{\mathcal{F}_{1},...,\mathcal{F}_{k}\}$ we conclude that for any given $c$ the following holds:
\begin{equation}
\label{xy_diff}
\mathbb{P}(|Y-X| > c) \leq \sum_{r=ck}^{k}{k \choose r}\mu^{r}(1-\mu)^{k-r}.
\end{equation}
Notice also that from the assumptions of the lemma we trivially get: $E(Y)=E(X)$.
Let us consider now the expression $\mathbb{P}(|Y-E(Y)| > a)$.
We get: $\mathbb{P}(|Y-E(Y)|>a) =
\mathbb{P}(|Y-E(X)| > a) = \mathbb{P}(|Y-X + X-E(X)| > a) \leq \mathbb{P}(|Y-X|+|X-E(X)|>a) \leq \mathbb{P}(|Y-X| > \frac{a}{2}) + \mathbb{P}(|X-E(X)| > \frac{a}{2})$.
From \ref{xy_diff} we get:
\begin{equation}
\mathbb{P}(|Y-X| > \frac{a}{2}) \leq \sum_{r=\frac{ak}{2}}^{k}{k \choose r} \mu^{r}(1-\mu)^{k-r}.
\end{equation}
Let us consider now the expression: \begin{equation}\xi = \sum_{r=\frac{ak}{2}}^{k}{k \choose r} \mu^{r}(1-\mu)^{k-r}.\end{equation}
We have:
\begin{equation}
\xi \leq \sum_{r=\frac{ak}{2}}^{k} \frac{(k-r+1)...(k)}{r!} \mu^{r}(1-\mu)^{k-r}
\leq \sum_{r=\frac{ak}{2}}^{k} \frac{k^{r}}{r!} \mu^{r}(1-\mu)^{k-r}
\end{equation}
From the Stirling's formula we get: $r! = \frac{2\Pi r^{r+\frac{1}{2}}}{e^{r}}(1+o_{r}(1))$.
Thus we obtain:
\begin{equation}
\label{xi_ineq}
\xi \leq (1+o_{r}(1))\sum_{r=\frac{ak}{2}}^{k}\frac{k^{r}e^{r}}{2\Pi r^{r+\frac{1}{2}}}\mu^{r}(1-\mu)^{k-r} \leq \frac{1}{\Pi} \sum_{r=\frac{ak}{2}}^{k}\frac{1}{\sqrt{r}}(\frac{ke}{r})^{r}\mu^{r}(1-\mu)^{k-r}
\end{equation}
for $r$ large enough.
Now we will use the following version of standard Azuma's inequality:
\begin{lemma}
\label{azuma_general}
Let $W_{1},...,W_{k}$ be $k$ independent random variables such that $E(W_{1})=\ldots=E(W_{k})=0$.
Assume that $-\alpha_{i} \leq W_{i} \leq \beta_{i}$ for $i=1,...,k$.
Then the following is true:
$$
\mathbb{P}(|\sum_{i=1}^{k} W_{i}|>a) \leq 2e^{-\frac{2a^{2}}{\sum_{i=1}^{k}(\alpha_{i}+\beta_{i})^{2}}}
$$
\end{lemma}
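A small Monte Carlo sanity check of this bound, assuming $W_{i} = X_{i} - \frac{1}{2}$ for fair Bernoulli variables $X_{i}$ (so $\alpha_{i}=\beta_{i}=\frac{1}{2}$ and $\sum_{i}(\alpha_{i}+\beta_{i})^{2}=k$); the constants are ours:

```python
import random
import math

random.seed(3)
k, a, trials = 100, 15.0, 20000
# Azuma/Hoeffding bound: 2 exp(-2 a^2 / sum (alpha_i + beta_i)^2) with sum = k.
bound = 2.0 * math.exp(-2.0 * a * a / k)

hits = 0
for _ in range(trials):
    # Sum of W_i = (number of successes) - k/2 for fair coin flips.
    s = sum(random.random() < 0.5 for _ in range(k)) - k / 2.0
    if abs(s) > a:
        hits += 1
print(hits / trials, bound)   # empirical tail vs the bound
```

The empirical tail probability is well below the bound, as expected for a worst-case inequality.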
Now, using Lemma \ref{azuma_general} for $W_{i} = X_{i} - E(X_{i})$ and $\alpha_{i} = E(X_{i}), \beta_{i}=1-E(X_{i})$ we obtain:
\begin{equation}
\label{azuma_simple}
\mathbb{P}(|X-EX| > \frac{a}{2}) \leq 2e^{-\frac{a^{2}k}{2}}.
\end{equation}
Combining \ref{xi_ineq} and \ref{azuma_simple}, we obtain the statement of the lemma.
\end{proof}
Our next lemma explains the role the Hadamard matrix plays in the entire extended $\Psi$-regular hashing mechanism.
\begin{lemma}
\label{hadamard_lemma}
Let $n$ denote data dimensionality and let $f(n)$ be an arbitrary positive function.
Let $D$ be the set of all $L_{2}$-normalized datapoints, where no two datapoints are identical.
Assume that $|D|=N$.
Consider the ${N \choose 2}$ hyperplanes $H_{p,r}$ spanned by pairs of different vectors $\{p,r\}$ from $D$. Then after applying the linear transformation $\mathcal{H}\mathcal{R}$ each hyperplane $H_{p,r}$ is transformed into another hyperplane $H^{\mathcal{H}\mathcal{R}}_{p,r}$. Furthermore, the probability $\mathcal{P}_{\mathcal{H}\mathcal{R}}$ that for every $H^{\mathcal{H}\mathcal{R}}_{p,r}$ there exist two orthonormal vectors $x=(x_{1},...,x_{n}),y=(y_{1},...,y_{n})$ in $H^{\mathcal{H}\mathcal{R}}_{p,r}$ such that $|x_{i}|,|y_{i}| \leq \frac{f(n)}{\sqrt{n}}$
satisfies: $$\mathcal{P}_{\mathcal{H}\mathcal{R}} \geq 1-4{N \choose 2}e^{-\frac{f^{2}(n)}{2}}.$$
\end{lemma}
\begin{proof}
We have already noticed in the proof of Lemma \ref{mean_lemma} that $\mathcal{H}\mathcal{R}$
is a matrix of the rotation transformation. Thus, as an isometry, it clearly transforms each $2$-dimensional hyperplane into another $2$-dimensional hyperplane.
For every pair $\{p,r\}$ let us consider an arbitrary fixed orthonormal pair $\{u,v\}$ spanning $H_{p,r}$.
Denote $u=(u_{1},...,u_{n})$. Let us denote by $u^{\mathcal{H}\mathcal{R}}$ the vector obtained from
$u$ after applying the transformation $\mathcal{H}\mathcal{R}$.
Notice that the $j$th coordinate of $u^{\mathcal{H}\mathcal{R}}$ is of the form:
\begin{equation}
u^{\mathcal{H}\mathcal{R}}_{j} = u_{1}T_{1}+...+u_{n}T_{n},
\end{equation}
where $T_{1},...,T_{n}$ are independent random variables satisfying:
\begin{equation}
T_{i} =
\left\{
\begin{array}{ll}
\frac{1}{\sqrt{n}} & \mbox{with probability } \frac{1}{2}, \\
-\frac{1}{\sqrt{n}} & \mbox{otherwise.}
\end{array}
\right.
\end{equation}
The latter comes straightforwardly from the form of the $L_{2}$-normalized Hadamard matrix
(i.e. a Hadamard matrix in which each row and column is $L_{2}$-normalized).
But then, from Lemma \ref{azuma_general}, and the fact that $\|u\|_{2}=1$, we get for any $a>0$:
\begin{equation}
\mathbb{P}(|u_{1}T_{1}+...+u_{n}T_{n}| \geq a) \leq 2e^{-\frac{2a^{2}}{\sum_{i=1}^{n}(2u_{i})^{2}}} \leq 2e^{-\frac{a^{2}}{2}}.
\end{equation}
A similar analysis holds for $v^{\mathcal{H}\mathcal{R}}$.
Notice that $v^{\mathcal{H}\mathcal{R}}$ is orthogonal to $u^{\mathcal{H}\mathcal{R}}$
since $v$ and $u$ are orthogonal. Furthermore, both $v^{\mathcal{H}\mathcal{R}}$ and
$u^{\mathcal{H}\mathcal{R}}$ are $L_{2}$-normalized. Thus $\{u^{\mathcal{H}\mathcal{R}},v^{\mathcal{H}\mathcal{R}}\}$ is an orthonormal pair.
To complete the proof, it suffices to take $a=f(n)$ and apply the union bound over all
vectors $u^{\mathcal{H}\mathcal{R}}$, $v^{\mathcal{H}\mathcal{R}}$ for all ${N \choose 2}$
hyperplanes.
\end{proof}
From the lemma above we see that applying the Hadamard matrix enables us to assume, with high probability,
that for every hyperplane $H_{p,r}$ there exists an orthonormal basis consisting of vectors whose elements have absolute values at most $\frac{f(n)}{\sqrt{n}}$. We call this event $\mathcal{E}_{f}$. Notice that whether $\mathcal{E}_{f}$ holds or not is determined only by $\mathcal{H}$, $\mathcal{R}$ and the initial dataset $D$.
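The flattening effect is easiest to see on the worst-case spike $u=e_{1}$: after applying $\mathcal{H}\mathcal{R}$ every coordinate has absolute value exactly $\frac{1}{\sqrt{n}}$, comfortably below $\frac{f(n)}{\sqrt{n}}$ for, say, $f(n)=3\sqrt{\log n}$. The sketch below (pure Python, Sylvester construction, sizes ours) verifies this:

```python
import random
import math

random.seed(4)
m = 5
n = 2 ** m   # Sylvester Hadamard matrices exist for powers of two

# Sylvester construction, then L2-normalization of rows and columns.
H = [[1.0]]
for _ in range(m):
    H = [row + row for row in H] + [row + [-v for v in row] for row in H]
H = [[v / math.sqrt(n) for v in row] for row in H]

R = [random.choice((-1, 1)) for _ in range(n)]   # random diagonal
u = [1.0] + [0.0] * (n - 1)                      # maximally "spiky" unit vector

# Apply R, then H.
uHR = [sum(H[i][j] * R[j] * u[j] for j in range(n)) for i in range(n)]
f_n = 3.0 * math.sqrt(math.log(n))
print(max(abs(v) for v in uHR), f_n / math.sqrt(n))
```

For a general unit vector the coordinates are not exactly equal, but Lemma \ref{hadamard_lemma} shows they all stay below $\frac{f(n)}{\sqrt{n}}$ with high probability.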
Let us proceed with the proof of Theorem \ref{ext_technical_theorem}.
Let us assume that the event $\mathcal{E}_{f}$ holds. Without loss of generality we may assume that
we have the short $\Psi$-regular hashing mechanism with the extra property that every $H_{p,r}$ has an orthonormal basis consisting of vectors with elements of absolute value at most $\frac{f(n)}{\sqrt{n}}$.
Fix two vectors $p,r$ from the dataset $D$. Denote by $\{x,y\}$ the orthonormal basis of $H_{p,r}$ with the above property. Let us fix the $i$th row of $\mathcal{P}$ and denote it as $(\mathcal{P}_{i,1},...,\mathcal{P}_{i,n})$.
After being multiplied by the diagonal matrix $\mathcal{D}$ we obtain another vector:
\begin{equation}
w=(\mathcal{P}_{i,1}d_{1},...,\mathcal{P}_{i,n}d_{n}),
\end{equation}
where:
\begin{equation}
\mathcal{D} =
\begin{pmatrix}
d_{1} & 0 & \cdots & 0 \\
0 & d_{2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_{n}
\end{pmatrix}.
\end{equation}
We have already noticed in the proof of Lemma \ref{mean_lemma} that it is the projection of $w$ into $H_{p,r}$ that determines whether the value of the associated random variable $X_{i}$ is $0$ or $1$.
To be more specific, we showed that $X_{i}=1$ iff the projection is in the region $\mathcal{U}_{p,r}$.
Let us write down the coordinates of the projection of $w$ into $H_{p,r}$ in the $\{x,y\}$-coordinate system.
The coordinates are the dot-products of $w$ with $x$ and $y$ respectively, thus in the $\{x,y\}$-coordinate system we can write $w$ as:
\begin{equation}
\label{coord_eq}
w_{\{x,y\}}=(\mathcal{P}_{i,1}d_{1}x_{1}+...+\mathcal{P}_{i,n}d_{n}x_{n},\; \mathcal{P}_{i,1}d_{1}y_{1}+...+\mathcal{P}_{i,n}d_{n}y_{n}).
\end{equation}
Notice that both coordinates are gaussian random variables and they are independent since they were constructed by projecting a gaussian vector into two orthogonal vectors.
Now notice that from our assumption about the structure of $\mathcal{P}$ we can conclude that
both coordinates may be represented as sums of weighted gaussian random variables $g_{i}$ for $i=1,...,t$, i.e.:
\begin{equation}
w_{\{x,y\}}=(g_{1}s_{i,1}+...+g_{t}s_{i,t},g_{1}v_{i,1}+...+g_{t}v_{i,t}),
\end{equation}
where each $s_{i,j}, v_{i,j}$ is of the form $d_{z}x_{z}$ or $d_{z}y_{z}$ for some $z$ that
depends only on $i,j$.
Notice also that
\begin{equation}
s_{i,1}^{2}+...+s_{i,t}^{2} =v_{i,1}^{2}+...+v_{i,t}^{2}.
\end{equation}
The latter equality comes from the fact that, by \ref{coord_eq}, both coordinates of
$w_{\{x,y\}}$ have the same distribution.
Let us denote $s_{i}=(s_{i,1},...,s_{i,t})$, $v_{i}=(v_{i,1},...,v_{i,t})$ for $i=1,...,k$.
We need the following lemma stating that with high probability vectors $s_{1},...,s_{k},v_{1},...,v_{k}$
are close to be pairwise orthogonal.
\begin{lemma}
\label{small_dot_product_lemma}
Let us assume that $\mathcal{E}_{f}$ holds. Let $f(n)$ be an arbitrary positive function. Then for every $a>0$ with probability at least
$\mathbb{P}_{succ} \geq 1 - 4{k \choose 2} e^{-\frac{2a^{2}n}{f^{4}(n)}}$, taken under coin tosses used to construct $\mathcal{D}$, the following is true for every $1 \leq i_{1} \neq i_{2} \leq k$:
\label{pseudo_ortho_lemma}
$$|\sum_{u=1}^{n} s_{i_{1},u}v_{i_{1},u}| \leq a\chi(\mathcal{P}) + \Psi \frac{f^{2}(n)}{n},$$
$$|\sum_{u=1}^{n} s_{i_{1},u}s_{i_{2},u}| \leq a\chi(\mathcal{P}) + \Psi \frac{f^{2}(n)}{n},$$
$$|\sum_{u=1}^{n} v_{i_{1},u}v_{i_{2},u}| \leq a\chi(\mathcal{P}) + \Psi \frac{f^{2}(n)}{n},$$
$$|\sum_{u=1}^{n} s_{i_{1},u}v_{i_{2},u}| \leq a\chi(\mathcal{P}) + \Psi \frac{f^{2}(n)}{n}.$$
\end{lemma}
\begin{proof}
Notice that we get the first inequality for free from the fact that $x$ is orthogonal to $y$
(in other words, $\sum_{u=1}^{n} s_{i_{1},u}v_{i_{1},u}$ can be represented as $C\sum_{u=1}^{n} x_{u}y_{u}$ and the latter expression is clearly $0$).
Let us consider now one of the three remaining expressions. Notice that they can be rewritten as:
\begin{equation}E = \sum_{i=1}^{n} d_{\rho(i)}d_{\lambda(i)} x_{\zeta(i)}x_{\gamma(i)}\end{equation}
or \begin{equation}E = \sum_{i=1}^{n} d_{\rho(i)}d_{\lambda(i)} y_{\zeta(i)}y_{\gamma(i)}\end{equation}
or \begin{equation}E = \sum_{i=1}^{n} d_{\rho(i)}d_{\lambda(i)} x_{\zeta(i)}y_{\gamma(i)}\end{equation} for some
$\rho, \lambda, \zeta, \gamma$.
Notice also that from the $\Psi$-regularity condition we immediately obtain that $\rho(i)=\lambda(i)$
for at most $\Psi$ elements of each sum. Get rid of these elements from each sum and consider the remaining ones. From the definition of the $\mathcal{P}$-chromatic number, those remaining ones can be partitioned into at most $\chi(\mathcal{P})$ parts, each consisting of elements that are independent random variables (since in the corresponding graph there are no edges between them).
Thus, for the sum corresponding to each part one can apply Lemma \ref{azuma_general}.
Thus one can conclude that each such sum differs from its expectation (which is clearly zero, since $E(d_{i}d_{j})=0$ for $i \neq j$) by more than $a$ with probability at most
\begin{equation}
\mathbb{P}_{a} \leq 2e^{-\frac{2a^{2}}{\sum_{i=1}^{n} x^{2}_{\zeta(i)}x^{2}_{\gamma(i)}}}
\end{equation}
or
\begin{equation}
\mathbb{P}_{a} \leq 2e^{-\frac{2a^{2}}{\sum_{i=1}^{n} y^{2}_{\zeta(i)}y^{2}_{\gamma(i)}}}
\end{equation}
or
\begin{equation}
\mathbb{P}_{a} \leq 2e^{-\frac{2a^{2}}{\sum_{i=1}^{n} x^{2}_{\zeta(i)}y^{2}_{\gamma(i)}}}.
\end{equation}
Now it is time to use the fact that event $\mathcal{E}_{f}$ holds.
Then we know that: $|x_{i}|,|y_{i}| \leq \frac{f(n)}{\sqrt{n}}$ for $i=1,...,n$.
Substituting this upper bound for $|x_{i}|,|y_{i}|$ in the derived expressions on the probabilities coming from Lemma \ref{azuma_general}, and then taking the union bound, we complete the proof.
\end{proof}
We can finish the proof of Theorem \ref{ext_technical_theorem}.
From Lemma \ref{small_dot_product_lemma} we see that\\ $s_{1},...,s_{k},v_{1},...,v_{k}$ are
close to pairwise orthogonal with high probability. Let us fix some positive function $f(n)>0$ and some
$a>0$. Denote
\begin{equation}
\Delta = a\chi(\mathcal{P}) + \Psi \frac{f^{2}(n)}{n}.
\end{equation}
Notice that, by Lemma \ref{small_dot_product_lemma}, applying the Gram-Schmidt process
we can obtain a system of pairwise orthogonal vectors $\tilde{s}_{1},...,\tilde{s}_{k},\tilde{v}_{1},...,\tilde{v}_{k}$ such that
\begin{equation} \label{ineq1}\|\tilde{v}_{i}-v_{i}\|_{2} \leq k \Delta. \end{equation}
and
\begin{equation}\label{ineq2}\|\tilde{s}_{i}-s_{i}\|_{2} \leq k \Delta. \end{equation}
Let us consider again $w_{x,y}$. Replacing $s_{i}$ by $\tilde{s}_{i}$ and $v_{i}$ by $\tilde{v}_{i}$
in the formula for $w_{x,y}$, we obtain another gaussian vector $\tilde{w}_{x,y}$ for each row $i$ of the matrix $\mathcal{P}$. Notice however that the vectors $\tilde{w}_{x,y}$ have one crucial advantage over the vectors $w_{x,y}$, namely they are independent. That comes from the fact that $\tilde{s}_{1},...,\tilde{s}_{k}$,$\tilde{v}_{1},...,\tilde{v}_{k}$ are pairwise orthogonal.
Notice also that from \ref{ineq1} and \ref{ineq2} we obtain that the angular distance between
$w_{x,y}$ and $\tilde{w}_{x,y}$ is at most $k\Delta$.
Let $Z_{i}$ for $i=1,...,k$ be the indicator random variable that is one if $\tilde{w}_{x,y}$ is inside the region $\mathcal{U}_{p,r}$ and zero otherwise.
Let $U_{i}$ for $i=1,...,k$ be the indicator random variable that is one if $w_{x,y}$ is inside the region $\mathcal{U}_{p,r}$ and zero otherwise.
Notice that $\tilde{\theta}^{n}_{p,r} = \frac{U_{1}+...+U_{k}}{k}$.
Furthermore, random variables $Z_{1},...,Z_{k},U_{1},...,U_{k}$ satisfy the assumptions of Lemma \ref{first_lemma} with $\mu \leq \frac{8\epsilon}{\theta}$, where $\epsilon = k\Delta$.
Indeed, random variables $Z_{i}$ are independent since vectors $\tilde{w}_{x,y}$ are independent.
From what we have said so far we know that each of them takes value one with probability exactly $\frac{\theta}{\Pi}$.
Furthermore $Z_{i} \neq U_{i}$ only if $w_{x,y}$ is inside $\mathcal{U}_{p,r}$ and $\tilde{w}_{x,y}$
is outside $\mathcal{U}_{p,r}$ or vice versa. The latter event implies (thus it is included in the event) that $w_{x,y}$ is near the border of the region $\mathcal{U}_{p,r}$, namely within an angular distance $\frac{\epsilon}{\theta}$ from one of the four semilines defining $\mathcal{U}_{p,r}$. Thus in particular an event $Z_{i} \neq U_{i}$ is contained in the event of probability at most $2 \cdot 4 \cdot \frac{\epsilon}{\theta}$ that depends only on one $w_{x,y}$.
But then we can apply Lemma \ref{first_lemma}. All we need is to assume that the premises of Lemma \ref{small_dot_product_lemma} are satisfied. But this is the case with the probability specified in Lemma \ref{hadamard_lemma}, and this probability is taken under the random coin tosses used to produce $\mathcal{H}$ and $\mathcal{R}$, thus independently from the random coin tosses used to produce $\mathcal{D}$. Putting it all together, we obtain the statement of Theorem \ref{ext_technical_theorem}.
\subsection{Proof of Theorem \ref{short_theorem}}
We will borrow some notation from the proof of Theorem \ref{ext_technical_theorem}.
Notice however that in this setting no preprocessing with the use of matrices $\mathcal{H}$
and $\mathcal{R}$ is applied.
\begin{lemma}
\label{variance_lemma}
Define $U_{1},...,U_{k}$ as in the proof of Theorem \ref{ext_technical_theorem}.
Assume that the following is true:
$$|\sum_{u=1}^{n} s_{i_{1},u}v_{i_{1},u}| \leq \Delta,$$
$$|\sum_{u=1}^{n} s_{i_{1},u}s_{i_{2},u}| \leq \Delta,$$
$$|\sum_{u=1}^{n} v_{i_{1},u}v_{i_{2},u}| \leq \Delta,$$
$$|\sum_{u=1}^{n} s_{i_{1},u}v_{i_{2},u}| \leq \Delta.$$
for some $0<\Delta<1$.
Then the following is true for every fixed $1 \leq i < j \leq k$:
$$|\mathbb{P}(U_{i}U_{j}=1) - \mathbb{P}(U_{i}=1)\mathbb{P}(U_{j}=1)| = O(\Delta).$$
\end{lemma}
The lemma follows from exactly the same analysis as in the last section of the proof of Theorem \ref{ext_technical_theorem}, thus we leave it to the reader as an exercise.
Notice that we have:
\begin{equation}
Var(\tilde{\theta}^{n}_{p,r}) = Var(\frac{U_{1}+...+U_{k}}{k}) =
\frac{1}{k^{2}}(\sum_{i=1}^{k} Var(U_{i}) + \sum_{i \neq j} Cov(U_{i},U_{j})).
\end{equation}
Since $U_{i}$ is an indicator random variable that takes value one with probability $\frac{\theta}{\Pi}$,
we get:
\begin{equation}
Var(U_{i}) = E(U_{i}^{2}) - E(U_{i})^{2} = \frac{\theta}{\Pi}(1-\frac{\theta}{\Pi}).
\end{equation}
Thus we have:
\begin{equation}
Var(\tilde{\theta}^{n}_{p,r}) = \frac{1}{k}\frac{\theta(\Pi-\theta)}{\Pi^{2}}+\frac{1}{k^{2}}\sum_{i \neq j} Cov(U_{i},U_{j}).
\end{equation}
Notice however that $Cov(U_{i},U_{j})$ is exactly: $\mathbb{P}(U_{i}U_{j}=1) - \mathbb{P}(U_{i}=1)\mathbb{P}(U_{j}=1)$.
Therefore, using Lemma \ref{variance_lemma}, we obtain:
\begin{equation}
Var(\tilde{\theta}^{n}_{p,r}) = \frac{1}{k}\frac{\theta(\Pi-\theta)}{\Pi^{2}} + O(\Delta).
\end{equation}
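For independent gaussian rows (i.e. when all the covariance terms vanish) the leading term can be checked numerically; the sketch below (parameters ours) estimates $Var(\tilde{\theta}^{n}_{p,r})$ over repeated trials and compares it with $\frac{1}{k}\frac{\theta(\Pi-\theta)}{\Pi^{2}}$:

```python
import random
import math

random.seed(5)
k, n, trials = 50, 6, 3000
theta = math.pi / 3
p = [1.0] + [0.0] * (n - 1)
r = [math.cos(theta), math.sin(theta)] + [0.0] * (n - 2)

def estimate():
    # Fresh i.i.d. gaussian rows, so the indicators U_i are independent.
    mismatches = 0
    for _ in range(k):
        row = [random.gauss(0.0, 1.0) for _ in range(n)]
        sp = sum(a * b for a, b in zip(row, p))
        sr = sum(a * b for a, b in zip(row, r))
        mismatches += (sp >= 0) != (sr >= 0)
    return mismatches / k

samples = [estimate() for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)
theory = theta * (math.pi - theta) / (math.pi ** 2 * k)
print(var, theory)   # empirical variance vs the leading term
```

For structured rows the covariance terms do not vanish exactly, which is precisely where the $O(\Delta)$ correction enters.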
It suffices to estimate parameter $\Delta$.
We proceed as in the previous proof.
We only need to be a little bit more cautious since the condition: $|x_{i}|,|y_{i}| \leq \frac{f(n)}{\sqrt{n}}$ cannot be assumed right now.
We select two rows: $i_{1},i_{2}$ of $\mathcal{P}$.
Notice that, applying the Gram-Schmidt process again,
we can obtain a system of pairwise orthogonal vectors $\tilde{s}_{i_{1}},\tilde{s}_{i_{2}},\tilde{v}_{i_{1}},\tilde{v}_{i_{2}}$ such that
\begin{equation} \label{ineq1}\|\tilde{v}_{i_{1}}-v_{i_{2}}\|_{2} \leq \Delta, \end{equation}
and
\begin{equation}\label{ineq2}\|\tilde{s}_{i_{1}}-s_{i_{2}}\|_{2} \leq \Delta. \end{equation}
The fact that the above upper bounds are now not multiplied by $k$, as was the case in the previous proof,
plays a key role in obtaining nontrivial concentration results even when no Hadamard mechanism is applied.
We consider the related sums:\\
$E_{1} = \sum_{i=1}^{n} d_{\rho(i)}d_{\lambda(i)} x_{\zeta(i)}x_{\gamma(i)},
E_{2} = \sum_{i=1}^{n} d_{\rho(i)}d_{\lambda(i)} y_{\zeta(i)}y_{\gamma(i)},\\
E_{3} = \sum_{i=1}^{n} d_{\rho(i)}d_{\lambda(i)} x_{\zeta(i)}y_{\gamma(i)}$
as before. We can again partition each sum into at most $\chi(\mathcal{P})$ subchunks, where
this time $\chi(\mathcal{P}) \leq 3$ (since $\mathcal{P}$ is Toeplitz).
The problem is that, applying Lemma \ref{azuma_general}, we get bounds that depend on expressions of the form \begin{equation} \alpha_{x,i} = \sum_{j=1}^{n}x_{j}^{2}x_{j+i}^{2}\end{equation} and \begin{equation} \alpha_{y,i} = \sum_{j=1}^{n}y_{j}^{2}y_{j+i}^{2}, \end{equation} where indices are taken modulo $n$, and this time we cannot assume that all $|x_{i}|,|y_{i}|$ are small.
Fortunately we have:
\begin{equation}
\sum_{i=1}^{n} \alpha_{x,i} = 1
\end{equation}
and
\begin{equation}
\sum_{i=1}^{n} \alpha_{y,i} = 1.
\end{equation}
Let us fix some positive function $f(k)$. We can conclude that the number of variables $\alpha_{x,i}$
such that $\alpha_{x,i} \geq \frac{f(k)}{{k \choose 2}}$ is at most $\frac{{k \choose 2}}{f(k)}$.
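This is a Markov-type counting argument: nonnegative weights summing to one admit at most $1/t$ terms of size at least $t$. A quick Python check with an illustrative heavy-tailed weight vector:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
alpha = rng.pareto(1.0, size=n)
alpha /= alpha.sum()            # nonnegative weights summing to 1

threshold = 0.01                # plays the role of f(k) / binom(k, 2)
count = int((alpha >= threshold).sum())

# Markov-type counting: since the weights sum to 1, at most 1/threshold
# of them can be >= threshold.
assert count <= 1 / threshold
```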
Notice that each such $\alpha_{x,i}$ and each such $\alpha_{y,i}$ corresponds to a pair $\{i_{1},i_{2}\}$ of rows of the matrix $\mathcal{P}$, and consequently to the unique element $Cov(U_{i_{1}},U_{i_{2}})$ of the entire covariance sum (scaled by $\frac{1}{k^{2}}$).
Since trivially we have $|Cov(U_{i_{1}},U_{i_{2}})|=O(1)$, we conclude that the contribution of these elements to the entire covariance sum is of order $\frac{1}{f(k)}$.
Let us now consider these $\alpha_{x,i}$ and $\alpha_{y,i}$ that are at most $\frac{f(k)}{{k \choose 2}}$.
These sums are small (if we take $f(k)=o(k^{2})$), and thus it makes sense to apply Lemma \ref{azuma_general} to them. This gives an upper bound $a=\Delta$ with probability:
\begin{equation}\mathbb{P}^{*} \geq 1-e^{-\Omega(a^{2}\frac{k^{2}}{f(k)})}.\end{equation}
Taking $f(k)=(\frac{k^{2}}{\log(k)})^{\frac{1}{3}}$ and $a=\Delta = \frac{1}{f(k)}$, we conclude that:
\begin{equation}
Var(\tilde{\theta}^{n}_{p,r}) \leq \frac{1}{k}\frac{\theta(\Pi-\theta)}{\Pi^{2}} + (\frac{\log(k)}{k^{2}})^{\frac{1}{3}}
\end{equation}
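The algebra behind this choice of parameters is easy to verify numerically (a side computation, not from the proof): with $f(k)=(k^{2}/\log k)^{1/3}$ and $a=\Delta=1/f(k)$, the exponent $a^{2}k^{2}/f(k)$ in the tail bound equals exactly $\log k$.

```python
import math

for k in [10**3, 10**6, 10**9]:
    f = (k**2 / math.log(k)) ** (1 / 3)   # f(k) = (k^2 / log k)^{1/3}
    a = 1 / f                             # a = Delta = 1 / f(k)
    exponent = a**2 * k**2 / f            # the a^2 k^2 / f(k) term in the tail
    # Algebra: a^2 k^2 / f = k^2 / f^3 = log k.
    assert abs(exponent - math.log(k)) < 1e-9 * math.log(k)
```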
Thus, from Chebyshev's inequality, we get the following for every $c>0$ and fixed points $p,r$:
\begin{equation}
\mathbb{P}(|\tilde{\theta}^{n}_{p,r} - \frac{\theta}{\Pi}| \geq c (\frac{\sqrt{\log(k)}}{k})^{\frac{1}{3}}) = O(\frac{1}{c^{2}}).
\end{equation}
That completes the proof.
\end{document}
\section{Introduction}
\label{sec:Intro}
Data assimilation is a term used in the geophysical community to describe efforts to improve our knowledge of a system by combining incomplete observations with imperfect models~\cite{Apte2008}. Data assimilation is important in many fields of engineering and geophysical applications, and is an essential part of modern numerical weather prediction where it is used to initialise the forecasts based on observations of the atmosphere, combined with short term predictions~\cite{Kalnay2003}. In this field, data assimilation is uniquely challenging due to the infinite dimensionality and nonlinearity of the weather problem. Currently employed models use discretizations with \(\mathcal{O}(10^9)\) dimensional state vectors and \(\mathcal{O}(10^7)\) partial observations of the atmosphere per day~\cite{Bauer2015}. Furthermore, equations governing the dynamics of the atmosphere are well known to exhibit sensitive dependence on initial conditions~\cite{Lorenz1963,Kalnay2003}, meaning that determining them as accurately as possible is a key factor in increasing the length of the forecasting horizon.
Combining noisy data with uncertain models is an inverse problem whose optimal solution is necessarily probabilistic and sits naturally in a Bayesian framework~\cite{Law2012} and \cite{Kalnay2003}, Sec.~5.5. Due to the nonlinear nature of the underlying equations, deriving an explicit form for the posterior distribution is in general not possible~\cite{Stuart2010}. A sufficiently precise numerical representation (e.g.~by MCMC methods or particle filters~\cite{VanLeeuwen2009}) of the solution is very computationally expensive and not currently feasible in operational weather forecasting~\cite{Law2012}, although this is a promising area of research~\cite{VanLeeuwen2015}. Therefore, the data assimilation schemes used in practice are approximations based on exact schemes derived for linear systems with Gaussian priors and additive Gaussian noise, known as the Kalman filter~\cite{Law2015}. The schemes are applied to the nonlinear dynamics sequentially with various further simplifications, the simplest of which is to assume constant prior covariance. This is known as the 3DVAR method \cite{Kalnay2003}, Sec.~5.5. A more advanced method, the ensemble Kalman filter, involves an evolving prior covariance, estimated through the use of ensembles, that is, several simultaneous runs of the data assimilation cycle using a set of perturbed observations \cite{Evensen}. Although clearly used with great success~\cite{Bauer2015}, these are nonetheless ad hoc approximations, and a satisfactory understanding of their fundamental properties is still lacking.
A rigorous study of data assimilation in the context of the full primitive equations (a reasonable model of atmospheric circulation~\cite{Vallis}) is currently out of scope. There has been extensive study of a simpler but still infinite dimensional model; the 2D viscous, incompressible Navier-Stokes (N-S) equations. Other models typically studied in the context of data assimilation in geophysical applications (see e.g.~\cite{Lorenz1996,Law2014,Law2016,Sanz-Alonso2014}) are the Lorenz~'63 and Lorenz~'96 models, as they exhibit many of the properties of the \mbox{N-S} equations such as being dissipative with a quadratic and energy conserving nonlinearity, while having the advantage of being finite dimensional. Fortunately some remarkable properties of the 2D \mbox{N-S} equations have been known for some time. It was first shown by C.~Foias and G.~Prodi in 1967~\cite{Foias1967} that the solution is completely determined by the temporal evolution of some finite number of spatial Fourier modes, which have since been named the ``determining modes". Subsequent work~\cite{Foias1984,Jones1992} showed that this also holds for a finite set of appropriately chosen nodal values.
More recent work re-frames these results in the context of data assimilation~\cite{Olson2003,Hayden2011}, and shows that certain data assimilation schemes have zero asymptotic error even with only finitely rank observations. Hayden, Olson and Titi~\cite{Hayden2011} consider the Lorenz~'63 and \mbox{N-S} equations with a data assimilation scheme where noiseless observations are directly replaced into the approximating solution at discrete times. Their result shows that for a sufficiently large number of observed low modes, the higher modes synchronise, that is, the error goes to zero with the number of assimilation cycles.
In~\cite{Brett2013}, Brett et al.\ build on the results in~\cite{Hayden2011} by allowing for observational errors and using the 3DVAR algorithm. They show that for bounded observational errors, the asymptotic (\(t \to \infty\)) error between the approximating solution and the true state of the atmosphere is bounded, and of the same order of magnitude as the bound on the noise. The same result is obtained in \cite{Foias2016} for another type of data assimilation scheme, which is related to the once widely used ``nudging'' schemes. Therefore, in both papers, the overall error is driven by the error in the observations, regardless of the initial error. Furthermore, this result is obtained pointwise, that is, it is true for any realization of the noise. The stochastic properties of the observational errors, however, do not enter into the derivation of the bound, except for the boundedness, which is essential.
In~\cite{Law2014,Law2016}, results are obtained in expectation for unbounded noise for the Lorenz~'96 and~'63 models, respectively. They show that for the 3DVAR scheme, the mean square of the error is of the same order of magnitude as the variance of the noise. In~\cite{Sanz-Alonso2014}, Sanz-Alonso and Stuart extend this result, in expectation, to a wide class of dissipative PDEs, including infinite dimensional systems, that satisfy certain properties; the ``absorbing ball" property and the ``squeezing property''. As is noted in~\cite{Brett2013}, in a remark after Assumption 3.1, there is essentially a trade-off to be made between having bounded noise, with pointwise bounds, and unbounded noise, where similar techniques lead to results in expectation.
The main objective of the present paper is to investigate whether data assimilation into certain dissipative systems of PDEs is well behaved. Our approach is based on the works of~\cite{Hayden2011},~\cite{Brett2013} and~\cite{Sanz-Alonso2014}. In those publications, results regarding data assimilation accuracy with unbounded noise are given in expectation, while in the present paper we derive (almost surely) pointwise bounds, even for unbounded noise. More specifically, we prove that for large time, the error is bounded by a finite and stationary process, and give an explicit description of this process in terms of the observation noise. Technically, there are realisations of the noise for which this bound fails, but these have zero probability, and hence are statistically irrelevant.
We use the \color{black} simple replacement data assimilation scheme as studied by Titi et al.\ in~\cite{Hayden2011}, although we expect our result to be extendible to 3DVAR type algorithms as described in \cite{Brett2013}\color{black}. We require assumptions similar to the absorbing and squeezing properties of~\cite{Sanz-Alonso2014} but with some crucial differences. We allow the squeezing function to be random, and require only that its expectation is less than one. We are then able to apply Birkhoff's Ergodic Theorem to show that the squeezing function is sufficiently often less than one to give us a bound which is pointwise finite (\cref{th:Main,th:Main2}). The result holds for any strength of the noise, given by the variance \(\sigma^2\), and furthermore, the bound decreases as the variance of the noise is decreased. Therefore the data assimilation error (for large time) is at least proportional to the strength of the noise. As in~\cite{Sanz-Alonso2014}, we test our assumptions on two finite dimensional systems: Lorenz~'63 and~'96, before turning to the infinite dimensional \mbox{N-S} system.
The paper is organised as follows. In \cref{Sec:SetUp} we describe the dynamical system framework, the data assimilation scheme, and the assumptions we require on the observation error. Observations at each data assimilation time are assumed to contain a random error, the nature of which we keep as general as possible. In particular, we do not require i.i.d.\ or bounded noise, just that the noise is stationary and ergodic. In \cref{sec:Assump}, we set out general assumptions on the dynamical systems needed for our main result, \cref{th:Main}, the theorem itself and the proof. In \cref{Sec:Apriori}, we investigate the properties of an apriori bound we derive for the dissipative systems considered in this paper. In \cref{sec:Finite} we show that our assumptions are satisfied by a large class of finite dimensional dissipative systems provided they satisfy certain properties. We discuss the Lorenz~'63 and~'96 models as examples of such systems. In \cref{sec:NS}, we prove that the N-S equations satisfy the Assumptions of \cref{th:Main} as well.
\section{The data assimilation problem}
\label{Sec:SetUp}
\subsection{Dissipative dynamical system}
Informally, we think of an equation as being ``dissipative'' if all solutions are eventually bounded and this bound is uniform in the initial condition. Formally, a semigroup is dissipative if it possesses a compact absorbing set \cite{Robinson2001}.
Let \(\mathbf{H}\) be a Hilbert space with \(| \ . \ |\) the induced norm. Let \(U\) be the solution of a dissipative system with initial conditions \(U_0\) at \(t_0\) and let \(\psi\) be the continuous semi-flow defined by
\begin{equation}\label{eq:Sol}
U(t) = \psi(t,t_0,U_0),
\end{equation}
where \[\psi(t,t_0,U_0) = \psi(t,s,\psi(s, t_0,U_0)) \ \ \text{for} \ t \geq s \geq t_0, \ \ \ \ \text{(the semigroup property)},\] and \[\psi(t,t, U(t))=U(t) \] for all real \(t\), such that \(\psi\) is continuous in \(t\) and with respect to the initial condition \(U_0\).
We assume that this dynamical system is a perfect representation of the real world system we are interested in; for instance the atmosphere, and we refer to $ U $ as the ``reference" solution.
\subsection{Data assimilation}
As mentioned in the introduction, we will be using a simple data assimilation method as defined \color{black} by Titi et al \color{black} in \cite{Hayden2011} but with noise added at each discrete data assimilation time.
Let \(\mathbf{O}_P\), the observation \textit{space}, be a finite dimensional subspace of \(\mathbf{H}\) and \(P\) the orthogonal projection onto \(\mathbf{O}_P\).
An observation at time \(t_n\) is given by \(PU(t_n) + \sigma R_n\), where \(\sigma R_n\) is the noise, or random error, in the observation. We will define \(R_n\) more precisely in \cref{sec:Observations}. We assume that \(R_n\) is a random variable with values in \(\mathbf{O}_P\) so that \(PR_n = R_n\).
\color{black}
We note that the observations as defined above are restricted to being finite in number, and the observation space is restricted to a linear transformation of the model space. In weather prediction, however, this is often not the case; the observation operator can be highly non-linear, as is the case, for example, with satellite observations. Restricting to a linear observation operator also means that the additive nature of the noise is preserved.
\color{black}
The approximating solution of the discrete data assimilation scheme that we use is obtained as follows. Initially at \(t_0 = 0\) we have, \[\bar{u}_0 = \eta + PU_0+ \sigma R_0,\] where \(\eta\) is the initial guess of the unobserved part of the solution. Then at discrete times \(0 < t_1 < t_2 < ...\) we set
\begin{equation}\label{eq:Approx}
\bar{u}_{n} = Q\psi(t_{n},t_{n-1},\bar{u}_{n-1}) + PU(t_{n})+ \sigma R_{n},
\end{equation}
where $ Q=I-P $ is the projection onto unobserved space.
At intermediate times \(t_n \leq t < t_{n+1}\), the approximating solution \(u(t)\) is a continuous in time function defined by
\begin{equation}\label{eq:Approx_Sol}
u(t) = \psi(t,t_n,\bar{u}_n) \ \text{for} \ t \in [t_n,t_{n+1}).
\end{equation}
We note that \(u\) is continuous on each interval $ [t_n,t_{n+1}) $ but has discontinuities at \({t_n, n \in \mathbb{N}}\), with \(u\) continuous from the right and with limits to the left, since
\[u(t_n^+) = \lim_{t \to t_n^+}\psi(t,t_n,\bar{u}_n) = \bar{u}_n = u(t_n),\] while
\[u(t_n^-) = \lim_{t \to t_n^-}\psi(t,t_{n-1},\bar{u}_{n-1}) = \psi(t_n,t_{n-1},\bar{u}_{n-1}) \neq \bar{u}_n.\]
We are interested in the data assimilation error \(\delta(t)\), which is the difference between the reference and approximating solutions described above. In particular, we are interested in the asymptotic behaviour as \(t \to \infty\). Like the approximating solution, \(\delta(t)\) is piece-wise continuous in time and defined by
\begin{equation}\label{eq:error}
\delta(t) = U(t) - u(t) =\psi(t,t_0,U_0) - \psi(t,t_n,\bar{u}_n)
\end{equation}
in the interval \([t_n,t_{n+1})\). At \(t_n\) we have \[\delta_n := \delta(t_n) = U(t_n) - \bar{u}_n =Q\psi(t_{n},t_0,U_0)-Q\psi(t_{n},t_{n-1},\bar{u}_{n-1}) - \sigma R_{n}. \]
For simplicity, we assume that the time between observational updates (the data assimilation interval), \[h = t_{n+1}-t_n >0 \] is constant.
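To make the scheme concrete, here is a small, self-contained Python illustration (not part of the analysis) of the replacement update~\cref{eq:Approx} on the Lorenz~'63 system, with \(P\) projecting onto the first two components. The RK4 integrator stands in for the exact semi-flow \(\psi\), and the values of \(h\), \(\sigma\), the initial guess and all model parameters are illustrative assumptions.

```python
import numpy as np

def lorenz63(u, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = u
    return np.array([s * (y - x), x * (r - z), x * y - b * z])

def flow(u, h, dt=0.001):
    # Classical RK4 integrator standing in for the semi-flow psi(t_n + h, t_n, u).
    for _ in range(int(round(h / dt))):
        k1 = lorenz63(u)
        k2 = lorenz63(u + 0.5 * dt * k1)
        k3 = lorenz63(u + 0.5 * dt * k2)
        k4 = lorenz63(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u

rng = np.random.default_rng(2)
P = np.diag([1.0, 1.0, 0.0])   # observe x and y; z is unobserved
Q = np.eye(3) - P
h, sigma = 0.05, 1e-3          # update interval and noise level (assumed values)

U = flow(np.array([1.0, 1.0, 1.0]), 10.0)   # reference solution on the attractor
u = Q @ (U + 10.0) + P @ U                  # bar u_0: wrong guess eta for Q U_0

errors = []
for n in range(200):
    U = flow(U, h)
    R = P @ (sigma * rng.standard_normal(3))   # noise in the observed space
    u = Q @ flow(u, h) + P @ U + R             # the replacement update
    errors.append(np.linalg.norm(U - u))

# After a transient the error settles at the order of the noise,
# not the initial O(10) mismatch in the unobserved component.
assert errors[-1] < 0.1 and errors[-1] < errors[0]
```

The unobserved \(z\)-component synchronises because, driven by the true \((x,y)\), its error equation is linear and stable; this is the discrete analogue of the behaviour analysed in the paper.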
\subsection{Observations}\label{sec:Observations}
As we will be considering the asymptotic data assimilation error, we will be looking at a sequence of noise realisations that extends infinitely forward in time; in fact, it will be useful to extend it backward in time as well.
Let \((\Omega,\mathscr{F},\mathbb{P})\) be a probability space and \(T:\Omega \rightarrow \Omega\) a measure preserving map such that \(T\) and \(T^{-1}\) are ergodic with respect to $ \mathbb{P} $. Let \(R:\Omega \rightarrow \mathbf{O}_P\) be a random variable on \((\Omega,\mathscr{F})\) and denote \(R_n = R \circ T^n\); a sequence of random variables, with \(n \in \mathbb{Z}\). \(R_n\) will serve to model the noise in the observations at time \(t_n\).
We let \[\bar{R}:(\Omega, \mathscr{F}) \to (\mathbf{O}_P^{\infty},\mathscr{B}_{\infty}) \] be given by \[\omega \to (..R_{-1}(\omega), R_{0}(\omega),R_{1}(\omega)...).\] This is a measurable map and represents a realisation of the noise for all time, extending to infinite past and future. We denote the probability distribution of \(\bar{R}\) by \(P_{\bar{R}}\).
We note that with \(T\) measure preserving, \(R_n\) is a strictly stationary sequence (see e.g.~\cite{Breiman1992}, Proposition 6.9.\ for proof). We further assume that \(\mathbb{E}(R)\) = 0 and \(\mathbb{E}(|R|^2)=1\) and we model the random noise in our observation at time \(t_n\) as \(\sigma R_n\), where \(\sigma \in \mathbb{R^+}\). Therefore \(\sigma^2\) is the variance of the observation noise. If \(R\) were to have non-zero mean, this would represent a systematic error.
As an example, suppose that the \(R_n\) are i.i.d random variables with \(T: \mathbf{O}_P^{\infty} \rightarrow \mathbf{O}_P^{\infty} \) being the shift map defined by \((T^k(\bar{r}))_n = r_{n+k}\) for \(\bar{r} \in \mathbf{O}_P^{\infty}\). Then the distribution \(P_{\bar{R}}\) of \(\bar{R}\) is the product probability and \((\mathbf{O}_P^{\infty},\mathscr{B}_{\infty},P_{\bar{R}})\) is the canonical probability model\footnote{Since the distribution of the process contains all the information we are interested in, we have discarded the original process on \(\Omega\) and have represented it in term of the coordinate representation process instead on \(\mathbf{O}_P^{\infty}\). }. It can be shown that \(T\) is measure preserving and \(T\), \(T^{-1}\) are ergodic. The proof is similar to the Kolmogorov zero-one law~\cite{Breiman1992}, Theorem 3.12.
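As a concrete instance beyond the i.i.d.\ case, a stationary and ergodic (but unbounded) noise sequence can be generated by an AR(1) process. The following sketch, with illustrative parameters, checks numerically that time averages along the sequence recover \(\mathbb{E}(R)=0\) and \(\mathbb{E}(|R|^2)=1\), as Birkhoff's theorem guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)
N, phi = 200_000, 0.8

# AR(1): R_{n+1} = phi R_n + sqrt(1 - phi^2) W_n, with i.i.d. standard normal
# W_n, is stationary and ergodic, with E R = 0 and E |R|^2 = 1, but its
# marginals are Gaussian, hence unbounded.
W = rng.standard_normal(N)
R = np.empty(N)
R[0] = rng.standard_normal()
for k in range(N - 1):
    R[k + 1] = phi * R[k] + np.sqrt(1 - phi**2) * W[k]

# Birkhoff (time) averages converge to the expectations.
assert abs(R.mean()) < 0.05
assert abs((R**2).mean() - 1.0) < 0.05
```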
\section{Assumptions and main result}
\label{sec:Assump}
In this section we state the main assumptions that we will need in order to prove our main result, \cref{th:Main}. Assumption~\ref{as:one} requires the existence of an absorbing ball which is natural to dissipative systems. Assumption~\ref{as:two} can often be deduced from the same estimates that give us Assumption~\ref{as:one}, as is demonstrated in \cref{lemma:apriori}, and is an a priori bound on the error dynamics. Assumptions~\ref{as:three} and~\ref{as:four} are generally more difficult to prove, particularly Assumption~\ref{as:four} in the presence of unbounded random error. They represent a kind of contraction or squeezing on the unobserved part of the dynamics.\\
\newtheorem{myAs}{Assumption}
\newtheorem{myPrp}{Property}
\begin{myAs}\label{as:one}
\textbf{(Absorbing ball property)} There exists \(K > 0\), depending on the dynamical system, such that the ball \(\mathscr{B} = \{U; |U|^2\leq K\}\) is absorbing and forward invariant.
\end{myAs}
\begin{myAs}\label{as:two} \textbf{(A priori bound)}
For all \(\sigma, h >0\), there exists a measurable function \(\rho_0:\mathbb{R^+}\times \mathbb{R^+} \times \Omega \to \mathbb{R^+}\) with \[|\delta_n|^2 \leq \rho_0(h,\sigma) \circ T^n(\omega):=\rho_n\] such that \(\rho_n\) is a continuous monotone increasing function of \(\sigma\).
\end{myAs}
\begin{myAs} \label{as:three}
There exist continuous functions \(M, \gamma:\mathbb{R^{+}} \times \mathbb{R^{+}} \rightarrow \mathbb{R^{+}}\) such that whenever \(U \in \mathscr{B}\) and \(|U-V|\leq \rho\),
\[|Q\{\psi(t+\tau, t, U)-\psi(t+\tau, t, V)\}|^2 \leq M(\tau, \rho)|Q(U-V)|^2 + \gamma(\tau, \rho)|P(U-V)|^2.\]
\end{myAs}
\textbf{Remark}: Without loss of generality we can assume that \(M\) and \(\gamma\) are not decreasing in \(\rho\) because we can always replace \(M, \gamma\) by functions that are larger and not decreasing.
\begin{myAs} \label{as:four}
With \(\rho_0\) as in Assumption~\ref{as:two} and \(M(\tau,\rho)\) and \(\gamma(\tau, \rho)\) as in Assumption~\ref{as:three}; for every \(\sigma>0\) there exists an \(h > 0\), such that \[\mathbb{E}M(h,\rho_0(h,\sigma)) < 1, \] and \[\mathbb{E}\gamma(h,\rho_0(h,\sigma)) < \infty.\]
\end{myAs}
\textbf{Remark}: We note that for any measurable function \(f:\mathbb{R} \to \mathbb{R}\), the process \(f \circ \rho_n \) is stationary and ergodic, since \(T\) is assumed to be measure preserving and ergodic.
In particular, we can write,
\begin{equation}
M_n(\tau):=M (\tau, \rho_n) = M_0(\tau, \rho_0) \circ T^n(\omega), \label{eq:MT}
\end{equation}
and
\begin{equation}
\gamma_n(\tau):=\gamma (\tau, \rho_n) = \gamma_0(\tau,\rho_0) \circ T^n(\omega). \label{eq:GammaT}
\end{equation}
\label{sec:MainResult}
We now state the main result of the paper.
\begin{theorem}\label{th:Main} Suppose Assumptions~\ref{as:one} to~\ref{as:four} hold. Let $ \sigma^*>0 $ and take \(h>0\) as in Assumption~\ref{as:four} with \(\sigma^*\) instead of \(\sigma\). Then there exists a stationary and a.s.\ finite process \(C_n\), a \color{black} non-negative \color{black} constant \(\bar{\beta} < 1\) and a random variable \(D\), such that for all \(\sigma < \sigma^*\), the error \(\delta_n = U(t_n) - u(t_n)\) satisfies
\begin{equation}
|\delta_n|^2 \leq \sigma^2C_n+D\bar{\beta}^n|QU_0-\eta|^2,
\end{equation}
almost surely. In particular,
\begin{equation} \label{eq:MainEq}
\limsup_{n}\Big(|\delta_n|^2 - \sigma^2 C_n \Big)\leq 0,
\end{equation}
a.s., where \(C_n\), \(\bar{\beta}\) and \(D\) are given in the proof by Equations \eqref{eq:middle_term}, \eqref{eq:C_n} and~\eqref{eq:B_explicit}.
In particular, \(C_n, \bar{\beta} \) and \(D\) only depend on \(\sigma^*\).
\end{theorem}
\Cref{th:Main} shows that, for almost all realisations of the noise, at any data assimilation update time \(t_n\), the error \(\delta_n\) is bounded. In addition, asymptotically for large time, the bound is given by \(\sigma^2C_n\) which constitutes a stationary process so that its distribution is time independent. Furthermore as \(\sigma \to 0\) the bound decreases to zero like \(\sigma^2\).
To get a bound for intermediate times \(t \in (t_n, t_{n+1})\), we require a further assumption.
\begin{myAs}\label{as:five}
There exists a constant \(\kappa>0\) such that \(|\delta(t)|^2 \leq e^{\kappa(t-t_n)}|\delta_n|^2\) for \(t \in [t_n,t_{n+1}).\)
\end{myAs}
We can easily see that if Assumption~\ref{as:five} holds, then the following modified version of \cref{th:Main} follows.
\begin{theorem}\label{th:Main2} Suppose Assumptions~\ref{as:one} to~\ref{as:five} hold. Let $ \sigma^*>0 $ and take \(h>0\) as in Assumption~\ref{as:four} with \(\sigma^*\) instead of \(\sigma\). Then there exists a stationary and a.s.\ finite process \(C_n\), a \color{black} non-negative \color{black} constant \(\bar{\beta} < 1\) and a random variable \(D\), such that for all \(\sigma < \sigma^*\), the error \color{black} \(\delta(t) = U(t) - u(t)\) with \(t \in [t_n, t_{n+1}):=I\)\color{black} \ satisfies
\begin{equation}
|\delta(t)|^2 \leq (\sigma^2C_n+D\bar{\beta}^n|QU_0-\eta|^2)e^{\kappa h},
\end{equation}
almost surely. In particular,
\color{black}
\begin{equation}
\limsup_{n}\Big[\sup_{t\in I}\Big(|\delta(t)|^2 - e^{\kappa h}\sigma^2 C_n \Big)\Big]\leq 0,
\end{equation}
\color{black}
a.s., where \(C_n\), \(\bar{\beta}\) and \(D\) are given in the proof by Equations \eqref{eq:middle_term}, \eqref{eq:C_n} and~\eqref{eq:B_explicit}.
In particular, \(C_n, \bar{\beta}\) and \(D\) only depend on \(\sigma^*\).
\end{theorem}
\begin{comment}
\begin{theorem}\label{th:Main2}
Under Assumptions~\ref{as:one} to~\ref{as:five}, there exists a stationary, a.s.\ finite process \(C_n\) such that \(\delta(t) = U(t)-u(t)\) satisfies \[\limsup_{n \to \infty}\Big(|\delta(t)|^2 - e^{\kappa h}\sigma^2 C_n \Big)\leq 0,\] with \(C_n\) as given in \cref{th:Main}.
\end{theorem}
\end{comment}
Before turning to the proof of the main result, we require some lemmas.
\begin{lemma} \label{Lemma:Meq}
Under Assumptions~\ref{as:one} to \ref{as:three}, \(\delta_n=U(t_n)-u(t_n)\) satisfies
\begin{equation}\label{eq:Meq}
|\delta_{n}|^2 \leq \sigma^2\sum_{l=1}^{n} \prod_{k=l}^{n-1}M_{k}|R_{l-1}|^2\gamma_{l-1} +\prod_{k=0}^{n-1}M_{k}|QU_0-\eta|^2 + \sigma^2|R_{n}|^2,
\end{equation}
where \(M_k:= M(h,\rho_k(h))\) and \(h=t_{n+1} - t_n\) is the update interval.
\end{lemma}
\begin{proof}
By Assumption~\ref{as:one} we have that the solution \(U(t) \in \mathscr{B} \) for some \(t>0\). Without loss of generality we can assume that \(U(t_0) \in \mathscr{B} \). Then, \(U(t_n) \in \mathscr{B} \), by the forward invariance of \(\mathscr{B}\). Furthermore, by Assumption~\ref{as:two}, for any \(h>0\), we have a stationary process \(\rho_n\) such that \(|\delta_n|^2 \leq \rho_n\) for all \(n \in \mathbb{N}\). Therefore we can apply Assumption~\ref{as:three} at each update time \(t_n\). Let \(t \in [t_n,t_{n+1})\), \(U = U(t_n), V=u(t_n)\), and \(M_n(\tau)\)(respectively $ \gamma_n(\tau) $) be as in Equation~\cref{eq:MT} (respectively Eq.~\cref{eq:GammaT}) where \(\tau = t-t_n \in [0, h)\). We obtain
\color{black}
\begin{align*}
|Q \delta_{n+1}|^2 & = \lim_{t \to t_{n+1}} |Q \delta(t)|^2 \\
& \leq \lim_{t \to t_{n+1}} M_n(t-t_n)|Q\delta_{n}|^2 + \sigma^2\gamma_n( t-t_n)|R_{n}|^2 \\
& = M_n(h)|Q\delta_{n}|^2+ \sigma^2\gamma_n( h)|R_{n}|^2 ,
\end{align*}
\color{black}
where we have used the continuity of \color{black} \(Q\delta(t)\) \color{black} at \(t_{n+1}\). Write \(M_n: = M_n(h)\) and \(\gamma_n: = \gamma_n(h)\) for simplicity.
By induction on the above,
\iffalse
\begin{align*}
|Q\delta_{n}|^2 & \leq M_{n-1}|Q\delta_{n-1}|^2 + \sigma^2|R_{n-1}|^2 \gamma_{n-1}\\
& \leq M_{n-1}M_{n-2}|Q\delta_{n-2}|^2 + M_{n-1}\sigma^2|R_{n-2}|^2\gamma_{n-2}+ \sigma^2 |R_{n-1}|^2\gamma_{n-1} \\
& ...\\
& \leq \prod_{k=0}^{n-1}M_{k}|Q\delta_{0}|^2 +\sigma^2\prod_{k=1}^{n-1}M_{k}|R_{0}|^2\gamma_0+ ...+\sigma^2\prod_{k=n-1}^{n-1}M_{k}|R_{n-2}|^2\gamma_{n-2}+\sigma^2|R_{n-1}|^2\gamma_{n-1}\\
& = \sigma^2\sum_{l=1}^{n-1} \prod_{k=l}^{n-1}M_{k}|R_{l-1}|^2\gamma_{l-1} +\prod_{k=0}^{n-1}M_{k}|QU_0-\eta|^2 + \sigma^2|R_{n-1}|^2\gamma_{n-1},
\end{align*}
since \(|Q\delta_0|^2 = |QU_0- \eta|^2\).
Finally,
\begin{align*}
|\delta_n|^2 &=|Q\delta_n|^2+|P\delta_n|^2\\ &\leq \sigma^2\sum_{l=1}^{n-1} \prod_{k=l}^{n-1}M_{k}|R_{l-1}|^2\gamma_{l-1} +\prod_{k=0}^{n-1}M_{k}|QU_0-\eta|^2 + \sigma^2\gamma_{n-1}|R_{n-1}|^2 + \sigma^2|R_{n}|^2
\end{align*}
as required.
\fi
\begin{equation*}
|Q\delta_{n}|^2 \leq \sigma^2\sum_{l=1}^{n} \prod_{k=l}^{n-1}M_{k}|R_{l-1}|^2\gamma_{l-1} +\prod_{k=0}^{n-1}M_{k}|QU_0-\eta|^2,
\end{equation*}
since \(|Q\delta_0|^2 = |QU_0- \eta|^2\) and we define \(\prod_{k=n}^{n-1}M_{k}=1\).
Finally, using that \(|P\delta_n|^2 = \sigma^2|R_n|^2\),
\begin{align*}
|\delta_n|^2 &=|Q\delta_n|^2+|P\delta_n|^2,\\ &\leq \sigma^2\sum_{l=1}^{n} \prod_{k=l}^{n-1}M_{k}|R_{l-1}|^2\gamma_{l-1} +\prod_{k=0}^{n-1}M_{k}|QU_0-\eta|^2 + \sigma^2|R_{n}|^2,
\end{align*}
as required. \end{proof}
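The unrolling of the recursion used in the induction can be checked numerically. The following sketch iterates \(q_{k+1}=M_k q_k+\sigma^2\gamma_k|R_k|^2\) with arbitrary positive sequences (illustrative stand-ins for \(M_k\), \(\gamma_k\) and \(|R_k|^2\)) and compares the result with the closed-form sum of \cref{eq:Meq}, without the final \(\sigma^2|R_n|^2\) term that accounts for \(|P\delta_n|^2\).

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 12, 0.3
M = rng.uniform(0.2, 0.9, size=n)       # stand-ins for M_k = M_k(h)
gamma = rng.uniform(0.5, 2.0, size=n)   # stand-ins for gamma_k(h)
R2 = rng.standard_normal(n) ** 2        # stand-ins for |R_k|^2
q0 = 1.7                                # |Q delta_0|^2 = |Q U_0 - eta|^2

# Iterate the one-step recursion q_{k+1} = M_k q_k + sigma^2 gamma_k |R_k|^2 ...
q = q0
for k in range(n):
    q = M[k] * q + sigma**2 * gamma[k] * R2[k]

# ... and compare with the unrolled closed form (the empty product is 1).
closed = sigma**2 * sum(
    np.prod(M[l:n]) * R2[l - 1] * gamma[l - 1] for l in range(1, n + 1)
) + np.prod(M) * q0

assert abs(q - closed) < 1e-10
```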
To obtain a meaningful bound as stated in \cref{th:Main}, we need the RHS of estimate~\cref{eq:Meq} to be almost surely finite in the long term. This would clearly be the case if \(M_k\) were less than one for all \(k\) (with some conditions on \(\gamma_n\)). Unfortunately, since the a priori bound is stochastic, the \(M_k\) are also stochastic and it is not, in general, possible to guarantee that \(M_k< 1\) for all \(k\), whatever the value of \(h\). However, we are able to use the Ergodic Theorem to show that \(\mathbb{E}(M_k)<1\) ensures \(M_k<1\) often enough to guarantee that estimate~\cref{eq:Meq} is almost surely finite. That is, for almost all realizations of the sequence \(\{M_k\}_k\), the proportion of \(M_k<1\) is sufficient to ensure that the product is less than 1.
\begin{lemma} \label{Lemma:Mprod}
For any real \(\xi >0\), there exist almost surely finite random variables \(C_{\omega, \xi}\) and \( C_{\omega, \xi}^{'}\), such that for all \(N>0\)
\begin{equation}\label{eq:Mprodpos}
\prod_{k=0}^{N-1} M_{-k} \leq C_{\omega, \xi}(\beta + \xi)^N,
\end{equation}
\begin{equation}\label{eq:Mprodneg}
\prod_{k=0}^{N-1} M_{k} \leq C_{\omega, \xi}^{'}(\beta + \xi)^N,
\end{equation}
where
\begin{equation}\label{eq:C}
C_{\omega, \xi}:=\max_{N} \frac{\prod_{k=0}^{N-1} M_{-k}}{(\beta + \xi)^N},
\end{equation}
\begin{equation}\label{eq:C'}
C_{\omega, \xi}^{'}:=\max_{N} \frac{\prod_{k=0}^{N-1} M_{k}}{(\beta + \xi)^N},
\end{equation}
where \(\{M_k\}\) is as in \cref{Lemma:Meq} and
\(\beta = \mathbb{E}(M_k)\).
\end{lemma}
\begin{proof}
Assuming \(\log{M_0(\omega)}\) is measurable we can apply the Ergodic Theorem~\cite{Walters, Breiman1992} to \(T^{-1}\) to obtain
\begin{align}\label{eq:ergodic_eq}
\lim_{n \to \infty} \frac{1}{n}\sum_{k=0}^{n-1} \log{M_{-k}(\omega)} & = \lim_{n \to \infty} \frac{1}{n}\sum_{k=0}^{n-1} \log{M_0(\omega) \circ T^{-k}(\omega)} \nonumber \\ & = \mathbb{E}(\log{M_0(\omega)}) \nonumber \\ & \leq \log{\mathbb{E}(M_0(\omega)),}
\end{align}
where the last inequality follows from Jensen's Inequality.
We note that we did not require that \(\log{M_0(\omega)}\) is integrable as we can apply the Ergodic Theorem to random variables that are either bounded below or above. In the present case, \(\log{M_{0}(h,\omega)}\) could be unbounded below but we may replace it with \(\bar{M_0}(h,\omega)=\max(\epsilon,M_0(h,\omega))\) for some small \(\epsilon > 0\) and apply the Ergodic Theorem to \(\log{\bar{M_0}(h,\omega)}\).
Let \(\beta =\mathbb{E}(M_k)\). From Inequality~\cref{eq:ergodic_eq} we have that for a.e.\ \(\omega\), for all \(\xi > 0,\) there exists \(\ N_{\omega,\xi}\) such that for all \(n\geq N_{\omega,\xi}\),
\[\frac{1}{n}\sum_{k=0}^{n-1} \log{M_{-k}} \leq \log(\beta + \xi),\] and hence
\[ \prod_{k=0}^{n-1} M_{-k} \leq (\beta + \xi)^n.\] This implies
\begin{equation}\label{eq:Mprodfinite}
\frac{\prod_{k=0}^{n-1} M_{-k}}{(\beta + \xi)^n} \leq 1.
\end{equation}
Next, we note that for all \(N>0\) it holds that
\begin{align*}
\prod_{k=0}^{N-1} M_{-k} & = \frac{\prod_{k=0}^{N-1} M_{-k}}{(\beta + \xi)^N} (\beta + \xi)^N\\
& \leq C_{\omega, \xi} (\beta + \xi)^N,
\end{align*}
where \[C_{\omega, \xi}:=\max_{N} \frac{\prod_{k=0}^{N-1} M_{-k}}{(\beta + \xi)^N}. \]
\(C_{\omega, \xi}\) is finite for a.e.\ \(\omega\) since by Inequality~\cref{eq:Mprodfinite} it is less than 1 for large enough \(N\). To get estimate~\cref{eq:Mprodneg}, we repeat the above proof with \(k\) in place of \(-k\), using the ergodicity and \(\mathbb{P}\)-invariance of \(T\). \end{proof}
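The mechanism behind \cref{Lemma:Mprod} is easy to see numerically: even when \(M_k>1\) occurs with positive probability, \(\mathbb{E}(M_k)=\beta<1\) forces the partial products to decay geometrically, so the normalising constant \(C_{\omega,\xi}\) is finite. A Python sketch with an illustrative i.i.d.\ choice of \(M_k\):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5000
# Illustrative i.i.d. M_k: M_k > 1 happens 30% of the time, yet E M_k < 1.
M = rng.choice([1.5, 0.4], p=[0.3, 0.7], size=N)
beta = 0.3 * 1.5 + 0.7 * 0.4           # E M_k = 0.73
xi = 0.1

log_prod = np.cumsum(np.log(M))        # log of prod_{k=0}^{n-1} M_k, n = 1..N
bound = np.arange(1, N + 1) * np.log(beta + xi)

# C = max_n prod_k M_k / (beta + xi)^n is finite, and the inequality of the
# lemma, prod_k M_k <= C (beta + xi)^n, then holds for every n.
C = np.exp((log_prod - bound).max())
assert np.isfinite(C)
assert np.all(log_prod <= np.log(C) + bound + 1e-9)
# The products themselves decay geometrically (here E log M_k < log(beta + xi)).
assert log_prod[-1] < -100
```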
\begin{lemma} \label{Lemma:Mterm}
Let \(\chi_n(\omega) = \chi_0 \circ T^{n}(\omega)\) be a sequence of random variables and let \[ E_{n,m}: =\sum_{l=m}^{n}\Big(\prod_{k=l}^{n}M_{k}\Big)\chi_l\] for \(n>m\).
Then \[E_{n,0} = E_{0,-n} \circ T^{n}.\]
\end{lemma}
\begin{proof}
\begin{align*}
E_{n,0}(\omega) & =\sum_{l=0}^{n}\Big(\prod_{k=l}^{n}M_{k}\Big)\chi_l(\omega) \\
& = \sum_{l=-n}^{0}\Big(\prod_{k=l}^{0}M_{k+n}\Big)\chi_{l+n}(\omega) \\
& = \sum_{l=-n}^{0}\Big(\prod_{k=l}^{0}M_{k} \circ T^{n}(\omega)\Big) \Big(\chi_n \circ T^{n}(\omega)\Big)_l \\
& = E_{0,-n} \circ T^{n} (\omega).
\end{align*}
\end{proof}
\begin{lemma}\label{Lemma:Mterm2} Let \(\chi_n(\omega) = \chi_0 \circ T^{n}(\omega)\) be non-negative random variables with finite expectation, and suppose Assumption \ref{as:four} holds. Then \[ \sum_{l=0}^{n} \prod_{k=l}^{n-1}M_{k}\chi_l \leq B_\xi \circ T^{n-1}, \]
where
\begin{equation}\label{eq:beta_xi}
B_\xi= C_{\omega, \xi}\sum_{l=0}^{\infty} (\beta + \xi)^{l} \chi_{-l} + \chi_1
\end{equation}
is an almost surely finite random variable and \( C_{\omega, \xi}\) is as defined by~\cref{eq:C}.
\end{lemma}
\begin{proof}
By definition and by \cref{Lemma:Mterm}, we have that \[\sum_{l=0}^{n-1} \prod_{k=l}^{n-1}M_{k}\chi_l = E_{n-1,0} = E_{0,-(n-1)} \circ T^{n-1} (\omega),\]
where \[E_{0,-n} (\omega) = \sum_{l=-n}^{0}\Big(\prod_{k=l}^{0}M_{k}\Big) \chi_l = \sum_{l=-n}^{0}\Big(\prod_{k=0}^{-l}M_{-k}\Big)\chi_l. \]
Therefore,
\[\sum_{l=0}^{n} \prod_{k=l}^{n-1}M_{k}\chi_l =E_{n-1,0} + \chi_n=\Big(E_{0,-(n-1)} +\chi_1\Big)\circ T^{n-1} (\omega).\]
Then using estimate~\cref{eq:Mprodpos} from \cref{Lemma:Mprod} we have
\begin{align*}
E_{0,-(n-1)} & \leq C_{\omega, \xi}\sum_{l=-(n-1)}^{0} (\beta + \xi)^{|l|} \chi_l \\
& \leq C_{\omega, \xi} \sum_{l=-\infty}^{0} (\beta + \xi)^{|l|} \chi_l\\
&= C_{\omega, \xi}\sum_{l=0}^{\infty} (\beta + \xi)^{l} \chi_{-l}.
\end{align*}
Let \[B_\xi :=C_{\omega, \xi}\sum_{l=0}^{\infty} (\beta + \xi)^{l} \chi_{-l}+\chi_1,\]
then
\[\sum_{l=0}^{n} \prod_{k=l}^{n-1}M_{k}\chi_l \leq B_\xi \circ T^{n-1}, \] as required.
It is clear that \(B_\xi\) is measurable since \(\chi_n\) are non-negative. We need to show that \(B_\xi\) is finite for a.e.\ \(\omega\).
Since \(\beta < 1\) by Assumption~\ref{as:four}, we can choose \(\xi>0\) such that \(\beta + \xi < 1\). We know that \(C_{\omega, \xi}\) is a.s.\ finite by \cref{Lemma:Mprod} and \(\chi_1\) is non-negative with finite expectation. By the Monotone Convergence Theorem, \[\mathbb{E}\Big(\sum_{l=0}^{\infty} (\beta + \xi)^{l} \chi_{-l}\Big) = \sum_{l=0}^{\infty} (\beta + \xi)^{l} \mathbb{E}(\chi_{-l}) < \infty, \] so that \[\sum_{l=0}^{\infty} (\beta + \xi)^{l} \chi_{-l} < \infty\] almost surely. Therefore, \(B_\xi\) is a.s.\ finite as required.
\end{proof}
For clarity, where necessary, we will use \(\sigma\) as a parameter in the notation for the remainder of this section.
\begin{proof}[Proof of \cref{th:Main}]
By our choice of $ \sigma^* $ and $ h $, we have $ \mathbb{E}M^*_k < 1 $, where \(M_k^*:= M_k(h,\rho_k(h,\sigma^*))\).
We consider Inequality~\cref{eq:Meq}. By monotonicity of \(M_k\) and \(\gamma_k\), we may replace \(\sigma\) by \(\sigma^*\) inside these functions and Inequality~\cref{eq:Meq} still holds. We have
\begin{equation}\label{eq:Meq_sigma_star}
|\delta_{n}|^2 \leq \sigma^2\sum_{l=1}^{n} \prod_{k=l}^{n-1}M_{k}^*|R_{l-1}|^2\gamma_{l-1}^* +\prod_{k=0}^{n-1}M_{k}^*|QU_0-\eta|^2 + \sigma^2|R_{n}|^2,
\end{equation}
where \(M_k^*:= M(h,\rho_k(h,\sigma^*))\) and \(\gamma_k^*:= \gamma(h,\rho_k(h,\sigma^*))\).
We note first that the second term of Inequality~\cref{eq:Meq_sigma_star} is bounded a.s.\ by~\cref{eq:Mprodneg};
\[\prod_{k=0}^{n-1}M_{k}^*|QU_0-\eta|^2 \leq C_{\omega, \xi}^{'*}(\beta^{*} + \xi)^{n} |QU_0-\eta|^2,\] where \(\beta^* =\beta(\sigma^*)\) and \(C_{\omega, \xi}^{'*} = C_{\omega, \xi}^{'}(\sigma^*).\)
Fix \(\xi>0\) so that \(\beta^* + \xi < 1\). Then,
\begin{equation}\label{eq:middle_term}
\lim_{n \to \infty}\prod_{k=0}^{n-1}M_{k}^*|QU_0-\eta|^2 \leq \lim_{n \to \infty} D\bar{\beta}^{n} |QU_0-\eta|^2 =0,
\end{equation}
with \(D=C_{\omega, \xi}^{'*}\) and \(\bar{\beta} = \beta^* +\xi\).
Next, we use \cref{Lemma:Mterm2}. Let
\begin{equation}\label{eq:C_n}
C_n := B_\xi^* \circ T^{n-1} + |R_{n}|^2,
\end{equation} where \(B_\xi^*\) is as defined by Equation~\cref{eq:beta_xi} with \(\sigma\) replaced by \(\sigma^*\) and \(\chi_l = |R_{l-1}|^2\gamma_{l-1}^*\). Hence explicitly,
\begin{equation}\label{eq:B_explicit}
B_{\xi}^{*}= C_{\omega, \xi}^*\sum_{l=0}^{\infty} \bar{\beta}^{l} |R_{-l+1}|^2\gamma_{-l+1}^* + |R_{0}|^2\gamma_{0}^*,
\end{equation}
with \(\bar{\beta} = \beta^* + \xi\). The remaining terms of Inequality~\cref{eq:Meq_sigma_star} are bounded by \(\sigma^2 C_n\), which is a.s.\ finite and stationary by \cref{Lemma:Mterm2} and by our assumptions on \(R_n\).
Therefore,
\[|\delta_n|^2 \leq \sigma^2C_n + D\bar{\beta}^n|QU_0-\eta|^2\] and
\[\limsup_{n}\Big(|\delta_n|^2 -\sigma^2 C_n\Big) \leq 0\] a.s.\ by Equation~\cref{eq:middle_term}, as required. Furthermore, \(\sigma^2C_n \to 0\) as \(\sigma \to 0\), since \(C_n\) does not depend on \(\sigma\). \end{proof}
\section{A priori bound for strongly dissipative systems}\label{Sec:Apriori}
The next lemmas show that one can usually obtain a more explicit candidate for the a priori bound \(\rho_n\) if one has an estimate of the rate of contraction to the attractor. This rate is closely related to the absorbing ball property and to our requirement that the system is dissipative. Such a contraction can be shown to hold for many important dynamical systems, such as Lorenz~'63, Lorenz~'96 and the 2D incompressible Navier-Stokes equations; in fact, it is how we show that these systems have the absorbing ball property and are dissipative. We will study this in more detail in the subsequent sections.
The next lemma derives a bound on the approximating solution based on a specific rate of contraction. The bound depends on the observation noise up to time \(t_n\), the initial guess \(\eta\), initial condition \(U(t_0)\) and the length of the data assimilation interval \(h\).
\begin{lemma} \label{lemma:apriori}
Let $ U $ be a solution to a semi-dynamical system and suppose that there exist constants \(c_1, c_2 >0\) such that
\begin{equation} \label{eq:dissip}
|U(t)|^2 \leq e^{-c_1 (t-s)}|U(s)|^2 + c_2
\end{equation}
for all \(0 \leq s < t\).
Let \(u(t)\) be the approximating solution as defined by Equation~\cref{eq:Approx_Sol}, then
\begin{equation} \label{eq:iterated_dissip}
|u(t_n)|^2 \leq \phi_n(h,\eta, |U(t_0)|^2)+ 2\sigma^2\sum_{k=0}^{n} e^{-c_1kh}|R_{n-k}|^2
\end{equation}
for all \(n \in \mathbb{N}\), where \(h=t_n-t_{n-1} \) and
\[\phi_n(h,\eta,x) = |\eta|^2+\frac{2x}{c_1h}+ 3c_2\frac{1- e^{-c_1nh}}{1- e^{-c_1h}}.\]
\end{lemma}
\begin{proof} By Inequality~\cref{eq:dissip} and because \(u_{n-1}(t)\) is a solution in the interval \([t_{n-1},t_n)\), we have
\begin{equation} \label{eq:approx_dissip}
|u(t_n^-)|^2 \leq e^{-c_1h}|u(t_{n-1})|^2 + c_2.
\end{equation}
By definition and continuity of \(Qu(t)\) at \(t_{n}\) we have
\begin{equation}\label{eq:PQ_sept}
|u(t_{n})|^2 = |Qu(t_{n}^-)|^2 + |PU(t_{n})+\sigma R_{n}|^2 \leq |u(t_{n}^-)|^2 + |PU(t_{n})+\sigma R_{n}|^2.
\end{equation}
For simplicity, let \(O_n =|PU(t_n)+\sigma R_{n}|^2 \) and substitute Inequality~\cref{eq:approx_dissip} into Inequality~\cref{eq:PQ_sept} to get;
\begin{equation*}
|u(t_n)|^2 \leq e^{-c_1h}|u(t_{n-1})|^2 + O_{n} + c_2.
\end{equation*}
Therefore by induction
\begin{equation} \label{eq:iterative_interm}
|u(t_n)|^2 \leq e^{-c_1nh}|u(t_0)|^2+ \sum_{k=0}^{n-1} e^{-c_1kh}\Big(O_{n-k}+ c_2\Big).
\end{equation}
We note that
\begin{align*}
|PU(t_{n-k})+\sigma R_{n-k}|^2 &\leq 2|U(t_{n-k})|^2+2\sigma^2|R_{n-k}|^2\\ &\leq 2e^{-c_1(n-k)h}|U(t_0)|^2 +2c_2+ 2\sigma^2|R_{n-k}|^2,
\end{align*}
where we have used Inequality~\cref{eq:dissip} on \(U(t_{n-k})\). This implies
\begin{align*}
\sum_{k=0}^{n-1} e^{-c_1kh}O_{n-k} &\leq \sum_{k=0}^{n-1} e^{-c_1kh}\Big( 2e^{-c_1(n-k)h}|U(t_0)|^2 +2c_2+ 2\sigma^2|R_{n-k}|^2\Big)\\
&=2n e^{-c_1nh}|U(t_0)|^2 + 2c_2\frac{1- e^{-c_1nh}}{1- e^{-c_1h}} +2\sigma^2\sum_{k=0}^{n-1} e^{-c_1kh}|R_{n-k}|^2.
\end{align*}
Then Inequality~\cref{eq:iterative_interm} becomes
\begin{equation*}
|u(t_n)|^2 \leq |\eta|^2+\frac{2|U(t_0)|^2}{c_1h}+ 3c_2\frac{1- e^{-c_1nh}}{1- e^{-c_1h}}+ 2\sigma^2\sum_{k=0}^{n} e^{-c_1kh}|R_{n-k}|^2.
\end{equation*}
where we have used that \(ne^{-c_1hn} \leq \frac{1}{c_1h}\) for all \(n\geq 0\) and \(h>0\), and that \(|u(t_0)|^2 = |\eta|^2 + \sigma^2 |R_{0}|^2\), where \(\eta\) is the initial guess. Thus we have shown Inequality~\cref{eq:iterated_dissip}.
\end{proof}
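The induction in the proof can be checked mechanically. The following Python sketch (illustrative only, with arbitrary non-negative terms \(O_n\) and hypothetical values of \(c_1\), \(c_2\) and \(h\)) iterates the one-step recursion with equality and verifies the closed-form bound~\cref{eq:iterative_interm}.

```python
import numpy as np

# Numerical check of the induction step (iterative_interm): if
# x_n <= e^{-c1 h} x_{n-1} + O_n + c2, then
# x_n <= e^{-c1 n h} x_0 + sum_{k=0}^{n-1} e^{-c1 k h} (O_{n-k} + c2).
rng = np.random.default_rng(1)
c1, c2, h, n = 1.0, 0.3, 0.1, 200
O = rng.exponential(size=n + 1)              # arbitrary non-negative O_n
x = np.empty(n + 1)
x[0] = 2.0
for k in range(1, n + 1):
    x[k] = np.exp(-c1 * h) * x[k - 1] + O[k] + c2   # recursion with equality
bound = np.exp(-c1 * n * h) * x[0] + sum(
    np.exp(-c1 * k * h) * (O[n - k] + c2) for k in range(n)
)
assert x[n] <= bound + 1e-6
```

With equality in the recursion the two sides agree up to rounding, which is the worst case for the induction.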
We can readily see that Inequality~\cref{eq:dissip} gives us an absorbing ball \(B(0,r)\) with \(r>c_2^{1/2}\), since any bounded set will eventually be inside the ball. However, we cannot deduce forward invariance. We will see that the actual contractions we encounter in the dynamical systems we study do guarantee forward invariance and hence imply that Assumption~\ref{as:one} holds.
The following corollary of \cref{lemma:apriori} gives the a priori bound required for Assumption~\ref{as:two}.
\begin{corollary} \label{cor:apriori}
Let the conditions of \cref{lemma:apriori} hold and let \(\delta_n=U(t_n)-u(t_n)\) be the data assimilation error and \(h=t_{n}-t_{n-1}\) the update interval. Then there exists a stationary, a.s.\ finite process
\begin{equation}\label{eq:apriori}
\rho_n = \bar{K}+ F(h)+ 4\sigma^2\sum_{k=0}^{\infty} e^{-c_1kh}|R_{n-k}|^2,
\end{equation}
such that \(|\delta_n|^2 \leq \rho_n\), for all \(n \in \mathbb{N}\).
\end{corollary}
\begin{proof}
By definition of \( |\delta_n|^2\), we have
\begin{equation}\label{eq:error_decomp}
|\delta_n|^2
\leq 2|U(t_n)|^2+ 2|u(t_{n})|^2.
\end{equation}
We insert \cref{eq:dissip,eq:iterated_dissip} into \cref{eq:error_decomp} to obtain
\begin{equation*}
|\delta_n|^2 \leq 2\phi_n(h,\eta,|U(t_0)|^2)+ 4\sigma^2\sum_{k=0}^{n} e^{-c_1kh}|R_{n-k}|^2 +2e^{-c_1hn}|U(t_0)|^2+2c_2.
\end{equation*}
The above simplifies to
\[|\delta_n|^2 \leq \bar{K}+ F(h)+ 4\sigma^2 \sum_{k=0}^{\infty} e^{-c_1kh}|R_{n-k}|^2,\]
where \(F(h) = \frac{6c_2}{1-e^{-c_1h}}+\frac{4|U(t_0)|^2}{c_1h} \) and \(\bar{K} = 2\Big(|U(t_0)|^2+ c_2+|\eta|^2\Big),\) as required.
To see that \(\rho_n\) is a measurable process, set \[\rho_n^N:= \bar{K}+ F(h) + 4\sigma^2\sum_{k=0}^{N} e^{-c_1kh}|R_{n-k}|^2.\] For each \(N\), $\rho_n^N $ is a finite sum of random variables and therefore measurable, and $ \{ \rho_n^N\} $ is a pointwise non-decreasing sequence, since we are adding non-negative terms. Therefore \(\rho_n = \sup_{N} \rho_n^N\) is measurable. To see that \(\rho_n\) is almost surely finite, we note that by the Monotone Convergence Theorem
\begin{equation}\label{eq:exp_rho}
\mathbb{E}(\rho_n) = \sup_N \mathbb{E}(\rho_n^N) = \bar{K}+ F(h)+ \frac{4\sigma^2}{ 1-e^{-c_1h}} < \infty
\end{equation}
for all \(h>0\).
Furthermore, \(\rho_n\) is stationary as \(R_n\) is stationary.
\end{proof}
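The geometric-series expectation in Equation~\cref{eq:exp_rho} is easy to confirm empirically. The Python sketch below (illustrative, with hypothetical values for \(c_1\), \(h\) and \(\sigma\)) estimates the expectation of the random tail of \(\rho_n\) by Monte Carlo, using i.i.d.\ noise with \(\mathbb{E}|R_n|^2=1\), and compares it with \(4\sigma^2/(1-e^{-c_1h})\).

```python
import numpy as np

# Monte Carlo check (hypothetical parameter values) of the geometric-series
# expectation in (exp_rho): with E|R_n|^2 = 1, the random tail of rho_n
# has expectation 4 sigma^2 / (1 - e^{-c1 h}).
rng = np.random.default_rng(3)
c1, h, sigma, N, trials = 1.0, 0.5, 0.2, 60, 50_000
R2 = rng.normal(size=(trials, N)) ** 2        # |R_n|^2 samples with unit mean
w = np.exp(-c1 * h * np.arange(N))            # geometric weights e^{-c1 k h}
tail = 4 * sigma ** 2 * (R2 @ w)              # one sample of the tail per row
expected = 4 * sigma ** 2 / (1 - np.exp(-c1 * h))
assert abs(tail.mean() - expected) < 0.02 * expected
```

The truncation at \(N=60\) terms discards mass of order \(e^{-c_1 h N}\), which is negligible here.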
We can see from Equation~\cref{eq:exp_rho} that the a priori bound behaves badly at \(h=0\), as its expectation is \(\mathcal{O}(\frac{1}{h})\) for small \(h\). In the next lemma we show that for almost all \(\omega \in \Omega\), the limit \(D_{\omega} := \lim_{h \to 0} \rho_n h\) exists. Therefore, pointwise, \(\rho_n = \mathcal{O}(\frac{1}{h})\) for small \(h\) as well. We note also that \(\rho_n\) decreases as the noise level \(\sigma\) decreases, and converges to a noise-independent constant as \(\sigma \to 0\).
\begin{lemma}\label{Lemma:apriori}
For \(\rho_n\) as defined by Equation~\cref{eq:apriori} we have that
\begin{enumerate}
\item \(\lim_{h \to 0} \mathbb{E}(\rho_n)h =C <\infty\) where \(C>0\) is a constant,
\item \(\lim_{h \to 0} \rho_n(\omega)h = D_{\omega}\) for a.e. \(\omega\),
\item for all \(h>0\), \(\rho_n(\omega)\) is monotone in \(\sigma\) and $ \lim_{\sigma \to 0}\rho_n(\omega) =\bar{K} + F(h)$ almost surely.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove item 1, note that
\begin{align*}
\lim_{h \to 0} \mathbb{E}(\rho_n(\omega))h &= \lim_{h \to 0} \Big(\bar{K}+ F(h)+
4\sigma^2 \sum_{k=0}^{\infty} e^{-c_1kh}\Big)h\\
&=\lim_{h \to 0} \Big(\bar{K}h+\frac{6c_2h}{1-e^{-c_1h}}+\frac{4|U(t_0)|^2h}{c_1h} + \frac{4\sigma^2h }{1-e^{-c_1h}}\Big)\\
&= \frac{6c_2+4|U(t_0)|^2+4\sigma^2}{c_1}=:C.
\end{align*}
To prove item 2, it remains to check the pointwise limit of the third term in Equation~\cref{eq:apriori}. Using summation by parts, for any \(N>0\),
\begin{equation}\label{eq:Parts}
\sum_{k=0}^{N} e^{-c_1kh}|R_{n-k}|^2 = e^{-Nc_1h} \sum_{k=0}^{N}|R_{n-k}|^2+\sum_{k=0}^{N-1}e^{-kc_1h}(1-e^{-c_1h})\sum_{j=0}^{k}|R_{n-j}|^2.
\end{equation}
Considering the first term on the RHS of Equation~\cref{eq:Parts}, by ergodicity of \(R_n\),
\begin{equation*}
\lim_{N \to \infty} Ne^{-Nc_1h} \frac{\sum_{k=0}^{N}|R_{n-k}|^2}{N} = \lim_{N \to \infty} \Big(Ne^{-Nc_1h}\Big)\,\mathbb{E}(|R_{0}|^2) = 0,
\end{equation*}
for a.e.\ \(\omega\).
Next we consider the second term. Again from ergodicity, we have that \(\lim_{k \to \infty}\frac{\sum_{j=0}^{k}|R_{n-j}|^2}{k} = 1\), since \(\mathbb{E}(|R_n|^2) = 1\). Therefore, for any \(\epsilon>0\), there exists \(N_{\omega, \epsilon}\) such that for all \(k\geq N_{\omega, \epsilon}\), \(\frac{\sum_{j=0}^{k}|R_{n-j}|^2}{k} < 1+\epsilon\). Hence for any $ k>0 $, \[\sum_{j=0}^{k}|R_{n-j}|^2 =\frac{\sum_{j=0}^{k}|R_{n-j}|^2}{k}k \leq \bar{D}_{\omega}k, \] where \[\bar{D}_{\omega} := \sup_{k} \Big(\frac{\sum_{j=0}^{k}|R_{n-j}|^2}{k}\Big),\] and \(\bar{D}_{\omega} < \infty\) since for large enough \(k\) the ratio is smaller than \(1+ \epsilon\).
Thus the second term of the RHS of Equation~\cref{eq:Parts} is bounded a.s.\ by \[ (1-e^{-c_1h})\bar{D}_{\omega} \sum_{k=0}^{N-1}e^{-kc_1h}k \leq (1-e^{-c_1h})\bar{D}_{\omega}\frac{e^{-c_1h}}{(1-e^{-c_1h})^2}=\bar{D}_{\omega}\frac{e^{-c_1h}}{(1-e^{-c_1h})}.\]
In summary, in the limit \(h \to 0\), \[\rho_n h \to \frac{6c_2}{c_1}+\frac{4|U(t_0)|^2}{c_1}+ \frac{4\sigma^2\bar{D}_{\omega}}{c_1}:=D_{\omega}\] and \(\rho_n = \mathcal{O}(\frac{1}{h})\) a.s.\ as required.
For item 3, we note that the random term of \(\rho_n\) is a.s.\ finite, therefore for a.e.\ \(\omega\), and \(h>0\), $ \lim_{\sigma \to 0}\rho_n =\bar{K}+ F(h)$, is a constant that does not depend on the noise.
\end{proof}
\section{Application to finite dimensional systems}
\label{sec:Finite}
In this section we derive more concrete properties, sufficient to imply the general Assumptions~\ref{as:one}~to~\ref{as:five} in \cref{sec:Assump}, for dissipative and finite dimensional systems of the form
\begin{equation}\label{eq:ODE}
\frac{dU}{dt} + A U + B(U,U) = f,
\end{equation}
where solutions \(U\) and forcing \(f\) are functions in a finite dimensional vector space \(\mathbf{H}=\mathbb{R}^d\), $ A $ is a linear operator and \(B\) is a symmetric, bilinear operator; consequently, the results of \cref{th:Main,th:Main2} hold. In \cref{ss:L63,ss:L96} we apply our results to the Lorenz~'63 and Lorenz~'96 models respectively.
We assume the following properties,
\begin{myPrp}\label{Prop:Finite}
\begin{enumerate}
\item $ B(U,V) = B(V,U)$ for all \(U,V \in \mathbf{H}\).
\item \((B(U,U),U) = 0,\) for all \(U \in \mathbf{H}\).
\item $ B(QU,QU)=0 $ for all \(U \in \mathbf{H}\).
\item There exists a constant \(a_1>0\) such that \(|B(U,V)| \leq a_1|U||V|\) for all \(U,V \in \mathbf{H}\).
\item \((AU,U) \geq |U|^2\), for all \(U \in \mathbf{H}\).
\end{enumerate}
\end{myPrp}
Similar properties are used in~\cite{Law2016},~\cite{Law2014} and~\cite{Sanz-Alonso2014}. For the Lorenz~'63 model and standard observation operator \(P\), as specified in \cref{ss:L63}, Properties~\ref{Prop:Finite}.1 to~\ref{Prop:Finite}.4 are easily deduced, while Property~\ref{Prop:Finite}.5 is shown in e.g.~\cite{Hayden2011}. For the Lorenz~'96 system and standard \(P\), as specified in \cref{ss:L96}, all the properties are shown in~\cite{Law2016}.
\textbf{Remark 1:}
Property \ref{Prop:Finite}.1 is not a restriction on our dynamical system \cref{eq:ODE}, since only the symmetric part of $ B $ enters the dynamics. Property \ref{Prop:Finite}.2 implies that the nonlinear term does not contribute to the change in energy, analogous to the nonlinear part of the Navier-Stokes equations. Property \ref{Prop:Finite}.3 effectively represents a non-trivial condition on the observation operator $ P $, ensuring a form of observability of the system. Property \ref{Prop:Finite}.4 is true for any bilinear operator on a finite dimensional space and hence represents no loss of generality. Property \ref{Prop:Finite}.5 reflects the fact that \(AU\) is considered to be a dissipative term in the dynamics.
\textbf{Remark 2:} From the above description of the dynamical system, it is clear there are many parallels with the \mbox{N-S} equations, such as dissipativity, and a nonlinearity which is quadratic and energy conserving. Furthermore, we will see in \cref{sec:NS} that the \mbox{N-S} equations can be rewritten in a form very similar to Equation~\cref{eq:ODE}.
\textbf{Remark 3:} We note that by orthogonality of \(Q\) and following from Property~\ref{Prop:Finite}.5 we always have that
\begin{equation} \label{eq:Property6}
(AU,PU) \geq a_2|PU|^2-a_3|U|^2
\end{equation}
for some \(a_2 > 0\) and \(a_3\geq 0\).\footnote{\((AU,PU) = (A(P+Q)U,PU) = (APU,PU)+(AQU,PU)\geq |PU|^2 - \|A\||U|^2\), where we have used Property~\ref{Prop:Finite}.5. Therefore we have \(a_2 = 1\) and \(a_3 = \|A\|\) but these are not necessarily the sharpest such constants.}
\textbf{Remark 4:} We note that if Property~\ref{Prop:Finite}.3 holds for an orthogonal projection \(Q\), then it also holds for any projection whose image is contained in the image of \(Q\).
The next two lemmas follow directly from Property~\ref{Prop:Finite}. For the case of Lorenz~'96, the proofs are given in~\cite{Law2014}.
\begin{lemma}\label{Lemma:Pre_Prop1.3}
Properties~\ref{Prop:Finite}.1~and~\ref{Prop:Finite}.2 imply that
\[(B(V,V),U) = -2(B(U,V),V)\] holds for all \(U,V \in \mathbf{H}\).
\end{lemma}
The proof is simply expanding $ (B(U+V,U+V),U+V)$ and $ (B(U-V,U-V),U-V) $ using Properties~\ref{Prop:Finite}.1 and~\ref{Prop:Finite}.2 and bilinearity of \(B\).
\begin{comment}
\textbf{Proof:}
By Properties~\ref{Prop:Finite}.1 and~\ref{Prop:Finite}.2 and bilinearity of \(B\), we obtain
\begin{align*}
(B(U+V,U+V),U+V) = & 2(B(U,V),U)+2(B(U,V),V)\\&+(B(U,U),V)+(B(V,V),U) \\&= 0.
\end{align*}
Similarly, one computes that
\begin{align*}
(B(U-V,U-V),U-V) = &-2(B(U,V),U)+2(B(U,V),V)\\&-(B(U,U),V)+(B(V,V),U) \\&= 0.
\end{align*}
Adding the two we get \[4(B(U,V),V)+2(B(V,V),U) = 0,\] which implies
\[(B(V,V),U)=-2(B(U,V),V),\] as required.\\ \qed
\end{comment}
\begin{lemma}\label{Lemma:Prop1.3}
Suppose that Properties~\ref{Prop:Finite}.1,~\ref{Prop:Finite}.2 and \ref{Prop:Finite}.4 are satisfied. Then Property~\ref{Prop:Finite}.3 is equivalent to the following: there exists a constant \(b>0\) such that
\begin{equation}\label{eq:New_prop}
2|(B(U,V),V)|\leq b|PV||U||V|
\end{equation}
for all \(U,V \in \mathbf{H}\).
\end{lemma}
\begin{proof} By \cref{Lemma:Pre_Prop1.3}, \(2|(B(U,V),V)|=|(B(V,V),U)|\). Note that
\begin{align*}
(B(V,V),U) &= (B(PV+QV,PV+QV),U)\\ &= 2(B(PV,QV),U)+(B(PV,PV),U),
\end{align*}
where we have used Property~\ref{Prop:Finite}.3. Therefore by Property~\ref{Prop:Finite}.4,
\begin{align*}
|(B(V,V),U)| &\leq 2a_1|PV||QV||U|+a_1|PV|^2|U|\\
&=a_1|PV||U|(2|QV|+|PV|)\\
&\leq 3a_1|PV||U||V|,
\end{align*}
as required with \(b=3a_1\).
Conversely, suppose that Inequality~\cref{eq:New_prop} holds. Then \[|(B(QV,QV),U)|\leq b|PQV||QV||U| = 0,\] since $ PQV=0 $. As this holds for all \(U \in \mathbf{H}\), we get that \(B(QV,QV)=0\) for all \(V \in \mathbf{H}.\)
\end{proof}
In the next several lemmas we show that if Property~\ref{Prop:Finite} holds, then ODEs of the form~\cref{eq:ODE} satisfy Assumptions~\ref{as:one} to~\ref{as:five}, and consequently \cref{th:Main,th:Main2} hold.
We start with showing that Properties~\ref{Prop:Finite}.2 and~\ref{Prop:Finite}.5 imply Assumptions~\ref{as:one}~and~\ref{as:two}.
\begin{lemma}\label{Lemma:Ass12}
Let \(U\) be the solution of a finite dimensional ODE as defined by \cref{eq:ODE} and suppose that Properties~\ref{Prop:Finite}.2~and~\ref{Prop:Finite}.5 are satisfied. Then Assumption~\ref{as:one} holds for any \(K > |f|^2\) and Assumption~\ref{as:two} for \(\rho_n\) as given in \cref{cor:apriori} with \(c_1 = 1\) and \(c_2 = |f|^2\).
\end{lemma}
\begin{proof}
The absorbing ball property is easily verified. Take the inner product of ODE~\cref{eq:ODE} with \(U\) and use Property~\ref{Prop:Finite}.2 and Property~\ref{Prop:Finite}.5 to get
\[\frac{1}{2}\frac{d|U|^2}{dt}+|U|^2\leq (f,U). \]
Then, by the Cauchy-Schwarz and Young's inequality we obtain
\[\frac{1}{2}\frac{d|U|^2}{dt}+|U|^2 \leq |(f,U)| \leq |f||U| \leq \frac{1}{2}|f|^2 + \frac{1}{2}|U|^2,\] and hence,
\[\frac{d|U|^2}{dt}+|U|^2\leq |f|^2. \]
Assumption~\ref{as:one} follows from using Gronwall's lemma;
\begin{equation}\label{eq:dissipF}
|U(t)|^2 \leq |U(0)|^2 e^{-t}+|f|^2(1-e^{-t}).
\end{equation}
We see that any ball \(B(0,K^{1/2})\) with \(K>|f|^2\) is absorbing and forward invariant. Furthermore, Inequality~\cref{eq:dissipF} implies that the conditions of \cref{cor:apriori} are satisfied with \(c_1 = 1\) and \(c_2 = |f|^2\) and hence Assumption~\ref{as:two} (a priori bound) holds.
\end{proof}
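To illustrate \cref{Lemma:Ass12} concretely (a numerical sketch, not part of the proof), one can integrate a system of the form~\cref{eq:ODE} and check the absorbing-ball bound~\cref{eq:dissipF} along the trajectory. Here we use the shifted Lorenz~'63 operators introduced in \cref{ss:L63}; the initial condition, step size and horizon are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative check of (dissipF) for the shifted Lorenz '63 system
# dU/dt + AU + B(U,U) = f: along a numerically integrated trajectory,
# |U(t)|^2 <= |U(0)|^2 e^{-t} + |f|^2 (1 - e^{-t}).
alpha, r, b = 10.0, 28.0, 8.0 / 3.0
A = np.array([[alpha, -alpha, 0.0], [alpha, 1.0, 0.0], [0.0, 0.0, b]])
f = np.array([0.0, 0.0, -b * (r + alpha)])

def rhs(U):
    BUU = np.array([0.0, U[0] * U[2], -U[0] * U[1]])   # B(U,U)
    return -A @ U - BUU + f

U = np.array([1.0, 1.0, 1.0])
dt, T, t = 1e-3, 5.0, 0.0
U0_sq, f_sq = U @ U, f @ f
while t < T:
    k1 = rhs(U); k2 = rhs(U + dt * k1 / 2)             # classical RK4 step
    k3 = rhs(U + dt * k2 / 2); k4 = rhs(U + dt * k3)
    U = U + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt
    bound = U0_sq * np.exp(-t) + f_sq * (1.0 - np.exp(-t))
    assert U @ U <= bound * (1.0 + 1e-6)
```

The margin in the inequality is large here because \(|f|^2\) dominates the size of the attractor, consistent with \(K>|f|^2\) in the lemma.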
Before proceeding to the next lemmas we derive an equation for the error \(\delta~=~U~-~u\). Since the approximating solution \(u\) satisfies Equation~\cref{eq:ODE} in the interval \([t_n,t_{n+1})\), we have that
\begin{equation}\label{eq:error_eq}
\frac{d\delta}{dt} + A \delta + 2B(U,\delta)- B(\delta, \delta) = 0,
\end{equation}
where we have used the bilinearity and symmetry of \(B\) to derive the above.
In the next lemma we show that Assumption~\ref{as:five} holds (\cref{eq:er_bound}), and we derive a bound on \(|P\delta|\) (\cref{eq:Pdelta_bound}) which is used in \cref{Lemma:ass3} to show that Assumption~\ref{as:three} holds. The bound on \(|P\delta|\) and its proof are similar to the bound obtained in Lemma 5.3 of \cite{Sanz-Alonso2014}, but with an important difference. If we were to simply replace the bound on \(|\delta_0|\) (given by \(r'^2\) in that paper) by our a priori bound \(\rho_n\), we would have a term multiplying \(|\delta|^2\) that tends to a constant in the limit \(h \to 0\) (see \cref{Lemma:apriori}). In our bound~\cref{eq:Pdelta_bound}, the a priori bound appears at lower order, \(\rho_n^{1/2}\). This means that in the limit this term goes to zero, which, in turn, enables us to show in \cref{Lemma:Ass4} that there is an \(h\) for which the squeezing holds in expectation, as required by Assumption~\ref{as:four}.
\begin{lemma}\label{Lemma:bounds}
Assume that Properties~\ref{Prop:Finite}.1,~\ref{Prop:Finite}.2,~\ref{Prop:Finite}.4 and~\ref{Prop:Finite}.5 hold. Let \(U\) be a solution to ODE~\cref{eq:ODE} contained in the invariant set \(\mathscr{B} = B(0,K^{1/2})\). Then \(\delta(t) = U(t) - u(t)\) satisfies
\begin{equation}\label{eq:er_bound}
|\delta(t)|^2 \leq |\delta_n|^2e^{\kappa(t-t_n)},
\end{equation}
and
\begin{equation}\label{eq:Pdelta_bound}
|P\delta|^2\leq |\delta_n|^2(a_4+ a_5\rho_n^{1/2})(t-t_n)+|P\delta(t_n)|^2,
\end{equation}
for \(t \in [t_n,t_{n+1})\), \(n \in \mathbb{N}_0\), \(\kappa = 2(2a_1K^{1/2}-1)\), \(a_4 =2e^{\kappa h}(\frac{a_1^2}{a_2} K+a_3) \), \(a_5 =2a_1 e^{3\kappa h/2}, \) and \(\rho_n\) is as in \cref{Lemma:Ass12}.
\end{lemma}
\begin{comment}
The proof is very similar to that in~\cite{Hayden2011}, Lemma 2.4 for the Lorenz~'63 model. We take the inner product of the error Equation \eqref{eq:error_eq} with \(\delta\) and use Properties \ref{Prop:Finite}.2, \ref{Prop:Finite}.4 and \ref{Prop:Finite}.5 as well as Gronwall's inequality.
\end{comment}
\begin{comment}
\textbf{Proof of (32):}
Take the inner product of the error Equation~\eqref{eq:error_eq} with \(\delta\) and employ Properties~\ref{Prop:Finite}.2 and~\ref{Prop:Finite}.5 to obtain
\[\frac{1}{2}\frac{d|\delta|^2}{dt} + |\delta|^2 \leq 2|(B(U,\delta),\delta)|.\]
By Cauchy-Schwartz and Property~\ref{Prop:Finite}.4 we get
\begin{equation*}
\frac{1}{2}\frac{d|\delta|^2}{dt} + |\delta|^2 \leq 2a_1|U||\delta|^2 \leq 2a_1K^{1/2}|\delta|^2.
\end{equation*}
Finally by Gronwall's inequality,
\[|\delta(t)|^2\leq e^{\kappa(t-t_n)}|\delta_n|^2,\] where \(\kappa = 2(2a_1K^{1/2}-1)\). \qed
\end{comment}
\textbf{Outline of Proof:}
The proof of \cref{eq:er_bound} is straightforward and similar to the proof given for the Lorenz system in \cite{Hayden2011}, so we omit it for brevity.
For \cref{eq:Pdelta_bound}, taking the inner product of the error Equation~\cref{eq:error_eq} with \(P\delta\) and applying Inequality~\cref{eq:Property6}, we get
\[\frac{1}{2}\frac{d|P\delta|^2}{dt} + a_2|P\delta|^2 - a_3|\delta|^2 + 2(B(U,\delta),P\delta)-(B(\delta,\delta),P\delta)\leq0.\]
Inequality~\cref{eq:Pdelta_bound} is then obtained by applying Cauchy-Schwarz, Property~\ref{Prop:Finite}.4, Inequality~\cref{eq:er_bound}, Young's inequality and the a priori bound (which holds by \cref{Lemma:Ass12}) to the above, and then applying Gronwall's lemma.
\begin{comment}
\color{black}
By Cauchy-Schwarz, Property~\ref{Prop:Finite}.4 and Young's,
\begin{align*}
\frac{1}{2}\frac{d|P\delta|^2}{dt} + a_2|P\delta|^2 &\leq 2a_1|U||\delta||P\delta|+ a_1|\delta|^3+a_3|\delta|^2,\\
& \leq (\frac{4a_1^2}{4a_2} K+a_3)|\delta|^2+ \frac{2a_2}{2}|P\delta|^2+ a_1|\delta|^3.
\end{align*}
Hence by \eqref{eq:er_bound},
\[\frac{d|P\delta|^2}{dt} \leq 2(\frac{a_1^2}{a_2} K+a_3) |\delta_n|^2e^{\kappa(t-t_n)}+ 2a_1|\delta_n|^3e^{3\kappa(t-t_n)/2}.\]
Note that \(e^{\kappa(t-t_n)} < e^{\kappa h}\) for all \(t \in [t_n,t_{n+1} )\). Furthermore, by Lemma~\ref{Lemma:Ass12}, there exists a process \(\rho_n\) such that \(|\delta_n| \leq \rho_n^{1/2}\), which we use to replace in the $ |\delta_n|^3 $ in the second term of the RHS by the a priori bound.
Let \(a_4 =2e^{\kappa h}(\frac{a_1^2}{a_2} K+a_3) \) and \(a_5 =2a_1 e^{3\kappa h/2}, \) then we get
\begin{equation*}
\frac{d|P\delta|^2}{dt} \leq (a_4+ a_5\rho_n^{1/2})|\delta_n|^2.
\end{equation*}
Integrating from $ t_n $ to $t$ yields
\begin{equation*}
|P\delta|^2 \leq |\delta_n|^2(a_4+ a_5\rho_n^{1/2})(t-t_n)+|P\delta(t_n)|^2.
\end{equation*}
Recall that \(h, |\delta_n|^2\) and its a priori bound \(\rho_n\) are fixed at the beginning of the interval \([t_n,t_{n+1})\) and do not vary within it.
\color{black}
\end{comment}
The next lemma shows that Assumption~\ref{as:three} holds.
\begin{lemma}\label{Lemma:ass3}
Let \(U \in \mathscr{B} = B(0,K^{1/2})\) be a solution to ODE~\cref{eq:ODE}, satisfying Properties~\ref{Prop:Finite}.1 to~\ref{Prop:Finite}.5, with \(\delta(t)\) as defined by Equation~\cref{eq:error}. Then there exist continuous functions \(M:\mathbb{R}^{+} \times \mathbb{R}^{+} \rightarrow \mathbb{R}^{+}\) and \(\gamma:\mathbb{R}^{+} \rightarrow \mathbb{R}^{+}\) such that
\[|\delta(t)|^2 \leq M(t-t_n, \rho_n)|\delta_n|^2 + \gamma(t-t_n)|P\delta_n|^2,\]
for \(t\geq t_n\), where \[M(\tau, \rho_n) = e^{-\tau}(1 + a_6\int_0^{\tau}(a_4+ a_5\rho_n^{1/2})e^s s ds) \] and \[\gamma(\tau)= a_6(1-e^{-\tau}),\] with \(a_6= b^2K\).
\end{lemma}
\begin{proof}
Taking inner product of error Equation~\cref{eq:error_eq} with \(\delta\) and using Properties~\ref{Prop:Finite}.5 and~\ref{Prop:Finite}.2 we get
\begin{equation*}
\frac{1}{2}\frac{d|\delta|^2}{dt}+ |\delta|^2 \leq 2|(B(U,\delta),\delta)|.
\end{equation*}
Note that \(|U|\leq K^{1/2}\). Using \cref{Lemma:Prop1.3} and then Young's, we obtain
\begin{equation*}\label{eq:delta_bound}
\frac{1}{2}\frac{d|\delta|^2}{dt} + |\delta|^2 \leq |\delta|^2/2 + b^2K|P\delta|^2/2,
\end{equation*}
and hence
\begin{equation}\label{eq:L12}
\frac{d|\delta|^2}{dt} + |\delta|^2 \leq b^2K|P\delta|^2.
\end{equation}
We substitute the bound~\cref{eq:Pdelta_bound} on \(|P\delta|^2\) from \cref{Lemma:bounds} into the above inequality to obtain
\[\frac{d|\delta|^2}{dt} + |\delta|^2 \leq b^2K\Big(|\delta_n|^2(a_4+ a_5\rho_n^{1/2})(t-t_n)+|P\delta(t_n)|^2\Big).\] Multiplying by the integrating factor \(e^{t-t_n}\) and using Gronwall's lemma, we get
\begin{align*}
|\delta|^2 &\leq |\delta_n|^2M_n(t-t_n) + |P\delta_n|^2\gamma(t-t_n),
\end{align*}
\begin{comment}
\begin{align*}
|\delta|^2 &\leq |\delta_n|^2e^{-(t-t_n)} + b^2Ke^{-(t-t_n)}\int_{t_n}^{t}e^{(s-t_n)}\Big(|\delta_n|^2(a_4+ a_5\rho_n^{1/2})(s-t_n)+|P\delta(t_n)|^2\Big)ds,\\
&=|\delta_n|^2M_n(t-t_n,\rho_n) + |P\delta_n|^2\gamma(t-t_n),
\end{align*}
\end{comment}
where \[M_n(\tau):=M(\tau, \rho_n) = e^{-\tau}(1 + a_6\int_0^{\tau}(a_4+ a_5\rho_n^{1/2})e^s s \, ds) \] and \[\gamma(\tau)= a_6(1-e^{-\tau}),\] with \(a_6= b^2K\). Since \(\rho_n\) is continuous w.r.t.\ \(\tau\) for all \(\tau >0\), so is \(M_n\) for a.e.\ \(\omega\). \end{proof}
We note that in this case the \(\gamma_n\) are all the same, non-random and finite for all \(\tau \geq 0\). Therefore Assumption~\ref{as:four} is satisfied if the following lemma holds.
\begin{lemma}\label{Lemma:Ass4}
There exists \(\tau^* > 0\) such that \(\mathbb{E}M_n(\tau) < 1\) and \(\mathbb{E}\gamma_n(\tau) < \infty\) for all \(\tau \in (0,\tau^*]\).
\end{lemma}
\begin{proof} We wish to show that the function \[m(\tau) = \mathbb{E}M_n(\tau) = e^{-\tau}(1 + a_6\int_0^{\tau}(a_4+ a_5\mathbb{E}(\rho_n^{1/2}))e^s s \, ds)\]
is less than 1 on some interval \((0,\tau^*]\). The a priori bound \(\rho_n\), and consequently \(M_n\), is not well defined at zero. However, we will show that \(\mathbb{E}(\rho_n(\tau)^{1/2})\tau^{1/2}\) is bounded in a neighbourhood of \(\tau=0\); that is, \(\mathbb{E}(\rho_n(\tau)^{1/2})s^{1/2} < B\) for some constant \(B>0\), for all \(s\leq \tau\) and \(\tau\) sufficiently small.
Supposing the above holds, we have that in this neighbourhood
\begin{equation} \label{eq:m}
m(\tau) \leq e^{-\tau}(1 + a_6\int_0^{\tau}(a_4s^{1/2}+a_5B)s^{1/2}e^s ds):= \overline{m}(\tau),
\end{equation}
which implies that
\[m(0) = \lim_{\tau \to 0} \mathbb{E}M_n(\tau) \leq \lim_{\tau \to 0}\overline{m}(\tau) = 1.\]
Furthermore, \[\frac{d\overline{m}(\tau)}{d\tau} = -\overline{m}(\tau)+a_6(a_4\tau^{1/2} + a_5B)\tau^{1/2},\] and hence \[\frac{d\overline{m}}{d\tau}(0) = -1.\]
Therefore, there exists a \(\tau^*\) such that \(\overline{m}(\tau)<1\) for all \(0< \tau \leq \tau^*\). Hence by the bound in~\cref{eq:m} the same is true of \(m(\tau)\), for sufficiently small \(\tau\).
It remains to show that \( \mathbb{E}(\rho_n^{1/2})\tau^{1/2} =\mathbb{E}((\rho_n\tau)^{1/2}) \) is bounded in a neighbourhood around \(\tau=0\). Recall that
\begin{align*}
\rho_n & = \bar{K}+\frac{6|f|^2}{1-e^{-\tau}} + \frac{4|U(t_0)|^2}{\tau}+4\sigma^2\sum_{k=0}^{\infty}e^{-k\tau}|R_{n-k}|^2.
\end{align*}
Therefore,
\begin{align*}
\mathbb{E}((\rho_n\tau)^{1/2})& \leq \big(\mathbb{E}(\rho_n\tau)\big)^{1/2}\\
&=\Big(\bar{K}\tau+\frac{6|f|^2}{1-e^{-\tau}}\tau+4|U(t_0)|^2+4\sigma^2\tau\sum_{k=0}^{\infty}e^{-k\tau}\Big)^{1/2}\\
&=\Big(\bar{K}\tau+4|U(t_0)|^2+\frac{6|f|^2+4\sigma^2}{1-e^{-\tau}}\tau\Big)^{1/2}.
\end{align*} This bound is continuous at 0, so \[\lim_{\tau \to 0}\mathbb{E}((\rho_n\tau)^{1/2})\leq (4|U(t_0)|^2+6|f|^2+4\sigma^2)^{1/2},\] which is finite.
\end{proof}
Before turning to the N-S equations we will analyse two well known finite dimensional systems, known as Lorenz~'63 and~'96, that are commonly used as model problems for data assimilation.
\subsection{Lorenz '63 model}\label{ss:L63}
The Lorenz~'63 model consists of a system of three coupled ODEs, obtained from the N-S equations by truncation of the Fourier series to the first three modes \cite{Lorenz1963,TemamBook}. It is given by
\[
\begin{cases}
\dot{U_1} = -\alpha U_1 + \alpha U_2,\\
\dot{U_2} = -\alpha U_1 - U_2 - U_1U_3,\\
\dot{U_3} = -bU_3 + U_1U_2 - b(r+\alpha) ,
\end{cases}
\]
where the parameters \(\alpha, r, b \geq 0\) are real constants with standard values \(\alpha = 10\), \(r = 28\), \(b = 8/3\).
We can write this system in the form of ODE~\cref{eq:ODE}, (see e.g.~\cite{FoiasJolly2001}), where \[A = \begin{pmatrix} \alpha & -\alpha & 0 \\ \alpha & 1 & 0 \\ 0 & 0 & b \end{pmatrix}, B(U,\bar{U}) = \begin{pmatrix} 0 \\ (U_1\bar{U_3}+U_3\bar{U_1})/2 \\ -(U_1\bar{U_2}+U_2\bar{U_1})/2 \end{pmatrix}, f=\begin{pmatrix} 0 \\ 0 \\ - b(r+\alpha) \end{pmatrix}.\]
The standard observation operator \(P\) is the projection onto the \(U_1\) subspace. With this operator \(P\) all items of Property~\ref{Prop:Finite} are easily verified. Furthermore, we have \[(B(U,V),PW)=0\] for all \(U,V,W \in \mathbb{R}^3\), meaning that the nonlinear part of the flow is always perpendicular to the observations.
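These identities are easy to verify numerically. The following Python sketch (illustrative helper code, not from the paper) checks Properties~\ref{Prop:Finite}.1 to~\ref{Prop:Finite}.3, Property~\ref{Prop:Finite}.5 and the extra orthogonality \((B(U,V),PW)=0\) on random vectors, using the operators \(A\), \(B\) and the standard \(P\) defined above.

```python
import numpy as np

# Sanity check (illustrative helper code) of the structural properties of
# the Lorenz '63 operators with alpha = 10, r = 28, b = 8/3, and P the
# orthogonal projection onto the U_1 coordinate.
alpha, r, b = 10.0, 28.0, 8.0 / 3.0
A = np.array([[alpha, -alpha, 0.0], [alpha, 1.0, 0.0], [0.0, 0.0, b]])

def B(U, V):
    return np.array([0.0,
                     (U[0] * V[2] + U[2] * V[0]) / 2,
                     -(U[0] * V[1] + U[1] * V[0]) / 2])

P = np.diag([1.0, 0.0, 0.0])                 # observe U_1 only
Q = np.eye(3) - P
rng = np.random.default_rng(2)
for _ in range(100):
    U, V, W = rng.normal(size=(3, 3))
    assert np.allclose(B(U, V), B(V, U))     # Property 1: symmetry
    assert abs(B(U, U) @ U) < 1e-10          # Property 2: energy conservation
    assert np.allclose(B(Q @ U, Q @ U), 0.0)  # Property 3
    assert (A @ U) @ U >= U @ U - 1e-9       # Property 5 (min(alpha,1,b) = 1)
    assert abs(B(U, V) @ (P @ W)) < 1e-12    # extra L63 orthogonality
```

Property~\ref{Prop:Finite}.5 holds here because \((AU,U)=\alpha U_1^2+U_2^2+bU_3^2\) and \(\min(\alpha,1,b)=1\) for the standard parameter values.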
This last property is specific to Lorenz~'63; it does not hold for Lorenz~'96 or N-S. It means that we can use a much simplified estimate for \(|P\delta|^2\): taking the inner product of the error Equation~\cref{eq:error_eq} with \(P\delta\) and applying Inequality~\cref{eq:Property6} now yields
\[\frac{d|P\delta|^2}{dt} + 2a_2|P\delta|^2 \leq 2a_3|\delta_n|^2e^{\kappa(t-t_n)}\leq 2a_3|\delta_n|^2e^{\kappa h}.\] Setting \(a_7 = 2a_3e^{\kappa h}\), the estimate~\cref{eq:Pdelta_bound} on \(|P\delta|^2\) is simplified to
\[ |P\delta(t)|^2\leq e^{-2a_2(t-t_n)}\Big(\frac{a_7}{2a_2}|\delta_n|^2\big(e^{2a_2(t-t_n)}-1\big) + |P\delta_n|^2\Big). \]
We note that the stochastic \(\rho_n\) no longer appears. We follow the proof of \cref{Lemma:ass3} up to Equation~\cref{eq:L12} and then use the simplified bound obtained above. Thus we get
\[|\delta|^2 \leq |\delta_n|^2M(t-t_n) + \gamma(t-t_n)|P\delta_n|^2,\]
where
\[M(\tau)= e^{-\tau}\Big(1 + a_8\int_0^{\tau}\big(e^{s}-e^{(-2a_2+1)s}\big)\, ds\Big) \] and \[\gamma(\tau)= b^2Ke^{-\tau}\int_0^{\tau}e^{(-2a_2+1)s}\, ds, \]
where \(a_8 = b^2K\frac{a_7}{2a_2}.\)
We can see that in the particular case of Lorenz~'63 we get a stronger result, because \(M\) is deterministic and does not depend on the size of \(|\delta(t_n)|\). Consequently, we just need to show that the non-random function satisfies \(M(\tau)<1\) for Assumption~\ref{as:four} to hold. This can readily be verified: \(M(0) = 1\) and \(M'(\tau) = -M(\tau) + a_8(1-e^{-2a_2\tau})\), so that \(M'(0) = -1 < 0\). Therefore, there exists a \(\tau^{*}>0\) such that \(M(\tau)<1\) for all \(\tau < \tau^{*}\).
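The boundary values \(M(0)=1\) and \(M'(0)=-1\) can also be confirmed symbolically from the integral definition of \(M\); a sketch (assuming SymPy, with the illustrative choice \(a_2=1\) and \(a_8\) left symbolic):

```python
import sympy as sp

tau, s, a8 = sp.symbols('tau s a8', positive=True)
a2 = 1  # illustrative value; any fixed a2 > 0 with a2 != 1/2 works the same way

# M(tau) as defined in the text
M = sp.exp(-tau)*(1 + a8*sp.integrate(sp.exp(s) - sp.exp((-2*a2 + 1)*s),
                                      (s, 0, tau)))

assert sp.simplify(M.subs(tau, 0)) == 1
assert sp.simplify(sp.diff(M, tau).subs(tau, 0)) == -1
```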
In this case the \(C_n\) of \cref{th:Main} have a much simpler form. Choose some \(h \in (0,\tau^*)\) and let \(\zeta > 0\) be some constant such that \(M(h) < \zeta < 1\). We can replace \(M_k\) by the constant \(\zeta\) in Equation~\cref{eq:Meq} and get
\begin{align*}
|\delta_{n}|^2 &\leq \sigma^2\sum_{l=0}^{n-1} \zeta^l \gamma|R_{n-l-1}|^2 +\zeta^n|QU_0-\eta|^2+\sigma^2|R_n|^2.
\end{align*}
Therefore \[\limsup_{n}\Big(|\delta_{n}|^2-\sigma^2(\sum_{l=0}^{\infty}\zeta^l \gamma|R_{n-l-1}|^2+ |R_n|^2)\Big) \leq 0.\]
Hence, a possible form of \(C_n\) is \(C_n =\sum_{l=0}^{\infty}\zeta^l \gamma|R_{n-l-1}|^2+ |R_n|^2 \), which is a stationary process due to the assumptions on \(R_n\). Furthermore, \(\mathbb{E}(C_n) = \frac{1-\zeta+ \gamma}{1-\zeta} < \infty\). Therefore, since \(C_n \geq 0\), it is a.s.\ finite.
We note also that \[\limsup_{n}\mathbb{E}|\delta_{n}|^2\leq \sigma^2\mathbb{E}(C_n) = \frac{\sigma^2(1-\zeta+\gamma)}{1-\zeta},\]
so that the long-term mean square of the error is proportional to the strength of the noise, since constants \(\zeta\) and \(\gamma\) are independent of the noise and only depend on the data assimilation interval \(h\).
The bounding process \(C_n\) gives little information in the limit \(h \to 0\), because then \(\zeta \to 1\). The same problem arises for 3DVAR, as shown in~\cite{Law2014}; however, they also give numerical results showing that the accuracy of the filter is, fortunately, a lot better than the theoretical bound implies. Clearly the bounds we give are not sharp, since we make a number of estimates along the way. The main problem with our analysis for small \(h\) is that we always sum the squared magnitude of the observational error. If \(h\) is small enough, however, the dynamics is close to the identity, which should lead to considerable cancellations between the propagated errors. This is not taken into account in our approach.
We remark also that the above \(P\) is not the only observation projection that would allow for \cref{th:Main} to hold. Any such \(P\) would need to satisfy Property~\ref{Prop:Finite}.3. That is, \(B(QU,QU) = 0\), so that the image of \(Q\) is contained in the null space of \(B\). The null space of \(B\) is given by \(U_3U_1=0\) and \(U_1U_2 =0\) so that it is composed of the plane \(U_1 = 0\) and the line \(U_3 = U_2= 0\). This means that \(Q\) must project either onto the \((U_2,U_3)\)-plane or the \(U_1 \) subspace or the origin. Since \(P = I-Q\), \(P\) can project either onto the \((U_2,U_3)\)-plane or the \(U_1\) subspace, or the whole space (i.e.~P is the identity). We note that observing only the \(U_2\) or only the \(U_3\) subspace would not work.
\subsection{Lorenz '96 model}\label{ss:L96}
The Lorenz~'96 model~\cite{Lorenz1996} is given by
\[\frac{dU_i}{dt} = (U_{i+1} - U_{i-2})U_{i-1}-U_i + F,\] for \(i=1,\ldots,N\), \(N=3M\) for some \(M \in \mathbb{N}\), with \(U_{-1} = U_{N-1}\), \(U_0 = U_N\), \(U_{N+1} = U_1\) and \(F=8\).
As given in~\cite{Law2016}, in this model \(A\) is the \(N \times N\) identity matrix, \(f=(8,...,8)^T\) is an \(N\) dimensional vector, and the symmetric bilinear form is given by \[B(U,V)_i = -\frac{1}{2}((U_{i+1}- U_{i-2})V_{i-1}+(V_{i+1}- V_{i-2})U_{i-1}).\]
The projection operator \(P\) is produced by setting every third column of the identity matrix to 0. That is, \[P = (e_1, e_2,0, e_4,..., 0, e_{N-2}, e_{N-1}, 0).\]
With the above observation operator it has been shown, see~\cite{Law2014}, that Property~\ref{Prop:Finite} holds and that \(a_2 = 1\) and \(a_3 = 0\) since $ A $ is the identity matrix. Furthermore, we have that \(b = 6\) and \(a_1 = 2\).
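As for Lorenz~'63, these identifications can be verified numerically. The sketch below (assuming NumPy, and assuming ODE~\cref{eq:ODE} has the form \(\dot U + AU + B(U,U) = f\)) checks that \(A=I\), \(f=(F,\ldots,F)^T\) and the bilinear form above reproduce the model, and that the nonlinearity conserves energy, i.e.\ \((B(U,U),U)=0\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, F = 9, 8.0  # N = 3M with M = 3

def B(U, V):
    """Symmetric bilinear form of the Lorenz '96 model (periodic indices)."""
    Up1, Um2, Um1 = np.roll(U, -1), np.roll(U, 2), np.roll(U, 1)
    Vp1, Vm2, Vm1 = np.roll(V, -1), np.roll(V, 2), np.roll(V, 1)
    return -0.5*((Up1 - Um2)*Vm1 + (Vp1 - Vm2)*Um1)

def rhs(U):
    """Right-hand side of Lorenz '96 written componentwise."""
    return (np.roll(U, -1) - np.roll(U, 2))*np.roll(U, 1) - U + F

for _ in range(5):
    U = rng.standard_normal(N)
    # dU/dt = -A U - B(U,U) + f with A = I and f = (F,...,F)^T
    assert np.allclose(-U - B(U, U) + F, rhs(U))
    # the nonlinearity conserves energy: (B(U,U), U) = 0
    assert np.isclose(B(U, U) @ U, 0.0)
```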
In some ways the Lorenz~'96 model behaves more like the 2D Navier-Stokes equations, in that the equation for the \(P\) part of the error is not as simple; Lorenz~'63 is in this sense exceptional. Thus, in the case of Lorenz~'96 we cannot easily deduce an explicit form for the process \(C_n\).
\section{Application to Navier-Stokes}
\label{sec:NS}
In this section we show that the 2-D incompressible Navier-Stokes equations, with L-periodic boundary conditions, satisfy Assumptions~\ref{as:one} to~\ref{as:five}, and therefore that \cref{th:Main} and \cref{th:Main2} hold also for this model.
As we will see, the strategy for showing Assumptions~\ref{as:three} and \ref{as:four} for the N-S equations will differ from the finite dimensional case we saw in \cref{sec:Finite}. In the case of N-S, we are able to use only the $ Q $ part of the error equation to derive the ``squeezing'' property of Assumption~\ref{as:three}. This is due to the specific form of the observation operator \(P_{\lambda}\): the $ Q $ equation represents the higher modes, which are dissipated more quickly the larger \(\lambda\) is. In the Lorenz models all modes are dissipated at the same rate, so we cannot hope to adjust the operator \(P\) in order to obtain the same effect.
Following the notation of~\cite{Hayden2011}, let \(\Omega = [0,L] \times [0,L]\). The equations for the velocity field \(u\) and pressure \(p\) are given by
\begin{align}\label{eq:NS}
& \frac{\partial u}{\partial t} - \nu \Delta u + (u\cdot\nabla)u +\nabla p = f, \\ \nonumber
& \nabla \cdot u =0,
\end{align}
where $ \nu $ is the kinematic viscosity and \(f\) the time-independent body forcing. Let $ \mathbb{V} $ be the space of L-periodic trigonometric polynomials with zero divergence and zero constant term. That is, \[\mathbb{V} = \{u: \mathbb{R}^2 \rightarrow \mathbb{R}^2; \text{L-periodic trig. polynomial}, \nabla \cdot u= 0, \int_{\Omega} u = 0 \}, \] and let \(H\) be the closure of \(\mathbb{V}\) in \(L^2(\Omega)\) and \(V\) the closure of \(\mathbb{V}\) in the Sobolev space \(H^1\). Let \(v \in \mathbb{V}\) and let \(u \in V\) be a solution to Equation~\cref{eq:NS}. Take the \(L^2\) inner product of~\cref{eq:NS} with \(v\) to get \[(\frac{\partial u}{\partial t},v) - \nu(\Delta u, v) + (u\cdot\nabla u,v) + (\nabla p,v) = (f,v).\] Since \(v\) is divergence-free, we obtain for the pressure term \[(\nabla p,v) = \int_{\Omega} \nabla p \cdot v = -\int_{\Omega} p \nabla \cdot v=0,\] where we also use that \(v\) is periodic. By density of \(\mathbb{V}\) in \(H^1\), the weak form
\begin{equation}\label{eq:NS2}
\frac{du}{dt} + \nu A u + B(u,u) = f
\end{equation}
of the N-S equations holds for all \(v \in V\). Equation~\cref{eq:NS2} is an ODE in the dual space \(V^*\), so that $ A $ and $ B $ are operators from \(V\) to \(V^*\). If \(u \in H^2\) then \((Au,v) = \int_{\Omega} -\Delta u \cdot v \ dx \) and \((B(u,u),v) = \int_{\Omega} (u\cdot\nabla u)\cdot v \ dx \).
We can express \(u \in H\) by its Fourier series
\[ u= \sum_{\bar{k} \in \mathscr{J}} u_{\bar{k}} e^{i\bar{k}\cdot x},\]
where \[ \mathscr{J} = \Big\{ \bar{k}=\frac{2 \pi}{L}(k_1, k_2): k_i \in \mathbb{Z}, \bar{k} \neq 0 \Big\}. \]
We define norms on \(H, V\) and \(H^2 \cap H\) respectively as
\[|u|^2 = L^2\sum_{\bar{k} \in \mathscr{J}}|u_{\bar{k}}|^2,\]
\[\|u\|^2 = L^2\sum_{\bar{k} \in \mathscr{J}}\bar{k}^2|u_{\bar{k}}|^2,\] and
\[|Au|^2 = L^2\sum_{\bar{k} \in \mathscr{J}}\bar{k}^4|u_{\bar{k}}|^2,\]
which can be shown to be equivalent to the standard norms on \(L^2, H^1\) and \(H^2\) on these spaces.
The key idea of the approach taken in~\cite{Hayden2011}, and which we follow, is that there is a natural splitting of the phase space \(V\) into a finite-dimensional sub-space and its infinite dimensional orthogonal complement such that the orthogonal projection of the solution onto the finite dimensional subspace dominates.
We define the orthogonal projection $P_\lambda$ as
\[P_\lambda u = \sum_{|\bar{k}|^2 \leq \lambda} u_{\bar{k}} e^{i\bar{k}\cdot x}, \]
where $0 < \lambda \in \mathbb{Z}$. We say that \(P_\lambda\) is a projection onto the low modes.
Let us state some well-known properties of the system. In this setting, with initial conditions in \(V\), the existence and uniqueness of strong solutions is shown for example in~\cite{Robinson2001}; therefore we can define a semi-flow. We will verify Assumptions~\ref{as:one} and~\ref{as:two} for Equation~\cref{eq:NS2} by the following theorem, which is proved in~\cite{Jones1992}.
\begin{theorem}
Let \(u(t)\) solve the N-S Equation~\cref{eq:NS2} and \(u_0 \in H\), then the following estimate holds
\begin{equation} \label{eq:NS_disspi}
\|u(t)\|^2 \leq e^{-\nu\lambda_1(t-s)}\|u(s)\|^2 + \frac{1}{\nu}\int_{s}^{t}e^{-\nu\lambda_1(t-\tau)}|f|^2 \ d\tau
\end{equation}
for every \(0 <s\leq t\), where \(\lambda_1\) is the smallest eigenvalue of \(A\). In particular, we have
\begin{equation}\label{eq:NS_attractor}
\limsup_{t \to \infty} \|u(t)\|^2 \leq \frac{|f|^2}{\nu^2\lambda_1}:=K.
\end{equation}
\end{theorem}
It follows from \cref{cor:apriori} that Assumption~\ref{as:two} is satisfied with constant \(c_1 = \nu \lambda_1\) and \(c_2 = K\).
It follows from Inequality~\cref{eq:NS_disspi} that the ball \(B(0,r)\) with \(r > K^{1/2}\) is an absorbing set: whatever bounded set of initial conditions we start with, there is a time after which the corresponding solutions are contained in the ball. Furthermore, it is straightforward to show from~\cref{eq:NS_disspi} that \(B(0,r)\) is forward invariant, as required for Assumption~\ref{as:one}.
In the case where no noise is present in the observations, the existence of a function \(M\), as required for Assumption~\ref{as:three}, is shown in~\cite{Hayden2011}, Theorem 3.9. We follow the same reasoning, but with the adjustment that in our setting \(P_\lambda \delta(t_n) \neq 0\), so that the induction argument used there to ensure a bound on \(\|\delta_n\|^2\) fails in our case, due to the noise term in the observations, which can be arbitrarily large. Hence, we replace the \(R = \|\delta_0\|^2\) bound from~\cite{Hayden2011} by an a priori bound from Assumption~\ref{as:two}. We conclude that whenever there exists a \(\rho>0\) such that \(\|\delta_n\|^2 < \rho\), we have for \(t \in [t_n,t_{n+1})\),
\[\|Q_\lambda \delta(t)\|^2 \leq M(t-t_n, \rho)\|\delta(t_n)\|^2, \] where
\[M(h, \rho) = e^{-\nu \lambda h} \Big(1+\int_{0}^{h}g(s, \rho) e^{\nu\lambda s} \ ds \Big)\] and \[g(s, \rho) = C_1 \lambda ^{1/4} e^{\kappa s}(\rho(h,\omega)^{1/2}e^{\kappa s /2}+ 2K^{1/2})^2 + C_2 e^{\kappa s}(\rho(h,\omega)^{1/2}e^{\kappa s /2}+ 2K^{1/2})^{8/3}, \] and where
\(C_1 = 2^{-1/4}\nu^{-1}\lambda_1^{-1/4}\), \(C_2 = 5^{5/3}2^{-22/3}3\nu^{-5/3}\lambda_1^{-1/3}\)\footnote{We have used the explicit value for the dimensionless constant \(c = 2^{-3/2}\) that appears in~\cite{Hayden2011}, Theorem 3.4.}. Further, \(K\) is the size of the attractor of the N-S dynamical system defined by Equation~\cref{eq:NS_attractor}. Finally, \(\kappa~=~2^{-1/3}(5/8)^{5/3}(3/8)\nu^{-5/3}\lambda_1^{-1/3}K^{4/3}\) is the constant as in~\cite{Hayden2011},~Theorem~3.8.
We want to use \cref{th:Main} to show that this random bound is sufficient to obtain convergence. Indeed, we can show that Assumption~\ref{as:four} holds.
\begin{theorem}
Suppose that $\mathbb{E}(|R_{0}|^{8/3})< \infty$, then for all \(h>0\), there exists a \(\lambda^* < \infty\) such that for all \(\lambda > \lambda^*\), Assumption~\ref{as:four} holds. That is, \(\mathbb{E}(M(h,\rho_0(h))) < 1\).
\end{theorem}
\begin{proof}
By the previous discussion, we have that
\[\mathbb{E}(M(h,\rho_0(h))) = e^{-\nu \lambda h} \Big(1+\int_{0}^{h}\bar{g}(s,\rho_0(h)) e^{\nu\lambda s} \ ds \Big),\]
where
\begin{equation*}
\bar{g}(s,\rho_0(h)) := C_1 \lambda^{1/4}e^{\kappa s}\mathbb{E}(l(h,s)^2) + C_2 e^{\kappa s}\mathbb{E}(l(h,s)^{8/3}),
\end{equation*}
and where \(l(h,s):=\rho_0(h)^{1/2}e^{\kappa s/2}+ 2K^{1/2}\).
Note that \(\bar{g}(s,\rho_0(h)) \leq \bar{g}(h, \rho_0(h))\) for all \( s \leq h\). Then
\begin{align*}
\mathbb{E}(M(h,\rho_0(h))) & \leq e^{-\nu\lambda h} \Big(1+ \bar{g}(h, \rho_0(h)) \int_{0}^{h} e^{\nu \lambda s} \ ds \Big) \\
& = e^{-\nu\lambda h} + \frac{\bar{g}(h, \rho_0(h))}{\nu \lambda} \Big(1- e^{- \nu \lambda h}\Big).
\end{align*}
From the above it follows that \(\mathbb{E}(M(h,\rho_0(h)))<1\) if
\(-\nu\lambda + \bar{g}(h,\rho_0(h)) < 0\). Using the definition of \(\bar{g}\), we get
\begin{equation}\label{eq:lambda2}
-\nu \lambda + C_1 \lambda ^{1/4} e^{\kappa h}\mathbb{E}(l^2) + C_2 e^{\kappa h}\mathbb{E}(l^{8/3}) < 0,
\end{equation}
where \(l:=l(h,h)\).
It is clear that Inequality~\cref{eq:lambda2} will hold for some sufficiently large \(\lambda\) if the second and third terms of~\cref{eq:lambda2} are finite. It is sufficient to show that \(\mathbb{E}(l^{8/3})\) is finite, since then, any lower moment is finite.
\begin{comment}To see this, consider any real \(\alpha > \beta >0\) and positive random variable \(X\). We can write \(X^\beta \leq 1+ X^\alpha\) so that \(\mathbb{E}(X^\beta) \leq 1+ \mathbb{E}(X^\alpha)\) by linearity and monotonicity of the integral. Therefore, if \(\mathbb{E}(X^\alpha)\) is finite, so is \(\mathbb{E}(X^\beta)\).\footnote{We can show this using Jensen's inequality also, since \(\alpha/\beta >1\); \[\mathbb{E}(X^\beta)^{\alpha/\beta} \leq \mathbb{E}(X^\alpha) < \infty. \]}\end{comment}
Recall that \(l^{8/3}= (\rho_0(h)^{1/2}e^{\kappa h /2}+ 2K^{1/2})^{8/3} \). It is sufficient to show that \(\mathbb{E}(\rho_0(h)^{4/3})< \infty\), since
\begin{align*}
\mathbb{E}(l^{8/3}) & = \int (\rho_0(h)^{1/2}e^{\kappa h /2}+ 2K^{1/2})^{8/3} d\mathbb{P} \\
& = \| (\rho_0(h)^{1/2}e^{\kappa h /2}+ 2K^{1/2})\|_{8/3}^{8/3} \\
& \leq \Big(e^{\kappa h /2}\|\rho_0(h)^{1/2}\|_{8/3} + 2K^{1/2}\Big)^{8/3},
\end{align*}
where in the last step we applied the Minkowski inequality.
It is clear that the right-hand side of the above inequality is finite if \[\|\rho_0(h)^{1/2}\|_{8/3} =\|\rho_0(h)\|_{4/3}^{1/2} < \infty.\]
Using the Minkowski inequality on the a priori bound we get
\[\|\rho_0(h)\|_{4/3} \leq \bar{K}+ F(h)+ 4\sigma^2\sum_{k=0}^{\infty} e^{-\nu\lambda_1kh}\|R_{-k}^2\|_{4/3}, \]
where \(\bar{K}\) and \(F(h)\) are both deterministic and the right hand side is finite if \(h>0\) and \(\|R_{-k}^2\|_{4/3} < \infty\).
\end{proof}
The above result does not hold uniformly for small \(h\), since the bound diverges at \(h=0\).
In the previous theorem we saw that for any \(h>0\), there exists a finite \(\lambda\) which guarantees that \(\mathbb{E}(M(h,\rho_0(h))) < 1\). We can compute an explicit expression for a possible \(\lambda\) from Equation~\cref{eq:lambda2}, which is given in \cref{Lemma:appendix} in the Appendix.
\subsection*{Acknowledgements}
We would like to thank Peter Jan van Leeuwen, Andrew M. Stuart, and Edriss S. Titi for fruitful discussions.
\input{Arxiv_file1_bibliography.bbl}
\section{Appendix}
\begin{lemma} \label{Lemma:appendix}
Inequality~\cref{eq:lambda2} holds for all
\begin{equation}\label{eq:lambda}
\lambda \geq \max\Big(2^{-1}e^{4/3\kappa h}\mathbb{E}(l^2)^{4/3},5^{5/3}2^{-19/3}3e^{\kappa h}\mathbb{E}(l^{8/3})\Big)\lambda_1^{-1/3}\nu^{-8/3}.
\end{equation}
\end{lemma}
\begin{proof}
We consider two possible cases, according to whether the second term of Inequality~\cref{eq:lambda2} is greater or smaller than the third term; these correspond to \(\lambda\) being greater or smaller than the expression
\begin{equation}\label{eq:tag1}
\Big(\frac{\mathbb{E}(l^{8/3})}{\mathbb{E}(l^2)}\Big)^4(5^{5/3}2^{-19/12}3)^4\lambda_1^{-1/3}\nu^{-8/3}:=M_1.
\end{equation}
Replacing in Inequality~\cref{eq:lambda2}, we have that for \(\lambda \) greater than or equal to expression~\cref{eq:tag1}, if \(\lambda\) satisfies the inequality below, then Inequality~\cref{eq:lambda2} holds as well:
\[-\nu \lambda + 2^{-3/4}\nu^{-1}\lambda_1^{-1/4} \lambda ^{1/4} e^{\kappa h}\mathbb{E}(l^2) < 0,\] so that
\[\lambda > 2^{-1}\nu^{-8/3}\lambda_1^{-1/3} e^{4/3\kappa h}\mathbb{E}(l^2)^{4/3}:=M_2,\]
and hence
\begin{equation}\label{eq:lambda3}
\lambda > \max\Big(M_1,M_2\Big).
\end{equation}
On the other hand, if
\( \lambda \) is less than expression~\cref{eq:tag1}, we can replace Inequality~\cref{eq:lambda2} with
\[-\nu \lambda + 5^{5/3}2^{-19/3}3\nu^{-5/3}\lambda_1^{-1/3} e^{\kappa h}\mathbb{E}(l^{8/3}) < 0,\] so that
\[\lambda > 5^{5/3}2^{-19/3}3\lambda_1^{-1/3}\nu^{-8/3}e^{\kappa h}\mathbb{E}(l^{8/3}):=M_3,\] and hence
\begin{equation}\label{eq:lambda4}
M_3< \lambda <M_1.
\end{equation}
There are solutions for \(\lambda\) in Inequality~\cref{eq:lambda4} if and only if \[e^{\kappa h} < \frac{\mathbb{E}(l^{8/3})^3}{\mathbb{E}(l^2)^4}5^52^{-16}3^3,\] so that
\[e^{4/3\kappa h} <\frac{\mathbb{E}(l^{8/3})^4}{\mathbb{E}(l^2)^{16/3}}(5^52^{-16}3^3)^{4/3}.\]
Multiplying both sides by \(2^{-1}\nu^{-8/3}\lambda_1^{-1/3} \mathbb{E}(l^2)^{4/3}\) we get precisely that
\[M_2 < M_1.\]
Conversely, when \(M_3 > M_1\), we have that \(M_2>M_1\), which means that Inequality~\cref{eq:lambda3} becomes
\begin{equation}\label{eq:lambda5}
\lambda > M_2.
\end{equation}
Putting Inequalities~\cref{eq:lambda4} and~\cref{eq:lambda5} together, we see that we require that
\[\lambda > \max \Big(M_2, M_3\Big).\]
\end{proof}
\end{document} |
Nature prefers Yang-Mills theory in exactly $1+3$ dimensions. There has been much recent interest in a mathematically exceedingly rich four-dimensional Yang-Mills model, the nearly unique ${\cal N}=4$ supersymmetric theory \cite{Brink:1976bc,Gliozzi:1976qd}. In addition to its gauge and super-conformal symmetries, it exhibits, in the planar limit, the phenomenon of {\it integrability}, see the series of review papers \cite{Beisert:2010jr}. What is special about $1+3$ dimensions? One remarkable fact is that general space-time events with Minkowski coordinates $x^\mu \in \mathbb{R}^{1,3}$ may be packaged into general $2 \times 2$ hermitian matrices. After Fourier-transforming to dual space-time, a momentum four-vector $p^\mu \in \mathbb{R}^{1,3}$ may be written as the hermitian matrix
\begin{equation}
p^{\alpha \dot \alpha} =
\begin{pmatrix}
p_0+\, p_3& p_1-i\, p_2\\
p_1+i\, p_2&p_0-\, p_3
\end{pmatrix}.
\end{equation}
Massless particles satisfy $p^2= p^\mu p_\mu=\det p^{\alpha \dot \alpha}=0$. The matrix then has at most rank $1$, and we can ``factor'' it into spinorial Weyl variables: $p^{\alpha \dot \alpha}=\lambda^\alpha \tilde \lambda^{\dot \alpha}$. For ${\cal N}=4$ super Yang-Mills the spinors $\lambda^\alpha$, $\tilde \lambda^{\dot \alpha}$ are nicely complemented by the four Gra{\ss}mann spinor variables $\eta^A$ with $A=1,2,3,4$. The resulting eight spinor-helicity variables $(\lambda^\alpha_j, \tilde \lambda^{\dot \alpha}_j, \eta^A_j)$ are highly efficient for neatly expressing the general color-stripped tree-level amplitudes for the scattering of $j=1, \ldots, n$ massless particles of the model. With total momentum $P^{\alpha \dot \alpha}=\sum_j \lambda^{\alpha}_j \tilde \lambda^{\dot \alpha}_j$ and super-momentum $Q^{\alpha A}=\sum_j \lambda^{\alpha}_j \eta_j^A$ and the brackets
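As a quick numerical illustration (a NumPy sketch, not part of the derivation), one can check that \(\det p^{\alpha \dot \alpha} = p^\mu p_\mu\) for any real four-vector, and that a rank-one hermitian matrix built from a single spinor is automatically null:

```python
import numpy as np

def p_matrix(p0, p1, p2, p3):
    """Package a real four-vector into the 2x2 hermitian matrix p^{alpha alphadot}."""
    return np.array([[p0 + p3, p1 - 1j*p2],
                     [p1 + 1j*p2, p0 - p3]])

# det p = p0^2 - p1^2 - p2^2 - p3^2 = p^2 for any real four-vector
p0, p1, p2, p3 = 5.0, 1.0, 2.0, 3.0
P = p_matrix(p0, p1, p2, p3)
assert np.isclose(np.linalg.det(P).real, p0**2 - p1**2 - p2**2 - p3**2)

# a null momentum factorizes as p = lambda lambdatilde, with
# lambdatilde = conj(lambda) for real momenta
lam = np.array([1.0 + 2.0j, 3.0 - 1.0j])
P_null = np.outer(lam, lam.conj())
assert np.allclose(P_null, P_null.conj().T)    # hermitian
assert np.isclose(np.linalg.det(P_null), 0.0)  # rank <= 1, so p^2 = 0
```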
$\langle p q \rangle=\epsilon_{\alpha \beta} \lambda_p^\alpha \lambda_q^\beta$
and
$ [p q]=\epsilon_{\dot \alpha \dot \beta} \tilde \lambda_p^{\dot \alpha} \tilde \lambda_q^{\dot \beta}$
the result is the distribution
\begin{equation}\label{general-tree}
{\mathcal A}_{n,k} =
\frac{\delta^4(P^{\alpha \dot \alpha}) \delta^8(Q^{\alpha A})}{\langle 1 2\rangle \langle 2 3\rangle \ldots \langle n-1, n\rangle \langle n 1\rangle}\,{\cal P}_n(\{\lambda_j,\tilde \lambda_j,\eta_j\}),
\end{equation}
see \cite{Drummond:2008cr} and references therein. All external helicity configurations are generated by expansion in the $\eta_j^A.$ Super-helicity $k$ corresponds to the terms of order $\eta^{4 k}$. In the simplest, maximally-helicity-violating (MHV) case we have $k=2$, where ${\cal P}_n=1$. We may also define ``nextness'' $\hat k=k\!-\!2$. Then, for $k>2$, ${\mathcal A}_{n,k}$ corresponds to the N$^{\hat k}$MHV (pronounced ``Next-to-the-$\hat k$-MHV'') amplitude, where the ${\cal P}_n$ are recursively determined rational functions of the spinor helicity variables.
In \cite{ArkaniHamed:2009dn} a remarkable reformulation of \eqref{general-tree} was presented. It takes the form of an integral over a Gra{\ss}mannian space ${\rm Gr}(k,n)$. The latter is the set of $k$-planes through the origin of an $n$-dimensional vector space. Note that $k=1$ is ordinary projective space. ``Points'' in ${\rm Gr}(k,n)$ are described by ``homogeneous'' coordinates, which are packaged into a $k \times n$ matrix $C=(C_{aj})$. Here $C$ and $A \cdot C$ with $A \in {\rm GL}(k)$ correspond to the same point in ${\rm Gr}(k,n)$. It is convenient to employ super-twistors $\mathcal{W}_j^{\mathcal{A}}=(\tilde \mu^{\alpha}_{j},{\tilde \lambda}^{\dot\alpha}_{j}, \eta^{A}_{j})$, where ${\mathcal A}=(\alpha, \dot \alpha, A)$
and $\alpha, \dot \alpha=1,2$, $A=1,\ldots,4$,
by performing a formal half-Fourier transform from $\lambda_{j}^{\alpha}$ to $\tilde \mu^{\alpha}_{j}$. The Gra{\ss}mannian integral then reads
\begin{equation}\label{ACCK}
{\mathcal A}_{n,k} =
\int \frac{ d^{k\cdot n} C}{{\rm vol}({\rm GL}(k))}\,
\frac{\delta^{4k|4k}(C\cdot\mathcal{W})}{(1,\,...\,, k)(2,\,...\,, k\!+\!1)\ldots(n,\,...\,, n\!+\!k\!-\!1)}\, .
\end{equation}
The $(i, i+1,\ldots,i\!+\!k\!-\!1)$ are the $n$ cyclic $k \times k$ minors of the coordinate matrix $C$.
Note that $(n,\,...\,, n\!+\!k\!-\!1) = (n,\,...\,, k\!-\!1)$.
Integration is along ``suitable contours''. The ${\rm GL}(k)$ symmetry is manifest.
Fourier-transforming back to spinor-helicity space, all tree-level N$^{(k-2)}$MHV amplitudes may then indeed be obtained if the contours are correctly chosen. The amplitudes ${\mathcal A}_{n,k}$ enjoy superconformal symmetry
\begin{equation}\label{psu-symmetry}
J^{\mathcal{A} \mathcal{B}} \cdot {\mathcal A}_{n,k}=0\, , \quad {\rm with} \quad J^{\mathcal{A} \mathcal{B}}\in\alg{psu}(2,2|4).
\end{equation}
However, there is also a hidden dual super-conformal symmetry of the tree-level amplitudes
\begin{equation}\label{dual-psu-symmetry}
\tilde J^{\mathcal{A} \mathcal{B}} \cdot {\mathcal A}_{n,k}=0\, , \quad {\rm with} \quad \tilde J^{\mathcal{A} \mathcal{B}}\in\alg{psu}(2,2|4)^{\rm{dual}}\,.
\end{equation}
Commuting $J$ and $\tilde J$, one obtains Yangian symmetry \cite{Drummond:2009fd}. The latter is generated by an infinite algebra consisting of the level-zero generators $J^{\mathcal{A} \mathcal{B}}$ and a set of level-one generators $\hat J^{\mathcal{A} \mathcal{B}}$, plus an infinite tower of further symmetry generators of higher levels, which satisfy certain Serre relations. Using $\alg{psu}(2,2|4)$ generators in super-twistor form acting ``locally'' on the $j$-th particle
\begin{equation}\label{psu-generators}
J^{\mathcal{A} \mathcal{B}}_j=\mathcal{W}_j^{\mathcal{A}} \frac{\partial}{\partial \mathcal{W}_j^{\mathcal{B}}} - \sfrac{1}{8} (-1)^{\mathcal B} \delta^{\mathcal{A} \mathcal{B}} \sum_{\mathcal{C}} (-1)^{\mathcal C} \mathcal{W}_j^{\mathcal{C}} \frac{\partial}{\partial \mathcal{W}_j^{\mathcal{C}}}\,,
\end{equation}
where the second term removes the supertrace from $\alg{psu}(2,2|4)$ (this is related to the letter $\alg{s}$ in $\alg{psu}(2,2|4)$),
one may succinctly summarize the Yangian algebra relevant to amplitudes as
\begin{equation}\label{yangian-symmetry}
J^{\mathcal{A} \mathcal{B}} = \sum_{j=1}^n J^{\mathcal{A} \mathcal{B}}_j
\quad {\rm and} \quad
\hat J^{\mathcal{A} \mathcal{B}} = \sfrac{1}{2} \sum_{i<j} (-1)^{\mathcal C} \left[J^{\mathcal{A} \mathcal{C}}_i J^{\mathcal{C} \mathcal{B}}_j
- J^{\mathcal{A} \mathcal{C}}_j J^{\mathcal{C} \mathcal{B}}_i\right].
\end{equation}
This is how integrability first appeared in the planar scattering problem. To exhibit the hidden dual symmetry \eqref{dual-psu-symmetry} of the Gra{\ss}mannian integral \eqref{ACCK}, a clever change of variables was found in \cite{ArkaniHamed:2009vw}. Employing $4|4$ super momentum-twistors
$\mathcal Z_j^{\mathcal A}=(Z_j^\alpha,\chi_j^A)$ with ${\cal A}=(\alpha,A)$ and $\alpha=1,\ldots,4$,
$A=1,\ldots,4$, one transforms \eqref{ACCK} to an integral over the points of a dual Gra{\ss}mannian space ${\rm Gr}(\hat k,n)={\rm Gr}(k\!-\!2,n)$
\begin{equation}\label{MS}
{\mathcal A}_{n,k} =
\frac{\delta^4(P^{\alpha \dot \alpha}) \delta^8(Q^{\alpha A})}{\langle 1 2\rangle \langle 2 3\rangle \ldots \langle n 1\rangle}
\int \frac{ d^{\hat k\cdot n} \hat C}{{\rm vol}({\rm GL}(\hat k))}\,
\frac{\delta^{4 \hat k|4 \hat k}(\hat C\cdot\mathcal{Z})}{(1,\,...\,, \hat k) \ldots(n,\,...\,, n\!+\! \hat k\!-\!1)}\, ,
\end{equation}
where the $k=2$ MHV part neatly factors out.
One has $(n,\,...\,, n\!+\! \hat k\!-\!1)=(n,\,...\,, \hat k\!-\!1)$.
This Gra{\ss}mannian integral based on dual momentum-twistors had been independently discovered in \cite{Mason:2009qx}. Clearly it computes the function ${\cal P}_n(\{\lambda_j,\tilde \lambda_j,\eta_j\})$ in \eqref{general-tree}.
Much of the above beautiful structure is intimately tied to four dimensions. At loop level, infrared divergences appear. These are commonly dealt with by dimensional regularization. However, deviation from four dimensions irretrievably destroys all of the above structure. One is then led to look for a more natural regulator, where natural means it should a) respect the fixed space-time dimensionality four and b) respect the Yangian symmetry, i.e.\ integrability. Such a regularization scheme was proposed in \cite{Ferro:2012xw,Ferro:2013dga}. It may be understood as follows. We should look at the ordinary (as opposed to super) trace in \eqref{psu-generators}. Define the ``local'' and ``overall'' central charge operators,
the minus sign being a convention, respectively as
\begin{equation}\label{centralcharge}
C_j=-\sum_{\mathcal A} J^{\mathcal{A} \mathcal{A}}_j=
-\sum_{\mathcal A} \mathcal{W}_j^{\mathcal{A}} \frac{\partial}{\partial \mathcal{W}_j^{\mathcal{A}}},
\qquad \qquad
C=\sum_{j=1}^n C_j
\,.
\end{equation}
These are related to the letter $\alg{p}$ in $\alg{psu}(2,2|4)$. We should impose $C_j=0$ ``locally'' or $C=0$ ``overall'', respectively, to obtain local or overall $\alg{psu}(2,2|4)$ symmetry, and not some central extension of it. The idea in \cite{Ferro:2012xw,Ferro:2013dga} was to do away with local invariance, and to just impose the overall one. This maneuver has an interesting mathematical as well as physical interpretation. {\it Mathematically}, we are led to the so-called evaluation representation of the Yangian, where \eqref{yangian-symmetry} is modified to
\begin{equation}\label{deformed-yangian-symmetry}
J^{\mathcal{A} \mathcal{B}} = \sum_{j=1}^n J^{\mathcal{A} \mathcal{B}}_j
\quad {\rm and} \quad
\hat J^{\mathcal{A} \mathcal{B}} = \sfrac{1}{2} \sum_{i<j} (-1)^{\mathcal C} \left[J^{\mathcal{A} \mathcal{C}}_i J^{\mathcal{C} \mathcal{B}}_j
- J^{\mathcal{A} \mathcal{C}}_j J^{\mathcal{C} \mathcal{B}}_i\right]-\sum_{j=1}^n v_j J^{\mathcal{A} \mathcal{B}}_j\,.
\end{equation}
``Switching on'' non-zero eigenvalues $c_j$ for the deformed local central charges $C_j$ results in non-vanishing evaluation (or spectral) parameters $v_j$. We will momentarily give the relation between the $c_j$ and the $v_j$, see \eqref{cyclic-identification} below. {\it Physically}, we can interpret the procedure by rewriting the $C_j$ of \eqref{centralcharge} in terms of spinor-helicity variables. One finds
\begin{equation}\label{centralcharge-spinhel}
C_j=2+\lambda^\alpha_j \frac{\partial}{\partial \lambda^\alpha_j}
-\tilde \lambda^{\dot \alpha}_j \frac{\partial}{\partial \tilde \lambda^{\dot \alpha}_j}
-\eta^A_j \frac{\partial}{\partial \eta^A_j}=2-2\,h_j\,
\end{equation}
where $h_j$ is the super-helicity of particle $j$. So we are deforming the helicities of the scattering particles. This is algebraically, read ``locally'', consistent, since the quantization of helicities to integer or half-integer values is due to global properties of the conformal group. One could then ask how the Gra{\ss}mannian contour formulas are deformed. The final answer is exceedingly simple, and very natural. Let us define shifted spectral parameters \cite{Beisert:2014qba}
\begin{equation}\label{v-plus-minus}
v_j^\pm=v_j\pm\sfrac{c_j}{2}\,.
\end{equation}
As we will prove in section \ref{sec:Furtherdetails}, one then finds that \eqref{ACCK} is elegantly deformed to
\begin{equation}\label{ACCK-deformed}
{\mathcal A}_{n,k}\left(\{v_j^\pm\}\right)=
\int \frac{ d^{k\cdot n} C}{{\rm vol}({\rm GL}(k))}
\frac{\delta^{4k|4k}(C\cdot\mathcal{W})}{(1,\,...\,, k)^{1+v_k^+-v_1^-}
\ldots(n,\, ...\, ,k\!-\!1)^{1+v_{k-1}^+-v_n^-}}\,.
\end{equation}
Note that it is not really the Gra{\ss}mannian space ${\rm Gr}(k,n)$ as such that is deformed, but the integration measure on this space. One easily sees that the ${\rm GL}(k)$ symmetry of \eqref{ACCK} is also preserved: The measure times delta function factors are ${\rm SL}(k)$ invariant, and so are the minors. Finally, invariance under an overall scale transformation of $C$ is ensured by the telescoping property of the deformation weights on the minors and the vanishing of overall central charge.
We will show below that formula \eqref{ACCK-deformed} is Yangian invariant, iff we impose $n$ conditions on the $2 n$ deformation parameters $\{v_j^\pm\}$:
\begin{equation}\label{cyclic-identification}
v^+_{j+k}=v^-_j
\qquad {\rm for} \qquad j=1,\ldots,n\,.
\end{equation}
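The telescoping of the deformation exponents under this identification, which underlies the overall scale invariance of \eqref{ACCK-deformed} noted above, can be checked symbolically; a SymPy sketch with the illustrative values \(n=6\), \(k=3\) (indices taken mod \(n\)):

```python
import sympy as sp

n, k = 6, 3  # illustrative values
vminus = sp.symbols(f'vm0:{n}')  # v^-_j, 0-indexed
# cyclic identification v^+_{j+k} = v^-_j, i.e. v^+_j = v^-_{(j-k) mod n}
vplus = [vminus[(j - k) % n] for j in range(n)]

# the minor starting at position i carries exponent 1 + v^+_{i+k-1} - v^-_i;
# the deformation parts must sum to zero over all n minors
S = sum(vplus[(i + k - 1) % n] - vminus[i] for i in range(n))
assert sp.simplify(S) == 0
```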
One may then ask whether the change of variables that takes \eqref{ACCK} to \eqref{MS} still goes through under the deformation.
Using the following relation from \cite{ArkaniHamed:2009vw} between the minors of the matrices $C$ and $\hat C$
\begin{equation}\label{minor-change}
(i+1,\, ...\,, i+\hat k)_{\hat C}
=
\frac{(i,\, ...\,, i+k-1)_C}{\langle i,i+1\rangle \ldots \langle i+k-2,i+k-1\rangle}
\,,
\end{equation}
where the subscripts indicate which matrix we consider when evaluating the minors, one easily proves that \eqref{MS} deforms into
\begin{equation}\label{MS-deformed}
{\mathcal A}_{n,k}\left(\{v_j^\pm\}\right)=
\frac{\delta^4(P^{\alpha \dot \alpha}) \delta^8(Q^{\alpha A})}{\langle 1 2\rangle^{1+v_2^+-v_1^-} \ldots \langle n 1\rangle^{1+v_1^+-v_{n}^-}}\times A_{n,k}\left(\{v_j^\pm\}\right),
\end{equation}
with
\begin{equation}\label{MS-integral}
A_{n,k}\left(\{v_j^\pm\}\right)= \int \frac{ d^{\hat k\cdot n} \hat C}{{\rm vol}({\rm GL}(\hat k))}\,
\frac{\delta^{4 \hat k|4 \hat k}(\hat C\cdot\mathcal{Z})}{(1,\,...\,, \hat k)^{1+v_{\hat k+1}^+-v_n^-} \ldots(n, \,...\,, \hat k\!-\!1)^{1+v_{\hat k}^+-v_{n-1}^-}}\,.
\end{equation}
Note that both the MHV-prefactor and the contour integral are deformed. From \eqref{cyclic-identification}, we see that the total number of deformation parameters is $k$-independent and equals $n\!-\!1$, since \eqref{ACCK-deformed} and \eqref{MS-deformed} depend only on differences of the $\{v_j^\pm\}$.
\section{Meromorphicity Lost and Gained}
\label{meromorphicity}
Let us take a closer look at the deformed Gra{\ss}mannian integrals \eqref{ACCK-deformed} and \eqref{MS-deformed},\eqref{MS-integral}, and compare them to their undeformed versions \eqref{ACCK},\eqref{MS}. The latter have poles in the integration variables $C_{aj}$ or $\hat C_{aj}$, related to the vanishing of the minors. Apart from the delta functions, the integrand is meromorphic, or even better, just a rational function. In contrast, choosing the parameters $\{v_j^\pm\}$, constrained by \eqref{cyclic-identification}, to be non-integer, we see that generically all poles turn into branch points. Meromorphicity is lost. This does not seem to cause a problem for the MHV amplitudes, where, at least formally, we simply obtain a deformed Parke-Taylor formula, namely the prefactor of the integral in \eqref{MS-deformed}. However, for non-MHV amplitudes with $\hat k >0$, some integrations remain. In the undeformed case, these integrations are performed by the residue theorem. Here it is important to properly choose the contours in order to encircle the correct poles. This choice is dictated by the Britto-Cachazo-Feng-Witten (BCFW) recursion relations \cite{Britto:2005fq}, which of course are also based on the residue theorem. The result is that the ``top-cell'' expressions \eqref{ACCK},\eqref{MS} decompose into specific linear combinations of residues. These are themselves Yangian-invariant, and correspond to on-shell diagrams of \cite{ArkaniHamed:2012nw}. The important point now is to realize that the residue theorem is no longer available in the deformed case due to the appearance of branch cuts. So it does not make sense anymore to decompose the top-cell diagram into subsidiary on-shell components in a naive fashion, i.e.\ as though the residue theorem was still valid. Put differently, we have to give up the BCFW recursion relations, at least in the way we knew them. 
This is entirely consistent with the findings of \cite{Beisert:2014qba, Broedel:2014pia, Broedel:2014hca}, where it was shown that the deformed subsidiary Yangian-invariant on-shell diagrams in the non-MHV case cannot consistently be summed up to a deformed amplitude. However, this does {\it not} mean that non-MHV amplitudes cannot be deformed. It merely means that we cannot decompose them as in the undeformed case. Instead, we should take the deformed top-cell Gra{\ss}mannian integrals seriously, and consider them to yield Yangian-invariant deformations of all N$^{\hat k}$MHV tree-level amplitudes. We then have to perform the remaining integrations in the presence of branch cuts. While this certainly complicates things, there are three, related, potential benefits. Firstly, if the contours are chosen appropriately, we may hope to gain meromorphicity in the deformation parameters $\{v_j^\pm\}$, to compensate for the lost meromorphicity of the integrand on the Gra{\ss}mannian manifold. This opens up an exciting perspective: We should look for a deformed analog of the BCFW relations in the space of spectral parameters. Secondly, by way of conjecture, demanding complete analyticity of the deformed amplitude away from the poles in the $\{v_j^\pm\}$ should strongly constrain the contours. The contours of the Gra{\ss}mannian integral would be determined from a powerful principle. Thirdly, we may hope that all proper contours will be compact, and will stay away from all branch points. At loop level, this should ensure the regularization of all notorious infrared divergences, as no minors on the Gra{\ss}mannian will ever vanish along the contours.
Let us further motivate these ideas with a small mathematical Gedankenexperiment. Consider Euler's integral of the first kind, or beta function
\begin{equation}\label{beta}
B(\alpha_1,\alpha_2)=\int_{0}^1 \frac{dc}{c^{1-\alpha_1}(1-c)^{1-\alpha_2}}\,.
\end{equation}
It is well defined if ${\rm Re}\, \alpha_1>0$ and ${\rm Re}\, \alpha_2>0$. Euler showed that it equals $\Gamma(\alpha_1) \Gamma(\alpha_2)/\Gamma(\alpha_1+\alpha_2)$, where $\Gamma(\alpha)$ is his integral of the second kind, also known as the Gamma function. The result is actually a meromorphic function in both $\alpha_1$ and $\alpha_2$, a fact that is totally obscure from the integral representation \eqref{beta}.
In order to render this double-analytic continuation manifest, Pochhammer \cite{Pochhammer:1890}, not being scared by passing several times through a cut, replaced \eqref{beta} by
\begin{equation}\label{bac}
\tilde B(\alpha_1,\alpha_2)=\frac{1}{(1-e^{2\pi i \alpha_1})(1-e^{2\pi i \alpha_2})}
\int_{\mathcal{P}} \frac{dc}{c^{1-\alpha_1}(1-c)^{1-\alpha_2}}\,,
\end{equation}
where the Pochhammer contour $\mathcal{P}$ is a closed path in the complex $c$ plane going clockwise around $c=0$, then clockwise around $c=1$, then counterclockwise around $c=0$, then counterclockwise around $c=1$, finally returning to the starting point. This continued function equals again $\Gamma(\alpha_1) \Gamma(\alpha_2)/\Gamma(\alpha_1+\alpha_2)$, but now allows for any complex values $\alpha_1, \alpha_2 \notin \mathbb{Z}$. Poles and zeros are recovered by taking limits where the $\alpha_j$ tend to integer values. Note that the poles at which the beta function diverges have neatly factored out; the prefactor-stripped contour integral in \eqref{bac} is manifestly finite (we never come close to $c=0,1$) and manifestly analytic in the $\alpha_j$ (the contour is compact and does not care about the specific values of the $\alpha_j$).
In summary, \eqref{beta} plays the role of a toy ``positive Gra{\ss}mannian'' integral, while \eqref{bac} is its proper analytically continued complex version. Of course, given the integrand, meromorphicity is not sufficient. If we, for simplicity, set $\alpha_1+\alpha_2=0$ and take a big circle around both branch points, we just get zero: certainly a meromorphic function. But then we do not match the ``positive Gra{\ss}mannian'' integral. This is how positivity properties might complement meromorphicity in order to completely constrain the contours.
\section{Further Details}\label{sec:Furtherdetails}
In this section we present some details on the derivation of the deformed Gra{\ss}mannian formula \eqref{ACCK-deformed} and prove that it is invariant under the action of the level-zero and the level-one Yangian generators \eqref{deformed-yangian-symmetry}. The deformed dual Gra{\ss}mannian formula \eqref{MS-deformed} then follows through the same change of variables used in \cite{ArkaniHamed:2009vw}. As we have already pointed out, the ${\rm GL}(k)$ symmetry restricts possible deformations of
\eqref{ACCK} considerably. Let us make the following ansatz
\begin{equation}\label{ACCK-withgamma}
{\mathcal A}_{n,k}\left(\{\gamma_j\}\right)=
\int \frac{ d^{k\cdot n} C}{{\rm vol}({\rm GL}(k))}
\left(\prod_{i=1}^n(i,\,...\,,i+k-1)^{-1+\gamma_i}\right)\delta^{4k|4k}(C\cdot\mathcal{W})\,,
\end{equation}
with $\sum_{i}\gamma_i=0$. It differs from the most general form in that only cyclic minors are employed. However, we will see shortly that this suffices. Indeed, we may relate $\gamma_j$ to the evaluation representation parameters $v_j$ and central charges $c_j$ by demanding Yangian invariance of \eqref{ACCK-withgamma}. One way to verify this ansatz is to construct the Yangian invariants as presented in \cite{Kanning:2014maa}, see also \cite{Broedel:2014pia}. The authors of these papers generalized the approach proposed in \cite{Chicherin:2013ora}, similar to, but different from, a standard Algebraic Bethe Ansatz, in order to find eigenvectors of the monodromy matrices acting on a suitable quantum space of an inhomogeneous spin chain. There is a natural classification of all such invariants by permutations $\sigma$, and we will be interested here only in the case where the invariants are associated to the shift
\begin{equation}\label{shift}
\sigma_{n,k}(i)=i+k\,\, (\mbox{mod } n).
\end{equation}
It corresponds to the aforementioned top-cell of the positive Gra{\ss}mannian ${\rm Gr}_+(k,n)$ of \cite{ArkaniHamed:2012nw}. The permutation \eqref{shift} admits the following decomposition into adjacent transpositions \cite{Broedel:2014pia}
\begin{equation}\label{decomposition}
\sigma_{n,k}=\underbrace{(k,k+1)\ldots(n-1,n)}\ldots\underbrace{(23)\ldots(n-k+1,n-k+2)}\underbrace{(12)\ldots(n-k,n-k+1)}\,,
\end{equation}
where $(ij)$ denotes the transposition of the elements $i$ and $j$. Using \eqref{decomposition} one can construct Yangian invariants $|\psi\rangle_{n,k}$ for top-cells as
\begin{align}\label{Yangianinv}\nonumber
|\psi\rangle_{n,k}&=\underbrace{\mathcal{B}_{n-k,n-k+1}(y_{n-k,n-k+1})\ldots\mathcal{B}_{12}(y_{1,n-k+1})}\underbrace{\mathcal{B}_{n-k+1,n-k+2}(y_{n-k,n-k+2})\ldots\mathcal{B}_{23}(y_{1,n-k+2})}\\&\ldots\underbrace{\mathcal{B}_{n-1,n}(y_{n-k,n})\ldots\mathcal{B}_{k,k+1}(y_{1,n})}\prod_{i=1}^k \delta^{4|4}(\mathcal{W}_i),
\end{align}
where $y_{ij}=v^-_i-v^-_j$, and the $v^-_i$ are given in \eqref{v-plus-minus}. The operators $\mathcal{B}_{ij}(u)$ are formally defined in terms of complex powers $u$ of the product of super-twistor variables and their derivatives
\begin{equation}
\label{integraloperatorB}
\mathcal{B}_{i j}(u)=
(-\mathcal{W}_j \cdot \partial_{\mathcal{W}_i})^u
=-\frac{\Gamma(u+1)}{2 \pi i}\int\frac{d\alpha}{(-\alpha)^{1+u}}
e^{\alpha\,\mathcal{W}_j \cdot \partial_{\mathcal{W}_i}}\,,
\end{equation}
where we abbreviated $\partial_{\mathcal{W}^{\mathcal{A}}_i}\equiv \frac{\partial}{\partial{\mathcal{W}^{\mathcal{A}}_i}}$. The attentive reader should be puzzled by this complex power of a derivative operator. In fact, extensions of ordinary derivatives to operators with arbitrary powers are called fractional derivatives. They are more akin to integral operators and have manifold representations, which depend on the ranges of variables and parameters, see \cite{lavoie} for a review. We will not enter into any details here, but suggest that fractional calculus might play an important role in the construction of deformed amplitudes. Using the fact that the operators $\mathcal{B}_{ij}(u)$ act as shift operators, we may rewrite \eqref{Yangianinv} as a Gra{\ss}mannian integral and read off the powers of the minors. In a case-by-case study up to a high number of particles $n$ as well as various values for $k$, we obtained \eqref{ACCK-deformed}, up to a trivial normalization, along with the proper deformation parameters written in terms of the $v_j$ and the $c_j$ subject to the relation \eqref{cyclic-identification}. It is possible to prove \eqref{ACCK-deformed} for all $n$ and $k$ by induction, using the approach presented above. However, the proof is very technical and is omitted here. Instead, we shall simply prove Yangian invariance by directly acting with the Yangian generators on the expressions \eqref{ACCK-deformed}, \eqref{cyclic-identification} generalized from the case-by-case results.
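The decomposition \eqref{decomposition} is easy to verify for small $n$ and $k$. The following script is an illustrative check (the convention that the rightmost transposition acts first is an assumption about how the product is to be read):

```python
def sigma_from_transpositions(n, k):
    """Compose the adjacent transpositions of the decomposition of
    sigma_{n,k}; the r-th block from the right (r = 1, ..., k) is
    (r, r+1)(r+1, r+2) ... (n-k+r-1, n-k+r)."""
    factors = []
    for r in range(k, 0, -1):             # blocks written left to right
        for i in range(r, n - k + r):     # transposition (i, i+1)
            factors.append((i, i + 1))

    def sigma(x):
        for (i, j) in reversed(factors):  # rightmost factor acts first
            if x == i:
                x = j
            elif x == j:
                x = i
        return x

    return sigma

# check sigma_{n,k}(i) = i + k (mod n) on a few representatives
for (n, k) in [(4, 2), (5, 2), (6, 3), (7, 3), (8, 4)]:
    sigma = sigma_from_transpositions(n, k)
    assert all(sigma(i) == (i + k - 1) % n + 1 for i in range(1, n + 1))
```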
For this purpose we will closely follow the steps of \cite{Drummond:2010qh}, appropriately adapted to our deformed case. Let us start from a Gra\ss mannian integral deformed with generic powers, see again \eqref{ACCK-withgamma}.
We notice that invariance under the level-zero generators imposes restrictions equivalent to the requirement that the measure of the Gra{\ss}mannian integral is ${\rm GL}(k)$ invariant. This leads to
\begin{equation}\label{deformed-level0}
\sum_{i=1}^n \gamma_i=0 \,.
\end{equation}
Next, let us turn to the level-one generators $\hat J$ in \eqref{deformed-yangian-symmetry} and rewrite their bilocal part as
\begin{align}\label{deformed-J1-bilocal}
\sfrac{1}{2} \sum_{i<j} (-1)^{\mathcal C} \left[J^{\mathcal{A} \mathcal{C}}_i J^{\mathcal{C} \mathcal{B}}_j
- (i \leftrightarrow j)\right] = \sfrac{1}{2} \left(2 \sum_{i<j} + \sum_{i=j} - \sum_{i,j}\right) (-1)^{\mathcal C} J^{\mathcal{A} \mathcal{C}}_i J^{\mathcal{C} \mathcal{B}}_j \,.
\end{align}
The last term is just a product of level-zero generators, and thus vanishes on the Gra\ss mannian integral. A rearrangement of the other two terms leads to
\begin{equation}\label{deformed-J1-bilocal-final}
\sum_{i<j} \left(\mathcal{W}_i^{\mathcal{A}}\partial_{\mathcal{W}_j^{\mathcal{B}}}\mathcal{W}_j^{\mathcal{C}}\partial_{\mathcal{W}_i^{\mathcal{C}}}-\mathcal{W}_i^{\mathcal{A}}\partial_{\mathcal{W}_i^{\mathcal{B}}}\right) + \sum_{i} c_i \mathcal{W}_i^{\mathcal{A}}\partial_{\mathcal{W}_i^{\mathcal{B}}},
\end{equation}
where we again omitted level-zero generator contributions.
Along the lines of \cite{Drummond:2010qh}, the differential operators in the variables $\mathcal{W}_i^{\mathcal{A}}$ can be exchanged for operators in the variables $c_{ai}$ when acting on the delta functions:
\begin{equation}
\mathcal{W}_j^{\mathcal{C}}\partial_{\mathcal{W}_i^{\mathcal{C}}} \delta^{4k|4k}(C\cdot\mathcal{W}) =\left( \sum_{a=1}^k c_{ai} \frac{\partial}{\partial c_{aj}}\right) \delta^{4k|4k}(C\cdot\mathcal{W}) .
\end{equation}
The next and crucial step is to integrate by parts. Here we need to be sure that no boundary terms arise. This is ensured as long as the integration contours are closed. For open contours, one has to check that the boundary terms vanish. Proceeding under this assumption, we arrive, after some manipulations of the minors, at
\begin{align}\label{deformed-invariance-result}
\hat J^{\mathcal{A} \mathcal{B}} {\mathcal A}_{n,k}\left(\{\gamma_j\}\right) &= \sum_{b=1}^k \int \frac{ d^{k\cdot n} C}{{\rm vol}({\rm GL}(k))}
\left(\prod_{i=1}^n(i,\,...\,,i+k-1)^{-1+\gamma_i}\right) \\
& \hspace{1cm} \times \sum_{i=1}^n \left[-\sum_{j=i+1}^{n}\gamma_j + \sfrac{1}{2}\, c_i - v_i \right] \mathcal{W}_i^{\mathcal{A}}\, c_{bi}\,
\partial_{\mathcal{B}}\delta_b \prod_{m\neq b}\delta_m\,,
\end{align}
where we have defined, for the sake of simplicity,
\begin{equation}
\delta_l := \delta^{4|4}\Big(\sum_{i=1}^n c_{li}\, \mathcal{W}_i\Big) \,.
\end{equation}
Since we require this expression to vanish, we need to impose that the term inside the square bracket equal a common constant for every $i$
\begin{equation}
-\sum_{j=i+1}^{n}\gamma_j + \sfrac{1}{2} c_i - v_i = \beta\,, \qquad i=1,\ldots,n\,.
\end{equation}
Any such $\beta$ simply multiplies a term proportional to level-zero generators, which leads to immediate annihilation of the deformed amplitude.
This system of equations, together with \eqref{deformed-level0}, has the solution
\begin{equation}
\gamma_j=v^-_j-v^-_{j-1}, \qquad j=1,\ldots,n, \qquad \mbox{with } \qquad v^-_{n}= -\beta \,.
\end{equation}
This is exactly the same condition we found for many values of $n$ and $k$ by using the $\mathcal{B}$-operator method. By acting with the central charges $C_j$ on \eqref{ACCK-withgamma} we easily arrive at the relation \eqref{cyclic-identification}. This finishes the proof that \eqref{ACCK-deformed} with \eqref{cyclic-identification} is Yangian invariant.
\section{A First Look at $n=6$, $k=3$}
In this section our main focus will be on the simplest non-trivial example, namely the NMHV six-point amplitude.
The emerging structure is already very rich and rather subtle. Here we present only a preliminary exploration; an in-depth study will be performed elsewhere.
As a warm-up exercise, let us start with the five-point NMHV amplitude, which was already successfully deformed in \cite{Ferro:2013dga} in ordinary (as opposed to momentum) twistor space. In the present context it is given by \eqref{MS-deformed} together with the integral \eqref{MS-integral}, where $n=5$ and $\hat k=1$. One immediately sees that the number of delta functions equals the number of integrations and the integral is formally evaluated by localizing it on the support of the delta functions. This yields
\begin{equation}\label{A53}
A_{5,3}\left(\{v_j^\pm\}\right)=
\frac{\delta^{0|4}(\langle 1234\rangle \chi_5+\langle 5123\rangle \chi_4+\langle 4512\rangle \chi_3+\langle 3451\rangle \chi_2+\langle 2345\rangle \chi_1)}{\langle 1234\rangle^{1+v_1^+-v_4^-}\langle 5123\rangle^{1+v_5^+-v_3^-}\langle 4512\rangle^{1+v_4^+-v_2^-}\langle 3451\rangle^{1+v_3^+-v_1^-}\langle 2345\rangle^{1+v_2^+-v_5^-}}\,,
\end{equation}
written in terms of $4 \times 4$ determinants of four momentum-twistors
\begin{equation}\label{fourbracket}
\langle ijkl\rangle=
\epsilon_{ABCD}\,Z_{i}^{A}Z_{j}^{B}
Z_{k}^{C}Z_l^D\,,\qquad A,B,C,D=1,2,3,4\,.
\end{equation}
One observes that the result is a deformed version of the 5-cyclic so-called R-invariant
\begin{equation}
\label{R-invariant}
[ijklm]=\frac{\delta^{0|4}(\langle ijkl\rangle \chi_m+\langle jklm\rangle \chi_i+\langle klmi\rangle \chi_j+\langle lmij\rangle \chi_k+\langle mijk\rangle \chi_l)}{\langle ijkl\rangle\langle jklm\rangle\langle klmi\rangle\langle lmij\rangle\langle mijk\rangle} \,.
\end{equation}
Let us then proceed to the scattering of six particles. This corresponds to a Gra\ss mannian integral \eqref{ACCK} defined on $\mathrm{Gr}(3,6)$ in super-twistor space or, equivalently, to a $\mathrm{Gr}(1,6)$ integral in super-momentum twistor variables \eqref{MS}. In the following we will focus on the latter. It is known \cite{ArkaniHamed:2009dn} that in the undeformed case \eqref{MS} may be reduced to an integral over one variable, and that the integrand is a rational function with six poles: the amplitude is a specific combination of three residues evaluated at these poles, accomplished by choosing a suitable contour of integration. It is fixed by the BCFW recursion relation. The answer is given by a sum of three 5-cyclic terms
\begin{equation}\label{inv63}
A_{6,3}=[12345]+[12356]+[13456]\,.
\end{equation}
This result is not manifestly 6-cyclic. However, using a six-term identity, which stems from the fact that a contour enclosing all six poles yields a vanishing integral due to the rationality of the integrand, one may alternatively rewrite it in 6-cyclic form as
\begin{equation}\label{inv63cyclic}
A_{6,3}=\sfrac{1}{2}\left([12345]+[23456]+[34561]+[45612]+[56123]+[61234]\right).
\end{equation}
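The equivalence of \eqref{inv63} and \eqref{inv63cyclic} can be verified exactly by computer. The following self-contained sketch implements a minimal Grassmann algebra over the $\chi_i^A$ with exact rational arithmetic; the moment-curve twistors $Z_i=(1,i,i^2,i^3)$ are an arbitrary choice of generic external data (all four-brackets are then nonzero Vandermonde determinants):

```python
from fractions import Fraction
from itertools import permutations

# --- minimal exact Grassmann algebra ---------------------------------
# elements are dicts {sorted tuple of generator indices: Fraction}
def gmul(x, y):
    out = {}
    for mx, cx in x.items():
        for my, cy in y.items():
            if set(mx) & set(my):
                continue                       # repeated generator -> 0
            m, sign = list(mx + my), 1
            for a in range(len(m)):            # bubble sort, track parity
                for b in range(len(m) - 1 - a):
                    if m[b] > m[b + 1]:
                        m[b], m[b + 1] = m[b + 1], m[b]
                        sign = -sign
            key = tuple(m)
            out[key] = out.get(key, Fraction(0)) + sign * cx * cy
    return {k: v for k, v in out.items() if v != 0}

def gadd(x, y):
    out = dict(x)
    for m, c in y.items():
        out[m] = out.get(m, Fraction(0)) + c
    return {k: v for k, v in out.items() if v != 0}

def gscale(x, c):
    return {} if c == 0 else {m: c * v for m, v in x.items()}

# generic momentum twistors on the moment curve Z_i = (1, i, i^2, i^3)
Z = {i: [Fraction(i)**p for p in range(4)] for i in range(1, 7)}

def parity(p):
    s = 1
    for a in range(4):
        for b in range(a + 1, 4):
            if p[a] > p[b]:
                s = -s
    return s

def bracket(i, j, k, l):                       # <ijkl>, an exact 4x4 det
    M = [Z[i], Z[j], Z[k], Z[l]]
    return sum(parity(p) * M[0][p[0]] * M[1][p[1]] * M[2][p[2]] * M[3][p[3]]
               for p in permutations(range(4)))

def R(*lab):                                   # the R-invariant [ijklm]
    num, den = {(): Fraction(1)}, Fraction(1)
    for A in range(4):                         # delta^{0|4}: product over A
        xi = {}
        for t in range(5):
            coeff = bracket(*(lab[(t + s) % 5] for s in range(1, 5)))
            xi = gadd(xi, {(4 * (lab[t] - 1) + A,): coeff})
        num = gmul(num, xi)
    for t in range(5):
        den *= bracket(*(lab[(t + s) % 5] for s in range(4)))
    return gscale(num, 1 / den)

lhs = gadd(gadd(R(1, 2, 3, 4, 5), R(1, 2, 3, 5, 6)), R(1, 3, 4, 5, 6))
rhs = {}
for i in range(6):                             # the 6-cyclic combination
    rhs = gadd(rhs, R(*[(i + s) % 6 + 1 for s in range(5)]))
rhs = gscale(rhs, Fraction(1, 2))
assert gadd(lhs, gscale(rhs, -1)) == {}        # (inv63) = (inv63cyclic)
```

Total antisymmetry of $[ijklm]$ reduces this equality to the six-term identity mentioned above, which the script confirms exactly.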
Let us study what happens once we introduce our deformation parameters. Since we have to abandon the BCFW recursion relations, which led to the particular combination of R-invariants in \eqref{inv63}, we do not immediately have a first-principle prescription on how to define the deformed amplitude. However, we may study the properties of the integral \eqref{MS-integral} and analyze the emergence of \eqref{inv63} as all deformation parameters tend to zero. The Gra\ss mannian integral \eqref{MS-integral} now reads
\begin{eqnarray}\label{MS-deformed-16}
A_{6,3}\left(\{v_j^\pm\}\right) =
\int \prod_{i=2}^6 \frac{dc_{1i}}{c_{1i}^{1-\alpha_i}}\,
\delta^{4|4}(\mathcal{Z}_1 + c_{12} \mathcal{Z}_2 +...+ c_{16} \mathcal{Z}_6)\,,
\end{eqnarray}
where we have fixed the $\mathrm{GL}(1)$ invariance by setting $c_{11}=1$, and put for brevity $\alpha_i = v_{i-1}^- - v_{i+1}^+$. Note again $\alpha_1 + \ldots + \alpha_6=0$, which explains why the dependence on $\alpha_1$ has disappeared from \eqref{MS-deformed-16}. In order to render the integral \eqref{MS-deformed-16} well-defined we need to specify a contour of integration. As we know, \eqref{MS-deformed-16} is formally a solution of the Yangian invariance conditions. These take the form of second order differential equations in many variables, which means that there are many linearly independent solutions. These solutions will be specified by choosing different contours. We postpone the discussion of finding appropriate contours and treat the integral formally for the moment. By saturating the four bosonic delta functions in \eqref{MS-deformed-16}, we can express any four of the variables $c_{1i}$ in terms of the remaining fifth one, which still remains to be integrated. We choose this w.l.o.g.\ to be $c_{16}$ and find the following solution
\begin{equation}
c_{1i} = a_i + b_i \, c_{16} \qquad i=2, ...,5\,,
\end{equation}
where $a_i$ and $b_i$ are given by ratios of momentum-twistor four-brackets \eqref{fourbracket}.
In explicit form,
\begin{align}
&a_2 = - \frac{\langle 1345\rangle}{\langle 2345\rangle}\,, \qquad a_3 = - \frac{\langle 1245\rangle}{\langle 3245\rangle}\,, \qquad a_4 = - \frac{\langle 1235\rangle}{\langle 4235\rangle} \,, \qquad a_5 = - \frac{\langle 1234\rangle}{\langle 5234\rangle}\,,\\
&b_2 = - \frac{\langle 6345\rangle}{\langle 2345\rangle} \,,\, \qquad b_3 = - \frac{\langle 6245\rangle}{\langle 3245\rangle}
\,,\, \qquad b_4 = - \frac{\langle 6235\rangle}{\langle 4235\rangle}
\,, \,\qquad b_5 = - \frac{\langle 6234\rangle}{\langle 5234\rangle}\,.
\end{align}
The reader may easily convince herself that, after the change of variables $c_{16} = -\sfrac{a_5}{b_5}\tau$, the remaining one-variable integral becomes
\begin{align}\label{ourlauricella}\nonumber
\mathcal{I}=&\frac{1}{\langle2345\rangle} \left(\frac{\langle 1234\rangle}{\langle 2346\rangle}\right)^{\alpha_6} \prod_{i=2}^5 a_i^{-1+\alpha_i} \int d\tau \, \tau^{-1+\alpha_6} (1-\tau)^{-1+\alpha_5} \prod_{i=2}^4 (1- z_i \tau)^{-1+\alpha_i}\\
&\times\delta^{0|4}\left( \chi_1 +\sum_{i=2}^5 (1-z_i \tau)a_i\chi_i +\frac{\langle 1234\rangle}{\langle 2346\rangle}\, \tau\,\chi_6\right),
\end{align}
with (note $z_5=1$)
\begin{equation}
z_i=\frac{a_5 b_i}{b_5 a_i}\,.
\end{equation}
The fermionic delta function is a polynomial in $\tau$ of degree four, with Gra{\ss}mann-valued coefficients. The integrand has branch points at $\tau =\infty, z_2^{-1}, z_3^{-1}, z_4^{-1}, 1,0$ for $\alpha_1, \ldots, \alpha_6 \notin \mathbb{Z}$. We notice that this integral is of hypergeometric type. It satisfies a supersymmetric version of the hypergeometric differential equation, a statement which is equivalent to the Yangian invariance of NMHV amplitudes, see section \ref{sec:Furtherdirections} below. So far we have not specified the contour, nor spelled out any possible boundaries of integration in \eqref{ourlauricella}. As we pointed out before, this integral is Yangian invariant only if all potential boundary terms vanish when integrating by parts as in section \ref{sec:Furtherdetails}. This is trivially the case if we take a closed contour, and less trivially for open contours between any two branch points such that their associated exponents $\alpha_j$ have positive real parts.
Note that this is not simultaneously possible for all $\alpha_j$, since their sum vanishes.
The five branch points at finite positions and the branch point at infinity divide the real line into six segments. For any two consecutive branch points $\tau_1<\tau_2$ let us define $\mathcal{I}_{(\tau_1,\tau_2)}$ to be the integral \eqref{ourlauricella} integrated between $\tau_1$ and $\tau_2$. With a suitable change of coordinates all the allowed integrals $\mathcal{I}_{(\tau_1,\tau_2)}$ (i.e.\ those for which the exponents at $\tau_1$ and $\tau_2$ have positive real parts) may be brought to the form of the type-D Lauricella hypergeometric function, which is defined as
\begin{equation}
\label{lauricella}
F_D(\alpha,\beta_1, \beta_2, \beta_3, \gamma; z_1, z_2, z_3) = \frac{\Gamma(\gamma)}{\Gamma(\alpha) \Gamma(\gamma-\alpha)} \int_0^1 u^{\alpha-1} (1-u)^{\gamma-\alpha-1} \prod_{j=1}^3 (1-z_j u)^{-\beta_j} du \,,
\end{equation}
where convergence restricts this integral representation to ${\rm Re}\, (\alpha) >0$, ${\rm Re}\, (\gamma -\alpha)>0$.
In order to uncover some of the analytic properties of our deformed integral, let us focus on $\mathcal{I}_{(0,1)}$.
After expanding the fermionic delta functions in \eqref{ourlauricella} and using the definition \eqref{lauricella}, we can replace the integral by the series expansion of the type-D Lauricella hypergeometric function
\begin{equation}
F_D(\alpha,\beta_1, \beta_2, \beta_3, \gamma; z_1, z_2, z_3) =\sum_{m_1=0}^\infty \sum_{m_2=0}^\infty \sum_{m_3=0}^\infty \frac{(\alpha)_{m_1+m_2+m_3}(\beta_1)_{m_1}(\beta_2)_{m_2}(\beta_3)_{m_3}}{(\gamma)_{m_1+m_2+m_3}m_1!m_2!m_3!}z_1^{m_1}z_2^{m_2}z_3^{m_3}\,,
\end{equation}
where $(\alpha)_m$ is the (raising) Pochhammer symbol. We may now evaluate \eqref{ourlauricella} as an expansion in the $\alpha_i$ around zero. The result, up to the first subleading order, is given by
\begin{align}\label{I01}
\mathcal{I}_{(0,1)}&=\frac{1}{\alpha_6}\frac{\delta^{0|4}(\langle 1234\rangle\chi_5+\langle 5123\rangle\chi_4+\langle 4512\rangle\chi_3+\langle 3451\rangle\chi_2+\langle 2345\rangle\chi_1)}{\langle 2345\rangle^{1-\alpha_1}\langle 3451\rangle^{1-\alpha_2}\langle 4512\rangle^{1-\alpha_3}\langle 5123\rangle^{1-\alpha_4}\langle 1234\rangle^{1-\alpha_5}}\\ \nonumber
&+\frac{1}{\alpha_5}\frac{\delta^{0|4}(\langle 1234\rangle\chi_6+\langle 6123\rangle\chi_4+\langle 4612\rangle\chi_3+\langle 3461\rangle\chi_2+\langle 2346\rangle\chi_1)}{\langle 2346\rangle^{1-\alpha_1}\langle 3461\rangle^{1-\alpha_2}\langle 4612\rangle^{1-\alpha_3}\langle 6123\rangle^{1-\alpha_4}\langle 1234\rangle^{1-\alpha_6}}\\ \nonumber
&+([12345]+[12346])\log\langle 1234\rangle-[23456]\log\frac{\langle 2346\rangle}{\langle 2345\rangle}+[13456]\log\frac{\langle 1346\rangle}{\langle 1345\rangle}\\\label{fourthline}
&-[12456]\log\frac{\langle 1246\rangle}{\langle 1245\rangle}+[12356]\log\frac{\langle 1236\rangle}{\langle 1235\rangle}+\mathcal{O}(\alpha_i)\,,
\end{align}
where the term proportional to $\frac{1}{\alpha_6}$ is exact to all orders in $\alpha_i$ for $i=1,\ldots,5$, and similarly for the term proportional to $\frac{1}{\alpha_5}$. We notice that the residues in front of the leading divergent terms are the deformed R-invariants as in \eqref{A53}! Clearly we could recover all possible R-invariants by focusing on other branch points. This is exciting, since we now see where the deformed lower-cell diagrams hide: They are no longer residues on the Gra{\ss}mannian manifold as in the undeformed case, but instead sit in front of poles in the space of deformation parameters. This already points towards the dissolution of the no-go theorem derived in \cite{Beisert:2014qba}. There it was shown that it is impossible to just add the deformed BCFW terms and to thereby obtain a Yangian invariant result without restricting the deformation parameters. The result just derived suggests instead that the deformed BCFW terms should be multiplied by poles, appropriately summed, and then analytically completed by infinitely many further terms, see \eqref{I01}.
While it is clear that we have not lost any relevant information by our deformation, of course the reader would presumably still like to know how to recover the undeformed amplitude from the deformed integral in practice. Let us sketch a possible procedure. From the point of view of the differential equations given by demanding Yangian invariance, the undeformed result follows directly from setting $v_i=0$ in the definition of Yangian generators \eqref{deformed-yangian-symmetry}. Let us try to take the same limit at the level of the solutions to those equations. We need to proceed very carefully here. To demonstrate subtleties of removing the deformation, let us consider the much simpler classic hypergeometric function ${}_2 F_1$ as an example. This function gives a basis of solutions to the second order ordinary differential equation
\begin{equation}
z(1-z)\frac{d^2 w(z)}{dz^2}+(c-(1+a+b)z)\frac{d w(z)}{dz}-a b\, w(z)=0\,.
\end{equation}
For generic values of $a,b,c$ there are two linearly independent solutions to that equation
\begin{equation}
{}_2 F_1(a,b,c,z)\qquad\mbox{and}\qquad z^{1-c} {}_2 F_1(a-c+1,b-c+1,2-c,z).
\end{equation}
However, these two solutions do not span the solution space at the ``resonant'' values of the parameters, where any of the conditions $c,c-a-b,a-b\in \mathbb{Z}$ is satisfied. In that case one has to first take a particular combination of two generic solutions, and in a subsequent step take the limit to a resonant value. We expect a similar behavior in our case -- removing the deformations corresponds to considering the resonant values of parameters. The proper combination of solutions should be given by a deformed version of the BCFW recursion relations, presumably transferred from the Gra{\ss}mannian manifold to the set of spectral planes. This will be analyzed elsewhere.
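The resonance phenomenon is easily illustrated numerically (Python with mpmath, whose \texttt{hyp2f1} we use as a black box; the parameter values are arbitrary): for generic $c$ both expressions solve the hypergeometric equation, while at the resonant value $c=1$ they coincide identically, so a second, logarithmic solution is needed.

```python
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 25
a, b, c = mpf("0.3"), mpf("0.7"), mpf("1.4")
z0 = mpf("0.2")

w1 = lambda z: hyp2f1(a, b, c, z)
w2 = lambda z: z**(1 - c) * hyp2f1(a - c + 1, b - c + 1, 2 - c, z)

def residual(w, z):
    # the hypergeometric differential operator applied to w
    return (z*(1 - z)*diff(w, z, 2) + (c - (1 + a + b)*z)*diff(w, z, 1)
            - a*b*w(z))

# both expressions solve the ODE for generic c ...
assert abs(residual(w1, z0)) < mpf("1e-12")
assert abs(residual(w2, z0)) < mpf("1e-12")

# ... but at the resonant value c = 1 they coincide
w1_res = hyp2f1(a, b, 1, z0)
w2_res = z0**(1 - 1) * hyp2f1(a - 1 + 1, b - 1 + 1, 2 - 1, z0)
assert abs(w1_res - w2_res) < mpf("1e-15")
```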
Before closing this section let us point out that there exists a method to render the integrals $\mathcal{I}_{(\tau_1,\tau_2)}$ manifestly meromorphic in the deformation parameters $\alpha_i$ employing Pochhammer's procedure described in section \ref{meromorphicity}. We just define the analytic continuation of $\mathcal{I}_{(\tau_1,\tau_2)}$ by using Pochhammer cycles around the branch points $\tau_1$ and $\tau_2$. As a first example let us take again $\mathcal{I}_{(0,1)}$. We may then define, confer \eqref{bac},
\begin{align}\label{ourlauricella_ac}\nonumber
\mathcal{\tilde I}_{(0,1)}=&\frac{1}{(1-e^{2\pi i\alpha_6})(1-e^{2\pi i \alpha_5})}\frac{1}{\langle2345\rangle} \left(\frac{\langle 1234\rangle}{\langle 2346\rangle}\right)^{\alpha_6} \prod_{i=2}^5 a_i^{-1+\alpha_i} \int_{\mathcal{P}_{(0,1)}} d\tau \, \tau^{-1+\alpha_6} (1-\tau)^{-1+\alpha_5} \\
&\times\prod_{i=2}^4 (1- z_i \tau)^{-1+\alpha_i}\,\delta^{0|4}\left( \chi_1 +\sum_{i=2}^5 (1-z_i \tau)a_i\chi_i +\frac{\langle 1234\rangle}{\langle 2346\rangle}\, \tau\,\chi_6\right),
\end{align}
where $\mathcal{P}_{(0,1)}$ is the Pochhammer contour snaking around the branch points at $0$ and $1$.
This integral agrees with $\mathcal{I}_{(0,1)}$ as long as $\mathrm{Re} \, \alpha_5 > 0$ and $\mathrm{Re} \, \alpha_6 > 0$. The question we have not fully analyzed yet is how to reassemble these building blocks into the ``correct'' multi-meromorphic function corresponding to the properly deformed amplitude. Here a matching to the ``positive Gra{\ss}mannian'' with ``positive'' external data is presumably sufficiently constraining.
\section{Further Directions}
\label{sec:Furtherdirections}
In the previous section we have encountered the deformation of the $\mathcal{A}_{6,3}$ amplitude in terms of Lauricella hypergeometric functions. It turns out that there is a broader class of hypergeometric functions, introduced by Gelfand \cite{MR841131}, which are very closely connected to our deformations\footnote{The relevance of Gelfand hypergeometric functions, as described in \cite{Aomoto,vilenkin1993representation},
as well as the relation to the qKZ equations were independently noticed by Nils Kanning and Rouven Frassek.}.
In this section we will sketch possible relations between the two. General hypergeometric functions also make their appearance as solutions to the Knizhnik-Zamolodchikov equation. We suggest how the latter may be related to Yangian invariants.
First of all, let us emphasize that Yangian invariants are solutions to a particular set of differential equations of first and second order. In the case of NMHV amplitudes\footnote{We suspect that there exists an even larger class of hypergeometric differential equations satisfied by the general ${\rm N}^{k}{\rm MHV}$ amplitudes. However, we were not able to find these in the mathematical literature.} written in momentum twistor space these equations may be elegantly rewritten as
\begin{equation}\label{hyperseteq}
\begin{cases}
& \sum_{\mathcal{A}} \mathcal{Z}^\mathcal{A}_j \frac{\partial}{\partial \mathcal{Z}^\mathcal{A}_j} F = \alpha_j F\,, \\
& \sum_{j} \mathcal{Z}^\mathcal{A}_j \frac{\partial}{\partial \mathcal{Z}^\mathcal{B}_j} F = -(-1)^{\mathcal{A}} \delta_{\mathcal{A}\mathcal{B}} F\,, \\
& \frac{\partial^2}{\partial\mathcal{Z}^\mathcal{A}_j \partial\mathcal{Z}^\mathcal{B}_i} F = \frac{\partial^2}{\partial\mathcal{Z}^\mathcal{A}_i \partial\mathcal{Z}^\mathcal{B}_j} F\,,
\end{cases}
\end{equation}
where $F$ is the $\mathrm{Gr}(1,n)$ Gra\ss mannian integral \eqref{MS-integral}.
The first set of equations is the statement of homogeneity of $F$ in the $\mathcal{Z}_j^\mathcal{A}$ variables, where the $\alpha_j$ are related to the representation labels $c_j$ and $v_j$. The second group of equations is the statement of $\mathfrak{gl}(4|4)$ (or more generally $\mathfrak{gl}(N|M)$) invariance of $F$. The third set may be interpreted as the action of the level-one Yangian generators when written in momentum twistor space.
A similar set of equations arises for the bosonic algebras $\mathfrak{gl}(N)$ in the definition of the Gelfand hypergeometric functions, see \cite{Aomoto,vilenkin1993representation} for introductions to this subject. These are hypergeometric functions in several variables. They possess representations in terms of complex integrals of complex powers of polynomials, and are naturally associated to Gra\ss mannians. Let us note that N$^{\hat k}$MHV amplitudes for $\hat k>1$ do not satisfy \eqref{hyperseteq}. It would be intriguing to find a more general version of these differential equations allowing for arbitrary $\hat k$.
Another interesting observation is a link between Yangian invariants and the solutions to the quantum version of the Knizhnik-Zamolodchikov (qKZ) equation \cite{frenkel1992}, which appears e.g.~as a constraint on correlation functions of vertex operators in two-dimensional integrable conformal field theories. Let us consider a function $\Phi(z_1,\ldots,z_n)$ with values in a tensor product $V_1\otimes \ldots\otimes V_n$ of highest weight $\mathfrak{gl}(N|M)$-modules. The qKZ equation is a system of difference equations satisfied by $\Phi$ of the form
\begin{equation}\label{qKZ}
\Phi(z_1,\ldots,z_i+p,\ldots,z_n)=K_i(z_1,\ldots,z_n;p)\, \Phi(z_1,\ldots,z_n)
\end{equation}
with the qKZ operators $K_i$ given by
\begin{equation}
K_i(z_1,\ldots,z_n;p)=L_{i\,i-1}(z_i-z_{i-1}+p)\ldots L_{i\,1}(z_i-z_1+p)L^{-1}_{n\,i}(z_n-z_i)\ldots L^{-1}_{i+1\,i}(z_{i+1}-z_i)\,.
\end{equation}
The operators $L_{ij}(z)$ are intertwiners corresponding to pairs $V_i,V_j$ and $p\in\mathbb{C}$. The solutions to the system of equations \eqref{qKZ} can be found using the Algebraic Bethe Ansatz technique \cite{Reshetikhin1992}, since they are related to the eigenvectors of a suitable transfer matrix defined on an inhomogeneous spin chain. In \cite{Frassek:2013xza} it was shown that the Yangian invariance condition can also be rewritten as an eigenproblem for such a spin chain, where the Yangian invariants are the eigenvectors of the monodromy matrix. Since the transfer matrix may be obtained from the monodromy matrix by taking the trace over an auxiliary vector space, this should result in a relation between Yangian invariants and the solutions $\Phi$ in \eqref{qKZ}. It would be very interesting to make this relation explicit.
Interestingly, the classical limit of the qKZ equation, the ordinary Knizhnik-Zamolodchikov (KZ) equation \cite{Knizhnik198483}, appeared already in the context of scattering amplitudes in $\mathcal{N}=4$ SYM in the direct Feynman diagram calculations \cite{Henn:2013tua}. This is not surprising, since it is closely related to polylogarithm functions, which form a functional basis for loop-level results in any quantum field theory. It should be instructive to further investigate this relation.
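For the reader's convenience we recall the form of the KZ equation in a common normalization (precise conventions vary in the literature): a function $\Phi(z_1,\ldots,z_n)$ with values in a tensor product of modules satisfies
\begin{equation}
\kappa\,\frac{\partial}{\partial z_i}\,\Phi(z_1,\ldots,z_n)=\sum_{j\neq i}\frac{\Omega_{ij}}{z_i-z_j}\,\Phi(z_1,\ldots,z_n)\,,
\end{equation}
where $\Omega_{ij}$ acts as the quadratic Casimir on the $i$-th and $j$-th tensor factors and $\kappa$ is a constant related to the level of the underlying current algebra.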
One may suspect that the deformation of the ${\cal N}=4$ Yangian-invariant Gra{\ss}mannian contour integrals will also work in the case of the planar three-dimensional ${\cal N}=6$ super-conformal Chern-Simons model: A Yangian-invariant Gra{\ss}mannian integral formula for this so-called planar ABJM theory was derived in \cite{Lee:2010du}, and much of the on-shell diagram formalism of \cite{ArkaniHamed:2012nw} carries over from the four- to the three-dimensional model. This is indeed the case, as was very recently shown in \cite{Bargheer:2014mxa}. The authors also report an independent derivation of our ${\cal N}=4$ expressions \eqref{ACCK-deformed} and \eqref{MS-deformed},\eqref{MS-integral}.
\section{Outlook}
Clearly, the integrable deformation of the Gra{\ss}mannian approach to scattering amplitudes is of great mathematical interest. It is fairly obvious that the deformed integrals lead to generalized multi-variate hypergeometric functions. This is a rich subject intensively investigated by mathematicians from the mid-18th century all the way to the present. From the physics perspective, we need to establish that the deformed Gra{\ss}mannian integrals are useful for loop calculations. We hope that they will lead to a deepened analytic understanding of all radiative corrections to the tree-level amplitudes, while staying in exactly $1+3$ dimensions. We feel very encouraged by the interesting work of Penrose and Hodges, who considered the very same deformations already since the early days of the twistor approach, with the goal of regulating massless scattering processes in quantum field theory \cite{Penrose:1972ia,Hodges:1985ac}. The only missing elements were supersymmetry and integrability. In this context the next step, apart from in-depth investigations of various special cases, might be to directly deform the amplituhedron of \cite{Arkani-Hamed:2013jha, Arkani-Hamed:2013kca}.
\section*{Acknowledgments}
We thank Nima Arkani-Hamed, Johannes Brödel, James Drummond, Rouven Frassek, Andrew Hodges, Nils Kanning, Yumi Ko, Arthur Lipstein, David Meidinger, Gregor Richter, and Jaroslav Trnka for useful discussions.
This research is supported in part by the SFB 647 \emph{``Raum-Zeit-Materie. Analytische und Geometrische Strukturen''} and the Marie Curie network GATIS (\texttt{\href{http://gatis.desy.eu}{gatis.desy.eu}}) of the European Union’s Seventh Framework Programme FP7/2007-2013/ under REA Grant Agreement No 317089. T.L.\ is supported by ERC STG grant 306260. L.F. \ is supported by the Elitenetwork of Bavaria. M.S.\ thanks the Theory Group at CERN for hospitality during a precious sabbatical.
\bibliographystyle{nb}
\section{Introduction}
Zeolites are an important family of materials with periodic arrays of aluminosilicate cages that are widely used in different industrial processes.
Moreover, when intercalated with alkali metals they also show interesting electronic phenomena\cite{Breck_DW_1973}, associated with electronic states localized within individual cages.
For example, exotic magnetism with different magnetically ordered states has been reported in alkali-doped zeolites \cite{Nozue_Y_1992, Srdanov_VI_1998, Damjanovic_L_2000, Nakano_T_2006, Nakano_T_2012, Nakano_T_2013}.
With more than 200 possible zeolite frameworks known today\cite{Verheyen_E_2012}, alkali-doped zeolites thus represent a unique playground to control and study the effects of geometry and dopant concentration on the electronic potential depth, electron-electron repulsion and electron-phonon coupling at different length-scales.
\par
Since the electronic states are associated with the alkali s-electrons, they remain confined to the cages, just like the alkali metals themselves, which form clusters or superatoms.
It is thus not surprising that metallic zeolites have been elusive for many years, with only one documented exception in rubidium-doped zeolite rho, where microwave conductivity measurements indicated a metallic ground state\cite{Anderson_2004}.
Only very recently, the insulator-to-metal transition has been reported in sodium loaded low-silica X (LSX) zeolite, Na$_n$/Na$_{12}$-LSX\cite{Nakano_T_2010,Nozue_Y_2012}.
So far, experimental evidence for the metallic state was mainly limited to the observation of Drude reflection appearing in the infrared region \cite{Nakano_T_2010} and a drastic decrease in the resistivity \cite{Nozue_Y_2012} for the heavily loaded samples.
We stress that the measured resistivity is still very high and atypical of simple metals as it does not decrease with decreasing temperature.
Additional hint of metallic ground state was provided by a precise x-ray diffraction analysis\cite{Ikeda_T_2014}, where it was shown that Na atoms make bonding network through the tunnel windows that connect zeolite cages and thus establish a precondition for a narrow conduction band.
However, firm direct experimental evidence for the metallic state in Na$_n$/Na$_{12}$-LSX is still lacking.
\par
Nuclear magnetic resonance (NMR) is a powerful local-probe experimental tool to investigate a state of matter even in powder and highly air-sensitive samples.
By measuring the temperature dependence of the NMR shift and spin-lattice relaxation time it is in principle possible to distinguish between insulating, metallic and superconducting states \cite{Pennington_1996, Walstedt_2008, Grafe_2008, Potocnik_2014}.
Unfortunately, in alkali-doped zeolites, the spin-lattice relaxation rate, $1/T_1$, is dominated by strong fluctuations of local magnetic fields and electric field gradients originating from large amplitude atomic motion of alkali metals \cite{Heinmaa_M_2000,Igarashi_M_2013} thus masking the conventional Korringa-like behavior expected in the metallic state.
We show here that at room temperature the values of $^{23}$Na $1/T_1$ due to the Na motion indeed typically exceed the contributions from the coupling of nuclear magnetic moments to itinerant electrons in the metallic state by four orders of magnitude.
Cooling the sample to cryogenic temperatures freezes out the atomic motions on the NMR time scale and for Na$_n$/Na$_{12}$-LSX finally discloses Korringa behavior below 25~K, thus confirming the metallic ground state for $n \geq 14.2$.
Surprisingly, a small portion of density of states (DOS) at the Fermi level persists deep into the insulating state.
This important finding, inaccessible to bulk-property measurements, holds important clues about the metal-to-insulator crossover in Na$_n$/Na$_{12}$-LSX, which is here discussed within the correlation-driven and disorder-driven aspects of the metal-to-insulator transition (MIT)\cite{Dobrosavljevic_2011,Siegrist_2011}.
\section{Experimental}
The LSX zeolites have a chemical formula A$_{12}$Al$_{12}$Si$_{12}$O$_{48}$, where A stands for alkali-metal cations that are required for charge compensation of the aluminosilicate framework\cite{Breck_DW_1973}.
The main structural motif is comprised of truncated octahedral $\beta$ cages, which are arranged in a diamond structure by doubly connecting their 6-membered rings.
This way additional supercages are formed with a diameter approximately twice of that of $\beta$ cages.
Following the Lowenstein's rule\cite{Loewenstein_W_1954} the Si and Al atoms alternatingly occupy the framework sites resulting in structurally ordered LSX zeolite framework.
\par
When sodium-based parent structures, i.e. A$=$Na, hereafter abbreviated as Na$_{12}$-LSX, are exposed to Na vapour following the standard procedure described elsewhere\cite{Nozue_Y_2012}, a controlled amount of Na is additionally loaded yielding a targeted composition Na$_n$/Na$_{12}$-LSX.
Here $n$ denotes the loading density of guest Na atoms per supercage.
The values of $n$ were for the purpose of this study recalibrated by inductively coupled plasma technique.
Particularly for higher density levels, the calibrated values substantially exceed those calculated from known amount of starting materials used in our preliminary NMR study \cite{Igarashi_M_2013}.
\par
Here we present detailed $^{23}$Na NMR experiments on Na$_{n}$/Na$_{12}$-LSX samples in the Na-loading range $11.3 \leq n \leq 16.5$. The $^{23}$Na ($I = 3/2$) NMR spectra and $T_1$ were measured in a magnetic field of 4.7~T in the temperature range between 6~K and 340~K.
The $^{23}$Na reference frequency of 52.9055~MHz was determined from a NaCl aqueous solution standard.
Optical reflectance spectra were measured by conventional apparatus.
Since we were dealing with powder samples with a grain diameter of few $\mu$m, the resistivity, $\rho$, was measured by pinching the powder between two metallic plates acting also as terminals\cite{Nozue_Y_2012}.
\begin{figure} [t]
\includegraphics[width=1.0\linewidth]{Fig1}
\caption{(Color online) (a) Room temperature optical conductivity of Na$_{n}$/Na$_{12}$-LSX zeolites as a function of sodium loading level $n$. (b) Temperature dependence of resistivity for insulating ($n=11.6$) and metallic ($n=16.5$) samples.}
\label{fig1}
\end{figure}
\section{Experimental Results}
The optical reflectance spectra of the samples included in this study clearly demonstrate the emergence of Drude peak at lower photon energies for $n \geq 14.2$ as shown in Fig.~\ref{fig1}(a).
Moreover, the temperature dependence of resistivity reveals finite low-temperature values for $n = 16.5$, but diverges for $n = 11.6$ as displayed in Fig.~\ref{fig1}(b).
The measured resistivity may be affected by the constriction resistance at the connection between powder particles, which can result in a misleading negative temperature coefficient of the resistivity for $n = 16.5$.
Nonetheless, finite value of $\rho$ at 2 K for $n = 16.5$ demonstrates a finite DOS at the Fermi level, $N(E_F)$, consistent with the metallic state. These two standard characterization techniques thus comply with the MIT as a function of Na-loading at a critical loading concentration between $n=11.6$ and $n=14.2$, in full agreement with the literature data\cite{Nakano_T_2010,Nozue_Y_2012}.
\par
At 270~K, the $^{23}$Na NMR spectrum of insulating Na$_{11.6}$/Na$_{12}$-LSX powder comprises several overlapping peaks close to the Larmor frequency [Fig.~\ref{fig2}(a)].
The structure of $^{23}$Na NMR lineshape reflects the multitude of Na sites in the $\beta$ cages and supercages of the LSX structure\cite{Ikeda_T_2014,Feuerstein_M_1998}.
The bulk magnetic susceptibility of this sample shows a diamagnetic response, although the presence of diluted localized magnetic moments is revealed by a characteristic low-temperature Curie tail \cite{Nozue_Y_2012}.
Therefore, the predominately diamagnetic susceptibility of $n=11.6$ sample suggests that the lineshape and the shift of the $^{23}$Na NMR spectrum are almost entirely determined by the nuclear chemical shift and quadrupole interactions.
The insulating Na$_{11.6}$/Na$_{12}$-LSX sample can be set thus as a suitable NMR reference against which all changes of NMR parameters when crossing the MIT are compared.
\par
Indeed, for samples with $n \geq 14.2$, a Lorentzian line [hereafter named as a shifted component (SC)] appears in the metallic samples on the high-frequency side of the $^{23}$Na NMR spectrum, well separated from the diamagnetic frequency range [Fig.~\ref{fig2}(a)].
We stress that this line is completely absent in all insulating samples.
The SC is optimally detected with an echo pulse sequence with precisely two times longer pulse length than that optimized for the residual diamagnetic $^{23}$Na spectral component centered around zero shift -- hereafter we call it a residual component (RC) as it is reminiscent to that described above for the insulating $n=11.6$ sample.
We conclude that the electric field gradient (EFG) for Na atoms contributing to this SC is averaged out on the time scale of $^{23}$Na NMR measurements, $\sim 10^{-5}$~s.
Motional effects also explain the Lorentzian lineshape of the SC.
An EFG averaged to zero and a Lorentzian lineshape are clear signatures that at elevated temperatures the Na atoms undergo large-amplitude displacements.
\par
\begin{figure} [b]
\includegraphics[width=1.0\linewidth]{Fig2}
\caption{(Color online) (a) $^{23}$Na NMR spectra at 270~K as a function of loading level $n$. (b) Temperature dependence of $^{23}$Na spectra for metallic $n=16.5$ and insulating $n=11.6$ samples. All spectra were measured with short (solid black line) and long pulses (thick color line). See text for more details.}
\label{fig2}
\end{figure}
The appearance of SC is limited to samples that show metallic-like response in optical and resistivity measurements (Fig.~\ref{fig1}) and is completely absent in insulating samples, e.g. as for $n=11.6$ shown in Fig.~\ref{fig2}(b).
Temperature dependence of $^{23}$Na NMR spectra shown for $n=16.5$ in Fig.~\ref{fig2}(b), reveals that with increasing temperature the intensity of SC increases significantly above $\approx 150$~K.
This means that such states must be thermally excited from the ground state.
We can rationalize the appearance and the strong temperature dependent shift of SC within a polaron model, where thermally activated behavior is associated with the creation/annihilation of localized small polarons from the bath of (conducting) large polarons\cite{Igarashi_M_2013}.
\par
Unfortunately, the observation of SC for $n\geq 14.2$ is only an indirect proof of metallic state.
Additional complexity in the analysis of $^{23}$Na spectra arises for $n \leq 14.4$, where we observe another non-shifted component (see Fig.~\ref{fig2}), whose optimal pulse lengths are the same as those of the SC detected at $n \geq 14.2$.
Using the same arguments as for SC, we conclude that Na atoms contributing to this line must perform large amplitude jumps between different sites in the cage.
However, unlike the SC, this line shows no hyperfine interaction with unpaired electron spins; we therefore call it the zero component (ZC), implying that it has a completely different origin.
It is interesting to note that the $^{23}$Na NMR spectrum for $n = 14.4$ [Fig.~\ref{fig2}(a)] shows a coexistence of ZC and SC, which may indicate an inhomogeneous distribution of sodium atoms throughout the zeolite cages.
\par
\begin{figure} [t!]
\includegraphics[width=1.0\linewidth]{Fig3}
\caption{(Color online) Representative temperature dependences of $^{23}$Na $1/T_1T$ for the SC/ZC component with shorter relaxation time.
The solid lines are fits obtained from the BPP model described by Eq.~(\ref{BPP}).}
\label{fig3}
\end{figure}
Since the metallic state cannot be unambiguously confirmed from the analysis of $^{23}$Na NMR spectra, we next move to spin-lattice relaxation data, where the Korringa behavior ($1/T_1T = {\rm const.}$) is a signature of metallic state\cite{Walstedt_2008,Slichter_1989}.
For all compositions, the $T_1$ of the RC is two orders of magnitude longer than that of the ZC and SC.
This distinction is manifested over a wide temperature range as two separate nuclear magnetization recoveries with long and short time constants.
The temperature dependence of the short ZC/SC $T_1$ component (Fig.~\ref{fig3}) has a pronounced maximum in $1/T_1 T$, which can be empirically modeled between 100 and 350~K within the Bloembergen-Purcell-Pound (BPP)-type mechanism\cite{BPP_1948}
\begin{equation}
\label{BPP}
\left( \frac{1}{T_{1}T} \right)_{\rm BPP}=\frac{C}{T} \frac{\tau_c}{1+\omega^2\tau_c^2}.
\end{equation}
Here $\tau_c$ is the correlation time for the local field fluctuations at the nucleus, $\omega$ is the Larmor angular frequency and $C$ is a measure of the fluctuating local fields magnitude.
We estimated the activation energy for the local field fluctuations by assuming an Arrhenius-type temperature dependence of the correlation time and obtained a value of around $0.1$~eV for all loading densities investigated.
The estimated activation energy is typical for atomic motion and provides yet another independent proof that sodium motion is present in both insulating and metallic state.
However, such strong relaxation due to the atomic motion completely masks the metallic Korringa contribution to the spin-lattice relaxation.
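To make the competition between the two relaxation channels concrete, the BPP contribution of Eq.~(\ref{BPP}) with an Arrhenius correlation time can be evaluated numerically. The prefactor $C$ and attempt time $\tau_0$ below are assumed, order-of-magnitude values chosen for illustration only; they are not fitted to our data:

```python
import numpy as np

# Illustrative parameters -- assumed values, not fitted to the data in this work
E_a = 0.1 * 1.602e-19       # activation energy ~0.1 eV (value quoted in the text), in J
k_B = 1.381e-23             # Boltzmann constant, J/K
tau_0 = 1e-13               # assumed attempt time, s
C = 1e9                     # assumed fluctuating-field strength, s^-2
omega = 2 * np.pi * 52.9e6  # 23Na Larmor angular frequency at 4.7 T, rad/s

def bpp_rate(T):
    """BPP contribution to 1/(T1*T) with an Arrhenius correlation time."""
    tau_c = tau_0 * np.exp(E_a / (k_B * T))
    return (C / T) * tau_c / (1.0 + (omega * tau_c) ** 2)

T = np.linspace(50, 350, 601)
T_max = T[np.argmax(bpp_rate(T))]   # maximum where omega * tau_c ~ 1

print(f"1/T1T peaks near T = {T_max:.0f} K")
print(f"BPP rate at 25 K: {bpp_rate(25.0):.1e} 1/(s K)")  # exponentially suppressed
```

With these (assumed) numbers the BPP maximum falls near 110~K and the BPP contribution is already negligible at 25~K, illustrating why the low-temperature plateau must be dominated by a different mechanism.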
\par
\begin{figure} [b!]
\includegraphics[width=1.0\linewidth]{Fig4}
\caption{(Color online) Low-temperature dependences of $^{23}$Na $1/T_1T$ showing plateau-like behavior below 25~K for various loading levels $n$.
The solid lines are fits obtained by combining the BPP and Korringa relaxation mechanisms [Eqs.~(\ref{total})].
For comparison we added the low-temperature data for the $n=9.4$ sample, which is according to the optical reflectance and resistivity measurements considered to lie deep in the insulating phase.}
\label{fig4}
\end{figure}
Since $\tau_{\rm c}$ increases exponentially with decreasing temperature, we anticipate that according to Eq. (1) the BPP contribution to the total $^{23}$Na relaxation rate diminishes at low temperatures, i.e., the atomic-motion driven $(1/T_1T)_{\rm BPP}\rightarrow 0$ as temperature decreases.
In addition, the relaxation rates for RC and ZC/SC become comparable below $\sim 100$~K thus implying that other relaxation mechanisms, which are only moderately site dependent, set in.
Therefore, the low temperature relaxation curves were fitted with stretched exponential model
$\sim {\rm exp} [-(\tau /T_1)^\alpha]$ with a single effective $T_1$, where a factor $\alpha \approx 0.6$ accounts for distribution of $T_1$'s due to multiple sites.
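The fitting procedure can be sketched as follows; the recovery curve below is synthetic (assumed $T_1$, $\alpha$ and noise level) and serves only to illustrate the stretched-exponential analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(tau, M0, T1, alpha):
    """Stretched-exponential recovery, M0 * exp[-(tau/T1)**alpha]."""
    return M0 * np.exp(-(tau / T1) ** alpha)

# Synthetic data: assumed effective T1 = 0.1 s, alpha = 0.6, 2% Gaussian noise
rng = np.random.default_rng(0)
tau = np.logspace(-3, 1, 40)
data = stretched_exp(tau, 1.0, 0.1, 0.6) + 0.02 * rng.standard_normal(tau.size)

popt, _ = curve_fit(stretched_exp, tau, data, p0=[1.0, 0.05, 1.0],
                    bounds=([0.0, 1e-4, 0.1], [2.0, 10.0, 1.5]))
M0_fit, T1_fit, alpha_fit = popt
print(f"T1 = {T1_fit:.3f} s, alpha = {alpha_fit:.2f}")
```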
The most important observation arising from this analysis is that plotting $1/T_1T$ versus temperature for metallic samples with $n \geq 14.2$, we finally find temperature independent $1/T_1T$ below $\sim 25$~K (Fig.~\ref{fig4}).
This is a hallmark of the nuclear spin-lattice relaxation via the itinerant electrons and is accounted for by the Korringa expression \cite{Walstedt_2008,Slichter_1989}
\begin{equation}
\label{Korringa}
\left( \frac{1}{T_{1}T} \right)_{\rm metal} = \frac{4\pi k_B}{\hbar}\frac{\gamma_{\rm Na}^2}{\gamma_e^2}K_{\rm iso}^2 \propto [N(E_F)]^2.
\end{equation}
Here $\gamma_e$ and $\gamma_{\rm Na}$ are the electronic and $^{23}$Na gyromagnetic ratios, respectively.
The temperature independent isotropic Knight shift, $K_{\rm iso}$, which is proportional to Pauli spin susceptibility, is a measure of DOS at the Fermi level, $N(E_F)$.
Comparing the low-temperature $(1/T_1T)$ values for different $n$, we find that in metallic samples $1/T_1T$ is enhanced relative to insulating samples by an order of magnitude, consistent with the additional relaxation channel provided by the Korringa mechanism.
Moreover, a monotonic increase of low-temperature $(1/T_1T)$ with $n$ for $n \geq 14.2$ speaks for a monotonic increase of $N(E_F)$ with Na loading level.
\par
In order to quantitatively extract $(1/T_1T)_{\rm metal}$ from the measured temperature dependence of $1/T_1T$, we assumed that the relaxation rate has two contributions, the BPP and Korringa, described by Eqs.~(\ref{BPP}) and (\ref{Korringa}), respectively. That is, the data was fitted to
\begin{equation}
\label{total}
\left( \frac{1}{T_{1}T} \right) = \left( \frac{1}{T_{1}T} \right)_{\rm BPP}+\left( \frac{1}{T_{1}T} \right)_{\rm metal}.
\end{equation}
Table~\ref{DOS} summarizes the values of $\left( 1/T_{1}T \right)_{\rm metal}$ for all $n$ in the range between 9.4 and 16.5 and is compared to the related value measured in bulk metallic Na.
We note that the $(1/T_1T)_{\rm metal}$ values are by more than one order of magnitude smaller than that of bulk metallic Na thus indicating a relatively low $N(E_F)$ in these metallic samples.
This conclusion is further supported, if we use $(1/T_1T)_{\rm metal} = 2.3\times 10^{-2}$~s$^{-1}$K$^{-1}$ of $n = 16.5$ sample to calculate $K_{\rm iso} = 300$~ppm from Eq.~(\ref{Korringa}) and compare it to much larger value of 1120~ppm found in metallic sodium \cite{Walstedt_2008}.
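As a cross-check of this estimate, Eq.~(\ref{Korringa}) can be inverted numerically for $K_{\rm iso}$. The gyromagnetic ratios below are standard literature values; only the $(1/T_1T)_{\rm metal}$ value quoted above enters:

```python
import math

hbar = 1.0546e-34       # reduced Planck constant, J s
k_B = 1.3807e-23        # Boltzmann constant, J/K
gamma_e = 1.76086e11    # electron gyromagnetic ratio, rad s^-1 T^-1
gamma_Na = 7.08085e7    # 23Na gyromagnetic ratio, rad s^-1 T^-1

def knight_shift(inv_T1T):
    """Invert the Korringa relation:
    K_iso = sqrt[(1/T1T) * hbar * gamma_e**2 / (4 pi k_B gamma_Na**2)]."""
    return math.sqrt(inv_T1T * hbar * gamma_e**2 / (4 * math.pi * k_B * gamma_Na**2))

# (1/T1T)_metal = 2.3e-2 s^-1 K^-1 for the n = 16.5 sample
K_iso = knight_shift(2.3e-2)
print(f"K_iso = {K_iso * 1e6:.0f} ppm")   # ~300 ppm, as quoted in the text
```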
Surprisingly, the plateau-like low-temperature behavior in $1/T_1T$ is also seen for $n=11.6$ and $11.3$, where the Drude term is not observed and resistivity diverges at low temperature.
We stress that the observed low-temperature $1/T_1T$ clearly rules out the possibility that for this loading range the system can be discussed as a narrow-gap semiconductor. In this case, $1/T_1T$ would be dominated by the $\exp(-\Delta/k_BT)$ term for $k_BT \ll \Delta$, which decays exponentially with decreasing temperature\cite{Grykalowska_2007}, in disagreement with the experimental data.
\begin{table}
\caption{Extracted values of $^{23}$Na $\left( 1/T_{1}T \right)_{\rm metal}$ as a function of Na loading level $n$. The values of $N(E_F)$ normalized to metallic sodium $^{\rm Na}N(E_F)$ listed in column three are calculated using Eq.~(\ref{Korringa}) and taking $\left( 1/T_{1}T \right)=0.210 \pm 0.004$~s$^{-1}$K$^{-1}$ of metallic sodium\cite{Walstedt_2008}.}
\begin{tabular}{ccc}
\hline \hline
Loading & $\left( 1/T_{1}T \right)_{\rm metal}$ & $N(E_F)/^{\rm Na}N(E_F)$
\\
level $n$ & (s$^{-1}$K$^{-1}$) &
\\
\hline
$16.5$ & $0.0245 \pm 0.0017$ & $0.343 \pm 0.031$
\\
$15.9$ & $0.0190 \pm 0.0006$ & $0.302 \pm 0.016$
\\
$14.5$ & $0.0146 \pm 0.0004$ & $0.264 \pm 0.012$
\\
$14.4$ & $0.0086 \pm 0.0008$ & $0.204 \pm 0.023$
\\
$14.2$ & $0.0047 \pm 0.0004$ & $0.150 \pm 0.017$
\\
$11.6$ & $0.0019 \pm 0.0001$ & $0.094 \pm 0.007$
\\
$11.3$ & $0.0013 \pm 0.0001$ & $0.080 \pm 0.008$
\\
$9.4$ & $0.0005 \pm 0.0001$ & $0.049 \pm 0.005$
\\
\hline \hline
\end{tabular}
\label{DOS}
\end{table}
\section{Discussion}
Plotting the normalized $N(E_F)$ extracted from Eq.~(\ref{Korringa}) versus the sodium loading level $n$ (Fig.~\ref{fig5}), we find that $N(E_F)$ markedly increases with $n$ for $n \geq 14.2$, in qualitative agreement with the enhancement of the optical reflectance [Fig.~\ref{fig1}(a)].
We note that for the most loaded sample ($n = 16.5$), the extracted $N(E_F)$ is by a factor of $\sim 3$ smaller than the corresponding value in bulk Na.
Our study of a minimal single-orbital Hubbard model\cite{Zitko_2015} revealed that the experimentally observed strong variation of the Drude peak in the optical conductivity cannot be explained solely by band-filling effects. In fact, it shows that the variation of the ratio of the electron-electron repulsion $U$ to the bandwidth $W$ is much more relevant, meaning that the main effect of sodium loading is to define the electronic potential and the Coulomb repulsion felt by the electrons in the zeolite cages.
Similarly, the importance of electron correlations has been theoretically\cite{Arita_2004} and experimentally\cite{Nakano_1999,Ikemoto_2000} recognized for related potassium-loaded LTA and FAU zeolites.
\par
\begin{figure} [t!]
\includegraphics[width=1.0\linewidth]{Fig5}
\caption{(Color online) The phase diagram of Na$_n$/Na$_{12}$-LSX zeolite showing the values of (a) $\left( 1/T_{1}T \right)_{\rm metal}$, (b) normalized $N(E_F)$ and (c) NMR spectrum intensity for ZC and SC as a function of Na loading level $n$.
The color gradient divides the insulating and metallic regions as experimentally observed from the resistivity and optical conductivity measurements.}
\label{fig5}
\end{figure}
However, the correlation-driven MIT is expected to be of first order\cite{Dobrosavljevic_2011}, which is not directly supported by the present data.
We recall that a finite $N(E_{\rm F})$ has been observed even in the nominally insulating samples of Na$_n$/Na$_{12}$-LSX.
For example, the $n=11.3$ sample exhibits a resistivity diverging at low temperatures and is described by an energy band gap of 0.2~eV.
At the same time, the local probe $^{23}$Na NMR for this sample shows the finite $N(E_{\rm F})$, more precisely $\sim 8$\% of the value found in metallic sodium.
The phase diagram shown in Fig.~\ref{fig5} is more reminiscent of a metal-to-insulator crossover rather than the sharp transition thus calling for considerations of other relevant microscopic factors.
\par
In the alternative picture, where disorder is the driving mechanism for the MIT, a continuous metal-to-insulator transition is typically found at finite temperatures\cite{Rosenbaum_1980}.
The reason is that at finite temperatures the electrons can escape the trapping potential through the thermal activation.
Indeed, the alkali-doped zeolites can be viewed as a strongly disordered system for several reasons.
First, by a careful analysis of the spectral intensities belonging to SC and ZC as a function of sodium loading level [Fig.~\ref{fig5}(c)], we identify a region around $n=14.4$ where both spectral components coexist, implying an inhomogeneous Na distribution in the cages that is responsible for a cage-to-cage variation in the electric potential depths.
Second, as pointed out in the x-ray study by Ikeda {\it et al.}\cite{Ikeda_T_2014}, in Na$_n$/Na$_{12}$-LSX the coordinates of available sodium sites in the zeolite framework remain the same as a function of sodium loading level.
What is changing with $n$ is the average occupancy of these sites. Local variations in the Na arrangements in the neighboring cages are responsible for the variations in the local potential depth and thus for the disorder.
For low carrier densities, the trapping potential of disordered sodium clusters within the cages is expected to become comparable or larger than the Fermi level, and the electrons get localized.
At higher loading densities not only the carrier density increases, but also the disorder strength decreases since the loaded sodium atoms reach the limit of the highest possible occupancy in the cages\cite{Ikeda_T_2014}, explaining the MIT.
\par
However, at the critical loading densities yet another possibility of a percolation-type metal-to-insulator transition\cite{Dobrosavljevic_2011} opens.
Although the disorder-driven scenario in the presence of varying correlation effects seems plausible, we should be aware that the variation of the alkali-atom loading level not only changes the amount of disorder but also strongly determines the electric potential felt by the electrons.
Using electron-density distribution analysis Ikeda {\it et al.}\cite{Ikeda_T_2014} observed a formation of the chain-like Na cation distribution in metallic Na$_{16.7}$/Na$_{12}$-LSX, which connects the neighboring supercages.
This Na-Na connectivity is not formed in nominally insulating Na$_{9.4}$/Na$_{12}$-LSX.
In the percolation picture of a random (disordered) potential, small metallic regions of connected supercages, separated by insulating areas, are formed at low sodium loading densities.
When the electron density increases, the metallic regions grow and eventually become connected at the percolation threshold\cite{Dobrosavljevic_2011}.
Although our observation of the crossover regime and the inhomogeneous distribution of sodium atoms throughout the zeolite lattice tentatively support the percolation picture, we leave the important question of the true nature of MIT in alkali doped zeolites open for further studies.
However, what is made clear from present results is that for the Na$_n$/Na$_{12}$-LSX zeolites the concentration of sodium atoms strongly affects in a very complex way the electronic properties by simultaneously changing the band filling, the electronic potential depth, the electron-electron repulsion and the amount of disorder.
\par
\section{Summary}
Using NMR as a local probe of sodium loaded low-silica X zeolite (Na$_n$/Na$_{12}$-LSX), we have unambiguously confirmed a metallic ground state for higher loading densities of $n \geq 14.2$.
By extracting the DOS at the Fermi level as a function of sodium loading level, we have shown a rather continuous (crossover like) evolution across the metal-to-insulator transition.
Most importantly, a finite DOS at the Fermi level for nominally insulating samples and a clear indication of inhomogeneous Na distribution in the neighboring cages in the crossover region, put some constraints on the driving mechanism of electron localization and the nature of MIT in alkali-doped zeolites.
\section{Acknowledgment}
We thank Dr. T. Ikeda for helpful discussions about the structure of Na$_n$/Na$_{12}$-LSX zeolite.
MI especially thanks Dr. T. Shimizu, Dr. A. Goto, Dr. K. Hashi, and Mr. S. Ohki.
This study was partially supported by Grants-in-Aid for Scientific Research (KAKENHI) [Grants No. 24244059(A) and No.26400334(C)] from Japan Society for the Promotion of Science.
\section{Introduction}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{bodewadtha.eps}
\caption{B\"{o}dewadt-Hartmann Configuration.}
\label{bodew_ha}
\end{figure}
We are concerned here with the interaction of a vortex with a plane surface
in the presence of an imposed magnetic field $\mathbf B$. The axis of the vortex and the
magnetic field are both normal to the surface and, for simplicity, we take
the flow to be axisymmetric (see Figure \ref{bodew_ha}). We are interested in
characterizing the decay of the vortex, due either to surface friction or to
magnetic damping. Such geometries are important in geophysics (motion in
planetary interiors are dominated by Coriolis and Lorentz forces), in
engineering (for example, in the magnetic damping of turbulence in
castings), and in laboratory studies of MHD turbulence.
Our geometry combines two classical problems in fluid mechanics: the
B\"{o}dewadt layer and the Hartmann layer. These are illustrated in Figure
\ref{classical_bl}. In the B\"{o}dewadt problem there is no magnetic field and the fluid is
in a state of rigid-body rotation above a plane surface. A boundary layer
develops, of approximate thickness,
\begin{displaymath}
\delta _\Omega = \left( {\nu / \Omega } \right)^{1/2}
\end{displaymath}
\noindent
where $\nu$ is the fluid viscosity and $\Omega$ is the core rotation rate. Within this
boundary layer there is an imbalance between the local centrifugal force,
$\rho u_\theta ^2 / r$, and the radial pressure gradient, $\partial p / \partial r$, which is established outside the
boundary layer by the rigid-body rotation
\footnote{We
use cylindrical polar coordinates ($r$, $\theta$, $z$) throughout.}.
That is, the core
rotation sets up a radial pressure gradient of $\partial p / \partial r = \rho \Omega ^2r$ and this is imposed on the boundary layer
where $u_{\theta }$ is locally diminished due to viscous drag. The result is
a radial inflow within the boundary layer. By continuity there is an upward
flux of mass out of the B\"{o}dewadt layer and into the core, and in the
configuration shown in Figure \ref{classical_bl}(a) this leads to a weak secondary flow in
the core, \textbf{u}$_{p}$. This secondary (poloidal) flow is crucial to the
development of the vortex, since it sets up a Coriolis force, $ - 2u_r
\Omega {\rm {\bf \hat {e}}}_\theta $, which opposes the core motion and
tends to decelerate the vortex. This kind of motion is seen, for example, in
the spin-down of a stirred cup of tea.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{classical_bl.eps}
\caption{Classical boundary layers. (a) B\"{o}dewadt layer; (b) Hartmann layer. In
the Hartmann layer, \textbf{B} is dominant in the sense that $\sigma B^2 \gg \rho \Omega$.}
\label{classical_bl}
\end{figure}
The Hartmann problem is shown in Figure \ref{classical_bl}(b). Here the inertial forces are
weak and the Lorentz force is strong in the sense that the Elsasser number
\begin{equation}
\label{eq1}
A = \frac{\sigma B^2}{2\rho \Omega }
\end{equation}
\noindent
is assumed large. ($\sigma$ is the electrical conductivity of the fluid.) A boundary
layer is again established, although this time it turns out to have a
thickness of the order of
\begin{equation}
\label{eq2}
\delta _B = \left( {\nu\tau } \right)^{1/2}\;\;\;\;,\;\;\;\;\;\tau ^{ - 1} =
\sigma B^2 / \rho
\end{equation}
\noindent
where $\tau$ is the so-called Joule damping time. Within this boundary layer an
electric current flows in accordance with Ohm's law
\begin{equation}
\label{eq3}
{\rm {\bf J}} = \sigma \left( {{\rm {\bf u}}\times {\rm {\bf B}} - \nabla V}
\right)
\end{equation}
\noindent
where $V$ is the electrostatic
potential. (The induced magnetic field, defined via $\nabla \times {\rm {\bf
b}} = \mu {\rm {\bf J}}$, is neglected throughout on the assumption that
$\mu \sigma u\delta$ is small. This is valid in laboratory and engineering applications, but is
not always true in the core of the earth.) The electric field, ${\rm {\bf
E}} = - \nabla V$, in the Hartmann layer is set by the electric field in the
core flow, ${\rm {\bf E}} = - \Omega rB{\rm {\bf \hat {e}}}_r $, and this
dominates the weaker ${\rm {\bf u}}\times {\rm {\bf B}}$ term in the
boundary layer. The net result is radially inward flow of current.
Continuity of current then requires that there is an upward flow of current
out of the boundary layer and into the core. In the configuration shown in
Figure \ref{classical_bl}(b) this leads to a weak poloidal current, $\mathbf J_{p}$, in the
core. (Note the similarity between \textbf{u}$_{p}$ and \textbf{J}$_{p}$ in
Figures \ref{classical_bl}(a) and \ref{classical_bl}(b).) This core current is crucial since it results in a
Lorentz force ${\rm {\bf J}}\times {\rm {\bf B}} = - J_r B{\rm {\bf \hat
{e}}}_\theta $ which retards the core vortex.
The crucial feature of both the B\"{o}dewadt and Hartmann flows is that
they constitute active boundary layers, in the sense that they react back on
the core flow which created them in the first place. The problem of interest
here is shown in Figure \ref{fig3}. The Elsasser number is allowed to be large or
small, so that we may capture both the B\"{o}dewadt problem, when $A\to
0$, and the Hartmann problem, when $A \to \infty$. When $A\sim 1$ we expect both
phenomena to be present. The questions which are important are: (i) how does
the boundary layer thickness scale with $A$; and (ii) for a given value of $A$, is
the deceleration of the core vortex due primarily to the Coriolis force, $ -
2u_r \Omega {\rm {\bf \hat {e}}}_\theta $, or to the Lorentz force, $ - J_r
B{\rm {\bf \hat {e}}}_\theta $?
There is close correspondence between our problem and the well-known
Ekman-Hartmann layer. This latter is shown in Figure \ref{ekman_ha}. An Ekman layer is
formed when there is an infinitesimal difference in rotation between a
rapidly rotating fluid and an adjacent, plane surface. When the surface
rotates slightly slower than the fluid we find a secondary flow very like
that in the B\"{o}dewadt problem. Indeed, the mechanism which generates the
poloidal flow is essentially the same as in a B\"{o}dewadt layer. Thus,
phenomenologically, Ekman layers and B\"{o}dewadt layers are very similar.
When a magnetic field is added to an Ekman layer we get the Ekman-Hartmann
problem, which shares many of the same characteristics as a
B\"{o}dewadt-Hartmann layer. However, one of the main differences is that,
when viewed in a rotating frame of reference, inertia is negligible in the
Ekman-Hartmann problem. (In fact, we interpret the phrase `rapid rotation'
to mean that ${\rm {\bf u}} \cdot \nabla {\rm {\bf u}}$ is negligible by
comparison with the Coriolis force, $2{\rm {\bf u}}\times \Omega $.) Thus
the Ekman and Ekman-Hartmann problems are linear. The B\"{o}dewadt-Hartmann
problem, on the other hand, is not.
The layout of the paper is as follows. In section 2 we review the properties
of an Ekman-Hartmann layer in a semi-infinite fluid. The local properties of
such layers are well-known. (See, for example, \cite{acheson73}.)
However, we are interested here in axisymmetric flows of the Karman type
($u_{r}$ and $u_{\theta }$ linear in $r$) and so we redevelop the conventional
analysis in cylindrical polar coordinates and restrict solutions to those
with Karman similarity. This allows us to place the subsequent non-linear
problem in context. Next, in section 3, we focus on B\"{o}dewadt-Hartmann
layers. Once again, the discussion is restricted to a semi-infinite domain.
Here we develop the ideas of \cite{stephenson69} and \cite{loffredo86}, who
noted that such layers admit self-similar solutions of the Karman type.
However, we go further than these authors, developing approximate solutions
for these Karman-like flows, the validity of which is confirmed by fully
non-linear numerical simulations.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{Bodewadt_Ha.eps}
\caption{B\"{o}dewadt-Hartmann geometry. The secondary flow generates a Coriolis
force, $2u_r \Omega$, which tends to oppose the vortex outside the boundary layer.
The induced current interacts with \textbf{B} to create a Lorentz force,
$J_{r}B$, which also opposes the vortex.}
\label{fig3}
\end{figure}
The novel results of the paper lie in sections 4 and 5 where we move from
semi-infinite domains to confined flows. There is then a coupling of the
core motion to the boundary layer through the radial components of
\textbf{u} and \textbf{J} (see Figure \ref{fig3}). The associated Lorentz and
Coriolis forces tend to oppose the core motion and the central questions now
relate to the influence of the boundary layer on the external vortex, rather
than on the boundary layer itself. There are two canonical problems of
interest here. One is the steady-state case in which the core vortex is
maintained by some external azimuthal force (say that generated by a
rotating magnetic field) and the other is the transient problem of spin-down
from some initial state of rotation. In both cases we are interested in
determining the dominant force balance in the core. (Does the primary
resistance to motion come from the Lorentz force or the Coriolis force?) In
the steady flow we determine the magnitude of $\Omega$ as a function of $A$, while in
the transient problem we calculate the spin-down time, which also depends on
$A$.
There are two dimensionless groups which appear throughout. We have already
mentioned the Elsasser number which provides a measure of the relative sizes
of the Lorentz and Coriolis forces,
\begin{equation}
\label{eq4}
A = \frac{\sigma B^2}{2\Omega \rho } = \frac{1}{2\Omega \tau } =
\frac{\delta _\Omega ^2 }{2\delta _B^2 }.
\end{equation}
This usually lies in the range $0<A<10$, and very rarely exceeds 50. On the
other hand, the Reynolds number, $Re = \Omega W^2 / \nu$, is invariably very
large (here $W$ is the depth of fluid: see Figure \ref{fig3}). Thus we consider the
range of parameters:
\begin{equation}
\label{eq5}
0 < A \ll Re\;\;\;\;,\;\;\;\;\;Re \gg 1.
\end{equation}
However, the flow is assumed to be laminar so that, in practice, $Re$ cannot
be made too large.
\section{Ekman-Hartmann Layers of the Karman Type}
Ekman and Hartmann layers are usually described as a local phenomenon, the
boundary layer being the result of some local difference in the core and
boundary velocities. As a result, they are usually discussed in a planar
framework, using cartesian coordinates. Since we are ultimately interested
in axisymmetric, nonlinear flows of the Karman type, we shall take a
different approach. We restrict ourselves to axisymmetric motion, described
using cylindrical polar coordinates ($r$, $\theta$, $z$), and look for Ekman-Hartmann layers
which possess Karman similarity ($u_{r}$ and $u_{\theta }$ linear in $r$).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Ekman_Ha.eps}
\caption{Ekman-Hartmann geometry}
\label{ekman_ha}
\end{figure}
Consider a conducting fluid which fills the domain $z>0$ and rotates above a
plane, insulating surface located at $z=0$ (see Figure \ref{ekman_ha}). Far from the
surface we have rigid-body rotation, $u_\theta = \Omega r$, and the surface
itself rotates at the lower rate, $\Omega - \Delta \Omega$, where $\Delta \Omega
\ll \Omega $. Both $\Omega $ and $\Delta \Omega $ are assumed to be constant
and so, in a frame of reference rotating with the unperturbed fluid we have,
\begin{eqnarray}
\label{eq6}
0 &=& 2{\rm {\bf u}}\times \Omega - \nabla \left( {p / \rho } \right) +
\nu\nabla ^2{\rm {\bf u}} + {\rm {\bf J}}\times {\rm {\bf B}} / \rho \\
\label{eq7}
{\rm {\bf J}} &=& \sigma \left( {{\rm {\bf u}}\times {\rm {\bf B}} - \nabla V}
\right),
\end{eqnarray}
\noindent
where \textbf{B} is a uniform, imposed magnetic field which is parallel to
${\rm {\bf \Omega }}$. (The inertial term, ${\rm {\bf u}} \cdot \nabla {\rm
{\bf u}}$, is assumed to be much smaller than the Coriolis force, and so is
omitted from (\ref{eq6}).) Taking the curl of (\ref{eq6}) twice and substituting for
\textbf{J} yields,
\begin{equation}
\label{eq8}
\nu\nabla ^4{\rm {\bf u}} - \left( {\sigma / \rho } \right)\left( {{\rm {\bf
B}} \cdot \nabla } \right)^2{\rm {\bf u}} = 2\left( {\Omega \cdot \nabla }
\right)\mathbf{\omega},
\end{equation}
\noindent
where $\mathbf{\omega} = \nabla \times {\rm {\bf u}}$. From this we may obtain the
governing equation for \textbf{u}:
\begin{equation}
\label{eq9}
\left[ {\nu\nabla ^4 - \frac{1}{\tau }\,\frac{\partial ^2}{\partial z^2}}
\right]^2{\rm {\bf u}} + \left( {2\Omega \cdot \nabla } \right)^2\nabla
^2{\rm {\bf u}} = 0
\end{equation}
\noindent
(See, for example, \cite{acheson73}). Let us now look for
axisymmetric solutions of the Karman form:
\begin{equation}
\label{eq10}
{\rm {\bf u}} = {\rm {\bf u}}_p + {\rm {\bf u}}_\theta = - \nabla \times
\left[ {r\Psi (z){\rm {\bf \hat {e}}}_\theta } \right] + rG(z){\rm {\bf \hat
{e}}}_\theta .
\end{equation}
\noindent
We find that both $G$ and ${\Psi }'(z)$ satisfy,
\begin{equation}
\label{eq11}
\left[ {\frac{\delta _\Omega ^2 }{2}\,\frac{\partial ^2}{\partial z^2} - A}
\right]^2\left( {G,{\Psi }'} \right) + \left( {G,{\Psi }'} \right) = 0,
\end{equation}
\noindent
where $\delta _\Omega$ is the B\"{o}dewadt (or Ekman) boundary-layer scale,
\begin{equation}
\label{eq12}
\delta _\Omega = \left( {\nu / \Omega } \right)^{1/2}.
\end{equation}
\noindent
Next we solve for $G$ and ${\Psi }'$. After a little algebra we find,
\begin{eqnarray}
\label{eq13}
\frac{u_r }{\Delta \Omega r}&=& - \exp \left[ { - Rz / \delta _\Omega }
\right]\sin \left[ {z / R\delta _\Omega } \right]\\
\label{eq14}
\frac{u_\theta }{\Delta \Omega r} &=& - \exp \left[ { - Rz / \delta _\Omega }
\right]\cos \left[ {z / R\delta _\Omega } \right]\\
\label{eq15}
\frac{u_z - \left( {u_z } \right)_\infty }{2\Delta \Omega \delta _\Omega } &=&
- \exp \left[ { - Rz / \delta _\Omega } \right]\,\,\left[ {\frac{R}{1 +
R^4}\cos \left( {\frac{z}{R\delta _\Omega }} \right) + \frac{R^3}{1 +
R^4}\sin \left( {\frac{z}{R\delta _\Omega }} \right)} \right]\\
\label{eq16}
\left( {u_z } \right)_\infty &=& 2\Delta \Omega \delta _\Omega \frac{R}{1 +
R^4}
\end{eqnarray}
\noindent
where $R$ is a function of the Elsasser number,
\begin{equation}
\label{eq17}
R = \left[ {A + \left( {1 + A^2} \right)^{1/2}} \right]^{1/2}
\end{equation}
\noindent
Returning now to (\ref{eq7}) we evaluate \textbf{J}. In particular we find that,
outside the boundary layer, we have,
\begin{equation}
\label{eq18}
\left( {J_z } \right)_\infty = 2\sigma B\Delta \Omega \delta _\Omega
\frac{R^3}{1 + R^4}
\end{equation}
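As a quick consistency check (not part of the original analysis), (\ref{eq16}) can be recovered by integrating the radial profile (\ref{eq13}) through the layer, since continuity gives $(u_z)_\infty = -2\Delta \Omega \int_0^\infty (u_r / \Delta\Omega r)\, dz$. A stdlib-only Python sketch, with lengths in units of $\delta_\Omega$:

```python
# Check of (16): integrate the radial Ekman-Hartmann profile (13) and
# compare with the closed-form pumping 2R/(1+R^4).  Lengths are in units
# of delta_Omega and velocities in units of DeltaOmega*r (illustrative only).
import math

def R_of_A(A):
    # R = [A + (1 + A^2)^(1/2)]^(1/2), equation (17)
    return math.sqrt(A + math.sqrt(1.0 + A * A))

def uz_infinity(A, zmax=60.0, n=200000):
    # (u_z)_inf = -2 * integral of F(z) dz, with F(z) = -exp(-R z) sin(z/R),
    # evaluated by a composite Simpson rule (n must be even)
    R = R_of_A(A)
    h = zmax / n
    total = 0.0
    for i in range(n + 1):
        z = i * h
        f = 2.0 * math.exp(-R * z) * math.sin(z / R)
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * f
    return total * h / 3.0

for A in (0.0, 1.0, 10.0):
    R = R_of_A(A)
    assert abs(uz_infinity(A) - 2.0 * R / (1.0 + R**4)) < 1e-6
```

For $A=0$ this reproduces the Ekman value $(u_z)_\infty = \Delta\Omega\,\delta_\Omega$ of (\ref{eq21}), and the pumping decreases monotonically as $A$ grows.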
\noindent
It is convenient that all of the electromagnetic effects are bound up in the
single parameter, $R$. If we let $A \to 0$ we capture the conventional Ekman
solution,
\begin{eqnarray}
\label{eq19}
\frac{u_r }{\Delta \Omega r} = - \exp \left[ { - z / \delta _\Omega }
\right]\sin \left[ {z / \delta _\Omega } \right] \quad &,&
\quad
\frac{u_\theta }{\Delta \Omega r} = - \exp \left[ { - z / \delta _\Omega }
\right]\cos \left[ {z / \delta _\Omega } \right]\\
\label{eq21}
\left( {u_z } \right)_\infty &=& \Delta \Omega \delta _\Omega
\end{eqnarray}
\noindent
Conversely, if we let $A\to\infty $ then we obtain the Hartmann
solution,
\begin{equation}
\label{eq22}
u_r = u_z = 0,
\quad
u_\theta = - \Delta \Omega r\,\exp \left[ { - z / \delta _B } \right],
\quad
(J_z )_\infty = 2\sigma B\Delta \Omega \delta _B
\end{equation}
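As a simple numerical illustration (not in the original), one can check that for large $A$ the profiles (\ref{eq13})-(\ref{eq14}) collapse onto this Hartmann exponential, with decay length $\delta_B = \delta_\Omega/\sqrt{2A}$:

```python
# Large-A collapse of (13)-(14) onto the Hartmann profile (22).
# Units: delta_Omega = 1, velocities in units of DeltaOmega*r.
import math

A = 1.0e4
R = math.sqrt(A + math.sqrt(1.0 + A * A))   # (17); R ~ (2A)**0.5 for large A
delta_B = 1.0 / math.sqrt(2.0 * A)          # from (4), with delta_Omega = 1

for z in (0.5 * delta_B, delta_B, 2.0 * delta_B):
    u_theta = -math.exp(-R * z) * math.cos(z / R)    # (14)
    hartmann = -math.exp(-z / delta_B)               # (22)
    assert abs(u_theta - hartmann) < 1e-3 * abs(hartmann)
    u_r = -math.exp(-R * z) * math.sin(z / R)        # (13): negligible here
    assert abs(u_r) < 1e-3
```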
Thus we have a smooth transition from a Coriolis dominated flow to a Lorentz
dominated motion. Of particular interest are the far-field values of
\textbf{u} and \textbf{J}, since it is these which feed into the core flow.
In dimensionless form these are related by,
\begin{equation}
\label{eq23}
\frac{\left( {J_z } \right)_\infty }{\sigma B(u_z )_\infty }\, = A + \left(
{1 + A^2} \right)^{1 \mathord{\left/ {\vphantom {1 2}} \right.
\kern-\nulldelimiterspace} 2}
\end{equation}
\noindent
If we define the boundary-layer thickness, $\delta$, to be the distance over which
$u$ declines by a factor of $e^{-1}$, then we also have,
\begin{equation}
\label{eq24}
\delta ^2 = \frac{\delta _\Omega ^2 }{\left( {1 + A^2} \right)^{1/2}+
A} = \frac{2\delta _B^2 }{1 +
\left({1 + A^{ - 2}} \right)^{1/2}}
\end{equation}
\noindent
Note that when $\delta _\Omega $ and $\delta _B $ are very different there
is still only one relevant length scale, $\delta$. That is, there is no nesting of
the boundary layers, with one lying within the other. Comparing (\ref{eq23}) with
(\ref{eq24}) we see that, for arbitrary $B$,
\begin{equation}
\label{eq25}
\frac{(J_z )_\infty }{\sigma B(u_z )_\infty } = \left( {\frac{\Omega \delta
^2}{\nu }} \right)^{ - 1}
\end{equation}
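The two forms of (\ref{eq24}), and the relation (\ref{eq25}), are easily confirmed numerically; the following stdlib-only Python sketch (illustrative, with $\delta_\Omega = 1$, and $\delta_B^2 = \delta_\Omega^2/2A$ taken from (\ref{eq4})) checks them over a range of $A$:

```python
# Consistency check of (24) and (25): both forms of delta^2 agree, and
# delta_Omega^2/delta^2 = R^2 = A + (1 + A^2)^(1/2), cf. (23).
import math

def R_of_A(A):
    return math.sqrt(A + math.sqrt(1.0 + A * A))

def delta_sq_ekman_form(A):
    # first form of (24), with delta_Omega = 1
    return 1.0 / (math.sqrt(1.0 + A * A) + A)

def delta_sq_hartmann_form(A):
    # second form of (24), using delta_B^2 = 1/(2A) from (4)
    d_B_sq = 1.0 / (2.0 * A)
    return 2.0 * d_B_sq / (1.0 + math.sqrt(1.0 + A**-2))

for A in (1e-3, 0.5, 1.0, 50.0):
    d1 = delta_sq_ekman_form(A)
    d2 = delta_sq_hartmann_form(A)
    assert abs(d1 - d2) < 1e-12 * d1            # the two forms of (24) agree
    assert abs(1.0 / d1 - R_of_A(A)**2) < 1e-9  # (25), using (23)
```

In the limits this recovers $\delta \to \delta_\Omega$ as $A \to 0$ and $\delta \to \delta_B$ as $A \to \infty$, so the two classical thicknesses merge smoothly rather than nesting.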
\noindent
The inward flow of mass and current in the boundary layer occurs essentially for
the reasons given in Section 1. The mass flow arises from the radial pressure
gradient set up in the core flow. A similar argument explains the
radial current. Outside the boundary layer the electrostatic term in Ohm's
law (\ref{eq7}) is balanced by ${\rm {\bf u}}_\theta \times {\rm {\bf B}}$:
\begin{displaymath}
E_r = - \frac{\partial V}{\partial r} = - u_\theta B = - \Omega Br
\end{displaymath}
\noindent
This electric field is also imposed on the boundary layer where $ - u_\theta
B$ is insufficient to balance it. The result is a flow of current as shown
in Figure \ref{ekman_ha}.
\section{B\"{o}dewadt-Hartmann Layers in a Semi-infinite Fluid}
\subsection{The Governing Equations}
Let us now turn to the non-linear problem in which the plate in Figure \ref{ekman_ha} is
stationary. That is, we consider a B\"{o}dewadt-Hartmann layer in a
semi-infinite fluid. As noted by \cite{stephenson69}, such a layer admits a
Karman-like solution in which $u_\theta $ and $u_{r}$ are linear in $r$. This
time our governing equations, in an absolute frame of reference, are
\begin{eqnarray}
\label{eq26}
{\rm {\bf u}} \cdot \nabla {\rm {\bf u}} = - \nabla \left( {p / \rho }
\right) + \nu\nabla ^2{\rm {\bf u}} + {\rm {\bf J}}\times {\rm {\bf B}} / \rho\\
\label{eq27}
{\rm {\bf J}} = \sigma \left( {{\rm {\bf u}}\times {\rm {\bf B}} - \nabla V}
\right)
\end{eqnarray}
\noindent
From this we find
\begin{equation}
\label{eq28}
\nu\nabla ^4{\rm {\bf u}} - \left( {\sigma / \rho } \right)\left( {{\rm {\bf
B}} \cdot \nabla } \right)^2{\rm {\bf u}} = \nabla \times \nabla \times
({\rm {\bf u}}\times \bf{\omega} )
\end{equation}
\noindent
which might be compared with (\ref{eq8}). We now look for Karman-like solutions of
the form:
\begin{displaymath}
{\rm {\bf u}} = \left[ {\Omega rF(z / l),\;\;\Omega rG(z / l),\,\;\;\Omega
lH(z / l)} \right]\;,\; p = \frac{1}{2}\rho \Omega
^2\left[ {r^2 + \hat {P}(z / l)} \right]
\end{displaymath}
\noindent
where $l$ is some (as yet) unspecified length scale. The boundary conditions on
$F$, $G$, $H$ and $\hat {P}$ are:
\begin{displaymath}
{\begin{array}{*{20}c}
{z = 0\;\;:\,\;\;\;F = 0\;\;\;,\;\;\;G = 0\;\;,\;\;\;H = 0} \hfill \\
{z \to \infty \;\;:\,\;\;\;F = 0\;\;\;,\;\;\;G = 1\;\;,\;\;\;\hat {P} = 0.}
\hfill \\
\end{array} }
\end{displaymath}
\noindent
Substitution of the expression for \textbf{u} into (\ref{eq28}) yields a set of
ordinary differential equations for $F$, $G$ and $H$. However, from a
physical point of view it is more interesting to work with (\ref{eq26}). First we
note that the axial component of (\ref{eq26}) gives a differential equation for
$\hat {P}$, from which we may deduce that $\hat {P}\sim \delta ^2$. In other
words, $\hat {P}$ is a small perturbation in pressure within the boundary
layer. Next we turn to the radial and azimuthal components of (\ref{eq26}). This,
in turn, requires that we evaluate \textbf{J}~$\times $~\textbf{B}. From
Ohm's law it is readily confirmed that
\begin{equation}
\label{eq29}
{\rm {\bf J}}\times {\rm {\bf B}} = \left[ { - \sigma u_r B^2,\;\; - \sigma
B(u_\theta B - \partial V / \partial r),\,0} \right]
\end{equation}
\noindent
In order to fix $V$ we specify that there is no radial current, and hence no
azimuthal Lorentz force, outside the boundary layer. In addition, (\ref{eq27})
demands
\begin{displaymath}
\nabla ^2V = \nabla \cdot \left( {{\rm {\bf u}}\times {\rm {\bf B}}} \right)
= {\rm {\bf B}} \cdot \mathbf \omega
\end{displaymath}
\noindent
from which we deduce that the electrostatic potential is of the form,
\begin{equation}
V = \frac{1}{2}B\Omega r^2 - 2B\Omega \int{\int{(1 - G)dzdz}}
\end{equation}
\noindent
It follows that the Lorentz force is simply,
\begin{equation}
\label{eq30}
{\rm {\bf J}}\times {\rm {\bf B}} = \left[ { - \sigma B^2u_r ,\; - \sigma
B^2(u_\theta - \Omega r),\;0} \right]
\end{equation}
\noindent
The radial and azimuthal components of (\ref{eq26}), along with
the continuity equation, then yield
\begin{eqnarray}
\label{eq31}
F^2 + H{F}' - G^2 + 1 = (\nu / \Omega l^2){F}'' - 2AF\\
\label{eq32}
2FG + {G}'H = (\nu / \Omega l^2){G}'' - 2A[G - 1]\\
{H}' + 2F = 0. \nonumber
\end{eqnarray}
\noindent
Finally, it is of interest to determine the
magnitude of the current leaving the boundary layer. This is fixed by (\ref{eq27})
in the form,
\begin{equation}
\nabla \times {\rm {\bf J}} = \sigma ({\rm {\bf B}} \cdot \nabla ){\rm {\bf
u}}
\end{equation}
\noindent
the axial component of which yields
\begin{equation}
\label{eq33}
(J_z )_\infty = 2\sigma B\Omega \int_0^\infty {(1 - G)dz}
\end{equation}
\noindent
It is convenient to choose $l = \delta _\Omega $, the B\"{o}dewadt boundary
layer thickness. Our governing equations for \textbf{u} then simplify to
\begin{eqnarray}
\label{eq34}
{F}'' = F^2 + H{F}' + (1 + G)(1 - G) + 2AF\\
\label{eq35}
{G}'' = 2FG + H{G}' + 2A(G - 1)\\
\label{eq36}
{H}' = - 2F,
\end{eqnarray}
These equations are readily solved numerically to give $F$, $G$ and $H$. This then
yields $(J_z )_\infty / \sigma B\Omega \delta _\Omega $ and $(u_z )_\infty / \Omega
\delta _\Omega $ as functions of $A$, which is the primary information we need
for the problems of sections 4 and 5. However, we shall see that it is
possible to obtain analytical estimates of $J_{z}$ and $u_{z}$, which turn out
to be more useful.
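For readers who want to reproduce the profiles without implementing the block-Newton scheme of section 3.3, the steady solution of (\ref{eq34})-(\ref{eq36}) can also be reached by a simple pseudo-transient relaxation. The Python sketch below (stdlib only; the truncated domain, mesh and $A=1$ are illustrative choices, and this is not the authors' code) marches artificial time derivatives of $F$ and $G$ to a steady state, recomputing $H$ from (\ref{eq36}) at each step:

```python
# Pseudo-transient relaxation for the similarity system (34)-(36).
# NOT the paper's Newton scheme: an illustrative alternative that marches
#   dF/dt = F'' - [F^2 + H F' + (1+G)(1-G) + 2 A F]
#   dG/dt = G'' - [2 F G + H G' + 2 A (G-1)]
# to a steady state, with H from H' = -2F.  z is in units of delta_Omega.
import math

A = 1.0
zmax, n = 12.0, 100
h = zmax / n
dt = 2.0e-3                      # well inside the explicit limit h**2/2

F = [0.0] * (n + 1)
G = [1.0 - math.exp(-i * h) for i in range(n + 1)]   # initial guess
H = [0.0] * (n + 1)

change = 1.0
for step in range(60000):
    for i in range(1, n + 1):                        # trapezoidal H' = -2F
        H[i] = H[i - 1] - h * (F[i] + F[i - 1])
    Fn, Gn = F[:], G[:]
    change = 0.0
    for i in range(1, n):
        Fz = (F[i + 1] - F[i - 1]) / (2 * h)
        Gz = (G[i + 1] - G[i - 1]) / (2 * h)
        Fzz = (F[i + 1] - 2 * F[i] + F[i - 1]) / (h * h)
        Gzz = (G[i + 1] - 2 * G[i] + G[i - 1]) / (h * h)
        rF = Fzz - (F[i]**2 + H[i] * Fz + (1 + G[i]) * (1 - G[i]) + 2 * A * F[i])
        rG = Gzz - (2 * F[i] * G[i] + H[i] * Gz + 2 * A * (G[i] - 1))
        Fn[i] = F[i] + dt * rF
        Gn[i] = G[i] + dt * rG
        change = max(change, abs(dt * rF), abs(dt * rG))
    F, G = Fn, Gn
    F[n], G[n] = 0.0, 1.0                            # truncated far field
    if change < 1e-11:
        break

H_inf = H[n]            # dimensionless Ekman pumping (u_z)inf/(Omega delta_Omega)
assert change < 1e-11   # the march has reached a steady state
assert min(F) < -0.02   # radial inflow inside the layer
assert 0.1 < H_inf < 1.5
```

One can then read off $H_\infty$ and compare it with the estimate derived in section 3.2.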
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{Ekman.eps}
\caption{B\"{o}dewadt layer profiles obtained under different assumptions. Solid:
analytical solution of the Ekman problem. Dotted: numerical solution for the B\"odewadt problem
calculated in section 4. Dashed: weakly non-linear solution. Curves above the $z-$ axis
represent azimuthal velocity profiles and curves below represent radial velocity profiles.}
\label{ekman_sol}
\end{figure}
\noindent
When $A=0$ (no magnetic field) our governing equations reduce to the standard
equations describing a B\"{o}dewadt layer. Let us consider this special case
for a moment. The governing equations are,
\begin{eqnarray}
\label{eq37}
{F}'' = (1 + G)(1 - G) + F^2 + H{F}'\\
\label{eq38}
{G}'' = 2FG + {G}'H\\
\label{eq39}
{H}' = - 2F
\end{eqnarray}
\noindent
On integration these yield $F_\infty = 0$, $G_\infty = 1$ and $H_\infty =
1.349$. Actually, as Greenspan points out, a rough approximation to the
B\"{o}dewadt solution may be obtained by replacing the non-linear inertial
terms on the right of (\ref{eq37}) and (\ref{eq38}) by the Coriolis force, $ - 2{\rm
{\bf u}}_r \times \Omega $, where \textbf{u}$_{r}$ is the velocity measured
in a frame of reference rotating with the fluid at infinity. This gives
\begin{equation}
\label{eq40}
{F}'' = 2(1 - G)\;\;\;,\;\;\;\;{G}'' = 2F
\end{equation}
\noindent
Of course, this leads to the Ekman solutions (\ref{eq19}), with $\Delta
\Omega $ set equal to $\Omega $. Such a procedure is usually called the
linear, or Ekman, approximation. Surprisingly, there is a reasonable
qualitative agreement between the linear (Ekman) approximation and the exact
non-linear solution (see \cite{green69}, and Figure \ref{ekman_sol}). In both cases the
solution takes the form of a decaying oscillation of $F$ and $G-1$, and the
frequency of oscillation is very similar in the two cases. However, the
linear approximation over-estimates the exponential-like decay of $F$ and
$G-1$ by a factor of about 2. It also underestimates $H_\infty $ by around
35{\%}.\\
Turning now to the other extreme, of large $A$, equation (\ref{eq35}) reduces to
\begin{equation}
\label{eq41}
{G}'' = 2A(G - 1)
\end{equation}
\noindent
This, plus (\ref{eq33}), yields
\begin{eqnarray}
\label{eq42}
(J_z )_\infty = 2\sigma B\Omega \delta _B \\
\label{eq43}
G = 1 - \exp \left[ { - z / \delta _B } \right]
\end{eqnarray}
\noindent
Of course, these coincide exactly with the results of the linear
Ekman-Hartmann analysis (in the limit of large $A)$. The exact correspondence
between (\ref{eq43}) and (\ref{eq22}), when $\Delta \Omega $ is set equal to $\Omega $,
is inevitable since in both cases the non-linear inertial terms are
neglected.
The general picture, then, is that the linear Ekman-Hartmann approximation
(with $\Delta \Omega $ set equal to $\Omega )$ yields results which are
qualitatively similar to the B\"{o}dewadt-Hartmann problem when $A$~=~0, and
that the two analyses coincide when $A$ becomes large. We now show how to
obtain an improved approximation to the B\"{o}dewadt-Hartmann solution which
has a simple algebraic form. We follow a method originally developed for B\"odewadt
layers (see for example \cite{cole68}).
\subsection{An Approximate Analytical Solution}
Before tackling the weakly non-linear problem, it is important to note that
the full system (\ref{eq34})-(\ref{eq36}) with associated boundary conditions need not have
a unique solution (see \cite{zandbergen87}). Physically, however, this most likely
relates to the absence of lateral boundary
conditions, which appear to play a determining role in real experiments. The solution we
consider is the physically most important one, in that it is the one which appears in practice when fixed
lateral boundaries are included at large radius.
Let us return to (\ref{eq34}) and (\ref{eq35}) and look for solutions at large $z$. If we
linearise the equations around the far-field solution $(F,G,H) =
(0,1,H_\infty )$ we obtain
\begin{eqnarray}
\label{eq44}
{F}'' - H_\infty {F}' - 2AF = - 2\hat {G}\\
\label{eq45}
{\hat {G}}'' - H_\infty {\hat {G}}' - 2A\hat {G} = 2F
\end{eqnarray}
\noindent
where $\hat {G} = G - 1$. This yields oscillatory solutions of the form,
\begin{equation}
\label{eq46}
\hat {G}_\infty ,F_\infty \sim \exp \left[ { - (\hat {R} - H_\infty / 2)z /
\delta _\Omega } \right]\exp \left[ {\pm jz / \hat {R}\delta _\Omega }
\right]
\end{equation}
\noindent
where
\begin{equation}
\label{eq47}
\hat{R} = \left[ {\hat{A}+(1+\hat{A}^2)^{1/2}} \right]^{1/2}
, \quad \hat {A} = A + H_\infty ^2 / 8, \quad j^2 = -1
\end{equation}
\noindent
Note that if we set $H_\infty $ to zero in (\ref{eq46}) and (\ref{eq47}) we obtain the
linear Ekman estimate. Let us now make two approximations. First, we take
$H_\infty $ to be given by the linear Ekman-Hartmann solution (\ref{eq17}):
\begin{equation}
H_\infty = \frac{2R}{1 + R^4}\;\;\;\;,\;\;\;\;\;R = \left[ {A + \left( {1 +
A^2} \right)^{1/2}} \right]^{1/2}
\end{equation}
\noindent
Second, we assume that our estimates of $\hat {G}_\infty $ and $F_\infty $
are valid, not just for large $z$, but for all $z$. If this is true then,
\begin{eqnarray}
\label{eq48}
\hat {G} = G - 1 = - \exp \left[ { - (\hat {R} - H_\infty / 2)z / \delta
_\Omega } \right]\cos (z / \hat {R}\delta _\Omega )\\
\label{eq49}
F = - \exp \left[ { - (\hat {R} - H_\infty / 2)z / \delta _\Omega }
\right]\sin (z / \hat {R}\delta _\Omega )
\end{eqnarray}
\noindent
Let us now see how our guesses have fared. We look first at small $A$. When
$A = 0$ (a pure B\"{o}dewadt layer) we have $\hat {R} = 1.064$ and the
resulting curves for $F$ and $G$ are plotted in Figure \ref{ekman_sol}. The exact solution and
the linear Ekman approximation are also given for comparison. Evidently,
there is a reasonable correspondence between (\ref{eq48}) and (\ref{eq49}) and the exact
solution. For large $A$, on the other hand, (\ref{eq48}) and (\ref{eq49}) reduce to
\begin{equation}
\label{eq50}
F = 0\,\,\,,\,\,\,G = 1 - \exp \left[ { - z / \delta _B } \right]
\end{equation}
\noindent
which corresponds precisely with both the exact solution and the Ekman
approximation.
Given that (\ref{eq48}) and (\ref{eq49}) are reasonably accurate for small $A$, and exact
for large $A$, our proposal is to adopt them as approximations to the
B\"{o}dewadt-Hartmann layer in sections 4 and 5. The corresponding current
distribution is given by (\ref{eq33}) and this, combined with (\ref{eq48}), fixes
$J_{z}$. In summary then, we have the following estimates of $(u_z )_\infty $
and $(J_z )_\infty $:
\begin{eqnarray}
\label{eq51}
\frac{(u_z )_\infty }{\Omega \delta _\Omega } = \frac{2R}{1 +
R^4}\;\;\;\;,\;\;\;\;R = \left[ {A + (1 + A^2)^{1/2}} \right]^{1/2}\\
\label{eq52}
\frac{(J_z )_\infty }{2\sigma B\Omega \delta _\Omega } = \frac{\hat
{R}^2(\hat {R} - H_\infty / 2)}{1 + \hat {R}^2(\hat {R} - H_\infty /
2)^2}\;\;\;\;,\;\;\;\;\hat {R} = \left[ {\hat {A} + (1 + \hat {A}^2)^{1/2}} \right]^{1/2}
\end{eqnarray}
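The estimates (\ref{eq51})-(\ref{eq52}) are straightforward to evaluate; the short Python sketch below (stdlib only, illustrative) does so and checks the two limits: the Ekman pumping $(u_z)_\infty = \Omega\delta_\Omega$ at $A = 0$, and the Hartmann current (\ref{eq42}) as $A \to \infty$.

```python
# Evaluation of the estimates (51)-(52), with (u_z)inf in units of
# Omega*delta_Omega and (J_z)inf in units of 2*sigma*B*Omega*delta_Omega.
import math

def pump_and_current(A):
    R = math.sqrt(A + math.sqrt(1.0 + A * A))        # (17)
    uz = 2.0 * R / (1.0 + R**4)                      # (51)
    H_inf = uz                                       # linear estimate of H_inf
    A_hat = A + H_inf**2 / 8.0                       # (47)
    R_hat = math.sqrt(A_hat + math.sqrt(1.0 + A_hat**2))
    decay = R_hat - H_inf / 2.0
    Jz = R_hat**2 * decay / (1.0 + R_hat**2 * decay**2)  # (52)
    return uz, Jz

# A -> 0: pumping tends to the linear Ekman value (u_z)inf = Omega*delta_Omega
assert abs(pump_and_current(0.0)[0] - 1.0) < 1e-12

# A -> infinity: (J_z)inf -> 2*sigma*B*Omega*delta_B, the Hartmann result
# (42), since delta_B/delta_Omega = (2A)**-0.5 by (4)
A_big = 1.0e6
assert abs(pump_and_current(A_big)[1] * math.sqrt(2.0 * A_big) - 1.0) < 1e-2
```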
\subsection{Numerical Solutions of the Governing Equations}
Before adopting (\ref{eq51}) and (\ref{eq52}) it seems sensible to compare these with
the exact solutions of (\ref{eq34})-(\ref{eq36}) obtained by numerical means.
First of all, the problem is expressed on a finite interval, using the
change of variable $\frac{z}{\delta _\Omega } = - \ln \left( {1 - t} \right)$.
The resulting system is then discretized with a centred finite-difference
method. The associated non-linear set of equations is solved using a
Newton-Raphson algorithm (the equivalent first-order system is then
5-dimensional). In order to compute the large number of points
required to reach high values of $z$ in a reasonable computation time, we need a
fast matrix inversion. We proceed as follows: the $5n$ equations (where $n$ is the
number of points in the mesh) are ordered so that the finite-difference
system is represented by a bi-diagonal $5\times 5$-block matrix
(diagonal blocks $J_{k}$ and sub-diagonal blocks $K_{k}$) with a block $C$ in
the upper right corner containing the boundary conditions:
\begin{equation}
\left[ {{\begin{array}{*{20}c}
{J_1 } \hfill & \hfill & \hfill & C \hfill \\
{K_2 } \hfill & {J_2 } \hfill & \hfill & \hfill \\
\hfill & {...} \hfill & {...} \hfill & \hfill \\
\hfill & \hfill & {K_n } \hfill & {J_n } \hfill \\
\end{array} }} \right]\times \left[ {{\begin{array}{*{20}c}
{X_1 } \hfill \\
{X_2 } \hfill \\
{...} \hfill \\
{X_n } \hfill \\
\end{array} }} \right] = \left[ {{\begin{array}{*{20}c}
{F_1 } \hfill \\
{F_2 } \hfill \\
{...} \hfill \\
{F_n } \hfill \\
\end{array} }} \right]
\end{equation}
A first system S$_{1}$ is formed with the first block-line. It involves the
unknown blocks $X_{1}$ and $X_{n}$ (the unknown vector $X$ is split into $n$
5-dimensional ``block-vectors''). The unknown $X_{n}$ is expressed
recursively as a function of $J_{k}$, $K_{k}$, $k \in \{2,\dots,n\}$, and $X_{1}$ thanks to the
bi-diagonal structure of the matrix. The resulting system of 5 equations
(i.e.\ one block) can then be added to S$_{1}$ to give an invertible system whose
solutions are $X_{1}$ and $X_{n}$. The other unknowns are then deduced
recursively. This inversion method requires a number
of operations proportional to $n$ (versus $n^2$, if the Jacobian matrix had
been directly inverted) which considerably reduces the computation time.
The accuracy of the procedure was checked by comparing the analytical and numerical
solutions of the Ekman problem.\\
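The elimination just described is easy to prototype. The stdlib-only Python sketch below uses random diagonally dominant blocks in place of the actual Jacobian (so it illustrates the matrix structure, not the authors' solver); it performs the $O(n)$ recursion and verifies the residual of the bordered block-bidiagonal system:

```python
# O(n) elimination for a block bi-diagonal matrix with an upper-right
# corner block C.  Random well-conditioned blocks stand in for the Jacobian.
import random

random.seed(1)
m, n = 5, 40                          # block size, number of block-rows

def rand_block(boost=0.0):
    # random m x m block; 'boost' adds diagonal dominance for invertibility
    return [[random.uniform(-1, 1) + (boost if i == j else 0.0)
             for j in range(m)] for i in range(m)]

def mm(A, B):                         # matrix product (vectors are m x 1)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def solve(A, B):
    # Gauss-Jordan elimination with partial pivoting: returns X with A X = B
    p = len(B[0])
    M = [A[i][:] + B[i][:] for i in range(m)]
    for k in range(m):
        piv = max(range(k, m), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(m):
            if r != k:
                f = M[r][k] / M[k][k]
                for c in range(k, m + p):
                    M[r][c] -= f * M[k][c]
    return [[M[i][m + j] / M[i][i] for j in range(p)] for i in range(m)]

J = [rand_block(4.0) for _ in range(n)]            # diagonal blocks
K = [None] + [rand_block() for _ in range(n - 1)]  # sub-diagonal blocks
C = rand_block()                                   # upper-right corner block
Fv = [[[random.uniform(-1, 1)] for _ in range(m)] for _ in range(n)]

# Forward recursion X_k = a_k + M_k X_1 (0-based: X[0] stands for X_1)
I = [[float(i == j) for j in range(m)] for i in range(m)]
a, Mk = [[[0.0] for _ in range(m)]], [I]
for k in range(1, n):
    a.append(solve(J[k], msub(Fv[k], mm(K[k], a[k - 1]))))
    Mk.append(solve(J[k], [[-x for x in row] for row in mm(K[k], Mk[k - 1])]))

# The first block-row closes the system: (J_1 + C M_n) X_1 = F_1 - C a_n
X1 = solve(madd(J[0], mm(C, Mk[-1])), msub(Fv[0], mm(C, a[-1])))
X = [madd(a[k], mm(Mk[k], X1)) for k in range(n)]

def residual():
    # residual of the full bordered block-bidiagonal system
    r = [msub(madd(mm(J[0], X[0]), mm(C, X[-1])), Fv[0])]
    for k in range(1, n):
        r.append(msub(madd(mm(K[k], X[k - 1]), mm(J[k], X[k])), Fv[k]))
    return max(abs(e[0]) for blk in r for e in blk)

assert residual() < 1e-9
```

The operation count is linear in $n$, as claimed, since each step involves only fixed-size $5\times5$ solves and products.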
\indent
We first investigate the non-magnetic case ($A=0$). Figure \ref{ekman_sol} shows a comparison
between different estimates the B\"odewadt problem: the fully non-linear solution of
(\ref{eq37})-(\ref{eq39}) (obtained numerically
on 23000 points), the weakly non-linear solution (\ref{eq48})-(\ref{eq49}) and the
linear Ekman approximation. It appears
that non-linear effects are responsible for rather stronger oscillations in the
velocity profiles than those predicted by (\ref{eq48})-(\ref{eq49}). This is consistent with the assumptions on which the
analytical solution (\ref{eq48})-(\ref{eq49}) relies, as the latter extrapolates a
solution (\ref{eq46})-(\ref{eq47}) valid for large $z$ and takes the value returned by the
linear Ekman-Hartmann theory for $H_{\infty }$. The associated Ekman pumping is
then underestimated by 35{\%} (compared to the fully non-linear solution), and so
are the radial velocity and the oscillations. For $A=0$, the discrepancy between
(\ref{eq48})-(\ref{eq49}) and the full numerical result is below 10{\%} on the azimuthal and
radial velocities, which is not such an expensive price to pay for an
analytical solution.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{vitesses.eps}
\caption{Velocity profiles in the B\"odewadt-Hartmann layer for values of $A$
in the range $10^{-3} \to 10^3$. The arrows go from low to high $A$ curves.}
\label{vitesses}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{debits.eps}
\caption{Vertical velocity (left) and vertical electric current (right) at the edge
of the B\"odewadt-Hartmann layer. The numerical simulation is given by the solid line
and the approximate solution (\ref{eq51}-\ref{eq52}) by the dashed line.}
\label{debits}
\end{figure}
We now turn our attention toward the non-linear MHD case ($A \ne 0$). System
(\ref{eq37}-\ref{eq39}) is solved numerically for values of $A$ taken in the range $10^{ -
3} \rightarrow 10^{3}$ (figure \ref{vitesses}).
For $A<1$, the velocity
profiles across the layer are close to the well-known B\"odewadt layer
profile, but with the difference that the oscillating part of the profile is
softened by the action of the magnetic field, as predicted by (\ref{eq48})-(\ref{eq49}).
For $A \gg 1$, the profile is rather close to the exponential profile of the Hartmann
layer without oscillations. In this case, the results of the numerical
simulation may be compared not only with our approximate solution (\ref{eq48})-(\ref{eq49}),
but also with those of \cite{psm00}, which applies to Hartmann layers with weak inertia.
Figure \ref{debits}
shows the various estimates of $(u_z)_\infty$ and $(J_z)_\infty$, the vertical velocity
and current leaving the boundary layer. The left-hand figure shows $(u_z)_\infty$. It can be seen
that our approximate solution (\ref{eq51}) is close to the exact value for all $A$. The model
of \cite{psm00} is accurate for $A>2$ but not for small $A$. The right-hand figure shows $(J_z)_\infty$
corresponding to approximation (\ref{eq52}), as well as the exact, computed value. The approximate solution
is good for $A>1$ but overestimates the current for $A<1$.\\
Note that \cite{desjardins99}, \cite{desjardins01} have investigated the stability of
geophysical Ekman-Hartmann layers. Their study differs from the present
problem by the geometry (spherical) and also by the fact that magnetic field
and rotation are not aligned nor orthogonal to the layer. The results
however do not depend on the co-latitude (along which the angle between the
layer and the rotation varies), which suggests that they might be of some
relevance here. In the non-magnetic case, it is found that the flow is
non-linearly unstable for values of the Reynolds number scaled on the boundary
layer thickness above 0.55 ($A=0$) and linearly unstable for values above 40. The
presence of the magnetic field makes the flow more stable so that these
values are changed to 0.71 and 160 respectively for $A=1$. (The linear stability
value corresponds to the co-latitude for which the rotation is orthogonal to
the boundary layer). This is consistent with the fact that plane Hartmann
layers are indeed much more stable than rotation layers. (They have a
linear stability threshold around 50000, according to
\cite{roberts67}). These results underline the fact that
the solutions obtained numerically in this section are only valid below a
threshold value of $\Omega$, which increases with $A$. For stronger rotation,
\cite{desjardins01} showed that traveling waves appear in the plane of the
layer.
\section{Forced Vortex in a Confined Layer}
We now look at flows which are typical of laboratory experiments. In particular,
we consider a pool of depth $W$, the depth being
assumed to be much greater than the B\"{o}dewadt-Hartmann boundary layer
thickness (figure \ref{fig3}). That is, we restrict ourselves to free surface
flows which have a
high Reynolds number. There are two particular cases of interest. The first
is where a steady vortex is maintained by an external azimuthal force, say
that produced by a rotating magnetic field. We shall study that problem
here. The second, which we leave to section 5, is the transient problem of
spin down from some initial state of rotation. The geometry for both cases
is shown in Figure \ref{fig3}. For simplicity, we model the free surface at
$z=W$ as a symmetry plane.
In this section we look at the case where the vortex is maintained by the
body force,
\begin{equation}
{\rm {\bf F}} = \frac{1}{2}\Omega _f^2 r{\rm {\bf \hat {e}}}_\theta
\;\;\;\;\;\;\;\;\;\;,\;\;\Omega _f = constant
\end{equation}
\noindent
Since $F_\theta $ is linear in $r$ we can once again look for solutions of the
Karman type. The resulting equations are of a form similar to (\ref{eq29})-(\ref{eq33}).
That is, if we look for solutions of the form
\begin{equation}
\label{eq53}
{\rm {\bf u}} = \left[ {\Omega _c rF\left( {z / l} \right)\;\;,\;\;\;\Omega
_c rG(z / l)\;\;,\;\;\;\Omega _c lH(z / l)} \right]
\end{equation}
\noindent
we find,
\begin{eqnarray}
\label{eq54}
{H}' + 2F = 0\\
\label{eq55}
F^2 + H{F}' - G^2 + 1 = (\nu / \Omega _c l^2){F}'' - 2AF\\
\label{eq56}
2FG + {G}'H = (\nu / \Omega _c l^2){G}'' + \frac{1}{2}\left( {\Omega _f /
\Omega _c } \right)^2 - 2A\left[ {G - f} \right]
\end{eqnarray}
\noindent
Here $\Omega _c $ is a typical rotation rate outside the boundary layer, $l$ is
an arbitrary length scale, and $f$ is related to the radial gradient of $V$,
\begin{displaymath}
f = \left( {\Omega _c rB} \right)^{ - 1}\frac{\partial V}{\partial r}
\end{displaymath}
\noindent
(See equation (\ref{eq29}) for the origin of the term $G-f$.) The only differences
between (\ref{eq54})-(\ref{eq56}) and (\ref{eq31})-(\ref{eq32}) are that: (i) we have incorporated
the driving force $\frac{1}{2}\Omega _f^2 r$ ; and (ii) we have yet to
specify $f$. In the semi-infinite domain problem of section 3 we specified that
$J_r $ is zero outside the boundary layer and this fixed $f$ as $f = 1$.
However, it is clear from Figure \ref{fig3} that this is no longer legitimate.
\noindent
Now we know that there is a uniform flux of current out of the boundary
layer, which we called $(J_z )_\infty $. Thus the poloidal current in the
core satisfies
\begin{eqnarray}
\label{eq57}
\nabla \cdot {\rm {\bf J}}_p = 0\,\,\,\,\,,\,\,\,\,\,\,\nabla \times {\rm
{\bf J}}_p = \sigma {\rm {\bf B}} \cdot \nabla {\rm {\bf u}}_\theta \\
\label{eq58}
J_z = 0\;\;\;\mbox{on }z = W\;\;,\;\;\;J_z \to (J_z )_\infty \mbox{ as }z
\to 0.
\end{eqnarray}
\noindent
We shall see shortly that $u_\theta $ is virtually independent of $z$ in the
core and so (\ref{eq57}) and (\ref{eq58}) have the unique solution:
\begin{equation}
\label{eq59}
(J_r )_c = \frac{(J_z )_\infty }{2}\,\frac{r}{W}\,\;\;\;,\;\;\;\;(J_z )_c =
(J_z )_\infty \left[ {1 - \frac{z}{W}} \right]
\end{equation}
\noindent
This represents an outward flow of current in the core, as indicated in
Figure \ref{fig3}. From (\ref{eq59}) we can find $V$ and it follows that, in the core of the
flow,
\begin{equation}
\label{eq60}
f_c = G_c - \frac{(J_z )_\infty }{2\sigma \Omega _c WB}
\end{equation}
\noindent
We may simplify (\ref{eq60}) using (\ref{eq33}) in the form
\begin{equation}
\label{eq61}
(J_z )_\infty = 2\sigma B\Omega _c \int_0^\delta {(1 - G)dz}
\end{equation}
\noindent
which gives,
\begin{equation}
\label{eq62}
f_c = G_c - W^{ - 1}\int_0^\delta {(1 - G)dz}
\end{equation}
\noindent
Thus $1 - f_{c}$ is of the order of $\delta / W$ in the core of the flow. In
the boundary layer, on the other hand, we may continue to take $f$~=~1 since
the curvature of the current lines in the core is negligible on the scale of
$\delta$.
\noindent
We are now in a position to write down the governing equations for the core
and for the boundary layer. In the boundary layer we take $l = \delta
_\Omega $, which gives
\begin{eqnarray}
\label{eq63}
{F}''_b = F_b^2 + H_b {F}'_b - G_b^2 + 1 + 2AF_b \\
\label{eq64}
{G}''_b = 2F_b G_b + H_b {G}'_b + 2A(G_b - 1) - \frac{1}{2}\left( {\Omega _f
/ \Omega _c } \right)^2\\
\label{eq65}
{H}'_b = - 2F_b
\end{eqnarray}
\noindent
These are identical to the equations of Section 3 except for the forcing
term. (Consult equations (\ref{eq34})-(\ref{eq36}).) In the core, on the other hand, we
take $l = W$ and neglect the viscous stresses since $\Omega W^2 / \nu \gg 1$.
The result is
\begin{eqnarray}
\label{eq66}
{H}'_c + 2F_c = 0\\
\label{eq67}
F_c^2 + H_c {F}'_c - G_c^2 + 1 = - 2AF_c \\
\label{eq68}
2F_c G_c + {G}'_c H_c = \frac{1}{2}\left( {\Omega _f / \Omega _c } \right)^2
- 2A\left( {G_c - f_c } \right)
\end{eqnarray}
\noindent
We now introduce the parameter $\varepsilon = \delta _\Omega / W$. Recall
that we consider $Re = \Omega W^2 / \nu$ to be asymptotically large but retain
$A$ as zero or finite. It follows that $\varepsilon \to 0$ as $\nu \to 0$
irrespective of the value of $A$. Now the matching condition on $u_z $ at the
edge of the boundary layer gives
\begin{displaymath}
H_c (z_c \to 0) = \varepsilon H_b (z_b \to \infty ) = \varepsilon (H_b
)_\infty
\end{displaymath}
\noindent
where $z_c = z / W$ and $z_b = z / \delta _\Omega $. It follows that $H_c $
and $F_c $ are both of order $\varepsilon $. We now expand $H_{c}$, $F_{c}$
and $G_{c}$ in powers of $\varepsilon $ and look for solution of (\ref{eq66}) -
(\ref{eq68}). We find that \textbf{u} has the same structure as \textbf{J} in the
core:
\begin{eqnarray}
\label{eq69}
F_c = \frac{\varepsilon }{2}\left( {H_b } \right)_\infty + O(\varepsilon ^2)\\
\label{eq70}
G_c = 1 + O(\varepsilon)\\
\label{eq71}
H_c = \varepsilon (H_b )_\infty \left[ {1 - z / W} \right] + O(\varepsilon ^2)
\end{eqnarray}
\noindent
It follows that the azimuthal equation of motion reduces to
\begin{equation}
\label{eq72}
\varepsilon (H_b )_\infty + 2AW^{ - 1}\int_0^\delta {(1 - G_b )\,dz} =
\frac{1}{2}(\Omega _f / \Omega _c )^2
\end{equation}
\noindent
If we retrace our steps to find the origin of these terms we discover that
(\ref{eq72}) is simply a statement of
\begin{equation}
\label{eq73}
2u_r \Omega _c + \rho ^{ - 1}J_r B = F_\theta
\end{equation}
\noindent
It appears that $F_\theta $ is balanced either by the Coriolis force,
$2{\rm {\bf u}}\times \Omega $, or else the Lorentz force, $\rho ^{ - 1}{\rm
{\bf J}}\times {\rm {\bf B}}$. Thus, as noted in Section 1, the dynamics of
the core is determined by the radial components of ${\rm {\bf u}}_c $ and
${\rm {\bf J}}_c $. These, in turn, depend on the axial flux of current and
mass released by the B\"{o}dewadt-Hartmann layer.
Let us now turn to the boundary-layer equations. From (\ref{eq72}) we see that
$\frac{1}{2}\left( {\Omega _f / \Omega _c } \right)^2$ is of order
$\varepsilon H_b $ and so the forcing term in (\ref{eq64}) is negligible. The
dynamical equations for the boundary therefore reduce to those of Section 3.
It follows, from (\ref{eq51}) and (\ref{eq52}), that,
\begin{displaymath}
(H_b )_\infty = \frac{2R}{1 + R^4},\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;R =
\left[ {A + (1 + A^2)^{1/2}}\right]^{1/2}
\end{displaymath}
\begin{displaymath}
\frac{1}{\delta _\Omega }\int_0^\delta {(1 - G_b )\,dz} = \frac{\hat {R}^2(\hat
{R} - H_\infty / 2)}{1 + \hat {R}^2(\hat {R} - H_\infty / 2)^2}
\,\,\,,\,\,\,\hat {R} = \left[ {\hat {A} + (1 + \hat {A}^2)^{1/2}}
\right]\,^{1 / 2}
\end{displaymath}
\noindent
The azimuthal force balance in the core therefore reduces to
\begin{equation}
\label{eq74}
\underbrace{\frac{2R}{1 + R^4}}_{Coriolis} + \underbrace{2A\frac{\hat {R}^2(\hat {R} - H_\infty / 2)}{1 + \hat
{R}^2(\hat {R} - H_\infty / 2)^2}}_{Lorentz} = \frac{W}{2\delta _\Omega }\left(
{\frac{\Omega _f }{\Omega _c }} \right)^2
\end{equation}
\noindent
When $A$ is small (negligible magnetic field) we have a balance between the
Coriolis force and $F_\theta $, which yields,
\begin{equation}
\label{eq75}
\Omega _c = \frac{\Omega _f }{2^{2/3}}\;\left[ {\frac{\Omega _f W^2}{\nu}} \right]^{1
/3} (A\to 0)
\end{equation}
\noindent
This result was first obtained by \cite{dav92}. In the event that the
Coriolis force is negligible, on the other hand, we find,
\begin{equation}
\label{eq77}
\Omega _c = \frac{\Omega _f^2 W\delta _B }{2\nu}
\end{equation}
\noindent
However, it is unlikely that we can ever reach a situation in which the
Coriolis force is negligible. To see why this is so, we must rewrite (\ref{eq74})
in a way in which (the undetermined) $\Omega _c $ is made more explicit.
Let $A_f = \left( {2\tau \Omega _f } \right)^{ - 1}$, $(Re)_f = \Omega _f
W^2 / \nu$ and $\lambda = \Omega _f \left( {Re} \right)_f^{1/3} / \Omega _c $. Then
our force balance (\ref{eq74}) becomes
our force balance (\ref{eq74}) becomes
\begin{displaymath}
\frac{2R}{1 + R^4} + 2A_f \frac{\hat {R}^2\left( {\hat {R} - H_\infty / 2}
\right)}{1 + \hat {R}^2\left( {\hat {R} - H_\infty / 2} \right)^2}\left(
{Re} \right)_f^{ - 1/3} \lambda = \frac{1}{2}\lambda ^{3/2}
\end{displaymath}
\noindent
We now let $(Re)_f \to \infty $ while retaining $A_{f}$ as finite. The
Lorentz term then goes to zero and we are left with a balance between the
Coriolis force and $F_{\theta }$. The estimate of $\Omega _{c}$ is then
\begin{equation}
\label{eq78}
\Omega _c = \left({\frac{4R}{1 + R^4}}\right)^{-2/3}(Re)_f^{1/3}
\Omega _f \;\;\;\;\; (\mbox{any } A)
\end{equation}
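As a quick numerical sanity check (ours, not part of the original analysis), estimate (\ref{eq78}) can be verified against the Coriolis-dominated form of balance (\ref{eq74}). The Python sketch below assumes $\delta _\Omega = (\nu / \Omega _c )^{1/2}$, uses the definition of $R$ given above, and all function names are our own:

```python
import numpy as np

def R_of(A):
    # R = [A + (1 + A^2)^(1/2)]^(1/2), as defined with eq. (74)
    return np.sqrt(A + np.sqrt(1.0 + A * A))

def omega_c(A, omega_f, W, nu):
    # Eq. (78): core rotation rate when F_theta balances the Coriolis force
    Re_f = omega_f * W**2 / nu
    R = R_of(A)
    return (4.0 * R / (1.0 + R**4)) ** (-2.0 / 3.0) * Re_f ** (1.0 / 3.0) * omega_f

def balance_residual(A, omega_f, W, nu):
    # Residual of the Coriolis-only balance (74):
    # 2R/(1+R^4) = (W / 2 delta_Omega) (omega_f / omega_c)^2
    oc = omega_c(A, omega_f, W, nu)
    delta = np.sqrt(nu / oc)               # delta_Omega = (nu/Omega_c)^(1/2)
    R = R_of(A)
    return 2.0 * R / (1.0 + R**4) - W / (2.0 * delta) * (omega_f / oc) ** 2
```

For $A \to 0$ (where $R = 1$) the expression reduces to (\ref{eq75}), and for any fixed $A$ it leaves the Coriolis-dominated balance with a residual at rounding level.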
\section{Spin-down from Some Initial State}
\indent
It is well-known that Karman-type similarity also extends to unsteady flows
(see, for example, \cite{green69}). It is necessary only to replace the
forcing term, $\frac{1}{2}\Omega _f^2 r$, in the azimuthal equation (\ref{eq56}) by
$- \partial u_\theta / \partial t$. There is
also a deceleration term $- \partial u_r / \partial t$ on the right of (\ref{eq55}). However, this turns out to be negligible by
comparison with the other inertial terms, essentially because the spin-down
time is relatively long. We now repeat all of the steps leading up to (\ref{eq72})
and find,
\begin{equation}
\label{eq79}
\varepsilon (H_b )_\infty + 2AW^{ - 1}\int_0^\delta {(1 - G_b )\,dz} = - \Omega
_c^{ - 2}\, \partial \Omega _c / \partial t
\end{equation}
\noindent
Physically, this represents the balance,
\begin{displaymath}
2u_r \Omega _c + \rho ^{ - 1}J_r B = - \partial u_\theta / \partial t
\end{displaymath}
\noindent
Since we are now considering an initial value problem it is convenient to
introduce $\hat {t} = \Omega _0 t$, $\hat {\Omega } = \Omega / \Omega _0 $,
$\varepsilon _0 = (Re)_0^{ - 1/2} = (\nu / \Omega _0 W^2)^{1/2}$ and
$A_0 = (2\Omega _0 \tau )^{-1}$, where $\Omega _0 = \Omega (t = 0)$. Our force balance can
then be rewritten as,
\begin{equation}
\label{eq80}
\varepsilon _0 \hat {\Omega }^{3/2}(H_b )_\infty + 2A_0 \hat {\Omega }\, W^{ -
1}\int_0^\delta {(1 - G_b )\,dz} = - \partial \hat {\Omega } / \partial \hat
{t}
\end{equation}
\noindent
Substituting now for $(H_b )_\infty $ and $\int{\left( {1 - G_b }
\right)dz} $ using (\ref{eq51}) and (\ref{eq52}) yields,
\begin{equation}
\label{eq81}
\varepsilon _0 \hat {\Omega }^{3/2}
\frac{2R}{1 + R^4} + 2A_0 \varepsilon _0 \hat
{\Omega }^{1/2}\frac{\hat {R}^2(\hat {R} - H_\infty / 2)}{1 +
\hat {R}^2(\hat {R} - H_\infty / 2)^2} = - \frac{\partial \hat {\Omega
}}{\partial \hat {t}}
\end{equation}
\noindent
This is too complicated to integrate analytically in the general case because $R$ and $\hat
{R}$ are themselves functions of $\Omega $. However, we can integrate (\ref{eq81})
for the two extremes of $A_0 \to 0$, $A_0 \to \infty $. When $A_0 = 0$ we
have $R$~=~1 and (\ref{eq81}) simplifies to
\begin{equation}
\label{eq82}
\frac{\partial \hat {\Omega }}{\partial \hat {t}} + \varepsilon _0 \hat
{\Omega }^{3/2} = 0
\end{equation}
\noindent
This integrates to give
\begin{equation}
\label{eq83}
\begin{array}{l}
\Omega / \Omega _0 = \left( {1 + \frac{t}{t_E }} \right)^{ - 2} \\
t_E = \left( {\frac{1}{2}\varepsilon _0 \Omega _0 } \right)^{ - 1} \\
\end{array},
\end{equation}
\noindent
where $t_{E }$ is the typical friction time associated with a linear Ekman
boundary layer. On the other hand, when $A_{0}$ is very
large ($i.e.$ for negligible inertia), $R =
\hat {R} = \left( {2A} \right)^{1/2}$ and (\ref{eq81}) reduces to
\begin{equation}
\label{eq84}
\frac{\partial \hat {\Omega }}{\partial \hat {t}} + \left( {2A_0 \varepsilon
_0^2 } \right)^{1/2}\hat {\Omega } = 0
\end{equation}
\noindent
which yields
\begin{equation}
\label{eq85}
\begin{array}{l}
\Omega = \Omega _0 \exp \left[ { - t / t_H } \right] \\
t_H = \frac{W\delta _B }{\nu } \\
\end{array}
\end{equation}
\noindent
where $t_{H}$ is a time which characterizes Hartmann layer friction.
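The two limiting decay laws (\ref{eq83}) and (\ref{eq85}) can be confirmed by direct numerical integration of (\ref{eq82}) and (\ref{eq84}) in the hatted variables. The following Python sketch is ours, with arbitrarily chosen values of $\varepsilon _0$ and $A_0$, and uses a classical Runge-Kutta step:

```python
import numpy as np

def rk4(f, y0, t_end, n_steps):
    """Classical fourth-order Runge-Kutta for the scalar ODE dy/dt = f(y)."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

eps0 = 0.01
# Non-magnetic limit, eq. (82): algebraic decay, eq. (83)
om_bodewadt = rk4(lambda om: -eps0 * om ** 1.5, 1.0, 200.0, 20000)
# Inertialess limit, eq. (84): exponential Hartmann braking, eq. (85)
A0 = 4.0
rate = np.sqrt(2.0 * A0) * eps0          # (2 A_0 eps_0^2)^(1/2)
om_hartmann = rk4(lambda om: -rate * om, 1.0, 50.0, 2000)
```

In hatted units the algebraic law (\ref{eq83}) reads $\hat {\Omega } = (1 + \varepsilon _0 \hat {t} / 2)^{-2}$, which is what the integration reproduces, while the second case recovers the exponential decay $\exp(-\hat{t}\,(2A_0\varepsilon_0^2)^{1/2})$.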
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.75\textwidth]{decay.eps}
\caption{Free decay of a vortex in a magnetic field. Dash-dotted lines: B\"odewadt
case ($A_{0}=0)$. Solid: numerical solution of (\ref{eq81}) for Elsasser number equal to
{\{}0.00032, 0.001, 0.0032, 0.01, 0.032, 0.1, 0.32, 1, 3.16, 10 {\}}
(stronger decay when $A_{0}$ increases). Dotted lines: linear approximation
(\ref{eq85}) for $A_{0}$ in {\{}0.1, 0.32, 3.16, 10{\}}. The last two dotted lines
cannot be distinguished from the fully non-linear solution.}
\label{decay}
\end{center}
\end{figure}
The general equation (\ref{eq81}) has been solved numerically for values of the
initial Elsasser number in the range $10^{-3} \to 10^3$ (see figure \ref{decay}). The approximation (\ref{eq85})
turns out to be very accurate when $A_{0} \geqslant 1$. Even for an initially dominant
rotation, the decay becomes exponential when $A$ reaches a value of the order of
$1$.
\section{Conclusions}
The first part of this work has provided some weakly non-linear analytical
solutions to the semi-infinite Hartmann-B\"{o}dewadt problem, which provides
a better approximation than the usual Ekman linear approximation. These new
velocity profiles turn out to be quite close to the fully non-linear
numerical solution (though they slightly underestimate the oscillating part
of the profile), which justifies their use in further work. The numerical
results are new as well, and together with the analytical solutions, they
point out that the B\"{o}dewadt-Hartmann layer becomes very close to the
simple Hartmann layer exponential profile as soon as the Elsasser number
reaches a few units.
The second part (sections \textbf{4} and \textbf{5}) of this work tackles
the problem of a forced or free vortex in a confined layer of fluid, in which the
B\"{o}dewadt-Hartmann boundary layer is shown to have the same
dynamics as in the semi-infinite problem. As the core flow directly depends
on the quantities injected by the boundary layer into the core (vertical flow
rate and vertical electric current), the results of the semi-infinite
problem allow us to derive an expression for the core global angular
velocity both in the case of a constant forcing and for the spin-down from
some initial value of the rotation. In the latter case, it is found that
meridional electric currents and secondary flows essentially result in effects
similar to friction, with a characteristic time varying from a linear Ekman
layer characteristic friction time (when rotation dominates electromagnetic
effects) to the Hartmann layer friction time (when electromagnetic
effects dominate rotation).
\textit{Acknowledgements}
The authors would like to thank Ren\'{e} Moreau and Jo\"{e}l
Sommeria for the fruitful discussions on this work.
\bigskip
\section{Introduction}
Deep neural networks have become a widely used model for machine learning, achieving state-of-the-art results on many tasks. The most common task these models are used for is classification, as in the case of convolutional neural networks (CNNs) used to classify images into a semantic category. CNN models are currently considered the standard for visual tasks, allowing far better accuracy than preceding approaches \citep{krizhevsky2012imagenet, he2016deep, szegedy2015going}.
Training NN models and using them for inference requires large amounts of memory and computational resources; thus, an extensive amount of research has been done lately to reduce the size of networks. \citet{han2015deep} used weight sharing and sparsification, and \citet{micikevicius2017mixed} used mixed precision to reduce the size of the neural networks by half. \citet{tai2015convolutional} and \citet{jaderberg2014speeding} used low-rank approximations to speed up NNs.
\citet{hubara2016quantized}, \citet{li2016ternary} and \citet{zhou2016dorefa} used a more aggressive approach, in which weights, activations and gradients were quantized to further reduce computation during training. Although aggressive quantization benefits from a smaller model size, the extreme compression rate comes with a loss of accuracy.
Past work noted the fact that predefined \citep{park1991universal} and random \citep{huang2006extreme} projections can be used together with a learned affine transformation to achieve competitive results on several tasks. In this study we suggest the reverse proposal - that common NN models can learn a useful representation even without modifying the final output layer, which often holds a large number of parameters that grows linearly with the number of classes.
\subsection{Classifiers in convolutional neural networks}
Convolutional neural networks (CNNs) are commonly used to solve a variety of spatial and temporal tasks. CNNs are usually composed of a stack of convolutional parameterized layers, spatial pooling layers and fully connected layers, separated by non-linear activation functions.
Earlier architectures of CNNs \citep{lecun1998gradient, krizhevsky2012imagenet} used a set of fully-connected layers at later stage of the network, presumably to allow classification based on global features of an image. The final classifier can also be replaced with a convolutional layer with output feature maps matching the number of classes, as demonstrated by \citet{springenberg2014striving}.
Despite the enormous number of trainable parameters these layers add to the model, they are known to have a rather marginal impact on the final performance of the network \citep{zeiler2014visualizing} and are easily compressed and reduced after a model has been trained by simple means such as matrix decomposition and sparsification \citep{han2015deep}. Furthermore, modern architecture choices are characterized by the removal of most of the fully connected layers \citep{lin2013network, szegedy2015going,he2016deep}, which was found to lead to better generalization and overall accuracy, together with a huge decrease in the number of trainable parameters.
Additionally, numerous works showed that CNNs can be trained in a metric learning regime \citep{bromley1994signature, schroff2015facenet, hoffer2015deep}, where no explicit classification layer was introduced and the objective regarded only distance measures between intermediate representations. \cite{hardt2016identity} suggested an all-convolutional network variant, where they kept the original initialization of the classification layer fixed with no negative impact on performance on the Cifar10 dataset.
All of these properties provide evidence that fully-connected layers are in fact redundant and play a small role in learning and generalization.
Despite the apparent minor role they play, fully-connected layers are still commonly used as classification layers, transforming from the dimension of network features $N$ to the number of required class categories $C$. Therefore, each classification model must hold $N\cdot C$ trainable parameters, a number that grows linearly with the number of classes. This property still holds when the fully-connected layer is replaced with a convolutional classifier as shown by \citet{springenberg2014striving}.
In this work we claim that for common use-cases of convolutional networks, the parameters used for the final classification transform are completely redundant, and can be replaced with a pre-determined linear transform. As we will show for the first time, this property holds even in large-scale models and classification tasks, such as recent architectures trained on the ImageNet benchmark \citep{deng2009imagenet}.
The use of a fixed transform can, in many cases, allow a huge decrease in model parameters, and a possible computational benefit. We suggest that existing models can, with no other modification, discard their classifier weights, which can help the deployment of those models in devices with low computation ability and smaller memory capacity. Moreover, as we keep the classifier fixed, fewer parameters need to be updated, reducing the communication cost for models deployed in distributed systems. The use of a fixed transform which does not depend on the number of classes can allow models to scale to a large number of possible outputs, without a linear cost in the number of parameters. We also suggest that these findings might shed light on the importance of the preceding non-linear layers to learning and generalization.
\section{Using a fixed classifier}
\subsection{Fully-connected classifiers}
We focus our attention on the final representation obtained by the network (the last hidden layer), before the classifier.
We denote this representation as $x=F(z;\theta)$ where $F$ is assumed to be a deep neural network with input $z$ and parameters $\theta$, e.g., a convolutional network, trained by back-propagation.
In common NN models, this representation is followed by an additional affine transformation $$y=W^Tx+b$$
where $W$ and $b$ are also trained by back-propagation.
For input $x$ of $N$ length, and $C$ different possible outputs,
$W$ is required to be a matrix of $N\times C$.
Training is done using cross-entropy loss, by feeding the network outputs through a softmax activation
\[v_i=\frac{e^{y_i}}{\sum_j^C{e^{y_j}}}, \ i\in \{ 1, \dots ,C\} \] and reducing the expected negative log likelihood with respect to ground-truth target $t \in \{ 1, \dots ,C\}$,
by minimizing
$$\mathcal{L}(x,t) = -\log v_t = -w_t\cdot x - b_t + \log\left(\sum_j^Ce^{w_j\cdot x + b_j}\right)$$
where $w_i$ is the $i$-th column of $W$.
\subsection{Choosing the projection matrix}
To evaluate our conjecture regarding the importance of the final classification transformation, we replaced the trainable parameter matrix $W$ with a fixed orthonormal projection $Q\in \mathbb{R}^{N\times C}$, such that $\forall i\neq j: q_i\cdot q_j=0$ and $\|q_i\|_2=1$, where $q_i$ is the $i$th column of $Q$. This can be ensured by a simple random sampling and singular-value decomposition.
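A fixed $Q$ of this kind can be drawn once at initialization; the following NumPy sketch (our illustration, not code from the paper) assumes $N \geq C$:

```python
import numpy as np

def fixed_orthonormal_classifier(N, C, seed=0):
    """Draw a fixed N x C projection with orthonormal columns,
    i.e. q_i . q_j = 0 for i != j and ||q_i||_2 = 1 (requires N >= C)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((N, C))
    U, _, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD
    return U @ Vt   # nearest matrix with orthonormal columns
```

Because $Q$ is a deterministic function of the seed, it never needs to be stored or communicated as a trained parameter.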
As the rows of the classifier weight matrix are fixed with an equally valued $L_2$ norm, we find it beneficial
to also restrict the representation of $x$ by normalizing it to reside on the $n$-dimensional sphere
\begin{equation}
\hat{x} = \frac{x}{\|x\|_2} \label{eq: normalization}
\end{equation}
This allows faster training and convergence, as the network does not need to account for changes in the scale of its weights.
We now face the problem that $q_i \cdot \hat{x}$ is bounded between $-1$ and $1$.
This causes convergence issues, as the softmax function is scale sensitive,
and the network is affected by the inability to re-scale its input.
This is similar to the phenomenon described by \citet{vaswani2017attention} with respect to softmax function used for attention mechanisms.
In the same spirit, we can amend this issue with a fixed scale $T$ applied to softmax inputs $f(y)=\mathrm{softmax}(\frac{1}{T}y)$, also known as a softmax temperature.
However, this introduces an additional hyper-parameter which may differ between networks and datasets. Instead, we suggest to introduce a single scalar parameter $\alpha$ to learn the softmax scale, effectively functioning as an inverse of the softmax temperature $\frac{1}{T}$.\\
Using normalized weights and an additional scale coefficient is similar in spirit to weight-normalization \citep{salimans2016weight}, with the difference that we use a single scale for all entries in the weight matrix, in contrast to a scale for each row that \cite{salimans2016weight} uses.
We keep the additional vector of bias parameters $b\in \mathbb{R}^C$, and train using the same negative-log-likelihood criterion.
More explicitly, our classifier output is now
\[v_i=\frac{e^{\alpha q_i\cdot\hat{x} + b_i}}{\sum_j^C{e^{\alpha q_j\cdot\hat{x} + b_j}}}, \ i\in \{ 1, \dots ,C\} \]
and we minimize the loss:
$$\mathcal{L}(x,t)=-\alpha q_t\cdot \frac{x}{\|x\|_2} - b_t + \log\left(\sum_{i=1}^C \exp{\left(\alpha q_i\cdot \frac{x}{\|x\|_2} + b_i\right)}\right)$$
where we recall $x$ is the final representation obtained by the network for a specific sample, and $t \in \{1, \dots, C\}$ is the ground-truth label for that sample.
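Putting the normalization, the fixed projection and the learned scale together, the loss above can be sketched as follows (a NumPy illustration with our own naming; in practice this computation would be part of the training graph):

```python
import numpy as np

def fixed_classifier_nll(x, t, Q, alpha, b):
    """Negative log-likelihood of sample features x with ground-truth
    label t, using a fixed projection Q, learned scale alpha and bias b."""
    x_hat = x / np.linalg.norm(x)    # eq. (1): restrict x to the unit sphere
    y = alpha * (Q.T @ x_hat) + b    # scaled logits
    y_shifted = y - y.max()          # shift for numerical stability
    log_softmax = y_shifted - np.log(np.exp(y_shifted).sum())
    return -log_softmax[t]
```

Note that only $\alpha$ and $b$ are trainable here; the $N\cdot C$ entries of $Q$ contribute no learned parameters.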
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{alpha}
\caption{The softmax scale coefficient $\alpha$ was observed to follow a logarithmic growth over the course of training.}
\label{alpha_graph}
\end{figure}
Observing the behavior of the $\alpha$ parameter over time revealed a logarithmic growth, depicted in figure \ref{alpha_graph}.
Interestingly, this is the same behavior exhibited by the norm of a learned classifier, first described by \citet{hoffer2017train} and linked to
the generalization of the network. This was recently explained by the under-review work of \citet{soudry2017implicit} as convergence to a max margin classifier.
We suggest that using a single parameter will enable a simpler examination and possible further exploration of this phenomenon and its implications.
We note that as $-1\leq q_i\cdot \hat{x} \leq 1$, we also found it possible to train the network with a simple cosine angle loss:
\begin{equation*}
\mathcal{L}(\hat{x},t) = \begin{cases}
1 - q_i\cdot \hat{x}, & \text{if } i = t, \\
1 + q_i\cdot \hat{x}, & \text{otherwise}.
\end{cases}
\end{equation*}
allowing us to discard the softmax function and its scale altogether, but resulting in a slight decrease in final validation accuracy compared to the original models.
\subsection{Using a fixed Hadamard matrix}
We further suggest the use of a Hadamard matrix \citep{hedayat1978hadamard} as the final classification transform. A Hadamard matrix $H$ is an $n\times n$ matrix whose entries are all either $+1$ or $-1$. Furthermore, $H$ is orthogonal, such that $HH^T=nI_n$ where $I_n$ is the identity matrix.\\
We can use a truncated Hadamard matrix $\hat{H}\in \{-1,1\}^{C\times N}$ where all $C$ rows are orthogonal as our final classification layer such that $$y=\hat{H}\hat{x} + b$$
This usage allows two main benefits:
\begin{itemize}
\item A deterministic, low-memory and easily generated matrix that can be used to classify.
\item Removal of the need to perform a full matrix-matrix multiplication - as multiplying by a Hadamard matrix can be done by simple sign manipulation and addition.
\end{itemize}
We note that $n$ must be $1$, $2$, or a multiple of $4$, but the matrix can be easily truncated to fit normally defined networks.
We also note the similarity of using a Hadamard matrix as a final classifier to weight binarization methods such as the one suggested by
\citet{courbariaux2015binaryconnect}. As the classifier weights are fixed and require only 1-bit precision, it is now possible to focus our attention on the features preceding it.
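A minimal numpy sketch of the truncated Hadamard classifier (our illustration; the Sylvester construction below covers the power-of-two widths typical of hidden layers):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    assert n > 0 and n & (n - 1) == 0, "Sylvester construction needs n = 2^k"
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

C, N = 10, 64                 # e.g. Cifar10 on a 64-dimensional representation
H_hat = hadamard(N)[:C]       # truncated matrix: keep the first C rows
rng = np.random.default_rng(0)
x_hat = rng.standard_normal(N)
y = H_hat @ x_hat             # +-1 entries: only signed additions are needed
```

Because every entry is $\pm1$, the product $\hat{H}\hat{x}$ reduces to sign flips and additions, and the matrix itself never needs to be stored explicitly.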
\section{Experimental results}
\begin{table}[h!]
\small
\centering{}
\caption{Validation accuracy results on learned vs. fixed classifier}
\label{conv_table:val_accuracy}
\begin{tabular}{lllllll}
\toprule
Network & Dataset & Learned & Fixed & \# Params & \% Fixed params \\
\midrule
Resnet56 \citep{he2016deep} & Cifar10 & 93.03\% & 93.14\% & 855,770 & 0.07\% \\
DenseNet(k=12)\citep{huang2017densely} & Cifar100 & 77.73\% & 77.67\% & 800,032 & 4.2\% \\
Resnet50 \citep{he2016deep} & ImageNet & 75.3\% & 75.3\% & 25,557,032 & 8.01\% \\
DenseNet169\citep{huang2017densely} & ImageNet & 76.2\% & 76\% & 14,149,480 & 11.76\% \\
ShuffleNet\citep{zhang2017shufflenet} & ImageNet & 65.9\% & 65.4\% & 1,826,555 & 52.56\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Cifar10/100}
We used the well known Cifar10 and Cifar100 datasets by \citet{krizhevsky2009learning} as an initial test-bed to explore the idea of a fixed classifier. Cifar10 is an image classification benchmark dataset containing $50,000$ training images and $10,000$ test images. The images are in color and contain $32 \times 32$ pixels. There are 10 possible classes of various animals and vehicles. Cifar100 holds the same number of images of same size, but contains 100 different classes.
We trained a residual network of \citet{he2016deep} on the Cifar10 dataset. We used a network of depth 56 and the same hyper-parameters used in the original work. We compared two variants: the original model with a learned classifier, and our version, where a fixed transformation is used. The results shown in figure \ref{error_graph} demonstrate that although the training error is considerably lower for the network with a learned classifier, both models achieve the same classification accuracy on the validation set. Our conjecture is that with our new fixed parameterization, the network can no longer increase the norm of a given sample's representation, so learning its label requires more effort. As this may happen only for specific seen samples, it affects only the training error.
We also compared using a fixed scale variable $\alpha$ at different values vs. a learned parameter. Results for $\alpha=\{0.1,1,10\}$ are depicted in figure \ref{vary_alpha} for both training and validation error. As can be seen, similar validation accuracy can be obtained using a fixed scale value (in this case $\alpha=1$ or $10$ will suffice), at the expense of an additional hyper-parameter to tune. In all our experiments we opted to train this parameter instead. In all experiments the $\alpha$ scale parameter was regularized with the same weight decay coefficient used on the original classifier.
We then trained a model on the Cifar100 dataset. We used the DenseNet-BC model of \citet{huang2017densely} with a depth of 100 layers and $k=12$, and trained according to the original regime and settings described for this network and dataset. Naturally, the higher number of classes caused the number of classifier parameters to grow, encompassing about $4\%$ of the whole model. Validation accuracy for the fixed-classifier model remained as good as that of the original model, and we continued to observe the same training behavior.
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth,trim={0 0 0 1cm},clip]{train}
\caption{Training error}
\label{error_graph:train}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth,trim={0 0 0 1cm},clip]{val}
\caption{Validation error}
\label{error_graph:val}
\end{subfigure}
\caption{Comparing training and validation error of fixed and learned classifier (ResNet56, Cifar10)}\label{error_graph}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth,trim={0 0 0 1cm},clip]{vary_alpha_train}
\caption{Training error}
\label{vary_alpha:train}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth,trim={0 0 0 1cm},clip]{vary_alpha_val}
\caption{Validation error}
\label{vary_alpha:val}
\end{subfigure}
\caption{Comparing fixed vs. trained variable scale $\alpha$ (ResNet56, Cifar10)}\label{vary_alpha}
\end{figure}
\subsection{Imagenet}
In order to validate our results on a more challenging dataset, we used the Imagenet dataset introduced by \citet{deng2009imagenet}. The Imagenet dataset spans over 1000 visual classes and over 1.2 million samples. CNNs used to classify Imagenet, such as those of \citet{krizhevsky2012imagenet}, \citet{he2016deep}, and \citet{szegedy2016rethinking}, usually have a hidden representation leading to the final classifier of at least 1024 dimensions. This architectural choice, together with the large number of classes, causes the size of the classifier to reach millions of parameters, taking a sizable share of the entire model size.
We evaluated our fixed classifier method on Imagenet using Resnet50 by \citet{he2016deep} with the same training regime and hyper-parameters. By using a fixed classifier, approximately 2-million parameters were removed from the model, accounting for about 8\% of the model parameters. Following the same procedure, we trained a Densenet169 model \citep{huang2017densely} for which a fixed classifier reduced about $12\%$ of the parameters. Similarly to results on Cifar10 dataset, we observed the same convergence speed and approximately the same final accuracy on both the validation and training sets.
Furthermore, we were interested in evaluating more challenging models in which the classifier parameters constitute the majority of the model. For this reason we chose the Shufflenet architecture \citep{zhang2017shufflenet}, which was designed to be used in low-memory and limited-computing platforms. The Shufflenet network contains about 1.8 million parameters, out of which 0.96 million are part of the final classifier. Fixing the classifier resulted in a model with only 0.86 million trained parameters. This model was trained and found, again, to converge to similar validation accuracy as the original.
Interestingly, this method allowed Imagenet training in an under-specified regime, where there are more training samples than parameters. This is an unconventional regime for modern deep networks, which are usually over-specified to have many more parameters than training samples \citep{zhang2017}. Moreover, many recent theoretical results related to neural network training \citep{soudry2017exponentially, xie2016diversity, safran2016quality, soltanolkotabi2017theoretical, soudry2016no}
and even generalization \citep{gunasekar2017implicit, advani2017high, wilson2017marginal} usually assume over-specification.
Table \ref{conv_table:val_accuracy} summarizes our fixed-classifier results on convolutional networks, comparing to originally reported results. We offer our drop-in replacement for learned classifier that can be used to train models with fixed classifiers and replicate our results\footnote{Code is available at \url{https://github.com/eladhoffer/fix_your_classifier}}.
\subsection{Language modeling}
As language modeling requires classification over all tokens available in the task vocabulary, we were interested to see if a fixed classifier can be used here as well, possibly saving a very large number of trainable parameters (vocabulary size can reach tens or even hundreds of thousands of different words).
Recent works have already found empirically that using the same weights for both word embedding and classifier can yield equal or better results than using a separate pair of weights \citep{inan2016tying,press2017using,vaswani2017attention}.
This is consistent with our finding that the linear classifier is largely redundant.
To examine further reduction in the number of parameters, we removed both classifier and embedding weights and replaced them with a fixed transform.
We trained a language model on the WikiText2 dataset described in \citet{merity2016pointer}, using the same setting as \citet{merityRegOpt}.
We used a recurrent model with 2-layers of LSTM \citep{hochreiter1997long} and embedding + hidden size of 512.
As the vocabulary of WikiText2 holds about $33K$ different words, the expected number of parameters in the embedding and classifier is about 34-million. This accounts for about $89\%$ of the $38M$ parameters used for the whole model.
We found that using a random orthogonal transform yielded poor results compared to a learned embedding. We suspect that, in contrast to image classification benchmarks, the embedding layer in language models holds information about word similarities and relations, and thus requires a careful initialization. To test this intuition, we opted to use pre-trained embeddings obtained with the word2vec algorithm of \citet{mikolov2013distributed} or with PMI factorization as suggested by \citet{levy2014neural}. We found that with fixed word2vec embeddings we achieve much better results. Specifically, we use $89\%$ fewer parameters than the fully learned model, and obtain only somewhat worse perplexity.
We argue that this implies a required structure in word embeddings that stems from semantic relatedness between words and the natural imbalance between classes. However, we suggest that with much more cost-effective ways to train word embeddings (e.g., \citet{mikolov2013distributed}), we can narrow the gap and avoid their cost when training bigger models.
\begin{table}[h!]
\centering{}
\caption{Validation perplexity results}
\label{lm_table:val_accuracy}
\begin{tabular}{lllllll}
\toprule
Network & Dataset & Learned & Fixed & \# Params & \% Fixed params \\
\midrule
2-layer LSTM (h=512) & WikiText-2 & 74.1 & 81.2 & 38,312,446 & 88.94\% \\
\bottomrule
\end{tabular}
\end{table}
\section{Discussion}
\subsection{Implications to future DNN models and use cases}
In the last couple of years we have observed rapid growth in the number of classes that benchmark datasets contain, for example: Cifar100 \citep{krizhevsky2009learning}, ImageNet1K, ImageNet22k \citep{deng2009imagenet} and language modeling \citep{merity2016pointer}. The computational demands of the final classifier will therefore increase as well, and should be weighed no less carefully than the choice of architecture.
We use the work by \citet{sun2017revisiting} as our use case, which introduced JFT-300M - an internal Google dataset with over 18K different classes. Using a Resnet50 \citep{he2016deep} with a 2048-dimensional representation, this leads to a final classifier with over 36M parameters.
This means that over $60\%$ of the model parameters reside in the final classification layer.
\citet{sun2017revisiting} further describes the difficulty in distributing this amount of parameters between the training servers, and the need to split them between 50 sub-layers. We also note that the training procedure needs to account for synchronization after each parameter update, which must incur a non-trivial overhead.
Our work can help considerably in this kind of scenario - where using a fixed classifier removes the need to do any gradient synchronization for the final layer. Furthermore, using a Hadamard matrix, we can remove the need to save the transformation altogether, and make it more efficient, allowing considerable memory and computational savings.
\subsection{Possible caveats}
We argue that our method works due to the ability of preceding layers in the network to learn separable representations that are easily classified even when the classifier itself is fixed. This property can be affected when the ratio between the number of learned features and the number of classes is small -- that is, when $C>N$. We experimented with such cases, for example Imagenet classification ($C=1000$) using mobilenet-0.5 \citep{howard2017mobilenets} where $N=512$, or a reduced version of ResNet \citep{he2016deep} where $N=256$. In both scenarios, our method converged similarly to a fully learned classifier, reaching the same final validation accuracy. This strengthens our finding, showing that even in cases in which $C>N$, a fixed classifier can provide equally good results.
Another possible issue may appear when the possible classes are highly correlated. As a fixed orthogonal classifier does not account for this kind of correlation, it may prove hard for the network to learn in this case. This may suggest another reason for the difficulties we experienced in training a language model using an orthogonal fixed classifier, as word classes tend to have highly correlated instances.
\subsection{Future work}
Understanding that linear classifiers used in NN models are largely redundant allows us to consider new approaches in training and understanding these models.
Recent works \citep{neyshabur2017exploring, bartlett2017spectrally} suggested a connection between generalization capabilities of models and various norm-related quantities of their weights. Such results might be potentially simplified in our model, since we have a single scalar variable (i.e., scale), which seems to be the only relevant parameter in the model (since we normalize the last hidden layer, and fix the last weight layer).
The use of fixed classifiers might be further simplified in Binarized Neural Networks \citep{hubara2016binarized}, where the activations and weights are restricted to $\pm1$ during propagations. In this case the norm of the last hidden layer is constant for all samples (equal to the square root of the hidden layer width). This constant can be absorbed into the scale constant $\alpha$, and there is no need in a per-sample normalization as in eq. \ref{eq: normalization}.
We also plan to further explore more efficient ways to learn word embedding, where similar redundancy in classifier weights may suggest simpler forms of token representations - such as low-rank or sparse versions, allowing similar benefits to the fixed transformations we suggested.
\section{Conclusion}
In this work we suggested removing the parameters from the classification layer used in deep neural networks. We showed empirical results suggesting that keeping the classifier fixed causes little or no decline in classification performance for common balanced datasets such as Cifar and Imagenet, while allowing a noticeable reduction in trainable parameters. We argue that fixing the last layer can reduce the computational complexity of training as well as the communication cost in distributed learning. Furthermore, using a Hadamard matrix as the classifier might lead to some computational benefits when properly implemented, and save the memory otherwise spent on a large number of transformation coefficients. As datasets tend to become more complex over time (e.g., Cifar100, ImageNet1K, ImageNet22k, JFT-300M, and language modeling), we believe that the resource-hungry affine transformation should remain fixed during training, at least partially.
We also found that new, efficient methods to create pre-defined word embeddings should be explored,
as embeddings require a huge number of parameters that can possibly be avoided when learning a new task.
Based on these findings, we recommend that future research focus on the representations learned by the non-linear part of neural networks - up to the final classifier, which appears to be highly redundant.
\subsubsection*{Acknowledgments}
The research leading to these results has received funding from the Taub Foundation, and the European Research Council under the European Union's Horizon 2020 Program, ERC Grant agreement no. 682203 ``SpeedInfTradeoff''.
\section{Experimental setup}
\subsection{Description of the apparatus}
\noindent
Our cryogenic nuclear magnetic resonance (NMR) setup is inside a liquid helium (LHe) bath cryostat with a solenoidal superconducting magnet (Cryomagnetics, Inc. Model 90-300-010), Fig.~\ref{fig:setup1}. The apparatus is built around a crystal that is inductively coupled to a pickup probe along one axis, and an excitation probe along an orthogonal axis, both in the plane transverse to the leading magnetic field created by the magnet (Fig. 1(a) in the main text). The experimental setup is used both when measuring pulsed NMR and when performing the axion search.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{S_Fig1.eps}
\caption{Schematic of the setup operating at 4.2~K. Cylindrical PMN-PT crystal is placed close to the center of the superconducting magnet, and it is coupled to the mutually orthogonal excitation and pickup coils. The sample and the pickup probe are mounted inside a cylindrical electromagnetic shielding enclosure within the superconducting magnet bore. The low-noise preamplifier is inside the liquid helium bath, above the superconducting magnet.}
\label{fig:setup1}
\end{figure}
During pulsed NMR calibration measurements, a digital-to-analog converter (DAC) generates a radio frequency (RF) voltage waveform $V_e$, which is coupled into the excitation probe (Fig.~\ref{fig:setup2}). The resulting RF magnetic field exerts a torque on the spins, whose magnitude is quantified by the excitation Rabi frequency $\Omega_e$. The excitation-probe transfer function is defined as
\begin{align}
\kappa = \frac{\Omega_e}{V_e}.
\label{eq:010}
\end{align}
The excitation pulse tilts $^{207}\mathrm{Pb}$ nuclear spins into the plane transverse to the leading field $B_0$, creating a crystal magnetization $M_1$ that rotates at the Larmor frequency. After the excitation pulse ends, this magnetization decays (free induction decay, FID). The magnetization induces an oscillating current in the pickup coil, and voltage $V_1$ at the input of the low-noise preamplifier $A_1$ (Fig.~\ref{fig:setup2}). The pickup probe transfer function is defined as
\begin{align}
\alpha = \frac{V_1}{\mu_0 M_1},
\label{eq:020}
\end{align}
where $\mu_0$ is the permeability of free space.
The preamplifier $A_1$ has gain of $40$. Its output is connected to a low-pass filter $LP_1$ and another amplifier stage $A_2$ (gain $= 15$) mounted inside the cryostat near the top flange. After a third amplifier stage $A_3$ (gain $= 10$) outside the cryostat, the signal is sent to an analog-to-digital converter (ADC).
The excitation signal is routed through a switch $S_1$ (Fig.~\ref{fig:setup2}) that is controlled with a transistor-transistor logic (TTL) pulse with the same duration as the excitation RF pulse. This prevents the DAC output noise from coupling into the pickup probe after the end of the excitation pulse, during FID detection. When the TTL state is high at $5\uu{V}$, the DAC is connected to the excitation probe through amplifier $A_e$, and when the TTL state is low at $0\uu{V}$ the input of $A_e$ is connected to ground via a $50\,\Omega$ termination.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{S_Fig2.eps}
\caption{Full electrical schematic of the experimental setup incorporating the Spectrum Instrumentation m4i.6622-x8 digital-to-analog converter (DAC), RF Lambda coaxial reflective SP2T RFSP2TRDC06G switches ($S_1$), Stanford Research Systems SIM954 inverting amplifiers ($A_e$), coil inductances ($L_c$, $L_e$ and $L_p$), surface mount probe tuning capacitors ($C_{c1}$, $C_{e1}$ and $C_{p1}$), surface mount impedance matching capacitors ($C_{c2}$, $C_{e2}$ and $C_{p2}$), surface mount resistors ($R_{c}$, $R_{e}$ and $R_{p}$) that determine the quality factor of the circuit resonances, ULF-LNA-$\#$159 cryogenic low-noise preamplifier designed and constructed by the Arizona State University group ($A_1$), Mini-Circuits ZX75LP-50-S+ 50 MHz low-pass filter ($LP_1$), Mini-Circuits ZX60-P103LN+ low-noise amplifier ($A_2$), Femto HVA-200M-40-B amplifier ($A_3$), and Spectrum Instrumentation m4i.4421-x8 analog-to-digital converter (ADC).}
\label{fig:setup2}
\end{figure}
\subsection{The crosstalk minimization scheme}
\noindent
During experimental assembly we carefully adjust the orthogonal axes of the excitation and the pickup coils to minimize mutual inductance between them, to $\approx 1.5\%$ of its maximum value for parallel axes.
Despite these efforts, the excitation pulse induces a crosstalk current in the pickup coil, with amplitude and phase depending on the residual inductive and capacitive couplings between the coils. This crosstalk signal saturates the preamplifier, resulting in a recovery time of~$\approx200\uu{\mu s}$, which complicates the detection of the FID signal. In order to prevent saturation, during the excitation pulse we apply a waveform to the cancellation coil that is optimized to compensate the crosstalk current in the pickup probe with minimal effect on spin dynamics. The phase and amplitude of this waveform are optimized by monitoring the current measured at the pickup probe and minimizing its magnitude. This is done at zero leading magnetic field to avoid spin excitation during optimization. We emphasize that only a small ($<1.5\%$) fraction of the excitation pulse RF field couples into the pickup probe and has to be compensated, therefore our compensation scheme has a correspondingly small effect on spin dynamics.
In many room-temperature NMR measurements, preamplifier saturation is prevented by using a transmit/receive (T/R) switch between the pickup probe and the preamplifier. Because our preamplifier is at $4.2\uu{K}$ temperature, we chose to use the compensation scheme discussed above, rather than designing and constructing a cryogenic T/R switch.
\subsection{Tuning and matching of pickup probe, excitation probe, and cancellation coil}
\noindent
Our magnetic resonance probes are designed as series capacitance-tuned tank circuits, Fig.~\ref{fig:setup2}. In these circuits, coil inductance $L$ is in parallel with a tuning capacitor $C_1$ and a resistor $R$, and this tank circuit is in turn in series with a matching capacitor $C_2$.
The total probe impedance is
\begin{align}
Z &= \frac{1}{\frac{1}{i\omega L} + \frac{1}{R} + i\omega C_1} + \frac{1}{i\omega C_2} \nonumber\\
&= \left(\frac{(\omega L)^2 R}{R^2(1-\omega^2 L C_1)^2 + (\omega L)^2}\right) + i \left(\frac{\omega L R^2 (1-\omega^2 L C_1)}{R^2 (1-\omega^2 L C_1)^2 + (\omega L)^2} - \frac{1}{\omega C_2}\right).
\end{align}
In order to match the probe impedance to $Z=R_0=50\uu{\Omega}$ at the resonance angular frequency $\omega_r$, we have to choose the following values for the circuit elements:
\begin{align}
R &= Q \omega_r L \nonumber, \\
C_1 &= \frac{1}{\omega_r^2 L} \left( 1 - \frac{1}{Q}\sqrt{\frac{R-R_0}{R_0}} \right), \\
C_2 &= \frac{1}{\omega_r} \sqrt{\frac{1}{R_0(R-R_0)}}, \nonumber
\end{align}
where $Q$ is the resonance quality factor. We used fixed-value surface mount capacitors and resistors, so the probes are not tunable after the setup is assembled.
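As a numerical cross-check (ours, not part of the experiment control code), the matching expressions above can be verified with a short Python sketch using the pickup-probe values $L_p = 0.5\uu{\mu H}$, $Q_p = 26$, $\omega_p/(2\pi) = 39.71\uu{MHz}$ quoted for this apparatus:

```python
import numpy as np

def match_probe(L, Q, f_r, R0=50.0):
    """Component values that tune the series-matched tank circuit to f_r
    and match it to R0, following the expressions in the text."""
    w = 2 * np.pi * f_r
    R = Q * w * L
    C1 = (1 - np.sqrt((R - R0) / R0) / Q) / (w**2 * L)
    C2 = np.sqrt(1 / (R0 * (R - R0))) / w
    return R, C1, C2

def probe_impedance(f, L, R, C1, C2):
    """Total probe impedance: tank (L || R || C1) in series with C2."""
    w = 2 * np.pi * f
    Z_tank = 1 / (1 / (1j * w * L) + 1 / R + 1j * w * C1)
    return Z_tank + 1 / (1j * w * C2)

# pickup-probe parameters: L_p = 0.5 uH, Q_p = 26, f_p = 39.71 MHz
R, C1, C2 = match_probe(0.5e-6, 26, 39.71e6)
Z = probe_impedance(39.71e6, 0.5e-6, R, C1, C2)   # ~ 50 + 0j Ohm at resonance
```

Plugging the design values for $R$, $C_1$, $C_2$ back into the impedance expression confirms that $Z = 50\,\Omega$ exactly at $\omega_r$, as the derivation requires.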
The pickup coil, with $N_p = 9$ turns of 26 AWG (American Wire Gauge) copper wire, is a solenoid with return-path cancellation. It has a radius $r_p = 3.2\uu{mm}$, an inductance of $L_p = 0.5 \uu{\mu H}$, and is tuned to the resonant frequency $\omega_p/(2\pi) = 39.71 \uu{MHz}$ with quality factor $Q_p = 26$ at $4.2\uu{K}$. The width of the pickup probe resonance limits the frequency range over which we can search for axion-like dark matter without re-tuning the probe; this is why we limited $Q_p$ to 26. The excitation coil has a Helmholtz geometry with $N_e = 2\times3$ turns of $26$ AWG copper wire, radius $r_e = 7.1\uu{mm}$, and inductance $L_e = 0.3\uu{\mu H}$; it is tuned to the resonant frequency $\omega_e/(2\pi) = 42.01 \uu{MHz}$ with quality factor $Q_e = 1.5$ at $4.2\uu{K}$. The cancellation coil is a single-turn loop of radius $r_c=4.8\uu{mm}$ made of $26$ AWG copper wire, with inductance $L_c = 0.01 \uu{\mu H}$; it is tuned to the resonant frequency $\omega_c/(2\pi) = 40.31 \uu{MHz}$ with quality factor $Q_c = 2$ at $4.2\uu{K}$. All the probes are matched to $50\uu{\Omega}$.
\subsection{Estimates of the pickup probe transfer function $\alpha$ and the excitation probe transfer function $\kappa$}\label{sec:D}
\noindent
Based on the electrical schematic described above, let us estimate the values of the transfer functions $\alpha$ and $\kappa$, defined in Eqs.~(\ref{eq:010},\ref{eq:020}). Using Faraday's law we can estimate the voltage induced in the pickup coil by an oscillating transverse magnetization $M_1$:
\begin{align}
V_p \approx \frac{1}{3} (\gamma B_0) N_p (\mu_0M_1) (\pi r_{s}^2),
\end{align}
where $B_0=4.4\uu{T}$ is the leading static magnetic field, $r_s = 2.3\uu{mm}$ is the radius of the sample, and $\gamma$ is the $^{207}$Pb gyromagnetic ratio. We set the demagnetizing factor to $1/3$, as for a sphere, as an approximation for a cylindrical sample with height $\approx$ diameter. For the pickup probe on resonance with the spin Larmor frequency, the resulting voltage at the input of preamplifier $A_1$ is calculated from circuit analysis~\cite{Miller2000}:
\begin{align}
V_1 \approx \frac{V_p}{2} \sqrt{\frac{Q_pR_0}{\omega_p L_p}},
\end{align}
valid when $Q_p \omega_p L_p \gg R_0$. We can therefore estimate the pickup probe transfer function:
\begin{align}
\alpha = \frac{V_1}{\mu_0M_1} \approx \frac{1}{3} (\gamma B_0)N_p (\pi r_{s}^2) \left(\frac{1}{2} \sqrt{\frac{Q_pR_0}{\omega_p L_p}}\right)
\approx 2\times10^{4}\uu{V/T}.
\label{eqn:alpha}
\end{align}
For an excitation voltage $V_e$, referred to the output of the DAC, the current through the excitation coil is calculated from circuit analysis:
\begin{align}
I_e \approx A_e V_e {\sqrt{\frac{Q_e}{R_0 \omega_e L_e}}},
\end{align}
where $|A_e| = 4$ is the gain of the SRS SIM954 amplifier. Note that the SIM954 has an output impedance of $3.3\uu{\Omega}\ll 50\uu{\Omega}$.
The magnetic field produced by this current at the center of the Helmholtz excitation coil is
\begin{align}
B_e = \mu_0 I_e \frac{r_e^2}{\left(\sqrt{(r_e/2)^2+r_e^2}\right)^3} \frac{N_e}{2}.
\end{align}
Assuming the excitation is resonant with the spin Larmor frequency, the Rabi frequency is $\Omega_e = \gamma (B_e/2)$. The factor of $1/2$ arises because only one circular component of the linearly polarized excitation magnetic field $B_e$ is resonant (rotating wave approximation). Therefore the excitation probe transfer function can be estimated as
\begin{align}
\kappa = \frac{\Omega_e}{V_e} \approx \frac{\gamma \mu_0}{2} \left(A_e{\sqrt{\frac{Q_e}{R_0 \omega_e L_e}}}\right) \frac{r_e^2}{\left(\sqrt{(r_e/2)^2+r_e^2}\right)^3} \frac{N_e}{2}
\approx 0.4\uu{rad/(ms\cdot V)} .
\label{eqn:kappa}
\end{align}
Section \ref{simbloch} describes how we used pulsed NMR to measure the values of $\alpha$ and $\kappa$. The proximity of the measured values to the estimates above validates the approximations used when analyzing the apparatus design shown in Fig.~\ref{fig:setup2}.
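The estimate in Eq.~(\ref{eqn:alpha}) can be reproduced numerically; the sketch below (ours, added for convenience) plugs in the component values quoted above, and the $\kappa$ estimate of Eq.~(\ref{eqn:kappa}) follows the same pattern:

```python
import numpy as np

# component and sample values quoted in the text (SI units)
gamma = 2 * np.pi * 9.03e6    # 207Pb gyromagnetic ratio, rad/(s T)
B0 = 4.4                      # leading static field, T
N_p = 9                       # pickup coil turns
r_s = 2.3e-3                  # sample radius, m
Q_p = 26                      # pickup probe quality factor
R0 = 50.0                     # matched impedance, Ohm
w_p = 2 * np.pi * 39.71e6     # pickup probe resonance, rad/s
L_p = 0.5e-6                  # pickup coil inductance, H

# pickup probe transfer function alpha = V1 / (mu0 M1), Eq. for alpha
alpha = (gamma * B0 / 3) * N_p * np.pi * r_s**2 \
        * 0.5 * np.sqrt(Q_p * R0 / (w_p * L_p))   # ~2e4 V/T
```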
\subsection{Shielding to reduce RF interference}
\noindent
The probes are mounted on a G-10 fiberglass cylinder, with a~0.2-mm thick copper sheet wrapped around the outside. The cylinder is positioned inside the magnet bore, Fig.~\ref{fig:setup1}. The copper sheet forms a closed shielding enclosure, together with aluminum end caps on top and bottom. The RG316 coaxial cable between the pickup probe and the low-noise amplifier $A_1$ is shielded with a~0.5-mm thick copper mesh sleeve. Another copper mesh sleeve shields the bundled RG316 coaxial cables that run up to the top flange of the cryostat. Shields are connected to the ground pin of the $A_1$ amplifier used as a common ground. We estimate the RF interference noise reduction factor due to the shields to be on the order of~$10$ within the $1\uu{MHz}$ range centered at~$39.71\uu{MHz}$.
\section{Characterization of the setup with nuclear magnetic resonance}
\subsection{Properties of the $^{207}$Pb spin ensemble}
\noindent
The $^{207}$Pb isotope has nuclear spin $I=1/2$ and gyromagnetic ratio
\begin{align}
\frac{\gamma}{2\pi} = \frac{\mu}{h I} = 9.03\uu{MHz/T},
\end{align}
where $\mu = 0.5926\mu_{\mathrm{N}}$ is the magnetic moment of $^{207}\mathrm{Pb}$ nucleus~\cite{NIST2013}, and the nuclear magneton is $\mu_N/h= 7.6226\uu{MHz/T}$~\cite{CODATA2014}.
The chemical formula of PMN-PT is \chem{(PbMg_{1/3}Nb_{2/3}O_3)_{2/3}-(PbTiO_3)_{1/3}}. The number density of $^{207}$Pb nuclear spins in a PMN-PT crystal is given by:
\begin{align}
n = \frac{\rho}{M} \cdot N_A \cdot 0.221 = 3.4\times10^{27}\uu{m^{-3}},
\end{align}
where $\rho = 8.2\uu{g/cm^{3}}$~\cite{Kochary2007} is the mass density, $M=317.9\uu{g/mole}$~\cite{NIST2013} is the molar mass, and $N_A$ is the Avogadro constant. The natural abundance of $^{207}\mathrm{Pb}$ is $22.1\%$~\cite{NIST2013}.
We perform our experiments in the leading magnetic field $B_0=4.4\uu{T}$ and at temperature $T=4.2\uu{K}$. The equilibrium magnetization of the $^{207}\mathrm{Pb}$ nuclear spin ensemble is given by~\cite{Abragam1961}
\begin{align}
\mu_0M_0 = \mu_0 \frac{n\gamma^2\hbar^2I(I+1)B_0}{3k_BT}=2.9\uu{nT},
\label{eq:M0}
\end{align}
where $k_B$ is the Boltzmann constant, $\mu_0$ is the permeability of free space, and $\hbar$ is the reduced Planck constant.
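Both numbers can be reproduced with a few lines of Python (a cross-check we add for convenience, using standard CODATA constant values):

```python
import numpy as np

# physical constants (CODATA, SI)
N_A = 6.02214076e23      # Avogadro constant, 1/mol
k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s
mu0 = 4e-7 * np.pi       # vacuum permeability, T m/A

# PMN-PT sample parameters from the text
rho = 8.2e6                      # mass density, g/m^3
M_molar = 317.9                  # molar mass, g/mol
abundance = 0.221                # natural abundance of 207Pb
gamma = 2 * np.pi * 9.03e6       # gyromagnetic ratio, rad/(s T)
B0, T, I = 4.4, 4.2, 0.5         # field (T), temperature (K), spin

n = rho / M_molar * N_A * abundance                  # ~3.4e27 m^-3
mu0_M0 = mu0 * n * gamma**2 * hbar**2 * I * (I + 1) * B0 / (3 * k_B * T)  # ~2.9 nT
```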
We model the NMR excitation spectrum as a super-Gaussian distribution of order 2, given by
\begin{align}
f(\nu) = \frac{6.33}{\Gamma}\exp{\left(-\left[\frac{2(\nu-\nu_0)}{\Gamma/(2\pi)}\right]^4\ln{2}\right)},
\label{eqn:gamma}
\end{align}
where $\Gamma/(2\pi)$ is the full-width at half-maximum, $\nu$ is the excitation frequency, and $\nu_0$ is the center of the distribution.
The scaling pre-factor is chosen to ensure that the area under the distribution is normalized to 1.
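The normalization and width of this line shape can be checked numerically; a short sketch (ours, with an illustrative $65\uu{kHz}$ width matching the excitation bandwidth used later):

```python
import numpy as np

def super_gaussian(nu, nu0, Gamma):
    """Order-2 super-Gaussian excitation spectrum defined above;
    Gamma is the angular FWHM, so Gamma/(2*pi) is the FWHM in Hz."""
    w = Gamma / (2 * np.pi)
    return (6.33 / Gamma) * np.exp(-(2 * (nu - nu0) / w) ** 4 * np.log(2))

Gamma = 2 * np.pi * 65e3                    # ~65 kHz wide spectrum (illustrative)
nu = np.linspace(-3.25e5, 3.25e5, 200001)   # +-5 FWHM around the center, Hz
dnu = nu[1] - nu[0]
area = super_gaussian(nu, 0.0, Gamma).sum() * dnu   # ~1 (unit normalization)
```

The numerical integral confirms that the pre-factor $6.33/\Gamma$ normalizes the area to 1, and that the function falls to half its peak at $\nu - \nu_0 = \pm\Gamma/(4\pi)$.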
\subsection{Saturation-recovery measurements of the relaxation time $T_1$}\label{sec:satur}
\noindent
We use the standard NMR saturation recovery scheme to measure the $T_1$ relaxation time of the $^{207}$Pb nuclear spin ensemble in PMN-PT at $4.2\uu{K}$. Each measurement begins with a saturation step, comprising $100$ consecutive repetitions of a sequence with $101$ pulses whose carrier frequencies vary across the width of the excitation spectrum from $39.66\uu{MHz}$ to $39.76\uu{MHz}$, and whose Rabi frequencies are fixed at $0.88\uu{rad/ms}$. Each pulse duration is $0.8\uu{ms}$, and the pulse spacing is $1.4\uu{ms}$. Bloch-equation simulations confirm that this step saturates the spin ensemble, Fig.~\ref{fig:saturation}.
The saturation step is followed by a variable recovery wait time $t$, after which a pulsed NMR measurement is performed, with spin FID recorded after excitation pulses of $20\uu{ms}$ duration and $180\uu{ms}$ repetition time. The dependence of the FID amplitude on recovery time $t$ is modeled as an exponential $1-e^{-t/T_1}$. The best-fit value for the population relaxation time is $T_1=(25.8\pm 0.6)\uu{min}$.
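The recovery fit can be illustrated on synthetic data (our sketch; the noise level and sampling grid are illustrative assumptions, not the measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, A0, T1):
    """Saturation-recovery model for the FID amplitude."""
    return A0 * (1.0 - np.exp(-t / T1))

# synthetic data: true T1 = 25.8 min with 2% amplitude noise (illustrative)
rng = np.random.default_rng(1)
t = np.linspace(1.0, 120.0, 25)                  # recovery wait times, min
A = recovery(t, 1.0, 25.8) + 0.02 * rng.standard_normal(t.size)

# least-squares fit of the exponential recovery model
(A0_fit, T1_fit), _ = curve_fit(recovery, t, A, p0=(1.0, 10.0))
```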
\begin{figure}[h]
\includegraphics[width=0.35\textwidth]{S_Fig3.eps}
\caption{Saturation of the spin-ensemble excitation spectrum by the sequence described in section~\ref{sec:satur}. Bloch equation simulations are performed as described in section~\ref{simbloch}. The initial spin excitation spectrum is shown by the blue dashed line. The spectrum immediately after the saturation step is shown by the orange solid line.}
\label{fig:saturation}
\end{figure}
\subsection{Spin-dynamics simulations with Bloch equations}\label{simbloch}
\noindent
We use the Bloch equations to quantitatively describe the magnetic resonance dynamics of the $^{207}$Pb nuclear spin ensemble~\cite{Bloch1946,Abragam1961}. We choose the direction of the z-axis to be along the static magnetic field $B_0$. The linearly-polarized excitation magnetic field $B_e = (2\Omega_e/\gamma) \cos{(\omega_1t)}$ is applied in the $x$-direction. In the reference frame that rotates at the angular frequency $\omega_1$ around the leading magnetic field, the Bloch equations read
\begin{align}
\frac{\d\tilde{M}_x}{\d t} &= - \frac{\tilde{M}_x}{T_2} + \Delta\omega\tilde{M}_y ,\nonumber\\
\frac{\d\tilde{M}_y}{\d t} &= - \Delta\omega\tilde{M}_x - \frac{\tilde{M}_y}{T_2} - \Omega_e M_z ,\\
\frac{\d M_z}{\d t} &= \Omega_e \tilde{M}_y -\frac{M_z - M'_0}{T_1} ,\nonumber
\end{align}
where
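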
$\Delta\omega=\omega_1-\omega_0$
is the detuning of the rotating-frame frequency $\omega_1$ from the spin Larmor frequency $\omega_0$, $M'_0$ is the initial ensemble magnetization, $T_2$ is the transverse spin coherence time, and $\tilde{M}_{x,y}$ are the transverse spin magnetization components in the rotating frame. The transformation between magnetization in the laboratory and the rotating frames is
\begin{align}
M_x = \tilde{M}_{x} \cos(\omega_1t) - \tilde{M}_{y} \sin(\omega_1t) ,\nonumber\\
M_y = \tilde{M}_{x} \sin(\omega_1t) + \tilde{M}_{y} \cos(\omega_1t),
\end{align}
where in the lab frame $\vec{M} = M_x\hat{x} + M_y \hat{y} + M_z \hat{z}$.
We numerically solve the Bloch equations using the Runge-Kutta method. The inhomogeneously-broadened spin ensemble is represented by $3251$ spins, with their Larmor frequencies uniformly distributed in an excitation bandwidth of $65\uu{kHz}$ with $0.02\uu{kHz}$ spacing. We simulate the dynamics of each spin independently, and add their contributions to obtain the total magnetization.
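The ensemble simulation can be sketched as follows (a minimal version, assuming normalized magnetization; for speed it uses a 2 ms pulse and 41 spins instead of the 3251 used in the full simulation):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from this section (times in ms, angular frequencies in rad/ms)
T1 = 25.8 * 60.0e3   # relaxation time, ms
T2 = 16.7            # coherence time, ms
Omega_e = 0.88       # excitation Rabi frequency, rad/ms
t_p = 2.0            # pulse duration, ms
M0 = 1.0             # equilibrium longitudinal magnetization (normalized)

def bloch_rhs(t, M, dw):
    """Rotating-frame Bloch equations for one spin with detuning dw (rad/ms)."""
    Mx, My, Mz = M
    return [-Mx / T2 + dw * My,
            -dw * Mx - My / T2 - Omega_e * Mz,
            Omega_e * My - (Mz - M0) / T1]

# Inhomogeneously broadened ensemble: Larmor frequencies spread over 65 kHz
detunings = 2.0 * np.pi * np.linspace(-32.5, 32.5, 41)  # kHz -> rad/ms

M_end = np.zeros(3)
for dw in detunings:
    sol = solve_ivp(bloch_rhs, (0.0, t_p), [0.0, 0.0, M0],
                    args=(dw,), rtol=1e-8, atol=1e-10)
    M_end += sol.y[:, -1]
M_end /= len(detunings)  # ensemble-averaged magnetization after the pulse
```

Each spin evolves independently; only spins within roughly a Rabi frequency of resonance are tipped appreciably, so the ensemble-averaged longitudinal magnetization is only slightly reduced.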
The simulation parameters are the spin coherence time $T_2$, and the transfer functions $\alpha$ and $\kappa$, defined in Sec.~\ref{sec:D}. We perform fits to experimental FID spectra, shown in Fig. 2(a) of the main text and Fig.~\ref{fig:nmr}, by varying the values of these parameters to achieve the minimum value of the goodness-of-fit parameter $\chi^2 = \chi^2_1+\chi^2_2+\chi^2_3$, where the subscript enumerates the measurements with different pulse duration $t_p = 0.2\uu{ms},\,2\uu{ms},\,20\uu{ms}$. For each measurement $i=1,2,3$
\begin{align}
\chi_i^2 = \sum_{\nu} \left[\operatorname{Re}\left(F_{\text{exp}}[\nu]-F_{\text{sim}}[\nu]\right)^2 + \operatorname{Im}\left(F_{\text{exp}}[\nu]-F_{\text{sim}}[\nu]\right)^2\right],
\end{align}
where $F_{\text{exp}}$ is the Fourier transform of the experimentally detected voltage and $F_{\text{sim}}$ is the Fourier transform of simulation results, converted into voltage using the transfer coefficient $\alpha$, and the index $\nu$ labels discrete frequency points within the window shown in Fig. 2(a) of the main text and Fig.~\ref{fig:nmr}. The real part of the Fourier transform corresponds to the in-phase quadrature, and the imaginary part corresponds to the out-of-phase quadrature of the FID, relative to the carrier phase of the excitation pulse.
The excitation pulses induce probe ringing with a time constant of $\approx 500\uu{ns}$; we therefore use the FID response data starting at $5\uu{\mu s}$ after the end of an excitation pulse. To improve the signal-to-noise ratio, we average the recorded FID responses of several consecutive excitation pulses: 10 data sets for $t_p=0.2\uu{ms}$, and 4 data sets each for $t_p=2\uu{ms}$ and $t_p=20\uu{ms}$. After the discrete Fourier transform, data points are binned along the frequency axis: 4 points per bin for $t_p=0.2\uu{ms}$, and 2 points per bin for $t_p=2\uu{ms}$ and $t_p=20\uu{ms}$. The error bars shown in Fig. 2(a) of the main text and Fig.~\ref{fig:nmr} are the standard deviation of the points within each bin.
The spin ensemble was saturated before every FID measurement, and the FID measurements started after a wait time $\approx T_1$ after saturation.
Therefore the initial magnetization at the start of every FID measurement was $\mu_0M'_0 = (0.67\pm0.05)\mu_0M_0 = (1.9\pm0.2)\uu{nT}$, where $M_0$ is the thermal equilibrium ensemble magnetization given by Eq.~(\ref{eq:M0}).
\begin{figure}[h]
\includegraphics[width=0.6\textwidth]{S_Fig4.eps}
\caption{Measurements of $^{207}$Pb FID spectra following a spin excitation pulse of length $t_p$, as indicated in the panels. We fit simultaneously the in-phase (blue) and out-of-phase (orange) components of the Fourier transforms of the averaged FID from three data sets: excitation pulse duration $t_p=20\uu{ms}$, shown in Fig. 2(a) of the main text; (a) $t_p=0.2\uu{ms}$; and (b) $t_p=2\uu{ms}$. Data points were binned, and the error bars show one standard deviation in each bin.
The lines show the best-fit simulation of the spin response, with the light-colored narrow bands indicating the range of simulation results if parameters $T_2$, $\kappa$, and $\alpha$ are varied by one standard deviation away from their best-fit values. We note that there is a sharp central feature with linewidth on the order of the Rabi frequency, visible in the FID spectrum shown in Fig. 2(a) of main text. Simulations show that the shapes of the FID Fourier spectra depend on the interplay between the excitation-pulse spectrum, the distribution of tipping angles across the spin ensemble, and the $T_2$ coherence time.
}
\label{fig:nmr}
\end{figure}
Using the measurements shown in Fig. 2(a) in the main text and Fig.~\ref{fig:nmr}, we extract the best-fit parameter values:
\begin{align}
T_2 &= (16.7\pm0.9)\uu{ms},\nonumber\\
\kappa &= (0.352\pm0.007)\uu{rad/(ms \cdot V)},\\
\alpha &= (2.3\pm0.2)\times10^4\uu{V/T}.\nonumber
\end{align}
The uncertainties are evaluated by bootstrapping: the frequency-domain data are down-sampled into 16 groups, and the fit is performed independently on each data group; the uncertainty is given by the standard deviation of the best-fit parameter values.
The proximity of the best-fit values of transfer parameters $\alpha$ and $\kappa$ to the estimates in eqns.~(\ref{eqn:alpha}) and~(\ref{eqn:kappa}) validates the analysis of the apparatus design in Sec.~\ref{sec:D}.
\subsection{NMR response as a function of the Rabi frequency $\Omega_e$}
\noindent
In order to confirm the validity of our NMR model in the limit of small spin-tip angles, we record and analyze FID data for a range of excitation Rabi frequencies $\Omega_e$. For these measurements we keep the excitation pulse width at $20\uu{ms}$ -- approximately the coherence time of an axion-like dark matter field with Compton frequency near $40\uu{MHz}$.
We vary the Rabi frequency from $0.02\uu{rad/ms}$ to $0.88\uu{rad/ms}$. At each Rabi frequency, we apply 100 consecutive excitation pulses, spaced by $180\uu{ms}$. After each pulse, we sample the FID voltage, starting $5\uu{\mu s}$ after the end of the pulse and lasting $16.4\uu{\mu s}$.
We average the 100 FID data sets, and calculate the discrete Fourier transform $F[n]$ of the averaged FID, where index $n$ labels frequency points. Since we only sample the beginning of the FID, before it can start to decay, we model it as a sinusoidal signal at the excitation carrier frequency. We extract the amplitude of the spin ensemble transverse magnetization by numerically integrating the power spectrum $|F[n]|^2$ over a $400\uu{kHz}$-wide frequency band centered at the excitation carrier frequency, and using the pickup probe transfer function $\alpha$ to convert the voltage to magnetization. Uncertainties are calculated using bootstrapping: we group the 100 FID data sets into 5 sets of 20 and perform analysis on these 5 sets independently. Error bars are set at the standard deviation of the results for these 5 sets. To obtain the theory curve in Fig.~2(c) of main text, we use our Bloch equation model to generate numerical time-domain FID data, which we analyze in the same way as we analyze experimental data.
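The power-spectrum integration can be illustrated with a short self-contained sketch (synthetic sinusoidal FID at an illustrative carrier frequency and amplitude; the band-integration and Parseval bookkeeping mirror the procedure described above):

```python
import numpy as np

fs = 250e6                    # ADC sampling rate, S/s
T_rec = 16.4e-6               # FID record duration, s
n = int(round(fs * T_rec))    # number of samples
t = np.arange(n) / fs

# Synthetic FID stand-in: a sinusoid at the excitation carrier frequency
f_c = 39.71e6                 # Hz (illustrative)
V0 = 1.0e-3                   # V (illustrative)
v = V0 * np.cos(2.0 * np.pi * f_c * t)

F = np.fft.rfft(v) / n
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Integrate the one-sided power spectrum over a 400 kHz band at the carrier;
# the factor 2 accounts for the discarded negative-frequency half
band = np.abs(freqs - f_c) < 200e3
power = 2.0 * np.sum(np.abs(F[band]) ** 2)   # mean-square voltage in band

# A sinusoid of amplitude V0 has mean-square voltage V0^2/2
V_est = np.sqrt(2.0 * power)
```

The recovered amplitude differs from $V_0$ only by the small spectral-leakage power that falls outside the integration band.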
\section{Spectral properties of the CW NMR response}
\noindent
Under CW excitation with Rabi frequency $\Omega_e$ and carrier angular frequency $\omega_1$, the steady-state transverse magnetization of an unsaturated homogeneously-broadened spin ensemble is given by~\cite{Abragam1961}
\begin{align}
M_1 = L(\omega_0-\omega_1)M_0\Omega_e T_2\cos{(\omega_1 t)},
\label{eq:910}
\end{align}
where $M_0$ is the longitudinal magnetization, $T_2$ is the transverse coherence time, $\omega_0$ is the Larmor angular frequency, and $L$ is the Lorentzian lineshape function:
\begin{align}
L(\omega_0-\omega_1)=\frac{1}{1+(\omega_0-\omega_1)^2T_2^2}.
\label{eq:920}
\end{align}
Let us describe the spin ensemble inhomogeneous broadening with the excitation lineshape $h(\omega_0+\Delta)$, normalized such that
\begin{align}
\int_{-\infty}^{\infty}h(\omega_0+\Delta)\,d\Delta=1.
\label{eq:930}
\end{align}
Under CW excitation, the steady-state transverse magnetization is then
\begin{align}
M_1 = uM_0\Omega_eT_2\cos{(\omega_1 t)},
\label{eq:940}
\end{align}
where the spectral $u$ factor is given by the integral over the lineshape:
\begin{align}
u = \int_{-\infty}^{\infty}L(\omega_0+\Delta-\omega_1)h(\omega_0+\Delta)\,d\Delta.
\label{eq:950}
\end{align}
Let us estimate the value of $u$.
Our NMR measurements indicate that the excitation spectrum is much broader than $1/T_2$, so we can approximate the Lorentzian by a delta function: $L(\omega_0+\Delta-\omega_1)\approx(\pi/T_2)\delta(\omega_0+\Delta-\omega_1)$. Furthermore, we approximate the excitation spectrum as a rectangular function, centered at $\omega_0$, with full width $\Gamma$ and height $1/\Gamma$. Then, provided $|\omega_0-\omega_1|<\Gamma/2$, we can approximate
\begin{align}
u\approx \pi h(\omega_1)/T_2\approx \pi/(\Gamma T_2).
\label{eq:960}
\end{align}
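Plugging in the measured values reproduces the estimate numerically:

```python
import numpy as np

T2 = 16.7e-3                 # s, transverse coherence time
Gamma = 2.0 * np.pi * 78e3   # rad/s, full width of the excitation spectrum
u = np.pi / (Gamma * T2)     # rectangular-lineshape estimate
print(u)                     # ~ 3.8e-4
```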
In order to more accurately determine $u$, we solved the Bloch equations with the experimentally-determined values $T_2 = (16.7 \pm 0.9)\uu{ms}$ and excitation spectrum with $\Gamma/(2\pi) = (78 \pm 2)\uu{kHz}$ (Fig. 2(b) in the main text). We obtained
\begin{align}
u = (3.8 \pm 0.3)\times 10^{-4},
\label{eq:970}
\end{align}
in agreement with the estimate in Eq.~(\ref{eq:960}).
The correction due to spin saturation by axion-like dark matter depends on the experimental sensitivity to the drive strength $\Omega_e$. Our signal detection threshold corresponds to $\Omega_e = 0.23\uu{rad/s}$, which corresponds to a 30\% correction to the value of the magnetization in Eq.~\eqref{eq:940}~\cite{Castner1959}. This correction was used for all our axion-like dark matter limits.
\section{Ferroelectric polarization of PMN-PT}
\noindent
We polarize the ferroelectric PMN-PT crystal by applying a voltage across its faces at room temperature. To ensure good electrical contact, we paint the faces with graphite paint, which is removed after polarization. We connect the crystal to the Trek model 610E-G-CE high-voltage amplifier as shown in Fig.~\ref{fig:ferrorun}(a). The amplifier measures the applied voltage and the current through the sample. In order to measure the ferroelectric hysteresis loop, we apply triangular voltage ramps with alternating polarities, Fig.~\ref{fig:ferrorun}(b). Current spikes are visible when the applied voltage is sufficient to reverse the ferroelectric polarization.
In this experimental run the crystal started with a remanent polarization corresponding to positive polarity, so there is no current spike during the first ramp.
We obtain the sample polarization by integrating the current:
\begin{align}
P(t) = \frac{q(t)}{\pi r_s^2} = \frac{1}{\pi r_s^2}\int_{0}^{t}{I(t') \d t'} ,
\end{align}
where $q(t)$ is the electric charge on the crystal surface and $r_s=2.3\uu{mm}$ is the base radius of the cylindrical sample. The hysteresis loop shown in Fig. 2(d) of the main text is the plot of polarization as a function of applied voltage. The remanent polarization $P_r$ persists after the voltage has been ramped down to zero. We verified that the remanent polarization does not decay after thermal cycling of the sample.
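The cumulative current integration can be sketched as follows (the current trace here is an illustrative single reversal spike, not measured data):

```python
import numpy as np

r_s = 2.3e-3                     # m, base radius of the cylindrical sample
area = np.pi * r_s ** 2

# Illustrative current trace: one polarization-reversal spike
t = np.linspace(0.0, 10.0, 2001)                       # s
I = 2.0e-6 * np.exp(-0.5 * ((t - 5.0) / 0.3) ** 2)     # A

# Cumulative trapezoidal integration of the current gives the surface
# charge q(t); dividing by the electrode area gives P(t) = q(t)/(pi r_s^2)
q = np.concatenate(([0.0], np.cumsum(0.5 * (I[1:] + I[:-1]) * np.diff(t))))
P = q / area                     # C/m^2; multiply by 1e2 for uC/cm^2
```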
\begin{figure}[h]
\includegraphics[width=0.9\textwidth]{S_Fig5.eps}
\caption{(a) Ferroelectric polarization setup, showing the signal generator controlling the high-voltage amplifier (HVA) that is connected to the electrodes in contact with the sample. The TREK Model 610E high-voltage supply/amplifier/controller houses the HVA, as well as an ammeter $A$, a unity-gain buffer amplifier $A_1$, and a voltmeter $V$. (b) The voltage applied at the output of the HVA is measured at the voltmeter $V$ and converted to the voltage across the sample (blue dashed line). The current measured at the ammeter $A$ is plotted as the solid orange line.}
\label{fig:ferrorun}
\end{figure}
\section{Nuclear spin dynamics due to the EDM interaction with axion-like dark matter}
\subsection{P,T-odd axion-like dark matter physics}
\noindent
Axion-like cold dark matter behaves as a classical field: $a(t)=a_0\cos{(\omega_a t)}$, where
$\omega_a \approx m_ac^2/\hbar$.
If the axion-field energy density dominates dark matter, then $\rho_{\text{DM}}=m_a^2a_0^2/2 \approx 3.6\times 10^{-42}\uu{GeV^4}$~\cite{PDG2019}. In the QCD Lagrangian, this gives rise to an oscillating $\theta$ angle:
\begin{align}
\theta(t) = \frac{a}{f_a}=\frac{a_0}{f_a}\cos{(\omega_a t)}.
\label{eq:100}
\end{align}
Let us consider the nucleon EDM induced by axion-like dark matter:
\begin{align}
d_n = g_d a = 2.4\times 10^{-16}\,\theta \uu{e\cdot cm} = 2.4\times 10^{-3}\,\theta \uu{e\cdot fm},
\label{eq:110}
\end{align}
calculated with 40\% accuracy~\cite{Pospelov1999b,Graham2013}. Here $g_d$ is the EDM coupling constant~\cite{Graham2013}, introduced in the Lagrangian term:
\begin{align}
\mathcal{L}_{\text{EDM}} = -\frac{i}{2}g_da\bar{\Psi}_N\sigma_{\mu\nu}\gamma_5\Psi_NF^{\mu\nu},
\label{eq:120}
\end{align}
where $\Psi_N$ is the nucleon wavefunction, $F^{\mu\nu}$ is the electromagnetic field tensor, and $\sigma$ and $\gamma$ are the standard Dirac matrices.
From eqs.~(\ref{eq:100},\ref{eq:110}) we get the relationship between $g_d$ and $f_a$:
\begin{align}
g_d = \frac{2.4\times 10^{-16}\uu{e\cdot cm}}{f_a} = \frac{3.6\times 10^{-3}\uu{GeV^{-1}}}{f_a},
\label{eq:130}
\end{align}
where we used the natural unit conversions: $1\uu{cm}=5\times10^{13}\uu{GeV^{-1}}$ and $e=0.303$.
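The conversion is easy to check numerically:

```python
# Natural-units bookkeeping for the g_d -- f_a relation above
e_nat = 0.303          # elementary charge in natural units
cm_in_inv_GeV = 5e13   # 1 cm expressed in GeV^-1
coeff = 2.4e-16 * e_nat * cm_in_inv_GeV   # numerator of g_d, in GeV^-1
print(coeff)           # ~ 3.6e-3
```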
For the QCD axion, the decay constant is related to its mass:
\begin{align}
m_a = 6\times10^{-10}\uu{eV}\,\left(\frac{10^{16}\uu{GeV}}{f_a}\right),
\label{eq:140}
\end{align}
but for a generic ALP there is no such connection.
\subsection{Nuclear Schiff moments induced by the EDM coupling of axion-like dark matter}
\noindent
The nuclear Schiff moment~\cite{Schiff1963,Sandars1967,Sushkov1984,Flambaum2002} is defined as:
\begin{align}
\bm{S} = \frac{e}{10}\left( \langle r^2\bm{r}\rangle - \frac{5}{3Z}\langle r^2\rangle\langle\bm{r}\rangle \right),
\label{eq:200}
\end{align}
where $e$ is the elementary electric charge, $Z$ is the atomic number, and $\langle r^k \rangle = \int r^k\rho(\bm{r})d^3r$ are the integrals over nuclear charge density $\rho(\bm{r})$.
The Schiff moment sources the P- and T-odd electrostatic potential
\begin{align}
\varphi(\bm{r}) = 4\pi(\bm{S}\cdot\nabla)\delta(\bm{r}).
\label{eq:205}
\end{align}
Importantly, the definition of the Schiff moment in Ref.~\cite{Khriplovich1997} differs from this one by a factor of $4\pi$. We adopt the definition in Eq.~(\ref{eq:200}), noting the factor of $4\pi$ wherever we refer to Ref.~\cite{Khriplovich1997}.
The Schiff moment can be induced by a permanent EDM of a nucleon, or by P,T-odd nuclear forces~\cite{Khriplovich1997}. The contribution of P,T-odd nuclear forces is larger than the contribution of nucleon EDM~\cite{Sushkov1984}. Let us consider the two contributions separately, in the case of the $^{207}$Pb nucleus, whose ground state is $I^{\pi}=1/2^-$, having a neutron $3p_{1/2}$ hole in a closed-shell magic nucleus.
\subsubsection{Nuclear Schiff moments induced by nucleon EDM}
\noindent
This contribution is due to non-coincident densities of nuclear charge and dipole moment. It can be estimated for $^{207}$Pb using Eq.~(8.76) in Ref.~\cite{Khriplovich1997}:
\begin{align}
4\pi S_{\mathrm{EDM}} \approx d_n\times \frac{4\pi}{25}\frac{(K+1)I}{I(I+1)}r_0^2,
\label{eq:210}
\end{align}
where $K=(\ell-I)(2I+1)=1$ and $r_0=1.25A^{1/3}\uu{fm}=7.4\uu{fm}$. This estimate gives $S_{\mathrm{EDM}} \approx d_n\times 3\uu{fm^2}$.
More detailed calculations~\cite{Dzuba2002,Dmitriev2003a} give the result:
\begin{align}
S_{\mathrm{EDM}} = d_n\times 1.9\uu{fm^2}.
\label{eq:220}
\end{align}
In order to connect this to QCD axion physics, we use Eq.~(\ref{eq:110}):
\begin{align}
S_{\mathrm{EDM}} = g_d a\times 1.9\uu{fm^2} = 5\times 10^{-3}\,\theta \uu{e\cdot fm^3}.
\label{eq:240}
\end{align}
\subsubsection{Nuclear Schiff moments induced by P,T-odd nuclear forces}
\noindent
The P,T-odd nuclear interaction of a non-relativistic nucleon with nuclear core is parametrized by strength $\eta$~\cite{Sushkov1984}:
\begin{align}
W = \frac{G_F}{\sqrt{2}}\frac{\eta}{2m}\bm{\sigma}\cdot\nabla \rho(\bm{r}),
\label{eq:250}
\end{align}
where $G_F\approx 10^{-5}\uu{GeV^{-2}}$ is the Fermi constant, $m$ is the nucleon mass, $\bm{\sigma}$ is its spin, and $ \rho(\bm{r})$ is the density of core nucleons. A vacuum $\theta$ angle gives rise to this interaction via the P,T-odd pion-nucleon coupling constant~\cite{Khriplovich1997,Flambaum2014}:
\begin{align}
\eta = 1.8\times 10^6\,\theta.
\label{eq:260}
\end{align}
Next we need to calculate the nuclear Schiff moment induced by the interaction~(\ref{eq:250}). Reference~\cite{Sushkov1984} states that the Schiff moment is suppressed by a factor $\sim 10$ for nuclei with a valence neutron, compared to a valence proton, and that only core polarization leads to a non-zero effect. For example, the Schiff moment of $^{201}$Hg is estimated there as $0.2\times 10^{-8}\eta\uu{e\cdot fm^3}$. However, in Ref.~\cite{Flambaum1986} it was realized that virtual excitations in the core eliminate this suppression, so that the results for a valence neutron and a valence proton should in fact be comparable. In that work the Schiff moment of $^{201}$Hg is estimated as $2.4\times 10^{-8}\eta\uu{e\cdot fm^3}$, and the Schiff moment of $^{199}$Hg as $-1.4\times 10^{-8}\eta\uu{e\cdot fm^3}$.
The issue is complicated by nuclear many-body effects. These were numerically calculated for $^{199}$Hg in Refs.~\cite{Dmitriev2003,Ban2010}, giving a factor $\sim10$ reduction in the Schiff moment. However, the physical origin of such a strong reduction is not clear. The only effect not included in the shell model that could change the value of the Schiff moment is the collective nuclear octupole deformation, and, if anything, that should increase the Schiff moment. Reference~\cite{Yanase2020} gives a result for $^{199}$Hg that is $\sim10\%$ away from the shell-model estimate. These authors attribute the Schiff moment suppression in Ref.~\cite{Ban2010} to mixing with the $J^{\pi}=1/2^-_2$ state, for which they obtain a small Schiff moment value. However, this small value is itself questionable. The state is an admixture of a soft quadrupole phonon ($J=2$) to the ground state, still resulting in $J=1/2$. The excited states do not have this quadrupole deformation, so the overlap matrix elements are likely to be small unless many excited states are carefully taken into account. This suggests that the calculation may have large intrinsic uncertainties.
Importantly, $^{207}$Pb is close to a magic nucleus, which means that many-body effects should not play an important role here. Therefore, until the many-body effects can be better understood, for $^{207}$Pb we retain the single-particle estimate of Ref.~\cite{Flambaum1986}:
\begin{align}
S_{\eta} = 2\times 10^{-8}\eta\uu{e\cdot fm^3}=0.04\,\theta\uu{e\cdot fm^3}.
\label{eq:270}
\end{align}
Note that this is a factor of eight larger than the result (13) in Ref.~\cite{Flambaum2020a}, where the $^{207}$Pb Schiff moment was taken to be the same as for the many-body suppressed $^{199}$Hg.
We can also see that this contribution is a factor of eight larger than the EDM contribution in Eq.~(\ref{eq:240}). We therefore neglect the EDM contribution and use the estimate in Eq.~(\ref{eq:270}).
Similar estimates were performed for $^{199}$Hg in Ref.~\cite{Stadnik2014}.
\subsection{Nuclear Schiff moment-induced spin energy shift in ferroelectric PMN-PT}
\noindent
The energy shift of each nuclear spin sublevel of a \chem{^{207}Pb^{2+}} ion in ferroelectric \chem{PbTiO_3} is estimated in Refs.~\cite{Mukhamedjanov2005,Ludlow2013}. The result of the full quantum chemistry calculation~\cite{Skripnikov2016} is:
\begin{align}
\Delta\epsilon = 1.04\times 10^6\frac{x}{0.58\,\mbox{\AA}}\frac{S}{ea_B^3}\uu{[eV]}=1.2\times 10^{-8}\frac{x}{\mbox{\AA}}\frac{S}{\mathrm{e\cdot fm^3}}\uu{[eV]},
\label{eq:300}
\end{align}
where $x$ is the displacement of the \chem{Pb^{2+}} ion with respect to the center of the oxygen cage, $S$ is the magnitude of the Schiff moment of the $^{207}$Pb nucleus, and $a_B=0.53\uu{\AA}$ is the Bohr radius. The nuclear spin is $I=1/2$; each of the two nuclear spin states shifts by this amount, in opposite directions. Since $\theta$ and $S$ exhibit sinusoidal time dependence, the experimentally relevant quantity is the Rabi angular frequency:
\begin{align}
\Omega_a = \frac{1}{2}\frac{2\Delta\epsilon}{\hbar} = 1.8\times 10^7\frac{x}{\mbox{\AA}}\frac{S}{\mathrm{e\cdot fm^3}}\uu{[rad/s]},
\label{eq:310}
\end{align}
where we used $\hbar = 6.58\times 10^{-16}\uu{eV\cdot s}$. We note that the spin driving field is ``linearly polarized'', and therefore the Rabi frequency contains an extra factor of $1/2$, which arises because only one of the two counter-rotating components of the linearly polarized drive is resonant (rotating wave approximation).
Density functional theory calculations for PMN-PT give the Pb$^{2+}$ cation displacement from the center of the oxygen cage: $x_0=0.39$~\AA, and the average polarization: $P_0=55\uu{\mu C/cm^2}$~\cite{Grinberg2004}. Our experiment was performed with the crystal polarization $P_r=22\uu{\mu C/cm^2}$, therefore we scale the average displacement to $x=0.16$~\AA.
For $^{207}$Pb in ferroelectric PMN-PT we can use Eqs.~(\ref{eq:270},\ref{eq:300}) and $x = 0.16$~\AA~to get:
\begin{align}
\Delta\epsilon &= 8\times 10^{-11}\,\theta\uu{[eV]}, \\
\Omega_a &= 1.2\times 10^5\,\theta\uu{[rad/s]}.
\label{eq:320}
\end{align}
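The arithmetic behind these numbers can be verified directly (all quantities per unit $\theta$; the variable names are ours):

```python
S_theta = 0.04        # e.fm^3 per unit theta, Schiff moment estimate
x_disp = 0.16         # Angstrom, scaled Pb2+ displacement
hbar = 6.58e-16       # eV.s

# Energy shift and drive Rabi frequency per unit theta;
# the (1/2)*(2*eps/hbar) combination reduces to eps/hbar
eps_theta = 1.2e-8 * x_disp * S_theta    # eV
Omega_theta = eps_theta / hbar           # rad/s
print(eps_theta, Omega_theta)
```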
To connect with the EDM $d_n$ and the coupling constant $g_d$, we use Eqs.~(\ref{eq:100},\ref{eq:110},\ref{eq:130}). For the energy shift we obtain
\begin{align}
\Delta\epsilon = d_n\uu{[e\cdot cm]}\times 3.4\times10^5\uu{[V/cm]}.
\label{eq:325}
\end{align}
We can extract the effective electric field (which includes the Schiff screening factor~\cite{Budker2014}):
\begin{align}
E^*=\Delta\epsilon/d_n=340\uu{[kV/cm]}.
\label{eq:326}
\end{align}
For the drive Rabi frequency we obtain:
\begin{align}
\Omega_a &= 1.2\times 10^5\frac{g_da_0}{3.6\times 10^{-3}\uu{[GeV^{-1}]}}\uu{[rad/s]} ,\\
\hbar\Omega_a &= 2.2\times 10^{-17}(g_da_0)\uu{[GeV]},
\label{eq:330}
\end{align}
where $g_d$ is in GeV$^{-2}$ and $a_0=\sqrt{2\rho_{\text{DM}}}/m_a$ is in GeV.
Let us introduce the sensitivity factor $\xi$, defined as $\hbar\Omega_a=\xi g_da_0$. Its estimated value is therefore
\begin{align}
\xi = 2.2\times 10^{-17}\uu{[GeV^2]}.
\label{eq:332}
\end{align}
There are several contributions to the theoretical uncertainty in $E^*$ and $\xi$. The uncertainty of the QCD calculations is $\approx 40\%$~\cite{Pospelov1999b,Graham2013}. The uncertainty of the solid-state calculation of the nuclear spin energy shift due to the Schiff moment is $\approx 30\%$~\cite{Mukhamedjanov2005,Ludlow2013,Skripnikov2016}. Therefore we estimate the total theoretical uncertainty in $E^*$ and $\xi$ at $\approx 50\%$.
\section{Nuclear spin dynamics due to the gradient interaction with axion-like dark matter}
\noindent
The non-relativistic Hamiltonian for the gradient interaction of spin $\vec{I}$ with axion-like dark matter field $a(\vec{r},t)$ is
\begin{align}
H_{\text{aNN}} = g_{\text{aNN}}\vec{\nabla}a(\vec{r},t)\cdot\vec{I},
\label{eq:335}
\end{align}
where $g_{\text{aNN}}$ is the coupling strength measured in units of GeV$^{-1}$, and we use natural units, $\hbar=c=1$~\cite{Graham2013,Garcon2019b}. In the first approximation we can write the axion-like dark matter field as
\begin{align}
a(\vec{r},t)\approx a_0\cos{(\omega_at-\vec{k}\cdot\vec{r})},
\label{eq:340}
\end{align}
where the field amplitude $a_0$ is fixed by the assumption that it dominates the dark matter energy density: $\rho_{\text{DM}} = m_a^2 a_0^2 / 2 = 3.6 \times 10^{-42}~\text{GeV}^4$~\cite{PDG2019, Graham2013}.
We approximate the instantaneous value of the gradient $\vec{\nabla}a\approx m_a\vec{v}a$, where $\vec{v}$ is the instantaneous value of the velocity of the ALP field in the laboratory frame.
The Hamiltonian in natural units becomes:
\begin{align}
H_{\text{aNN}} =(g_{\text{aNN}}a_0)m_a\vec{v}\cdot\vec{I}\cos{(\omega_at)}.
\label{eq:345}
\end{align}
The product $g_{\text{aNN}}a_0$ is dimensionless, so we can restore the values of fundamental constants by dimensional analysis:
\begin{align}
H_{\text{aNN}} =(g_{\text{aNN}}a_0)m_ac^2\frac{\vec{v}}{c}\cdot\vec{I}\cos{(\omega_at)}.
\label{eq:350}
\end{align}
This interaction exerts a torque on nuclear spins, with the drive Rabi frequency given by
\begin{align}
\hbar\Omega_a =\frac{1}{2}(g_{\text{aNN}}a_0)m_ac^2\frac{v_{\perp}}{c},
\label{eq:355}
\end{align}
where $v_{\perp}$ is the component of the velocity perpendicular to the direction of the leading field $B_0$.
As in the previous section, the spin driving field is ``linearly polarized'', and therefore the Rabi frequency contains an extra factor of $1/2$, which arises because only one of the two counter-rotating components of the linearly polarized drive is resonant (rotating wave approximation).
\section{Spectral properties of the spin response due to axion-like dark matter}
\noindent
In the first approximation we assume that the axion-like dark matter field is coherent, and drives the $^{207}$Pb nuclear spins at carrier angular frequency $\omega_a$ with Rabi frequency $\Omega_a$.
The steady-state transverse spin magnetization that develops under the action of this driving field is given by Eq.~(1) of the main text. The resulting voltage recorded by the ADC is:
\begin{align}
V_a(t) = \alpha\mu_0M_a = \alpha u\mu_0M_0\Omega_aT_2\cos{(\omega_at)}.
\label{eq:360}
\end{align}
The time-averaged power in this signal is
\begin{align}
\langle V_a^2\rangle = \frac{1}{2}(\alpha u\mu_0M_0\Omega_aT_2)^2.
\label{eq:331}
\end{align}
Note that we use the term ``power'' in the signal processing context, and this is proportional to the physical power.
The Galactic axion-like dark matter halo field $a(t)$ is not perfectly coherent.
In this work we search for the axion-like dark matter halo that follows the standard halo model~\cite{Turner1990,Evans2019}. In this model the ALP speeds $v$ in the Galactic frame follow the Maxwell-Boltzmann distribution
\begin{align}
f_{\text{gal}}(v) = \frac{4v^2}{\sqrt{\pi} v_0^3} e^{-v^2 / v_0^2},
\end{align}
where $v_0 \approx 220~\text{km}/\text{s}$ is the most probable speed~\cite{Evans2019}.
The laboratory frame moves relative to the Galactic frame at the average speed $v_{\text{lab}} \approx 232~\text{km}/\text{s}$, which has annual and daily modulations due to, respectively, Earth's revolution about the Sun and Earth's rotation around its axis~\cite{Foster2018}.
The distribution of ALP speeds broadens the Fourier spectrum of the ALP field $a(t)$, giving it a characteristic linewidth $\approx v_0^2\nu_a/c^2\approx 10^{-6}\nu_a$.
The power spectrum of the ALP field $a(t)$ is given by the function
\begin{align}
f_0(\nu) = \frac{2c^2}{\sqrt{\pi} v_0 v_{\text{lab}} \nu_a} \exp{\left(-\frac{2c^2}{v_0^2} \frac{\nu - \nu_a}{\nu_a} - \frac{v_{\text{lab}}^2}{v_0^2}\right)} \sinh{\beta}, \label{eq:500}
\end{align}
where
\begin{align}
\beta = \frac{2c v_{\text{lab}}}{v_0^2} \sqrt{\frac{2(\nu - \nu_a)}{\nu_a}}.
\end{align}
This spectral function is normalized so that
\begin{align}
\int_{\nu_a}^{\infty}f_0(\nu) \,d\nu = 1.
\label{eq:555}
\end{align}
This is the spectral lineshape used in searches for ALP-photon interactions~\cite{Du2018,Brubaker2017,Ouellet2019,Gramolin2020a}.
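The lineshape and its normalization can be checked numerically (an example Compton frequency of $40\uu{MHz}$ is assumed):

```python
import numpy as np

c = 2.998e5      # km/s
v0 = 220.0       # km/s, most probable Galactic-frame speed
vlab = 232.0     # km/s, lab speed relative to the Galactic frame
nu_a = 40.0e6    # Hz, example ALP Compton frequency

def f0(nu):
    """Standard-halo-model ALP field lineshape defined above."""
    x = 2.0 * (nu - nu_a) / nu_a
    beta = (2.0 * c * vlab / v0 ** 2) * np.sqrt(x)
    pref = 2.0 * c ** 2 / (np.sqrt(np.pi) * v0 * vlab * nu_a)
    return pref * np.exp(-(c / v0) ** 2 * x - (vlab / v0) ** 2) * np.sinh(beta)

# Numerical normalization check: integrate over ~30 linewidths above nu_a
nu = np.linspace(nu_a, nu_a * (1.0 + 30.0 * (v0 / c) ** 2), 20001)
y = f0(nu)
norm = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(nu))   # trapezoidal rule
```

The integral converges to unity because $f_0$ is the boosted Maxwell-Boltzmann speed distribution re-expressed in frequency via $\nu = \nu_a(1+v^2/2c^2)$.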
\subsection{EDM search}
\noindent
In our search for the ALP EDM interaction, the recorded voltage $V_{\text{EDM}}$ is directly proportional to the ALP field $a(t)$. Therefore the voltage power spectral density will have the same spectral shape $f_0(\nu)$. We make use of Parseval's theorem to ensure that the time-averaged power, Eq.~(\ref{eq:331}), matches the integral of the Fourier power spectrum, with the lineshape normalized as in Eq.~(\ref{eq:555}). The result is the expression for the voltage power spectrum:
\begin{align}
V_{\text{EDM}}^2(\nu) = \frac{1}{2}(\alpha u\mu_0M_0\Omega_aT_2)^2f_0(\nu)
= \frac{1}{2}(\alpha u\mu_0M_0T_2)^2\left(\frac{\xi g_d a_0}{\hbar}\right)^2f_0(\nu).
\label{eq:330}
\end{align}
\subsection{Gradient search}
\noindent
In our search for the ALP gradient interaction, the recorded voltage $V_{\text{gr}}$ is proportional to the gradient of the ALP field, which includes the velocity of the ALP field in the lab frame. Therefore the voltage power spectrum has a different form:
\begin{align}
f_1(\nu) = \frac{2c^2}{v_0^2 + v_{\text{lab}}^2 \sin^2{\zeta}} \frac{\nu - \nu_a}{\nu_a} \left[\sin^2{\zeta} + \frac{1}{\beta} \left(\coth{\beta} - \frac{1}{\beta}\right) \left(2 - 3\sin^2{\zeta}\right)\right] f_0(\nu),
\label{eq:600}
\end{align}
where $\zeta$~is the angle between the vectors $\mathbf{B}_0$ and $\mathbf{v}_{\text{lab}}$. This spectral function is normalized so that
\begin{align}
\int_{\nu_a}^{\infty}f_1(\nu) \,d\nu = 1.
\end{align}
A detailed analysis of the ALP velocity distribution in the laboratory reference frame, resulting in the ALP gradient spectral line shape~(\ref{eq:600}), will be published elsewhere.
Again, making use of Parseval's theorem to ensure that the time-averaged power equals the integral of the Fourier power spectrum, we write the expression for the voltage power spectrum:
\begin{align}
V_{\text{gr}}^2(\nu) = \frac{1}{2}(\alpha u\mu_0M_0T_2)^2\left(\frac{g_{\text{aNN}} a_0 m_ac^2}{2\hbar}\right)^2\frac{v_0^2 + v_{\text{lab}}^2 \sin^2{\zeta}}{c^2}f_1(\nu).
\end{align}
\section{Data acquisition and analysis for the axion-like dark matter search}
\noindent
The experimental search for axion-like dark matter took place on October 7, 2019.
We varied the static magnetic field to sweep the spin Larmor frequency from $40.16\uu{MHz}$ down to $39.16\uu{MHz}$, through $21$ values spaced by $50\uu{kHz}$. This corresponds to magnetic fields between $B_0=4.45\uu{T}$ and $4.35\uu{T}$. During the recording of data sensitive to axion-like dark matter, the superconducting magnet is in persistent mode with the power supply turned off, and the excitation probe and cancellation coil are terminated with a $50\,\Omega$ resistor.
Data are recorded at the ADC sampling rate of $250\uu{MS/s}$ and saved to a hard drive using the first-in, first-out mode of the ADC. At each value of $B_0$ we record $58\uu{s}$ of ``scan'' data, immediately followed by $58\uu{s}$ of ``re-scan'' data, analyzed as described below.
During the search we perform three pulsed NMR calibrations, at the first, the last, and the middle values of the magnetic field. Each calibration consists of FID data taken at five different excitation carrier frequencies near the corresponding Larmor frequency, Fig.~\ref{fig:axion_nmr}.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{S_Fig6.eps}
\caption{NMR calibration at the three values of the bias field $B_0$. FID data are recorded after excitation pulses at Rabi frequency $\Omega_e=0.88\uu{rad/ms}$ and pulse length $20\uu{ms}$. The excitation carrier frequency is plotted on the x-axis. Following the procedure used to obtain Fig. 2(b) in the main text, results are normalized so that the integral of the spectrum is unity.
The error bars show one standard deviation uncertainties of the FID spectrum fits, performed as described in section~\ref{simbloch}.
Each spectrum is modeled as a super-Gaussian of order 2 (Eq.~(\ref{eqn:gamma})) with constant width $78\uu{kHz}$ (orange line). The only free parameter is the central frequency.
The best-fit values of the central frequency for the three calibration data sets are: $\nu_0=(39159 \pm 1)\uu{kHz},\,\,(39708 \pm 1)\uu{kHz},\,\,(40160 \pm 2)\uu{kHz}$.
}
\label{fig:axion_nmr}
\end{figure}
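The calibration spectra above are modeled with a super-Gaussian of order 2. A minimal sketch of such a lineshape is given below; the exact parameterization of Eq.~(\ref{eqn:gamma}) and the role of the $78\uu{kHz}$ width parameter are assumptions here, not taken from the text.

```python
import numpy as np

def super_gaussian(nu, nu0, width, order=2):
    # Super-Gaussian of the given order; order=1 recovers an ordinary
    # Gaussian. This parameterization is an assumption, not Eq. (gamma).
    return np.exp(-(((nu - nu0) ** 2) / (2.0 * width ** 2)) ** order)

nu = np.linspace(39000.0, 39320.0, 641)          # frequency grid in kHz
model = super_gaussian(nu, nu0=39159.0, width=78.0)
model /= model.sum() * (nu[1] - nu[0])           # normalize to unit integral
peak = nu[np.argmax(model)]
print(peak)                                      # grid point nearest nu0
```

The unit-integral normalization mirrors the convention used for the spectra in Fig.~\ref{fig:axion_nmr}.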
The data sensitive to axion-like dark matter are analyzed using Matlab on the Shared Computing Cluster, which is administered by Boston University’s Research Computing Services. The data-processing procedure consists of the following steps.
\begin{enumerate}[label=(\arabic*)]
\item Divide data at each value of the magnetic field $B_0$ into 27 blocks. Each block contains $2^{29}$ points and corresponds to $2.15\uu{s}$ of data.
The block duration exceeds the axion-like dark matter coherence time of $\approx 25\uu{ms}$ in the standard halo model.
\item Perform the discrete Fourier transform on each data block without any windowing function, to obtain the spectral density $F[\nu]$.
Select the analysis frequency range between $39.1\uu{MHz}$ and $40.2\uu{MHz}$.
The real and imaginary parts of the spectral density are Gaussian-distributed; therefore, the power spectral density $|F[\nu]|^2=\operatorname{Re}(F[\nu])^2 + \operatorname{Im}(F[\nu])^2$ follows a chi-square distribution with two degrees of freedom. Then, average the $|F[\nu]|^2$ obtained from the $27$ blocks, histogram the result, fit it to a chi-square distribution, and confirm that it has $54$ degrees of freedom. Repeat this for the scan (and, separately, re-scan) data at each resonant frequency.
\item \label{item:SG} Search for narrow RF interference spectral lines using the Savitzky-Golay filter with order 2 and length 31~\cite{Brubaker2017a}.
Spectral lines narrower than the ALP linewidth are distinguished by the difference between the filtered and raw power spectral densities. The points where this difference is above a threshold are marked as narrow spectral lines and are assigned the average value of their neighboring points.
\item \label{item:lineshape} Optimally-filter data by convolving the power spectral density with the spectral lineshape for the ALP EDM interaction $f_0(\nu)$ given in Eq.~(\ref{eq:500}). The separation between distinct ALP search frequencies is set to the ALP signal linewidth $3(v_0^2/c^2)\nu_0/4$, where $\nu_0$ is the central Larmor frequency, determined by the value of the bias field $B_0$~\cite{Brubaker2017a,Foster2018}.
\item Model the histogram of the optimally-filtered power spectral density with $100$ bins as the Gaussian distribution with mean $\mu$ and standard deviation $\sigma$. Calculate the detection threshold at $\mu+3.355\sigma$, corresponding to a $5\sigma$ detection with $95\%$ confidence level. Points above the threshold are ALP detection candidates. A detailed explanation of the choice of threshold value can be found in Refs.~\cite{Brubaker2017a,Gramolin2020a}.
\end{enumerate}
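As an illustrative sketch of the optimal filtering and thresholding in steps (4) and (5), the following Python code applies a matched-filter correlation and a $\mu+3.355\sigma$ threshold to simulated data. The lineshape kernel, noise model, and numerical values are placeholders; they are not the experimental $f_0(\nu)$ or the actual Matlab analysis code.

```python
import numpy as np

def optimally_filter(psd, kernel):
    """Correlate the averaged PSD with a normalized signal template.
    Correlation with a matched template maximizes SNR for that lineshape."""
    k = kernel / kernel.sum()
    return np.correlate(psd, k, mode="valid")

def detection_candidates(filtered, n_sigma=3.355):
    """Flag points above mu + n_sigma * sigma of the filtered distribution."""
    mu, sigma = filtered.mean(), filtered.std(ddof=1)
    threshold = mu + n_sigma * sigma
    return np.flatnonzero(filtered > threshold), threshold

# Toy data: an averaged PSD (chi-square with 54 dof, scaled to unit mean)
# with one injected line; the Gaussian kernel stands in for f0(nu).
rng = np.random.default_rng(0)
psd = rng.chisquare(df=54, size=5000) / 54
kernel = np.exp(-np.arange(-10, 11) ** 2 / 8.0)
psd[2500:2521] += 5.0 * kernel                 # injected signal
filtered = optimally_filter(psd, kernel)
candidates, threshold = detection_candidates(filtered)
print(len(candidates) > 0)
```

With a template matched to the injected lineshape, the injected line stands well above the threshold in the filtered spectrum.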
This analysis process is repeated for data taken at each of the $21$ settings of bias magnetic field $B_0$ in the scan. The spin response to an axion-like dark matter signal will only appear in the data set where $B_0$ is such that the ALP Compton frequency is within the magnetic resonance excitation spectrum.
For each data set we use the $80\uu{kHz}$ frequency band centered at the Larmor frequency $\nu_0$, corresponding to the excitation spectrum, to search for the ALP signal, as described above. The rest of the spectral data within the $1\uu{MHz}$ scan range are used to reject residual background RF interference, which is not eliminated by the Savitzky-Golay filter.
In addition, re-scan measurements are analyzed to eliminate statistical fluctuations, which are expected, given the large bandwidth of our search (look-elsewhere effect). The analysis procedure is as follows.
\begin{enumerate}[label=(\alph*)]
\item At each value of bias magnetic field we consider $\approx 5000$ frequency points (independent values of the ALP Compton frequency). For Gaussian-distributed data we expect two points to be above the $3.355\sigma$ threshold. Typically we obtain $\approx 30$ candidates above the threshold. The excess candidates are due to RF interference.
\item We compare candidate frequencies from the ``resonant'' data set (for which the frequency is within the excitation spectrum) to the candidate frequencies from the ``background'' data sets (for which the frequency is outside the excitation spectrum). If the candidate frequency appears in one of the background data sets, it is rejected as RF interference. On average this eliminates $\approx 28$ candidates at each value of $B_0$.
\item We compare candidate frequencies from the scan and re-scan data sets. If a candidate frequency appears only in one of those data sets, it is rejected as a statistical fluctuation. On average this eliminates $\approx 2$ candidates at each value of $B_0$.
\end{enumerate}
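The rejection steps (a)-(c) above amount to simple set operations on lists of candidate frequencies; a minimal sketch with hypothetical frequencies (not experimental values):

```python
def reject_candidates(resonant, backgrounds, rescan):
    """Keep only candidates that (i) do not appear in any background
    data set, rejecting RF interference, and (ii) appear in both the
    scan and the re-scan, rejecting statistical fluctuations."""
    background = set().union(*backgrounds)
    surviving = {f for f in resonant if f not in background}
    surviving &= set(rescan)
    return sorted(surviving)

# Hypothetical candidate frequencies in kHz.
scan_resonant = [39120.1, 39155.7, 39180.3]
background_sets = [[39120.1, 40010.0], [39999.9]]
rescan_resonant = [39155.7, 39170.0]
print(reject_candidates(scan_resonant, background_sets, rescan_resonant))
# → [39155.7]
```

Here $39120.1\uu{kHz}$ is vetoed as interference (it also appears in a background set), and $39180.3\uu{kHz}$ is vetoed as a statistical fluctuation (absent from the re-scan).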
This analysis procedure rejects all candidates above the $3.355\sigma$ threshold at all values of $B_0$. We do not detect an axion-like dark matter signal.
Therefore, for each value of $B_0$, we quote the $g_d$ coupling value that corresponds to the $5\sigma$ value of the power spectral density as the 95\% confidence interval limit~\cite{Gramolin2020a}.
\begin{figure}[h]
\includegraphics[width=0.35\textwidth]{S_Fig7.eps}
\caption{Angle $\zeta$ between the bias magnetic field $\mathbf{B}_0$ and the laboratory velocity vector $\mathbf{v}_{\text{lab}}$ as a function of time offset from the start of the experimental search for axion-like dark matter at 18:41 UTC on October 7, 2019 to 00:30 UTC on October 8, 2019. Stars indicate the times at which data are recorded for different values of $B_0$. The magnitude of the laboratory velocity is $v_{\text{lab}} = 226.5 \pm 0.5~\text{km}/\text{s}$ for the entire duration of data taking. The velocity $\mathbf{v}_{\text{lab}}$ is calculated for the Physics Department at Boston University ($42\degree20'53.8"\mathrm{N}, 71\degree06\mathrm{'}01.8"\mathrm{W}$) using the Python code~\cite{tassle2020} based on the Astropy library~\cite{astropy:2013,astropy:2018}.}
\label{fig:zeta}
\end{figure}
We search for the gradient coupling $g_{\text{aNN}}$ of axion-like dark matter using the same steps as described above, with the standard halo model lineshape in step~\ref{item:lineshape} replaced by the gradient coupling lineshape $f_1(\nu)$, given in Eq.~(\ref{eq:600}). We calculate the angle $\zeta$ at each value of $B_0$ during the scan, based on the coordinates of our laboratory and the time at which the data are recorded, Fig.~\ref{fig:zeta}.
Our analysis for the gradient coupling $g_{\text{aNN}}$ rejects all candidates above the $3.355\sigma$ threshold at all values of $B_0$. Therefore, for each value of $B_0$, we quote the $g_{\text{aNN}}$ coupling value that corresponds to the $5\sigma$ value of the power spectral density as the 95\% confidence interval limit. We note that the variation in $\zeta$ throughout the scan means that the shape of the limit curves for $g_d$ and for $g_{\text{aNN}}$ is slightly different in Fig.~4(b) of the main text, however this difference is smaller than the line thickness on the logarithmic plot.
\subsection{Testing the data analysis procedure by injecting ALP signals}
\noindent
We test our data-analysis procedure by injecting into the experimental spectra synthetic axion-like dark matter signals with the lineshape given by Eq.~(\ref{eq:500}). Figure \ref{fig:axion_inj}(a) shows the spectrum with an injected signal at Compton frequency $\nu_a=39.1586\uu{MHz}$ and with magnetic field PSD of $2.6\uu{fT^2/Hz}$. After optimal filtering, the injected signal shows up as a candidate with amplitude $101\uu{fT^2}$, as shown in Fig.~\ref{fig:axion_inj}(b). The histogram of the optimally-filtered data points shows that this injected signal is detected at $20\sigma$ significance, Fig.~\ref{fig:axion_inj}(c).
We test the recovery of the coupling strength by injecting 10 simulated signals, whose coupling strength is varied between $g_d = 7.0\times10^{-4}\uu{GeV^{-2}}$ and $g_d = 7.0\times10^{-3}\uu{GeV^{-2}}$ and whose Compton frequencies are selected randomly between $\nu_a = 39.1185\uu{MHz}$ and $\nu_a = 39.1985\uu{MHz}$. The coupling strengths recovered from detected signals are shown in Fig.~\ref{fig:axion_inj}(d). We find that, on average, our analysis procedure results in a $(2.7 \pm 0.8) \%$ suppression in the recovered coupling strength. This is due to the discrete sampling of the ALP search frequencies. If the injected ALP frequency falls between the search frequencies, there is a small mismatch in the lineshapes, which reduces the recovered coupling strength.
The limits reported in the main text are corrected for this suppression.
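The linear fit used to extract the suppression factor can be sketched as follows; the injected and recovered values below are synthetic stand-ins for the data in Fig.~\ref{fig:axion_inj}(d), with an assumed suppression built in.

```python
import numpy as np

# Synthetic injected couplings (GeV^-2), logarithmically spaced as in the
# text, and recovered values with an assumed ~2.7% suppression plus scatter.
injected = np.logspace(np.log10(7.0e-4), np.log10(7.0e-3), 10)
rng = np.random.default_rng(1)
recovered = injected * (1 - 0.027) * (1 + 0.005 * rng.standard_normal(10))

# Least-squares slope of a line through the origin: slope = <x y> / <x^2>.
slope = (injected * recovered).sum() / (injected ** 2).sum()
suppression = 1.0 - slope
print(f"suppression = {100 * suppression:.1f} %")
```

The recovered-versus-injected slope directly gives the average signal suppression used to correct the reported limits.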
\begin{figure}[h]
\includegraphics[width=0.7\textwidth]{S_Fig8.eps}
\caption{Injecting simulated axion-like dark matter signals into experimental data.
(a) A $400\uu{Hz}$-wide band of experimental power spectrum, with an injected signal at Compton frequency $\nu_a=39.1586\uu{MHz}$ and with magnetic field PSD of $2.6\uu{fT^2/Hz}$. Experimental data are shown as blue circles, the injected signal is shown as the orange line.
(b) The optimally-filtered spectrum within the $50\uu{kHz}$ frequency band centered on $\nu_a$.
(c) The histogram of optimally-filtered PSD data (blue circles) within the $80\uu{kHz}$ band centered on the Larmor frequency $\nu_0=39.16\uu{MHz}$. Data are sorted into 100 bins, and the Gaussian fit is shown as the orange line. The $3.355\sigma$ detection threshold (vertical dashed black line) is at $17\uu{fT^2}$.
(d) Recovered coupling for injected signals with coupling strengths varying logarithmically from $g_d = 7.0\times10^{-4}\uu{GeV^{-2}}$ to $g_d = 7.0\times10^{-3}\uu{GeV^{-2}}$ at different Compton frequencies, sampled randomly between $\nu_a = 39.1185\uu{MHz}$ and $\nu_a = 39.1985\uu{MHz}$. The orange line shows the linear fit, from which we extract the $(2.7 \pm 0.8)\%$ signal suppression.}
\label{fig:axion_inj}
\end{figure}
\subsection{Projected sensitivity reach}
\noindent
Our experimental results demonstrate the feasibility of using solid-state nuclear magnetic resonance to search for axion-like dark matter. There are several bounds on the relevant interactions of axion-like dark matter in this mass range, based on analysis of cooling dynamics of supernova SN1987A~\cite{Raffelt2008,Budker2014,Graham2015a}, and of Big-Bang nucleosynthesis~\cite{Blum2014}. However, these model-dependent bounds are subject to significant caveats and uncertainties, and may be evaded altogether~\cite{DeRocco2020,Bar2020}.
Stringent experimental limits on $g_d$ and $g_{\text{aNN}}$ exist at much lower ALP masses~\cite{Vasilakis2009,Abel2017a,Wu2019a,Garcon2019b,Terrano2019,Roussy2020}, but the mass range probed in the current search has been, until now, experimentally unexplored.
The current sensitivity is not yet sufficient to reach the benchmark QCD axion level. The two main reasons are: (1) the CSA-induced inhomogeneous broadening of the NMR linewidth of the $^{207}$Pb nuclear spin ensemble, and (2) the small size of our PMN-PT sample. We plan to circumvent the inhomogeneous broadening by concentrating our future searches on the lower Compton frequencies ($\nu_a<1\uu{MHz}$), where the linewidth will be dominated by the $T_2$ spin coherence time, rather than CSA. The long $T_1$ relaxation time will allow us to pre-polarize the nuclear spins, retaining their polarization even at lower fields. We plan to use Superconducting Quantum Interference Devices (SQUIDs) to detect the transverse magnetization in this frequency range. The green dashed curves in Fig.~\ref{fig:global_lims} show the projected experimental sensitivity for the search with the same $4.6\uu{mm}$ sample as used in the current work. The cutoff at the low frequency end is set at the $1/T_2$ NMR linewidth, and the cutoff at high frequencies is set by the Larmor frequency at the maximum magnetic field of $15\uu{T}$.
In order to reach sufficient sensitivity to probe the QCD axion coupling strengths, we plan to scale up the volume of the ferroelectric sample. If the sample is coupled to the SQUID sensor with a broadband circuit,
a sample size of $\approx80\uu{cm}$ and operation at a temperature of $\approx100\uu{mK}$ are sufficient to reach the QCD axion line over $\approx 3$ decades in mass, Fig.~\ref{fig:global_lims}, blue dashed line. Implementing a resonant coupling circuit with a modest quality factor $\approx 1000$ may allow us to reach this sensitivity level with a sample that is an order of magnitude smaller. The ultimate sensitivity limit is determined by the nuclear spin projection noise, Fig.~\ref{fig:global_lims}, black dashed line.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{S_Fig9.eps}
\caption{Existing bounds and sensitivity projections for the:
(a) EDM and (b) gradient coupling of axion-like dark matter.
The region shaded in red is the exclusion at 95$\%$ confidence level placed by this work (CASPEr-e).
The purple line shows the QCD axion coupling band. The darker purple color shows the mass range motivated by theory~\cite{Graham2013}. The blue regions mark the mass ranges where the ADMX and HAYSTAC experiments have probed the QCD axion-photon coupling~\cite{Du2018,Brubaker2017}.
The green region is excluded by analysis of cooling in supernova SN1987A, with color gradient indicating theoretical uncertainty~\cite{Graham2013}.
The dashed green line marks the projected $5\sigma$ sensitivity of our CASPEr-e search with a $4.6\uu{mm}$ sample, as used in current work. The dashed blue line marks the projected $5\sigma$ sensitivity of our CASPEr-e search with an $80\uu{cm}$ sample, operating at $100\uu{mK}$ temperature. Implementing a resonant coupling circuit will enable operation with a smaller sample. The black dashed line marks the sensitivity limited by the quantum spin projection noise~\cite{Budker2014}. This is sufficient to detect the EDM coupling of the QCD axion across the 6-decade mass range from $\approx0.3\uu{peV}$ to $\approx500\uu{neV}$.
The other bounds are as follows.
(a) The pink region is excluded by the neutron EDM (nEDM) experiment~\cite{Abel2017a}.
The blue region is excluded by the HfF$^+$ EDM experiment~\cite{Roussy2020}.
The yellow region is excluded by analysis of Big Bang nucleosynthesis (BBN)~\cite{Blum2014}.
(b) The pink region is excluded by the neutron EDM (nEDM) experiment~\cite{Abel2017a}.
The blue region is excluded by the zero-to-ultralow field comagnetometer (ZULF CM) experiment~\cite{Wu2019a}.
The gray region is excluded by the zero-to-ultralow field sideband (ZULF SB) experiment~\cite{Garcon2019b}.
The yellow region is excluded by the new-force search with K-$^3$He comagnetometer~\cite{Vasilakis2009}.
The bounds are shown as published, although corrections should be made to some of the low-mass limits, due to stochastic fluctuations of the axion-like dark matter field~\cite{Centers2019}.
}
\label{fig:global_lims}
\end{figure}
\section{Introduction}\label{introduction}
Graph matching is widely used in computer vision and pattern recognition tasks~\cite{[2002-Belongie-pami],[2011-Duchenne],[2012-Yao-eccv],[2016-Shen-eccv],[2016-Garro-pami],[2017-Pinheiro],[Xue2018]} to find correspondences between two graph-structured feature sets.
The general idea behind graph matching solutions is to minimize objective functions composed of unary, pairwise~\cite{[2005-Leordeanu],[2010-Cho-eccv],[2017-Huu-cvpr]} or higher-order~\cite{[2011-Lee-cvpr],[2015-Yan-cvpr],[2017-Huu-cvpr],[YU2016255]} potentials to preserve the structure alignment between two graphs.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.9\linewidth]{Images/fig_atgm.pdf}
\caption{{\bf ATGM vs. general graph matching}. General graph matching between two graphs $\mathcal{G}_X$ and $\mathcal{G}_Y$ with nodes $X =\{X_i\}_{i=1}^3$ and $Y =\{Y_j\}_{j=1}^4$, shown in (a), often involves computing a pairwise affinity matrix $\mathbf{W}$ with a size of $12\times 12$, as displayed in (b). In contrast, our proposed ATGM matches $\mathcal{G}_X$ to $\mathcal{G}_Y$ by transforming each node $X_i$ to $\bar{X}_i$ with an optimal transformation map through minimizing two objectives $F_{XY}$ and $G_{\bar{X}Y}$, as shown in (c). ATGM preserves the pairwise structure alignments by minimizing the differences between two edge attribute matrices $w_{X}$ and $w_{\bar{X}}$ with a smaller size of $3\times 3$, as shown in (d) and (e). ATGM can also remove the outliers, {\it i.e.}, $Y_1$ here, naturally according to the distance matrix $\mathbf{D}_{\bar{X}Y}$ between $\mathcal{G}_{\bar{X}}$ and $\mathcal{G}_{Y}$, as shown in (f).}
\label{fig:1}
\end{figure}
Under pairwise constraint, graph matching can be formulated as a quadratic assignment problem (QAP)~\cite{[2007-Loiola-ejor]}, which is NP-complete~\cite{[1979-Garey]}, and only approximate solutions can be found in polynomial time.
Although the past decade has witnessed remarkable progress on graph matching~\cite{[2016-Yan-ICMR],[2009-Leordeanu-nips],[2010-Cho-eccv],[2014-Cho-cvpr],[2016-Zhou-pami]}, there are still some challenges in computational complexity and matching performance.
For instance, a costly affinity matrix often needs to be computed or factorized~\cite{[2009-Leordeanu-nips],[2005-Leordeanu],[2016-Zhou-pami]}, which results in high space complexity--especially with large-scale complete graphs.
Because of the combinatorial nature of QAP, the objective function is difficult to solve for obtaining binary solutions~\cite{[2009-Leordeanu-nips],[2015-Yan-cvpr],[2010-Lee-icpr]}.
Although with relaxation, the discrete constraint can be approximated by a continuous one that is easier to solve, this approach requires extra effort to achieve a global optimum or satisfy the binary constraint~\cite{[2009-Zaslavskiy-pami],[2016-Zhou-pami],[2014-Liu-pami],[2017-Jiang-cvpr]}.
Moreover, matching unequal-sized graphs often suffers from outliers~\cite{[2014-Cho-cvpr],[2009-Zaslavskiy-pami]}. Thus, it is of great interest to reduce the computational complexity and to be as robust as possible to outliers.
This paper introduces a method for graph matching from the perspective of functional representation.
The main idea is illustrated by a toy example in Fig.~\ref{fig:1}.
Under this perspective, one graph is transformed into the space spanned by the second graph, and then, a desired correspondence can be reformulated as an optimal transformation map between graphs.
To pursue such a map, we construct two functionals to measure the discrepancy between graphs and minimize them with the Frank-Wolfe method~\cite{[2015-Simon-nips]}.
Using the transformation map, the pairwise edge attributes of graphs can be explicitly represented by node attributes, which enables us to significantly reduce the space and time complexity. We also propose a domain adaptation-based strategy to remove outliers leveraging the fact that transformation maps can preserve graph structures.
Our work is distinguished in following aspects:
\begin{itemize}
\item[-] We present a new perspective for graph matching that explicitly represents the pairwise edge attributes of graphs using unary node attributes. Therefore,
the space complexity is reduced in form from $\mathbf{O}(m^2n^2)$ to $\mathbf{O}(mn)$ and
the objective function can be optimized efficiently with $\mathbf{O}(Tn^3)$ time complexity. Benefiting from this simplification, we can match large-scale graphs, even with complete graphs.
\item[-] We propose a domain adaptation-based method for outlier removal using the transformation map. This technique can be used as a pre-processing step to improve graph matching algorithms.
\end{itemize}
\section{Related Work}\label{relatedwork}
Over the past few decades, both exact and inexact (error-tolerant) graph matching have been extensively studied to measure either (dis-)similarity~\cite{[2011-Duchenne],[1999-Pelillo-pami],[2009-Riesen-pami],[2017-Bougleux]} or find correspondence~\cite{[1996-Gold],[2009-Leordeanu-nips],[2010-Cho-eccv],[2015-Yan-cvpr],[2016-Zhou-pami]} between graphs. We focus on inexact graph matching to find correspondence as the work on exact graph matching and measuring similarity ({\itshape e.g.}, graph edit distance) is beyond the scope of this paper.
Many existing works on pairwise graph matching have addressed reducing the high computational complexity of the QAP formulation. The path-following method proposed in~\cite{[2009-Zaslavskiy-pami]} rewrote the graph matching problem as an approximate least-squares problem on the set of permutation matrices. A factorization-based method~\cite{[2016-Zhou-pami]} was proposed to factorize the affinity matrix, with its high space complexity, into a Kronecker product of smaller matrices. An efficient sampling heuristic was proposed in~\cite{[2008-Zass-cvpr]} to avoid the high space complexity of the affinity matrix. However, the methods in~\cite{[2009-Zaslavskiy-pami],[2016-Zhou-pami]} are highly time consuming in practice, and the space-complexity reductions of~\cite{[2016-Zhou-pami],[2008-Zass-cvpr]} are limited for complete graphs. In comparison, our functional representation-based method reduces the space complexity by two orders of magnitude, has lower time complexity, and runs faster in practice.
To pursue globally optimal solutions with the binary property for graph matching, the approaches in~\cite{[2009-Zaslavskiy-pami],[2016-Zhou-pami],[2014-Liu-pami],[2014-Liu-ijcv]} constructed objective functions with both convex and concave relaxations controlled by a continuation parameter. However, these approaches are often time consuming in reaching an ideal solution. Moreover, to ensure binary solutions, an integer-projected fixed point algorithm~\cite{[2009-Leordeanu-nips]} was proposed that solves a sequence of first-order Taylor approximations, and the authors of~\cite{[2015-Yan-cvpr]} adopted an adaptive and dynamic relaxation mechanism for optimization directly in the discrete domain. In our method, we separately construct non-convex and convex relaxations and obtain (nearly) binary solutions in a faster way with high matching accuracy.
In addition, several spectral matching methods~\cite{[2005-Leordeanu],[2006-Cour-nips]} were introduced based on the rank-1 approximation of the affinity matrix. The graduated assignment method~\cite{[1996-Gold]} iteratively solved a series of convex approximations of the objective. The decomposition-based works in~\cite{[2013-Torresani-pami]} and~\cite{[2017-Huu-cvpr]} decomposed the original complex graphs and the matching constraints, respectively. Probability-based~\cite{[2008-Zass-cvpr],[2013-Egozi]} and learning-based~\cite{[2009-Caetano-pami],[2012-Leordeanu-ijcv]} methods gave further interpretations of the graph matching problem. A random walk view~\cite{[2010-Cho-eccv]} of the problem was introduced by simulating random walks with re-weighting jumps. A max-pooling based strategy has also been proposed in~\cite{[2014-Cho-cvpr]} to address the presence of outliers. These two works~\cite{[2010-Cho-eccv],[2014-Cho-cvpr]} are both robust to outliers due to their re-weighting procedures during iterations. In contrast, our proposed outlier-removal strategy removes outliers by explicitly relying on the global structure of graphs, and it can be applied to other methods as a pre-processing step.
\section{General Graph Matching}
Given an undirected graph $\mathcal{G}_X=\left\{X,\mathcal{E}_X\right\}$
with $m$ nodes $X_i\in X,\, i=1,\ldots, m$, we denote each edge as $X_{i_1i_2}\triangleq(X_{i_1},X_{i_2})\in \mathcal{E}_X$, where
$\mathcal{E}_X$ is the edge set consisting of $M$ edges.
Matching the two graphs $\mathcal{G}_X$ and $\mathcal{G}_Y$, with $m,n$ nodes and $M,N$ edges, respectively, yields a binary correspondence $\mathbf{P}\in \left\{0,1\right\}^{m\times n}$,
such that $P_{ij}=1$ when the nodes $X_i$ and $Y_j$ are matched and $P_{ij}=0$ otherwise.
The graph matching problem is often solved by maximizing an objective function that measures the node and edge affinities between $\mathcal{G}_X$ and $\mathcal{G}_Y$.
Under pairwise constraints, the objective function typically consists of a unary potential $w_v(X_i,Y_j)$ and a pairwise potential $w_e(X_{i_1i_2},Y_{j_1j_2})$, which measure the similarity between the nodes $X_i$ and $Y_j$ and the edges $X_{i_1i_2}$ and $Y_{j_1j_2}$, respectively.
These two types of similarities are usually integrated by an affinity matrix $\mathbf{W}\in \mathbb{R}^{mn\times mn}$, the diagonal element $\mathbf{W}_{ij,ij}$ of which corresponds to the unary potential $w_v(X_i,Y_j)$ and the non-diagonal element $\mathbf{W}_{i_1j_1,i_2j_2}$ of which corresponds to the pairwise potential $w_e(X_{i_1i_2},Y_{j_1j_2})$.
Thus, the objective function for graph matching can be written as
\begin{align}
\mathbf{P}_v^T\mathbf{W}\mathbf{P}_v = \sum_{P_{ij}=1}w_v(X_i,Y_j) + \sum_{{\tiny{\substack{P_{i_1j_1}=1\\P_{i_2j_2}=1}}}}w_e(X_{i_1i_2},Y_{j_1j_2}),
\label{eq:gm=general}
\end{align}
where $\mathbf{P}_v$ is the column-wise vectorized replica of $\mathbf{P}$.
For graph matching under one-to-(at most)-one
constraints, the feasible field $\mathcal{P}$ is composed of all (partial) permutation matrices (where $m\le n$), {\it i.e.}
\begin{equation}\label{constraint1}
\mathcal{P}\triangleq \left\{\mathbf{P}\in \left\{0,1\right\}^{m\times n}; \mathbf{PI}_n=\mathbf{I}_m,\mathbf{P}^T\mathbf{I}_m\le \mathbf{I}_n\right\},
\end{equation}
where $\mathbf{I}_m$ is an $m \times 1$ vector of all ones.
Then, the graph matching problem can be approached by finding the optimal assignment matrix $\mathbf{P}^*$ by maximizing
\begin{equation}\label{QAP}
\max_{\mathbf{P}\in \mathcal{P}}~\mathbf{P}_v^T\mathbf{W}\mathbf{P}_v.
\end{equation}
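For concreteness, the objective $\mathbf{P}_v^T\mathbf{W}\mathbf{P}_v$ can be evaluated directly for a given assignment; the sketch below uses a random symmetric matrix as a stand-in for the affinity matrix $\mathbf{W}$.

```python
import numpy as np

def qap_score(P, W):
    # Evaluate P_v^T W P_v, with P_v the column-wise vectorization of P.
    p = P.reshape(-1, order="F")      # Fortran order = column-wise stacking
    return p @ W @ p

m, n = 3, 4
rng = np.random.default_rng(0)
W = rng.random((m * n, m * n))
W = (W + W.T) / 2                     # affinity matrices are symmetric
P = np.zeros((m, n))
P[0, 1] = P[1, 0] = P[2, 3] = 1.0     # a partial permutation matrix
print(qap_score(P, W) > 0)            # positive score for positive affinities
```

Maximizing this score over all partial permutation matrices is exactly the combinatorial search that makes the QAP hard.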
Eq.~(\ref{QAP}) is the QAP, which is known to be NP-complete. Usually, an approximate solution can be found by relaxing the discrete feasible field $\mathcal{P}$ into a continuous feasible field $\hat{\mathcal{P}}$ as:
\begin{equation}\label{constraint2}
\hat{\mathcal{P}}\triangleq \left\{\mathbf{P}\in [0,1]^{m\times n}; \mathbf{PI}_n=\mathbf{I}_m,\mathbf{P}^T\mathbf{I}_m\le \mathbf{I}_n\right\},
\end{equation}
which is known as the {\em doubly-stochastic relaxation}.
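Membership in the relaxed feasible field $\hat{\mathcal{P}}$ is easy to verify numerically; a minimal sketch for the case $m\le n$:

```python
import numpy as np

def in_relaxed_field(P, tol=1e-9):
    # Doubly-stochastic relaxation: entries in [0, 1], each row sums
    # to exactly 1, and each column sums to at most 1 (for m <= n).
    return (P.min() >= -tol and P.max() <= 1 + tol
            and np.allclose(P.sum(axis=1), 1.0, atol=tol)
            and bool(np.all(P.sum(axis=0) <= 1 + tol)))

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.25, 0.50]])
print(in_relaxed_field(P))                            # → True
print(in_relaxed_field(np.array([[0.6, 0.6, 0.0]])))  # row sum 1.2 → False
```

The binary matrices of $\mathcal{P}$ are exactly the extreme points of this relaxed set.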
Unfortunately, (1) the affinity matrix $\mathbf{W}$ results in high space complexity--especially with complete graphs, and (2) achieving global optimal or binary solutions of Eq.(\ref{QAP}) is often highly time consuming.
\section{Adaptively transforming graph matching}\label{method}
This section presents our ATGM algorithm, starting with a definition of the linear representation map of the transformation from one graph to the space spanned by another graph. Basically, the transformation map models the correspondence between graphs. On this basis, we first measure the edge discrepancy between two graphs to derive a sub-optimal transformation map. Then, we incorporate the shifting vectors of the transformed nodes to obtain the final optimal transformation map. Finally, we address the unequal-size cases in graph matching by proposing a domain adaptation-based outlier-removal strategy.
\subsection{Linear representation of transformation}
Given two undirected graphs $\mathcal{G}_X=\left\{X,\mathcal{E}_X \right\}$ and $\mathcal{G}_Y=\left\{Y,\mathcal{E}_Y \right\}$, we formulate graph matching as a transformation from the node set $X=\left\{X_i \right\}_{i=1}^m$ to the space spanned by $Y=\left\{Y_j \right\}_{j=1}^n$. Because $X,Y$ are discrete sets, we first define the continuous space spanned by $Y$ as $\mathcal{C}_Y=\{\sum_{j=1}^n \alpha_jY_j\}$. The transformation $\mathcal{T}$ from $X$ to $\mathcal{C}_Y$ is defined as
\begin{align}
\mathcal{T}: X \to \mathcal{C}_Y, X_i \mapsto \mathcal{T}(X_i).
\end{align}
According to linear algebra, $\mathcal{T}(X_i)$ can be represented as $\mathcal{T}(X_i)\triangleq\sum_{j=1}^{n}\mathbf{P}_{ij}Y_j$. Then, $\mathbf{P}\in \mathbb{R}^{m\times n}$ is a linear representation ({\itshape i.e.}, a transformation map) of $\mathcal{T}$. By the constraint Eq.(\ref{constraint2}) that $\mathbf{P}\in \hat{\mathcal{P}}$, each node $\mathcal{T}(X_i)$ lies in the convex hull of $Y$. Therefore, we redefine $\mathcal{C}_Y$ as the convex hull of $Y$ for graph matching problem. Whenever $\mathbf{P}$ reaches an extreme point of the feasible field $\hat{\mathcal{P}}$, it is a binary assignment matrix, and consequently, $X_i$ is transformed to ({\itshape i.e.}, matches) a $Y_{j'}$ where $\mathbf{P}_{ij'} =1,\mathbf{P}_{i,j\neq j'}=0$.
By this representation formulation, the transformed graph $\bar{X}\triangleq\mathcal{T}(X)=\mathbf{P}Y$ is determined by the specified $\mathbf{P}$ and $Y$. The closer $\mathbf{P}$ is to a binary matrix, the more closely $\bar{X}$ resembles a subset of $Y$. Therefore, we can replace $\mathcal{G}_Y$ by $\mathcal{G}_{\bar{X}}$ when we attempt to minimize the disagreement between $\mathcal{G}_X$ and $\mathcal{G}_Y$ by forcing $\mathbf{P}$ to be binary.
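A minimal numerical sketch of the transformation $\bar{X}=\mathbf{P}Y$: a binary row of $\mathbf{P}$ selects a node of $Y$ exactly, while a fractional row lands inside the convex hull of $Y$.

```python
import numpy as np

Y = np.array([[0.0, 0.0], [1.0, 0.0],
              [0.0, 1.0], [1.0, 1.0]])      # n = 4 nodes in R^2
P = np.array([[1.0, 0.0, 0.0, 0.0],         # binary row: X_0 matches Y_0
              [0.0, 0.5, 0.5, 0.0],         # soft row: a convex combination
              [0.0, 0.0, 0.0, 1.0]])        # binary row: X_2 matches Y_3
X_bar = P @ Y                               # transformed nodes

print(np.allclose(X_bar[0], Y[0]))          # → True
print(X_bar[1])                             # → [0.5 0.5], inside conv(Y)
```

This is why driving $\mathbf{P}$ toward an extreme point of $\hat{\mathcal{P}}$ turns the soft transformation into a hard node-to-node match.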
With notation $\bar{X}_{i_1i_2}\triangleq(\bar{X}_{i_1},\bar{X}_{i_2})$, we construct the functional w.r.t. $\mathbf{P}$ to measure disagreement between $\mathcal{G}_X$ and $\mathcal{G}_{\bar{X}}$ as
\begin{equation}
\mathbb{F}(\mathbf{P})=\sum_{(i,j)}f_v(X_i,Y_j)P_{ij} + \sum_{(i_1,i_2)} f_e(X_{i_1i_2},\bar{X}_{i_1i_2}),
\end{equation}
where the unary potential $f_v(X_i,Y_j)$ denotes the disagreement between nodes $X_i$ and $Y_j$, and the pairwise potential $f_e(X_{i_1i_2},\bar{X}_{i_1i_2})$ denotes the discrepancy between
edge $X_{i_1i_2}$ and its transformed edge $\bar{X}_{i_1i_2}$. Using this formulation, the costly affinity matrix $\mathbf{W}\in \mathbb{R}^{mn\times mn}$ used in general graph matching is replaced by the node disagreement matrix $\left\{ f_v(X_i,Y_j) \right\}\in \mathbb{R}^{m\times n}$ and the edge discrepancy matrix $\left\{ f_e(X_{i_1i_2},\bar{X}_{i_1i_2})\right\}\in \mathbb{R}^{m\times m}$, which consequently reduces the space complexity from $O(m^2 n^2)$ to $O(mn)$.
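The storage saving can be made concrete: for $m=50$ and $n=60$, the node-disagreement and edge-discrepancy matrices below occupy $mn$ and $m^2$ entries, versus $m^2n^2=9{,}000{,}000$ entries for the affinity matrix $\mathbf{W}$. The squared-distance choices for $f_v$ and $f_e$ here are illustrative.

```python
import numpy as np

def pairwise_edge_lengths(Z):
    # Matrix of Euclidean distances between all node pairs of one graph.
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)

m, n, d = 50, 60, 2
rng = np.random.default_rng(0)
X, Y = rng.random((m, d)), rng.random((n, d))
P = np.full((m, n), 1.0 / n)                     # a feasible uniform map
X_bar = P @ Y

# Node disagreement (m x n) and edge discrepancy (m x m); both are far
# smaller than the mn x mn affinity matrix of the general formulation.
f_v = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2) ** 2
f_e = (pairwise_edge_lengths(X) - pairwise_edge_lengths(X_bar)) ** 2
print(f_v.shape, f_e.shape)                      # → (50, 60) (50, 50)
```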
To obtain a desired assignment matrix $\mathbf{P}^*$ given graphs $\mathcal{G}_X$ and $\mathcal{G}_Y$, we can construct a specified functional $\mathbb{F}(\mathbf{P})$ and minimize it to preserve the structure alignments between $\mathcal{G}_X$ and $\mathcal{G}_{\bar{X}}$ in an optimization-based way:
\begin{equation}
\mathbf{P}^*\in \text{arg}\min\limits_{{\small{\mathbf{P}\in \hat{\mathcal{P}}}}} \mathbb{F}(\mathbf{P}).
\end{equation}
In the rest of this section, we introduce two functionals w.r.t. $\mathbf{P}$ as our objective functions to model the pairwise graph matching problem.
\begin{figure}[htb!]
\begin{center}
\subfigure[]
{\includegraphics[width=0.36\linewidth]{./Images/fig1a.pdf}}
\subfigure[]
{\includegraphics[width=0.11\linewidth]{./Images/fig1c1.pdf}}
\subfigure[]
{\includegraphics[width=0.36\linewidth]{./Images/fig1b.pdf}}
\subfigure[]
{\includegraphics[width=0.11\linewidth]{./Images/fig1c2.pdf}}
\end{center}
\caption{(a) Nodes shift after being transformed by minimizing $F_{XY}(\mathbf{P})$ in a 20-vs-30 case. The lines in blue are the shifting vectors, and the points in green are transformed nodes $\left\{\bar{X}_i\right\}_{i=1}^m$.
(b) Transformation map (top) and their post-discretization (bottom) corresponding to (a). (c) Nodes transformed by minimizing $G_{\bar{X}Y}(\mathbf{P})$ with almost no shifting.
(d) Transformation map (top) and their post-discretization (bottom) corresponding to (c). In (b) and (d), red points mark the groundtruth.}
\label{fig:long}
\label{fig:longfig1}
\end{figure}
\subsection{Edge discrepancy}
In the case where graphs are embedded in Euclidean space $\mathbb{R}^d$, the function $f_e$ mentioned above can be defined in some simple but effective forms to incorporate the edge length (or orientations),
\begin{equation}
f_e(X_{i_1i_2},\bar{X}_{i_1i_2}) = (||X_{i_1i_2}||-||\bar{X}_{i_1i_2}||)^2,
\end{equation}
where $||X_{i_1i_2}||$ is the $l_2$ norm of $X_{i_1i_2}$.
Thus, the pairwise potential of our first objective function is defined as,
\begin{align}
F_{XY}(\mathbf{P})&=\sum_{(i_1,i_2)}S_{i_1i_2}(||X_{i_1i_2}||-||\bar{X}_{i_1i_2}||)^2, \\
&=\sum_{(i_1,i_2)}S_{i_1i_2}( ||\bar{X}_{i_1i_2}||^2-2||X_{i_1i_2}||\,||\bar{X}_{i_1i_2}||)+c, \nonumber
\end{align}
where $c$ is a constant and ${S}_{i_1i_2}$ measures the weight of $(||X_{i_1i_2}||-||\bar{X}_{i_1i_2}||)^2$ if we have priors. We denote $\mathbf{S} \triangleq \{ S_{i_1i_2} \}\in \mathbb{R}^{m\times m}$.
The gradient of $F_{XY}(\mathbf{P})$ w.r.t. $\mathbf{P}$ can be computed using the chain rule,
\begin{equation}\label{gra_F}
\nabla F_{XY}(\mathbf{P}) = 2(\mathbf{L}_X-\mathbf{L}^*_X)(\mathbf{P}Y)Y^T,
\end{equation}
where $\mathbf{L}_X=\text{diag}(\mathbf{S}\mathbf{I}_m)-\mathbf{S}$ is the Laplacian of $\mathcal{G}_X$, and $\mathbf{L}^*_X=\text{diag}(\mathbf{S}^*\mathbf{I}_m)-\mathbf{S}^*$ with $S_{i_1i_2}^*\triangleq S_{i_1i_2}||X_{i_1i_2}||\,||\bar{X}_{i_1i_2}||^{-1}$. To avoid numerical instabilities as in~\cite{[2007-Candès]}, a small $\epsilon > 0$ is added to the denominator, {\it i.e.}, $||\bar{X}_{i_1i_2}||^{-1}$ is replaced by $(||\bar{X}_{i_1i_2}||+\epsilon)^{-1}$. Naturally, we can extend $F_{XY}(\mathbf{P})$ by adding a unary potential, such as $\sum_{ij} f_v(X_i,Y_j)\mathbf{P}_{ij} + \lambda F_{XY}(\mathbf{P})$.
Due to the non-convexity of $F_{XY}(\mathbf{P})$, minimization typically terminates at a local minimum, and the resulting map $\mathbf{P}^*\in \hat{\mathcal{P}}$, regarded as an optimal transformation map from $\mathcal{G}_X$ to $\mathcal{G}_{\bar{X}}$, is generally not binary; see Fig.~\ref{fig:longfig1}~(b) for an illustration. Consequently, a transformed node $\bar{X}_{i}$ is usually not exactly equal to any $Y_{j'}\in Y$, and there is often a shift between $\bar{X}_i$ and its correct match $Y_{\sigma_i}$. Fig.~\ref{fig:longfig1}~(a) displays this shift phenomenon, where each $\bar{X}_i$ deviates from its correct match $Y_{\sigma_i}$ to some degree.
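A NumPy sketch of this gradient computation (hypothetical names, not the authors' code). The relative sign of the two Laplacian terms is fixed by requiring the gradient to vanish whenever the transformed edge lengths $||\bar{X}_{i_1i_2}||$ already match $||X_{i_1i_2}||$, in which case $\mathbf{S}^*=\mathbf{S}$:

```python
import numpy as np

def grad_F(P, X, Y, S, eps=1e-8):
    """Gradient of the edge-discrepancy objective F_XY w.r.t. P (a sketch).

    P: (m, n) representation map, X: (m, d), Y: (n, d),
    S: (m, m) symmetric nonnegative edge weights.
    """
    Xbar = P @ Y                                     # transformed nodes
    LX_len = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    LB_len = np.linalg.norm(Xbar[:, None] - Xbar[None, :], axis=-1)
    S_star = S * LX_len / (LB_len + eps)             # S* with eps guard
    L  = np.diag(S.sum(1)) - S                       # Laplacian of G_X
    Ls = np.diag(S_star.sum(1)) - S_star             # Laplacian built from S*
    return 2.0 * (L - Ls) @ Xbar @ Y.T
```

A finite-difference check against the objective (each unordered edge counted once) confirms the expression on random data.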
\subsection{Node shifting}
Benefiting from the property of $F_{XY}(\mathbf{P})$ that preserves the edge alignment between $\mathcal{G}_X$ and $\mathcal{G}_{\bar{X}}$, the shifting vectors of adjacent nodes have similar directions and norms, as shown in Fig.\ref{fig:longfig1}~(a). Consequently, in order to reduce the node shifting from $\bar{X}_i$ to its correct match $Y_{\sigma_i}$, denoted by
$$
\overrightarrow{\bar{X}_{i}Y_{\sigma_i}}=Y_{\sigma_i}-\bar{X}_{i},
$$
we minimize the sum of the differences between adjacent shifting vectors, {\it i.e.},
\begin{align}
G_{\bar{X}Y}(\mathbf{P})&=\sum_{(i_1i_2)}\bar{S}_{i_1i_2}||(\bar{\bar{X}}_{i_1}-\bar{X}_{i_1})-(\bar{\bar{X}}_{i_2}-\bar{X}_{i_2})||_2^2\nonumber\\
&=\text{Tr}\left((\mathbf{P}Y-\bar{X})^T\mathbf{L}_{\bar{X}}(\mathbf{P}Y-\bar{X})\right),
\end{align}
where $\mathbf{L}_{\bar{X}} = \text{diag}(\bar{\mathbf{S}}\mathbf{I}_m)-\bar{\mathbf{S}}$. We denote $\bar{\bar{X}}=\mathbf{P}Y$ as the transformed nodes of $\bar{X}$. In our method, the weight matrix $\bar{\mathbf{S}}$ is set to be nonnegative and symmetric; therefore, $\mathbf{L}_{\bar{X}}$ is positive semi-definite and $G_{\bar{X}Y}(\mathbf{P})$ is convex.
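The equality between the pairwise-sum form and the trace form above is a standard graph-Laplacian identity. It can be checked numerically with the following sketch (illustrative names; each unordered pair is counted once in the sum):

```python
import numpy as np

def G_pairwise(P, Y, Xbar, Sbar):
    """Sum over unordered node pairs of
    Sbar_{ij} * ||(Xbb_i - Xbar_i) - (Xbb_j - Xbar_j)||^2."""
    Xbb = P @ Y                        # transformed nodes
    V = Xbb - Xbar                     # shifting vectors
    D = np.linalg.norm(V[:, None] - V[None, :], axis=-1) ** 2
    return 0.5 * np.sum(Sbar * D)      # 0.5: ordered -> unordered pairs

def G_trace(P, Y, Xbar, Sbar):
    """Equivalent trace form Tr((PY - Xbar)^T L (PY - Xbar))."""
    L = np.diag(Sbar.sum(1)) - Sbar    # Laplacian built from Sbar
    R = P @ Y - Xbar
    return np.trace(R.T @ L @ R)
```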
{\bf Sparse regularization}
Because $G_{\bar{X}Y}(\mathbf{P})$ is convex, its minimizer is often an interior point rather than an extreme point of the feasible set $\hat{\mathcal{P}}$. To approach a binary solution, we first add a sparse regularization term, {\it i.e.}, the $l_1$ norm of $\mathbf{P}$, to $G_{\bar{X}Y}(\mathbf{P})$.
We denote $D_{ij} \triangleq d(\bar{X}_i,Y_j)$ as the distance between $\bar{X}_i$ and $Y_j$.
Benefiting from the solution of $F_{XY}(\mathbf{P})$, the norms of shifting vectors $\overrightarrow{\bar{X}_{i}Y_{\sigma_i}}$ are relatively small, and elements $D_{i,\sigma_i}$ are much smaller than $D_{i,j\neq \sigma_i}$, as shown in Fig.\ref{fig:1} (f).
Thus, we also add a unary term $\mathbf{D}_{\bar{X}Y}=\left\{ D_{ij}\right\} \in \mathbb{R}^{m\times n}$ to improve the sparsity of the minimizer.
Finally, $G_{\bar{X}Y}(\mathbf{P})$ can be summarized as
\begin{align}
G_{\bar{X}Y}(\mathbf{P})=\langle\mathbf{P},\mathbf{D}_{\bar{X}Y}\rangle + \lambda_1 ||\mathbf{P}||_1+\lambda_2\text{Tr}\left((\mathbf{P}Y-\bar{X})^T\mathbf{L}_{\bar{X}}(\mathbf{P}Y-\bar{X})\right),\nonumber
\end{align}
where $\langle\mathbf{P},\mathbf{D}_{\bar{X}Y}\rangle=\sum_{ij}\mathbf{P}_{ij}D_{ij}$. The gradient of $G_{\bar{X}Y}(\mathbf{P})$ is then
\begin{equation}\label{gra_G}
\nabla G_{\bar{X}Y}(\mathbf{P}) = \mathbf{D}_{\bar{X}Y} + \lambda_1 \mathbf{1}_{m\times n} + 2\lambda_2\mathbf{L}_{\bar{X}}(\mathbf{P}Y-\bar{X})Y^T,
\end{equation}
where $\mathbf{1}_{m\times n}$ is the all-ones matrix (since $\mathbf{P}\ge 0$ on $\hat{\mathcal{P}}$, $||\mathbf{P}||_1=\sum_{ij}\mathbf{P}_{ij}$). With this sparse regularization, minimizing $G_{\bar{X}Y}(\mathbf{P})$ always yields a (nearly) binary solution, which significantly improves the matching accuracy; see Fig.~\ref{fig:longfig1}~(c) and (d) for examples.
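A sketch of the regularized gradient (hypothetical names, not the authors' code). Because $\mathbf{P}\ge 0$ on the feasible set, the $l_1$ term reduces to the entry sum of $\mathbf{P}$, and its gradient is a constant all-ones matrix scaled by $\lambda_1$:

```python
import numpy as np

def grad_G(P, Y, Xbar, Sbar, D, lam1=1e3, lam2=1.0):
    """Gradient of the sparse-regularized objective G (a sketch).

    D: (m, n) unary distances d(Xbar_i, Y_j); Sbar: (m, m) symmetric
    nonnegative weights.  Since P >= 0 on the feasible set, the l1 term
    contributes the constant matrix lam1 * ones."""
    L = np.diag(Sbar.sum(1)) - Sbar       # Laplacian built from Sbar
    return (D + lam1 * np.ones_like(P)
            + 2.0 * lam2 * L @ (P @ Y - Xbar) @ Y.T)
```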
\subsection{Outlier removal via domain adaptation}\label{section33}
\begin{figure}[htb!]
\begin{center}
\subfigure
{\includegraphics[width=0.16\linewidth]{./Images/fig21_1.pdf}}
\subfigure
{\includegraphics[width=0.16\linewidth]{./Images/fig22_1.pdf}}
\subfigure
{\includegraphics[width=0.16\linewidth]{./Images/fig23_1.pdf}}
\subfigure
{\includegraphics[width=0.16\linewidth]{./Images/fig24_1.pdf}}
\subfigure
{\includegraphics[width=0.16\linewidth]{./Images/fig25_1.pdf}}
\subfigure
{\includegraphics[width=0.16\linewidth]{./Images/fig26_1.pdf}}
\end{center}
\caption{Outlier removal with a transformation map $\mathbf{P}^*$ obtained by alternately minimizing $F_{XY}(\mathbf{P})$ and $G_{XY}(\mathbf{P})$. In each iteration, the red dots are inliers and the green plus signs are the nodes remaining after removal.}
\label{fig:removal}
\end{figure}
Matching graphs $\mathcal{G}_X$ and $\mathcal{G}_Y$ of different sizes with $m<n$ is more complicated. In this situation, the outliers in graph $\mathcal{G}_Y$ usually degrade the matching results. Thanks to the transformation map $\mathbf{P}^*$ obtained by minimizing $F_{XY}(\mathbf{P})$, the structure of $\mathcal{G}_{\bar{X}}$ is similar to that of $\mathcal{G}_{X}$. In some sense, the operation $\bar{X}=\mathbf{P}^*Y$ can be seen as a domain adaptation~\cite{[2017-Courty-pami]} from the source domain $X$ to the target domain $Y$. We propose a method to remove outliers adaptively by using the transformation map obtained by alternately minimizing $F_{XY}(\mathbf{P})$ and $G_{XY}(\mathbf{P})$, where $G_{XY}(\mathbf{P})$ is defined by replacing $\bar{X}$ with $X$ in the pairwise potential of $G_{\bar{X}Y}(\mathbf{P})$:
\begin{align}
G_{XY}(\mathbf{P})&=\sum_{(i_1,i_2)}S_{i_1i_2}||(\bar{X}_{i_1}-X_{i_1})-(\bar{X}_{i_2}-X_{i_2})||_2^2\\
&=\sum_{(i_1,i_2)}S_{i_1i_2}||(X_{i_2}-X_{i_1})-(\bar{X}_{i_2}-\bar{X}_{i_1})||_2^2,
\end{align}
which depicts the edge-vector differences between the original graph $\mathcal{G}_X$ and the transformed graph $\mathcal{G}_{\bar{X}}$.
Edge orientation has also been used in many graph matching methods~\cite{[2017-Huu-cvpr],[2013-Torresani-pami],[2016-Zhou-pami]} to construct the off-diagonal elements of the affinity matrix as:
\begin{equation}
\small{
\mathbf{W}_{i_1j_1,i_2j_2}=\text{exp}\left(-\frac{1}{2}(||X_{i_1i_2}||-||Y_{j_1j_2}||)^2-\frac{1}{2}(\theta_{i_1i_2}-\theta_{j_1j_2})^2\right),}
\label{equationedge}
\end{equation}
where $\theta_{i_1i_2}$ is the angle between edge $X_{i_1i_2}$ and the horizontal line.
After minimizing $F_{XY}(\mathbf{P})$ or $G_{XY}(\mathbf{P})$, we obtain the transformed nodes $\bar{X}=\mathbf{P}^*Y$. Consequently, $\mathcal{G}_{\bar{X}}$ has a structure similar to that of the original graph $\mathcal{G}_X$ and lies in the same coordinate system as $\mathcal{G}_Y$ with relatively small shifts. We can then remove outliers adaptively using a ratio-test technique. Given two point sets $\bar{X}$ and $Y$, we compute the Euclidean distance $d_{ij}$ between all pairs $(\bar{X}_i,Y_{j})$. For each node $\bar{X}_i$, we find the closest node $Y_{j^*}$ and remove all nodes $Y_j$ with $d_{ij}>k\cdot d_{ij^*}$ for a given $k>0$. If the number $l$ of remaining nodes is smaller than $m$, the $m-l$ removed nodes closest to $\bar{X}$ are added back. The experimental results show that after several iterations of alternately minimizing $F_{XY}(\mathbf{P})$ and $G_{XY}(\mathbf{P})$, most outliers are removed (see Fig.\ref{fig:removal}).
Our ATGM algorithm with outlier-removal is summarized in {\bf{Algorithm}~\ref{algorithm}}.
\begin{algorithm}
\caption{~~$\mathbf{P}^*\leftarrow {\bf{ATGM}}(X,Y,k_0)$}
\begin{algorithmic}
\STATE {\bf{Input~~~:}} $X,~Y,~k_0$ and $\mathbf{S}, \bar{\mathbf{S}}$ if available.
\STATE {\bf{Output:}} $\mathbf{P}^*$
\WHILE{$k\le k_0$}
\STATE $ \mathbf{P}^*\leftarrow \text{argmin} ~ G_{XY}$ via Eq.\eqref{eq:fw1} and Eq.\eqref{eq:fw2};
\STATE $Y~~\leftarrow$ removing outliers of $Y$ with $\mathbf{P}^*$;
\STATE $\mathbf{P}^*\leftarrow \text{argmin} ~ F_{XY}$ via Eq.\eqref{eq:fw1} and Eq.\eqref{eq:fw2};
\STATE $Y~~\leftarrow$ removing outliers of $Y$ with $\mathbf{P}^*$;
\STATE $k ~~~\leftarrow k+1$;
\ENDWHILE
\STATE ~~~~$\mathbf{P}^*\leftarrow \text{argmin} ~ F_{XY}$ via Eq.\eqref{eq:fw1} and Eq.\eqref{eq:fw2};
\STATE ~~~~$\bar{X}~\leftarrow\mathbf{P}^*Y$;
\STATE ~~~~$ \mathbf{P}^*\leftarrow \text{argmin} ~ G_{\bar{X}Y}$ with $\mathbf{P}^* $ as initialization.
\STATE ~~~~$\mathbf{P}^*\leftarrow $ post-discretization of $ \mathbf{P}^* $ by the Hungarian method.
\end{algorithmic}
\label{algorithm}
\end{algorithm}
\section{Numerical implementation and analysis}
As presented above, we construct two objective functions, namely, a non-convex $F_{XY}(\mathbf{P})$ and a convex $G_{\bar{X}Y}(\mathbf{P})$.
Previous methods, {\it e.g.},~\cite{[2009-Zaslavskiy-pami],[2016-Zhou-pami],[2014-Liu-pami]}, relax their objective functions into convex and concave forms $\mathbf{J}_v$ and $\mathbf{J}_c$, respectively, and solve a series of combined functions $\mathbf{J}_{\lambda}=\lambda \mathbf{J}_c + (1-\lambda)\mathbf{J}_v$ controlled by a parameter $\lambda$ increasing from $0$ to $1$. In contrast, we solve our objective functions $F_{XY}(\mathbf{P})$ and $G_{\bar{X}Y}(\mathbf{P})$ separately by the Frank-Wolfe (FW) method~\cite{[2009-Zaslavskiy-pami],[2016-Zhou-pami]}, which is simple yet efficient.
Given a differentiable function $g$ and the convex feasible set $\hat{\mathcal{P}}$, the FW method iterates the following steps until convergence:
\begin{eqnarray}
&& \tilde{\mathbf{P}}^{(k+1)}\in \mathop{\text{argmin}}\limits_{\mathbf{P}\in \hat{\mathcal{P}}}\langle\nabla g(\mathbf{P}^{(k)}),\mathbf{P}\rangle\label{equation18},
\label{eq:fw1}
\\
&& \mathbf{P}^{(k+1)}=\mathbf{P}^{(k)} + \alpha^{(k)}(\tilde{\mathbf{P}}^{(k+1)}-\mathbf{P}^{(k)}),
\label{eq:fw2}
\end{eqnarray}
where $\alpha^{(k)}$ is the step size at iteration $k$, obtained by a line search procedure~\cite{[Goldstein-1965]}, and $\nabla g$ is computed using Eq.\eqref{gra_F} or Eq.\eqref{gra_G}.
In Eq.\eqref{eq:fw1}, the minimizer $\tilde{\mathbf{P}}^{(k+1)} \in \hat{\mathcal{P}}$ is theoretically an extreme point of $\hat{\mathcal{P}}$ and is thus binary, which means that $\tilde{\mathbf{P}}^{(k+1)} \in \mathcal{P}$. Therefore, Eq.\eqref{eq:fw1} is a linear assignment problem (LAP) that can be solved efficiently by algorithms such as the Hungarian~\cite{[2010-Kuhn]} or LAPJV~\cite{[1987-Jonker]} algorithm. Moreover, since $\tilde{\mathbf{P}}^{(k+1)}$ is binary in each iteration, the final solution $\mathbf{P}^*$ is (nearly) binary after minimizing $G_{\bar{X}Y}(\mathbf{P})$.
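A minimal FW loop with the linear subproblem solved as a LAP, assuming SciPy is available. This sketch replaces the paper's line search with the standard $2/(k+2)$ step schedule, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frank_wolfe(grad, P0, max_iter=200, tol=1e-7):
    """Frank-Wolfe over the relaxed assignment polytope (a sketch).

    grad(P) returns the gradient of the objective at P.  The linear
    subproblem argmin_P <grad, P> is a LAP, solved here with SciPy's
    Hungarian-style solver; the step size uses the 2/(k+2) schedule
    in place of a line search."""
    P = P0.copy()
    for k in range(max_iter):
        G = grad(P)
        rows, cols = linear_sum_assignment(G)   # extreme (binary) point
        Pt = np.zeros_like(P)
        Pt[rows, cols] = 1.0
        if np.sum(G * (P - Pt)) < tol:          # FW duality gap
            break
        P = P + 2.0 / (k + 2.0) * (Pt - P)
    return P
```

For a linear objective (constant gradient), the loop converges to the LAP optimum, illustrating why each subproblem returns a binary extreme point.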
{\bf Convergence} The FW method guarantees at least a sublinear convergence rate~\cite{[2015-Simon-nips]}, which may require many iterations when solving the non-convex function $F_{XY}(\mathbf{P})$. However, minimizing $F_{XY}(\mathbf{P})$ within 200 iterations is sufficient because its solution is only used as the initialization for minimizing $G_{\bar{X}Y}(\mathbf{P})$, which is convex and converges faster. In our experiments, $G_{\bar{X}Y}(\mathbf{P})$ always converges to a $10^{-7}$ tolerance within $k\le 100$ iterations. Compared to the path-following methods of~\cite{[2009-Zaslavskiy-pami],[2016-Zhou-pami],[2014-Liu-pami]}, which solve the two relaxed objective functions combined together, our optimization strategy is faster and achieves higher matching accuracy.
{\bf Local optimum vs. global optimum} The FW method guarantees only a local optimum of the non-convex objective $F_{XY}(\mathbf{P})$. However, as discussed above, this local optimum is used as the initialization for solving the convex objective $G_{\bar{X}Y}(\mathbf{P})$, for which a global optimum can be reached.
{\bf Computational complexity}\label{complexity} For our method, the space complexity is $\mathbf{O}(mn)$, which is considerably smaller than the $\mathbf{O}(m^2n^2)$ required by most other methods on complete graphs. The time complexity is $\mathbf{O}(Tn^3)$, where $T$ is the number of FW iterations. This complexity can be decomposed as $\mathbf{O}\left(T(\tau_{f} +\tau_{l}) + \tau_s \right)$, where $\tau_s=\mathbf{O}(m^2)$ is the cost of computing the edge attribute matrices of $\mathcal{G}_X$. In each FW iteration, $\tau_{f}=\mathbf{O}(m^2n)$ is the cost of computing the gradient, function value and step size at $\mathbf{P}^{(k)}$, and $\tau_{l}=\mathbf{O}(n^3)$ is the cost of solving Eq.\eqref{eq:fw1} with the Hungarian algorithm.
\section{Experimental analysis}\label{section_results}
In this section, we evaluate our method {\bf{ATGM}} on both synthetic data and real-world datasets. We compare our method with state-of-the-art methods including GA~\cite{[1996-Gold]}, PM~\cite{[2008-Zass-cvpr]}, SM~\cite{[2005-Leordeanu]}, SMAC~\cite{[2006-Cour-nips]}, IPFP~\cite{[2009-Leordeanu-nips]}, RRWM~\cite{[2010-Cho-eccv]}, FGM~\cite{[2016-Zhou-pami]} and MPM~\cite{[2014-Cho-cvpr]}. As suggested in ~\cite{[2009-Leordeanu-nips]}, we use the solution of SM as the initialization for IPFP. Also, for FGM, we use the deformable graph matching method called FGM-D.
In all the experiments, to apply the unified parameters $\lambda=1,\lambda_1=10^3$, and $\lambda_2=1$, we normalize the node coordinates to $[0,1]$ for our method. For the non-convex objective function $F_{XY}$, we compute its unary term using Shape Context~\cite{[2002-Belongie-pami]}. For comparison, the average accuracy of each algorithm is reported. Our objective functions differ from those of the compared methods, and thus comparing objective scores or objective ratios is not meaningful.
\subsection{Results on synthetic data}
We perform a comparative evaluation of ATGM on synthesized random point sets following~\cite{[2016-Zhou-pami],[2017-Jiang-cvpr],[2010-Cho-eccv]}. The synthetic points of $\mathcal{G}_X$ and $\mathcal{G}_Y$ are constructed as follows: for the graph $\mathcal{G}_X$, $n_{in}$ inlier points are randomly generated in $\mathbb{R}^2$ from the Gaussian distribution $\mathcal{N}(0,1)$.
The graph $\mathcal{G}_Y$ with noise is generated by adding Gaussian noise $\mathcal{N}(0,\sigma^2)$ to each $X_i\in X$, to evaluate the robustness of the method to deformation noise.
The graph $\mathcal{G}_Y$ with outliers is generated by adding $n_{out}$ additional points drawn from the Gaussian distribution $\mathcal{N}(0,1)$ in $\mathbb{R}^2$, to evaluate the robustness to outliers.
For the compared methods, as in~\cite{[2016-Zhou-pami]}, we set the edge affinity matrix to $\mathbf{W}_{i_1j_1,i_2j_2}=\text{exp}(-\frac{(||X_{i_1i_2}||-||Y_{j_1j_2}||)^2}{0.15})$. We set $\mathbf{S}\in \mathbb{R}^{m\times m}$ as $S_{i_1i_2}={||X_{i_1i_2}||}^{-1}$ for $F_{XY}(\mathbf{P})$ and $G_{XY}(\mathbf{P})$ with a fully connected $\mathcal{G}_X$. For $G_{\bar{X}Y}(\mathbf{P})$, our method performs a Delaunay triangulation on $X$ to obtain its edge set $\bar{\mathcal{E}}_{X}$; then, $\bar{\mathcal{E}}_{X}$ is divided into two clusters by k-means on edge length, and the longer edges are discarded.
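Constructing this edge-affinity matrix makes the memory bottleneck of affinity-based methods concrete: the matrix has $\mathbf{O}(m^2n^2)$ entries. A sketch (illustrative names; the $0.15$ bandwidth follows the setting quoted above):

```python
import numpy as np

def edge_affinity(X, Y, sigma2=0.15):
    """W_{i1 j1, i2 j2} = exp(-(||X_{i1 i2}|| - ||Y_{j1 j2}||)^2 / sigma2),
    stored as an (m*n) x (m*n) matrix -- exactly the O(m^2 n^2) storage
    discussed in the text."""
    LX = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # (m, m)
    LY = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)   # (n, n)
    diff = LX[:, None, :, None] - LY[None, :, None, :]      # (m, n, m, n)
    m, n = len(X), len(Y)
    return np.exp(-diff ** 2 / sigma2).reshape(m * n, m * n)
```

Even at $m=n=100$ this matrix holds $10^8$ entries, which is why the compared methods fall back to sparse Delaunay graphs at that scale.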
{\bf Memory efficiency}
As analyzed in Sec.\ref{complexity}, the space complexity of our method is lower than that of compared methods.
In this experiment, we verify that ATGM can match graphs with low memory consumption while achieving better accuracy.
Since the compared methods achieve better accuracy on complete graphs, for fairness, we first applied all methods to complete graphs of a relatively small size $n_{in}=20$. We then enlarged the size to $n_{in}=100$ to test the advantages of ATGM in terms of memory efficiency. Due to their high space complexity, the other methods had to be applied to graphs with Delaunay triangulation, whereas our method can use complete graphs thanks to its lower space complexity $\mathbf{O}(mn)$.
As shown in Fig.\ref{fig:syn_menory} (a) and (b), under the complete graph setting, our method achieves the highest average accuracy in the case with deformation noise and achieves competitive results in the case with outliers.
For graphs of large size, our method outperforms all the other methods (shown in Fig.\ref{fig:syn_menory} (c) and (d)). In contrast, using complete graphs with a large number of nodes is infeasible in practice for the other methods. Except for PM~\cite{[2008-Zass-cvpr]}, all of the compared methods have to use $n_{in}^2(n_{in}+n_{out})^2$ units of memory, which becomes extremely large for $n_{in}=100, n_{out}\geq 100$. This requirement limits their application to graph matching in practice.
\begin{figure}[htb!]
\centering
{\includegraphics[width=0.8\linewidth]{./Images/fig_henglan.pdf}}\\
\subfigure[]
{\includegraphics[width=0.24\linewidth]{./Images/fig_20_noise.pdf}}
\subfigure[]
{\includegraphics[width=0.24\linewidth]{./Images/fig_20_outlier.pdf}}
\subfigure[]
{\includegraphics[width=0.24\linewidth]{./Images/fig_syn_100_noise_del_full_10_acc.pdf}}
\subfigure[]
{\includegraphics[width=0.24\linewidth]{./Images/fig_syn_100_out_del_full_acc.pdf}}
\caption{Comparisons of robustness to noise and outliers. For complete graphs, the accuracy with respect to the noise level and the number of outliers is shown in (a) and (b), respectively.
The results for graphs connected by Delaunay triangulation are shown in (c) and (d).}
\label{fig:syn_menory}
\end{figure}
{\bf Running time} To compare the time consumption of all methods, we tested them in both equal-size and unequal-size cases, namely, (1) $n_{in}=10,20,...,100$, $n_{out}=0$, $\sigma = 0.2$ and (2) $n_{in} = 100$, $n_{out} = 10,20,...,100$, $\sigma = 0.05$. Considering the effect of the number of edges on time consumption, in equal-size cases, we applied all methods to both complete and Delaunay triangulation-connected graphs. In unequal-size cases, we applied our method to complete graphs and the others to Delaunay triangulation-connected graphs so that ATGM took more edges than the others.
As shown in Fig.\ref{fig_syn_time} (a) and (b), where graphs are either complete or connected by Delaunay triangulation, our method takes an intermediate running time and achieves the highest average accuracy. As shown in Fig.\ref{fig_syn_time} (c), even though ATGM handles more edges than the other methods, it runs in an acceptable time with the highest accuracy. Compared with GA, SM, PM, SMAC, and IPFP-S, which run faster, ATGM achieves higher average accuracy. When matching complete graphs, RRWM, FGM, and MPM achieve accuracy competitive with ATGM; however, their running times increase rapidly and become larger than ours.
\begin{figure}[htb!]
\centering
{\includegraphics[width=0.8\linewidth]{./Images/fig_henglan.pdf}}\\
{\includegraphics[width=0.3\linewidth]{./Images/fig_noise2_full_acc.pdf}}
{\includegraphics[width=0.3\linewidth]{./Images/fig_noise2_del_acc.pdf}}
{\includegraphics[width=0.3\linewidth]{./Images/fig_syn_noise005_out_fulldel_acc.pdf}}
\subfigure[]
{\includegraphics[width=0.3\linewidth]{./Images/fig_noise2_full_time.pdf}}
\subfigure[]
{\includegraphics[width=0.3\linewidth]{./Images/fig_noise2_del_time.pdf}}
\subfigure[]
{\includegraphics[width=0.3\linewidth]{./Images/fig_syn_noise005_out_fulldel_time.pdf}}
\caption{Comparisons of running time and average accuracy. Graphs in (a) are complete, and those in (b) are Delaunay triangulation-connected. In (c), only ATGM uses complete graphs, while the others use Delaunay triangulation-connected graphs.}
\label{fig_syn_time}
\end{figure}
\begin{table}[htb!]
\centering
\caption{Average accuracy and running time of ATGM on synthetic data with varying inliers $n_{in}$, deformation noise $\sigma$ and outliers $n_{out}$.}
\begin{minipage}{0.49\linewidth}
\centering
\scriptsize
\begin{tabular}{cc|ccccc}
\toprule[1.0pt]
\multicolumn{1}{c}{\#Inlier}&{Noise ($\sigma$)}&0.02 & 0.04 & 0.06 & 0.08 & 0.10 \\
\hline
\multirow{2}{20pt}{100}
&time (s) &0.22&0.51&0.74&0.78&1.01 \\
&acc. (\%) &99.10 &94.15 &89.75 &84.2 &73.9 \\
\hline
\multirow{2}{20pt}{300}
&time (s) &3.34&5.43&6.72&7.73&8.02 \\
&acc. (\%) &96.87 &88.33 &74.37 &60.13 &51.33 \\
\hline
\multirow{2}{20pt}{500}
&time (s)&23.33&32.47&33.12&33.81&35.24 \\
&acc. (\%)&94.20 &79.96 &62.32 &48.54 &38.72 \\
\hline
\multirow{2}{20pt}{1000}
&time (s)&147.15&150.92&156.71&156.99&159.26 \\
&acc. (\%)&89.43 &66.34 &45.23 &33.47 &25.27 \\
\bottomrule[1.0pt]
\end{tabular}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\scriptsize
\begin{tabular}{cc|ccccc}
\toprule[1.0pt]
\multicolumn{1}{c}{\#Inlier}&{Outlier ratio} &0.2 &0.4 & 0.6 & 0.8 & 1.0 \\
\hline
\multirow{2}{20pt}{100}
&time (s)&1.90&3.11&3.81&4.62&5.51\\
&acc. (\%)&99.90 &99.80&99.90 &99.80 &99.60 \\
\hline
\multirow{2}{20pt}{300}
&time (s)&17.02&22.70&42.92&47.70&55.13\\
&acc. (\%)&100.00 &99.80&99.67 &99.70 &99.53 \\
\hline
\multirow{2}{20pt}{500}
&time (s)&107.24&123.42&146.99&187.84&185.54\\
&acc. (\%)&99.86 &99.88 &99.64 &98.24 &81.30 \\
\hline
\multirow{2}{20pt}{1000}
&time (s)&563.83&645.11&758.73&882.18&1070.26\\
&acc. (\%)&99.84 &98.95 &88.44 &78.17 &71.33 \\
\bottomrule[1.0pt]
\end{tabular}
\end{minipage}
\label{table_largescale}
\end{table}
{\bf Large-scale graph matching.}
To test the efficiency of our method when applied to large-scale graphs, we carried out more challenging experiments by setting the number of inliers as $n_{in}=100,300,500,1000$ with deformation noise and outliers. The number of outliers was set to $20\%,40\%,...,100\%$ of the number of inliers.
As reported in Tab.\ref{table_largescale}, ATGM is very robust to outliers but less robust to strong deformation noise on larger graphs. Since the compared methods need to store affinity matrices of size approximately $n_{in}^2(n_{in}+n_{out})^2$, applying them to large-scale graphs with hundreds or thousands of nodes is infeasible.
\subsection{Results on real-world datasets}
We also perform comparative evaluations on real-world datasets, including the CMU House sequence\footnote{\url{http://vasc.ri.cmu.edu//idb/html/motion/house/index.html}} and the PASCAL Cars and Motorbikes pairs~\cite{[2012-Leordeanu-ijcv]}, which are commonly used to evaluate graph matching algorithms.
\begin{figure}[htb!]
\centering
\subfigure[20-vs-30 (ATGM:~20/20)]
{\includegraphics[height=0.11\linewidth,width=0.32\textwidth]{./Images/fig_house_ex2.pdf}}
\subfigure[28-vs-48 (ATGM:~28/28)]
{\includegraphics[height=0.11\linewidth,width=0.32\textwidth]{./Images/fig_car_ex2.pdf}}
\subfigure[46-vs-86 (ATGM:~44/46)]
{\includegraphics[height=0.11\linewidth,width=0.32\textwidth]{./Images/fig_motor_ex2.pdf}}
\caption{Examples of matching unequal-size graphs using ATGM on real-world datasets. The red dots are inliers in $\mathcal{G}_X$, and yellow plus signs are both inliers and outliers in $\mathcal{G}_Y$. The lines in green are correct matches, while those in red are incorrect.}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.8\linewidth]{./Images/fig_henglan.pdf}\\
\subfigure[House:30-vs-30]
{\includegraphics[width=0.32\linewidth]{./Images/house_30vs30.pdf}}
\subfigure[House: 25-vs-30]
{\includegraphics[width=0.32\linewidth]{./Images/house_25vs30.pdf}}
\subfigure[House: 20-vs-30]
{\includegraphics[width=0.32\linewidth]{./Images/house_20vs30.pdf}}
\caption{Comparison of average accuracy on the House sequence in both equal-size and unequal-size cases.}\label{fig:CMU-house}
\end{figure}
\begin{figure}[htb!]
\centering
{\includegraphics[width=0.9\linewidth]{./Images/fig_legend_bar.pdf}}\\
{\includegraphics[width=0.48\linewidth]{./Images/fig_cars_bar_acc.pdf}}
{\includegraphics[width=0.48\linewidth]{./Images/fig_motors_bar_acc.pdf}}
\caption{Comparison on cars (left) and motorbikes (right) image pairs with outliers.}
\label{fig:car-motor}
\end{figure}
The CMU House sequence consists of 111 frames of a synthetic house. Each image contains 30 feature points that are manually marked with known correspondences. In this experiment, we matched all the image pairs separated by 10, 20, \ldots, 90 frames. The unequal-size cases are set as 20-vs-30 and 25-vs-30. For the compared methods, we set the edge affinity to $\mathbf{W}_{i_1j_1,i_2j_2}=\text{exp}(-\frac{(||X_{i_1i_2}||-||Y_{j_1j_2}||)^2}{2500})$, the same as in~\cite{[2016-Zhou-pami]}.
The PASCAL dataset for graph matching consists of 30 pairs of car images and 20 pairs of motorbike images. Each pair contains inliers (approximately 30--60 feature points) with ground-truth labels as well as randomly marked outliers. In the unequal-size matching case, we added 5, 10, 15, and 20 outliers to $\mathcal{G}_Y$. For the compared methods, we set the edge affinity matrix as in Eq.\eqref{equationedge}, which was used in~\cite{[2016-Zhou-pami]}.
{\bf Average accuracy}
For the CMU House sequence, as shown in Fig.\ref{fig:CMU-house}, our method achieves a higher accuracy in both equal-size and unequal-size cases. Meanwhile, our method outperforms all the compared methods on the PASCAL datasets because our method can remove the outliers automatically. The results are shown in Fig.\ref{fig:car-motor}.
{\bf Effect of objective functions} As discussed in Sec.\ref{method}, the objective function $G_{\bar{X}Y}$ affects both the sparsity of the solution and the matching accuracy. First, to evaluate the sparsity of $\mathbf{P}\in [0,1]^{m\times n}$, we define the index $S_r(\mathbf{P})= \frac{\sum_{i}\mathbb{I}(\max_j\mathbf{P}_{ij}\ge r)}{m}$, where $\mathbb{I}$ is the indicator function. We evaluated $S_r(\mathbf{P})$ on the House sequence with $r=0.9$. As shown in Tab.\ref{tab_acc_sparse}, the optimal representation map $\mathbf{P}^*$ of $G_{\bar{X}Y}$ is (nearly) binary in all cases. Then, we evaluated the average accuracy in two cases: (1) minimizing $F_{XY}$ only, and (2) applying $G_{\bar{X}Y}$ after $F_{XY}$ is minimized. As shown in Tab.\ref{tab_acc_sparse}, the average accuracy is greatly improved by $G_{\bar{X}Y}$, especially in unequal-size cases. These results show that $G_{\bar{X}Y}$ enhances the sparsity of the assignment matrix and reduces the node shifting.
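Under the reading that $S_r$ counts the fraction of rows whose largest entry reaches the threshold $r$, the sparsity index can be computed as (illustrative sketch, not the authors' code):

```python
import numpy as np

def sparsity_index(P, r=0.9):
    """S_r(P): fraction of the m rows of P whose largest entry is >= r,
    i.e. rows whose assignment mass concentrates on a single column."""
    return np.mean(P.max(axis=1) >= r)
```

A binary (partial permutation) matrix scores $1.0$, while a nearly uniform map scores $0.0$.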
\begin{table}[htb!]
\centering
\footnotesize
\caption{Effect of objective functions $F_{XY}$ and $G_{\bar{X}Y}$ on both the sparsity of the assignment matrix $\mathbf{P}^*$ and the average matching accuracy of the house sequence dataset.}
\begin{tabular}{c|c|ccccccccc}
\toprule[0.8pt]
\multirow{1}{30pt}{Size}&\#Separation &10& 20 & 30 & 40&50 & 60& 70 & 80& 90 \\ \hline
\multirow{3}{30pt}{m=20\\n=30}
& Sparsity&98.18&97.98&97.59&96.11&95.15&89.63&90.79&79.58&80.90\\
\cline{2-11}
& acc. (F) &59.60&58.89&57.78&56.18&55.38&54.05&53.52&51.90&51.39\\
&acc. (F\&G)&\textbf{98.25}&\textbf{97.86}&\textbf{96.84}&\textbf{93.97}&\textbf{92.11}&\textbf{88.37}&\textbf{85.66}&\textbf{79.37}&\textbf{77.67}\\
\hline
\multirow{3}{30pt}{m=25\\n=30}
&Sparsity&99.92&100.00&100.00&99.72&99.42&98.35&98.42&96.63&92.43\\
\cline{2-11}
&acc. (F) &81.25&80.24&78.15&76.56&75.80&74.93&73.25&71.05&68.92\\
&acc. (F\&G)&\textbf{99.92}&\textbf{99.71}&\textbf{99.42}&\textbf{98.66}&\textbf{97.70}&\textbf{96.05}&\textbf{94.63}&\textbf{91.63}&\textbf{89.08}\\
\hline
\multirow{3}{30pt}{m=30\\n=30}
& Sparsity&100.00&100.00&100.00&100.00&100.00&100.00&100.00&100.00&100.00\\
\cline{2-11}
& acc. (F) &100.00&100.00&100.00&100.00&100.00&100.00&100.00&100.00&99.68\\
&acc. (F\&G)&\textbf{100.00}&\textbf{100.00}&\textbf{100.00}&\textbf{100.00}&\textbf{100.00}&\textbf{100.00}&\textbf{100.00}&\textbf{100.00}&\textbf{100.00}\\
\bottomrule[0.8pt]
\end{tabular}
\label{tab_acc_sparse}
\end{table}
\begin{table}[ht!]
\centering
\scriptsize
\caption{Effectiveness of outlier removal strategy. This strategy improves the average matching accuracy by more than ${\textbf{10}\%}$ for almost all the methods.}
\begin{tabular}{cc|ccccccccc}
\toprule[0.8pt]
\multirow{1}{20pt}{Data}&Out.Re. & GA~\cite{[1996-Gold]}& PM~\cite{[2008-Zass-cvpr]} & SM~\cite{[2005-Leordeanu]} & SMAC~\cite{[2006-Cour-nips]}&IPFP-S~\cite{[2009-Leordeanu-nips]} & RRWM~\cite{[2010-Cho-eccv]} & FGM-D~\cite{[2016-Zhou-pami]} & MPM~\cite{[2014-Cho-cvpr]}& {\bf ATGM} \\
\hline
\multirow{2}{20pt}{Cars}
& w/o&34.50&37.04&38.04&38.53&26.74&53.84&49.05&58.02&-\\
& w/&\textbf{61.93}&\textbf{60.71}&\textbf{63.55}&\textbf{49.54}&\textbf{65.55}&\textbf{70.37}&\textbf{70.62}&\textbf{63.44}&\textbf{71.83}\\
\hline
\multirow{2}{15pt}{Motor.}
& w/o& 45.97& 43.56&47.13& 43.84&34.90& 65.64&67.31& 65.73& -\\
& w/&\textbf{ 66.53}&\textbf{61.91}&\textbf{67.43}&\textbf{52.06} &\textbf{75.80}& \textbf{72.61}&\textbf{76.76}&\textbf{69.46}&\textbf{74.75}\\
\bottomrule[0.8pt]
\end{tabular}
\label{table_re}
\end{table}
{\bf Effectiveness of outlier removal} Finally, our proposed outlier removal strategy is not restricted to our approach; it can be applied to any other method. To evaluate its generality, we applied it as a pre-processing step and then executed the other methods on the pre-processed input. As shown in Tab.\ref{table_re}, the average accuracy of all the methods improves greatly, with almost all methods gaining more than $10\%$.
\section{Conclusions}\label{section6}
In this paper, we presented a new approach to the graph matching problem from a functional representation perspective by redefining the assignment matrix as a linear representation map. Our approach reduces both the space and time complexity significantly; thus, it is suitable for matching complete graphs with hundreds or thousands of nodes. In addition to the transformation map, we presented a domain adaptation-based method for outlier removal that improves the performance of all methods. In future work, we plan to study graph matching on more general manifolds (or metric spaces) and hyper-graph matching with lower computational complexity.
\section{Acknowledgement}
This research is supported by the National Natural Science Foundation of China (NSFC) under contracts No.~61771350 and No.~41501462.
\bibliographystyle{splncs04}
\section{Introduction}
The charged particle transverse momentum (\pt) spectrum is an important observable for understanding the fundamental
quantum chromodynamic (QCD) interactions involved in proton-proton collisions.
While the energy dependence of the bulk of particle production with \pt\ below a few
\ensuremath{{\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}
is typically described either empirically or with phenomenological models, the rest of the spectrum
can be well described by a convolution of parton
distribution functions, the hard-scattering cross section from perturbative calculations, and fragmentation
functions. Such a prescription has been generally successful over a large range of lower energy pp and p$\bar{\mathrm{p}}$
collisions~\citep{Arleo:2008zd,Adare:2007dg,Adare:2008qb,Aaltonen:2009ne,PhysRevD.82.119903,:2010ir,Aamodt:2010my}.
Along with measurements of the jet production cross section
and fragmentation functions,
measurements of high-\pt\ spectra provide a test of factorised perturbative QCD (pQCD)~\cite{Yoon:2010fa} at the highest collision energy to date.
In addition to its relevance to the understanding of pQCD,
the charged particle spectrum in pp collisions will be an important reference for measurements
of high-\pt\ particle suppression in the dense QCD medium produced in heavy-ion collisions.
At the Relativistic Heavy Ion Collider (RHIC), the sizable suppression of high-\pt\ particle production,
compared to the spectrum expected from a superposition of a corresponding number of
pp collisions, was one of the first indications of strong final-state medium effects~\cite{Back:2004je,Adams:2005dq,Adcox:2004mh,Arsene:2004fa}.
A similar measurement of nuclear modification to charged particle \pt\ spectra has been one of the first heavy-ion results at the
Large Hadron Collider (LHC)~\cite{Aamodt:2010jd}.
The reference spectrum for the PbPb collisions at $\sqrt{s_{_{\mathrm{NN}}}}=2.76$\TeV\ per nucleon can be constrained
by interpolating between the pp spectra measured at $\sqrt{s}$ = 0.9 and 7\TeV.
In this paper, the phase-space-invariant differential yield $E\,d^{3}N_{\mathrm{ch}}/dp^{3}$ is presented for primary charged
particles with energy ($E$) and momentum ($p$),
averaged over the pseudorapidity acceptance of the Compact Muon Solenoid (CMS) tracking system ($|\eta|<2.4$).
The pseudorapidity is defined as $-\ln[\tan(\theta/2)]$, with $\theta$ being the polar angle of the charged particle with respect
to the counterclockwise beam direction.
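The pseudorapidity definition above can be evaluated directly. The following is a small illustrative sketch (the function name is ours, not part of any analysis code); note that $\theta=\pi/2$, i.e.\ a particle perpendicular to the beam, gives $\eta=0$:

```python
import math

def pseudorapidity(theta: float) -> float:
    """Pseudorapidity eta = -ln[tan(theta/2)] for a polar angle theta in radians."""
    return -math.log(math.tan(theta / 2.0))

# Inverse relation: the tracker edge |eta| = 2.4 corresponds to
# theta = 2*atan(exp(-2.4)), i.e. about 10.4 degrees from the beam axis.
```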
The number of primary charged particles ($N_{\mathrm{ch}}$) is defined to include decay products of particles with proper lifetimes less than 1\cm.
Using the integrated luminosities calculated in Refs.~\cite{EWK-10-004,EWK-11-001} with an estimated uncertainty of 11\% and 4\%
at $\sqrt{s}=0.9$ and 7\TeV, respectively, the differential cross sections are constructed and compared to a scaling
with the variable \mbox{$\xt\equiv2\pt/\sqrt{s}$}. Such a scaling has already been observed
for p$\bar{\mathrm{p}}$ measurements at lower collision energies~\citep{Albajar:1989an,Abe:1988yu,Aaltonen:2009ne,PhysRevD.82.119903}.
For consistency with the CDF measurements at $\sqrt{s}=0.63$, 1.8, and 1.96\TeV, the pseudorapidity
range of the \xt\ distributions has been restricted to $|\eta|<1.0$.
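The scaling variable is a simple ratio; as an illustrative sketch (names ours), the same \xt\ is probed at very different \pt\ values at the two collision energies:

```python
def x_t(pt: float, sqrt_s: float) -> float:
    """Scaling variable x_T = 2*pT/sqrt(s); pT and sqrt(s) in the same units."""
    return 2.0 * pt / sqrt_s

# Example: pT = 3.5 GeV/c at sqrt(s) = 7000 GeV and pT = 0.45 GeV/c
# at sqrt(s) = 900 GeV correspond to the same x_T = 0.001.
```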
Finally, using the new measurements presented in this paper,
as well as previously measured pp and p$\bar{\mathrm{p}}$ cross sections, an estimate of the differential transverse momentum
cross section is constructed at the interpolated energy of $\sqrt{s}=2.76$\TeV, corresponding to the nucleon-nucleon centre-of-mass energy
of PbPb collisions recorded at the LHC.
The paper is organised as follows:
Section~\ref{sect:detector} contains a description of the CMS detector;
Section~\ref{sec:evtSel} describes the trigger and event selection;
Sections~\ref{sec:vtx} and \ref{sec:trk} detail the reconstruction and selection of primary vertices and tracks;
Section~\ref{sec:jetet} explains the characterisation of events based on the leading-jet transverse energy;
Section~\ref{sec:corr} describes the various applied corrections and systematic uncertainties;
Section~\ref{sec:results} presents the final invariant differential yields and comparisons to data and simulation;
and Section~\ref{sec:interpolation} discusses the interpolation procedures used to construct a reference spectrum at $\sqrt{s}=2.76$\TeV.
\section{The CMS Detector}
\label{sect:detector}
A detailed description of the CMS experiment can be found in Ref.~\cite{JINST}.
The central feature of the CMS apparatus is a superconducting solenoid
of 6\,m internal diameter, providing an axial magnetic field of 3.8\,T.
Immersed in the magnetic field are the pixel tracker, the silicon strip
tracker, the lead tungstate crystal electromagnetic calorimeter
(ECAL), and the brass/scintillator hadron calorimeter (HCAL).
Muons are measured in gas ionisation
detectors embedded in the steel return yoke.
The CMS experiment uses a right-handed coordinate system, with the origin at
the nominal interaction point, the $x$ axis pointing to the centre of the
LHC ring, the $y$ axis pointing up perpendicular to the plane of the LHC, and the
$z$ axis along the counterclockwise beam direction. The azimuthal angle,
$\phi$, is measured in the ($x$,\,$y$) plane.
The tracker consists of 1440 silicon pixel and 15\,148 silicon strip
detector modules and measures charged particle trajectories within the nominal
pseudorapidity range $|\eta|< 2.4$. The pixel tracker consists of three
53.3\cm-long barrel layers and two endcap disks on each side of the barrel
section. The innermost barrel layer has a radius of 4.4\cm, while for the
second and third layers the radii are 7.3\cm and 10.2\cm, respectively.
The tracker is designed to provide an impact parameter resolution of about
100\micron\ and a transverse momentum resolution of about 0.7\,\% for
1\GeVc charged particles at normal incidence ($\eta=0$)~\cite{CMSTDR1}.
The tracker was aligned as described in Ref.~\cite{TrackerAlign} using
cosmic ray data prior to the LHC commissioning. The
precision achieved for the positions of the detector modules with respect
to particle trajectories is 3--4\micron\ in the barrel for the coordinate
in the bending plane ($\phi$).
Two elements of the CMS detector monitoring system, the beam scintillator
counters (BSC) \cite{JINST, Bell} and the beam pick-up timing for the experiments
devices (BPTX)~\cite{JINST, Aumeyr}, were used to trigger the
detector readout. The BSCs are located at a distance
of 10.86\,m from the nominal interaction point (IP), one on each side, and are sensitive
in the $|\eta|$ range from 3.23 to 4.65. Each BSC is a set of $16$
scintillator tiles. The BSC elements have a time resolution of 3\,ns,
an average minimum ionising particle detection efficiency of 95.7\%, and
are designed to provide hit and coincidence rates.
The two BPTX devices, located around the beam pipe at a position of $z = \pm
175$\,m from the IP, are designed to provide precise
information on the bunch structure and timing of the incoming beam, with
better than 0.2\,ns time resolution.
The two steel/quartz-fibre forward calorimeters (HF), which extend the calorimetric coverage
beyond the barrel and endcap detectors to the $|\eta|$ region between 2.9 and 5.2,
were used for further offline selection of collision events.
The detailed Monte Carlo (MC) simulation of the CMS detector response is based
on \GEANTfour~\cite{GEANT4}.
Simulated events were processed and reconstructed in the same manner as collision
data.
\section{Event Selection}
\label{sec:evtSel}
This analysis uses data samples collected from 0.9 and 7\TeV pp collisions in the first months of the 2010 LHC running,
corresponding to integrated luminosities of $(231 \pm 25)$\microbinv\ and $(2.96 \pm 0.12)$\pbinv,
respectively~\cite{EWK-10-004,EWK-11-001}.
This section gives a brief description of the requirements imposed to select good events for this analysis.
A more detailed description of the CMS trigger selections can be found in Ref.~\cite{Khachatryan:2010xs}.
First, a minimum bias trigger was used to select events with a signal in any of the BSC tiles,
coincident with a signal from either of the two BPTX detectors, indicating
the presence of at least one proton bunch crossing the interaction point. From this sample, collision events were
selected offline by requiring a coincidence of BPTX signals, indicating the presence of both beams.
To select preferentially non-single-diffractive (NSD) events, at least one forward calorimeter (HF) tower
with energy deposition $E>3$\GeV in each of the forward and backward hemispheres was required. Events with beam-halo muons
crossing the detector were identified and rejected based on the time difference between BSC hits on either side of
the interaction point. Beam-induced background events, producing anomalous numbers of low-quality tracks, were rejected by
requiring that at least 25\% of the charged particles reconstructed in the pixel--silicon tracking system satisfied the \textit{highPurity}
criterion. This criterion, described in Ref.~\cite{TRK-10-001}, consists of numerous selections on the properties of the tracks,
including the normalised $\chi^2$, the compatibility with the beamline and primary vertices, the number of hit layers,
the number of `3D' layers, and the number of lost layers.
The selection on the fraction of \textit{highPurity} tracks was only applied to events with more than 10 tracks, providing a clean separation
between real pp collisions and beam backgrounds. The remaining non-collision event fraction,
determined by applying the same selections to events where only a single beam was crossing the interaction point,
is estimated to be less than $2 \times 10^{-5}$.
Events were required to have at least one primary vertex, reconstructed according to the description in the following section
from triplets of pixel hits.
A further requirement, namely at least one vertex found from fully reconstructed tracks (see next section for details)
with number of degrees of freedom
($Ndof$) greater than four, was imposed to improve the robustness against triggered events containing
multiple pp collisions, i.e., ``event pileup''. The loss in event selection efficiency
from the fully-reconstructed-track vertex compared to the pixel vertex alone was determined entirely from data, based on a subset
of early runs with negligible event pileup.
The percentage of events remaining after each selection step is presented in Table~\ref{tab:evtSel}.
For a large part of the 7\TeV data collection, the minimum bias trigger paths had to be prescaled by large factors because of
the increasing instantaneous luminosity of the LHC. In order to maximise the \pt\ reach of the charged particle transverse
momentum measurement at this centre-of-mass energy, two high-level trigger (HLT) paths were used that selected events with
minimum uncorrected transverse jet energies (\et) of 15 and 50\GeV, based only on information from the
calorimeters. While the higher threshold path was not prescaled during the 7\TeV data-taking period corresponding to
the 2.96\pbinv\ used in this analysis, the lower threshold path had to be prescaled for a significant fraction of this sample.
The 0.9\TeV\ data sample consists of 6.8 million minimum bias triggered events, while the 7\TeV sample is composed
of 18.7 million minimum bias events, and 1.4 (5.6) million events selected with the HLT minimum-\et\ values of 15 (50)\GeV.
The selection efficiency for NSD events was determined based on simulated events from the \textsc{pythia}~\cite{Sjostrand:2006za}
event generator (version 6.420, tune D6T~\cite{Bartalini:2009xx}) that were subsequently passed through
a Monte Carlo simulation of the CMS detector response.
The resulting event selection efficiency as a function of the multiplicity of reconstructed charged particles
is shown for 7\TeV collisions in Fig.~\ref{fig:evtSelEff}.
The corresponding event selection efficiency is calculated by the same technique for the 0.9\TeV data (not shown).
Based on events simulated with \textsc{phojet}~\cite{Bopp:1998rc,Engel:1995sb} and \textsc{pythia}, the remaining fraction
of single-diffractive (SD) events in the selected sample was estimated to be ($5 \pm 1$)\% and ($6 \pm 1$)\%
for the 0.9 and 7\TeV data, respectively.
\begin{table}[tb]
\caption{Summary of event selection steps applied to the 0.9 and 7\TeV collision data sets and
the percentage of events from the original minimum bias samples that remain after each step. }
\centering
\begin{tabular}{ l c c }
\\ \hline \hline
Collision energy & 0.9\TeV & 7\TeV \\
\hline
Selection & \multicolumn{2}{c}{Percentage passing each selection cut} \\
\hline \hline
One BSC + one BPTX & 100.0 & 100.0 \\
BPTX coincidence & 94.49 & 90.05 \\
Beam halo rejection & 94.08 & 89.83 \\
HF coincidence & 73.27 & 83.32 \\
Beam background rejection & 73.26 & 83.32 \\
Valid pixel-track vertex & 70.14 & 82.48 \\
Quality full-track vertex & 64.04 & 77.35 \\
\hline \hline
\vspace{2mm}
\end{tabular}
\label{tab:evtSel}
\end{table}
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/evt_sel_eff_v1}
\label{fig:evtSelEff}}
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/multVtxZ_bothGood}
\label{fig:multPrimVtxZ}}
\caption{
(a) The efficiency ($\varepsilon_{\mathrm{NSD}}^{\mathrm{selected}}$ in Eq.~(\ref{eqn:evtWeight})) for selecting non-single-diffractive (NSD)
events as a function of the multiplicity of reconstructed charged particles in the tracker acceptance (\mbox{$|\eta|<2.4$})
after applying the full event selection described in the text,
including a single pixel-track vertex (filled circles) and additionally requiring a fully-reconstructed-track vertex
with $Ndof>4$ (open circles)
as described in Section~\ref{sec:vtx}.
Also, the remaining single-diffractive (SD) fraction ($f^{\mathrm{selected}}_{\mathrm{SD}}$ in Eq.~(\ref{eqn:evtWeight})) as a function of charged
particle multiplicity for the same selections (solid and dashed lines).
(b) Correlation between the $z$ positions, $z^0_{\mathrm{PV}}$ and $z^1_{\mathrm{PV}}$, of the two vertices with the most associated tracks
for measured events with more than one fully-reconstructed-track vertex satisfying the quality selections.
}
\vspace{4mm}
\label{fig:evtSel}
\end{figure}
\section{Primary Vertex}
\label{sec:vtx}
In this analysis, two separate algorithms are employed to determine the primary vertex position.
The first is a highly efficient algorithm based on pixel triplet tracks that requires a minimum of just a single track
consistent with the beam-spot position.
The position of the beam-spot, taken as the centre of the region where the LHC beams collide,
is calculated for each LHC fill based
on the average over many events of the three-dimensional fitted vertex positions~\cite{TRK-10-001}.
The second vertex-finding algorithm, based on fully reconstructed tracks with hits also in the silicon strip tracker,
is less efficient in selecting low-multiplicity events, but more robust in discriminating against event pileup.
Since pileup is significant over the majority of the analysed data sample, only the fully-reconstructed-track vertex is used
to construct the raw charged particle momentum spectra.
The raw spectra are subsequently corrected for the fraction of events with fewer than four tracks (and the fraction of tracks
in such low-multiplicity events), based on a subset of the event sample selected with the more efficient pixel-track vertex
requirement during collision runs with negligible event pileup.
To determine the $z$ position of the pixel vertex in each event, tracks consisting of three pixel hits
are constructed with a minimum \pt\ of 75\MeVc\ from a region within a transverse distance of 0.2\cm\ from the beam axis.
The $x$ and $y$ positions of the pixel vertex are taken from the transverse position of the beam axis.
Fitted tracks are selected based on the requirement that the transverse impact parameter is less than three times the
quadratic sum of the transverse errors on the track impact parameter and the beam axis position.
The selected tracks are then passed to an agglomerative algorithm~\cite{Sikler:2009nx},
which iteratively clusters the tracks into vertex-candidates. The procedure is halted when the distance between
nearest clusters, normalised by their respective position uncertainties, reaches 12. Only vertices consisting of at least
two tracks are kept, except when the event contains a single reconstructed track, which occurs in 1.67\% (0.99\%)
of the events at $\sqrt{s}=0.9$ (7)\TeV.
In the case of multiple vertex-candidates, only the vertex with the most associated tracks is kept.
While this occurs in as many as 20\% of events, the rejected vertex typically has very few
associated tracks and is highly correlated in $z$ position to the vertex with the most associated tracks.
These characteristics imply that the rejected vertices are not from event pileup, but rather from tracks
in the tails of the impact parameter distribution that are not agglomerated into the primary vertex.
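The agglomerative clustering of pixel tracks into vertex candidates can be sketched in one dimension as follows. This is a minimal illustration assuming Gaussian $z$ uncertainties, with a simplified stopping rule and candidate selection; it is not the CMS implementation:

```python
import math

def cluster_vertices(z, sigma, stop_distance=12.0):
    """Agglomerative 1D clustering of track z positions into vertex candidates.
    Each cluster is (weighted-mean z, combined sigma, n_tracks); the nearest
    pair of clusters (separation normalised by their position uncertainties)
    is merged until the smallest normalised distance reaches stop_distance.
    Returns the candidate with the most associated tracks."""
    clusters = [(zi, si, 1) for zi, si in zip(z, sigma)]
    while len(clusters) > 1:
        # find the pair with the smallest normalised separation
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                zi, si, _ = clusters[i]
                zj, sj, _ = clusters[j]
                d = abs(zi - zj) / math.hypot(si, sj)
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d >= stop_distance:
            break
        zi, si, ni = clusters[i]
        zj, sj, nj = clusters[j]
        # merge the pair with inverse-variance weights
        wi, wj = 1.0 / si**2, 1.0 / sj**2
        zm = (wi * zi + wj * zj) / (wi + wj)
        sm = 1.0 / math.sqrt(wi + wj)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((zm, sm, ni + nj))
    return max(clusters, key=lambda c: c[2])
```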
The fully-reconstructed-track vertex algorithm begins from a set of tracks selected according to their transverse impact
parameter to the beam-spot ($<2\cm$), number of hits ($>6$), and normalised $\chi^2$ ($<20$). These tracks are
passed to an adaptive vertex fitter, in which tracks are assigned a weight between 0 and 1 according to their compatibility
with the common vertex~\cite{TRK-10-001}. Quality vertices are further required to have more than four degrees
of freedom ($Ndof$), corresponding to at least four tracks with weights of approximately one.
For events with multiple reconstructed vertices passing the quality selection, the correlation between the $z$ positions
of the two vertices with the most associated tracks is shown in Fig.~\ref{fig:multPrimVtxZ}. Other than the diagonal
region without multiple vertices, expected from the algorithmic parameter of at least a 1\cm\ separation,
the uncorrelated positions of the two vertices are indicative of random event pileup.
The event pileup rate is estimated from the fraction of events with multiple reconstructed vertices, after correcting
for vertices that are not found because of their proximity.
The beam conditions varied over the analysed minimum bias data samples, such that the corrected fraction of pileup events
is in the range (0.4--7.5)\%. The uncertainty on the event pileup fraction, determined from the largest correction to the
multiple-vertex fraction, is a constant factor of 0.2\% and 1.2\% for the 0.9 and 7\TeV data, respectively.
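As a rough sketch of how a multiple-vertex fraction constrains pileup, one can assume the number of pp collisions per bunch crossing is Poisson distributed with mean $\mu$. The actual analysis corrects the observed fraction for vertices lost due to proximity, which is not modelled here; function names are ours:

```python
import math

def pileup_fraction(mu):
    """Fraction of triggered events (>=1 collision) that contain >=2 pp
    collisions, for a Poisson mean mu of collisions per bunch crossing."""
    p0 = math.exp(-mu)      # probability of zero collisions
    p1 = mu * p0            # probability of exactly one collision
    return (1.0 - p0 - p1) / (1.0 - p0)

def mu_from_fraction(f, lo=1e-6, hi=10.0):
    """Invert pileup_fraction by bisection: recover mu from an observed
    (proximity-corrected) multiple-vertex fraction f."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pileup_fraction(mid) < f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```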
\section{Track Selection}
\label{sec:trk}
This analysis uses tracks from the standard CMS reconstruction algorithm, which consists of multiple iterations of a
combinatorial track finder based on various seeding layer patterns~\cite{Adam:2006ki}. After each iteration,
hits belonging unambiguously to tracks in the previous step are removed from consideration for subsequent steps.
In order to minimise the contribution from misidentified tracks and tracks with poor momentum resolution, a number of quality
selections are applied. These include the \textit{highPurity} selection mentioned in Section~\ref{sec:evtSel}, the requirement
of at least five hits on the track, the normalised $\chi^{2}$ per degree of freedom divided by the number
of tracker layers used in the fit being less than a maximum value, which varies between 0.48 and 0.07 depending
on $\eta$ and \pt, and a relative momentum uncertainty of less than 20\%. Furthermore, to reject non-primary
tracks (i.e., the products of weak decays and secondary interactions with detector material), only the pixel-seeded tracking
iterations are used, and selections are placed on the impact parameter of the tracks with respect to the primary vertex position.
Specifically, the transverse and longitudinal impact parameters are required to be less than 0.2\cm\ and also less than 3
times the sum in quadrature of the uncertainties on the impact parameter and the corresponding vertex position.
In the case of multiple quality reconstructed vertices in the minimum bias event samples, tracks that pass the impact
parameter selections with respect to any vertex are used in the analysis. The number of events, by which the track \pt\
distribution is normalised, is then scaled by a factor to account for the event pileup fraction. In contrast, for the jet-triggered
samples, tracks are selected based on the impact parameter with respect to the single vertex responsible for the trigger.
The primary vertex of the hard-scattering process is identified as the vertex with the largest value of $\sum{\pt^2}$ for the
associated fitted tracks.
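The impact-parameter selection and the hard-scattering vertex choice above can be sketched as follows. This is illustrative only (lengths in cm, names and data layout hypothetical):

```python
import math

def passes_ip_cuts(dxy, dz, sig_dxy, sig_dz, sig_vtx_xy, sig_vtx_z):
    """Impact-parameter selection: each impact parameter must be below 0.2 cm
    and below 3 times the quadrature sum of the track IP uncertainty and the
    corresponding vertex-position uncertainty."""
    ok_xy = abs(dxy) < 0.2 and abs(dxy) < 3.0 * math.hypot(sig_dxy, sig_vtx_xy)
    ok_z = abs(dz) < 0.2 and abs(dz) < 3.0 * math.hypot(sig_dz, sig_vtx_z)
    return ok_xy and ok_z

def hard_scatter_vertex(vertices):
    """Identify the primary vertex of the hard scattering as the one with the
    largest sum(pT^2) of its associated tracks; `vertices` maps a vertex
    label to the list of associated track pT values."""
    return max(vertices, key=lambda v: sum(pt**2 for pt in vertices[v]))
```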
With the above-mentioned selections applied to the reconstructed tracks, the algorithmic efficiency determined from simulated
\textsc{pythia} events is greater than 85\% (80\%) for tracks with transverse momentum above 2.0 (0.4)\GeVc\ averaged
over $|\eta|<2.4$ (Fig.~\ref{fig:trkEff}).
In the same kinematic region, misidentified and non-primary tracks are each below 1\%, while multiple reconstruction occurs for less than
0.01\% of tracks.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/peff_final}
\label{fig:trkEff}}
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/Eff_FR_GRAND_logx_wider_v2_loose}
\label{fig:occTrk}}
\caption{(a) The algorithmic tracking efficiency for two different momentum ranges as a function of $\eta$.
(b) The product of geometrical acceptance (A) with tracking efficiency ($\varepsilon^{\mathrm{tr}}$) (upper points) and
the misidentification (`fake') rate (lower points)
as a function of transverse momentum for tracks with $|\eta|<1$ in bins of corrected leading-jet transverse energy.}
\vspace{4mm}
\end{figure}
\section{Event Classification by Leading-Jet Energy}
\label{sec:jetet}
All events in this analysis are classified according to the transverse energy of the most energetic reconstructed jet,
defined as the leading jet.
Jets are reconstructed from calorimeter deposits alone using the anti-\kt\ algorithm~\cite{Cacciari:2008gp} with
cone radius $R=\sqrt{(\Delta\phi)^2+(\Delta\eta)^2}=0.5$.
The measured energy of the jet is adjusted according to corrections based on a MC description of the CMS calorimeter
response with a 3--6\% uncertainty on the jet energy scale~\cite{JME-10-010}.
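The cone distance $R$ entering the jet clustering can be sketched as follows; only the distance measure is shown, not the full anti-\kt\ algorithm, and the $\phi$ difference must be wrapped into $(-\pi,\pi]$:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance R = sqrt((deta)^2 + (dphi)^2), with the azimuthal
    difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

# A calorimeter deposit lies within the cone of a jet axis if delta_r < 0.5.
```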
The motivation for classifying events according to the leading-jet transverse energy is twofold.
First, the degrading effect of the local-track density on the high-\pt\ tracking performance (e.g., inside a jet)
can be parametrised according to this variable.
Based on events simulated with \textsc{pythia} in minimum bias and QCD samples with various thresholds on the hard-scattering
scale ($\hat{p}_{\mathrm{T}}$), the efficiency and misidentification rates of the selected tracks are estimated as a function of
transverse momentum in bins of leading-jet transverse energy (see Fig.~\ref{fig:occTrk}).
Second, as discussed in Section~\ref{sec:evtSel}, calorimeter-based triggers with leading-jet transverse energy thresholds of 15\GeV (Jet15U)
and 50\GeV (Jet50U) were used to extend the \pt\ reach of the 7\TeV measurement.
To avoid potential biases from the jet-trigger selection, it is desirable to operate in a region where the trigger is fully efficient.
The region above which the jet trigger with an uncorrected energy threshold of 15\GeV becomes fully efficient
is determined by first plotting the leading-jet \et\ distribution for a sample of events selected with the prescaled minimum bias trigger
and the offline selections described in Section~\ref{sec:evtSel}. This distribution is then compared to the subset of those events which
also fire the 15\GeV jet trigger as a function of corrected transverse energy. The resulting ratio is the trigger efficiency curve presented in
the lower panel of Fig.~\ref{fig:hltJetTurnon}. The 15\GeV jet trigger achieves more than 99\% efficiency at a corrected energy of $\et=45$\GeV.
The analogous procedure is repeated on a sample of events selected by the 15\GeV jet trigger
to determine that the 50\GeV jet trigger becomes fully efficient above $\et= 95$\GeV.
For the trigger efficiency study, an early subset of the data (10.2\nbinv) was used, because the minimum bias and
lower-threshold jet triggers were highly prescaled in the later runs.
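The turn-on measurement described above amounts to a bin-by-bin ratio of two leading-jet \et\ distributions: all minimum bias events, and the subset that also fired the jet trigger. A minimal sketch with toy binning (not the analysis code):

```python
def trigger_efficiency(et_minbias, et_fired, bins):
    """Per-bin trigger efficiency: the leading-jet ET histogram of minimum
    bias events that also fired the jet trigger, divided bin-by-bin by the
    histogram of all minimum bias events."""
    def hist(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        return counts
    denom, num = hist(et_minbias), hist(et_fired)
    return [n / d if d else 0.0 for n, d in zip(num, denom)]
```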
In the upper panel of Fig.~\ref{fig:hltJetTurnon}, the $\et$ distributions from the jet-triggered sample are normalised per
equivalent minimum bias event by matching their integrals in the regions where the triggers are fully efficient.
For the 7\TeV analysis, events are divided into three classes based on leading-jet \et:
below 60\GeV, between 60 and 120\GeV, and above 120\GeV.
Since each event is uniquely assigned to one such leading-jet \et\ range, the overall $dN_{\mathrm{ch}}/d\pt$ distribution
is simply the sum of the spectra from the three ranges, each corresponding to a fully-efficient HLT selection (i.e., minimum bias,
15\GeV jet trigger, and 50\GeV jet trigger).
The contributions to the spectra from the jet-triggered events are normalised per selected minimum bias event; the fraction of
minimum bias events containing a leading jet with \et\ greater than either 60 or 120\GeV is calculated as shown in Fig.~\ref{fig:hltJetTurnon}
by matching the fully-efficient regions of the leading-jet \et\ distributions.
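The per-minimum-bias-event normalisation by integral matching can be sketched as follows, with simple event counts above the fully-efficient threshold standing in for the histogram integrals (names ours):

```python
def match_normalisation(et_ref, et_jet, plateau_min):
    """Normalise a jet-triggered leading-jet ET sample to the minimum bias
    reference by matching the integrals above the ET value where the trigger
    is fully efficient. Returns the weight applied to the jet-triggered
    spectrum to express it per minimum bias event."""
    n_ref = sum(1 for e in et_ref if e >= plateau_min)
    n_jet = sum(1 for e in et_jet if e >= plateau_min)
    return n_ref / n_jet
```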
The three contributions to the combined charged particle transverse momentum spectrum are shown in Fig.~\ref{fig:spectraLeadingJetBins}.
The lower panel of that figure compares the combined spectrum first to the minimum bias spectrum alone and then to a spectrum
constructed with the addition of only the lower-threshold jet trigger. These are all in good agreement within their respective statistical uncertainties.
A \pt-dependent systematic uncertainty of 0--4\% is attributed to the normalisation of the contributions from the triggered samples.
This value is determined by changing the leading-jet \et\ ranges that separate the three samples
(e.g., to $\et = 40$ and 100\GeV), by basing the normalisation directly on the HLT prescale values, and by comparing the normalisations
determined from different subsets of the full data sample.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/cJetTurnOn}
\label{fig:hltJetTurnon}}
\subfigure{
\includegraphics[width=0.45\textwidth, viewport=0 25 567 719]{figs/dndpt_comparison_full_vs_merged_all_v2}
\label{fig:spectraLeadingJetBins}}
\caption{(a) Upper panel: distributions of the corrected transverse energy of leading jets normalised by the number of selected minimum
bias events $N^{\mathrm{Evt}}_{\mathrm{MB}}$.
Lower panel: the efficiency turn-on curves for the jet triggers with uncorrected energy thresholds of 15 and 50\GeV.
(b) Upper panel: the three contributions to the charged particle transverse momentum spectrum and their sum (solid circles).
Open squares show the minimum bias spectrum for all values of leading-jet \et; open triangles show the spectrum with
the addition of only the lower threshold jet trigger.
Lower panel: the ratio of the combined spectrum to minimum bias only (solid circles) and with the addition of
only the lower threshold jet trigger (open triangles).}
\vspace{4mm}
\end{figure}
\section{Corrections and Systematic Uncertainties}
\label{sec:corr}
To obtain the final phase-space-invariant charged particle differential momentum distribution, a number of corrections
must be applied to the raw distributions of reconstructed charged particles, according to the following equation:
\begin{equation}
E \frac{d^{3}N_{\mathrm{ch}}}{dp^{3}} (\pt,\eta) = \frac{\sum_{_{M,\et^{\mathrm{jet}}} } N_{\mathrm{track}}^{\mathrm{raw}}(M,\et^{\mathrm{jet}},\pt,\eta) \cdot w_{\mathrm{tr}}(\pt,\eta,\et^{\mathrm{jet}}) \cdot w_{\mathrm{ev}}(M) }{2\pi \pt \cdot \Delta \pt \cdot \Delta\eta \cdot \sum_{_{M}} N^{\mathrm{selected}}(M) \cdot (1-f^{0}_{\mathrm{NSD}})^{^{-1}} \cdot (1+f^{\mathrm{pileup}}) \cdot w_{\mathrm{ev}}(M)},
\label{eqn:corrspectra}
\end{equation}
where $N_{\mathrm{track}}^{\mathrm{raw}}$ is the raw number of tracks in a bin with transverse momentum width $\Delta \pt$
and pseudorapidity width $\Delta\eta$, and $N^{\mathrm{selected}}$ is the number of selected events.
An event weight $w_{\mathrm{ev}}$ (see Eq.~(\ref{eqn:evtWeight})) is applied as a function of the multiplicity of
reconstructed charged particles ($M$), while a track weight $w_{\mathrm{tr}}$ (see Eq.~(\ref{eqn:trkWeight})) is applied for each $M$
and leading-jet transverse energy ($\et^{\mathrm{jet}}$), as a function of \pt; the final results are summed over $M$ and $\et^{\mathrm{jet}}$.
The number of selected events is corrected for the fraction of NSD events ($f^{0}_{\mathrm{NSD}}$) that have zero reconstructed
tracks in the tracker acceptance of $|\eta|<2.4$ (about 5\%) and for the pileup event fraction ($f^{\mathrm{pileup}}$).
The multiplicity-dependent event weight $w_{\mathrm{ev}}$ accounts for the efficiency of the event selection
for accepting NSD events ($\varepsilon^{\mathrm{selected}}_{\mathrm{NSD}}$) and for the fraction of SD events ($f^{\mathrm{selected}}_{\mathrm{SD}}$)
that contaminate the selected sample (about 5\% overall):
\begin{equation}
w_{\mathrm{ev}}(M) = \frac{1}{\varepsilon_{\mathrm{NSD}}^{\mathrm{selected}}} (1-f^{\mathrm{selected}}_{\mathrm{SD}}).
\label{eqn:evtWeight}
\end{equation}
The correction factor $w_{\mathrm{tr}}$, by which each track is weighted, is calculated for each bin in transverse momentum, pseudorapidity,
and leading-jet transverse energy.
This factor accounts for the geometric detector acceptance ($A$) and algorithmic tracking efficiency ($\varepsilon^{\mathrm{tr}}$),
as well as the fraction of tracks corresponding to the same, multiply reconstructed charged particle ($D$),
the fraction of tracks corresponding to a non-primary charged particle ($S$),
and the fraction of misidentified (`fake') tracks that do not correspond to any charged particle ($F$):
\begin{equation}
w_{\mathrm{tr}}(\pt,\eta,\et^{\mathrm{jet}}) = \frac{(1-F) \cdot (1-S)}{A \cdot \varepsilon^{\mathrm{tr}} \cdot (1+D) }.
\label{eqn:trkWeight}
\end{equation}
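The correction formula and the two weights above can be assembled for a single bin as follows. This is a sketch of the arithmetic only, with all correction factors supplied by hand; in the analysis they are binned in multiplicity, leading-jet \et, \pt, and $\eta$:

```python
import math

def w_ev(eff_nsd, f_sd):
    """Event weight: corrects for the NSD selection efficiency and the
    residual single-diffractive contamination."""
    return (1.0 - f_sd) / eff_nsd

def w_tr(A, eff_tr, F, S, D):
    """Track weight: acceptance (A), algorithmic efficiency, and the
    misidentified (F), secondary (S), and duplicate (D) track fractions."""
    return (1.0 - F) * (1.0 - S) / (A * eff_tr * (1.0 + D))

def invariant_yield(n_raw, pt, dpt, deta, n_sel, f0_nsd, f_pileup, wt, we):
    """One bin of the invariant differential yield E d^3N/dp^3, averaged
    over a (pt, eta) bin of widths (dpt, deta)."""
    numerator = n_raw * wt * we
    denominator = (2.0 * math.pi * pt * dpt * deta
                   * n_sel / (1.0 - f0_nsd) * (1.0 + f_pileup) * we)
    return numerator / denominator
```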
The common uncertainty related to the triggering and event selection efficiency is discussed in detail in
Ref.~\cite{Khachatryan:2010us}. Contributions from uncertain diffractive-event fractions and detector
inefficiencies in the BSC and HF combine to contribute a scale error of $\pm 3.5$\% to the total systematic
uncertainty at $\sqrt{s}=7$\TeV (see Table~\ref{tab:sys}). At $\sqrt{s}=0.9$\TeV, the diffractive fractions are slightly better constrained,
hence an uncertainty of $\pm 3.2$\% is assigned.
Using simulated events generated with \textsc{pythia} tune D6T, the various terms in Eq.~(\ref{eqn:trkWeight}) are estimated by matching selected
reconstructed tracks to simulated tracks based on the requirement that they share 75\% of their hits.
As an example, the algorithmic efficiency ($\varepsilon^{\mathrm{tr}}$) versus $\eta$ is presented in Fig.~\ref{fig:trkEff}. The slight asymmetry between
the positive and negative hemispheres is attributed to a slightly displaced beam-spot and the distribution of dead channels in the tracker.
The systematic uncertainties assigned to the various tracking corrections are discussed below and are summarised,
along with the total systematic uncertainty, in Table~\ref{tab:sys}.
\begin{table}[tb]
\caption{Summary of the various contributions to the estimated systematic uncertainty. }
\centering
\begin{tabular}{ l c c}
\\ \hline \hline
Source & \multicolumn{2}{c}{Uncertainty [\%]} \\
Collision energy & 0.9\TeV & 7\TeV \\
\hline
Event selection & 3.2 & 3.5 \\
Pileup effect on vertexing & 0.2 & 1.2 \\
Acceptance & 1.5 & 1.5 \\
Reconstruction efficiency & 2.2 & 2.2 \\
Occupancy effect on efficiency & 0.0--0.5 & 0.0--2.8 \\
Misidentified track rate & 0.3--1.0 & 0.3--3.0 \\
Correction for secondary particles & 1.0 & 1.0 \\
Momentum resolution and binning & 0.3--1.5 & 0.3--2.7 \\
Normalisation of jet-triggered spectra & -- & 0.0--4.0 \\
\hline
Total & 4.3--4.7 & 4.7--7.9 \\
Total excluding event selection uncertainty & 2.9--3.4 & 3.1--7.1 \\
Total including luminosity uncertainty & 11.4--11.6 & 5.1--8.1 \\
\hline \hline
\vspace{2mm}
\end{tabular}
\label{tab:sys}
\end{table}
The uncertainty on the geometrical acceptance of the tracker was estimated from three sources. First, the efficiency
of the pixel hit reconstruction was estimated from a data-driven technique involving the projection of two-hit combinations
(called tracklets) onto the third layer in search of a compatible hit. The observed efficiency of $(99.0\pm0.5)$\% leads to
a 0.3\% uncertainty on the acceptance of pixel-seeded tracks. Second, the variation of the geometrical acceptance
was estimated for a variety of generator tunes including \textsc{pythia8} \cite{Sjostrand:2007gs} and the \mbox{Perugia0} \cite{Skands:2009zm} tune of \textsc{pythia}.
Third, the variation was estimated after shifting the generated beam-spot and modifying the width of the generated $z$ vertex
distribution. The latter two effects each contribute a 1\% shift in the acceptance.
In a similar fashion, using the different generator tunes results in a 2\% shift in the reconstruction efficiency. An additional series of checks was
performed by varying the cuts imposed during the track selection and in the determination of the corresponding MC-based corrections.
The resulting variation in the corrected results contributes another 1\% to the reconstruction efficiency uncertainty.
Since the dependence of the reconstruction efficiency on local hit density has been parametrised in terms of leading-jet transverse energy,
both the uncertainty on the jet energy scale and the accuracy of the jet-fragmentation description become relevant.
The former contribution is estimated by convolving the dependence of the tracking efficiency on the leading-jet transverse energy
(see Fig.~\ref{fig:occTrk}) with a 4\% uncertainty in the jet energy scale~\cite{JME-10-010}. The latter contribution is estimated by comparing
the \textsc{pythia}-based corrections to \textsc{herwig++}~\cite{Bahr:2008pv}.
The resulting \pt-dependent uncertainty on the occupancy is in the range (0.0--2.8)\%.
Based on studies of different generator tunes and MC samples with different hard-scattering scales, the assigned uncertainty
to the misidentified-track correction grows linearly as a function of \pt\ from 0.3 to 3.0\%.
An additional check was performed for tracks with \pt\ above 10\GeVc\ to correlate the reconstructed track momentum with
the deposited energy in the projected ECAL and HCAL cells.
For the selected tracks in this analysis, there is no evidence of any excess of high-\pt\ misidentified tracks characterised by
atypically little energy deposited in the calorimeters.
The correction for secondaries and feed-down from weak decays is assigned a
1\% systematic uncertainty, which is large compared to the scale of the contributions, but intended to account for the uncertainties
in the \PKzS\ and \PgL\ fractions~\cite{Khachatryan:2011tm}.
The tendency for finite bin widths (up to 40\GeVc) and a finite transverse momentum resolution
(rising from 1 to 5\% in the range \pt = 10--150\GeVc)
to deform a steeply falling spectrum is corrected based on the shape of the \pt\ spectrum and the MC-based \pt\ response matrix.
The effect of momentum resolution alone is 0.5--2.5\%, while the wide binning results in an additional
correction ranging from a fraction of a percent up to approximately 20\% in the widest high-\pt\ bins.
The correction for the two effects is determined by fitting an empirical function to the differential yield, smearing it with the MC-based
momentum resolution, re-binning into the bins of the final invariant yield, and dividing by the original fitted form.
The quoted systematic uncertainty of 0.3--2.7\% is estimated by varying the fitted form of the spectrum and by performing multiple
iterations of the unsmearing with successively more accurate input spectra.
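The smear-and-divide procedure described above can be sketched numerically. The power-law spectrum, bin edges, and resolution value below are hypothetical stand-ins for the fitted empirical form and the MC-based response, used only to illustrate the structure of the correction:

```python
import numpy as np

def smearing_correction(fit_func, edges, rel_sigma, n_mc=200_000, rng=None):
    """Ratio of the binned true spectrum to the binned smeared spectrum.

    fit_func  : empirical fit to the differential yield (hypothetical form)
    edges     : pt bin edges of the final invariant yield
    rel_sigma : relative momentum resolution applied to each sampled pt
    """
    rng = rng or np.random.default_rng(1)
    lo, hi = edges[0], edges[-1]
    # Sample pt uniformly and weight each entry by the fitted spectrum shape.
    pt = rng.uniform(lo, hi, n_mc)
    w = fit_func(pt)
    pt_smeared = pt * rng.normal(1.0, rel_sigma, n_mc)
    true_counts, _ = np.histogram(pt, bins=edges, weights=w)
    smeared_counts, _ = np.histogram(pt_smeared, bins=edges, weights=w)
    return true_counts / smeared_counts

# Hypothetical steeply falling power law, wide high-pt bins, 3% resolution:
corr = smearing_correction(lambda pt: pt ** -6.0,
                           np.array([10., 20., 40., 80., 150.]), rel_sigma=0.03)
```

Multiplying the measured, smeared yield in each bin by this ratio undoes the deformation; iterating with successively better input shapes, as done in the analysis, refines `fit_func`.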
In addition to the uncertainties from the event selection efficiency weighting and the tracking corrections described above,
the total systematic uncertainty contains a contribution
from the uncertainty on the estimation of the event pileup fraction of 0.2 and 1.2\% for the 0.9 and 7\TeV data, respectively.
In the cases where the total integrated luminosity is used to normalise the results,
this contributes an additional 4\% (11\%) scale uncertainty~\cite{EWK-10-004,EWK-11-001} for $\sqrt{s}$ = 7 (0.9)\TeV.
Assuming that the various \pt-dependent contributions are uncorrelated, the total systematic uncertainty is determined
from their sum in quadrature, as indicated in Table~\ref{tab:sys}.
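As a concrete illustration of the quadrature combination, the snippet below uses the low-\pt\ entries of the 7\TeV column of Table~\ref{tab:sys} purely as example inputs:

```python
import math

def total_uncertainty(components):
    """Combine independent fractional (%) uncertainties in quadrature."""
    return math.sqrt(sum(c ** 2 for c in components))

# Low-pt entries of the 7 TeV column: event selection 3.5, pileup 1.2,
# acceptance 1.5, reconstruction efficiency 2.2, occupancy 0.0,
# misidentified tracks 0.3, secondaries 1.0, resolution/binning 0.3,
# jet-trigger normalisation 0.0 (all in %).
components = [3.5, 1.2, 1.5, 2.2, 0.0, 0.3, 1.0, 0.3, 0.0]
print(round(total_uncertainty(components), 1))  # → 4.7, the low edge of the quoted 4.7--7.9% range
```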
\section{Results}
\label{sec:results}
After applying the corrections described in the previous section, the resulting invariant differential yields for charged particles within $|\eta|<2.4$
are shown for a limited \pt\ range in Figs.~\ref{fig:spectraCMS900} and~\ref{fig:spectraCMS} in order to quantify the agreement with
previous CMS measurements at $\sqrt{s}=$~0.9 and 7\TeV~\cite{Khachatryan:2010xs,Khachatryan:2010us}.
At each energy, both CMS measurements are divided by a Tsallis fit~\cite{tsallis} to the earlier measurement and the ratios compared in the lower
panels. For the earlier measurements, the error bars indicate the statistical plus systematic uncertainties added in quadrature.
The bands around the new measurements represent all contributions to the systematic uncertainty, except the contribution
from the common event selection. Statistical uncertainties are negligible on the new measurements in this \pt\ range.
Below $\pt=4$\GeVc\ for the 0.9\TeV sample and below $\pt=6$\GeVc\ at $\sqrt{s}$ = 7\TeV, which are the limits of the
previously published CMS spectra, the new results are in reasonable agreement with the earlier measurements. However, the measured spectra
do deviate from the Tsallis fits in the earlier papers by as much as 20\% at low \pt.
The origin of the small difference between the two CMS measurements at $\sqrt{s}=7$\TeV
is attributed to the different tracking algorithms used in the two measurements, as well as the different \textsc{pythia}
tunes used to determine the tracking corrections.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/CMS_old_vs_new_with_tsallis_900GeV_v1}
\label{fig:spectraCMS900}}
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/CMS_old_vs_new_with_tsallis_v1}
\label{fig:spectraCMS}}
\caption{(a) Upper panel: the invariant charged particle differential yield from the present analysis (solid circles) and the previous CMS
measurements at $\sqrt{s}=0.9$\TeV (stars) over the limited \pt\ range of the earlier result.
Lower panel: the ratio of the new (solid circles) and previous (stars) CMS results to a Tsallis fit of the earlier measurement.
Error bars on the earlier measurement are the statistical plus systematic uncertainties added in quadrature.
The systematic uncertainty band around the new measurement consists of all contributions,
except for the common event selection uncertainty. (b) The same for $\sqrt{s}=7$\TeV.}
\vspace{4mm}
\label{fig:spectraCMSboth}
\end{figure}
In the upper plots of Figs.~\ref{fig:spectraGEN900} and~\ref{fig:spectraGEN}, the charged particle differential transverse
momentum yields from this analysis are displayed for $\sqrt{s}$ = 0.9 and 7\TeV, respectively. The latter distribution covers
the \pt\ range up to 200\GeVc, the largest range ever measured in a colliding beam experiment. Also shown in the figures
are various generator-level MC predictions for the yields \cite{Bartalini:2009xx,Sjostrand:2007gs,Skands:2009zm,Buckley:2009bj}.
The lower plots of Figs.~\ref{fig:spectraGEN900} and~\ref{fig:spectraGEN} show the ratios of the data to the various MC predictions.
As already observed in Ref.~\cite{Khachatryan:2010us}, there is a deficit of $\pt < 1$\GeVc\ particles in the
predicted 7\TeV spectra for several of the popular \textsc{pythia} tunes.
For the whole \pt\ range above 1\GeVc, \textsc{pythia8} is the most consistent with the new 7\TeV result (within 10\%).
This provides an important constraint on the different generator parameters responsible for sizable variations among the tunes.
A similar but slightly larger spread is observed in Fig.~\ref{fig:spectraGEN900} for different generator parameters at $\sqrt{s} =0.9$\TeV,
where the CMS measurement is most consistently described by the ProQ20 tune.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/spectra_inv_cms_vs_pythia_v2_900GeV}
\label{fig:spectraGEN900}}
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/spectra_inv_cms_vs_pythia_v3}
\label{fig:spectraGEN}}
\caption{(a) Upper panel: the invariant charged particle differential yield at $\sqrt{s}=0.9$\TeV compared with the predictions of
four tunes of the \textsc{pythia} MC generator.
Lower panel: the ratio of the new CMS measurement to the four \textsc{pythia} tunes. The grey band corresponds
to the statistical and systematic uncertainties added in quadrature. (b) The same for $\sqrt{s}=7$\TeV.}
\vspace{4mm}
\label{fig:spectra900GeVn7TeV}
\end{figure}
As discussed in Refs.~\cite{Arleo:2009ch,Arleo:2010kw}, a robust prediction of pQCD hard processes is the power-law scaling of the inclusive
charged particle invariant differential cross section with the variable \xt:
\begin{equation}
E\frac{d^3\sigma}{dp^3} = F(\xt)/\pt^{n(\xt,\sqrt{s})} = F'(\xt)/\sqrt{s}^{n(\xt,\sqrt{s})},
\label{eqn:xtscaling}
\end{equation}
where $F$ and $F'$ are independent of $\sqrt{s}$, and the slow evolution of the power-law exponent $n$ with \xt\ and
$\sqrt{s}$ ($n\simeq$ 5--6) is due to the running of $\alpha_{\mathrm{s}}$ and changes in the parton distribution and fragmentation functions.
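A practical consequence of Eq.~(\ref{eqn:xtscaling}) is that the effective exponent can be extracted from any pair of cross sections evaluated at the same \xt. A minimal numerical check, with hypothetical inputs constructed to obey exact scaling:

```python
import math

def exponent_n(sigma_a, sigma_b, sqrt_s_a, sqrt_s_b):
    """Effective xT-scaling exponent from two invariant cross sections taken
    at the same xT but different energies, assuming sigma ∝ sqrt(s)^(-n)."""
    return math.log(sigma_a / sigma_b) / math.log(sqrt_s_b / sqrt_s_a)

# Hypothetical cross sections obeying exact scaling with n = 4.9
# between sqrt(s) = 0.9 and 7 TeV:
n = exponent_n(1.0, (0.9 / 7.0) ** 4.9, 0.9, 7.0)
```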
In the upper plot of Fig.~\ref{fig:xtScaling}, the 0.9 and 7\TeV pp measurements from this
analysis are compared to the empirical scaling observed from measurements
over a range of lower p$\bar{\mathrm{p}}$ collision energies by plotting \mbox{$\sqrt{s}^{n} \, E \,d^{3}\sigma / dp^{3} $}.
For the purpose of reporting the CMS results as differential cross sections, the integrated luminosities
for the analysed data samples were measured according to the descriptions in Refs.~\cite{EWK-10-004,EWK-11-001}.
Also, to compare with the published results from the CDF experiment at $\sqrt{s}=0.63$, 1.8, and 1.96\TeV, the pseudorapidity
range has been restricted to $|\eta|<1.0$.
Whereas an exponent $n=5.5$ was found in Ref.~\cite{Arleo:2010kw} from a global fit to only the previous p$\bar{\mathrm{p}}$ measurements from
$\sqrt{s}=0.2$ to 1.96\TeV, the \xt\ scaling presented in this paper is optimised for use in an interpolation between the CDF and CMS
measurements from $\sqrt{s}=0.9$ to 7\TeV. Within this range, the best scaling is achieved with an exponent
of $n=4.9\pm0.1$. This is consistent with the predictions of next-to-leading-order (NLO) calculations,
where the scaling is also found to be optimised for this value of the exponent~\cite{Arleo:2010kw}.
From the lower panel of Fig.~\ref{fig:xtScaling}, it is apparent that the NLO calculations over-predict the measured cross sections by almost
a factor of two at all collision energies. This is in spite of the relatively good agreement in the inclusive jet spectrum~\cite{Chatrchyan:2011me,Aad:2010wv},
which suggests that the fragmentation functions are not well tuned for LHC energies.
The CMS results are consistent over the accessible \xt\ range with
the empirical \xt\ scaling given by Eq.~(\ref{eqn:xtscaling}) and established at lower energies.
This quality of the scaling is more easily seen in the upper panel of Fig.~\ref{fig:xtScalingRatio}, where the points show the ratio of the
various differential cross sections, scaled by $\sqrt{s}^{4.9}$, to the result of a global power-law fit to the CDF and CMS
data from Fig.~\ref{fig:xtScaling}.
The fitting function is of the form $F'(\xt) = p_0 \cdot [1+(\xt/p_1)]^{p_2}$, where $p_0$, $p_1$, and $p_2$ are free parameters, and
the region below $\pt=3.5$\GeVc\ has been excluded to avoid complications from soft-particle production.
Considering the somewhat na\"{i}ve power-law function and the expected non-scaling effects~\cite{Stratmann:2010bh},
the new measurement is in reasonable agreement with the global power-law fit result (within roughly 50\%) over its full \xt\ range.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/xT_spectra_compiled_v4}
\label{fig:xtScaling}}
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/xT_ratio_data_nlo_v2}
\label{fig:xtScalingRatio}}
\caption{
(a) Upper panel: inclusive charged particle invariant differential cross sections, scaled by $\sqrt{s}^{4.9}$, for $|\eta|<1.0$
as a function of the scaling parameter \xt. The result is the average of the positive and negative charged particles.
Lower panel: ratios of differential cross sections measured at 0.9, 1.96, and 7 TeV to those predicted by NLO calculations for
factorisation scales ranging from 0.5 to 2.0\,\pt.
(b) Upper panel: ratios of the scaled differential cross sections to the global power-law \xt\ fit described in the text (coloured markers)
and fits to these ratios (similarly coloured thin lines).
The expected ratio for $\sqrt{s}=2.76$\TeV after applying NLO-based corrections to each of the three measurements
as described in the text (solid blue lines). The uncertainty from the NLO parameters is represented by the shaded band.
The upper axis translates \xt\ to \pt\ for $\sqrt{s}=2.76$\TeV.
Lower panel: ratios of the NLO-calculated cross sections at three different energies, scaled by $\sqrt{s}^{4.9}$, to the cross section
calculated at $\sqrt{s}=2.75$\TeV. The width of the bands represents the variation of the factorisation scale by a factor of two.}
\vspace{4mm}
\end{figure}
\section{Interpolation to 2.76\TeV}
\label{sec:interpolation}
In order to construct a predicted reference charged particle differential cross section at $\sqrt{s}$ = 2.76\TeV for comparison with
the measured PbPb heavy-ion spectrum, two different techniques are used in partially overlapping transverse momentum regimes.
In the high-\pt\ range from 5.0--200\GeVc, where approximate \xt\ scaling is expected to hold, the estimated 2.76\TeV cross section
is derived from a common \xt-scaling curve, based on the CDF and CMS measurements shown in Fig.~\ref{fig:xtScaling}.
In the low-\pt\ range from 1.0--20\GeVc, it is possible to interpolate directly between the several measured cross section values
as a function of $\sqrt{s}$ at each fixed \pt\ value.
As discussed in the previous section, the upper panel of Fig.~\ref{fig:xtScalingRatio} shows the residual difference from perfect \xt\ scaling
with exponent $n=4.9$ for the 0.9 and 7\TeV CMS measurements and for the 1.96\TeV CDF
measurement~\cite{Aaltonen:2009ne,PhysRevD.82.119903}.
The $\sqrt{s}$ and \xt\ dependences of the residuals are not unexpected, since this behaviour is predicted by NLO calculations.
This can be seen in the lower panel of Fig.~\ref{fig:xtScalingRatio}, which shows the predicted deviation from perfect \xt\ scaling for
calculated NLO cross sections at several collision energies with respect to a reference centre-of-mass energy of 2.75\TeV~\cite{Arleo:2010kw}.
The calculations were performed using the CTEQ66 parton distribution functions~\cite{Nadolsky:2008zw},
DSS fragmentation~\cite{deFlorian:2007aj}, and a factorisation scale $\mu=\pt$~\cite{Arleo:2010kw}. Taking the magnitude of the
\xt-scaling violation from NLO (ranging from 0--20\%), each of the three measurements in data (i.e., 0.9, 1.96, and 7\TeV) can be
corrected separately to arrive at an expectation for the 2.76\TeV cross section. The three independent interpolations based on NLO-corrected
\xt\ scaling are shown as solid blue lines in the upper panel of Fig.~\ref{fig:xtScalingRatio}.
The combined `best estimate' (shown as a shaded band) has an associated uncertainty that covers the deviations of up to 12\% observed by varying the
factorisation scale from $\mu=0.5\,\pt$ to $\mu=2.0\,\pt$ for each of the three collision energies.
The error band is expanded below $\pt\approx8$\GeVc\ to include
the full difference between the 1.96 and 7\TeV results, since the evolution of the spectra below this value ---
corresponding to $\xt=0.0023$ (7\TeV), 0.0082 (1.96\TeV), and 0.018 (0.9\TeV) --- is no longer consistently described
by \xt\ scaling and the NLO-based corrections. In addition to the 12\% contribution from the uncertainty on the NLO-based correction,
the final uncertainty on the interpolated cross section has an additional component to account for possible correlations in the luminosity
uncertainty between the three measurements. This term, taken as equal to the smallest individual uncertainty (4\%), is added in quadrature.
The direct interpolation of cross sections at a fixed value of \pt\ is done using CDF measurements at $\sqrt{s}=0.63, 1.8$~and
1.96\TeV~\cite{Abe:1988yu,Aaltonen:2009ne,PhysRevD.82.119903}, the new CMS measurements at $\sqrt{s}=0.9$ and 7\TeV,
as well as an earlier result at $\sqrt{s}=2.36$\TeV~\cite{Khachatryan:2010xs}. The latter measurement is converted
to a differential cross section assuming the total inelastic cross section of 60.52\,mb from \textsc{pythia}.
At each energy, an empirical fit to the \pt\ distribution is first constructed to provide
a continuous estimation independent of different binning. Then, in arbitrarily small \pt\ bins, these empirical fits are evaluated and the evolution
of the cross section with $\sqrt{s}$ is parametrised by a second-order polynomial. Two examples of these fits are shown in
Fig.~\ref{fig:ptInterpolation} for $\pt=3$ and 9\GeVc. The uncertainty on the value of the fit evaluated at $\sqrt{s}=2.76$\TeV is taken
from the covariance matrix of the fit terms, with an additional 4\% added in quadrature to account conservatively for any correlation in the
luminosity uncertainty between the different measurements.
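The fixed-\pt\ interpolation step can be sketched as follows. The cross-section values used here are toy inputs, not the measured ones; only the structure of the procedure (a second-order polynomial fit in $\sqrt{s}$, evaluated at 2.76\TeV) follows the text:

```python
import numpy as np

def interpolate_fixed_pt(sqrt_s, xsec, target=2.76):
    """Fit the cross-section evolution at one fixed pt with a second-order
    polynomial in sqrt(s) and evaluate the fit at the target energy."""
    coeffs = np.polyfit(sqrt_s, xsec, deg=2)
    return float(np.polyval(coeffs, target))

# Hypothetical cross sections (arbitrary units) at one fixed pt, at the
# energies used in this analysis: 0.63, 0.9, 1.8, 1.96, 2.36 and 7 TeV.
s = np.array([0.63, 0.9, 1.8, 1.96, 2.36, 7.0])
y = 1.0 + 0.5 * s + 0.02 * s ** 2        # exactly quadratic toy input
print(round(interpolate_fixed_pt(s, y), 3))  # → 2.532
```

Repeating this fit in arbitrarily small \pt\ bins yields the directly interpolated spectrum used at low \pt.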
To arrive at a single interpolated spectrum over the full \pt\ range, a linear combination of the two techniques is used with weights that
vary linearly across the overlap range from $\pt=5$\GeVc\ (only direct interpolation at fixed \pt) to $\pt=20$\GeVc\ (only \xt\
scaling with NLO-based residual correction). In the \pt\ range where the two techniques overlap, the different methods agree to within their
respective systematic uncertainties. (The fixed-\pt\ interpolation value is typically around 8\% lower than the \xt\ interpolation.)
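The linear weighting across the overlap region can be written compactly. The function below is an illustrative sketch of the combination rule described above, not the analysis code itself:

```python
def blended_cross_section(pt, xsec_fixed_pt, xsec_xt_scaling):
    """Linear combination of the two interpolation techniques, with weights
    varying linearly across the overlap region pt = 5--20 GeV/c."""
    if pt <= 5.0:
        w = 1.0                    # only the direct fixed-pt interpolation
    elif pt >= 20.0:
        w = 0.0                    # only NLO-corrected xT scaling
    else:
        w = (20.0 - pt) / (20.0 - 5.0)
    return w * xsec_fixed_pt + (1.0 - w) * xsec_xt_scaling

print(blended_cross_section(12.5, 2.0, 1.0))  # → 1.5 (midpoint: equal weights)
```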
The resulting predicted 2.76\TeV differential cross section is shown in the upper panel of Fig.~\ref{fig:interpolation},
and its ratio with respect to various \textsc{pythia} tunes at that centre-of-mass energy in the lower panel.
The uncertainty on the predicted cross section, shown by the grey band in the lower panel, is the weighted sum (where applicable)
of the uncertainties derived from the two methods described in the preceding paragraphs.
Also shown in the lower panel of Fig.~\ref{fig:interpolation} is the ratio of the predicted 2.76\TeV cross section to that found by
simply scaling the CMS measured 7\TeV result by the expected 2.75\TeV to 7\TeV ratio from NLO calculations~\cite{Arleo:2010kw}.
The interpolation used in the recent \mbox{ALICE} publication~\cite{Aamodt:2010jd} is a few percent lower than the result quoted in this paper,
but consistent within the respective systematic uncertainties.
The behaviour of the various generators compared to the interpolated 2.76\TeV cross section is broadly similar to that seen for the 0.9\TeV invariant yields
presented in Fig.~\ref{fig:spectraGEN900}. The ProQ20 tune agrees most closely (within 15\%) with the interpolated cross section above 2\GeVc.
Future analysis of a recently recorded 2.76\TeV pp collision sample will provide verification of this result
and a reduction in the systematic uncertainties.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/linear_xsect_inter_v1}
\label{fig:ptInterpolation}}
\subfigure{
\includegraphics[width=0.45\textwidth]{figs/spectra_intpl_inv_vs_pythia_v1}
\label{fig:interpolation}}
\caption{
(a) Interpolations between measured charged particle differential cross sections at different $\sqrt{s}$ for the two example values of $\pt=3$
and 9\GeVc. Second-order polynomial fits to the measured data are shown by the solid lines. The open squares show the resulting interpolated
cross sections for $\sqrt{s}=2.76$\TeV.
The open circle on the lower panel represents the corresponding estimate from the \xt-scaling approach in the overlap region where both
can be estimated.
(b) Upper panel: the predicted 2.76\TeV charged particle differential transverse momentum cross section,
based on the combined direct \pt\ interpolation and NLO-corrected \xt-scaling techniques described in the text.
Lower panel: ratios of combined interpolation to predictions from several \textsc{pythia} tunes,
an NLO-based rescaling approach~\cite{Arleo:2010kw}, and the \mbox{ALICE} interpolation used in Ref.~\cite{Aamodt:2010jd}.}
\vspace{4mm}
\end{figure}
\section{Summary}
\label{sec:summary}
In this paper, measurements of the phase-space-invariant differential yield $E\,d^{3}N_{\mathrm{ch}}/dp^{3}$ at $\sqrt{s}$ = 0.9 and 7\TeV
have been presented for primary charged
particles, averaged over the pseudorapidity acceptance of the CMS tracking system ($|\eta|<2.4$).
The results have been shown to be in reasonable agreement with
the previously published CMS measurements at $\sqrt{s}$ = 0.9 and 7\TeV~\cite{Khachatryan:2010xs,Khachatryan:2010us}
and, except for the surplus of tracks at very low transverse momentum, with \textsc{pythia} leading-order pQCD.
The 7\TeV data are most consistent with \textsc{pythia8}, which agrees at the 10\% level over the full \pt\ range of the measurement.
In contrast, the 0.9\TeV data are considerably better described by the ProQ20 tune.
Additionally, the consistency of the 0.9 and 7\TeV spectra has been demonstrated
with an empirical \xt\ scaling that unifies the differential cross sections from a wide range of collision energies onto a common curve.
Furthermore, within the theoretical uncertainties of the NLO calculations, the residual breaking of \xt\ scaling above $\pt \approx 8$\GeVc\
is consistent between the measured cross sections and the NLO calculations.
This result has removed a large uncertainty from an important ingredient of existing and future PbPb measurements,
namely the pp reference spectrum corresponding to the energy of the 2010 PbPb run: 2.76\TeV per nucleon.
By employing a combination of techniques to interpolate between the results presented here at $\sqrt{s}=0.9$ and 7\TeV, including information from
existing CDF measurements at $\sqrt{s}=0.63$, 1.8, and 1.96\TeV, a pp reference at $\sqrt{s}=2.76$\TeV has been constructed over a large range
of transverse momentum (\pt\ = 1--100\GeVc) with systematic uncertainties of less than 13\%.
\section*{Acknowledgements}
\input{section_acknowledge.tex}
\section{Introduction}
Many physical systems in science and engineering are described by partial differential equations (PDEs). This study investigates the performance of recurrent and convolutional deep neural networks in modelling such phenomena. Accurately predicting the evolution of such systems is usually done through numerical simulations, a task that requires significant computational resources. Simulations usually need extensive tuning and need to be re-run from scratch even for small variations in the parameters.
With their potential to learn hierarchical representations, deep learning techniques have emerged as an alternative to numerical solvers, by offering a desirable balance between accuracy and computational cost \citep{Carleo2019MachineSciences}.
Here, we focus on the modelling of surface wave propagation governed by the Saint-Venant (SV) equations. This phenomenon offers a good test-bed for controlled analyses on two-dimensional sequence prediction of PDEs for several reasons. First, in contrast to some physical systems, such as fluid flow, the evolution of the real system is unlikely to enter chaotic regimes. From a representation learning point of view, this makes model training and assessment relatively straightforward. Despite this, the SV equations are strongly related to the Navier-Stokes equations, widely used in computational fluids. Further, computational modelling of surface waves is used in seismology, computer animation, in predictions of surface runoff from rainfall -- a critical aspect of the water cycle \citep{moussa2000approximationsaint} -- and flood modelling \citep{ersoy2017saintvenant}.
This study provides three contributions. First, we identify three relevant architectures for spatiotemporal prediction. Two of these architectures lead to improved accuracy in long-term prediction over previous attempts \citep{Sorteberg2019ApproximatingNetworks} while keeping the inference time orders of magnitude smaller than typical solvers.
Secondly, our comparison between recurrent and purely convolutional models indicates that both can be equally effective in spatiotemporal prediction of SV PDEs. This is in alignment with the findings of \cite{bai2018empirical}, which demonstrate that convolutional models are as effective as recurrent models in one-dimensional sequence modelling. %
Finally, we evaluate the generalisation of the models in situations not seen during training and indicate their shortcomings.
\begin{minipage}[b]{0.5\textwidth}
\vspace{0pt}
\centering
\includegraphics[width=\linewidth]{images/results/model_comparison_MSE_ci_None.pdf}
\captionof{figure}{\textbf{Comparing the long-term prediction of the four models on the test set.} The PredRNN++ and the U-Net significantly outperform the baseline LSTM model. The vertical line indicates the training horizon.}
\label{fig:res:model_comp}
\end{minipage}
\hfill
\begin{minipage}[b]{0.49\textwidth}
\vspace{0pt}
\centering
\begin{tabular}{lrr}
\toprule
Time-step ahead & 20 & 80 \\
\midrule
LSTM (baseline) & 0.08 $\pm$ 0.00 & 0.19 $\pm$ 0.03 \\
ConvLSTM & 0.05 $\pm$ 0.00& 0.15 $\pm$ 0.01 \\
PredRNN++ & \textbf{0.02} $\pm$ \textbf{0.01} & 0.09 $\pm$ 0.01\\
U-Net & \textbf{0.02} $\pm$ \textbf{0.00} & \textbf{0.07} $\pm$ \textbf{0.01}\\
\bottomrule
\end{tabular}
\vspace{1cm}
\captionof{table}{\textbf{Root Mean Square Error (RMSE) comparison of model prediction at specific time-steps ahead.} Best accuracy in bold. The standard errors are across 4 runs with different initialisation.}
\label{tab:res:model_comp}
\end{minipage}
\begin{figure}[ht]
\centering
\begin{minipage}[l]{.49\textwidth}
\includegraphics[width=\linewidth]{images/results/reconstruct-unet-important-to-less-paper.pdf}
\end{minipage}%
\hfill
\begin{minipage}[r]{.49\textwidth}
\includegraphics[width=\linewidth]{images/results/reconstruct-causal-lstm-important-to-less-paper.pdf}
\end{minipage}
\caption{\textbf{Cumulative reconstruction of the output from the feature maps of the pre-last layer of the \mbox{U-Net} (top) and PredRNN++ (bottom).} Prediction corresponds to the 80th time-step ahead. We ordered the feature maps by the absolute value of their weight, from the most important to the least. The PredRNN++ gradually builds up its prediction. The U-Net works differently: the first few feature maps put emphasis on the boundary conditions. Then some of the feature maps focus on the peaks (white colour) and some others on the troughs. Combined, they build the final prediction.
}
\label{add:fig:reconstruct}
\vskip -2mm
\end{figure}
\vskip -5mm
\section{Related work}
\label{sec:related_work}
\vskip -2mm
Deep learning methods have been proposed for spatiotemporal forecasting in various fields including the solution of PDEs. Recurrent neural networks have been proven a good fit for the task, due to their innate ability to capture temporal correlations. \cite{srivastava2015unsupervised} use a convolutional encoder-decoder architecture where an LSTM module is used to propagate the latent space to the future. Variations of this technique have been successfully applied to the long-term prediction of physical systems, such as sliding objects \citep{Ehrhardt2017LearningPredictor} and wave propagation \citep{Sorteberg2019ApproximatingNetworks}.
Convolutional LSTMs (ConvLSTM) use convolutions inside the LSTM cell to complement the temporal state with spatial information.
Whilst initially proposed for precipitation nowcasting, ConvLSTMs were also found successful for video prediction \citep{Shi2015ConvolutionalNowcasting}. \cite{Wang2018PredRNN++:Learning} proposed the PredRNN++, featuring spatial memory that traverses the stacked cells in the network and improves the accuracy of short-term prediction over ConvLSTMs.
Feed-forward models have, also, been used in spatiotemporal forecasting. \cite{mathieu2015deep} used a CNN to encode video frames in a latent space and extrapolated the latent vectors to the future. \cite{tompson2017accelerating} employed CNNs to speed up the projection step in fluid flow simulations. U-Net has been used for optical flow estimation in videos \citep{DosovitskiyFlowNet:Networks} as well as in physical systems, such as sea temperature predictions \citep{bezenac2017deep} and accelerating the simulation of the Navier-Stokes equations \citep{Thuerey2018DeepFlows}. While both recurrent and convolutional models have been successfully applied for the prediction of PDEs, there is a paucity of studies comparing the two categories from a representation learning point of view.
Other architectures for spatiotemporal prediction include Generative Adversarial Networks, for fluid simulations \citep{Kim2018DeepSimulations} and Graph Networks for wind-farm power estimation \citep{Park2019Physics-inducedEstimation}. There is also a growing body of research on physics-inspired networks for solving PDEs \citep{Raissi2017PhysicsEquations, Perdikaris2019ModelingModels}.
\begin{minipage}[t]{0.49\textwidth}
\vspace{0pt}
\centering
\includegraphics[width=\linewidth]{images/results/all_sets_U-Net.pdf}
\captionof{figure}{\textbf{Generalisation to different physical settings for the U-Net.} The network copes well with changes to illumination, and even with two drops, but cannot predict linear waves or a different tank size well.}
\label{fig:res:gen}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\vspace{0.1cm}
\centering
\small
\begin{tabular}{lrrrr}
\toprule
Time-step ahead & 20 & 40 & 60 & 80 \\
\midrule
Test set & \textbf{0.02} & \textbf{0.03} & \textbf{0.05} & \textbf{0.07} \\
Opposite Illum. &\textbf{ 0.02} &\textbf{ 0.03} & \textbf{0.05 }&\textbf{ 0.07} \\
Random Illum. & 0.04 & 0.06 & 0.08 & 0.10 \\
Double Drop & 0.04 & 0.07 & 0.10 & 0.13 \\
Lines & 0.11 & 0.16 & 0.18 & 0.19 \\
Shallow Depth & 0.04 & 0.09 & 0.13 & 0.16 \\
Big Tank & 0.08 & 0.14 & 0.16 & 0.17 \\
Small Tank & 0.19 & 0.22 & 0.23 & 0.23 \\
\bottomrule
\end{tabular}
\vspace{0.3cm}
\captionof{table}{\textbf{RMSE of U-Net across datasets at specific points in time.} Performance varies across different physical settings. The model is invariant to an orthogonal phase shift in illumination.}
\label{tab:res:gen}
\end{minipage}
\section{Evaluated models}
Four different models are assessed in this work. Three of them are recurrent (LSTM, ConvLSTM, PredRNN++) and one is feed-forward (U-Net). A detailed description of all the implementations can be found in Section \ref{app:sec:mod} of the Appendix. The LSTM model was specifically developed for wave propagation prediction \citep{Sorteberg2019ApproximatingNetworks} and serves as a baseline on which we sought improvement. It is composed of a convolutional encoder and decoder with three LSTMs in the middle. The LSTM modules use the vector output of the encoder as an inner representation and propagate it forward in time. Each LSTM propagates a different part of the sequence (see Appendix).
The other models were selected on the basis of their applicability to relevant tasks. ConvLSTM and PredRNN++ have been empirically shown to perform well at short-term spatiotemporal predictions. The rationale for using them in long-term prediction is that the underlying physics of wave propagation do not change. If a model learns a good representation of short-term dynamics, then the error accumulation should remain low long-term. Both models use convolutions inside the recurrent cell to create a synergy between spatial and temporal modelling. Additionally, PredRNN++ employs a spatial memory that traverses the vertical stack to increase short-term accuracy.
The feed-forward model is based on the U-Net architecture used in spatiotemporal prediction. For example, it has been used to infer optical flow \citep{fischer2015flownet}, motion fields \citep{bezenac2017deep} and velocity fields \citep{Thuerey2018DeepFlows}. In contrast, we train the network end-to-end and conditional on its own predictions; the latter
shifts the focus from short-term to long-term accuracy.
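Training and evaluating conditional on the model's own predictions amounts to an autoregressive rollout. A minimal sketch of the mechanism, with a toy one-step predictor standing in for any of the networks (the function name and shapes are illustrative assumptions):

```python
import numpy as np

def rollout(step_fn, context, n_future):
    """Autoregressively unroll a one-step predictor: each predicted frame is
    appended to the history and fed back as conditioning for the next step."""
    frames = list(context)
    for _ in range(n_future):
        frames.append(step_fn(np.stack(frames[-len(context):])))
    return np.stack(frames[len(context):])

# Toy one-step 'model': predicts the mean of the conditioning frames.
preds = rollout(lambda x: x.mean(axis=0),
                [np.zeros((4, 4)), np.ones((4, 4))], n_future=3)
```

During training, losses computed on such rolled-out frames penalise error accumulation, shifting the emphasis from short-term to long-term accuracy.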
\section{Results}\label{sec:results}
\subsection{Long term prediction: Extrapolation in time}
We evaluated how well the models extrapolate in time. Given ground-truth simulations of 100 frames in length, we tested the model predictions up to 80 steps, much more than the maximum of 20 frame sequences that the models are trained upon.
The RMSE at each time step is calculated as an average over all the test sequences. Results show that the baseline LSTM gives the worst performance. Its RMSE reaches 0.10 after only 21 frames, with the error rising sharply after frame 10 (Figure \ref{fig:res:model_comp}). A probable cause is the usage of three distinct LSTMs, which require more data to train upon. The ConvLSTM offers an improvement: it reaches 0.1 RMSE only after 53 frames. The error trend is also very gradual, almost linear. An even greater improvement comes from the PredRNN++, which provides a very low error over the whole prediction range. Its maximum error at frame 80 is 0.091, substantially lower than the LSTM (0.186) and the ConvLSTM (0.150) (Table \ref{tab:res:model_comp}). This confirms the findings of \cite{wang2018predrnn++} that PredRNN++ is more efficient than ConvLSTM. U-Net is on par with PredRNN++ until frame 34, but has better long-term prediction, reaching 0.071 RMSE at frame 80 vs 0.091 for the PredRNN++. The U-Net decreases the RMSE by $62\%$ compared to the baseline. It is also the fastest model, providing a $240\times$ speed-up over the numerical solver that we used (Table \ref{app:tab:speed} in Appendix).
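The per-frame error curve can be computed in a few lines. The paper does not spell out the exact aggregation order, so the sketch below assumes the squared error is averaged over sequences and pixels before taking the root; the data are synthetic.

```python
import numpy as np

def rmse_per_frame(pred, truth):
    """RMSE at each time step, averaged over all test sequences.
    pred, truth: arrays of shape (n_sequences, n_frames, H, W)."""
    sq_err = (pred - truth) ** 2
    return np.sqrt(sq_err.mean(axis=(0, 2, 3)))  # mean over sequences and pixels

# Synthetic example: error grows linearly with the frame index.
truth = np.zeros((5, 80, 4, 4))
pred = 0.001 * np.arange(80.0)[None, :, None, None] * np.ones((5, 80, 4, 4))
curve = rmse_per_frame(pred, truth)              # curve[t] == 0.001 * t
```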
Qualitatively, it appears that the PredRNN++ propagates its internal representation one step at a time while the U-Net predicts multiple frames in one pass. How the output is reconstructed in the last layer is indicative of the differences (Figure \ref{add:fig:reconstruct}).
\begin{figure}[t]
\centering
\begin{minipage}[c]{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{images/results/cut_unet_vs_causal_Test_1.pdf}
\end{minipage}%
\hfill
\begin{minipage}[c]{.58\textwidth}
\centering
\includegraphics[width=\linewidth]{images/results/qual_unet_all_datasets.pdf}
\end{minipage}
\caption{\textbf{Left: Qualitative comparison between the U-Net and the PredRNN++ on the test set.} The intensity profile corresponds to the yellow line (the line with the highest variation). In this particular case, the PredRNN++ has missed the time constant. \textbf{Right: Predictions (at time $t=80$) of the U-Net in the various datasets that were not seen during training.} In double drop, we see how the model fails to accurately predict the double wave-front. For the bigger and smaller tub datasets it misses the time constant.}
\label{fig:res:gen_qual}
\vskip -2mm
\end{figure}
\subsection{Generalisation: Extrapolation in other physical settings}
Here, we evaluate the capabilities and limitations of our models by testing under different initial conditions, illumination models and tank dimensions (Table \ref{app:tab:dataset} in Appendix). For conciseness, we only present the results of the U-Net but the same conclusions stand for all the models.
The U-Net seems to be quite robust to changes in illumination. The RMSE for the opposite illumination angle ($135^\circ$) is indistinguishable from that of the original test set (Figure \ref{fig:res:gen} and Table \ref{tab:res:gen}). This indicates that the learned representation is invariant to a perpendicular phase shift in lighting conditions.
Propagation of linear waves appears to be more challenging: the RMSE exceeds 0.10 after just 12 frames. The visualisation shows how the morphology of the prediction is qualitatively different, containing circular artefacts reminiscent of the training data (Figure \ref{fig:res:gen_qual}). When two drops are used, the RMSE is fairly low but the two wave-fronts of the predictions are sometimes blurred. We also varied the tank size to study the effect of wave speed. Both cases are challenging, with the smaller tank size, or equivalently faster waves, exceeding 0.10 RMSE after just 5 frames. Predictions in Figure \ref{fig:res:gen_qual} demonstrate how the network miscalculates the wave speed, so that its predictions are either faster or slower than the ground truth. Please note that direct comparisons between datasets based on the RMSE are not without shortcomings. Each dataset has its own inherent ``variation'' which affects the RMSE, i.e. waves move faster in a small tank (see Figure \ref{app:fig:res:gen_baselines} in the Appendix for a discussion).
\section{Conclusions and Future Work}\label{sec:conclusions}
In this work we investigated the use of deep networks for approximating wave propagation. Using a U-Net architecture, we managed to reduce the long-term approximation RMSE to 0.071 against the previous baseline of 0.186. At the same time, the U-Net is $240\times$ faster than the simulation. Our results suggest that the U-Net outperforms state-of-the-art recurrent models.
It is unclear why U-Net models perform so well in this task. It has been demonstrated that convolutional networks are effective at modelling one-dimensional temporal sequences \citep{bai2018empirical}; the same might be true for higher-dimensional data. Furthermore, the simulated data are based on few-step solvers. In such a case the memory modules may not offer a significant advantage.
Lastly, we extensively assessed how the networks generalise in unseen physical settings and pointed out current limitations.
In the future, we aim to introduce noise in the simulation so the system becomes stochastic. It would be interesting to see if in this case the recurrent models learn the dynamics better than the \mbox{U-Net}. A big shortcoming of the current models is generalisation in other physical settings. We plan to address this by a physics-inspired latent space factorisation and meta-learning.
\section{Introduction}
The ordered phase of electrons, where electrons localize themselves in a particular period or lattice, such as the skyrmion lattice or the Wigner lattice, has attracted considerable interest\cite{GST05,KLM14,DLP16,JDL18}. The existence of an ordered phase in a two-dimensional (2D) system has been successfully probed by geometric resonance in magneto-oscillations\cite{GST05,KLM14,DLP16,JDL18}. Despite observations of the incipient Wigner lattice in one-dimensional (1D) systems by means of conductance measurements\cite{HTP09, KTS14}, it is still challenging to monitor the ordered phase in a 1D system. This may be because the 1D electrons merge with the 2D Fermi sea as they leave the 1D regime, thus losing their ordered phase. The first step towards realizing an ordered 1D phase could be forming a chain of localized electrons, and in this regard the multi-impurity Kondo effect appears to be a useful tool to visualize the formation of localized electrons. The multi-impurity Kondo effect arises from coherent spin-flip scattering between the conduction electrons and multiple localized electrons\cite{KONDO64,PRM06, RFB08}. For the odd-numbered Kondo effect, screening of the unpaired spin-1/2 localized electron gives rise to a zero-bias anomaly (ZBA) in the differential conductance. For the even-numbered Kondo effect, the spin configuration $|S,m\rangle$ of the localized electrons is vital ($S$ is the total spin and $m$ is the spin projection). In the singlet regime ($|0,0\rangle$), a finite-bias anomaly (FBA) occurs due to the singlet-triplet transition while no ZBA is allowed\cite{PRM06, RFB08}; on the other hand, both an FBA and a ZBA (a partially screened spin-1 Kondo effect in this particular case\cite{SDE00}) can be observed in the triplet regime\cite{RFB08} ($|1,1\rangle$, $|1,0\rangle$ and $|1,-1\rangle$).
The occurrence of an FBA in the singlet regime and the coexistence of FBA and ZBA in the triplet regime have been observed in quantum dots (QD)\cite{PRM06, RFB08}. On the other hand, less progress has been achieved in quantum point contacts (QPC) owing to difficulties in probing the spin configuration of the localized electrons. Some recent works on QPCs illustrate an abnormal splitting of the ZBA \cite{ILK13,BMF14} in a narrow conductance window around $0.8 \times \frac{2e^2}{h}$. However, the coexistence of FBA and ZBA was absent in such cases. Also, it has been shown that non-Kondo disorder within the 1D channel can also result in the splitting of the ZBA\cite{SSD09}. Therefore, the understanding of the double-impurity Kondo effect (or singlet-triplet Kondo effect) in QPCs is far from complete.
\begin{figure}
\includegraphics[height=3.6in,width=3.2in]{Fig1.png}
\caption{Setup of the experiment and main result. (a) Schematic of the experimental setup. The square gold pads at the end of the mesa are ohmic contacts; the opening angle of the arc-shaped split gate is 45$^\circ$ and the radius is 2.0 $\mu$m; both the length and width of the QPC embedded in the arc (hereafter referred to as the arc-QPC) are 200 nm. The length (width) of the injector-QPC is 700 nm (500 nm). The shining red pattern represents the Friedel oscillations. The current meter measures I$_1$ + I$_2$. The inset shows a zoom-in of the injector-QPC in samples A-C, whereas a conventional rectangular QPC is used in sample D. (b) It is seen that with the cavity switched on ($V_2$ = -0.90 V), FBAs (highlighted by the black dashed box) are observed along with the ZBA; on the other hand, only the ZBA is present with the cavity switched off ($V_2$ = 0 V). It should be noted that, in order to illustrate the main features, traces overlapping around 0.85$\times \frac{2e^2}{h}$ are selectively plotted. (c) A zoom-in of representative traces of the first panel in (b); the arrows highlight the ZBA (black) and FBA (red). }
\label{fig:1}
\end{figure}
\begin{figure}
\includegraphics[height=3.2in,width=3.6in]{Fig2.png}
\caption{Simulated electron density along the current flow direction. (a) and (b) Simulated electron density at different injector conductances for samples A and D, respectively. (c) and (d) Spin configuration in sample A with V$_1$ tuned to conductances of 0.6G$_0$ and 0.8G$_0$, respectively.
}
\label{fig:2}
\end{figure}
Here we provide an easily accessible route to realizing the singlet-triplet Kondo effect in a hybrid system consisting of a QPC coupled to an electronic cavity. The cavity refocuses the injected electrons back to the QPC\cite{ROZ15} and thus tunes the effective electron density within the QPC without effectively changing the electrostatic potential. We show that in this system the ZBA and FBA coexist in the moderate conductance regime, in addition to the splitting of the ZBA in the high conductance regime. Our results also indicate that the occurrence of the 0.7 conductance anomaly, which has been a subject of continuous debate, is not correlated with the ZBA or FBA.
\section{Experiment}
The hybrid devices were fabricated on a high mobility two-dimensional electron gas (2DEG) formed at the interface of GaAs/Al$_{0.33}$Ga$_{0.67}$As heterostructure. The metallic gates are deposited on the surface which is 90 nm away from the 2DEG. The electron density (mobility) measured at 1.5 K was 1.80$\times$10$^{11}$cm$^{-2}$ (2.1$\times$10$^6$cm$^2$V$^{-1}$s$^{-1}$). All the measurements were performed with the standard lock-in technique in a cryofree dilution refrigerator with a lattice temperature of 20 mK.
The samples studied in the present work consist of a pair of arc-shaped gates, with a QPC formed in the center of the arc, and an injector-QPC, as shown in Fig.~\ref{fig:1}(a). The injector-QPC has been shaped to have a slot in the center in samples A-C [inset of Fig.~\ref{fig:1}(a)], whereas for sample D a conventional QPC with rectangular split gates\cite{KTS14} was used for comparison [see the inset of Fig.~\ref{fig:5}(a)]. Previously, it was shown that a weakly bound state can often be formed in a QPC with protrusions in the split gates\cite{SFP08}. The main results are obtained from sample A, while samples B and C show similar behaviour (Supplemental Fig.~S4). We have carefully verified that the injector-QPC does not show QD-like behaviour [see Supplemental Fig.~S1(d)\cite{note}].
Before we discuss the main results of the present work, it is necessary to clarify that, although the injector-QPC in samples A-C looks QD-like, it does not behave like a QD. It was suggested that a QD-like QPC may have three distinctive working regimes\cite{HBS15}, namely, a QPC-dominant regime, a QPC-QD transition regime (i.e. the device inherits characteristics from both QPC and QD), and a QD-dominant regime, according to the profile of the electrostatic potential. In the QPC-QD transition regime, a Fabry-Perot type interference should be present on the conductance plateaus, whereas in the QD-dominant regime, Coulomb blockade peaks superpose on the conductance plateaus\cite{HBS15}. In our experiment, the conductance plateaus are free of oscillations, as shown in Fig.~\ref{fig:5}(b), which suggests our device is in neither the QPC-QD transition regime nor the QD-dominant regime.
\section{Result and discussion}
The hybrid system, as shown in Fig.~\ref{fig:1}(a), exhibited interesting behaviour in the presence of a source-drain bias [Fig.~\ref{fig:1}(b)]. In this experiment, the conductance of the injector-QPC was incremented slowly by changing the voltage V$_1$ applied to the injector-QPC for a fixed voltage V$_2$ applied across the arc-QPC. An electronic cavity can be created between the injector-QPC and arc-QPC once both QPCs are fine-tuned\cite{YKP17,YPM17}. A sharp ZBA peak was observed with the cavity switched off (with $V_2$ = 0 V). On the other hand, the results were modulated significantly with the cavity switched on (with $V_2$ = - 0.90 V). First, a flat ZBA peak was obtained in the low conductance regime of the injector-QPC (G $\leqslant$ 0.5$\times \frac{2e^2}{h}$). Second, additional FBA peaks, occurring around $\pm$0.2 mV, co-existed with the ZBA and thus formed a triple-peak feature when 0.5$\times \frac{2e^2}{h}$ $\leqslant$ G $\leqslant$ 0.8$\times \frac{2e^2}{h}$ [Fig.~\ref{fig:1}(c) is a zoom-in of the ZBA and FBAs]; the triple-peak feature is similar to that reported in QDs\cite{RFB08} but has not previously been observed in a QPC. Third, when the system was driven into the high conductance regime (G $\geqslant$ $0.8 \times \frac{2e^2}{h}$), the ZBA evolved into a dip while the FBA remained unchanged [highlighted by the blue trace in Fig.~\ref{fig:1}(b) and (c)], which agrees with the previous results in QPCs\cite{ILK13,BMF14}. On the other hand, switching the cavity on or off in sample D did not result in either a ZBA or an FBA (see Supplemental Fig.~S5), which was also noticed in a recent work where a flat QPC was coupled to a cavity\cite{SPK17}.
The observation of only a ZBA in the low conductance regime, the coexistence of ZBA and FBA in the moderate conductance regime, and only an FBA in the high conductance regime can be understood in terms of the evolution of localized electrons within the QPC, as shown in Fig.~\ref{fig:2} (the detail of the simulation and further discussion can be found in note 1 and 6 of the Supplemental Material). The enhanced reflection probability at the entrance and exit of the slot-shaped injector-QPC results in the formation of emergent localized electrons (ELS)\cite{MHW02,ILK13} in samples A-C, whilst the smoothly varying potential profile in sample D makes it difficult to sustain an ELS, as shown in Fig.~\ref{fig:2}(a) and (b). The electron density evolves from a single peak into multiple peaks on tuning the gate voltage (each peak corresponds to an ELS\cite{MHW02,ILK13}) in samples A-C. It is interesting to note that the ELSs eventually merge with the smooth background in the high conductance regime (see the trace for 1.2G$_0$), which explains why the ZBA and FBA were absent in the corresponding regime. After showing the general trend of the evolution of the electron density, we focus on the double-ELS regime, whose spin configuration is directly related to the FBA. The simulated results shown in Fig.~\ref{fig:2}(c) and (d) suggest there is a transition from the spin triplet state (V$_1$=-1.35 V, the injector-QPC conductance is $\sim$0.6G$_0$) to the spin singlet state (V$_1$=-1.33 V, the conductance is $\sim$0.8G$_0$). It has been shown in previous reports\cite{PRM06,RFB08} that the ZBA should be observed for the triplet state while it will be absent for the singlet state; in contrast, an FBA is allowed for both triplet and singlet states\cite{RFB08}. The experimental observation shown in Fig.~\ref{fig:1}(c) agrees well with the simulation in that the coexistence of ZBA and FBA occurs in the moderate conductance regime (triplet state) while only an FBA is present in the high conductance regime (singlet state).
\begin{figure}
\includegraphics[height=3.6in,width=3.0in]{Fig3.png}
\caption{Magnetic field dependence of the ZBA and FBA. (a) Schematic for the evolution of the singlet (red) and triplet states (blue) in the presence of an in-plane magnetic field in the triplet regime. (b) ZBA and FBA with $V_1$ set to -1.35 V at different in-plane magnetic fields with the cavity switched on. Data have been offset vertically for clarity. (c) The ZBA persists with the application of a transverse magnetic field while the FBA smears out at 30 mT.
}
\label{fig:3}
\end{figure}
\subsection{Magnetic field dependence}
To further support our argument, we present results in the presence of a magnetic field. The in-plane magnetic field lifts the degeneracy of the triplet state while it does not affect the singlet state, as shown in Fig.~\ref{fig:3}(a). In the triplet regime, where the triplet states have lower energy than the singlet state at zero magnetic field, the energy difference between the singlet state and the lowest triplet state increases linearly with increasing magnetic field\cite{RFB08,HDF06}. It is seen that with an in-plane magnetic field of 1 T, the ZBA becomes broadened while the FBA moves towards larger $V_{sd}$, as expected\cite{RFB08,HDF06}. On increasing the magnetic field further to 2 T or 3 T, the ZBA splits into two. The FBA seems to smear out in the presence of a large magnetic field, which is likely due to the transverse magnetic field component induced by imperfect field orientation. We show the influence of a transverse magnetic field in Fig.~\ref{fig:3}(c). We note that at a small transverse magnetic field of 30 mT [the result at 0 T is the same as the left panel of Fig.~\ref{fig:1}(b)], the FBAs were smeared out and only the ZBA was left. The small transverse magnetic field is insufficient to introduce a noticeable Zeeman energy, but enough to influence electron propagation in the cavity\cite{MTM94,YKP17,YPM17}, so that the cavity is no longer able to refocus the electrons back to the QPC. In other words, the cavity cannot efficiently modulate the effective electron density within the QPC in the presence of a transverse magnetic field.
\begin{figure}
\includegraphics[height=3.6in,width=3.2in]{Fig4.png}
\caption{Temperature dependence of the ZBA and FBA. (a) Temperature dependence in the ZBA-FBA coexistence regime. The data have been offset vertically for clarity. (b) Fittings for the anomalies referred to as M (red round markers; $T_K$ = 1.413 K, $s$ = 0.249), L (magenta diamond markers; $T_K$ = 1.804 K, $s$ = 0.248) and R (blue triangular markers; $T_K$ = 1.812 K, $s$ = 0.247) using Eq.~(1). (c) Temperature dependence in the high conductance regime (V$_1$ = -1.30 V) where only the FBA was observable (without offset). (d) Conductance of the central dip in (c) as a function of temperature; the red solid line is a fit using Eq.~(2), with $T^{\ast}$ = 0.72 K and $s$ = 0.22. }
\label{fig:4}
\end{figure}
\subsection{Temperature dependence}
The Kondo effect is known for its characteristic temperature dependence, so it is interesting to investigate the temperature dependence of the multi-impurity Kondo effect as well. Figure~\ref{fig:4}(a) shows the temperature dependence in the ZBA-FBA coexistence regime with the cavity switched on (see Supplemental Fig.~S6 for data with the cavity off). Both the ZBA and FBA were attenuated with increasing temperature when the cavity was switched on (V$_2$ = -0.90 V); the left and right FBAs smeared out alternately as the temperature was increased up to 1 K, leaving a broad ZBA at higher temperatures.
\begin{figure}
\includegraphics[height=4.2in,width=3.6in]{Fig5.png}
\caption{The 0.7 conductance anomaly is not correlated with the ZBA or FBA. (a) Measured conductance of the injector-QPC in sample A (blue trace) and sample D (red trace). The upper and lower insets show schematics of the injector-QPC for samples D and A, respectively. (b) Injector conductance measured with the cavity switched on (red dotted trace) and off (blue solid trace). (c) and (d) show the transconductance $\frac{dG}{dV_1}$ with the cavity off and on, respectively.}
\label{fig:5}
\end{figure}
A more detailed analysis is presented in Fig.~\ref{fig:4}(b) using the standard Kondo function for a QPC\cite{CLG02,COK98}, $$G=\frac{2e^2}{h}\{0.5 \times [1+(2^{1/s}-1)(\frac{T}{T_K})^2]^{-s}+0.5\} \eqno(1)$$ where $T_K$ is the Kondo temperature and $s$ is a fitting parameter that characterizes the screening between conduction and localized electrons. It is noted that the ZBA agrees well with the theoretical fitting whether the cavity is switched off or on. On the other hand, although the standard Kondo model is not meant for the FBA, it is surprising to note that the standard model can reproduce the temperature dependence of the FBA when T $\geqslant$ 0.1 K, which might be due to the fact that the triplet state may decouple into two independent spin-half units at higher temperature\cite{RFB08}, so that the standard Kondo effect dominates. The anomalous suppression of the FBAs in the lowest temperature regime [see the leftmost data points in Fig.~\ref{fig:4}(b); the black arrow highlights the critical point] diverges significantly from the standard Kondo model. To shed more light on the weakening of the FBAs, further experimental results in an even lower temperature regime are required.
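Eq.~(1) is straightforward to evaluate and fit numerically. The sketch below generates synthetic data from the model itself and recovers $T_K$ and $s$ with a crude grid search, a stand-in for whatever least-squares routine was actually used; all parameter values are illustrative. Note that at $T = T_K$ the bracket equals $2^{1/s}$, so the Kondo part of the conductance is halved and $G = 0.75$ in units of $2e^2/h$.

```python
import numpy as np

def kondo_G(T, T_K, s):
    """Eq. (1): modified Kondo form for a QPC, in units of 2e^2/h."""
    return 0.5 * (1 + (2 ** (1 / s) - 1) * (T / T_K) ** 2) ** (-s) + 0.5

# Synthetic "measurement" with T_K = 1.4 K and s = 0.25 (illustrative values).
T = np.linspace(0.05, 1.5, 30)
G_data = kondo_G(T, 1.4, 0.25)

# Crude grid-search fit over (T_K, s); a real analysis would use least squares.
best = min((np.sum((kondo_G(T, tk, s) - G_data) ** 2), tk, s)
           for tk in np.linspace(0.5, 2.5, 201)
           for s in np.linspace(0.1, 0.5, 81))
```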
To make a direct comparison between our observation and the results presented in Ref. 11, we present the temperature dependence in the FBA-only regime, as shown in Fig.~\ref{fig:4}(c) and (d). Apart from the unusual rise of $G_{dip}$ (conductance of the central dip) below 0.1 K [roughly the same value as that of the critical point in Fig.~\ref{fig:4}(b)], the nonmonotonic trend agrees well with G$_{dip}$ in both QDs\cite{RFB08,HDF06} and QPCs\cite{ILK13}, and can be fitted by the two-stage Kondo screening model\cite{WSJ02}. In the first stage, when the temperature T$\geqslant$0.8 K, the energy difference between the singlet and triplet states E$_S$ - E$_T$ is smaller than k$_B$T, so that Kondo screening of one of the ELSs, rather than of the singlet state, dominates\cite{RFB08}. The system behaves spin-half-like, so the temperature dependence follows the standard Kondo model [i.e. Eq.~(1); the fitting for this section is not shown]. When the temperature T$<$0.8 K, the whole singlet state is screened and the temperature dependence can be described by the re-entrant Kondo formula\cite{RFB08}, $$G=\frac{2e^2}{h} \{1-\alpha \times [1+(2^{1/s}-1)(\frac{T}{T^{\ast}})^2]^{-s}\}+G_c\eqno(2)$$ where $\alpha$ = 1 for QDs\cite{RFB08}; however, we set it as a free fitting parameter to account for differences between QPCs and QDs. Here $G_c$ is the background conductance, $s$ is set to 0.22\cite{COK98}, and $T^\ast$ originates from the renormalized singlet binding energy $k_B$$T^\ast$, which is estimated to be 0.72 K. It is noted that $T^\ast$ is in good agreement with the critical temperature ($\sim$0.8 K) below which G$_{dip}$ gradually decreases [Fig.~\ref{fig:4}(d)].
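The limiting behaviour of Eq.~(2) can be checked directly: as $T \to 0$ the singlet is fully screened and the conductance drops to $(1-\alpha) + G_c$; as $T \to \infty$ the second-stage screening switches off and the conductance recovers to $1 + G_c$ (in units of $2e^2/h$); and at $T = T^\ast$ the suppression term is exactly halved. The values of $\alpha$ and $G_c$ in the sketch below are illustrative, not the fitted ones.

```python
def reentrant_G(T, T_star, s=0.22, alpha=0.6, G_c=0.1):
    """Eq. (2): re-entrant (second-stage) Kondo form, in units of 2e^2/h.
    alpha and G_c here are illustrative values, not fitted ones."""
    supp = (1 + (2 ** (1 / s) - 1) * (T / T_star) ** 2) ** (-s)
    return (1 - alpha * supp) + G_c

T_star = 0.72
low = reentrant_G(1e-9, T_star)    # -> (1 - alpha) + G_c: singlet fully screened
high = reentrant_G(1e9, T_star)    # -> 1 + G_c: second-stage screening off
mid = reentrant_G(T_star, T_star)  # suppression halved: (1 - alpha/2) + G_c
```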
\subsection{Discussion on other possible mechanisms of FBA}
There are several other mechanisms that could possibly result in an FBA in addition to the singlet-triplet Kondo effect, namely, spin-orbit interaction\cite{CNF13}, lifting of the K-K$^\prime$ degeneracy\cite{SSM15}, coupling of the QPC to a high frequency bosonic environment\cite{KSS96}, non-Kondo disorder within the 1D channel\cite{SSD09}, and pinning of the Kondo resonance to the chemical potential owing to an asymmetric device design\cite{SBK99}. The conditions (such as strong spin-orbit interaction) for the first three mechanisms are unlikely to be fulfilled in the current experimental setup, whereas the latter two cannot result in the coexistence of the ZBA and FBA. More importantly, these mechanisms predict rather different temperature and magnetic field dependences compared to the ones we have observed. Hence, we can exclude these alternative interpretations of the observed FBA (a detailed discussion can be found in note 7 of the Supplemental Material).
\section{0.7-structure}
Apart from the conductance quantization in 1D systems\cite{WTN88,VVB88}, a so-called '0.7-structure'\cite{TNS96} (a conductance anomaly that occurs at 0.7$\times \frac{2e^2}{h}$) has been widely observed and attributed to many-body effects. The origin of the 0.7-structure remains a subject of continuous debate. Recent work indicated that there is a correlation between the 0.7-structure and the FBA\cite{ILK13,BMF14} and thus suggested that the 0.7-structure could be closely associated with the Kondo effect\cite{MHW02}. However, such a correlation is absent in our experiment. In Fig.~\ref{fig:5}(a) we show a comparison between sample A (ZBA present) and sample D (ZBA absent) with the cavity switched off, so that only the single-impurity Kondo effect matters. A pronounced 0.7-structure is present in both cases. Figure~\ref{fig:5}(b) shows the result in sample B with the cavity switched on (FBA observable; the source-drain bias spectrum of sample B is presented in Supplemental Fig.~S4) and off (FBA absent). We found that the 0.7-structure was not affected by the presence of the FBA (i.e. the multi-impurity Kondo effect). The trend is also clear in the source-drain bias spectra presented in Fig.~\ref{fig:5}(c) and (d), with the cavity switched off and on, respectively. Apart from the change in pinch-off voltage, there is hardly any change in the spectrum. Therefore, it seems that the 0.7-structure seen in the present case may not be related to the Kondo effect.
It has been widely shown in previous works that the 0.7-structure can be a more general feature than the single-impurity Kondo effect (a recent summary can be found in Ref. 34), and here we suggest that this conclusion also holds in the multi-impurity Kondo regime.
\section{Conclusion}
In conclusion, we demonstrated the singlet-triplet Kondo effect in a QPC-cavity hybrid system via the coexistence of ZBA and FBA. The FBA is shown to be highly sensitive to the coupling between the QPC and cavity. The temperature dependence of the FBA uncovers a detailed evolution of the total spin of the localized electrons. The results may open up a different regime of experimentation using the QPCs to explore singlet-triplet effects which so far was largely restricted to QDs.
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), U.K.
\section{Introduction}
A \emph{tile} is a finite non-empty subset of $\mathbb{Z}^n$ for some $n$. We say that a tile $T$ \emph{tiles} $\mathbb{Z}^d$ if $\mathbb{Z}^d$ can be partitioned into copies of $T$, that is, subsets that are translations, rotations or reflections, or any combination of these, of $T$.
For example, the tile $\texttt{X.X} = \{-1,1\} \subset \mathbb{Z}$ tiles $\mathbb{Z}$. The tile $\texttt{XX.XX} = \{-2,-1,1,2\} \subset \mathbb{Z}$ does not tile $\mathbb{Z}$, but we can also regard it as a tile in $\mathbb{Z}^2$, and indeed it tiles $\mathbb{Z}^2$, as shown, for example, in \cite{gltan16}.
Chalcraft \cite{chalcraft1,chalcraft2} conjectured that, for any tile $T \subset \mathbb{Z}^n$, there is some dimension $d$ for which $T$ tiles $\mathbb{Z}^d$. This was proved by Gruslys, Leader and Tan \cite{gltan16}. The first non-trivial case is the \emph{punctured interval} $T = \underbrace{\texttt{XXXXX}}_{k}\!\texttt{.}\!\underbrace{\texttt{XXXXX}}_{k}$. The authors of \cite{gltan16} showed that $T$ tiles $\mathbb{Z}^d$ for $d = 2k^2$, but they were unable to prove that the smallest required dimension $d$ was quadratic in $k$, or even that $d \to \infty$ as $k \to \infty$. They therefore asked the following question:
\begin{question}[Gruslys, Leader, Tan \cite{gltan16}]
Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, and let $d$ be the least number such that $T$ tiles $\mathbb{Z}^d$. Does $d \to \infty$ as $k \to \infty$?
\end{question}
In this paper we will show that, rather unexpectedly, $d$ does not tend to $\infty$:
\begin{thm}\label{mainthm}
Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$. Then $T$ tiles $\mathbb{Z}^4$. Furthermore, if $k$ is odd or congruent to $4 \pmod 8$, then $T$ tiles $\mathbb{Z}^3$.
\end{thm}
We have already noted that \texttt{X.X} tiles $\mathbb{Z}$, and \texttt{XX.XX} tiles $\mathbb{Z}^2$ but not $\mathbb{Z}$. It can be shown via case analysis that, for $k \geq 3$, the tile $T$ does not tile $\mathbb{Z}^2$. However, this proof is tedious and provides little insight, and since it is not the focus of this paper, we omit it. For odd $k \geq 3$ and for $k \equiv 4 \pmod 8$, 3 is therefore the least $d$ such that $T$ tiles $\mathbb{Z}^d$. For the remaining cases, namely $k \equiv 0, 2, 6 \pmod 8$, $k \geq 6$, it is unknown whether the least possible $d$ is 3 or 4.
In this paper, we will first prove the result for odd $k$. This will introduce some key ideas, which we will develop to prove the result for general $k$, and then to improve the dimension from 4 to 3 for $k \equiv 4 \pmod 8$.
Finally, we give some background. Tilings of $\mathbb{Z}^2$ by polyominoes (edge-connected tiles in $\mathbb{Z}^2$) have been thoroughly investigated. For example, Golomb \cite{golomb70} showed that results of Berger \cite{berger66} implied that there is no algorithm which decides whether copies of a given finite set of polyominoes tile $\mathbb{Z}^2$. It is unknown whether the same is true for tilings by a single polyomino. For tilings of $\mathbb{Z}$ by sets of general one-dimensional tiles, such an algorithm does exist, as demonstrated by Adler and Holroyd \cite{ah81}. Kisisel \cite{kisisel01} introduced an ingenious technique for proving that certain tiles do not tile $\mathbb{Z}^2$ without having to resort to case analysis.
A similar problem is to consider whether a tile $T$ tiles certain finite regions, such as cuboids. There is a significant body of research, sometimes involving computer searches, on tilings of rectangles in $\mathbb{Z}^2$ by polyominoes (see, for example, Conway and Lagarias \cite{cl90} and Dahlke \cite{dahlke}). Friedman \cite{friedman} has collected some results on tilings of rectangles by small one-dimensional tiles. More recently, Gruslys, Leader and Tomon \cite{gltomon16} and Tomon \cite{tomon16} considered the related problem of partitioning the Boolean lattice into copies of a poset, and similarly Gruslys \cite{gruslys16} and Gruslys and Letzter \cite{gl16} have worked on the problem of partitioning the hypercube into copies of a graph.
\section{Preliminaries and the odd case}
We begin with the case of $k$ odd. This is technically much simpler than the general case, and allows us to demonstrate some of the main ideas in the proof of Theorem \ref{mainthm} in a less complicated setting.
\begin{thm}\label{kodd}
Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, with $k$ odd. Then $T$ tiles $\mathbb{Z}^3$.
\end{thm}
Throughout this section, $T$ is fixed, and $k \geq 3$. We will not yet assume that $k$ is odd, because the tools that we are about to develop will be relevant to the general case too.
We start with an important definition from \cite{gltan16}: a \emph{string} is a one-dimensional infinite line in $\mathbb{Z}^d$ with every $(k+1)$th point removed. Crucially, a string is a disjoint union of copies of $T$.
We cannot tile $\mathbb{Z}^d$ with strings, as each string intersects $[k+1]^d$ in either 0 or $k$ points, and $(k+1)^d$ is not divisible by $k$. However, we could try to tile $\mathbb{Z}^d$ by using strings in $d-1$ of the $d$ possible directions, leaving holes that can be filled with copies of $T$ in the final direction. We therefore consider $\mathbb{Z}^d$ as consisting of slices equivalent to $\mathbb{Z}^{d-1}$, each of which will be partially tiled by strings.
Any partial tiling of the discrete torus $\mathbb{Z}_{k+1}^{d-1} = (\mathbb{Z}/(k+1)\mathbb{Z})^{d-1}$ by lines with one point removed corresponds to a partial tiling of $\mathbb{Z}^{d-1}$ by strings. We will restrict our attention to these tilings at first, as they are easy to work with.
We will call a set $X \subset \mathbb{Z}_{k+1}^{d-1}$ a \emph{hole} in $\mathbb{Z}_{k+1}^{d-1}$ if $\mathbb{Z}_{k+1}^{d-1} \setminus X$ can be tiled with strings. One particularly useful case of this is when $d = 3$ and $X$ either has exactly one point in each row of $\mathbb{Z}_{k+1}^2$ or exactly one point in each column of $\mathbb{Z}_{k+1}^2$. Then $X$ is clearly a hole, since a string in $\mathbb{Z}_{k+1}^2$ is just a row or column minus a point.
The following result will allow us to fill the gaps in the final direction, assuming we have chosen the partial tilings of the $\mathbb{Z}^{d-1}$ slices carefully:
\begin{lemma}\label{biglemma}
Let $S \subset \mathbb{Z}^d$, $|S| = 3$. Then there exists $Y \subset S \times \mathbb{Z}$ such that $T$ tiles $Y$, and for every $n \in \mathbb{Z}$, $|Y \cap (S \times \{n\})| = 2$.
\end{lemma}
\begin{proof}
Let $S = \{x_1, x_2, x_3\}$. For $i = 1,2,3$, place a copy of $T$ beginning at $\{x_i\} \times \{n\}$ for every $n \equiv ik \pmod {3k}$. The union $Y$ of these tiles has the required property:\\
For $n \equiv 0, k+1, \ldots, 2k-1 \pmod{3k}$, $Y \cap (S \times \{n\}) = \{x_1, x_3\} \times \{n\}$.\\
For $n \equiv k, 2k+1, \ldots, 3k-1 \pmod{3k}$, $Y \cap (S \times \{n\}) = \{x_1, x_2\} \times \{n\}$.\\
For $n \equiv 2k, 1, \ldots, k-1 \pmod{3k}$, $Y \cap (S \times \{n\}) = \{x_2, x_3\} \times \{n\}$.\\
\end{proof}
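The construction can also be checked mechanically for small $k$. The following sketch (purely illustrative, not part of the proof) places the copies of $T$ modulo $3k$ and verifies that every layer meets exactly two of the three columns, in the pattern listed above:

```python
def tile_offsets(k):
    # cells of the punctured interval T relative to its starting point:
    # k cells, a gap of one cell, then k more cells
    return list(range(k)) + list(range(k + 1, 2 * k + 1))

def column_residues(i, k):
    # residues mod 3k occupied in the column of x_i, where one copy of T
    # begins at every n = i*k (mod 3k)
    return {(i * k + o) % (3 * k) for o in tile_offsets(k)}

def pattern(k):
    # for every layer n (mod 3k), the columns in which Y has a point
    cols = {i: column_residues(i, k) for i in (1, 2, 3)}
    return {n: tuple(i for i in (1, 2, 3) if n in cols[i]) for n in range(3 * k)}

for k in (3, 4, 5, 6):
    p = pattern(k)
    assert all(len(v) == 2 for v in p.values())   # |Y meets each layer| = 2
    assert p[0] == (1, 3) and p[k] == (1, 2) and p[2 * k] == (2, 3)
```

Working modulo $3k$ suffices here because the copies in each column repeat with period $3k$, and $2k+1 \leq 3k$ guarantees that consecutive copies in a column do not overlap.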
We will now prove Theorem \ref{kodd}. We know that if $X \subset \mathbb{Z}_{k+1}^2$ has one point in each row or column then $X$ is a hole of size $k+1$. Since $k+1$ is even, we can try to choose $X_n$ in each slice $\mathbb{Z}_{k+1}^2 \times \{n\}$ so that $\bigcup_{n\in\mathbb{Z}}X_n$ is the disjoint union of $\frac{k+1}{2}$ sets $Y_i$ of the form in Lemma \ref{biglemma}.
We can do this as follows:\\
For $n \equiv 0, k+1, \ldots, 2k-1 \pmod{3k}$, let $X_n = \{(0,0),(1,1),\ldots,(k-1,k-1),(k,k)\}$.\\
For $n \equiv k, 2k+1, \ldots, 3k-1 \pmod{3k}$, let $X_n = \{(0,0),(0,1),(2,2),(2,3),\ldots,(k-1,k-1),\newline(k-1,k)\}$.\\
For $n \equiv 2k, 1, \ldots, k-1 \pmod{3k}$, let $X_n = \{(0,1),(1,1),(2,3),(3,3),\ldots,(k-1,k),(k,k)\}$.\\
Then let $X = \bigcup\limits_{n\in\mathbb{Z}} (X_n \times \{n\}) \subset \mathbb{Z}_{k+1}^2 \times \mathbb{Z}$.
Each $X_n$ is a hole, so we can tile $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z})\setminus X$ with strings. Also, $X$ is the disjoint union of sets of the form $Y$ from Lemma \ref{biglemma}: for $0 \leq i \leq \frac{k-1}{2}$, let $S_i = \{(2i,2i),(2i,2i+1),(2i+1,2i+1)\}$. Then $X \cap (S_i \times \mathbb{Z})$ is precisely the set $Y$ generated from $S_i$ in the proof of Lemma \ref{biglemma}. Hence $T$ tiles $X$.
Since $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z})\setminus X$ can be tiled with strings, we can partially tile $\mathbb{Z}^3$ with strings, leaving a copy of $X$ empty in each copy of $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}$. We can tile all of these copies of $X$ with $T$, so $T$ tiles $\mathbb{Z}^3$, completing the proof of Theorem \ref{kodd}.
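As a sanity check (again illustrative only), one can verify for small odd $k$ that each of the three sets $X_n$ really is a hole, i.e. has exactly one point in every row or in every column of $\mathbb{Z}_{k+1}^2$:

```python
def hole_ok(X, k):
    # a set with exactly one point in every row, or in every column, of
    # Z_{k+1}^2 is a hole: its complement can be tiled with strings
    rows = [sum(1 for (x, y) in X if y == r) for r in range(k + 1)]
    cols = [sum(1 for (x, y) in X if x == c) for c in range(k + 1)]
    return all(n == 1 for n in rows) or all(n == 1 for n in cols)

for k in (3, 5, 7):
    m = (k - 1) // 2
    X_a = {(i, i) for i in range(k + 1)}                                   # diagonal
    X_b = {p for i in range(m + 1) for p in [(2 * i, 2 * i), (2 * i, 2 * i + 1)]}
    X_c = {p for i in range(m + 1) for p in [(2 * i, 2 * i + 1), (2 * i + 1, 2 * i + 1)]}
    for X in (X_a, X_b, X_c):
        assert len(X) == k + 1 and hole_ok(X, k)
```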
\section{The general case}
We now move on to general $k$:
\begin{thm}\label{generalk}
Let $T$ be the tile $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$. Then $T$ tiles $\mathbb{Z}^4$.
\end{thm}
We will assume throughout that $T$ is fixed and $k \geq 3$.
For even $k$, the construction used to prove Theorem \ref{kodd} does not work, as all holes in $\mathbb{Z}_{k+1}^2$ have size $(k+1)^2-mk$ for some $m$, and this is always odd, so we cannot use Lemma \ref{biglemma}. The same is true if we replace 2 with a larger dimension, or if, as in \cite{gltan16}, we use strings in which every $(2k+1)$th point, rather than every $(k+1)$th point, is removed. We will therefore need a new idea.
Instead of using strings in $d-1$ out of $d$ directions, we could only use them in $d-2$ directions and fill the gaps with copies of $T$ in the 2 remaining directions. We will show that this approach works in the case $d = 2$, giving a tiling of $\mathbb{Z}^4$. The strategy will be to produce a partial tiling of each $\mathbb{Z}^3$ slice and use the construction from Lemma \ref{biglemma} to fill the gaps with tiles in the fourth direction.
We will again build partial tilings of $\mathbb{Z}^{2}$, and therefore of higher dimensions, from partial tilings of the discrete torus $\mathbb{Z}_{k+1}^{2}$. The following result is a special case of one proved in \cite{gltan16}:
\begin{prop}\label{onepoint}
If $x \in \mathbb{Z}_{k+1}^{2}$, then $\mathbb{Z}_{k+1}^{2}\setminus\{x\}$ can be tiled with strings.
\end{prop}
\begin{proof}
Let $x = (x_1,x_2)$, where the first coordinate is horizontal and the second vertical. Since a string is a row or column minus one point, we can place a string $(\{n\} \times \mathbb{Z}_{k+1})\setminus\{(n,x_2)\}$ in each column, leaving only the row $\mathbb{Z}_{k+1} \times \{x_2\}$ empty. Placing the string $(\mathbb{Z}_{k+1} \times \{x_2\})\setminus \{x\}$ in this row completes the tiling of $\mathbb{Z}_{k+1}^{2}\setminus\{x\}$.
\end{proof}
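The proof translates directly into a construction. The following sketch (illustrative only) builds this tiling and checks that it is a partition of $\mathbb{Z}_{k+1}^{2}\setminus\{x\}$ into strings of $k$ points each:

```python
from itertools import product

def strings_tiling(k, x):
    # one vertical string per column (column n minus (n, x2)),
    # plus the horizontal string (row x2 minus x), as in the proof
    x1, x2 = x
    tiles = [{(n, y) for y in range(k + 1) if y != x2} for n in range(k + 1)]
    tiles.append({(a, x2) for a in range(k + 1) if a != x1})
    return tiles

for k in (3, 4, 5):
    for x in product(range(k + 1), repeat=2):
        tiles = strings_tiling(k, x)
        cells = [c for t in tiles for c in t]
        assert all(len(t) == k for t in tiles)                      # each string has k points
        assert len(cells) == len(set(cells)) == (k + 1) ** 2 - 1    # disjoint, full cover
        assert x not in set(cells)
```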
The sets $S$ of size 3 that we will use in Lemma \ref{biglemma} will have 2 points, say $x_1$ and $x_2$, in one $\mathbb{Z}_{k+1}^{2}$ layer and one point, say $x_3$, in another layer. Every layer will contain points from exactly one such set $S$. Let $Y$ be the set constructed from $S$ in the proof of Lemma \ref{biglemma}. In a given slice $\mathbb{Z}^3 \times \{n\}$, there are therefore two cases:
\begin{enumerate}
\item $Y \cap (S \times \{n\}) = \{x_1, x_3\} \times \{n\}$ or $\{x_2, x_3\} \times \{n\}$.
\item $Y \cap (S \times \{n\}) = \{x_1, x_2\} \times \{n\}$.
\end{enumerate}
In Case 1, each $\mathbb{Z}_{k+1}^{2}$ layer contains exactly one point of $Y$. $T$ then tiles the rest of the layer by Proposition \ref{onepoint}.
In Case 2, some of the layers contain two points of $Y$, and some of the layers contain no points. Holes of size 0 and 2 do not exist, so we will need copies of $T$ in the third direction to fill some gaps (where $Y$ consists of copies of $T$ in the fourth direction). The following lemma provides us with a way to do this:
\begin{lemma}\label{otherlemma}
Let $A \subset \mathbb{Z}^d$, $|A| = 3k$. Then there exists $B \subset A \times \mathbb{Z}$ such that $T$ tiles $B$, and
\[|B \cap (A \times \{n\})| =
\begin{cases}
k+1 & \text{\emph{if} } n \equiv 1, \ldots, k \pmod{2k}\\
k-1 & \text{\emph{if} } n \equiv k+1, \ldots, 2k \pmod{2k}
\end{cases}\]
\end{lemma}
\begin{proof}
Let $A = \{a_1, \ldots, a_{3k}\}$. Then:\\
For $i = 1, \ldots, k$, place a copy of $T$ beginning at $\{a_i\} \times \{n\}$ for every $n \equiv i \pmod{6k}$.\\
For $i = k+1, \ldots, 2k$, place a copy of $T$ beginning at $\{a_i\} \times \{n\}$ for every $n \equiv i+k \pmod{6k}$.\\
For $i = 2k+1, \ldots, 3k$, place a copy of $T$ beginning at $\{a_i\} \times \{n\}$ for every $n \equiv i+2k \pmod{6k}$.\\
We now observe that the union $B$ of these tiles has the required property.\\
For $n \equiv 1, \ldots, k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{2k+n}, \ldots, a_{3k}, a_1, \ldots, a_n\}$ (size $k+1$).\\
For $n \equiv k+1, \ldots, 2k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_1, \ldots, a_k\}\setminus\{a_{n-k}\}$ (size $k-1$).\\
For $n \equiv 2k+1, \ldots, 3k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{n-2k}, \ldots, a_{n-k}\}$ (size $k+1$).\\
For $n \equiv 3k+1, \ldots, 4k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{k+1}, \ldots, a_{2k}\}\setminus\{a_{n-2k}\}$ (size $k-1$).\\
For $n \equiv 4k+1, \ldots, 5k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{n-3k}, \ldots, a_{n-2k}\}$ (size $k+1$).\\
For $n \equiv 5k+1, \ldots, 6k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{2k+1}, \ldots, a_{3k}\}\setminus\{a_{n-3k}\}$ (size $k-1$).
\end{proof}
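Again purely as an illustration, the layer sizes claimed in the lemma can be verified mechanically for small $k$ by placing the copies of $T$ modulo $6k$:

```python
def tile_offsets(k):
    # cells of T relative to its start: k cells, a gap, k more cells
    return list(range(k)) + list(range(k + 1, 2 * k + 1))

def layers(k):
    # starting layer (mod 6k) of the copies of T in the column of a_i
    start = {}
    for i in range(1, k + 1):              start[i] = i
    for i in range(k + 1, 2 * k + 1):      start[i] = i + k
    for i in range(2 * k + 1, 3 * k + 1):  start[i] = i + 2 * k
    # record which a_i are present in each layer n (mod 6k)
    occ = {n: set() for n in range(6 * k)}
    for i, s in start.items():
        for o in tile_offsets(k):
            occ[(s + o) % (6 * k)].add(i)
    return occ

for k in (3, 4, 5):
    occ = layers(k)
    for n in range(6 * k):
        r = n % (2 * k)
        assert len(occ[n]) == (k + 1 if 1 <= r <= k else k - 1)
```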
The reasoning behind this lemma is that there exist sets $X \subset \mathbb{Z}_{k+1}^{2} \times \mathbb{Z}$ that are missing exactly $k+1$ points in every $\mathbb{Z}_{k+1}^{2}$ layer and can be tiled with strings. If we take $d = 2$ in Lemma \ref{otherlemma}, we would like to choose such a set $X$ and a set $A \subset \mathbb{Z}_{k+1}^{2}$ (abusing notation slightly, as $\mathbb{Z}_{k+1}^{2}$ is not actually a subset of $\mathbb{Z}^2$) such that the resulting $B$ in Lemma \ref{otherlemma} is disjoint from $X$. Then $(\mathbb{Z}_{k+1}^{2} \times \mathbb{Z})\setminus(B \cup X)$ contains either 2 or 0 points in each $\mathbb{Z}_{k+1}^{2}$ layer, which is what we wanted.
In order for this construction to work, we need the set $B \cap (A \times \{n\})$ to be a hole whenever it has size $k+1$, and to be a subset of a hole of size $k+1$ whenever it has size $k-1$, so that we actually can tile the required points with strings. By observing the forms of the sets $B \cap (A \times \{n\})$ in the proof of Lemma \ref{otherlemma}, we see that it is sufficient to choose the $a_n$ such that for all $n$, $\{a_n, \ldots, a_{n+k}\}$ is a hole. Here we regard the indices $n$ of the points $a_n$ of $A$ as integers mod $3k$, so $a_{3k+1} = a_1$ and so on. The following proposition says that we can do this.
\begin{prop}\label{anprop}
There exists a set $A = \{a_1, \ldots, a_{3k}\} \subset \mathbb{Z}_{k+1}^{2}$ such that for all $n$, $\{a_n, \ldots, a_{n+k}\}$ contains either one point in every row or one point in every column. Here the indices are regarded as integers \emph{mod} $3k$.
\end{prop}
\begin{proof}
For $n = 1, \ldots, k+1$, let $a_n = (n-1,n-1)$.\\
For $n = k+2, \ldots, 2k-1$, let $a_n = (n-k-2,n-k-1)$.\\
For $n = 2k, 2k+1, 2k+2$, let $a_n = (n-k-2,n-2k)$.\\
For $n = 2k+3, \ldots, 3k$, let $a_n = (n-2k-3,n-2k)$.\\
Note that all the $a_n$ are distinct. Let us regard the first coordinate as horizontal and the second as vertical.\\
Then, for $n = 1, \ldots, 2k$, $\{a_n, \ldots, a_{n+k}\}$ contains one point in every column.\\
For $n = 2k+1, \ldots, 3k$, $\{a_n, \ldots, a_{n+k}\}$ contains one point in every row.
\end{proof}
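The defining property of $A$ is easy to confirm computationally. The sketch below (illustrative only) rebuilds the points $a_n$ from the proof and checks every cyclic window of $k+1$ consecutive points:

```python
def build_A(k):
    # the points a_1, ..., a_{3k} exactly as defined in the proof
    a = {}
    for n in range(1, k + 2):              a[n] = (n - 1, n - 1)
    for n in range(k + 2, 2 * k):          a[n] = (n - k - 2, n - k - 1)
    for n in range(2 * k, 2 * k + 3):      a[n] = (n - k - 2, n - 2 * k)
    for n in range(2 * k + 3, 3 * k + 1):  a[n] = (n - 2 * k - 3, n - 2 * k)
    return [a[n] for n in range(1, 3 * k + 1)]

def one_per_line(pts, k):
    # one point in every column, or one point in every row, of Z_{k+1}^2
    xs = sorted(x for x, _ in pts)
    ys = sorted(y for _, y in pts)
    return xs == list(range(k + 1)) or ys == list(range(k + 1))

for k in (3, 4, 5, 6):
    A = build_A(k)
    assert len(set(A)) == 3 * k                        # the a_n are distinct
    for n in range(3 * k):
        window = [A[(n + j) % (3 * k)] for j in range(k + 1)]
        assert one_per_line(window, k)                 # every window is a hole
```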
From now on, $a_n$ refers to the points defined in the above proof. This proposition is the motivation for choosing the value $6k$ in the proof of Lemma \ref{otherlemma}.
We can now prove Theorem \ref{generalk}. We will need 3 distinct partial tilings of $\mathbb{Z}^3$ slices, corresponding to the 3 cases in the proof of Lemma \ref{biglemma} with $d = 3$. The repeating unit in each of these partial tilings will have size $(k+1) \times (k+1) \times 6k$, so we will work in $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k}$.
We start by choosing the sets $S$ as in Lemma \ref{biglemma}. These will be as follows:\\
For $n = 1, \ldots, k$, $S_n = \{(0,0,n),(a_n,n+k),(a_{k+1},n+k)\}$.\\
For $n = k+1, \ldots, 2k$, $S_n = \{(0,0,n+k),(a_n,n+2k),(a_{2k+1},n+2k)\}$.\\
For $n = 2k+1, \ldots, 3k$, $S_n = \{(0,0,n+2k),(a_n,n+3k),(a_1,n+3k)\}$.\\
We will refer to the points in $S_n$ as $x_{n,1},x_{n,2},x_{n,3}$ in the order given.
We can construct a set $Y_n \subset \mathbb{Z}^4$ from each $S_n$ using the construction in the proof of Lemma \ref{biglemma}. Let $Y = \bigcup_{1 \leq n \leq 3k} Y_n$. For a given $m \in \mathbb{Z}$, there are two possibilities for the structure of $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$:
\begin{enumerate}
\item $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ consists of pairs of the form $\{x_{n,1},x_{n,2}\}$ or $\{x_{n,1},x_{n,3}\}$. Then it contains exactly one point in each $\mathbb{Z}_{k+1}^2$ layer. We can therefore tile $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\}) \setminus Y$ entirely with strings, by Proposition \ref{onepoint}.
\item $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ consists of pairs of the form $\{x_{n,2},x_{n,3}\}$. Then it contains either 2 or 0 points in each $\mathbb{Z}_{k+1}^2$ layer.\\
If $A = \{a_1, \ldots, a_{3k}\}$, and $B$ is the set constructed from $A$ in the proof of Lemma \ref{otherlemma}, then, by the choice of the $S_n$, the sets $B$ and $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ are disjoint. Furthermore, if $C$ is the union of these two sets, then, for every $n$, $C \cap (\mathbb{Z}_{k+1}^2 \times \{n\} \times \{m\}) = \{a_r, \ldots, a_{r+k}\}$ for some $r$, and by Proposition \ref{anprop}, this contains either one point in every row or one point in every column and is therefore a hole.\\
Since each layer of $C$ is a hole, $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\}) \setminus (Y \cup B)$ can be tiled with strings. Since $T$ also tiles $B$, it tiles $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\}) \setminus Y$.
\end{enumerate}
$T$ tiles $Y$ by Lemma \ref{biglemma}. Hence $T$ tiles $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \mathbb{Z}$, and therefore also $\mathbb{Z}^4$, completing the proof of Theorem \ref{generalk}.
\section{The 4 mod 8 case}
To finish the proof of Theorem \ref{mainthm}, all that remains is to prove the following:
\begin{thm}\label{4mod8}
Let $T$ be the tile $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, with $k \equiv 4 \pmod 8$. Then $T$ tiles $\mathbb{Z}^3$.
\end{thm}
We will prove this by constructing partial tilings of each $\mathbb{Z}^2$ slice and filling in the gaps using the construction from the proof of Lemma \ref{biglemma}. We will define 3 subsets $X_1$, $X_2$, $X_3$ of $\mathbb{Z}^2$ and show that $T$ tiles each of them. However, two of these tilings will not make use of strings.
Let $S_1 = \{(x,x+n(k+1)) \; | \; n \in \mathbb{Z}, x \equiv 2n,2n+1,2n+2,2n+3 \pmod 8\}$.
Let $S_2 = \{(x,x+n(k+1)) \; | \; n\in \mathbb{Z}, x \equiv 2n+4,2n+5,2n+6,2n+7 \pmod 8\}$.
Let $S_3 = \{(x,x+n(k+1)+1) \; | \; n \in \mathbb{Z}, x \equiv 2n+2,2n+3,2n+4,2n+5 \pmod 8\}$.
Let $X_1 = \mathbb{Z}^2 \setminus (S_2 \cup S_3)$, $X_2 = \mathbb{Z}^2 \setminus (S_1 \cup S_3)$, $X_3 = \mathbb{Z}^2 \setminus (S_1 \cup S_2)$.
Let the first coordinate be horizontal and the second vertical.
$X_3$ is $\mathbb{Z}^2$ with every $(k+1)$th diagonal removed, so each row (or column) is $\mathbb{Z}$ with every $(k+1)$th point removed, that is, a string. Hence $T$ tiles $X_3$.
We will show that $X_1$ can be tiled with vertical copies of $T$ and $X_2$ can be tiled with horizontal copies of $T$.
Note that $(x,x+n(k+1))+(2,k+3) = (x+2,(x+2)+(n+1)(k+1))$. Also, if $x \equiv 2n+r \pmod 8$, then $x+2 \equiv 2(n+1)+r \pmod 8$. Hence, by the definitions of $S_2$ and $S_3$, we see that $X_1$ is invariant under translation by $(2,k+3)$. To show that vertical copies of $T$ tile $X_1$, it therefore suffices to show that $T$ tiles the columns $X_1 \cap (\{0\} \times \mathbb{Z})$ and $X_1 \cap (\{1\} \times \mathbb{Z})$.
But in fact, if $(0,y) \in S_2$, then $0 \equiv 2n+4$ or $2n+6 \pmod 8$, so $1 \equiv 2n+5$ or $2n+7 \pmod 8$, so also $(1,y+1) \in S_2$. The converse also holds, and the same is true for $S_3$. Thus we only need to check the case $x = 0$.
$(0,n(k+1)) \in S_2$ for $n \equiv 1,2,5,6 \pmod 8$, that is, $n \equiv 1,2 \pmod 4$.
$(0,n(k+1)+1) \in S_3$ for $n \equiv 2,3,6,7 \pmod 8$, that is, $n \equiv 2,3 \pmod 4$.
Therefore $(0,y) \notin X_1$ for $y \equiv k+1, 2(k+1), 2(k+1)+1, 3(k+1)+1 \pmod{4(k+1)}$, so copies of $T$ beginning at positions $1$ and $2(k+1)+2 \pmod{4(k+1)}$ tile $X_1 \cap (\{0\} \times \mathbb{Z})$.
Hence $T$ tiles $X_1$.
Note that $(x,x+n(k+1))+(k+2,1) = (x+k+2,(x+k+2)+(n-1)(k+1))$.\\
Since $k \equiv 4 \pmod 8$, if $x \equiv 2n+r \pmod 8$ then $x+k+2 \equiv 2(n-1)+r \pmod 8$. Hence $X_2$ is invariant under translation by $(k+2,1)$, by the definitions of $S_1$ and $S_3$. To show that horizontal copies of $T$ tile $X_2$, it is therefore enough to show that $T$ tiles the row $X_2 \cap (\mathbb{Z} \times \{0\})$.
We can express $S_1$ as $\{(y-n(k+1),y) \; | \; y \equiv -n,1-n,2-n,3-n \pmod 8\}$.
Similarly $S_3 = \{(y-n(k+1)-1,y) \; | \; y \equiv 3-n,4-n,5-n,6-n \pmod 8\}$.
Therefore $(-n(k+1),0) \in S_1$ for $n \equiv 0,1,2,3 \pmod 8$, and $(-n(k+1)-1,0) \in S_3$ for $n \equiv 3,4,5,6 \pmod 8$.
Hence $(x,0) \notin X_2$ for $x \equiv 0, 2(k+1)-1, 3(k+1)-1, 4(k+1)-1, 5(k+1)-1, 5(k+1), 6(k+1), \newline 7(k+1) \pmod{8(k+1)}$, so copies of $T$ beginning at positions $k+1, 3(k+1), 5(k+1)+1, 7(k+1)+1 \pmod{8(k+1)}$ tile $X_2 \cap (\mathbb{Z} \times \{0\})$.
Hence $T$ tiles $X_2$.
$S_1 \cup S_2 \cup S_3$ can be partitioned into sets of the form $S = \{x_1, x_2, x_3\}$, where $x_1 = (x,y) \in S_1$, $x_2 = (x+4,y+4) \in S_2$, $x_3 = (x+2,y+3) \in S_3$. Then $|S| = 3$, so we can construct the corresponding set $Y \subset \mathbb{Z}^3$ as in Lemma \ref{biglemma}. Now, given $n \in \mathbb{Z}$, $(S \times \{n\}) \setminus Y = \{x_i\} \times \{n\}$ for some $i \in \{1,2,3\}$, and since the construction in Lemma \ref{biglemma} depends only on $n$, this $i$ is the same for every such set $S$. Then $Y \cap (X_i \times \{n\}) = \emptyset$. If we do this for all such sets $S$, and let $U$ be the (disjoint) union of the resulting sets $Y$, then $U \cap (X_i \times \{n\}) = \emptyset$, and $\mathbb{Z}^2 \times \{n\} \subset U \cup (X_i \times \{n\})$. Recall that $T$ tiles each $Y$ and therefore $U$.
We can do this for every $n$, choosing a partial tiling $X_i$ for the corresponding $\mathbb{Z}^2$ layer. Together with $U$, these form a tiling of $\mathbb{Z}^3$ by $T$. This completes the proof of Theorem \ref{4mod8}, and therefore also the proof of Theorem \ref{mainthm}.
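As a final illustrative check (not part of the proof), the membership patterns and the explicit tilings of the column $x=0$ of $X_1$ and the row $y=0$ of $X_2$ can be verified mechanically for small $k \equiv 4 \pmod 8$:

```python
def in_S1(a, b, k):
    if (b - a) % (k + 1): return False
    n = (b - a) // (k + 1)
    return (a - 2 * n) % 8 in (0, 1, 2, 3)

def in_S2(a, b, k):
    if (b - a) % (k + 1): return False
    n = (b - a) // (k + 1)
    return (a - 2 * n) % 8 in (4, 5, 6, 7)

def in_S3(a, b, k):
    if (b - a - 1) % (k + 1): return False
    n = (b - a - 1) // (k + 1)
    return (a - 2 * n) % 8 in (2, 3, 4, 5)

def tiles_line(starts, removed, period, k):
    # copies of T beginning at `starts` (mod `period`) must exactly cover
    # one period of the line minus the removed points
    cells = set()
    for s in starts:
        for o in list(range(k)) + list(range(k + 1, 2 * k + 1)):
            c = (s + o) % period
            assert c not in cells and c not in removed
            cells.add(c)
    return len(cells) == period - len(removed)

for k in (4, 12):
    P = 4 * (k + 1)
    gone = {y % P for y in range(-8 * P, 8 * P) if in_S2(0, y, k) or in_S3(0, y, k)}
    assert gone == {k + 1, 2 * (k + 1), 2 * (k + 1) + 1, 3 * (k + 1) + 1}
    assert tiles_line([1, 2 * (k + 1) + 2], gone, P, k)
    Q = 8 * (k + 1)
    gone = {x % Q for x in range(-8 * Q, 8 * Q) if in_S1(x, 0, k) or in_S3(x, 0, k)}
    assert tiles_line([k + 1, 3 * (k + 1), 5 * (k + 1) + 1, 7 * (k + 1) + 1], gone, Q, k)
```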
\section{Open problems}
Theorem \ref{mainthm}, together with the result that a punctured interval $T = \underbrace{\texttt{XXXXX}}_{k}\!\texttt{.}\!\underbrace{\texttt{XXXXX}}_{k}$ does not tile $\mathbb{Z}^2$ for $k \geq 3$, determines the smallest dimension $d$ such that $T$ tiles $\mathbb{Z}^d$ in the cases $k$ odd and $k \equiv 4 \pmod 8$. However, for other values of $k$, it is still unknown whether the smallest such dimension $d$ is 3 or 4:
\begin{question}
Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, where $k \equiv 0, 2, 6 \pmod 8$, $k \geq 6$. Does $T$ tile $\mathbb{Z}^3$?
\end{question}
It is also natural to consider more general tiles. The next non-trivial case is that of an interval with a non-central point removed. One might wonder if there is an analogue of Theorem \ref{mainthm} for these tiles:
\begin{question}
Does there exist a number $d$ such that, for any tile $T$ consisting of an interval in $\mathbb{Z}$ with one point removed, $T$ tiles $\mathbb{Z}^d$?
\end{question}
For general one-dimensional tiles, Gruslys, Leader and Tan \cite{gltan16} conjectured that there is a bound on the dimension in terms of the size of the tile:
\begin{conj}[Gruslys, Leader, Tan \cite{gltan16}]
For any positive integer $t$, there exists a number $d$ such that any tile $T \subset \mathbb{Z}$ with $|T| \leq t$ tiles $\mathbb{Z}^d$.
\end{conj}
This conjecture remains unresolved. The authors of \cite{gltan16} showed that if $d$ always exists then $d \to \infty$ as $t \to \infty$, by exhibiting a tile of size $3d-1$ that does not tile $\mathbb{Z}^d$. This gives a simple lower bound on $d$; better bounds would be of great interest.
\section*{Acknowledgements}
I would like to thank Vytautas Gruslys for suggesting this problem and for many helpful discussions, and Imre Leader for his encouragement and useful comments.
\section{Introduction}
The classification of sensor data through Deep Learning approaches is a well-researched topic within the scientific community, yet still in its early phases. While some standardised workflows exist and a state of the art has been established and continues to evolve, each specific task usually retains parts for which specially handcrafted solutions are required on top of community knowledge.\\
This work emerged from Stabilo's UbiComp 2021 Challenge, which revolves around sensor data captured with the DigiPen, a pen equipped with multiple sensors but still reminiscent of a regular ball-point pen. The intent of the challenge is to find a suitable way to classify handwritten terms produced by writers with unknown writing styles. Naturally, Deep Learning appears to be a suitable approach to a classification problem like this.\\
To accomplish the task, a large data set of different writers and corresponding labels is given. The problem itself can generally be divided into two larger sub-problems: one is the segmentation of the terms into individual symbols, the other is the classification of said symbols and, building on that, of whole terms.\\
\section{Related Work}
The problem presents itself as a combination of handwritten character recognition and time series data analysis. The methodology for this work required the application of some well-researched concepts, so a multitude of related articles was explored, especially in the context of neural networks.\\
Handwritten character recognition, albeit still a challenging research topic, has become fairly popular in the recent advent of powerful neural network architectures, mainly due to its practical relevance. Traditionally, handwritten characters are processed and recognized through (gray scale) images. Starting more than 20 years ago with the work of \citep{lecun1995learning}, neural networks for digits, characters and other symbols started to evolve rapidly. Today, for different sets of characters, digits and other symbols, there are highly specialised networks available \citep{chatterjee2019bengali}, along with large open data sets \citep{lecun-mnisthandwrittendigit-2010} as well as meta-research over these topics \citep{5277565}. Although the problem posed in this work does not directly provide image data, the concepts employed in the aforementioned research can simplify analogous steps and sometimes also be translated seamlessly.\\
Sensor data analysis, as a topic, is possibly one of the widest in the spectrum of pattern recognition. Sensors are comparatively cheap, usually lightweight in terms of their footprint, and are ubiquitous today. To solve the problem formulated in this work, a data set containing non-optical sensor data over time with variable intervals is given, which, in turn, moves the problem into the domain of time series data analysis. Similar to handwritten character recognition, this topic, in conjunction with neural networks, has received considerable scientific attention over recent years. There is a multitude of practical applications for time series data analysis, e.g. for natural phenomena \citep{agrawal2012application} \citep{JAIN2007585}, seasonal variations \citep{ZHANG2005501}, industrial processing \citep{hsu2021multiple} and overarching sensor data analysis in general \citep{8437249}. Some amount of practical knowledge gathered from different approaches in entirely different application domains can be translated into useful solutions in our work, largely due to the similarity of time series sensor data as a whole.\\
The focus of this work is the interaction and combination of the two previously introduced fields. Historically, handwritten characters would be recognized by classifying rasterised images of them. In recent years, however, new options for classifying human interaction have emerged, enabled through the continuous evolution of microchips. One of these options are commercially available motion sensing devices, and it has been shown that these may be used to track handwriting in live environments \citep{6473522}. Another option are writing tools equipped with different types of sensors. Such pens may have a multitude of sensors for different tasks and sometimes run on battery and transmit data by radio or with a wired connection, but otherwise, from the outside, closely resemble a pen as it is usually known. One of the probable goals of equipping a pen with sensors is the automated recognition of what the user is writing, and this is the goal of this work. Many similar works can be found, using either a subset of the sensors available to us \citep{6020787} or additional sensory modalities \citep{10.1145/3173574.3173705}. Related work has shown that handwritten digit recognition through sensor based data may generally be possible. This work aims to provide additional scientific insight to these fields by working with a higher number of sensors as well as non-calibrated input data and by solving the underlying challenge task in the process.\\
Previously, Stabilo already released a similar challenge for UbiComp 2020. In contrast to the one discussed in this work, only written letters were to be classified, and these were already segmented, so comparatively little focus on data cleanup and preprocessing in general was required. We also worked on that challenge, but did not release a public paper on it. However, some other research teams released their approaches in various publications, gathering results by using networks similar to ours \citep{DBLP:journals/corr/abs-2008-01078} and finding improvements through domain knowledge extensions \citep{9257740}. They report limited success given the hardware and applicability of the data, but show that there is a possibility to classify the data given some additional constraints. We found similar ideas and pursued the integration of topical knowledge gained from related work and our own work into this research.
\section{Methodology}
Generally, applying a Supervised Deep Learning approach to classify time series data is a straightforward workflow, since popular procedures are very well documented and have been proven applicable in many different domains. This work is no exception to that rule. However, we realized that the posed problem also requires specially handcrafted solutions for some steps along the pipeline, and we show that, as is customary for many data analysis problems, the data cleanup and preprocessing require the most care and time. This chapter will outline the general workflow and dive into the more specialised approaches that were taken to solve the problem given the available input. Some details, such as the ranges of sensors, are not explained in this paper but rather taken as a given; these details as well as other useful information can be accessed directly through the official site of the challenge (\href{https://stabilodigital.com/ubicomp-2021-challenge/}{https://stabilodigital.com/ubicomp-2021-challenge/}).
\subsection*{Input data and objectives}
The goal is to classify any unknown time series of sensor data. The classes are given by the input characters which include all single-digit Arabic numerals as well as the most common operators, totalling to 15 possible classes. The data was captured with five different sensors, with four of them providing three-dimensional output, resulting in a total of 13 separate sensor tracks per sample as well as two additional tracks for indexing and time interval. The label is a single string built from the given characters with variable length per input data stream. The data is also separated by persons. The general shapes of the input data-label pairs are visualized in figure \ref{input_data}.
\begin{figure}[ht]
\caption{The raw input data shape. Each person is separated and contains a varying amount of data-label pairs. The length of the data and labels is also not fixed.}
\centering
\includegraphics[width=\linewidth]{input_data}
\label{input_data}
\end{figure}
Given their shape and the formulated problem, the first issue to solve is assigning each distinct symbol in the label to a region in the time series data to be able to assign a single class to a specific region and classify said region.
\subsection*{Label splitting and preprocessing}
Without the previously outlined task, which we refer to as label splitting, the problem would be ill-defined: there is an exceedingly large number of possible permutations of the 15 characters into strings of lengths between 10 and 20, and each permutation would constitute its own class in a Deep Learning problem. There is neither enough data to justify this approach, nor would it be feasible for any network to learn that many classes even if there were.\\
Since no additional information on the location of these regions is given, we have to deduce an approach by which they can be separated. We present a rule-based algorithm to perform label splitting on labelled data and a Machine Learning approach to identify symbol boundaries on unlabelled data. The training input for the latter stems from the rule-based algorithm, which is based on domain knowledge and is applied to the given labelled data.\\
Through experiments we have found that one particular sensor stands out in providing information about the boundary between two separate labels in the sensor data: the force sensor, which measures the force applied to the tip of the pen and, by extension, tells us when the tip of the pen was in contact with the paper. This information alone is not sufficient, however, as simulations have shown that some characters may be written by removing the pen from the paper to draw a new line or curve where at least either the start or the end point is disconnected from any point written before picking up the pen. The most notable example is the division (:) sign: writers were instructed to draw it as two separated dots, which is the German notation. There are other examples, like the plus (+) sign or some numerals like 4, 5 or 7, but, other than with division, there is technically no requirement to pick up the pen to write these; any such observation is simply of a statistical nature. In order to avoid manually making up inaccurate rules about which characters require the pen to be removed from the paper during the writing process, we sought an algorithm to accurately report this information for each writer separately, to then be able to perform the label splitting task with very high certainty of it being correct.\\
The workflow to achieve this is based on a rule-based algorithm which analyses, modifies and reads the input data in certain ways to detect the writing style of each person separately. The essence of the algorithm is described in the diagram in figure \ref{label_splitting}. Smaller steps are omitted and most of the thresholds and rules can be fine-tuned through parameters in the code.
\begin{figure}[ht]
\caption{The label splitting workflow. Flow diagram for the rule-based label splitting algorithm. Used to accurately segment characters in labelled data.}
\centering
\includegraphics[width=\linewidth]{label_splitting}
\label{label_splitting}
\end{figure}
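To make the core of the workflow concrete, the following simplified sketch shows the pen-down stroke detection that the label splitting builds on. The force threshold and the merge gap are illustrative placeholder values, not the tuned parameters of our implementation:

```python
def detect_strokes(force, threshold=0.1, max_gap=5):
    """Segment a force-sensor track into pen-down strokes, then merge
    strokes separated by short pen-up gaps (multi-stroke symbols such
    as the division sign are written without a long pause)."""
    strokes, start = [], None
    for i, f in enumerate(force):
        if f > threshold and start is None:
            start = i                       # pen touches the paper
        elif f <= threshold and start is not None:
            strokes.append((start, i))      # pen leaves the paper
            start = None
    if start is not None:
        strokes.append((start, len(force)))
    merged = strokes[:1]
    for s, e in strokes[1:]:
        if s - merged[-1][1] < max_gap:     # short pen-up: same symbol
            merged[-1] = (merged[-1][0], e)
        else:                               # long pen-up: new symbol
            merged.append((s, e))
    return merged
```

Two strokes separated by only a couple of samples are merged into one candidate symbol, while a long gap starts a new one; in the actual pipeline, the resulting candidate boundaries are then reconciled with the known label length.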
At this point, a clear distinction of the boundaries between separate characters is possible for most of the given labelled data. As such, the resulting data-label pairs are, with extremely high confidence, correct and contain one specific label per time series.\\
The remaining preprocessing pipeline is very straightforward and similar to the one described in \citep{bulling2014tutorial}, with normalization, resampling and, depending on the use-case, windowing of the data. Data is normalized column-wise to values between 0 and 1. Resampling is important, since the sensor values are recorded with varying frequency. The resample interval is chosen to be around 10 ms to capture small movements well while avoiding too much interpolation. Depending on the use-case of the data, a sliding window algorithm is also applied. The first use is simply as an input for the final classifier, which can then take unlabelled time series data as an input and output a label. For this approach, sliding windows are taken with an overlap of about 25\%. To capture the process of writing a single symbol, a window size of around 16 samples is preferred, given the previous sample rate. The second use for the data is as an input for a different classifier, with the aim of providing a way to perform label splitting on unknown sensor data, for which the rule-based approach does not work since the number of labels has to be known. This use case does not require windowing, as the amount of available samples is plentiful and the intent is not to classify regions but single samples.
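A minimal numpy sketch of this pipeline follows. The 10 ms grid, the window size of 16 and the 25\% overlap come from the text; everything else, including linear interpolation as the concrete resampling method, is an assumption for illustration:

```python
import numpy as np

def preprocess(t_ms, samples, step_ms=10, win=16, overlap=0.25):
    """t_ms: (N,) timestamps in ms (non-uniform); samples: (N, C) raw tracks."""
    # column-wise min-max normalisation to [0, 1]
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    norm = (samples - lo) / np.where(hi > lo, hi - lo, 1.0)
    # resample onto a uniform 10 ms grid (here: linear interpolation)
    grid = np.arange(t_ms[0], t_ms[-1] + 1, step_ms)
    res = np.stack([np.interp(grid, t_ms, norm[:, c])
                    for c in range(norm.shape[1])], axis=1)
    # sliding windows with roughly 25% overlap
    stride = max(1, int(win * (1 - overlap)))
    return np.stack([res[i:i + win]
                     for i in range(0, len(res) - win + 1, stride)])
```

For a recording with 13 sensor tracks this yields an array of shape (num\_windows, 16, 13), ready to be fed to a classifier.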
\subsection*{Boundary data feature extraction and classification}
For the terms where label splitting was successful, we could then assign to each sample in such a series a value indicating whether the person was in the process of writing a character. The idea is to use this information to classify unknown samples as active or inactive, referring to the process of writing as being active. With a Deep Learning network already refined from tests for the final task, that is, classifying the characters themselves, we tried using that network for this task as well. This failed, however, mostly due to the network being extremely sensitive to the value of the force sensor. This outcome was not unlikely, since the data and labels were a direct result of the rule-based approach, which used the force as a means of providing the labels we now use. As an alternative to a neural network we tried using Random Forest with various parameters, which produced results with similar issues. Concluding the tests, we went one step back and worked on feature engineering instead of using raw features, since we found that we would definitely need the force to produce meaningful results but could not find a way to scale or weigh it properly as a feature. The resulting idea was to use our, at this point already refined, Deep Learning network not as a classifier but purely as a feature extractor. The network is trained on the premise of classifying the active areas again, but during testing, instead of using the output of the final Dense layer, the output of the final LSTM layer is taken, where each node is then treated as a feature of the input sample. The full designs of the network architectures are given in figure \ref{architectures} and are explained in more depth in chapter \ref{results}.\\
Given the engineered features for each sample, the Random Forest model is trained again, this time providing more accurate output. This model, called the boundary classifier, is then able to classify unknown samples of data as active or inactive after extracting their features. Inaccuracies during training can be cleaned up by applying rules, based on domain knowledge, similar to those used during the label splitting algorithm. As a result, a series of activity is then considered as one written character and can therefore be classified, without being certain about the number of written symbols per term, or, in other words, the length of the label.
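One such domain-knowledge rule can be sketched as follows (our own minimal reading of the outlier cleanup: short runs of one state surrounded by the opposite state are absorbed; the run-length threshold is an assumed parameter):

```python
from itertools import groupby

def clean_activity(labels, min_run=4):
    """Flip short outlier runs: a run of identical activity labels shorter
    than `min_run` that sits between runs of the opposite label is absorbed
    into its neighbours, smoothing the active/inactive sequence."""
    runs = [(k, len(list(g))) for k, g in groupby(labels)]
    out = []
    for i, (k, n) in enumerate(runs):
        if 0 < i < len(runs) - 1 and n < min_run:
            k = runs[i - 1][0]  # absorb the short run into the surrounding state
        out.extend([k] * n)
    return out
```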
\subsection*{Segment and final term classification}
To finally classify the extracted regions, a model is trained on the properly segmented labelled data obtained earlier. Being as certain as possible about the regions and their labels for most of the data, the network can be trained properly. Given that the boundary classifier and subsequent preprocessing produce region output similar to the training input, it is then possible to use this classifier to also classify completely unknown time series sensor data. Since every person's writing style is different, the challenge permits five labelled terms of an unknown writer to be used to update the model before classifying unlabelled data from that person. The boundary data for the unknown writer is calculated through the boundary data classifier, as five labelled terms are not enough to apply the rule-based label splitting, and the resulting terms are then used to update the model for this writer specifically.
\section{Results and Evaluation}\label{results}
In this chapter we outline the results for the training and testing of the boundary feature extractor, boundary classifier and character classifier, as well as a simulation of the challenge. All networks are trained using data from all but one person; the remaining person is picked at random, used for the simulation, and therefore treated as an unknown writer. The trained networks require one-hot encoded class labels as input, so the original classes are mapped to values. This mapping and the preprocessing settings are listed in figure \ref{preprocessing}.
\begin{figure}[ht]
\caption{Left: Preprocessing settings for training and testing data. Right: Real character to one-hot value mapping for reference.}
\centering
\includegraphics[width=\linewidth]{preprocessing_vertical}
\label{preprocessing}
\end{figure}
The boundary feature extractor and character classifier models are trained with similar parameters, albeit with slightly different architectures. Refer to figure \ref{architectures} for the exact designs. The architectures themselves are further explained in their respective sections. For the training process, the batch size is set to 2048 for the boundary feature extractor and 128 for the character classifier. Training is stopped if the validation loss does not decrease for more than five epochs, and the weights of the epoch with the lowest validation loss are kept. The optimizer used is Adam with a learning rate of 0.001. Weights are regularized with L2 at a rate of 0.01. For the character classifier the convolution kernel size is set to [4, 1] in order to capture column-wise dependencies but ignore row-wise dependencies. Correspondingly, the Max Pooling kernel is set to [2, 1] to cut the number of samples in half. For the boundary feature extractor the kernel size is always [1, 1] since the input is a single sample only. Finally, before connecting the last LSTM layer to the Dense output layer, a Dropout layer of 50\% is added. The Dense and Dropout layers are removed during testing of the boundary feature extraction.
\subsection*{Boundary classifier}
The boundary data classifier is trained without windowed data. Instead, individual samples and their labels (active or inactive) are fed into the network. The data set is distributed in the following splits: \textbf{Total number of samples:} 4,500,977, \textbf{Train split:} 60\% (2,700,585), \textbf{Validation split and test split:} 20\% each (900,196 each). The exact network architecture is shown in figure \ref{architecture_boundary_extractor}.
\begin{figure}[ht]
\caption{These diagrams show the network architectures with input shapes and number of layers for the neural networks used in this work.}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{architecture_boundary_extractor}
\caption{Network architecture for the boundary feature extractor. The kernel size for the convolution layers is set to [1, 1]. To extract features using this network, the last two layers are cut off during testing. The network learns to classify if a given sample is active (writing) or inactive (not writing).}
\label{architecture_boundary_extractor}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{architecture_character_classifier}
\caption{Network architecture for the character classifier. The kernel size for the convolution layers and Max Pooling layer is set to [4, 1] and [2, 1] respectively. The network learns to classify written characters from input windows. It is trained using data extracted through the label splitting algorithm.}
\label{architecture_character_classifier}
\end{subfigure}
\label{architectures}
\end{figure}
The model loss stabilizes relatively quickly and the F1 score on the test set averages slightly above 91\%. To extract features from boundary data, it is fed as test data into the network. The outputs of the final LSTM layer are taken as the extracted features and used as input for the Random Forest model. As a result of the network architecture there are 128 features per sample (previously 13). For training, the extracted features are used with their original boundary labels. The number of estimators is set to 50 and only 25\% of the extracted features are randomly chosen for training. The numbers of active and inactive samples are about equal. The resulting boundary classifier model evaluates with an F1 score similar to that of the boundary feature extractor. However, the combination of both networks has proven to yield smoother predictions: the number of outliers is far smaller than when using either method alone. This in turn makes the next step, detecting and removing the remaining outliers in order to generate windows and finally classify them, easier and cleaner.
\subsection*{Character classifier}
To train the character classifier, windows with labels are required. Each segment acquired through label splitting usually yields between 0 and 2 windows. Given the previously stated preprocessing values, a segment needs to contain at least 16 samples to produce a window and, given the overlap, at least 28 to produce another, and so on. In rarer cases, segments are too short to produce any window and are discarded, which provides yet another opportunity to clean the input data. The data set is distributed in the following splits: \textbf{Total number of windows:} 235,124, \textbf{Train split:} 60\% (141,074), \textbf{Validation split and test split:} 20\% each (47,025 each). During training, the validation loss usually stopped decreasing after around 15-20 epochs for the character classifier. The exact network architecture is shown in figure \ref{architecture_character_classifier}. A graph depicting the model loss is shown in figure \ref{model_loss}.\\
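The window yield per segment follows directly from the window size of 16 and the 25\% overlap, which gives a step of 12 samples. A small helper (names of our choosing) makes the count explicit:

```python
def windows_per_segment(n_samples, size=16, step=12):
    """How many sliding windows a segment yields: the first window needs
    `size` = 16 samples, each further window needs another `step` = 12
    samples (25% overlap of a 16-sample window)."""
    if n_samples < size:
        return 0
    return 1 + (n_samples - size) // step
```

For example, a 15-sample segment yields no window and is discarded, while 16 samples yield one window and 28 samples yield two.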
Using the previously split test set, the F1 score of the model is evaluated. During five-fold cross-validation the F1 score ranges from 60\%-65\%. A confusion matrix for the test set is shown in figure \ref{confusion_matrix}. Refer to figure \ref{preprocessing} for the class to value mapping.
\begin{figure}[ht]
\caption{These graphs show the performance of training and testing for the character classifier with the architecture shown in \ref{architecture_character_classifier}. Specifically the model loss and confusion matrix.}
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{model_loss}
\caption{Model loss during training of character classifier. Both the training and validation loss decrease over time until the validation loss stabilizes after around 20 epochs when training is stopped to avoid overfitting.}
\label{model_loss}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.31\textwidth}
\includegraphics[width=\textwidth]{confusion_matrix}
\caption{Confusion matrix using a separate test set on the trained model. The diagonal is clearly visible. For classes 0, 1, 12, 13 and 14 fewer samples were available and tested, explaining the lower counts in the matrix.}
\label{confusion_matrix}
\end{subfigure}
\end{figure}
\subsection*{Challenge simulation}
Testing with completely unknown writers requires a few differences in preprocessing after boundary classification. For the challenge, five adaptation samples are given for any unknown writer. These are regular labelled terms. We use these to adapt the character classifier with the intent for it to learn the writing style of the new person. Both the adaptation and testing sets undergo the same preprocessing steps as the training set, in particular using the same parameters. Their boundary features are extracted and classified using the previously trained models. It is impossible to use the rule-based label splitting algorithm to determine the style of a writer from five terms. Therefore, these terms are also segmented using the boundary classifier. However, since the labels are known, some rules can be applied. For both sets the boundary data is further cleaned. Outliers are generally removed by identifying singular samples or small groups of samples in between samples of the opposite group. Longer active regions are then extended by manually appending active or inactive labels to them, based on the distance to other close active areas, to avoid accidental segment merging. This extension of active regions is performed so as not to miss truly active areas that were correctly recognized by the classifier but are too short to produce a window for character classification later on. As expected, we saw an increase in classified characters through this extension, but also an increase in the score. The adaptation set is treated further because the number of characters in the label is known and we can use this information to our advantage. After segmenting the data by the boundary information it is not guaranteed that the number of segments equals the number of actual characters. Knowing the number of characters, we can therefore merge segments to decrease the segment count to fit the number of characters in the label.
This merging is based on the distance between active areas. Finally, windows from the adaptation and testing data can be generated. The adaptation windows, along with their true labels, are then fed into the character classification model to update it and adapt it to the new writer. The testing data is then evaluated on the updated model. The challenge itself is evaluated by the Levenshtein Distance metric, which measures the edit distance between two input strings. With the randomly picked test person, a total of 90 windows for adaptation are created from the five adaptation terms and the character classifier is updated on these. Notably, there are no samples for some of the available labels, so the model does not learn the writing style of the person for all characters. A total of 236 terms is then evaluated on the character classifier with a resulting Levenshtein Distance of approximately 8, concluding the experiment.
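Two pieces of this evaluation step can be sketched in Python: the gap-based merge of active regions down to the known label length, and the Levenshtein metric itself. Both are our own minimal readings of the description above; the function names and the "always close the smallest gap" tie-breaking are illustrative assumptions.

```python
def merge_to_label_length(segments, n_chars):
    """Merge active regions (sorted (start, end) pairs) until their count
    matches the known number of characters, always closing the smallest gap
    between adjacent regions first."""
    segs = list(segments)
    while len(segs) > n_chars:
        gaps = [segs[i + 1][0] - segs[i][1] for i in range(len(segs) - 1)]
        i = gaps.index(min(gaps))
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
    return segs

def levenshtein(a, b):
    """Edit distance: the minimum number of insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```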
\section{Conclusion}
In this work, state-of-the-art neural network designs are used to solve the common problem of classifying written characters. The data is captured by a regularly shaped pen that is equipped with various sensors: the Stabilo DigiPen. Specifically, the final network was designed to classify single-digit Arabic numerals and some of the most common mathematical operators. In addition to regular classification, the task presented another challenge since the given sensor data streams are labelled but do not inherently provide boundaries between separate labels in the stream. The segmentation of characters in this stream on labelled data through our rule-based algorithm is fairly successful in identifying the writing style of a person and extracting correctly labelled segments through those rules. The yield averaged 70\% in this part of the workflow. The model trained on this data is able to classify characters independently of the writer with an F1 score higher than 60\% over 15 classes. Because the segmentation of characters through the rule-based algorithm cannot be performed on unlabelled data, another classifier is trained on the previously extracted data with the intent to classify the boundaries between characters. The resulting model classifies boundaries with an F1 score higher than 90\% and returns only few outliers, which are cleaned easily. The segmentation of unlabelled data is performed with the boundary classifier and the segments are then predicted by the character classifier. During the challenge evaluation, an adaptation set is given for each writer so the character classifier can be updated using these terms beforehand. A simulation resulted in a Levenshtein Distance of approximately 8 on a randomly picked test set.\\
Although the given data is fairly free of other problems, the segmentation of characters is very challenging. Every person has a slightly different writing style, and without any type of calibration or other additional information about the writers it is up to an algorithm to detect that style. We designed a straightforward rule-based algorithm which acts mainly on domain knowledge. Given the fairly high scores of both trained classifiers, we believe that we at least partially succeeded with that algorithm. For an unknown writer, the networks and some additional cleanup through topical knowledge and assumptions have to suffice to classify their written terms. Given the resulting Levenshtein Distance of our test, it is quite clear that further improvements are required to make this a feasible approach in a real-world scenario, but our results show that the training works fairly well and may be used as a starting point for proper classification.\\
Given that the largest issue in this work is the aforementioned property of the data-label pairs, we would like to suggest a simple improvement for the data capture: calibration. Letting each person write each symbol at least once, or better multiple times, as a calibration for the pen would be required only once but would provide valuable knowledge for the classification task. Knowing the writing style of a person makes segmentation much easier and more accurate. As we have shown with the label splitting algorithm, segmentation is highly accurate if the writing style is known, and, as we have also shown, classification can provide more accurate predictions as a result. We believe that this improvement would greatly enhance the results of our approach, but, of course, other improvements like refining the networks or the preprocessing algorithms may also lead to much better results without the requirement of additional data.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\setcounter{equation}{0}
It is known \cite{Tha92} that the free Dirac Hamiltonian $H_m$ acting in the Hilbert space
$\mathcal H:=\mathsf{L}^{\:\!\!2}(\mathbb R^3;\mathbb C^4)$ is unitarily equivalent to the operator $h(P)\oplus-h(P)$, where
$P:=-i\nabla$ and $\mathbb R^3\ni\xi\mapsto h(\xi):=(\xi^2+m^2)^{1/2}$. For this reason, the set
$\{\pm m\}=h\big[(\nabla h)^{-1}(\{0\})\big]$ of critical values of $h$ plays an important
role in spectral analysis and scattering theory for Dirac operators. For instance, one cannot
prove at $\pm m$ the usual limiting absorption principle for operators $H_m+V$, even with $V$ a
regular perturbation of $H_m$, by using standard commutator methods. Both the statements and
the proofs have to be modified (see \eg \cite{BG09,IM99}).
In this paper, we provide a new account on the spectral analysis of Dirac operators at the
critical values by discussing the behaviour at $\pm m$ of the spectral shift function associated
to sign-definite perturbations of Dirac operators with non-constant magnetic fields. Our work is
closely related to \cite{Rai09} where G. D. Raikov treats a similar issue in the case of magnetic
Pauli operators. It can also be considered as a complement of \cite{RT07}, where general properties
of the spectrum of Dirac operators with variable magnetic fields of constant direction and
matrix perturbations are determined. Other related results on the spectrum of $3$-dimensional
magnetic Dirac operators can be found in
\cite{BG87,BS99,BMR93,BR99,EL99,GM01,GM93,Hac93,HNW89,Ivr98,MR03,Rob99,SU09,Tha91}.
Let us describe the content of this paper. We consider a relativistic spin-$\frac12$ particle
evolving in $\mathbb R^3$ in presence of a variable magnetic field of constant direction. By virtue of
the Maxwell equations, we may assume with no loss of generality that the magnetic field has the
form
$$
\vec B(x_1,x_2,x_3)=\big(0,0,b(x_1,x_2)\big).
$$
The system is described in $\mathcal H$ by the Dirac operator
$$
H_0:=\alpha_1\Pi_1+\alpha_2\Pi_2+\alpha_3P_3+\beta m,
$$
where $\beta\equiv\alpha_0,\alpha_1,\alpha_2,\alpha_3$ are the usual Dirac-Pauli matrices,
$m>0$ is the mass of the particle and $\Pi_j:=-i\partial_j-a_j$ are the generators of the
magnetic translations with a vector potential
$$
\vec a(x_1,x_2,x_3)=\big(a_1(x_1,x_2),a_2(x_1,x_2),0\big)
$$
that satisfies $b=\partial_1a_2-\partial_2a_1$. Since $a_3 =0$, we write $P_3=-i\partial_3$
instead of $\Pi_3$. We assume that the function $b:\mathbb R^2\to\mathbb R$ is continuous (see Section
\ref{Unperturbed} for details), so that $H_0$, defined on $C^\infty_0(\mathbb R^3;\mathbb C^4)$, can be
extended uniquely to a selfadjoint operator in $\mathcal H$ with domain $\mathcal D(H_0)$.
Then we consider a bounded positive multiplication operator
$V\in C\big(\mathbb R^3;\mathscr B_{\sf h}(\mathbb C^4)\big)$, where $\mathscr B_{\sf h}(\mathbb C^4)$ is the set of $4\times4$
hermitian matrices, and define the perturbed Hamiltonian $H_\pm:=H_0\pm V$. Since $V$ is
bounded and symmetric, the operator $H_\pm$ is selfadjoint in $\mathcal H$ and has domain
$\mathcal D(H_\pm)=\mathcal D(H_0)$. We also assume that $|V(x)|$ decays more rapidly than $|x|^{-3}$
as $|x|\to\infty$ and that
\begin{equation}\label{durdur}
(H_\pm-z)^{-3}-(H_0-z)^{-3}\in S_1(\mathcal H)
\quad\hbox{for each}\quad z\in\mathbb R\setminus\{\sigma(H_0)\cup\sigma(H_\pm)\},
\end{equation}
where $S_1(\mathcal H)$ denotes the set of trace class operators in $\mathcal H$.
Under these assumptions, there exists a unique function
$\xi(\,\cdot\,;H_\pm,H_0)\in\mathsf{L}^{\:\!\!1}\big(\mathbb R;(1+|\lambda|)^{-4}\mathrm{d}\lambda\big)$ such that the
Lifshits-Krein trace formula
\begin{equation}\label{eq_LK}
\mathop{\mathsf{Tr}}\nolimits\big[f(H_\pm)-f(H_0)\big]
=\int_\mathbb R\mathrm{d}\lambda\,f'(\lambda)\;\!\xi(\lambda;H_\pm,H_0)
\end{equation}
holds for each $f\in C^\infty_0(\mathbb R)$ (see \cite[Sec.~8.11]{Yaf92}). The function
$\xi(\,\cdot\,;H_\pm,H_0)$ is called the spectral shift function for the pair $(H_\pm,H_0)$.
It vanishes identically on $\mathbb R\setminus\{\sigma(H_0)\cup\sigma(H_\pm)\}$, and can be related
to the number of eigenvalues of $H_\pm$ in $(-m,m)$ (see Remark
\ref{int_eigen}). Moreover, for almost every $\lambda\in\sigma_{\rm ac}(H_0)$ the spectral
shift function is related to the scattering matrix $S(\lambda;H_\pm,H_0)$ for the pair
$(H_\pm,H_0)$ by the Birman-Krein formula
$$
\det S(\lambda;H_\pm,H_0)=\mathop{\mathrm{e}}\nolimits^{-2\pi i\xi(\lambda;H_\pm,H_0)}.
$$
After identification of $\xi(\,\cdot\,;H_\pm,H_0)$ with some representative of
its equivalence class, our results are the following. In Proposition \ref{prop_cont}, we
show that there exists a constant $\zeta>0$ defined in terms of $b$ (\cf Proposition
\ref{zeta}) such that $\xi(\,\cdot\,;H_\pm,H_0)$ is bounded on each compact subset of
$(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}$ and is continuous on
$(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\big(\{\pm m\}\cup\sigma_{\rm p}(H_\pm)\big)$.
In Theorem \ref{thm<}, we determine the asymptotic behaviour of
$\xi(\lambda;H_\pm,H_0)$ as $\lambda\to\pm m$, $|\lambda|<m$, and in Theorem \ref{thm_ext},
we determine the asymptotic behaviour of $\xi(\lambda;H_\pm,H_0)$ as
$\lambda\to\pm m$, $|\lambda|>m$. In both cases, one has
$\xi(\lambda;H_\pm,H_0)\to\pm\infty$ as $\lambda\to\mp m$. The divergence of
$\xi(\lambda;H_\pm,H_0)$ near $\lambda=\pm m$ scales as the number of eigenvalues near $0$ of
certain Berezin-Toeplitz type operators. When $V$ admits a power-like or exponential decay at
infinity, or when it has a compact support, we give the first term of the asymptotic expansion of
$\xi(\lambda;H_\pm,H_0)$ near $\lambda=\pm m$ (see Proposition \ref{in_gap} and Corollary
\ref{outside_gap}). In these cases, we show that the limits
$$
\lim_{\varepsilon\searrow0}
\frac{\xi\big(m+\varepsilon;H_-,H_0\big)}{\xi\big(m-\varepsilon;H_-,H_0\big)}
\qquad\hbox{and}\qquad
\lim_{\varepsilon\searrow0}
\frac{\xi\big(-m-\varepsilon;H_+,H_0\big)}{\xi\big(-m+\varepsilon;H_+,H_0\big)}
$$
exist and are equal to positive constants depending on the decay rate of $V$ at infinity
(see Corollary \ref{Levinson} for a precise statement). This can be interpreted as a
generalised version of Levinson's Theorem for the pair $(H_\pm,H_0)$ (see \cite{Kla90,Ma06}
for usual versions of Levinson's Theorem for Dirac operators). The relation between
the behaviour of the spectral shift function near $\lambda=+m$ and near $\lambda=-m$
is explained in Remark \ref{C_sym} by using the charge conjugation symmetry.
These results are similar to the results of \cite{Rai09} (where Pauli operators with
non-constant magnetic fields are considered) and \cite{FR04} (where Schr\"odinger operators
with constant magnetic field are considered). Part of the interest of this work relies on
the fact that we were able to exhibit a non-trivial class of matrix potentials $V$ satisfying \eqref{durdur} even though $H_0$ is not a bounded perturbation of the free Dirac operator.
We refer to Remark \ref{a_class} and Section \ref{cond_trace} for a discussion of this issue.
Let us fix the notations that are used in the paper. The norm and scalar product of
$\mathcal H\equiv\mathsf{L}^{\:\!\!2}(\mathbb R^3;\mathbb C^4)$ are denoted by $\|\;\!\cdot\;\!\|$ and
$\langle\;\!\cdot\;\!,\;\!\cdot\;\!\rangle$. The symbol $\otimes$ stands for the closed
tensor product of Hilbert spaces and $S_p(\mathcal H)$, $p\in[1,\infty]$, denotes the $p$-th
Schatten-von Neumann class of operators in $\mathcal H$ ($S_\infty(\mathcal H)$ is the set of compact operators
in $\mathcal H$). We denote by $\|\;\!\cdot\;\!\|_p$ the corresponding operator norm. The variable
$x\in\mathbb R^3$ is often written as $x\equiv(x_\perp,x_3)$, with $x_\perp\in\mathbb R^2$ and $x_3\in\mathbb R$.
The symbol $Q_j$, $j=1,2,3$, denotes the multiplication operator by $x_j$ in $\mathcal H$,
$Q:=(Q_1,Q_2,Q_3)$, and $Q_\perp:=(Q_1,Q_2)$. Sometimes, when the context is unambiguous, we
consider the operators $Q_j$ and $P_j$ as operators in $\mathsf{L}^{\:\!\!2}(\mathbb R)$ instead of $\mathcal H$ without
changing the notations. Given a selfadjoint operator $A$ in a Hilbert space $\mathcal G$, the symbol
$E^A(\;\!\cdot\;\!)$ stands for the spectral measure of $A$.
\section{Unperturbed operator}\label{Unperturbed}
\setcounter{equation}{0}
Throughout this paper we assume that the component $b:\mathbb R^2\to\mathbb R$ of the magnetic field
$\vec B\equiv(0,0,b)$ belongs to the class of ``admissible" magnetic fields defined in
\cite[Sec.~2.1]{Rai09}. Namely, we assume that $b=b_0+\widetilde b$, where $b_0>0$ is a
constant while the function $\widetilde b:\mathbb R^2\to\mathbb R$ is such that the Poisson equation
$$
\Delta\widetilde\varphi=\widetilde b
$$
admits a solution $\widetilde\varphi:\mathbb R^2\to\mathbb R$, continuous and
bounded together with its derivatives of order up to two. We also define
$\varphi_0(x_\perp):=\frac14b_0|x_\perp|^2$ for each $x_\perp\in\mathbb R^2$ and set
$\varphi:=\varphi_0+\widetilde\varphi$. Then we obtain a vector potential
$\vec a\equiv(a_1,a_2,a_3)\in C^1(\mathbb R^2;\mathbb R^3)$ for the magnetic field $\vec B$ by putting
$$
a_1:=\partial_1\varphi,\qquad a_2:=\partial_2\varphi\qquad\hbox{and}\qquad a_3:=0.
$$
(Changing the gauge if necessary, we shall always assume that the vector potential
$\vec a$ is of this form.) We refer to \cite{Rai09} for further properties and examples of
admissible magnetic fields.
Since the vector potential $\vec a$ belongs to
$\mathsf{L}^{\:\!\!\infty}_{\rm loc}(\mathbb R^2;\mathbb R^3)$, the magnetic Dirac operator
$$
H_0=\alpha_1\Pi_1+\alpha_2\Pi_2+\alpha_3P_3+\beta m
$$
satisfies all the properties of \cite[Sec. 2.1]{RT07}. The operator $H_0$ is
essentially selfadjoint on $C^\infty_0(\mathbb R^3;\mathbb C^4)$, with domain
$\mathcal D(H_0)\subset\mathcal H^{1/2}_{\rm loc}(\mathbb R^3;\mathbb C^4)$, the spectrum of $H_0$ satisfies
\begin{equation}\label{sigma_0}
\sigma(H_0)=\sigma_{\rm ac}(H_0)=(-\infty,-m]\cup[m,\infty),
\end{equation}
and we have the identity
\begin{equation}\label{tortuga}
H_0^2=
\(\begin{smallmatrix}
H_\perp^-\otimes1+1\otimes(P_3^2+m^2) & 0 & 0 & 0\\
0 & H_\perp^+\otimes1+1\otimes(P_3^2+m^2) & 0 & 0\\
0 & 0 & H_\perp^-\otimes1+1\otimes(P_3^2+m^2) & 0\\
0 & 0 & 0 & H_\perp^+\otimes1+1\otimes(P_3^2+m^2)
\end{smallmatrix}\)
\end{equation}
with respect to the tensorial decomposition $\mathsf{L}^{\:\!\!2}(\mathbb R^2)\otimes\mathsf{L}^{\:\!\!2}(\mathbb R)$ of $\mathsf{L}^{\:\!\!2}(\mathbb R^3)$.
Here the operators $H_\perp^\pm$ are the components of the Pauli operator
$H_\perp:=H_\perp^-\oplus H_\perp^+$ in $\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^2)$ associated with the vector
potential $(a_1,a_2)$.
We recall from \cite[Sec.~2.2]{Rai09} that $\dim\ker(H_\perp^-)=\infty$, that
$\dim\ker(H_\perp^+)=0$ and that we have the following result.
\begin{Proposition}\label{zeta}
Let $b$ be an admissible magnetic field with $b_0>0$. Then $0=\inf\sigma(H_\perp)$
is an isolated eigenvalue of infinite multiplicity. More precisely, we have
$$
\dim\ker(H_\perp)=\infty\qquad\hbox{and}\qquad
(0,\zeta)\subset\mathbb R\setminus\sigma(H_\perp),
$$
where
$$
\zeta:=2b_0\mathop{\mathrm{e}}\nolimits^{-2\,{\rm osc}(\widetilde\varphi)}\qquad\hbox{and}\qquad
{\rm osc}(\widetilde\varphi):=\sup_{x_\perp\in\mathbb R^2}\widetilde\varphi(x_\perp)
-\inf_{x_\perp\in\mathbb R^2}\widetilde\varphi(x_\perp).
$$
\end{Proposition}
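For instance, in the particular case of a constant magnetic field, that is $\widetilde b=0$, one has ${\rm osc}(\widetilde\varphi)=0$ and thus $\zeta=2b_0$. This agrees with the well-known fact that the spectrum of the Pauli operator is then given by the Landau levels,
$$
\sigma(H_\perp)=\{2nb_0 : n\in\mathbb N\},
$$
so that the gap above the zero eigenvalue is exactly $2b_0$.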
Finally, since $(0,{\zeta})\subset\mathbb R\setminus\sigma(H_\perp)$, we know from
\cite[Thm. 1.2.(d)]{RT07} that the limits
\begin{equation}\label{general_LAP}
\lim_{\varepsilon\searrow0}
\langle Q_3\rangle^{-\nu_3/2}(H_0-\lambda\mp i\varepsilon)^{-1}
\langle Q_3\rangle^{-\nu_3/2},\qquad \nu_3>1,
\end{equation}
exist for each $\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}$ (note that
we use the usual notation $\langle\;\!\cdot\;\!\rangle:=\sqrt{1+|\;\!\cdot\;\!|^2}$).
\section{Perturbed operator}\label{Perturbed}
\setcounter{equation}{0}
We consider now the perturbed operators $H_\pm=H_0\pm V$, where $V\equiv\{V_{jk}\}$ is the
multiplication operator associated to the following matrix-valued function $V$.
\begin{Assumption}\label{assumption1}
The function $V\in C\big(\mathbb R^3;\mathscr B_{\sf h}(\mathbb C^4)\big)$ satisfies for each
$x\equiv(x_\perp,x_3)\in\mathbb R^3$ and each $j,k\in\{1,\ldots,4\}$
\begin{equation}\label{first_decay}
V(x)\ge0\qquad\hbox{and}\qquad
|V_{jk}(x)|\le{\rm Const.}\;\!\langle x_\perp\rangle^{-\nu_\perp}\langle x_3\rangle^{-\nu_3}
\quad\hbox{for some }\nu_\perp>2\hbox{ and }\nu_3>1.
\end{equation}
\end{Assumption}
The potential $V$ in Assumption \ref{assumption1} is short-range along $x_3$. So
we know from \cite[Thm. 1.2]{RT07} that
\begin{enumerate}
\item[(i)] $\sigma_{\rm ess}(H_\pm)=\sigma_{\rm ess}(H_0)=(-\infty,-m]\cup[m,\infty)$.
\item[(ii)] The point spectrum of $H_\pm$ in
$
\big(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta}\big)\setminus\{\pm m\}
$
is composed of eigenvalues of finite multiplicity and with no accumulation point.
\item[(iii)] $H_\pm$ has no singular continuous spectrum in
$
\big(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta}\big)
$.
In particular, $H_0$ and $H_\pm$ have a common spectral gap in $(-m,m)$.
\end{enumerate}
Using the formula
$$
(A+\lambda)^{-\gamma}
=\Gamma(\gamma)^{-1}\int_0^\infty\mathrm{d} t\,t^{\gamma-1}\mathop{\mathrm{e}}\nolimits^{-t(A+\lambda)},
\qquad A:\mathcal D(A)\to\mathcal H,~A\ge0,~\lambda,\gamma>0,
$$
the diamagnetic inequality \cite[Thm. 2.3]{AHS78}, and the compactness criterion
\cite[Thm. 5.7.1]{Dav07}, we find that
$$
\textstyle
|V_{jk}|^{1/2}\big(\sum_{\ell\le3}\Pi_\ell^*\Pi_\ell+m^2\big)^{-1/4}
\in S_\infty[\mathsf{L}^{\:\!\!2}(\mathbb R^3)].
$$
Since $b$ is bounded this implies that
$$
|H_0|^{-1/2}V|H_0|^{-1/2}
\le\textstyle|H_0|^{-1/2}\big(\sum_{j,k\le4}|V_{jk}|\big)|H_0|^{-1/2}
\in S_\infty(\mathcal H).
$$
So $|H_0|^{-1/2}V|H_0|^{-1/2}$ also belongs to $S_\infty(\mathcal H)$, since $S_\infty(\mathcal H)$
is a hereditary $C^*$-subalgebra of $\mathscr B(\mathcal H)$ \cite[Cor. 3.2.3]{Mur90}. One has in
particular
\begin{equation}\label{necessary1}
V^{1/2}(|H_0|+1)^{-1/2}\in S_\infty(\mathcal H).
\end{equation}
The standard criterion \cite[Thm. XI.20]{RS79} shows that
$$
|V_{jk}|^{1/2}\big(-\Delta+m^2\big)^{-\gamma}\in S_q[\mathsf{L}^{\:\!\!2}(\mathbb R^3)]
\quad\hbox{if }q\in[2,\infty)\hbox{ and }\gamma q>3/2.
$$
This together with arguments as above
implies that
\begin{equation}\label{necessary2}
V^{1/2}|H_0|^{-\gamma}\in S_q(\mathcal H)
\quad\hbox{if }q\ge2\hbox{ is even and }\gamma q>3.
\end{equation}
So we have in particular that
\begin{equation}\label{necessary3}
V^{1/2}E^{H_0}(B)\in S_2(\mathcal H)\quad\hbox{for any bounded Borel set }B\subset\mathbb R.
\end{equation}
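One way to see this is via the factorisation (assuming, as we may, that $B\subset[-r,r]$
for some $r>0$)
$$
V^{1/2}E^{H_0}(B)=\big(V^{1/2}|H_0|^{-2}\big)\big(|H_0|^2E^{H_0}(B)\big),
$$
where the first factor belongs to $S_2(\mathcal H)$ by \eqref{necessary2} (with $q=2$ and
$\gamma=2$), and the second factor is bounded with norm at most $r^2$ by the functional
calculus.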
In the sequel we shall need a more restrictive assumption on $V$. For this, we recall
that there exist numbers $z\in\mathbb R\setminus\big(\sigma(H_0)\cup\sigma(H_\pm)\big)$ since
$H_0$ and $H_\pm$ have a common spectral gap in $(-m,m)$. We also set $R_0(z):=(H_0-z)^{-1}$
and $R_\pm(z):=(H_\pm-z)^{-1}$ for $z\in\mathbb C\setminus\sigma(H_0)$ and
$z\in\mathbb C\setminus\sigma(H_\pm)$, respectively.
\begin{Assumption}\label{assumption2}
The function $V\in C\big(\mathbb R^3;\mathscr B_{\sf h}(\mathbb C^4)\big)$ satisfies for each $x\in\mathbb R^3$ and
each $j,k\in\{1,\ldots,4\}$
\begin{equation}\label{a_decay}
V(x)\ge0\qquad\hbox{and}\qquad
|V_{jk}(x)|\le{\rm Const.}\;\!\langle x\rangle^{-\nu}\quad\hbox{for some constant }\nu>3.
\end{equation}
Furthermore, $V$ is chosen such that
\begin{equation}\label{necessary4}
R_\pm^3(z)-R_0^3(z)\in S_1(\mathcal H)
\quad\hbox{for each }z\in\mathbb R\setminus\big(\sigma(H_0)\cup\sigma(H_\pm)\big).
\end{equation}
\end{Assumption}
Note that \eqref{a_decay} implies \eqref{first_decay} if one takes $\nu_3\in(1,\nu-2)$
and $\nu_\perp:=\nu-\nu_3$. Note also that the choice of function
$\lambda\mapsto(\lambda-z)^{-3}$ in the trace class condition \eqref{necessary4} has
been made for convenience. Many other choices would also guarantee the existence of
the spectral shift function for the pair $(H_\pm,H_0)$ (see \eg \cite[Sec.~8.11]{Yaf92}).
\begin{Remark}\label{a_class}
Since the operator $H_0$ is not a bounded perturbation of the free Dirac operator, we
cannot apply the results of \cite[Sec.~4]{Yaf05} to prove the inclusion \eqref{necessary4}
under the condition \eqref{a_decay}. In general, one has to impose additional assumptions
on $V$ to get the result. For instance, if $V$ verifies
\eqref{a_decay}, and
\begin{enumerate}
\item[(i)] $[V,\alpha_1]=[V,\alpha_2]=0$,
\item[(ii)] for each $x\in\mathbb R^3$ and each $j,k,\ell\in\{1,\ldots,4\}$, one has
$
|(\partial_\ell V_{jk})(x)|\le{\rm Const.}\;\!\langle x\rangle^{-\varsigma}
$
for some $\varsigma>3$,
\item[(iii)] for each $j,k,\ell\in\{1,\ldots,4\}$, one has
$(\partial_\ell\partial_3V_{jk})\in\mathsf{L}^{\:\!\!\infty}(\mathbb R^3)$,
\end{enumerate}
then \eqref{necessary4} is satisfied. Furthermore, if $V$ is scalar, then the same
is true without assuming (iii) (and (i) is trivially satisfied). The proof of these
statements can be found in the appendix. Here, we only note that a matrix
${\sf V}\in\mathscr B_{\sf h}(\mathbb C^4)$ satisfying (i) is necessarily of the form
$$
{\sf V}=\left(\begin{smallmatrix}
{\sf v}_1 & 0 & {\sf v}_3 & 0\\
0 & {\sf v}_2 & 0 & \overline{\,{\sf v}_3}\\
\overline{\,{\sf v}_3} & 0 & {\sf v}_2 & 0\\
0 & {\sf v}_3 & 0 & {\sf v}_1
\end{smallmatrix}\right),
$$
with ${\sf v}_1,{\sf v}_2\in\mathbb R$ and ${\sf v}_3\in\mathbb C$.
\end{Remark}
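A simple concrete example satisfying Assumption \ref{assumption2} is the scalar potential
$$
V(x):=\langle x\rangle^{-4}\;\!1_{\mathbb C^4},\qquad x\in\mathbb R^3,
$$
which verifies \eqref{a_decay} with $\nu=4$ and condition (ii) of Remark \ref{a_class}
with $\varsigma=5$; the inclusion \eqref{necessary4} then follows from the scalar case of
the remark.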
\section{Spectral shift function}\label{SSF}
\setcounter{equation}{0}
In this section we recall some results due to A. Pushnitski on the representation of the
spectral shift function for a pair of not semibounded selfadjoint operators.
Given a Lebesgue measurable set $B\subset\mathbb R$, we set
$\mu(B):=\frac1\pi\int_B\frac{\mathrm{d} t}{1+t^2}$, and note that $\mu(\mathbb R)=1$. Furthermore, if $T=T^*$
is a compact operator in a separable Hilbert space $\mathcal G$, we set
$$
n_\pm(s;T):=\mathop{\mathrm{rank}}\nolimits E^{\pm T}\big((s,\infty)\big)\quad\hbox{for }s>0.
$$
Then we have the following estimates.
\begin{Lemma}[Lemma 2.1 of \cite{Pus97}]\label{aday}
Let $T_1=T_1^*\in S_\infty(\mathcal H)$ and $T_2=T_2^*\in S_1(\mathcal H)$. Then one has for each $s_1,s_2>0$
$$
\int_\mathbb R\mathrm{d}\mu(t)\,n_\pm(s_1+s_2;T_1+tT_2)
\leq n_{\pm}(s_1;T_1)+\frac1{\pi s_2}\;\!\|T_2\|_1.
$$
\end{Lemma}
For $z\in\mathbb C\setminus\sigma(H_0)$, we define the usual weighted resolvent
$$
T(z):=V^{1/2}(H_0-z)^{-1}V^{1/2}
$$
and the corresponding real and imaginary parts
$$
A(z):=\mathop{\mathsf{Re}}\nolimits T(z)\qquad\hbox{and}\qquad B(z):=\mathop{\mathsf{Im}}\nolimits T(z).
$$
The next lemma is then a direct consequence of the inclusions
\eqref{necessary1}-\eqref{necessary3} and \cite[Prop.~4.4.(i)]{Pus01}.
\begin{Lemma}\label{2limits}
Let $V$ satisfy Assumption \ref{assumption1}. Then, for almost every $\lambda\in\mathbb R$, the
limits $A(\lambda+i0):=\lim_{\varepsilon\searrow0}A(\lambda+i\varepsilon)$ and
$B(\lambda+i0):=\lim_{\varepsilon\searrow0}B(\lambda+i\varepsilon)\ge0$ exist in
$S_4(\mathcal H)$.
\end{Lemma}
The next theorem follows from the inclusions \eqref{necessary1}, \eqref{necessary3},
\eqref{necessary4}, from the equations (1.9), (8.1), (8.2) of \cite{Pus01}, and from
Theorem 8.1 of \cite{Pus01}.
\begin{Theorem}\label{identify}
Let $V$ satisfy Assumption \ref{assumption2}. Then, for almost every $\lambda\in\mathbb R$,
$\xi(\lambda;H_\pm,H_0)$ exists and is given by
\begin{equation}\label{lhs}
\xi(\lambda;H_\pm,H_0)
=\pm\int_\mathbb R\mathrm{d}\mu(t)\,n_\mp\big(1;A(\lambda+i0)+tB(\lambda+i0)\big).
\end{equation}
\end{Theorem}
We know from \eqref{general_LAP} that $A(\lambda+i0)$ and $B(\lambda+i0)$ exist in $\mathscr B(\mathcal H)$
for each
$
\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}
$.
In Propositions \ref{coffee}-\ref{cigarette} and Corollary \ref{inParis} below we show
that in fact $A(\lambda+i0)\in S_4(\mathcal H)$ and $B(\lambda+i0)\in S_1(\mathcal H)$ for each
$
\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}
$.
Hence, by Lemma \ref{aday}, the r.h.s. of \eqref{lhs} will turn out to be well-defined for
every
$
\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}
$.
In the next proposition we state some regularity properties of the function
$$
(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}\ni\lambda\mapsto
\widetilde\xi(\lambda;H_\pm,H_0)
:=\pm\int_\mathbb R\mathrm{d}\mu(t)\,n_\mp\big(1;A(\lambda+i0)+tB(\lambda+i0)\big).
$$
The proof (which relies on Propositions \ref{coffee}-\ref{cigarette}, Lemma \ref{rank2},
Corollary \ref{inParis} and the stability result \cite[Thm.~3.12]{GM00}) is similar to the
one of \cite[Sec. 4.2.1]{BPR04}.
\begin{Proposition}\label{prop_cont}
Let $V$ satisfy Assumption \ref{assumption1}. Then $\widetilde\xi(\,\cdot\,;H_\pm,H_0)$
is bounded on each compact subset of
$
(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}
$
and is continuous on
$
(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})
\setminus\big(\{\pm m\}\cup\sigma_{\rm p}(H_\pm)\big).
$
\end{Proposition}
In the sequel, we identify the functions $\widetilde\xi(\,\cdot\,;H_\pm,H_0)$ and
$\xi(\,\cdot\,;H_\pm,H_0)$ since they are equal for almost every $\lambda\in\mathbb R$ due to
Theorem \ref{identify} (see \cite{Saf01} for a study where the r.h.s. of \eqref{lhs} is
directly treated as a definition of $\xi(\lambda;H_\pm,H_0)$).
\begin{Remark}\label{int_eigen}
In the interval $(-m,m)$, $H_0$ has no spectrum and the spectrum of $H_\pm$ is purely
discrete. Thus the spectral shift function $\xi(\,\cdot\,;H_\pm,H_0)$ can be related
to the number of eigenvalues of $H_\pm$ as follows: for
$\lambda_1,\lambda_2\in(-m,m)\setminus\sigma(H_\pm)$ with $\lambda_1<\lambda_2$, we have
(see \cite[Thm.~9.1]{Pus01})
$$
\xi(\lambda_1;H_\pm,H_0)-\xi(\lambda_2;H_\pm,H_0)
=\mathop{\mathrm{rank}}\nolimits E^{H_\pm}\big([\lambda_1,\lambda_2)\big).
$$
\end{Remark}
\section{Decomposition of the weighted resolvent}\label{Sec_Dec}
\setcounter{equation}{0}
In this section we decompose the weighted resolvent
$$
T(z)=V^{1/2}(H_0-z)^{-1}V^{1/2},\quad z\in\mathbb C\setminus\sigma(H_0),
$$
into a sum $T(z)=T_{\sf div}(z)+T_{\sf bound}(z)$, where $T_{\sf div}(z)$ (respectively
$T_{\sf bound}(z)$) corresponds to the diverging (respectively non-diverging) part of $T(z)$
as $z\to\pm m$. Then we estimate the behaviour, in suitable Schatten norms, of each term as
$z\to\pm m$. We refer to \cite[Sec.~4]{FR04} and \cite[Sec.~4.2]{Rai09} for similar
approaches in the case of the Schr\"odinger and Pauli operators.
Let $\mathsf a$ and $\mathsf a^*$ be the closures in $\mathsf{L}^{\:\!\!2}(\mathbb R^2)$ of the operators given by
$$
\mathsf a\varphi:=(\Pi_1-i\Pi_2)\varphi\qquad{\rm and}\qquad
\mathsf a^*\varphi:=(\Pi_1+i\Pi_2)\varphi,
$$
for $\varphi\in C^\infty_0(\mathbb R^2)$. Then one has (see \cite[Sec.~5.5.2]{Tha92} and
\cite[Sec.~5]{Rai99})
\begin{equation}\label{tortugo}
H_0=
\(\begin{smallmatrix}
m & 0 & 1\otimes P_3 & \mathsf a\otimes1\\
0 & m & \mathsf a^*\otimes1 & -1\otimes P_3\\
1\otimes P_3 & \mathsf a\otimes1 & -m & 0\\
\mathsf a^*\otimes1 & -1\otimes P_3 & 0 & -m
\end{smallmatrix}\),
\end{equation}
with
\begin{equation}\label{excel}
\ker(\mathsf a^*)=\ker(\mathsf a\a^*)=\ker(H_\perp^-)\subset\mathsf{L}^{\:\!\!2}(\mathbb R^2).
\end{equation}
Now, let
$$
\mathsf P:=\(\begin{smallmatrix}
P & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & P & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
$$
be the orthogonal projection onto the union of the eigenspaces of $H_0$ corresponding to
the values $\lambda=\pm m$. Since $P\equiv p\otimes1$ is the orthogonal projection
onto $\ker(H_\perp^-)\otimes\mathsf{L}^{\:\!\!2}(\mathbb R)$, the equations \eqref{tortugo} and \eqref{excel}
imply that $H_0$ and $\mathsf P$ commute:
\begin{equation}\label{acommutator}
H_0^{-1}\mathsf P=\mathsf P H_0^{-1}.
\end{equation}
In fact, by using \eqref{tortuga} and \eqref{tortugo}, one gets for each
$z\in\mathbb C\setminus\sigma(H_0)$ the equalities
\begin{align*}
&(H_0-z)^{-1}\mathsf P\\
&=(H_0+z)\big(H_0^2-z^2\big)^{-1}\mathsf P\\
&=\big[p\otimes R(z^2-m^2)\big]
\(\begin{smallmatrix}
(z+m) & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & (z-m) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
+\big[p\otimes P_3R(z^2-m^2)\big]
\(\begin{smallmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\),
\end{align*}
where $R(z):=\big(P_3^2-z\big)^{-1}$, $z\in\mathbb C\setminus[0,\infty)$, is the resolvent of $P_3^2$
in $\mathsf{L}^{\:\!\!2}(\mathbb R)$. This allows us to decompose $T(z)$ as $T(z)=T_{\sf div}(z)+T_{\sf bound}(z)$,
with
\begin{align*}
T_{\sf div}(z)&:=V^{1/2}\big[p\otimes R(z^2-m^2)\big]
\(\begin{smallmatrix}
(z+m) & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & (z-m) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
V^{1/2},\\
T_{\sf bound}(z)&:=V^{1/2}\big[p\otimes P_3R(z^2-m^2)\big]
\(\begin{smallmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
V^{1/2}
+V^{1/2}(H_0-z)^{-1}\mathsf P^\perp V^{1/2}
\qquad(\mathsf P^\perp:=1-\mathsf P).
\end{align*}
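One readily checks, using $\mathsf P+\mathsf P^\perp=1$ and the expression for
$(H_0-z)^{-1}\mathsf P$ displayed above, that the two contributions indeed add up to the full
weighted resolvent:
$$
T_{\sf div}(z)+T_{\sf bound}(z)
=V^{1/2}(H_0-z)^{-1}\mathsf P V^{1/2}+V^{1/2}(H_0-z)^{-1}\mathsf P^\perp V^{1/2}
=T(z).
$$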
One may note that this decomposition of $T(z)$ differs slightly from the simpler
decomposition
$$
T(z)=V^{1/2}(H_0-z)^{-1}\mathsf P V^{1/2}+V^{1/2}(H_0-z)^{-1}\mathsf P^\perp V^{1/2},
$$
since the first term in $T_{\sf bound}(z)$ is associated with the projection $\mathsf P$ and not
with the projection $\mathsf P^\perp$. This choice is motivated by the wish to distinguish clearly
the contribution $T_{\sf div}(z)$, which diverges as $z\to\pm m$, from the contribution
$T_{\sf bound}(z)$, which stays bounded as $z\to\pm m$.
For $\lambda\in\mathbb R\setminus\{0\}$, we can define the boundary value $R(\lambda)$ of the
resolvent $R(z)$ as the operator with convolution kernel $r_\lambda(\,\cdot\,)$, where
$$
r_\lambda(x_3)
:=\begin{cases}
\frac{\mathop{\mathrm{e}}\nolimits^{-\sqrt{-\lambda}|x_3|}}{2\sqrt{-\lambda}} & \hbox{if }\lambda<0,\vspace{3pt}\\
\frac{i\mathop{\mathrm{e}}\nolimits^{i\sqrt{\lambda}|x_3|}}{2\sqrt{\lambda}} & \hbox{if }\lambda>0,
\end{cases}
$$
for each $x_3\in\mathbb R$. So, we can extend the definition of $T_{\sf div}(\,\cdot\,)$ to the
values $\lambda\in\mathbb R\setminus\{\pm m\}$:
$$
T_{\sf div}(\lambda):=V^{1/2}\big[p\otimes R(\lambda^2-m^2)\big]
\(\begin{smallmatrix}
(\lambda+m) & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & (\lambda-m) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
V^{1/2}.
$$
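As an elementary consistency check, $R(\lambda)$ is an inverse of $P_3^2-\lambda$ in the
sense of distributions: for $\lambda<0$ and $\kappa:=\sqrt{-\lambda}$ one computes
$$
r_\lambda''(x_3)=-\delta(x_3)+\kappa^2\;\!r_\lambda(x_3),
\qquad\hbox{so that}\qquad
\big(P_3^2-\lambda\big)r_\lambda=-r_\lambda''+\kappa^2\;\!r_\lambda=\delta,
$$
and similarly for $\lambda>0$, with the limit taken from the upper half-plane.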
In the following proposition, we show that the function $z\mapsto T_{\sf div}(z)\in S_1(\mathcal H)$
is continuous in $\mathbb C_+:=\{z\in\mathbb C\mid\mathop{\mathsf{Im}}\nolimits(z)\ge0\}$ outside the points $z=\pm m$, where
its trace norm may diverge as $|z\mp m|^{-1/2}$. The proof of the proposition relies on a technical result
that we now recall.
\begin{Lemma}[Lemma 2.4 of \cite{Rai09}]\label{lem_rai}
Let $U\in\mathsf{L}^{\:\!\!q}(\mathbb R^2)$, $q\in[1,\infty)$, and assume that $b$ is an admissible
magnetic field. Then $pUp\in S_q[\mathsf{L}^{\:\!\!2}(\mathbb R^2)]$, and
$$
\big\|pUp\big\|_{S_q[\mathsf{L}^{\:\!\!2}(\mathbb R^2)]}^q
\le\frac{b_0}{2\pi}\;\!\mathop{\mathrm{e}}\nolimits^{2{\rm osc(\widetilde\varphi)}}\|U\|_{\mathsf{L}^{\:\!\!q}(\mathbb R^2)}^q\,.
$$
\end{Lemma}
The symbol $y_+$ denotes the positive part of $y\in\mathbb R$.
\begin{Proposition}\label{coffee}
Let $V$ satisfy Assumption \ref{assumption1}. Then the operator-valued function
$$
\mathbb C_+\setminus\{\pm m\}\ni z\mapsto T_{\sf div}(z)\in S_1(\mathcal H)
$$
is well-defined and continuous. Moreover, we have for each
$\lambda\in\mathbb R\setminus\{\pm m\}$ the bound
$$
\|T_{\sf div}(\lambda)\|_1
\le{\rm Const.}\;\!\textstyle\Big(\big|\frac{\lambda+m}{\lambda-m}\big|^{1/2}
+\big|\frac{\lambda-m}{\lambda+m}\big|^{1/2}\Big)\big(1+(\lambda^2-m^2)_+^{1/4}\big).
$$
\end{Proposition}
\begin{proof}
We have for each $z\in\mathbb C\setminus\sigma(H_0)$ the identity
$$
T_{\sf div}(z)=M\big(G\otimes J_{z^2-m^2}\big)
\(\begin{smallmatrix}
(z+m) & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & (z-m) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)M,
$$
where
\begin{align}
M&:=V^{1/2}\langle Q_\perp\rangle^{\nu_\perp/2}\langle Q_3\rangle^{\nu_3/2},
\label{operatorM}\\
G&:=\langle Q_\perp\rangle^{-\nu_\perp/2}p\langle Q_\perp\rangle^{-\nu_\perp/2},
\label{operatorG}\\
J_z&:=\langle Q_3\rangle^{-\nu_3/2}R(z)\langle Q_3\rangle^{-\nu_3/2}.\nonumber
\end{align}
The operator $M$ is bounded due to Assumption \ref{assumption1}. So
$$
\|T_{\sf div}(z)\|_1\le{\rm Const.}\(|z+m|+|z-m|\)\|G\|_1\|J_{z^2-m^2}\|_1.
$$
But we know from Lemma \ref{lem_rai} that $\|G\|_1\le{\rm Const.}$, and from
\cite[Sec.~4.1]{BPR04} that the operator-valued function
$\mathbb C_+\setminus\{0\}\ni z\mapsto J_z$ is continuous in the trace norm and admits the bound
$$
\|J_\lambda\|_1\le{\rm Const.}\;\!\big(1+\lambda_+^{1/4}\big)|\lambda|^{-1/2},
\quad\lambda\in\mathbb R\setminus\{0\}.
$$
It follows that
$$
\|T_{\sf div}(\lambda)\|_1\le{\rm Const.}\;\!\textstyle
\Big(\big|\frac{\lambda+m}{\lambda-m}\big|^{1/2}
+\big|\frac{\lambda-m}{\lambda+m}\big|^{1/2}\Big)
\big(1+(\lambda^2-m^2)_+^{1/4}\big)
$$
for each $\lambda\in\mathbb R\setminus\{\pm m\}$, since
$
\textstyle\big(|\lambda+m|+|\lambda-m|\big)\big|\lambda^2-m^2\big|^{-1/2}
=\big|\frac{\lambda+m}{\lambda-m}\big|^{1/2}+\big|\frac{\lambda-m}{\lambda+m}\big|^{1/2}
$.
\end{proof}
In the following proposition, we show that the function
$z\mapsto T_{\sf bound}(z)\in S_4(\mathcal H)$ is continuous in
$
\mathbb C\setminus\big\{(-\infty,-\sqrt{m^2+\zeta}]\cup[\sqrt{m^2+\zeta},\infty)\big\}.
$
The symbols $H^\pm$ stand for the operators $H^\pm:=H_\perp^\pm\otimes1+1\otimes P_3^2$
acting in $\mathsf{L}^{\:\!\!2}(\mathbb R^3)$.
\begin{Proposition}\label{cigarette}
Let $V$ satisfy Assumption \ref{assumption1}. Then the operator-valued function
$$
\mathbb C\setminus\big\{(-\infty,-\sqrt{m^2+\zeta}]\cup[\sqrt{m^2+\zeta},\infty)\big\}
\ni z\mapsto T_{\sf bound}(z)\in S_4(\mathcal H)
$$
is well-defined and continuous. Moreover, we have for each
$
\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})
$
the bound
\begin{equation}\label{ChuchoValdes}
\textstyle\|T_{\sf bound}(\lambda)\|_4\le{\rm Const.}\(|\lambda|+\lambda^2\)
\Big(1+\frac{(\lambda^2-m^2+1)_+}{\zeta+m^2-\lambda^2}\Big)+{\rm Const.}
\end{equation}
\end{Proposition}
\begin{proof}
One has the identity
$$
(H_0-z)^{-1}=H_0^{-1}+z\big(1+zH_0^{-1}\big)\big(H_0^2-z^2\big)^{-1}
$$
for each $z\in\mathbb C\setminus\sigma(H_0)$. Thus the operator $T_{\sf bound}(z)$ can be
written as
\begin{align}
T_{\sf bound}(z)&=M\big(G\otimes S_z\big)
\(\begin{smallmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)M
+V^{1/2}H_0^{-1}\mathsf P^\perp V^{1/2}
+zV^{1/2}\big(1+zH_0^{-1}\big)\big(H_0^2-z^2\big)^{-1}
\mathsf P^\perp V^{1/2}\label{BeforeCommute}\\
&\equiv T_1(z)+T_2+T_3(z),\nonumber
\end{align}
with $M$ and $G$ given by \eqref{operatorM}-\eqref{operatorG}, and
$$
S_z:=\langle Q_3\rangle^{-\nu_3/2}P_3R(z^2-m^2)\langle Q_3\rangle^{-\nu_3/2}.
$$
The integral kernel of $S_z$ is
\begin{equation}\label{truffes}
\textstyle\frac i2\langle x_3\rangle^{-\nu_3/2}
\frac{(x_3-x_3')}{|x_3-x_3'|}\mathop{\mathrm{e}}\nolimits^{i\sqrt{z^2-m^2}|x_3-x_3'|}
\langle x_3'\rangle^{-\nu_3/2},
\end{equation}
with the branch of $\sqrt{z^2-m^2}$ chosen so that $\mathop{\mathsf{Im}}\nolimits\sqrt{z^2-m^2}>0$. So $S_z$
extends to an element of $S_2[\mathsf{L}^{\:\!\!2}(\mathbb R)]$ for each $z\in\mathbb C$, with $\|S_z\|_2\le{\rm Const}$.
Since $M$ is bounded and $\|G\|_1\le{\rm Const.}$, this implies that
\begin{equation}\label{firstterm}
\|T_1(z)\|_2
\le{\rm Const.}\;\!\|M\|^2\|G\|_1\|S_z\|_2
\le{\rm Const.}
\end{equation}
for each $z\in\mathbb C$. One also has
\begin{equation}\label{secondterm}
\|T_2\|_4\le{\rm Const.}
\end{equation}
due to \eqref{necessary2}. So, it only remains to bound the term $T_3(z)$.
Let
$
z\in\mathbb C\setminus\{(-\infty,-\sqrt{m^2+\zeta}]\cup[\sqrt{m^2+\zeta},\infty)\}
$
and $P^\perp:=1-P$. Then $\big(H^-+m^2-z^2\big)^{-1}P^\perp$ and
$\big(H^++m^2-z^2\big)^{-1}$ belong to $\mathscr B[\mathsf{L}^{\:\!\!2}(\mathbb R^3)]$, and we have
$$
\big(H^-+m^2-z^2\big)^{-1}P^\perp=P^\perp\big(H^-+m^2-z^2\big)^{-1}.
$$
Thus
\begin{align*}
\big(H_0^2-z^2\big)^{-1}\mathsf P^\perp V^{1/2}
&=\big(H_0^2-z^2\big)^{-1}
\(\begin{smallmatrix}
P^\perp & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & P^\perp & 0\\
0 & 0 & 0 & 1
\end{smallmatrix}\)
V^{1/2}\\
&=\(\begin{smallmatrix}
P^\perp(H^-+m^2-z^2)^{-1} & 0 & 0 & 0\\
0 & (H^++m^2-z^2)^{-1} & 0 & 0\\
0 & 0 & P^\perp(H^-+m^2-z^2)^{-1} & 0\\
0 & 0 & 0 & (H^++m^2-z^2)^{-1}
\end{smallmatrix}\)
V^{1/2},
\end{align*}
and
$$
\big\|\big(H_0^2-z^2\big)^{-1}\mathsf P^\perp V^{1/2}\big\|^2_2
\le2\;\!\|M\|^2\Big\{\big\|P^\perp\big(H^-+m^2-z^2\big)^{-1}M_2\big\|^2_2
+\big\|\big(H^++m^2-z^2\big)^{-1}M_2\big\|^2_2\Big\},
$$
where $M_2:=\langle Q_\perp\rangle^{-\nu_\perp/2}\langle Q_3\rangle^{-\nu_3/2}$.
But, we know from the proof of \cite[Prop. 4.4]{Rai09} that
$$
\big\|P^\perp(H^-+m^2-z^2)^{-1}M_2\big\|_2\le{\rm Const.}\;\!C(z)
\qquad\hbox{and}\qquad
\big\|(H^++m^2-z^2)^{-1}M_2\big\|_2\le{\rm Const.}\;\!C(z),
$$
where
$$
C(z):=\sup_{y\in[\zeta,\infty)}\frac{y+1}{|y+m^2-z^2|}\,.
$$
It follows that
\begin{equation}\label{thirdterm}
\|T_3(z)\|_2
\le{\rm Const.}\big\|zV^{1/2}\big(1+zH_0^{-1}\big)\big\|\,\|M\|\,C(z)
\le{\rm Const.}\(|z|+|z|^2\)C(z).
\end{equation}
The claim follows then by putting together \eqref{firstterm}, \eqref{secondterm}, and
\eqref{thirdterm}.
\end{proof}
In the next lemma we give some results on the imaginary part of the operator $S_z$
in $\mathsf{L}^{\:\!\!2}(\mathbb R)$ appearing in the proof of Proposition \ref{cigarette}
$$
S_z=\langle Q_3\rangle^{-\nu_3/2}P_3R(z^2-m^2)\langle Q_3\rangle^{-\nu_3/2},
\qquad z\in\mathbb C\setminus\sigma(H_0),~\nu_3>1.
$$
\begin{Lemma}\label{rank2}
\begin{enumerate}
\item[(a)] One has $\mathop{\mathsf{Im}}\nolimits S_\lambda=0$ for each $\lambda\in(-m,m)$.
\item[(b)] Let $p\ge1$ be an integer. Then one has for each $\lambda\in\mathbb R$ with
$|\lambda|>m$
$$
\|\mathop{\mathsf{Im}}\nolimits S_\lambda\|_p\le\textsc c_p\,,
$$
where $\textsc c_p$ is a constant independent of $\lambda$. Furthermore
$$
\lim_{\lambda\to\pm m,\,|\lambda|>m}\|\mathop{\mathsf{Im}}\nolimits S_\lambda\|_p=0.
$$
\end{enumerate}
\end{Lemma}
\begin{proof}
(a) This is a direct consequence of the spectral theorem.
(b) Let $\lambda\in\mathbb R$, $|\lambda|>m$. Then one shows by using \eqref{truffes} that
$\mathop{\mathsf{Im}}\nolimits S_\lambda$ is equal to the rank two operator
$$
\mathop{\mathsf{Im}}\nolimits S_\lambda
=\langle v_\lambda,\;\!\cdot\;\!\rangle\,u_\lambda
+\langle u_\lambda,\;\!\cdot\;\!\rangle\,v_\lambda,
$$
with
$$
\textstyle
u_\lambda(x_3):=\langle x_3\rangle^{-\nu_3/2}\sin\big(x_3\sqrt{\lambda^2-m^2}\big)
\qquad\hbox{and}\qquad
v_\lambda(x_3):=\textstyle
-\frac i2\langle x_3\rangle^{-\nu_3/2}\cos\big(x_3\sqrt{\lambda^2-m^2}\big).
$$
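Indeed, setting $a:=\sqrt{\lambda^2-m^2}$, a direct computation with the kernel
\eqref{truffes} shows that the kernel of
$\mathop{\mathsf{Im}}\nolimits S_\lambda=\frac1{2i}\big(S_\lambda-S_\lambda^*\big)$ is
$$
\textstyle\frac i2\langle x_3\rangle^{-\nu_3/2}\sin\big(a(x_3-x_3')\big)\langle x_3'\rangle^{-\nu_3/2}
=u_\lambda(x_3)\;\!\overline{v_\lambda(x_3')}+v_\lambda(x_3)\;\!\overline{u_\lambda(x_3')},
$$
the equality following from the addition formula for the sine.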
Since $\langle v_\lambda,u_\lambda\rangle=0$ (the integrand
$\langle x_3\rangle^{-\nu_3}\sin\big(x_3\sqrt{\lambda^2-m^2}\big)
\cos\big(x_3\sqrt{\lambda^2-m^2}\big)$ being odd), this implies that
$$
|\mathop{\mathsf{Im}}\nolimits S_\lambda|^p
=\|u_\lambda\|^p\;\!\|v_\lambda\|^{p-2}\langle v_\lambda,\;\!\cdot\;\!\rangle\,v_\lambda
+\|v_\lambda\|^p\;\!\|u_\lambda\|^{p-2}\langle u_\lambda,\;\!\cdot\;\!\rangle\,u_\lambda.
$$
Thus
$$
\|\mathop{\mathsf{Im}}\nolimits S_\lambda\|_p^p
=\mathop{\mathsf{Tr}}\nolimits\big(|\mathop{\mathsf{Im}}\nolimits S_\lambda|^p\big)
=2\;\!\|u_\lambda\|^p\,\|v_\lambda\|^p.
$$
This, together with the bound $\|v_\lambda\|\le{\rm Const.}$ and the equality
$$
\lim_{\lambda\to\pm m,\,|\lambda|>m}\|u_\lambda\|=0
$$
(which follows from Lebesgue's dominated convergence theorem), implies the claim.
\end{proof}
In the next corollary we combine some of the results of Propositions \ref{coffee},
\ref{cigarette} and Lemma \ref{rank2}.
\begin{Corollary}\label{inParis}
Let $V$ satisfy Assumption \ref{assumption1}. Then the identity
\begin{equation}\label{manjar}
T(\lambda+i0)=T_{\sf div}(\lambda)+T_{\sf bound}(\lambda)
\end{equation}
holds for each
$
\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})\setminus\{\pm m\}
$,
and the estimate
\begin{equation}\label{queso}
\big\|\mathop{\mathsf{Im}}\nolimits T_{\sf bound}(\lambda)\big\|_p\le{\rm Const.}\;\!\|\mathop{\mathsf{Im}}\nolimits S_\lambda\|_p
\end{equation}
holds for each integer $p\ge1$ and each
$
\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta}).
$
In particular, we have
\begin{equation}\label{limit_im}
\lim_{\lambda\to\pm m}\big\|\mathop{\mathsf{Im}}\nolimits T_{\sf bound}(\lambda)\big\|_p=0,
\end{equation}
due to Lemma \ref{rank2}.
\end{Corollary}
\begin{proof}
The first identity follows from Propositions \ref{coffee} and \ref{cigarette}. Let
$\lambda\in(-\sqrt{m^2+\zeta},\sqrt{m^2+\zeta})$. Using \eqref{BeforeCommute} and the
commutation rule \eqref{acommutator} one obtains that
$$
\mathop{\mathsf{Im}}\nolimits T_{\sf bound}(\lambda)=M\big(G\otimes\mathop{\mathsf{Im}}\nolimits S_\lambda\big)
\(\begin{smallmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)M,
$$
with $M$ and $G$ defined by \eqref{operatorM}-\eqref{operatorG}. Since
$M$ is bounded and $\|G\|_1\le{\rm Const.}$, this implies \eqref{queso}.
\end{proof}
\section{Proof of the main results}
\setcounter{equation}{0}
We begin this section by showing that the value of $\xi(\lambda;H_\pm,H_0)$ as $\lambda\to\pm m$
is bounded from below and from above by expressions involving only the term
$T_{\sf div}(\lambda)$ of the decomposition
$T(\lambda+i0)=T_{\sf div}(\lambda)+T_{\sf bound}(\lambda)$. Then we consider separately
the limits $\lambda\to\pm m$ with $|\lambda|<m$ and the limits $\lambda\to\pm m$ with
$|\lambda|>m$.
We start by recalling two standard properties of the counting functions $n_\pm$. Given two
compact operators $T_1=T_1^*$ and $T_2=T_2^*$ in a separable Hilbert space $\mathcal G$, we have the
Weyl inequalities
\begin{equation}\label{Weyl}
n_\pm(s_1+s_2;T_1+T_2)\le n_\pm(s_1;T_1)+n_\pm(s_2;T_2)\quad\hbox{for each }s_1,s_2>0.
\end{equation}
Moreover, if $T=T^*$ belongs to $S_p(\mathcal G)$ for some $p\in[1,\infty)$, then
\begin{equation}\label{pbound}
n_\pm(s;T)\le s^{-p}\|T\|^p_p\quad\hbox{for each }s>0.
\end{equation}
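The second bound is a Chebyshev-type inequality for the eigenvalue counting function: if
$\lambda_1^\pm\ge\lambda_2^\pm\ge\cdots>0$ denote the positive eigenvalues of $\pm T$, then
$$
s^p\;\!n_\pm(s;T)=\sum_{j\;\!:\;\!\lambda_j^\pm>s}s^p
\le\sum_{j\;\!:\;\!\lambda_j^\pm>s}(\lambda_j^\pm)^p
\le\|T\|_p^p.
$$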
\begin{Proposition}\label{empanada}
Let $V$ satisfy Assumption \ref{assumption2}. Then the estimates
\begin{align*}
&\int_\mathbb R\mathrm{d}\mu(t)\,
n_\pm\big(1+\varepsilon;\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)+t\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big)+\mathcal O(1)\\
&\le\mp\xi(\lambda;H_\mp,H_0)\\
&\le\int_\mathbb R\mathrm{d}\mu(t)\,
n_\pm\big(1-\varepsilon;\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)+t\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big)+\mathcal O(1)
\end{align*}
hold as $\lambda\to\pm m$ for each $\varepsilon\in(0,1)$.
\end{Proposition}
\begin{proof}
Using \eqref{manjar}, the Weyl inequalities \eqref{Weyl}, and Lemma \ref{aday}
we get
\begin{align}
&\int_\mathbb R\mathrm{d}\mu(t)\,
n_\pm\big(1+\varepsilon;\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)+t\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big)
-n_\mp\big(\varepsilon/2;\mathop{\mathsf{Re}}\nolimits T_{\sf bound}(\lambda)\big)
-\frac2{\pi\varepsilon}\big\|\mathop{\mathsf{Im}}\nolimits T_{\sf bound}(\lambda)\big\|_1\nonumber\\
&\le\int_\mathbb R\mathrm{d}\mu(t)\,n_\pm\big(1;A(\lambda+i0)+tB(\lambda+i0)\big)\nonumber\\
&\le\int_\mathbb R\mathrm{d}\mu(t)\,
n_\pm\big(1-\varepsilon;\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)+t\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big)
+n_\pm\big(\varepsilon/2;\mathop{\mathsf{Re}}\nolimits T_{\sf bound}(\lambda)\big)
+\frac2{\pi\varepsilon}\big\|\mathop{\mathsf{Im}}\nolimits T_{\sf bound}(\lambda)\big\|_1.\label{gniarf}
\end{align}
Due to \eqref{pbound}, we have
$$
n_{\pm}\big(\varepsilon/2;\mathop{\mathsf{Re}}\nolimits T_{\sf bound}(\lambda)\big)
\le16\!\;\varepsilon^{-4}\|T_{\sf bound}(\lambda)\|_4^4,
$$
which combined with \eqref{ChuchoValdes} gives
$$
n_{\pm}\big(\varepsilon/2;\mathop{\mathsf{Re}}\nolimits T_{\sf bound}(\lambda)\big)=\mathcal O(1)
\quad{\rm as}\quad\lambda\to\pm m.
$$
Moreover, we know from \eqref{limit_im} that
$$
\lim_{\lambda\to\pm m}\big\|\mathop{\mathsf{Im}}\nolimits T_{\sf bound}(\lambda)\big\|_1=0.
$$
So the claim follows from the estimates \eqref{gniarf} and formula \eqref{lhs}.
\end{proof}
\subsection{The case $\boldsymbol{|\lambda|<m}$}\label{inside}
In this section we prove asymptotic estimates for $\xi(\lambda;H_\pm,H_0)$ as $\lambda\to\pm m$
with $|\lambda|<m$. We start with a corollary of Proposition \ref{empanada}, which follows
from the fact that $\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)=0$ and
$\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)=T_{\sf div}(\lambda)$ for $\lambda\in(-m,m)$.
\begin{Corollary}\label{jet-lag}
Let $V$ satisfy Assumption \ref{assumption2}. Then the estimates
$$
n_\pm\big(1+\varepsilon;T_{\sf div}(\lambda)\big)+\mathcal O(1)
\le\mp\xi(\lambda;H_\mp,H_0)
\le n_\pm\big(1-\varepsilon;T_{\sf div}(\lambda)\big)+\mathcal O(1)
$$
hold as $\lambda\to\pm m$, $|\lambda|<m$, for each $\varepsilon\in(0,1)$.
\end{Corollary}
Define the bounded operators $K_\pm:\mathcal H\to\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^4)$ by
\begin{align*}
(K_+\varphi)(x_\perp)&:=\int_{\mathbb R^3}\mathrm{d} x_\perp'\mathrm{d} x_3'\,p(x_\perp,x_\perp')
\(\begin{smallmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
V^{1/2}(x_\perp',x_3')\varphi(x_\perp',x_3'),\\
(K_-\varphi)(x_\perp)&:=\int_{\mathbb R^3}\mathrm{d} x_\perp'\mathrm{d} x_3'\,p(x_\perp,x_\perp')
\(\begin{smallmatrix}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
V^{1/2}(x_\perp',x_3')\varphi(x_\perp',x_3'),
\end{align*}
where $p(\;\!\cdot\;\!,\;\!\cdot\;\!)$ is the integral kernel of the projection $p$.
One shows easily that $K_\pm^*:\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^4)\to\mathcal H$ are given by
\begin{align*}
(K_+^*\psi)(x_\perp,x_3)&=V^{1/2}(x_\perp,x_3)
\(\begin{smallmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
(p\psi)(x_\perp),\\
(K_-^*\psi)(x_\perp,x_3)&=V^{1/2}(x_\perp,x_3)
\(\begin{smallmatrix}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
(p\psi)(x_\perp),
\end{align*}
and that
$$
O_+(\lambda):=\textstyle\frac12\big(\frac{m+\lambda}{m-\lambda}\big)^{1/2}K_+^*K_+
\qquad{\rm and}\qquad
O_-(\lambda):=\textstyle-\frac12\big(\frac{m-\lambda}{m+\lambda}\big)^{1/2}K_-^*K_-
$$
belong to $S_2(\mathcal H)$ for each $\lambda\in(-m,m)$.
In the next proposition we show that the functions
$n_\pm\big(\;\!\cdot\;\!;T_{\sf div}(\lambda)\big)$ as $\lambda\to\pm m$, $|\lambda|<m$,
can be bounded, up to $\mathcal O(1)$ terms, from below and from above by expressions involving
$O_\pm(\lambda)$.
\begin{Proposition}\label{PoppaChubby}
Let $V$ satisfy Assumption \ref{assumption2}. Then the estimates
\begin{align}
n_+\big((1+\varepsilon)s;O_+(\lambda)\big)+\mathcal O(1)
&\le n_+\big(s;T_{\sf div}(\lambda)\big)
\le n_+\big((1-\varepsilon)s;O_+(\lambda)\big)+\mathcal O(1),\label{positive1}\\
\mathcal O(1)&\le n_-\big(s;T_{\sf div}(\lambda)\big)\le \mathcal O(1),\label{positive2}
\end{align}
hold as $\lambda\nearrow m$, for each $\varepsilon\in(0,1)$ and $s>0$,
and the estimates
\begin{align}
\mathcal O(1)&\le n_+\big(s;T_{\sf div}(\lambda)\big)\le \mathcal O(1),\label{positive3}\\
n_-\big((1+\varepsilon)s;O_-(\lambda)\big)+\mathcal O(1)
&\le n_-\big(s;T_{\sf div}(\lambda)\big)
\le n_-\big((1-\varepsilon)s;O_-(\lambda)\big)+\mathcal O(1),\label{positive4}
\end{align}
hold as $\lambda\searrow-m$, for each $\varepsilon\in(0,1)$ and $s>0$.
\end{Proposition}
\begin{proof}
We only give the proof of \eqref{positive1}-\eqref{positive2}, since the proof of
\eqref{positive3}-\eqref{positive4} is similar. In point (i) below we show that the
difference $T_{\sf div}(\lambda)-O_+(\lambda)$ can be approximated in norm, as
$\lambda\nearrow m$, by a compact operator independent of $\lambda$. Then
we prove \eqref{positive1}-\eqref{positive2} in point (ii) by using this result.
(i) Let $\lambda\in(-m,m)$ and take $\nu'\in(3,\nu)$. A direct calculation shows that
\begin{equation}\label{TminusO}
T_{\sf div}(\lambda)-O_+(\lambda)
=\widetilde M\big(G_{\nu-\nu'}\otimes J^{(\lambda)}_{\nu'}\big)
\(\begin{smallmatrix}
(\lambda+m) & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & (\lambda-m) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
\widetilde M+O_-(\lambda),
\end{equation}
where $J^{(\lambda)}_{\nu'}:\mathsf{L}^{\:\!\!2}(\mathbb R)\to\mathsf{L}^{\:\!\!2}(\mathbb R)$ is given by
$$
\big(J^{(\lambda)}_{\nu'}\psi\big)(x_3):=-\langle x_3\rangle^{-\nu'/2}\int_\mathbb R\mathrm{d} x_3'
\frac{\mathop{\mathrm{e}}\nolimits^{-\frac12\sqrt{m^2-\lambda^2}|x_3-x_3'|}}{\sqrt{m^2-\lambda^2}}\,
\sinh\Big(\frac{\sqrt{m^2-\lambda^2}|x_3-x_3'|}2\Big)\langle x_3'\rangle^{-\nu'/2}\psi(x_3'),
$$
and
\begin{align}
\widetilde M&:=V^{1/2}\langle Q_\perp\rangle^{(\nu-\nu')/2}
\langle Q_3\rangle^{\nu'/2},\label{Mtilde}\\
G_{\nu-\nu'}&:=\langle Q_\perp\rangle^{-(\nu-\nu')/2}p
\langle Q_\perp\rangle^{-(\nu-\nu')/2}.\label{G}
\end{align}
The operator $\widetilde M$ is bounded due to Assumption \ref{assumption2}, $G_{\nu-\nu'}$
is compact in $\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^4)$ due to Lemma \ref{lem_rai}, and
$O_-(\lambda)$ satisfies
\begin{equation}\label{omoins}
\lim_{\lambda\to m,\,|\lambda|<m}
\big\|O_-(\lambda)\big\|_2=0.
\end{equation}
Define
\begin{equation}\label{Tplusminus}
T_\pm:=\widetilde M\big(G_{\nu-\nu'}\otimes J^{(m)}_{\nu'}\big)
\(\begin{smallmatrix}
(m\pm m) & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & -(m\mp m) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
\widetilde M,
\end{equation}
with $J^{(m)}_{\nu'}:\mathsf{L}^{\:\!\!2}(\mathbb R)\to\mathsf{L}^{\:\!\!2}(\mathbb R)$ given by
$$
\big(J^{(m)}_{\nu'}\psi\big)(x_3):=-{\textstyle\frac12}\langle x_3\rangle^{-\nu'/2}\int_\mathbb R\mathrm{d} x_3'\,
|x_3-x_3'|\langle x_3'\rangle^{-\nu'/2}\psi(x_3').
$$
Since $\nu'>3$, $J^{(m)}_{\nu'}$ belongs to $S_2[\mathsf{L}^{\:\!\!2}(\mathbb R)]$, and $T_\pm$ is compact
in $\mathcal H$. Moreover, by using Lebesgue's dominated convergence theorem, one shows that
$$
\lim_{\lambda\to\pm m,\,|\lambda|<m}
\big\|J^{(m)}_{\nu'}-J^{(\lambda)}_{\nu'}\big\|^2_2=0.
$$
This, together with \eqref{TminusO}, \eqref{omoins} and \eqref{Tplusminus}, implies
that
\begin{equation}\label{CaptainKirk}
\lim_{\lambda\nearrow m}\big\|T_{\sf div}(\lambda)-O_+(\lambda)-T_+\big\|=0.
\end{equation}
(ii) Take $\lambda\in(-m,m)$, $\varepsilon\in(0,1)$, and $s>0$. Using the Weyl
inequalities \eqref{Weyl} we get
\begin{align*}
n_\pm\big((1+\varepsilon)s;O_+(\lambda)\big)
-n_\mp\big(\varepsilon s;T_{\sf div}(\lambda)-O_+(\lambda)\big)
&\le n_\pm\big(s;T_{\sf div}(\lambda)\big)\\
&\le n_\pm\big((1-\varepsilon)s;O_+(\lambda)\big)
+n_\pm\big(\varepsilon s;T_{\sf div}(\lambda)-O_+(\lambda)\big).
\end{align*}
Now we have $n_-\big(t;O_+(\lambda)\big)=0$ for each $t>0$ and $\lambda\in(-m,m)$,
since $O_+(\lambda)$ is a positive operator. So, to prove
\eqref{positive1}-\eqref{positive2}, it is sufficient to show that
$
n_\pm\big(\varepsilon s;T_{\sf div}(\lambda)-O_+(\lambda)\big)=\mathcal O(1)
$
as $\lambda\nearrow m$, for each $\varepsilon\in(0,1)$ and $s>0$. Let
$t>0$ be fixed. Then we know from \eqref{CaptainKirk} that we can choose
$\lambda_+\in(-m,m)$, close enough to $m$, so that
$\big\|T_{\sf div}(\lambda_+)-O_+(\lambda_+)-T_+\big\|<t/2$. Thus, using again the
Weyl inequalities, we get
$$
n_\pm\big(t;T_{\sf div}(\lambda_+)-O_+(\lambda_+)\big)
\le n_\pm\big(t/2;T_{\sf div}(\lambda_+)-O_+(\lambda_+)-T_+\big)+n_\pm\big(t/2;T_+\big)
=n_\pm\big(t/2;T_+\big).
$$
Since the r.h.s. is independent of $\lambda_+$ we have shown that
$n_\pm\big(t;T_{\sf div}(\lambda)-O_+(\lambda)\big)=\mathcal O(1)$ as $\lambda\nearrow m$.
This concludes the proof of \eqref{positive1}-\eqref{positive2}.
\end{proof}
We now show that the counting functions $n_\pm\big(\;\!\cdot\;\!;O_\pm(\lambda)\big)$
in Proposition \ref{PoppaChubby} can be rewritten in terms of Berezin-Toeplitz type operators.
Define for each $\lambda\in(-m,m)$
$$
\omega_+(\lambda)
:=\textstyle\frac12\big(\frac{m+\lambda}{m-\lambda}\big)^{1/2}pW_+p
\qquad{\rm and}\qquad
\omega_-(\lambda)
:=\textstyle-\frac12\big(\frac{m-\lambda}{m+\lambda}\big)^{1/2}pW_-p,
$$
where the functions $W_\pm:\mathbb R^2\to\mathbb R$ are given by
\begin{equation}\label{BigOmegas}
W_+(x_\perp):=\int_\mathbb R\mathrm{d} x_3\,V_{11}(x_\perp,x_3)
\qquad{\rm and}\qquad
W_-(x_\perp):=\int_\mathbb R\mathrm{d} x_3\,V_{33}(x_\perp,x_3).
\end{equation}
Under the condition \eqref{a_decay} one has
$$
0\le W_\pm(x_\perp)\le{\rm Const.}\;\!\langle x_\perp\rangle^{-\nu+1}
\quad\hbox{for all }x_\perp\in\mathbb R^2,
$$
and $\omega_\pm(\lambda)\in S_1[\mathsf{L}^{\:\!\!2}(\mathbb R^2)]$ if $V$ satisfies Assumption
\ref{first_decay} (see Lemma \ref{lem_rai}). Moreover, one has the
following.
\begin{Proposition}\label{spec_AB}
Let $V$ satisfy Assumption \ref{assumption1}. Then we have for each $\lambda\in(-m,m)$
and $s>0$
\begin{equation}\label{O=omega}
n_\pm\big(s;O_\pm(\lambda)\big)=n_\pm\big(s;\omega_\pm(\lambda)\big).
\end{equation}
\end{Proposition}
\begin{proof}
Given $s>0$ and two separable Hilbert spaces $\mathcal H_1,\mathcal H_2$, one has
\begin{equation}\label{BB*=B*B}
n_\pm\big(s;B^*B\big)=n_\pm\big(s;BB^*\big)
\end{equation}
for any $B\in\mathscr B(\mathcal H_1,\mathcal H_2)$ such that $B^*B\in S_\infty(\mathcal H_1)$. Moreover, one can
easily check that
$$
K_+K_+^*=
\(\begin{smallmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
pW_+p
\qquad{\rm and}\qquad
K_-K_-^*=
\(\begin{smallmatrix}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
pW_-p.
$$
Thus
$$
\textstyle
n_+\big(s;O_+(\lambda)\big)
=n_+\bigg(s;\frac12\big(\frac{m+\lambda}{m-\lambda}\big)^{1/2}
\(\begin{smallmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
pW_+p\bigg)
=n_+\Big(s;\frac12\big(\frac{m+\lambda}{m-\lambda}\big)^{1/2}pW_+p\Big)
=n_+\big(s;\omega_+(\lambda)\big).
$$
The proof of the second equality in \eqref{O=omega} is similar.
\end{proof}
The next theorem is a direct consequence of Corollary \ref{jet-lag} and Propositions
\ref{PoppaChubby}-\ref{spec_AB}.
\begin{Theorem}\label{thm<}
Let $V$ satisfy Assumption \ref{assumption2}. Then one has for each $\varepsilon\in(0,1)$
\begin{align}
\mathcal O(1)&\le\xi(\lambda;H_+,H_0)\le \mathcal O(1)\label{ineq1}\\
-n_+\big(1-\varepsilon;\omega_+(\lambda)\big)+\mathcal O(1)
&\le\xi(\lambda;H_-,H_0)
\le-n_+\big(1+\varepsilon;\omega_+(\lambda)\big)+\mathcal O(1)\label{ineq2}
\end{align}
as $\lambda\nearrow m$, and
\begin{align}
n_-\big(1+\varepsilon;\omega_-(\lambda)\big)+\mathcal O(1)
&\le\xi(\lambda;H_+,H_0)
\le n_-\big(1-\varepsilon;\omega_-(\lambda)\big)+\mathcal O(1)\label{ineq3}\\
\mathcal O(1)&\le\xi(\lambda;H_-,H_0)\le \mathcal O(1)\label{ineq4}
\end{align}
as $\lambda\searrow-m$.
\end{Theorem}
\begin{Remark}
The inequalities \eqref{ineq1} together with Remark \ref{int_eigen} imply that the
eigenvalues of $H_0+V$ in $(-m,m)$ near $+m$ (if any) do not accumulate at $+m$. On
the other hand the inequalities \eqref{ineq2} tell us that the number of eigenvalues
of $H_0-V$ in $(-m,m)$ near $\lambda=+m$ scales, up to $\mathcal O(1)$ terms, as
$$
\textstyle
n_+\big(s;\omega_+(\lambda)\big)\equiv\mathop{\mathrm{rank}}\nolimits E^{pW_+p}
\Big(\Big(s\big(\frac{m-\lambda}{m+\lambda}\big)^{1/2},\infty\Big)\Big)
$$
with $s\approx2$. Accordingly, the problem of counting the number of eigenvalues of
$H_0-V$ in $(-m,m)$ near $+m$ reduces to the problem of counting the number of eigenvalues
of the positive Berezin-Toeplitz type operator $pW_+p$ near $0$. The inequalities
\eqref{ineq3}-\eqref{ineq4} lead to similar conclusions on the number
of eigenvalues of $H_0\pm V$ in $(-m,m)$ near $-m$.
One can compare these results with the results of \cite{Coj06} and \cite{IM99} on the
finiteness in $(-m,m)$ of the discrete spectrum of the Dirac operator perturbed by a
matrix potential $Q\equiv\{Q_{jk}(x)\}_{j,k=1}^4$. In Corollary 2.2 of \cite{Coj06}, the
author shows that the spectrum in $(-m,m)$ of the Dirac operator perturbed by $Q$ is
finite if the $2\times2$ diagonal blocks of $Q$ are of order $\mathcal O\big(|x|^{-2-\delta}\big)$
and the anti-diagonal blocks are of order $\mathcal O\big(|x|^{-1-\delta}\big)$, for some
$\delta>0$ as $|x|\to\infty$. In Corollary 2.1 of \cite{IM99}, the authors show that the
Dirac operator perturbed by $\gamma Q$, with $|\gamma|$ small enough and
$$
\big|Q_{jk}(x)\big|\le\langle x\rangle^{-2},\qquad j,k\in\{1,2,3,4\},
$$
does not have any point spectrum. Therefore, in our case, where $Q=-\alpha_1a_1-\alpha_2a_2+V$,
there would have been no accumulation of eigenvalues in $(-m,m)$ had we imposed such decay
assumptions on the magnetic part $-\alpha_1a_1-\alpha_2a_2$ of the perturbation.
\end{Remark}
As seen in Theorem \ref{thm<} the behaviour of the function $\xi(\;\!\cdot\;\!;H_\pm,H_0)$
in $(-m,m)$ depends on the distribution of
eigenvalues of the trace class operator $pW_\mp p$. In our next proposition we
shall exhibit different types of behaviours depending on the choice of the
functions $V_{11}$ and $V_{33}$ appearing in $W_\pm$. For that purpose, we first have
to recall some technical results taken from \cite{Rai09}, \cite{Rai03} and \cite{RW02}.
In the first lemma, an integrated density of states (IDS) for the operator $H_\perp^-$
in $\mathsf{L}^{\:\!\!2}(\mathbb R^2)$ is defined as follows (see \eg \cite{DIM01,HLMW01}): Let $\chi_{T,x_\perp}$
be the characteristic function of the square $x_\perp+\big(-\frac T2,\frac T2\big)^2$, with
$x_\perp\in\mathbb R^2$ and $T>0$. Then a non-decreasing function $\varrho:[0,\infty)\to\mathbb R$ is
called an IDS for the operator $H_\perp^-$ if for each $x_\perp\in\mathbb R^2$ it satisfies
$$
\varrho(\lambda)
=\lim_{T\to\infty}T^{-2}\mathop{\mathsf{Tr}}\nolimits\big[\chi_{T,x_\perp}(Q_\perp)
E^{H_\perp^-}\big((-\infty,\lambda)\big)\chi_{T,x_\perp}(Q_\perp)\big]
$$
for each point $\lambda\in\mathbb R$ of continuity of $\varrho$.
\begin{Lemma}[Lemma 3.3 of \cite{Rai09}]\label{tec_Rai1}
Let $U\in C^1(\mathbb R^2)$ satisfy
$$
0\le U(x_\perp)\le{\rm Const.}\;\!\langle x_\perp\rangle^{-\alpha}
\qquad\hbox{and}\qquad
\big|(\nabla U)(x_\perp)\big|\le{\rm Const.}\;\!\langle x_\perp\rangle^{-\alpha-1}
$$
for all $x_\perp\in\mathbb R^2$ and some $\alpha>0$. Assume moreover that
\begin{enumerate}
\item[$\bullet$] $U(x_\perp)=u\big(\frac{x_\perp}{|x_\perp|}\big)|x_\perp|^{-\alpha}\big(1+o(1)\big)$ as
$|x_\perp|\to\infty$, where $u$ is a continuous function on $\mathbb S^1$ which does
not vanish identically,
\item[$\bullet$] $b$ is an admissible magnetic field,
\item[$\bullet$] there exists an IDS $\varrho_b$ for the operator $H_\perp^-$.
\end{enumerate}
Then we have
$$
n_+\big(s;pUp\big)
=\frac{b_0}{2\pi}\big|\big\{x_\perp\in\mathbb R^2\mid U(x_\perp)>s\big\}\big|\big(1+o(1)\big)
=\Psi_\alpha(s;u,b_0)\big(1+o(1)\big)\quad\hbox{as}\quad s\searrow0,
$$
where $|\;\!\cdot\;\!|$ denotes the Lebesgue measure, and
\begin{equation}\label{psi_alpha}
\Psi_\alpha(s;u,b_0):=\frac{s^{-2/\alpha}b_0}{4\pi}
\int_{\mathbb S^1}\mathrm{d}\vartheta\,u(\vartheta)^{2/\alpha},\qquad s>0.
\end{equation}
\end{Lemma}
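For instance, if $u\equiv u_0>0$ is constant on $\mathbb S^1$ (that is, if $U$ decays
isotropically at infinity), then a direct substitution in \eqref{psi_alpha} gives
$$
\Psi_\alpha(s;u_0,b_0)=\frac{b_0}2\,u_0^{2/\alpha}\,s^{-2/\alpha},
$$
so that $n_+\big(s;pUp\big)$ diverges at the power-like rate $s^{-2/\alpha}$ as $s\searrow0$,
with a prefactor proportional to $b_0$.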
\begin{Lemma}[Lemma 3.4 of \cite{Rai09}]\label{tec_Rai2}
Let $0\le U\in\mathsf{L}^{\:\!\!\infty}(\mathbb R^2)$. Assume that
$$
\ln\big(U(x_\perp)\big)
=-\eta|x_\perp|^{2\beta}\big(1+o(1)\big)\quad\hbox{as}\quad|x_\perp|\to\infty,
$$
for some $\eta,\beta>0$. Let $b$ be an admissible magnetic field. Then we have
$$
n_+\big(s;pUp\big)=\Phi_\beta(s,\eta,b_0)\big(1+o(1)\big)
\quad\hbox{as}\quad s\searrow0,
$$
where
\begin{equation}\label{phi_beta}
\Phi_\beta(s,\eta,b_0):=
\begin{cases}
\frac{b_0}{2\eta^{1/\beta}}\;\!|\ln(s)|^{1/\beta} & \hbox{if}~~\beta\in(0,1),\\
\frac1{\ln(1+2\eta/b_0)}\;\!|\ln(s)| & \hbox{if}~~\beta=1,\\
\frac\beta{\beta-1}\big(\ln|\ln(s)|\big)^{-1}|\ln(s)| & \hbox{if}~~\beta>1,
\end{cases}
\qquad s\in(0,\mathop{\mathrm{e}}\nolimits^{-1}).
\end{equation}
\end{Lemma}
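For instance, the Gaussian weight $U(x_\perp)=\mathop{\mathrm{e}}\nolimits^{-\eta|x_\perp|^2}$ satisfies the
hypotheses with $\beta=1$, and Lemma \ref{tec_Rai2} then yields the logarithmic divergence
$$
n_+\big(s;pUp\big)=\frac{|\ln(s)|}{\ln(1+2\eta/b_0)}\big(1+o(1)\big)
\quad\hbox{as}\quad s\searrow0,
$$
which is much slower than the power-like divergence of Lemma \ref{tec_Rai1}.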
\begin{Lemma}[Lemma 3.5 of \cite{Rai09}]\label{tec_Rai3}
Let $0\le U\in\mathsf{L}^{\:\!\!\infty}(\mathbb R^2)$. Assume that the support of $U$ is compact, and that
there exists a constant $\textsc c>0$ such that $U\ge\textsc c$ on an open
non-empty subset of $\mathbb R^2$. Let $b$ be an admissible magnetic field. Then we have
$$
n_+\big(s;pUp\big)=\Phi_\infty(s)\big(1+o(1)\big)\quad\hbox{as}\quad s\searrow0,
$$
where
\begin{equation}\label{phi_infty}
\Phi_\infty(s):=\big(\ln|\ln(s)|\big)^{-1}|\ln(s)|,\qquad s\in(0,\mathop{\mathrm{e}}\nolimits^{-1}).
\end{equation}
\end{Lemma}
Combining Theorem \ref{thm<} with Lemmas \ref{tec_Rai1}-\ref{tec_Rai3} we obtain
the behaviour of $\xi(\lambda;H_\pm,H_0)$ as $|\lambda|\to m$, $|\lambda|<m$,
when the functions $W_\pm$ admit a power-like or exponential decay at infinity,
or when they have a compact support.
\begin{Proposition}\label{in_gap}
Let $V$ satisfy Assumption \ref{assumption2}.
\begin{enumerate}
\item[(a)] Assume that the hypotheses of Lemma \ref{tec_Rai1} hold with
$U_\pm=W_\pm$ and $\alpha=\nu-1$. Then we have
$$
\xi(\lambda;H_-,H_0)
=\textstyle-\Psi_{\nu-1}\Big(2\big(\frac{m-\lambda}{m+\lambda}\big)^{1/2};u_+,b_0\Big)
\big(1+o(1)\big)\quad\hbox{as}\quad\lambda\nearrow m,
$$
and
$$
\xi(\lambda;H_+,H_0)
=\textstyle\Psi_{\nu-1}\Big(2\big(\frac{m+\lambda}{m-\lambda}\big)^{1/2};u_-,b_0\Big)
\big(1+o(1)\big)\quad\hbox{as}\quad\lambda\searrow-m,
$$
with $\Psi_{\nu-1}$ given by Equation \eqref{psi_alpha}.
\item[(b)] Assume that the hypotheses of Lemma \ref{tec_Rai2} hold with
$U_\pm=W_\pm$. Then we have
$$
\xi(\lambda;H_-,H_0)=\textstyle-\Phi_{\beta_+}
\Big(2\big(\frac{m-\lambda}{m+\lambda}\big)^{1/2};\eta_+,b_0\Big)\big(1+o(1)\big)
\quad\hbox{as}\quad\lambda\nearrow m,
$$
and
$$
\xi(\lambda;H_+,H_0)=\textstyle\Phi_{\beta_-}
\Big(2\big(\frac{m+\lambda}{m-\lambda}\big)^{1/2};\eta_-,b_0\Big)\big(1+o(1)\big)
\quad\hbox{as}\quad\lambda\searrow-m,
$$
with $\beta_\pm\in(0,\infty)$ and $\Phi_{\beta_\pm}$ given by Equation \eqref{phi_beta}.
\item[(c)] Assume that the hypotheses of Lemma \ref{tec_Rai3} hold with
$U_\pm=W_\pm$. Then we have
$$
\xi(\lambda;H_-,H_0)=\textstyle-\Phi_\infty
\Big(2\big(\frac{m-\lambda}{m+\lambda}\big)^{1/2}\Big)\big(1+o(1)\big)
\quad\hbox{as}\quad\lambda\nearrow m,
$$
and
$$
\xi(\lambda;H_+,H_0)=\textstyle\Phi_\infty
\Big(2\big(\frac{m+\lambda}{m-\lambda}\big)^{1/2}\Big)\big(1+o(1)\big)
\quad\hbox{as}\quad\lambda\searrow-m,
$$
with $\Phi_\infty$ given by Equation \eqref{phi_infty}.
\end{enumerate}
\end{Proposition}
The estimates of Proposition \ref{in_gap} are similar to the ones of \cite[Cor.~3.6]{Rai09},
where the corresponding situation for magnetic Pauli operators is considered.
\subsection{The case $\boldsymbol{|\lambda|>m}$}\label{outside}
In this section we prove asymptotic estimates for $\xi(\lambda;H,H_\pm)$ as $\lambda\to\pm m$,
when $|\lambda|>m$. We start by showing an estimate for
$n_\pm\big(s;\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)\big)$.
\begin{Proposition}\label{pebre}
Let $V$ satisfy Assumption \ref{assumption2}. Then the estimates
$$
n_\pm\big(s;\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)\big)=\mathcal O(1)
\quad\hbox{as}\quad\lambda\to\pm m,~|\lambda|>m,
$$
hold for each $s>0$.
\end{Proposition}
\begin{proof}
Take $\lambda\in\mathbb R$ with $|\lambda|>m$, and let $\nu'\in(3,\nu)$. Then we have
$$
\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)=\widetilde M\big(G_{\nu-\nu'}\otimes R^{(\lambda)}_{\nu'}\big)
\(\begin{smallmatrix}
(\lambda+m) & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & (\lambda-m) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)\widetilde M,
$$
with $\widetilde M$ and $G_{\nu-\nu'}$ as in \eqref{Mtilde}-\eqref{G}, and
$$
R^{(\lambda)}_{\nu'}:=\langle Q_3\rangle^{-\nu'/2}\mathop{\mathsf{Re}}\nolimits R(\lambda^2-m^2)
\langle Q_3\rangle^{-\nu'/2}.
$$
By using Lebesgue's dominated convergence theorem, one shows that
$$
\lim_{\lambda\to\pm m,\,|\lambda|>m}\big\|\mathop{\mathsf{Re}}\nolimits T_{\sf div}(\lambda)-T_\pm\big\|=0,
$$
with $T_\pm$ as in \eqref{Tplusminus}. So the claim can be proved as in point (ii)
of the proof of Proposition \ref{PoppaChubby}.
\end{proof}
The next result follows from applying Propositions \ref{empanada} and \ref{pebre},
the Weyl inequalities \eqref{Weyl} and the identities \cite[Sec.~5.4]{FR04}
\begin{equation}\label{id_arctan}
\int_\mathbb R\mathrm{d}\mu(t)\,n_\pm\big(s;tT\big)=\pi^{-1}\mathop{\mathsf{Tr}}\nolimits\arctan(s^{-1}T),\qquad s>0,
\end{equation}
where $T\in S_1(\mathcal H)$, $T=T^*\ge0$. We also use the fact that
$\mathop{\mathrm{sgn}}\nolimits(\lambda)\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)$ is a positive operator if $|\lambda|>m$.
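To illustrate the identity \eqref{id_arctan} (here $\mu$ is assumed to be the Cauchy measure
$\mathrm{d}\mu(t)=\pi^{-1}(1+t^2)^{-1}\;\!\mathrm{d} t$, for which the identity can be checked directly),
consider a rank-one operator $T$ with eigenvalue $\tau>0$. Then $n_+\big(s;tT\big)=1$ if and
only if $t>s/\tau$, so that
$$
\int_\mathbb R\mathrm{d}\mu(t)\,n_+\big(s;tT\big)
=\frac1\pi\int_{s/\tau}^\infty\frac{\mathrm{d} t}{1+t^2}
=\frac1\pi\Big(\frac\pi2-\arctan(s/\tau)\Big)
=\pi^{-1}\arctan\big(s^{-1}\tau\big),
$$
in accordance with the r.h.s. of \eqref{id_arctan}; the case of a general $T\in S_1(\mathcal H)$,
$T=T^*\ge0$, follows by summing over its eigenvalues.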
\begin{Corollary}\label{Musashi}
Let $V$ satisfy Assumption \ref{assumption2}. Then the estimates
\begin{align*}
&\pi^{-1}\mathop{\mathsf{Tr}}\nolimits\arctan
\big[(1+\varepsilon)^{-1}\mathop{\mathrm{sgn}}\nolimits(\lambda)\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big]+\mathcal O(1)\\
&\le\mp\xi(\lambda;H_\mp,H_0)\\
&\le\pi^{-1}\mathop{\mathsf{Tr}}\nolimits\arctan
\big[(1-\varepsilon)^{-1}\mathop{\mathrm{sgn}}\nolimits(\lambda)\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big]+\mathcal O(1)
\end{align*}
hold as $\lambda\to\pm m$, $|\lambda|>m$, for each $\varepsilon\in(0,1)$.
\end{Corollary}
As in the case $|\lambda|<m$, we introduce auxiliary operators in order to express
the lower and upper bounds for $\mp\xi(\lambda;H_\mp,H_0)$ in terms of Berezin-Toeplitz
type operators. For $\lambda\in\mathbb R$ with $|\lambda|>m$, we define the operators
$K_{1,\lambda},K_{2,\lambda}:\mathcal H\to\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^4)$ by
\begin{align*}
(K_{1,\lambda}\varphi)(x_\perp)
&:=\int_{\mathbb R^3}\mathrm{d} x_\perp'\mathrm{d} x_3'\,p(x_\perp,x_\perp')
\cos\big(x_3'\sqrt{\lambda^2-m^2}\big)
\(\begin{smallmatrix}
\sqrt{|\lambda+m|} & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & \sqrt{|\lambda-m|} & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
V^{1/2}(x_\perp',x_3')\varphi(x_\perp',x_3'),\\
(K_{2,\lambda}\varphi)(x_\perp)
&:=\int_{\mathbb R^3}\mathrm{d} x_\perp'\mathrm{d} x_3'\,p(x_\perp,x_\perp')
\sin\big(x_3'\sqrt{\lambda^2-m^2}\big)
\(\begin{smallmatrix}
\sqrt{|\lambda+m|} & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & \sqrt{|\lambda-m|} & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
V^{1/2}(x_\perp',x_3')\varphi(x_\perp',x_3').
\end{align*}
Direct calculations show that the adjoint operators
$K_{1,\lambda}^*,K_{2,\lambda}^*:\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^4)\to\mathcal H$ are given by
\begin{align*}
(K_{1,\lambda}^*\psi)(x_\perp,x_3)
&=\cos\big(x_3\sqrt{\lambda^2-m^2}\big)V^{1/2}(x_\perp,x_3)
\(\begin{smallmatrix}
\sqrt{|\lambda+m|} & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & \sqrt{|\lambda-m|} & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
(p\psi)(x_\perp),\\
(K_{2,\lambda}^*\psi)(x_\perp,x_3)
&=\sin\big(x_3\sqrt{\lambda^2-m^2}\big)V^{1/2}(x_\perp,x_3)
\(\begin{smallmatrix}
\sqrt{|\lambda+m|} & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & \sqrt{|\lambda-m|} & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\)
(p\psi)(x_\perp),
\end{align*}
and that
$$
\mathop{\mathrm{sgn}}\nolimits(\lambda)\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)=\frac1{2\sqrt{\lambda^2-m^2}}
\big(K_{1,\lambda}^*K_{1,\lambda}+K_{2,\lambda}^*K_{2,\lambda}\big).
$$
This last equation can be written more compactly as
\begin{equation}\label{graou}
\mathop{\mathrm{sgn}}\nolimits(\lambda)\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)
=\frac1{2\sqrt{\lambda^2-m^2}}\,K_\lambda^*K_\lambda
\end{equation}
if we use the operator
$$
K_\lambda:\mathcal H\to\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^8),\qquad K_\lambda\varphi:=
\begin{pmatrix}
K_{1,\lambda}\varphi\\
K_{2,\lambda}\varphi
\end{pmatrix},
$$
with adjoint
$$
K_\lambda^*:\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^8)\to\mathcal H,
\qquad K_{\lambda}^*\begin{pmatrix}\psi_1\\\psi_2\end{pmatrix}
=K_{1,\lambda}^*\psi_1+K_{2,\lambda}^*\psi_2.
$$
For the next proposition we also need to introduce for each $\lambda\in\mathbb R$ with
$|\lambda|>m$ the positive operator $\Omega(\lambda):\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^8)\to\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^8)$
defined by
$$
\Omega(\lambda):=\frac1{2\sqrt{\lambda^2-m^2}}\,K_\lambda K_\lambda^*.
$$
A direct calculation shows that
$$
K_\lambda K_\lambda^*
=p\begin{pmatrix}
M_{1,\lambda} & M_{2,\lambda}\\
M_{2,\lambda} & M_{3,\lambda}
\end{pmatrix}p,
$$
where
\begin{align*}
M_{1,\lambda}(x_\perp)
&:=\int_\mathbb R\mathrm{d} x_3\,\cos^2\big(x_3\sqrt{\lambda^2-m^2}\big)
\(\begin{smallmatrix}
|\lambda+m|V_{11}(x_\perp,x_3) & 0
& \sqrt{\lambda^2-m^2}V_{13}(x_\perp,x_3) & 0\\
0 & 0 & 0 & 0\\
\sqrt{\lambda^2-m^2}V_{31}(x_\perp,x_3) & 0
& |\lambda-m|V_{33}(x_\perp,x_3) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\),\\
M_{2,\lambda}(x_\perp)
&=\int_\mathbb R\mathrm{d} x_3\,\sin\big(x_3\sqrt{\lambda^2-m^2}\big)
\cos\big(x_3\sqrt{\lambda^2-m^2}\big)
\(\begin{smallmatrix}
|\lambda+m|V_{11}(x_\perp,x_3) & 0
& \sqrt{\lambda^2-m^2}V_{13}(x_\perp,x_3) & 0\\
0 & 0 & 0 & 0\\
\sqrt{\lambda^2-m^2}V_{31}(x_\perp,x_3) & 0
& |\lambda-m|V_{33}(x_\perp,x_3) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\),\\
M_{3,\lambda}(x_\perp)
&=\int_\mathbb R\mathrm{d} x_3\,\sin^2\big(x_3\sqrt{\lambda^2-m^2}\big)
\(\begin{smallmatrix}
|\lambda+m|V_{11}(x_\perp,x_3) & 0
& \sqrt{\lambda^2-m^2}V_{13}(x_\perp,x_3) & 0\\
0 & 0 & 0 & 0\\
\sqrt{\lambda^2-m^2}V_{31}(x_\perp,x_3) & 0
& |\lambda-m|V_{33}(x_\perp,x_3) & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\).
\end{align*}
This implies that
$$
\textstyle
\|\Omega(\lambda)\|_1
\le\big(\frac{\lambda+m}{\lambda-m}\big)^{1/2}\big\|pW_+p\big\|_1
+\big(\frac{\lambda-m}{\lambda+m}\big)^{1/2}\big\|pW_-p\big\|_1,
$$
and thus $\Omega(\lambda)\in S_1[\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^8)]$ if $V$ satisfies Assumption
\ref{assumption1}.
The next proposition is a direct consequence of Equations \eqref{BB*=B*B} and
\eqref{graou}.
\begin{Proposition}
Let $V$ satisfy Assumption \ref{assumption1}. Then we have for each $\lambda\in\mathbb R$
with $|\lambda|>m$ and each $s>0$
$$
n_\pm\big(s;\mathop{\mathrm{sgn}}\nolimits(\lambda)\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big)
=n_\pm\big(s;\Omega(\lambda)\big).
$$
In particular, it follows by Equation \eqref{id_arctan} that
\begin{equation}\label{cedula}
\mathop{\mathsf{Tr}}\nolimits\arctan\big(s^{-1}\mathop{\mathrm{sgn}}\nolimits(\lambda)\mathop{\mathsf{Im}}\nolimits T_{\sf div}(\lambda)\big)
=\mathop{\mathsf{Tr}}\nolimits\arctan\big(s^{-1}\Omega(\lambda)\big).
\end{equation}
\end{Proposition}
The combination of Corollary \ref{Musashi} and Equation \eqref{cedula} gives the following.
\begin{Theorem}\label{thm_ext}
Let $V$ satisfy Assumption \ref{assumption2}. Then one has for each $\varepsilon\in(0,1)$
\begin{align*}
\pm\pi^{-1}\mathop{\mathsf{Tr}}\nolimits\arctan\big[(1\pm\varepsilon)^{-1}\Omega(\lambda)\big]+\mathcal O(1)
\le\xi(\lambda;H_\pm,H_0)
\le\pm\pi^{-1}\mathop{\mathsf{Tr}}\nolimits\arctan\big[(1\mp\varepsilon)^{-1}\Omega(\lambda)\big]+\mathcal O(1)
\end{align*}
as $\lambda\to\pm m$, $|\lambda|>m$.
\end{Theorem}
\begin{Remark}\label{C_sym}
The fact that the operators $\omega_\pm(\lambda)$ and $\Omega(\lambda)$ in
Theorems \ref{thm<} and \ref{thm_ext} depend in a distinguished way on the
components $V_{11}$ and $V_{33}$ of $V$ is due to our initial assumption
$b_0>0$. Indeed, this choice implies that $\ker(H^-_\perp)$ is non-trivial,
whereas $\ker(H^+_\perp)=\{0\}$. This led us to introduce in Section
\ref{Sec_Dec} the projection $\mathsf P\equiv\mathop{\mathrm{diag}}\nolimits(P,0,P,0)$, which brings to light the
privileged role of the components $V_{11}$ and $V_{33}$ of $V$.
The variation of $\xi(\lambda;H_\pm,H_0)$ under the change
$\lambda\mapsto-\lambda$ can be explained using the antiunitary charge conjugation
transformation \cite[Sec.~1.4.6]{Tha92}
$$
C:\mathcal H\to\mathcal H,\qquad\varphi\mapsto U_C\overline\varphi,
$$
where $U_C:=i\beta\alpha_2$. Indeed, if we write $H(\vec a,\pm V)$ and $H_0(\vec a)$
for $H_\pm$ and $H_0$, then a direct calculation shows that
$$
CH(\vec a,\pm V)C^{-1}=-H(-\vec a,\mp U_C\overline VU_C^*),
$$
which, together with the Lifshits-Krein trace formula \eqref{eq_LK}, entails
$$
\xi\big(\lambda;H(\vec a,\pm V),H_0(\vec a)\big)
=-\xi\big(-\lambda;H(-\vec a,\mp U_C\overline VU_C^*),H_0(-\vec a)\big).
$$
This explains why the overall sign of the spectral shift function is reversed
under the change $\lambda\mapsto-\lambda$. But it also explains why the roles of $V_{11}$
and $V_{33}$ are interchanged in the estimates. Indeed, the natural
projection corresponding to the vector potential $\vec a$ is $\mathsf P=\mathop{\mathrm{diag}}\nolimits(P,0,P,0)$ since
we have $b_0>0$ for $\vec a$, whereas $\mathsf P':=\mathop{\mathrm{diag}}\nolimits(0,P,0,P)$ is the natural choice for the
vector potential $-\vec a$ since we have $b_0<0$ for $-\vec a$. Now, one has
$$
\mp U_C\overline VU_C^*=\mp
\left(\begin{smallmatrix}
V_{44} & -\overline{V_{43}} & -\overline{V_{42}} & \overline{V_{41}}\\
-\overline{V_{34}} & V_{33} & \overline{V_{32}} & -\overline{V_{31}}\\
-\overline{V_{24}} & \overline{V_{23}} & V_{22} & -\overline{V_{21}}\\
\overline{V_{14}} & -\overline{V_{13}} & -\overline{V_{12}} & V_{11}
\end{smallmatrix}\right).
$$
So, the projection $\mathsf P$ which selects the components $\pm(V_{11},V_{33})$ of the potential
$\pm V$ is replaced, after the change $\lambda\mapsto-\lambda$, by the projection $\mathsf P'$
which selects the components $\mp(V_{33},V_{11})$ of the transformed potential
$\mp U_C\overline VU_C^*$.
\end{Remark}
For the next proposition we define for each $\lambda\in\mathbb R$ with $|\lambda|>m$ the
positive operator $\Omega^{(1)}(\lambda)$ in $\mathsf{L}^{\:\!\!2}(\mathbb R^2;\mathbb C^8)$ given by
$$
\Omega^{(1)}(\lambda):=\frac1{2\sqrt{\lambda^2-m^2}}
\begin{pmatrix}
pM_\lambda p & 0\\
0 & 0
\end{pmatrix}
\qquad\hbox{where}\qquad
M_\lambda:=
\(\begin{smallmatrix}
|\lambda+m|W_+ & 0 & 0 &0\\
0 & 0 & 0 &0\\
0 & 0 & |\lambda-m|W_- &0\\
0 & 0 & 0 &0
\end{smallmatrix}\).
$$
\begin{Proposition}\label{soprole}
\begin{enumerate}
\item[(a)] Let $V$ satisfy Assumption \ref{assumption2} with $\nu\in(3,4]$. Then one has
for each $s>0$ and each $\delta\in\big(\frac{4-\nu}2,\frac12\big)$
$$
\mathop{\mathsf{Tr}}\nolimits\big\{\arctan\big[s^{-1}\Omega(\lambda)\big]
-\arctan\big[s^{-1}\Omega^{(1)}(\lambda)\big]\big\}
=\mathcal O\big(|\lambda\mp m|^{-\delta}\big)
\quad\hbox{as}\quad\lambda\to\pm m,~|\lambda|>m.
$$
\item[(b)] Let $V$ satisfy Assumption \ref{assumption1} with $\nu_\perp>2$ and
$\nu_3>2$. Then one has for each $s>0$
\begin{equation}\label{choclito}
\mathop{\mathsf{Tr}}\nolimits\big\{\arctan\big[s^{-1}\Omega(\lambda)\big]
-\arctan\big[s^{-1}\Omega^{(1)}(\lambda)\big]\big\}=\mathcal O(1)
\quad\hbox{as}\quad\lambda\to\pm m,~|\lambda|>m.
\end{equation}
\end{enumerate}
\end{Proposition}
\begin{proof}
Points (a) and (b) are proved by using the Lifshits-Krein trace formula \eqref{eq_LK}
with $f(\lambda)=\arctan(\lambda)$, $\lambda\in\mathbb R$. We do not give the details, since
the argument is analogous to the one of \cite[Cor.~2.2]{FR04}.
\end{proof}
Note that if $V$ satisfies Assumption \ref{assumption2} with $\nu\in(3,4]$, we can choose
$\delta\in\big(\frac{4-\nu}2,\frac1{\nu-1}\big)$, and so Proposition \ref{soprole}.(a)
entails
\begin{equation}
\mathop{\mathsf{Tr}}\nolimits\big\{\arctan\big[s^{-1}\Omega(\lambda)\big]
-\arctan\big[s^{-1}\Omega^{(1)}(\lambda)\big]\big\}
=o\big(|\lambda\mp m|^{-\frac1{\nu-1}}\big)
\quad\hbox{as}\quad\lambda\to\pm m,~|\lambda|>m.\label{eq_rhume2}
\end{equation}
Moreover, if $V$ satisfies Assumption \ref{assumption2} with $\nu>4$, then it satisfies
Assumption \ref{assumption1} with $\nu_\perp>2$ and $\nu_3>2$, and hence
\eqref{choclito} is valid. Finally, we have for $s>0$ and $|\lambda|>m$
\begin{equation}\label{eq_rhume1}
\mathop{\mathsf{Tr}}\nolimits\arctan\big[s^{-1}\Omega^{(1)}(\lambda)\big]
=\int_0^\infty\frac{\mathrm{d} t}{1+t^2}\,
n_+{\textstyle\Big(2st\big(\frac{\lambda-m}{\lambda+m}\big)^{1/2}};pW_+p\Big)
+\int_0^\infty\frac{\mathrm{d} t}{1+t^2}\,
n_+{\textstyle\Big(2st\big(\frac{\lambda+m}{\lambda-m}\big)^{1/2}};pW_-p\Big).
\end{equation}
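Let us indicate heuristically the origin of the constant $\frac1{2\cos(\pi/(\nu-1))}$ appearing
in point (a) of Corollary \ref{outside_gap} below (the computation is only a sketch, since it
requires the asymptotics of Lemma \ref{tec_Rai1} to hold uniformly under the integral sign).
Using the homogeneity $\Psi_{\nu-1}(ts;u_+,b_0)=t^{-2/(\nu-1)}\Psi_{\nu-1}(s;u_+,b_0)$ and the
classical formula $\int_0^\infty\frac{t^{-p}}{1+t^2}\,\mathrm{d} t=\frac\pi{2\cos(\pi p/2)}$ for
$p\in(0,1)$, here with $p=\frac2{\nu-1}$, one gets for the first integral in \eqref{eq_rhume1}
$$
\int_0^\infty\frac{\mathrm{d} t}{1+t^2}\,
n_+{\textstyle\Big(2st\big(\frac{\lambda-m}{\lambda+m}\big)^{1/2};pW_+p\Big)}
\approx\frac\pi{2\cos\big(\pi/(\nu-1)\big)}\,
\Psi_{\nu-1}{\textstyle\Big(2s\big(\frac{\lambda-m}{\lambda+m}\big)^{1/2};u_+,b_0\Big)},
$$
while the second integral in \eqref{eq_rhume1} stays bounded as $\lambda\searrow m$. After
multiplication by $\pi^{-1}$, as in Theorem \ref{thm_ext}, this produces the factor
$\frac1{2\cos(\pi/(\nu-1))}$.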
Combining Equations \eqref{choclito}-\eqref{eq_rhume1}, Theorem \ref{thm_ext} and
Lemmas \ref{tec_Rai1}-\ref{tec_Rai3}, we get the following.
\begin{Corollary}\label{outside_gap}
Let $V$ satisfy Assumption \ref{assumption2}.
\begin{enumerate}
\item[(a)] Assume that the hypotheses of Lemma \ref{tec_Rai1} hold with
$U_\pm=W_\pm$ and $\alpha=\nu-1$. Then we have
$$
\xi(\lambda;H_-,H_0)
=-\frac1{2\cos\big(\pi/(\nu-1)\big)}\,\textstyle
\Psi_{\nu-1}\Big(2\big(\frac{\lambda-m}{\lambda+m}\big)^{1/2};u_+,b_0\Big)
\big(1+o(1)\big)\quad\hbox{as}\quad\lambda\searrow m,
$$
and
$$
\xi(\lambda;H_+,H_0)
= \frac1{2\cos\big(\pi/(\nu-1)\big)}\,\textstyle
\Psi_{\nu-1}\Big(2\big(\frac{\lambda+m}{\lambda-m}\big)^{1/2};u_-,b_0\Big)
\big(1+o(1)\big)\quad\hbox{as}\quad\lambda\nearrow-m,
$$
with $\Psi_{\nu-1}$ given by Equation \eqref{psi_alpha}.
\item[(b)] Suppose that $V$ also satisfies Assumption \ref{first_decay} with $\nu_\perp>2$
and $\nu_3>2$, and assume that the hypotheses of Lemma \ref{tec_Rai2} hold with
$U_\pm=W_\pm$. Then we have
$$
\xi(\lambda;H_-,H_0)
=-\frac12\,\textstyle
\Phi_{\beta_+}\Big(2\big(\frac{\lambda-m}{\lambda+m}\big)^{1/2};\eta_+,b_0\Big)
\big(1+o(1)\big)\quad\hbox{as}\quad\lambda\searrow m,
$$
and
$$
\xi(\lambda;H_+,H_0)
=\frac12\,\textstyle
\Phi_{\beta_-}\Big(2\big(\frac{\lambda+m}{\lambda-m}\big)^{1/2};\eta_-,b_0\Big)
\big(1+o(1)\big)\quad\hbox{as}\quad\lambda\nearrow-m,
$$
with $\beta_\pm\in(0,\infty)$ and $\Phi_{\beta_\pm}$ given by Equation \eqref{phi_beta}.
\item[(c)] Suppose that $V$ also satisfies Assumption \ref{first_decay} with $\nu_\perp>2$
and $\nu_3>2$, and assume that the hypotheses of Lemma \ref{tec_Rai3} hold with
$U_\pm=W_\pm$. Then we have
$$
\xi(\lambda;H_-,H_0)=-\frac12\,\textstyle\Phi_\infty
\Big(2\big(\frac{\lambda-m}{\lambda+m}\big)^{1/2}\Big)\big(1+o(1)\big)
\quad\hbox{as}\quad\lambda\searrow m,
$$
and
$$
\xi(\lambda;H_+,H_0)=\frac12\,\textstyle\Phi_\infty
\Big(2\big(\frac{\lambda+m}{\lambda-m}\big)^{1/2}\Big)\big(1+o(1)\big)
\quad\hbox{as}\quad\lambda\nearrow-m,
$$
with $\Phi_\infty$ given by Equation \eqref{phi_infty}.
\end{enumerate}
\end{Corollary}
Putting together the results of Proposition \ref{in_gap} and Corollary \ref{outside_gap}, we
obtain the following.
\begin{Corollary}\label{Levinson}
Under the assumptions of Corollary \ref{outside_gap}.(a), we have
$$
\lim_{\varepsilon\searrow0}
\frac{\xi\big(m(1-\varepsilon)^{-1};H_-,H_0\big)}{\xi\big(m(1-\varepsilon);H_-,H_0\big)}
=\frac1{2\cos\big(\pi/(\nu-1)\big)}
=\lim_{\varepsilon\searrow0}
\frac{\xi\big(-m(1-\varepsilon)^{-1};H_+,H_0\big)}{\xi\big(-m(1-\varepsilon);H_+,H_0\big)}\,,
$$
and under the assumptions of Corollary \ref{outside_gap}.(b)-(c), we have
$$
\lim_{\varepsilon\searrow0}
\frac{\xi\big(m(1-\varepsilon)^{-1};H_-,H_0\big)}{\xi\big(m(1-\varepsilon);H_-,H_0\big)}
=\frac12
=\lim_{\varepsilon\searrow0}
\frac{\xi\big(-m(1-\varepsilon)^{-1};H_+,H_0\big)}{\xi\big(-m(1-\varepsilon);H_+,H_0\big)}\,.
$$
\end{Corollary}
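Note that
$$
\lim_{\nu\to\infty}\frac1{2\cos\big(\pi/(\nu-1)\big)}=\frac12,
$$
so the limit obtained under the assumptions of Corollary \ref{outside_gap}.(a) tends, in the
regime of fast power-like decay, to the value $\frac12$ of the exponential and compactly
supported cases (b)-(c).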
\section*{Acknowledgements}
The author expresses his deep gratitude to Professor G. D. Raikov for suggesting this
study to him, and for the idea of using the decomposition \eqref{tortugo} for the free
Hamiltonian. This work was partially supported by the Chilean Science Foundation Fondecyt under
Grant 1090008.
\section{Appendix}\label{cond_trace}
\setcounter{equation}{0}
We give in this appendix the proof of the inclusion \eqref{necessary4} for the class of
potentials $V$ given in Remark \ref{a_class}. We start with a technical lemma. We use the
notations $\alpha:=(\alpha_1,\alpha_2,\alpha_3)^{\sf T}$ and
$$
(\partial_\ell V):=\{(\partial_\ell V_{jk})\},\qquad
\nabla V:=(\partial_1V,\partial_2V,\partial_3V)^{\sf T},\qquad
(\partial_\ell\partial_mV):=\{(\partial_\ell\partial_mV_{jk})\}.
$$
\begin{Lemma}
Let $V$ be as in Remark \ref{a_class}. Then
\begin{enumerate}
\item[(a)] One has in $\mathscr B\big(\mathcal D(H_0),\mathcal D(H_0)^*\big)$ the equalities
\begin{align}
[H_0,H]
&=-i\alpha\cdot(\nabla V)+[\alpha_3,V]P_3+m[\beta,V]\label{eq_com1}\\
&=-i\alpha\cdot(\nabla V)+P_3[\alpha_3,V]+i[\alpha_3,(\partial_3V)]+m[\beta,V].\label{eq_com2}
\end{align}
\item[(b)] Let $z\in\mathbb R\setminus\{\sigma(H_0)\cup\sigma(H_\pm)\}$. Then there exist operators
$B_\pm\in\mathscr B(\mathcal H)$ such that
\begin{equation}\label{square}
R_\pm^2(z)=B_\pm H_0^{-2}\qquad\hbox{and}\qquad R_\pm^2(z)=H_0^{-2}B_\pm^*.
\end{equation}
\end{enumerate}
\end{Lemma}
\begin{proof}
(a) We know from Lemma 2.2(b) of \cite{RT04} that $\mathcal D(H_0)\subset\mathcal D(P_3)$. So each member
of Equations \eqref{eq_com1}-\eqref{eq_com2} belongs to $\mathscr B\big(\mathcal D(H_0),\mathcal D(H_0)^*\big)$.
Let $\varphi\in\mathcal D(H_0)$, take a sequence $\{\varphi_n\}\subset C^\infty_0(\mathbb R^3;\mathbb C^4)$
such that $\displaystyle\lim_n\|\varphi_n-\varphi\|_{\mathcal D(H_0)}=0$, and denote by
$\langle\;\!\cdot\;\!,\;\!\cdot\;\!\rangle_{1,-1}$ the anti-duality map between $\mathcal D(H_0)$
and $\mathcal D(H_0)^*$. Then
\begin{align}
\langle\varphi,[H_0,H]\varphi\rangle_{1,-1}
&\equiv\langle H_0\varphi,V\varphi\rangle-\langle V\varphi,H_0\varphi\rangle\nonumber\\
&=\lim_n
\big\langle\varphi_n,[\alpha_1(P_1-a_1)+\alpha_2(P_2-a_2)+\alpha_3P_3+\beta m,V]\varphi_n\big\rangle
\nonumber\\
&=\lim_n\big\langle\varphi_n,
\big\{-i\alpha\cdot(\nabla V)+[\alpha_3,V]P_3+m[\beta,V]\big\}\varphi_n\big\rangle
\label{two_pos}.
\end{align}
Since $\mathcal D(H_0)\subset\mathcal D(P_3)$, we also have
$\displaystyle\lim_n\|\varphi_n-\varphi\|_{\mathcal D(P_3)}=0$, and thus
$$
\langle\varphi,[H_0,H]\varphi\rangle_{1,-1}
=\big\langle\varphi,
\big\{-i\alpha\cdot(\nabla V)+[\alpha_3,V]P_3+m[\beta,V]\big\}\varphi\big\rangle.
$$
This proves \eqref{eq_com1}. Using \eqref{two_pos}, one also gets the equality \eqref{eq_com2}.
(b) In what follows, we omit the indices ``$\pm$" to simplify the notations and we write
$B_1,B_2,\ldots$ for elements of $\mathscr B(\mathcal H)$. Since $\mathcal D(H)=\mathcal D(H_0)$, we have
$$
R^2(z)
=B_1H_0^{-1}R(z)
=B_1R(z)H_0^{-1}+B_1\big[H_0^{-1},R(z)\big]
=B_2H_0^{-2}+B_1H_0^{-1}R(z)\big[H_0,H\big]R(z)H_0^{-1}.
$$
Now, one has
$$
R(z)\big[H_0,H\big]R(z)H_0^{-1}
=R(z)\big\{-i\alpha\cdot(\nabla V)+P_3[\alpha_3,V]+i[\alpha_3,(\partial_3V)]+m[\beta,V]\big\}
R(z)H_0^{-1}
=B_3H_0^{-2}
$$
due to Equation \eqref{eq_com2}, the equality $\mathcal D(H)=\mathcal D(H_0)$, and the inclusion
$\mathcal D(H_0)\subset\mathcal D(P_3)$. This, together with the preceding equation, implies the first
identity in \eqref{square}. The second identity follows from the first one by adjunction.
\end{proof}
\begin{Proposition}\label{diff_trace}
Take $z\in\mathbb R\setminus\{\sigma(H_0)\cup\sigma(H_\pm)\}$ and let $V$ be as in Remark
\ref{a_class}. Then we have
$$
R_\pm^3(z)-R_0^3(z)\in S_1(\mathcal H).
$$
\end{Proposition}
\begin{proof}
In what follows, we omit the indices ``$\pm$" to simplify the notations and we write
$B_1,B_2,\ldots$ for elements of $\mathscr B(\mathcal H)$. Differentiating twice the resolvent identity
$$
R(z)-R_0(z)=-R(z)VR_0(z)
$$
we find that
$$
R^3(z)-R_0^3(z)=-R(z)VR_0^3(z)-R^2(z)VR_0^2(z)-R^3(z)VR_0(z).
$$
So it is sufficient to show that each term on the r.h.s. belongs to $S_1(\mathcal H)$. This is
done in points (i), (ii) and (iii) below.

(i) For the term $R(z)VR_0^3(z)$, one has
\begin{equation}\label{premiere_eq}
R(z)VR_0^3(z)=R(z)R_0(z)VR_0^2(z)+R(z)[V,R_0(z)]R_0^2(z).
\end{equation}
Since $\mathcal D(H)=\mathcal D(H_0)$, one has
$$
R(z)R_0(z)VR_0^2(z)
=R(z)(H_0-z)R_0^2(z)VR_0^2(z)
=\big(B_1H_0^{-2}V^{1/2}\big)\big(V^{1/2}H_0^{-2}B_2\big).
$$
So, by \eqref{necessary2}, $R(z)R_0(z)VR_0^2(z)$ is the product of two Hilbert-Schmidt
operators, and thus belongs to $S_1(\mathcal H)$.
For the second term of \eqref{premiere_eq}, we have by \eqref{eq_com1}
$$
R(z)[V,R_0(z)]R_0^2(z)
=R(z)R_0(z)[H_0,H]R_0^3(z)
=B_1H_0^{-2}\big\{-i\alpha\cdot(\nabla V)+[\alpha_3,V]P_3+m[\beta,V]\big\}H_0^{-3}B_3.
$$
Due to the hypotheses on $V$ and $(\partial_jV)$, one can use \eqref{necessary2} to
write the first and third term as a product of two Hilbert-Schmidt operators. So it
only remains to show that $H_0^{-2}[\alpha_3,V]P_3H_0^{-3}$ belongs to $S_1(\mathcal H)$. For
this, we use the inclusion $\mathcal D(H_0)\subset\mathcal D(P_3)$ and the commutation of $P_3$
and $H_0^{-1}$ on $\mathcal D(P_3)$ \cite[Lemma~2.2(b)]{RT04} to get
$$
H_0^{-2}[\alpha_3,V]P_3H_0^{-3}
=H_0^{-2}[\alpha_3,V]H_0^{-2}P_3H_0^{-1}
=H_0^{-2}[\alpha_3,V]H_0^{-2}B_4.
$$
This, together with \eqref{necessary2}, implies that $H_0^{-2}[\alpha_3,V]P_3H_0^{-3}$
belongs to $S_1(\mathcal H)$.

(ii) One can write $R^2(z)VR_0^2(z)$ as the product of two Hilbert-Schmidt operators by
using \eqref{square} and \eqref{necessary2}:
$$
R^2(z)VR_0^2(z)
=B_5H_0^{-2}VH_0^{-2}B_2
=\big(B_5H_0^{-2}V^{1/2}\big)\big(V^{1/2}H_0^{-2}B_2\big).
$$
Thus $R^2(z)VR_0^2(z)$ belongs to $S_1(\mathcal H)$.

(iii) For the term $R^3(z)VR_0(z)$ we have
$$
R^3(z)VR_0(z)
=R^2(z)VR(z)R_0(z)+R^2(z)[R(z),V]R_0(z).
$$
One shows that $R^2(z)VR(z)R_0(z)\in S_1(\mathcal H)$ as in point (ii). For the second term, we have
by \eqref{eq_com2} and \eqref{square}
\begin{align*}
R^2(z)[R(z),V]R_0(z)
&=R^3(z)[H,H_0]R(z)R_0(z)\\
&=B_5H_0^{-2}R(z)
\big\{i\alpha\cdot(\nabla V)-P_3[\alpha_3,V]-i[\alpha_3,(\partial_3V)]-m[\beta,V]\big\}
H_0^{-2}B_6.
\end{align*}
Due to the hypotheses on $V$ and $(\partial_jV)$, one can use \eqref{square} and
\eqref{necessary2} to write the first, third, and fourth term as a product of two
Hilbert-Schmidt operators. So it only remains to show that
$H_0^{-2}R(z)P_3[\alpha_3,V]H_0^{-2}$ belongs to $S_1(\mathcal H)$. Using \cite[Lemma~2.2(b)]{RT04}
and \eqref{square}, one gets
\begin{align*}
H_0^{-2}R(z)P_3[\alpha_3,V]H_0^{-2}
&=H_0^{-2}P_3R(z)[\alpha_3,V]H_0^{-2}+H_0^{-2}[R(z),P_3][\alpha_3,V]H_0^{-2}\\
&=P_3H_0^{-2}R(z)[\alpha_3,V]H_0^{-2}-iH_0^{-2}R(z)(\partial_3V)R(z)[\alpha_3,V]H_0^{-2}\\
&=B_7H_0^{-2}[\alpha_3,V]H_0^{-2}+B_8R(z)(\partial_3V)R(z)[\alpha_3,V]H_0^{-2}.
\end{align*}
The first term on the r.h.s. belongs to $S_1(\mathcal H)$, and for the second term we have by
\eqref{eq_com2} and \eqref{square}
\begin{align*}
&R(z)(\partial_3V)R(z)[\alpha_3,V]H_0^{-2}\\
&=(\partial_3V)R^2(z)[\alpha_3,V]H_0^{-2}+R(z)[(\partial_3V),H]R^2(z)[\alpha_3,V]H_0^{-2}\\
&=B_9H_0^{-2}[\alpha_3,V]H_0^{-2}+R(z)\big\{i\alpha\cdot[\nabla(\partial_3V)]
-P_3[\alpha_3,(\partial_3V)]\\
&\hspace{150pt}-i[\alpha_3,(\partial_3^2V)]-m[\beta,(\partial_3V)]+[(\partial_3V),V]\big\}
B_5H_0^{-2}[\alpha_3,V]H_0^{-2}.
\end{align*}
Due to the hypotheses on $V$, $(\partial_jV)$, and $(\partial_{j3}V)$, one can use
\eqref{necessary2} to show that the first, the second, the fourth, the fifth, and the
sixth term are trace class. For the third term we have to use \eqref{necessary2} and the
fact that $R(z)P_3$ extends to a bounded operator.
\end{proof}
\begin{Remark}
When the potential $V$ is scalar, the equations \eqref{eq_com1}-\eqref{eq_com2} reduce to
the single equality
$$
[H_0,H]=-i\alpha\cdot(\nabla V)
$$
in $\mathscr B\big(\mathcal D(H_0),\mathcal D(H_0)^*\big)$. So the calculations in points (i) and (iii) of
the proof of Proposition \ref{diff_trace} simplify accordingly, and we obtain the inclusion
$$
R_\pm^3(z)-R_0^3(z)\in S_1(\mathcal H)
$$
without assuming anything on the derivatives of $V$ of order $2$.
\end{Remark}
\section*{Results}
{\bf Quantum estimation theory}
Fisher information (FI) provides an asymptotic measure of the amount of information on the parameters of a system that is acquired by performing a measurement on it. For parameters $\boldsymbol{\lambda}$, in terms of the probabilities associated with the measurement results $p_n=p_n(\boldsymbol{\lambda})$, the FI matrix elements read\cite{CRB} $F_{i,j}=-\sum_{n} p_n\frac{\partial^2}{\partial \lambda_i \partial \lambda_j}\log(p_n)=\sum_{n} \frac{1}{p_n}\frac{\partial p_n}{\partial \lambda_i}\frac{\partial p_n}{\partial \lambda_j}$. The Cram\'er-Rao bound states that, for all unbiased estimators $\hat{\boldsymbol{\lambda}}$, the expected covariance matrix with elements defined as $\gamma_{i,j}=\avg{\hat{\lambda}_i \hat{\lambda}_j}-\avg{\hat{\lambda}_i}\avg{\hat{\lambda}_j}$ satisfies
\begin{equation}
\label{CRbound}
\boldsymbol{\gamma}\geq(M \text{\bf F})^{-1}
\end{equation}
where $M$ is the number of experimental runs. The Cram\'er-Rao bound is saturated asymptotically by the maximum likelihood estimator,
upon elimination of systematic errors. FI provides a useful indicator of the optimality of a given experiment and constitutes a useful tool for designing measurements with the goal of minimising statistical errors.
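To make the definitions above concrete, the FI matrix can be evaluated numerically from the outcome probabilities. The following minimal Python sketch (the finite-difference step and the biased-coin model are illustrative assumptions, not part of our analysis) computes $F$ and recovers the Cram\'er-Rao bound for the simplest one-parameter model:

```python
import numpy as np

def fisher_information(p_of, lam, eps=1e-6):
    """F_ij = sum_n (1/p_n) (dp_n/dlam_i) (dp_n/dlam_j), with the
    derivatives evaluated by central finite differences (step eps)."""
    lam = np.asarray(lam, dtype=float)
    grads = []
    for i in range(lam.size):
        d = np.zeros(lam.size)
        d[i] = eps
        grads.append((p_of(lam + d) - p_of(lam - d)) / (2*eps))
    p = p_of(lam)
    return np.array([[np.sum(gi*gj/p) for gj in grads] for gi in grads])

# Illustrative one-parameter model: a biased coin with p(heads) = lam.
# Here F = 1/(lam(1-lam)), so the Cramer-Rao bound Var >= 1/(M F)
# reproduces the variance lam(1-lam)/M of the sample-mean estimator.
coin = lambda lam: np.array([lam[0], 1.0 - lam[0]])
F = fisher_information(coin, [0.3])
```

The same routine applies unchanged to the multi-parameter probabilities considered below.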
For a set of probabilities originating from measurements on a quantum system, the ultimate limit on the covariance matrix is set by the quantum Cram\'er-Rao bound in terms of the quantum Fisher information (QFI) matrix \cite{HelstromBook,MatteoIJQI}. Introducing the symmetric logarithmic derivative (SLD) operator $L_j$ for parameter $\lambda_j$, obeying $2 \partial_{\lambda_j} \rho=L_j\rho+\rho L_j$, the QFI matrix is defined as $H_{ij}=\hbox{Re}[\hbox{Tr}[\rho L_i L_j]]$ and it bounds the FI matrix corresponding to any particular measurement: $\text{\bf H}\geq \text{\bf F}$.
For a single parameter, the ultimate bound can always be achieved by choosing the measurement given by the eigenvectors of the SLD operator. In the case of a multi-parameter problem, if the SLDs corresponding to different parameters do not commute, then the FI values for the two parameters are maximized by incompatible measurements.
{\bf Interferometry with phase diffusion}
We consider an interferometer with phase difference $\phi$ between its two arms. The annihilation operators corresponding to each arm are labeled $\hat{a}$ and $\hat{b}$.
Different physical processes lead to phase diffusion and the corresponding channel can be modelled as a random phase shift distributed according to a normal distribution of width $\Delta$, called the noise amplitude. Acting on a mode $a$ with initial state $\rho_{in}$, the phase diffusion channel yields
\begin{equation}
\rho = \mathcal{N}_{\Delta}(\rho_{in})=\frac{1}{2\sqrt{\pi}\Delta}\int\!\!\text{d}\xi\,\,e^{-\frac{\xi^2}{4\Delta^2}} U_{\xi}\rho_{in} U_{\xi}^{\dagger}
\end{equation}
where $U_{\xi}=\exp(i \xi \hat{a}^{\dagger}\hat{a})$ is the phase shift operator. In the Fock basis, the result is the exponential erasing of the off-diagonal elements of the density matrix:
\begin{equation}
\mathcal{N}_{\Delta}(\ket{n}\bra{m})=e^{-\Delta^2(n-m)^2}\ket{n}\bra{m}.
\end{equation}
This mapping can be attained, alternatively, by solving the master equation corresponding to phase diffusion \cite{genoni1}.
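The Fock-basis action of the channel is straightforward to implement numerically; the sketch below (the input state is an illustrative choice) applies the damping $e^{-\Delta^2(n-m)^2}$ elementwise and shows that populations are untouched while coherences decay:

```python
import numpy as np

def phase_diffuse(rho, delta):
    """Phase-diffusion channel in the Fock basis: each matrix element
    rho[n, m] is damped by exp(-delta**2 (n - m)**2)."""
    n = np.arange(rho.shape[0])
    damp = np.exp(-delta**2 * (n[:, None] - n[None, :])**2)
    return rho * damp

# A maximally coherent superposition of |0> and |1>.
rho_in = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
rho_out = phase_diffuse(rho_in, 0.25)
# Diagonal entries (populations) are unchanged; the off-diagonal
# coherence shrinks by exp(-0.25**2), and the trace is preserved.
```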
Quantum strategies aiming at an enhancement of the precision in phase estimation make use of
$\text{N}00\text{N}$ states, defined as $\frac{1}{\sqrt{2}}\left(|N0\rangle + |0N\rangle \right)$. Even under phase diffusion, the evolution of these states lies in the two-dimensional space spanned by $\ket{N,0}{=}\frac{1}{\sqrt{N!}}(\hat a^\dagger)^N\ket{00}$ and $\ket{0,N}{=}\frac{1}{\sqrt{N!}}(\hat b^\dagger)^N\ket{00}$.
A two-dimensional picture also describes classical phase estimation strategies relying on coherent states. Indeed, a coherent state with amplitude $\alpha$ yields the same precision as a collection of $|\alpha|^2$ independent single photons ({\em i.e.} $\text{N}00\text{N}$ states with $N{=}1$).
In these relevant cases, our two-mode probe state can be effectively modelled as a single qubit
\begin{equation}
\rho_0=\left(\begin{array}{cc}
\cos^2(\frac{\theta}{2}) & \cos(\frac{\theta}{2})\sin(\frac{\theta}{2})\\
\cos(\frac{\theta}{2})\sin(\frac{\theta}{2}) & \sin^2(\frac{\theta}{2})
\end{array}\right),
\end{equation}
which, acted upon by a phase shift $\phi$ and a phase diffusion channel parametrized by $\Delta$, yields
\begin{equation}
\rho=\left(\begin{array}{cc}
\cos^2(\frac{\theta}{2}) & \cos(\frac{\theta}{2})\sin(\frac{\theta}{2})e^{-\text{i} \phi-\Delta^2}\\
\cos(\frac{\theta}{2})\sin(\frac{\theta}{2})e^{\text{i} \phi-\Delta^2} & \sin^2(\frac{\theta}{2})
\end{array}\right).
\label{2dState}
\end{equation}
The QFI matrix corresponding to the parameters $\phi$ and $\Delta$, which depends on the probe parameter $\theta$, can be calculated using the SLDs:
\begin{equation}
\text{\bf H}_\theta(\phi,\Delta)=\text{\bf H}_\theta(\Delta)=\sin^2\theta\left(\begin{array}{cc}
e^{-2 \Delta^2} & 0\\
0 & \frac{4\Delta^2}{e^{2\Delta^2}-1}
\end{array}\right).
\end{equation}
The maximum QFI corresponds to equatorial states with $\theta=\pi/2$. From now on we shall refer to the diagonal elements of the matrix $\text{\bf H}_{\pi/2}(\Delta)$ as $H_{1,1}$ and $H_{2,2}$.
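The entries of $\text{\bf H}_\theta(\Delta)$ can be checked numerically by solving the SLD equation in the eigenbasis of $\rho$. The following Python sketch (finite-difference derivatives and the parameter values are illustrative assumptions) reproduces the closed forms at $\theta=\pi/2$:

```python
import numpy as np

def rho(phi, delta, theta):
    # Effective qubit probe after phase shift phi and diffusion delta.
    c, s = np.cos(theta/2), np.sin(theta/2)
    off = c*s*np.exp(-1j*phi - delta**2)
    return np.array([[c*c, off], [np.conj(off), s*s]])

def sld(r, dr):
    # Solve 2 drho = L rho + rho L in the eigenbasis of rho:
    # in that basis L_kl = 2 (drho)_kl / (p_k + p_l).
    p, U = np.linalg.eigh(r)
    L = 2*(U.conj().T @ dr @ U) / (p[:, None] + p[None, :])
    return U @ L @ U.conj().T

def qfi_matrix(phi, delta, theta, eps=1e-6):
    r = rho(phi, delta, theta)
    d1 = (rho(phi+eps, delta, theta) - rho(phi-eps, delta, theta))/(2*eps)
    d2 = (rho(phi, delta+eps, theta) - rho(phi, delta-eps, theta))/(2*eps)
    Ls = [sld(r, d1), sld(r, d2)]
    # H_ij = Re Tr[rho L_i L_j]
    return np.array([[np.real(np.trace(r @ La @ Lb)) for Lb in Ls] for La in Ls])

H = qfi_matrix(0.7, 0.25, np.pi/2)
# Closed forms quoted in the text, at theta = pi/2:
H11 = np.exp(-2*0.25**2)
H22 = 4*0.25**2/(np.exp(2*0.25**2) - 1)
```

The off-diagonal QFI element vanishes, as the diagonal form of $\text{\bf H}_\theta(\Delta)$ indicates.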
For $\text{N}00\text{N}$ states and for coherent states with amplitude $\alpha$, the QFI matrices read
\begin{align}
\text{\bf H}^{(\text{N}00\text{N})}(\Delta) &=N^2 \text{\bf H}_{\pi/2}(N\Delta)\:, \\
\text{\bf H}^{(coh)}(\Delta) &=|\alpha|^2 \text{\bf H}_{\pi/2}(\Delta)\:,
\end{align}
respectively.
The SLDs corresponding to the two parameters do not commute. However, for equatorial states (corresponding to balanced interferometers), the expectation value of their commutator vanishes, {\em i.e.} ${\text{Tr}[\rho(L_1 L_2-L_2 L_1)]=0}$ for $\theta=\pi/2$. In principle, when this condition is satisfied, a measurement that attains the QFI for joint estimation of both parameters can be constructed\cite{guta}.
This requires a collective measurement on multiple copies of evolved probe states, which is a challenging task to implement. Therefore, we first restrict our search for an optimal strategy to separable positive operator-valued measures (POVMs), i.e. measurements that act on probe states individually. We discuss extensions to joint measurements subsequently.
{\bf Trade-off in the estimation precision for $\phi$ and $\Delta$}
In order to assess the performances of these measurements we consider the quantities $F_{1,1}/H_{1,1}$ and $F_{2,2}/H_{2,2}$, {\em i.e.} the ratios between the FI and QFI values for $\phi$ and $\Delta$.
A relation between these ratios effectively expresses the interplay between the estimator variances corresponding to the two parameters.
As shown in the Supplementary Methods, a trade-off relation can be derived, which is obeyed for all probe states and separable measurements:
\begin{equation}
\label{rel}
\frac{F_{1,1}}{H_{1,1}}+\frac{F_{2,2}}{H_{2,2}}\leq 1\:.
\end{equation}
The most naive bound for the quantity in Equation \ref{rel} is equal to 2, and it would in principle be achievable by means of a measurement which is optimal for both parameters. With this inequality, we not only prove that such a measurement does not exist, but we also quantify the maximum precision achievable in a joint estimation. Specifically, we prove that any measurement that is independently optimal for the estimation of one of the parameters is completely insensitive to the other. This bound is saturated by all POVMs with elements in the equatorial plane of the Bloch sphere, which have the form
\begin{equation}
\label{chi}
\Pi_j=\frac{n_j}{2}\left(\begin{array}{cc}
1 & e^{-\text{i} \chi_j}\\
e^{\text{i} \chi_j} & 1
\end{array}\right),
\end{equation}
with $0{<}n_j{<}1$, $0{\leq}\chi_j{\leq}2\pi$, where the probability of outcome $j$ is $\hbox{Tr}[\rho \Pi_j]$ and $\sum_j \Pi_j=\mathbb{I}$.
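The saturation of the trade-off by equatorial POVMs can be verified numerically. In the sketch below, the four outcome weights and angles are an illustrative choice satisfying the completeness conditions ($\sum_j n_j=2$ and $\sum_j n_j e^{\text{i}\chi_j}=0$); the ratio sum $F_{1,1}/H_{1,1}+F_{2,2}/H_{2,2}$ then evaluates to 1 regardless of the values of $\phi$ and $\Delta$:

```python
import numpy as np

# Four equatorial POVM elements Pi_j = n_j |chi_j><chi_j| with
# sum n_j = 2 and sum n_j exp(i chi_j) = 0, so that sum_j Pi_j = I.
n = np.array([0.7, 0.3, 0.7, 0.3])
chi = np.array([0.0, np.pi/2, np.pi, 3*np.pi/2])

def probs(phi, delta):
    # Outcome probabilities Tr[rho Pi_j] for the equatorial probe (theta = pi/2).
    return 0.5*n*(1 + np.exp(-delta**2)*np.cos(chi - phi))

def fisher(phi, delta, eps=1e-6):
    p = probs(phi, delta)
    g = [(probs(phi+eps, delta) - probs(phi-eps, delta))/(2*eps),
         (probs(phi, delta+eps) - probs(phi, delta-eps))/(2*eps)]
    return np.array([[np.sum(gi*gj/p) for gj in g] for gi in g])

phi, delta = 0.6, 0.25
F = fisher(phi, delta)
H11 = np.exp(-2*delta**2)                    # QFI for phi at theta = pi/2
H22 = 4*delta**2/(np.exp(2*delta**2) - 1)    # QFI for Delta at theta = pi/2
ratio_sum = F[0, 0]/H11 + F[1, 1]/H22        # saturates the bound: equals 1
```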
A further bound on statistical variances can be derived from this relation and Equation \ref{CRbound}. The expected variance of the phase shift estimator obeys $\gamma_{1,1}=\text{Var}(\phi) \geq [M (F_{1,1}-F_{1,2}^2/F_{2,2})]^{-1}$ and an analogous relation can be written for the phase diffusion amplitude. Using the fact that the off-diagonal elements of the FI matrix are real numbers, we get $\text{Var}(\phi)\geq(M F_{1,1})^{-1}$ and $\gamma_{2,2}=\text{Var}(\Delta)\geq (M F_{2,2})^{-1}$. Notice that the off-diagonal elements of the FI matrix correspond to the coupling of estimators for the two parameters, which results in increased statistical errors. Thus, the statistical variances obey
\begin{equation}
\label{rel1}
\frac{H_{1,1}^{-1}}{\text{Var}(\phi)}+\frac{H_{2,2}^{-1}}{\text{Var}(\Delta)}\leq M.
\end{equation}
This inequality is one of our main results. It is saturated when the inequality given in Equation \ref{rel} is saturated and the off-diagonal elements of the FI matrix are zero.
{\bf An optimal measurement}
In the Supplementary Methods, we show that the bound in Equation \ref{rel1} can be saturated for POVMs in the equatorial plane that are symmetric with respect to the measured state -- meaning that for each operator of the form given in Equation \ref{chi}, parametrized by $n_j=n$ and $\chi_j=\phi+\delta$, the POVM set contains another element, parametrized by $n_{j'}=n$ and $\chi_{j'}=\phi-\delta$ for some $\delta$. Note that in general the POVM saturating the bound depends on the specific value of the phase $\phi$.
We also prove in the Supplementary Methods that a double homodyne setup, combining modes $a$ and $b$ on a beam splitter and measuring the $X$ and $P$ quadratures, respectively, at the beam splitter's two outputs, saturates the bound in Equation \ref{rel1} independently of the value of $\phi$. Figure \ref{homodynes} shows the dependence of the variances on $\Delta$ for this setup, which is depicted in Supplementary Fig. 1.
\begin{figure}[H]
\centering
\includegraphics[width=0.475\textwidth]{Figure1.pdf}
\caption{\textbf{Joint parameter estimation by double homodyne.} (a) The ratios between optimum single parameter statistical variances and statistical variances that can be achieved with the double homodyne setup, as a function of phase diffusion amplitude, for a split single photon probe state (first order \text{N}00\text{N} state). Note that, for low phase diffusion amplitude, the homodyne setup measures phase optimally. (b) FI elements, for a split single photon. Blue corresponds to phase estimation precision and red to phase diffusion amplitude estimation precision, while black corresponds to the sum of the two. For \text{N}00\text{N} states with N photons, plots a and b scale by a factor of $N^2$ vertically and by a factor of $1/N$ horizontally. A schematic of this measurement setup, which implements a continuous measurement, is included as Supplementary Fig. 1. }
\label{homodynes}
\end{figure}
{\bf Experiment}
We apply our theory to quantify the effectiveness of an actual experimental setup for joint parameter estimation in polarimetry by investigating how closely the implementation approaches the optimal bound in Equation \ref{rel1}. We are not aiming at demonstrating a quantum advantage and realise our implementation with coherent states. The joint estimation of phase and phase diffusion requires a measurement with at least three outputs. This is because the FI matrix corresponding to any single-qubit (two-output) projective measurement is singular. Thus, it cannot be inverted, yielding unbounded estimator variances. We implemented a four-outcome measurement based on a displaced Sagnac polarisation interferometer \cite{noonAndSagnac, Mosley_2006}, depicted in Figure \ref{setup}. Our measurement realises a mixture of the optimal projective measurement for estimating the phase and that for the phase diffusion amplitude. The setup can be arranged to tune the relative weights of these measurements by rotating a waveplate.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Figure2.png}
\caption{\textbf{Experimental setup.} In our setup, the two modes $a$ and $b$ correspond to the horizontal ($H$) and vertical ($V$) polarisation of a single spatial mode.
In this basis, the diagonal polarisation states are defined as $\ket{D}=\frac{1}{\sqrt{2}}(\ket{H}+\ket{V})$ and $\ket{A}=\frac{1}{\sqrt{2}}(\ket{H}-\ket{V})$ and the circular polarisation states as $\ket{R}=\frac{1}{\sqrt{2}}(\ket{H}+\text{i}\ket{V})$ and $\ket{L}=\frac{1}{\sqrt{2}}(\ket{H}-\text{i}\ket{V})$. The four outputs of the polarisation interferometer correspond approximately to the following POVM operators acting on the input polarisation state: $\{\Pi_{1a}=k\ket{D}\bra{D}, \Pi_{1b}=k\ket{A}\bra{A}, \Pi_{2a}=(1-k)\ket{R}\bra{R}, \Pi_{2b}=(1-k)\ket{L}\bra{L}\}$, where $k$ is a tunable parameter. This measurement should saturate the inequality given in Equation \ref{rel1} for certain input states. PBS -- polarizing beam splitter; HWP -- half-wave plate; QWP -- quarter-wave plate.
\label{setup}
\end{figure}
Complete information about the POVM associated with each of these measurements is obtained via detector tomography \cite{detectortomo}. This technique uses a quorum of input states and records the probabilities of the outcomes. The Born rule then allows one to reconstruct the measurement operators. These reconstructed POVMs are then used to compute the relevant FI matrix, as detailed in the Methods section. In Figure \ref{exp} we report our results, where the variances Var$(\phi)$ and Var$(\Delta)$ have been estimated from the classical Cram\'er-Rao bound. The plot is obtained by varying the measurement from $\sigma_x$, the optimal measurement for phase estimation, to $\sigma_y$, the optimal measurement for estimating the diffusion amplitude.
\begin{figure}[H]
\centering
\includegraphics[width=0.475\textwidth]{Figure3.pdf}
\caption{
\textbf{Experimental results.} Parametric plot of the estimates for the ratios between optimum single parameter statistical variances and optimum statistical variance, achievable with our experimental setup (details in the Methods section). Blue points are calculated for a phase shift of approximately $\frac{\pi}{2}$, optimized to obtain null off-diagonal elements of the FI matrix. Red points are calculated for a phase shift differing by $1^{\circ}$ with respect to the one corresponding to the blue points. Error bars represent twice the standard deviation, obtained by Monte-Carlo simulation (details in the Supplementary Methods). The black line gives the ultimate limit. The blue and red dashed lines give the theoretical prediction for the blue and red points, respectively, assuming visibilities of 96.5\% for outputs 1a and 1b and 99.4\% for outputs 2a and 2b. Inset: Bloch sphere representation of the estimated POVM operators, for the indicated setting (all the estimated POVMs are represented in Supplementary Fig. 2). The vectors represent the measurement operators corresponding to the 4 outputs, each normalized. The numbers written on the vectors are the trace norms of the corresponding operators, weighted by the total trace of the 4 operators. We choose a phase diffusion amplitude $\Delta=0.25 \,\text{rad}$ ($\approx 14^\circ$). Details on how this figure is obtained are presented in the Supplementary Methods.
\label{exp}
\end{figure}
The experimental results are close to the optimum precision given by Equation \ref{rel1}, with the main imperfection of the implementation stemming from non-unit interference visibility and imperfect alignment of the setup. The precision for the estimates of $\phi$ depends strongly on the measurement visibility corresponding to outputs $1a$ and $1b$ (according to Figure \ref{exp}). For $\Delta$, the precision strongly depends on the visibility corresponding to outputs $2a$ and $2b$. The influence of non-unit visibility is more pronounced for the latter, as we detail in the Supplementary Methods.
{\bf Extensions}
We have so far restricted both theoretical and experimental studies to measurements on single quantum probes. Collective measurements on multiple copies of probe states may get closer to the multiparameter quantum Cram\'er-Rao bound in some cases~\cite{guta}. We study this for the simplest nontrivial case, that of an entangled projective measurement on a pair of qubit probe states that have undergone the same phase shift and phase diffusion. In the Supplementary Methods, we analyze the performance of a Bell measurement (in the basis in which the states of Equation \ref{2dState} are written).
\begin{figure}[H]
\centering
\includegraphics[width=0.475\textwidth]{Figure4.pdf}
\caption{\textbf{Collective measurements.}
Results of a simulated annealing search over all projectors acting on the space of two-qubit probe states. Points are shown for all coordinates that signify a violation of the bound given by Equation \ref{rel1}. Color illustrates the smallest value of total entropy of entanglement of the projectors found by the search, weighted by the maximum possible value (which corresponds to a Bell measurement). The maximum sum of coordinates in this graph is $1.48$ and the corresponding entropy of entanglement is $0.425$. The search is performed for a phase diffusion amplitude $\Delta=0.25 \,\text{rad}$ ($\approx 14^\circ$). The details of the search are presented in the Methods section and Supplementary Methods.
\label{collectmeas}
\end{figure}
In such a setup, the Bell measurement can perform joint estimation with precision surpassing the bound established in Equation \ref{rel1} for separable measurements, as long as the amplitude of phase diffusion is less than $\Delta_0$, corresponding to $e^{-{\Delta_0}^2}=\sqrt{2/(1+\sqrt{5})}$. Indeed, for $\Delta=0$, a Bell measurement yields
\begin{equation}
\label{belltradeoff}
\frac{H_{1,1}^{-1}}{\text{Var}(\phi)}+\frac{H_{2,2}^{-1}}{\text{Var}(\Delta)}=\frac{3}{2}M,
\end{equation}
a value larger than the right-hand side of the inequality given by Equation \ref{rel1}, implying that greater precision can be obtained by investing in collective measurements.
For a larger value of phase diffusion, we perform a numerical search over all two-qubit projective measurements to determine the achievable pairs of $\{\frac{H_{1,1}^{-1}}{M\,\text{Var}(\phi)},\frac{H_{2,2}^{-1}}{M\,\text{Var}(\Delta)}\}$, also optimizing for the smallest total entropy of entanglement of the corresponding projectors. Figure \ref{collectmeas} shows the results of this numerical search, revealing how a higher violation of the bound derived for separable measurements can be obtained with a more entangled measurement.
Our trade-off relations have been derived for those states whose evolution is effectively described in a 2D Hilbert space. In order to explore the form that this trade-off takes for probe states in a larger space, we present a numerical study of the performance of Holland-Burnett (HB) states \cite{HB}. An $\text{HB}(N)$ state results from the interference of two $N$-photon states on a beam splitter. $\text{HB}(N)$ states provide the same precision scaling as $\text{N}00\text{N}$ states, but are more resilient to losses than the latter.
\begin{figure}[H]
\centering
\includegraphics[width=0.475\textwidth]{Figure5.pdf}
\caption{\textbf{Joint estimation using HB states.}
Limits found by a simulated annealing search over all projectors acting on the space of the $\text{HB}(3)$ state. The black points correspond to $\Delta=0.01 \,\text{rad}$ ($\approx 0.6^\circ$), red points to $\Delta=0.05 \,\text{rad}$ and blue points to $\Delta=0.1 \,\text{rad}$. The details of the search are presented in the Methods section, in Supplementary Fig. 3 and in the Supplementary Methods.}
\label{HBstates}
\end{figure}
We performed a numerical search over all projective measurements on the 4D space corresponding to the $\text{HB}(3)$ state, optimizing the set of values $\{\frac{H_{1,1}^{-1}}{M\,\text{Var}(\phi)},\frac{H_{2,2}^{-1}}{M\,\text{Var}(\Delta)}\}$. The trade-off bounds observed in the results of the search depend on the amplitude of the phase diffusion. While for $\Delta=0$ the linear trade-off expressed by Equation \ref{rel1} is observed, for larger phase diffusion we obtain limits higher than this (results are presented in Figure \ref{HBstates}). In Supplementary Fig. 4, we show how a photon-number-resolving measurement \cite{loss2} can beat the limit in Equation \ref{rel1} when applied to HB states, without however reaching the bounds depicted in Figure \ref{HBstates}.
\begin{figure}[H]
\centering
\includegraphics[width=0.475\textwidth]{Figure6.pdf}
\caption{\textbf{Practical setup with losses.}
The ratios between the optimum single parameter statistical variances and statistical variances that can be achieved with a double homodyne measurement; solid line for phase diffusion estimation, dashed line for phase estimation and dot-dashed line for the sum of the two; (a) for an $\text{HB}(3)$ probe state (with 6 photons) and (b) for a $\text{N}00\text{N}(6)$ probe state; with symmetric losses -- black for unit efficiency, blue for $0.95$ total efficiency and red for $0.5$ total efficiency. Note that the HB state is better suited for parameter estimation with loss than the $\text{N}00\text{N}$ state.
}
\label{loss}
\end{figure}
It is recognized that the precision of any measurement making use of entangled states is affected by loss. Here we illustrate a different effect of loss, i.e. how it affects the performance of simultaneous estimation. We focus on a practical scenario where the double homodyne measurement is used to analyze HB and $\text{N}00\text{N}$ states with 6 photons; the results are shown in Figure \ref{loss}. They illustrate the fact that HB states are more robust to loss than $\text{N}00\text{N}$ states not only in terms of QFI scaling, but also for attaining a satisfactory joint estimation precision.
We demonstrate in the Supplementary Methods that for all path-symmetric probe states (of which HB states are an example), with $\Delta=0$ and no loss, the double homodyne measurement estimates phase optimally \cite{parity1, parity2}. In our results (presented in Figure \ref{loss}), the decrease in sensitivity due to loss is partly contained in the decreasing value of the QFI. In addition, the classical Fisher information corresponding to double homodyne detection degrades with respect to the QFI due to the effect of the incoherent part of the loss-affected probe signal, which introduces noise in the measurement outcomes (this is detailed in Supplementary Fig. 5).
\section*{Discussion}
Figure \ref{exp} shows the variances that can be obtained in our experimental setup with a probe state that has a phase shift of 1$^\circ$ with respect to the optimal probe state. We highlight a somewhat overlooked aspect of parameter estimation: the sensitivity of the measurements to experimental imperfections in the alignment of the phase of the probe state. Over a wide range of settings of our tunable measurement, the precision of the estimates is robust to this small variation in phase. This is because the four-outcome POVM is capable of distinguishing between the rotation and the shrinking of the Bloch vector corresponding to the probe state. As we tune the measurement close to either of the extremal points, corresponding to $\{\sigma_x, \sigma_y\}$, this ability is compromised. While a projective measurement ($\sigma_x$ or $\sigma_y$ in our setup) makes phase estimation impossible when there is no prior information on the amplitude of the phase diffusion, with a balanced setting of the weights given to the pairs of projectors our setup is tolerant to phase misalignment (this is illustrated in Supplementary Fig. 6 and the Supplementary Discussion). Notably, the performance of the double homodyne setup described in this work is completely independent of the phase of the probe state.
We have applied our study to quantum correlated states that offer enhanced sensitivity for phase estimation. We have also shown that collective measurements can offer an advantage for joint estimation. However, entangled measurements in optics require either probabilistic schemes, which have limited applicability in metrology, or strong nonlinearities, which may be challenging and at the edge of current technology.
We have found that states with correlations over multiple Fock layers, such as HB states, can perform better than $\text{N}00\text{N}$ states in terms of joint estimation.
\section*{Methods}
\subsection*{Experimental setup}
The source is a mode-locked Ti:sapphire laser, working in the pulsed regime, with central wavelength $830\,\text{nm}$, bandwidth of $32\,\text{nm}$ and a repetition rate of $256\,\text{kHz}$. The preparation stage consists of a polarizing beam splitter (PBS1) which transmits only horizontally polarized light, followed by a half-wave plate (HWP1) and a quarter-wave plate (QWP1), used for the preparation of polarisation states for detector tomography. QWP2 is set at $45^{\circ}$, rotating $\ket{R}$ to $\ket{V}$ and $\ket{L}$ to $\ket{H}$. The displaced Sagnac interferometer consists of two slightly displaced counter-propagating modes of
equal length. The input state is split by PBS2 into its $\ket{H}$ and $\ket{V}$ components, corresponding to the two paths of the interferometer. After being acted upon by HWP2, the two paths recombine on PBS2. Depending on the orientation of HWP2, the input beam is split and directed towards outputs $1$ and $2$. The polarisation state at output $1$ is approximately that of the input with a phase shift due to a path difference in the arms of the interferometer. QWP3 is a multiorder waveplate, with its axis vertical, which is twisted in order to correct for this phase shift. The displaced Sagnac interferometer acts as a tunable non-polarizing beam splitter, with the added effect of switching $\ket{H}$ and $\ket{V}$ polarisations in output $2$. The detectors situated after HWP3 and PBS3 measure polarisations $\ket{D}$ and $\ket{A}$, respectively. The detectors situated after PBS4 measure $\{\ket{H}, \ket{V}\}$ in output $2$, effectively measuring part of the input polarisation state in the basis $\{\ket{R}, \ket{L}\}$. Single-mode fibers (SMF) are used to couple light into the detectors and alignment of the interferometer is performed by coupling the horizontal and vertical modes independently into SMF. The interferometer phase is set so that a minimum of interference is measured in output $1a$ when the input polarisation state is $\ket{D}$. The measured visibility of the interference was $\sim 97\%$.
We characterized the tunable measurement by performing detector tomography, with different settings of HWP3 and measuring intensities with a photodiode.
\subsection*{Estimation and errors}
The experimental errors affecting our setup fall into three categories: (1) statistical errors intrinsic to quantum measurement, which are the object of our study; (2) loss and distinguishability of photons, which are accounted for in the description of the setup and (3) technical (systematic) errors. The latter dominate statistical errors in our characterization of the setup. One of the two easy-to-identify error sources consists of intensity fluctuations on a time scale longer than the detection time, which can be dealt with by recording traces of the intensity readings and using the measured distributions when fitting data to the POVM model. The second consists of imperfections in the manufacturing and calibration of the waveplates used for preparation of the input polarisation state.
The POVMs are estimated by using a maximum likelihood algorithm comparing the collected data with the predictions from the reconstruction. We verify that the outcomes predicted by the reconstructed POVMs differ from the measured ones by amounts consistent with the observed fluctuations. The error bars for the estimated Fisher information are computed using a Monte Carlo simulation, starting with the variance of the measured values of light intensities. More detailed information on how Figure \ref{exp} was obtained is presented in the Supplementary Methods.\\
\subsection*{Searches over projective measurements}
All elements of the set of projective measurements in a $d$-dimensional Hilbert space can be produced by acting on an orthonormal basis of this space with a unitary transformation. We perform simulated annealing \cite{optimization} over the set of projective measurements by using random unitary transformations to perform a random walk. The algorithm decides whether a step is made in a randomly generated direction according to a tunable distribution that favours increasing values of $\frac{H_{1,1}^{-1}}{M\,\text{Var}(\phi)}$ and $\frac{H_{2,2}^{-1}}{M\,\text{Var}(\Delta)}$ and, for the search presented in Figure~\ref{collectmeas}, decreasing values of the total entropy of entanglement of the projectors. We modify the step size, as well as the distribution controlling the random walk, in order to reach the extreme values of the parameters that we are interested in, while ensuring that local minima are avoided. Details of this method, as well as arguments to restrict our search to projective measurements, are presented in the Supplementary Methods.
\section*{Acknowledgments}
We thank Joshua Nunn, Tim Bartley, Mark Mitchison, David Jennings and Paolo Mataloni for discussion and comments. MV is supported by the EPSRC via the Controlled Quantum Dynamics CDT. XMJ is supported by an EU Marie-Curie Fellowship (PIIF-GA-2011-300820). WSK is supported by an EU Marie-Curie Fellowship (PIIF-GA-2011-331859). MGG acknowledges fellowship support from
UK EPSRC (grant EP/I026436/1). MSK is supported by the Qatar National Research Fund (NPRP4-5554-1-084). This work was supported by the EPSRC (EP/H03031X/1, EP/K034480/1, EP/K026267/1), the European Commission project SIQS, the Air Force Office of Scientific Research (European Office of Aerospace Research and Development).
\section*{Author contributions}
M.D.V. developed the theory, designed and performed the experiment. M.G.G. contributed to the theory; G.D., X.-M.J. and W.S.K. contributed to the set-up and assisted with preliminary data collection; A.D. and M.B. conceived and supervised the project and helped with designing the experimental setup; M.S.K. and I.A.W. supervised the project and contributed with discussion.
The existence of positive entire solutions for the coupled nonlinear systems
\begin{equation}
\left\{
\begin{array}{l}
\Delta _{\phi _{1}}u+\sigma _{1}\left( \left\vert x\right\vert \right) \phi
_{1}(|\nabla u|)\left\vert \nabla u\right\vert =p_{1}(\left\vert
x\right\vert )f_{1}(u,v)\text{ for }x\in \mathbb{R}^{N}, \\
\Delta _{\phi _{2}}v+\sigma _{2}\left( \left\vert x\right\vert \right) \phi
_{2}(|\nabla v|)\left\vert \nabla v\right\vert =p_{2}(\left\vert
x\right\vert )f_{2}(u,v)\text{ for }x\in \mathbb{R}^{N}\text{ }
\end{array
\right. \label{11}
\end{equation
where $\mathbb{R}^{N}$ ($N\geq 3$) denotes the Euclidean $N$-space, $\left\vert \circ \right\vert $ will denote any $N$-dimensional norm, $\Delta
_{\phi _{i}}w$ $(i=1,2)$ stands for the $\phi _{i}$-Laplacian operator
defined as $\Delta _{\phi _{i}}w:=\func{div}(\phi _{i}(|\nabla w|)\nabla w)$
and the functions $\phi _{i}$ satisfy, throughout this paper:
(O1)\quad $\phi _{i}\in C^{1}\left( \left( 0,\infty \right) ,\left( 0,\infty \right) \right) $,
(O2)\quad $t\phi _{i}(t)$ is strictly increasing in $\left( 0,\infty \right) $,
(O3)\quad there exist $l_{i}$, $m_{i}>1$ such that
\begin{equation*}
\text{if }\Phi _{i}\left( t\right) :=\int_{0}^{t}s\phi _{i}\left( s\right) ds\text{ then }l_{i}\leq \frac{\Phi _{i}^{\prime }\left( t\right) \cdot t}{\Phi _{i}\left( t\right) }\leq m_{i}\text{ for any }t>0,
\end{equation*}
(O4)\quad there exist $a_{0}^{i}$, $a_{1}^{i}>0$ such that
\begin{equation*}
a_{0}^{i}\leq \frac{\Phi _{i}^{\prime \prime }\left( t\right) \cdot t}{\Phi
_{i}^{\prime }\left( t\right) }\leq a_{1}^{i}\text{ for any }t>0\text{,}
\end{equation*}
have been intensively studied in the last few decades in view of the
understanding of some basic phenomena arising in physics (for more details,
see \cite{DP,CD2}, Kawano-Kusano \cite{KK}, Franchi-Lanconelli-Serrin \cite{FLS}, Fukagai-Narukawa \cite{FK}, Grosse-Martin \cite{GR} and Smooke \cite{S}). Below are some examples of functions $\phi _{1}$ and $\phi _{2}$ that fulfil (O1)-(O4) and which, cf. \cite{FK}, arise in mathematical
models in nonlinear physical science:
\textbf{E1:\quad Nonlinear Elasticity:}
\begin{equation*}
\Phi _{i}\left( t\right) =\left( 1+t^{2}\right) ^{p}-1,\text{ }\phi
_{i}\left( t\right) =2p\left( 1+t^{2}\right) ^{p-1},
\end{equation*}
where $t>0$ and $p>\frac{1}{2}$;
\textbf{E2:\quad Plasticity:}
\begin{equation*}
\Phi _{i}\left( t\right) =t^{p}\left( \ln \left( 1+t\right) \right) ^{q},\text{ }\phi _{i}\left( t\right) =\frac{\ln ^{q-1}\left( t+1\right) }{t+1}\left[ \left( pt^{p-1}+pt^{p-2}\right) \ln \left( t+1\right) +qt^{p-1}\right]
\allowbreak ,
\end{equation*}
where $t>0$, $p>1$ and $q>0$;
\textbf{E3:\quad Generalized Newtonian fluids:}
\begin{equation*}
\Phi _{i}\left( t\right) =\int_{0}^{t}s^{1-p}\left( \sinh ^{-1}s\right) ^{q}ds,\text{ }\phi _{i}\left( t\right) =\allowbreak t^{-p}\func{arcsinh}^{q}t,
\end{equation*}
where $t>0$, $0\leq p\leq 1$ and $q>0$;
\textbf{E4:}\quad \textbf{Plasma Physics:}
\begin{equation*}
\Phi _{i}\left( t\right) =\frac{t^{p}}{p}+\frac{t^{q}}{q},\phi _{i}\left(
t\right) =t^{p-2}+t^{q-2},
\end{equation*}
where $t>0$ and $1<p<q$.
\textbf{E5:\quad Non-Newtonian Fluid:}
\begin{equation*}
\Phi _{i}\left( t\right) =\frac{t^{p}}{p},\phi _{i}\left( t\right) =t^{p-2},
\end{equation*}
where $t>0$ and $p>1$.
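As a quick numerical sanity check (an illustrative sketch, not part of the argument), condition (O3) can be verified for the model examples E4 and E5; the exponents $p$, $q$ below are sample choices.

```python
# Numerical check of (O3): l_i <= t*Phi_i'(t)/Phi_i(t) <= m_i for E4 and E5.
# The exponents p, q are illustrative choices, not values from the paper.

def quotient(Phi, dPhi, t):
    # t * Phi'(t) / Phi(t), the quantity bounded in condition (O3)
    return t * dPhi(t) / Phi(t)

ts = [0.01 * k for k in range(1, 5001)]   # sample points in (0, 50]

# E5 (non-Newtonian fluid): Phi(t) = t^p / p gives the quotient identically p.
p = 3.0
assert all(abs(quotient(lambda t: t**p / p, lambda t: t**(p - 1), t) - p) < 1e-9
           for t in ts)

# E4 (plasma physics): Phi(t) = t^p/p + t^q/q with 1 < p < q keeps the
# quotient (t^p + t^q) / (t^p/p + t^q/q) inside [p, q].
p, q = 2.0, 4.0
assert all(p - 1e-9
           <= quotient(lambda t: t**p / p + t**q / q,
                       lambda t: t**(p - 1) + t**(q - 1), t)
           <= q + 1e-9
           for t in ts)
```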
\begin{remark}
Systems of the form (\ref{11}) are known today as coupled nonlinear
systems of Schr\"{o}dinger type. In particular, one of the most
important classes related to (\ref{11}) is the time-independent Schr\"{o}dinger
equation of quantum mechanics
\begin{equation}
\Delta u=\frac{8\pi ^{2}m}{h^{2}}\left( V\left( r\right) -E\right) u\text{,}
\label{sc}
\end{equation}
where $m$ is the particle's reduced mass, $V\left( r\right) $ is its
potential energy, $r$ is a position vector, $h$ is the Planck constant, $E$
is the energy of the particle and the unknown function $u$ is the wave function
(for more details, see \cite{GR}, \cite{L}, \cite{SJ}, \cite{S} and \cite{XZ}).
\end{remark}
In the literature, an \textit{entire large solution} means a couple $\left( u,v\right) \in C^{1}(\left[ 0,\infty \right) )\times C^{1}(\left[ 0,\infty \right) )$ of positive functions satisfying (\ref{11}) and such that both $u\left( x\right) $ and $v\left( x\right) $ tend to infinity as $\left\vert x\right\vert \rightarrow \infty $; an \textit{entire bounded solution} when $u\left( x\right) <\infty $ and $v\left( x\right) <\infty $ as $\left\vert x\right\vert \rightarrow \infty $; and a \textit{semifinite entire large solution} when either ($u\left( x\right) <\infty $ and $v\left( x\right) $ tends to infinity) or ($u\left( x\right) $ tends to infinity and $v\left( x\right) <\infty $) as $\left\vert x\right\vert \rightarrow \infty $.
In what follows, we reserve $r$ for the polar distance, $r:=\sqrt{x_{1}^{2}+...+x_{N}^{2}}$ for $x=\left( x_{1},...,x_{N}\right) \in \mathbb{R}^{N}$. Note that if $w$ is a radial function in $\mathbb{R}^{N}$, depending only on $r$, then
\begin{equation*}
\Delta _{\phi _{i}}w=\left( \phi _{i}(w^{\prime })w^{\prime }\right) ^{\prime }+\frac{N-1}{r}\phi _{i}(w^{\prime })w^{\prime }.
\end{equation*}
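For the classical case $\phi _{i}\equiv 1$ the radial form above reduces to $w^{\prime \prime }+\frac{N-1}{r}w^{\prime }$. The following short finite-difference check is an illustrative sketch: the test point and the function $w(x)=\left\vert x\right\vert ^{4}$ are arbitrary choices made only for this demonstration.

```python
# Check that the 7-point Laplacian stencil agrees with the radial form
# w'' + ((N-1)/r) w' for the sample function w(x) = |x|^4 in R^3,
# where w(r) = r^4 gives w'' + (2/r) w' = 12 r^2 + 8 r^2 = 20 r^2.
N = 3
hstep = 1e-3
x, y, z = 0.4, 0.7, -0.2        # arbitrary test point

def w(x, y, z):
    return (x * x + y * y + z * z) ** 2   # w = |x|^4 = r^4

lap = (w(x + hstep, y, z) + w(x - hstep, y, z)
       + w(x, y + hstep, z) + w(x, y - hstep, z)
       + w(x, y, z + hstep) + w(x, y, z - hstep)
       - 6 * w(x, y, z)) / hstep**2

r2 = x * x + y * y + z * z
radial = 12 * r2 + (N - 1) * 4 * r2   # w''(r) + ((N-1)/r) w'(r) for w = r^4
assert abs(lap - radial) < 1e-4
```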
We would like to quote some references where the existence of entire bounded
radial solutions, or the existence of entire large radial solutions, for
systems of the form (\ref{11}) was analyzed. Lair \cite{L1} considered
entire large radial solutions for the elliptic system
\begin{equation}
\left\{
\begin{array}{l}
\Delta u=p_{1}\left( r\right) v^{\alpha }, \\
\Delta v=p_{2}\left( r\right) u^{\beta }\text{, }r=\left\vert x\right\vert
\text{, }x\in \mathbb{R}^{N}\text{ (}N\geq 3\text{),}
\end{array}
\right. \label{lair}
\end{equation}
where $0<\alpha \leq 1$, $0<\beta \leq 1$, $p_{1}$ and $p_{2}$ are
nonnegative continuous functions on $\mathbb{R}^{N}$. He proved that a
necessary and sufficient condition for this system to have a positive entire
large radial solution is
\begin{eqnarray}
\int_{0}^{\infty }tp_{1}\left( t\right) \left(
t^{2-N}\int_{0}^{t}s^{N-3}Q\left( s\right) ds\right) ^{\alpha }dt &=&\infty ,
\label{c1l} \\
\int_{0}^{\infty }tp_{2}\left( t\right) \left(
t^{2-N}\int_{0}^{t}s^{N-3}P\left( s\right) ds\right) ^{\beta }dt &=&\infty ,
\label{c2l}
\end{eqnarray}
where $P\left( r\right) =\int_{0}^{r}\tau p_{1}\left( \tau \right) d\tau $
and $Q\left( r\right) =\int_{0}^{r}\tau p_{2}\left( \tau \right) d\tau $.
It is well known (see Yang \cite{YH}) that if $p:\left[ 0,\infty \right)
\rightarrow \left[ 0,\infty \right) $ is a spherically symmetric continuous
function and the nonlinearity $f:[0,\infty )\rightarrow \lbrack 0,\infty )$
is a continuous, increasing function with $f\left( 0\right) \geq 0$ and
$f\left( s\right) >0$ for all $s>0$ which satisfies
\begin{equation}
\int_{1}^{\infty }\frac{1}{f\left( t\right) }dt=\infty , \label{DY}
\end{equation}
then the single equation
\begin{equation}
\Delta u=p\left( r\right) f\left( u\right) \text{ for }x\in \mathbb{R}^{N}\text{ (}N\geq 3\text{), }\underset{r\rightarrow \infty }{\lim }u\left( r\right) =\infty \text{,} \label{dye}
\end{equation}
has a nonnegative radial solution if and only if $p$ satisfies
\begin{equation*}
\underset{r\rightarrow \infty }{\lim }\mathcal{P}_{p}\left( r\right) =\infty
\text{, }\mathcal{P}_{p}\left( r\right)
:=\int_{0}^{r}s^{1-N}\int_{0}^{s}z^{N-1}p(z)dzds.
\end{equation*}
A direct computation gives
\begin{equation*}
\underset{r\rightarrow \infty }{\lim }\mathcal{P}_{p}\left( r\right) =\frac{1}{N-2}\int_{0}^{\infty }tp\left( t\right) dt.
\end{equation*}
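This identity can be checked numerically; the following is an illustrative sketch with the sample choices $N=3$ and $p(t)=e^{-t}$ (both assumptions made only for the demonstration, for which both sides equal $1$).

```python
import math

# Numerical check of lim_{r->inf} P_p(r) = (1/(N-2)) int_0^inf t p(t) dt
# for N = 3 and p(t) = e^{-t} (illustrative data). Trapezoid rule on (0, R].
N = 3
n, R = 400_000, 4000.0
h = R / n

inner = 0.0   # running value of int_0^s z^{N-1} p(z) dz
P = 0.0       # running value of P_p(r) = int_0^r s^{1-N} * inner(s) ds
rhs = 0.0     # running value of int_0^r t p(t) dt
prev_f = prev_g = prev_t = 0.0
for k in range(1, n + 1):
    s = k * h
    f = s**(N - 1) * math.exp(-s)
    inner += 0.5 * h * (prev_f + f)
    g = s**(1 - N) * inner
    P += 0.5 * h * (prev_g + g)
    tp = s * math.exp(-s)
    rhs += 0.5 * h * (prev_t + tp)
    prev_f, prev_g, prev_t = f, g, tp

# P_p(R) approaches the limit like 2/R, so a loose tolerance is used.
assert abs(P - rhs / (N - 2)) < 1e-2
```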
However, there are no existence results for the system (\ref{11}) where
$f_{1}$ and $f_{2}$ satisfy a condition of the form (\ref{DY}). This
observation can be found in the paper \cite{L1}. Fang-Yi \cite{BF}, in
the particular case $\varphi _{1}\left( t\right) =\varphi _{2}\left(
t\right) =t^{p-1}$ ($p>1$), supplied a sufficient condition
\begin{equation}
\int_{a}^{\infty }\frac{1}{f_{1}^{1/(p-1)}\left( t,t\right)
+f_{2}^{1/(p-1)}\left( t,t\right) }dt=\infty ,\text{ }t\geq a>0, \label{zz}
\end{equation}
for the existence of positive radial large solutions to (\ref{11}). The
condition (\ref{zz}) has been used by many authors and in many contexts;
see Li-Zhang-Zhang \cite{ZL}, Liu-Zhang \cite{HL}, Qin-Yang \cite{QH},
Dkhil-Zeddini \cite{DZ}, and the references therein.
Now we return to (\ref{zz}): if the function $(f_{1}^{1/(p-1)}+f_{2}^{1/(p-1)})$ satisfies condition (\ref{zz}), then so does each of the functions $f_{1}^{1/(p-1)}$ and $f_{2}^{1/(p-1)}$ separately, but the converse is not true, as one can see from the paper of Bernfeld \cite[Example 3.8., pp. 283]{B}. One of the main purposes of this paper is to establish sufficient conditions for the existence of entire large radial solutions of the system (\ref{11}) under the new conditions of the form
\begin{equation}
\int_{a}^{\infty }\frac{1}{f_{1}^{1/(p-1)}\left( t,t\right) }dt=\infty ,\text{ }t\geq a>0, \label{cov1}
\end{equation}
and
\begin{equation}
\int_{a}^{\infty }\frac{1}{f_{2}^{1/(p-1)}\left( t,t\right) }dt=\infty ,\text{ }t\geq a>0. \label{cov2}
\end{equation}
The existence of entire bounded/semifinite entire large positive solutions
is also studied in this paper. Finally, we would like to mention that the
method presented here also yields much more precise information on the
behavior of solutions.
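To see numerically why (\ref{cov1})-(\ref{cov2}) are strictly weaker than (\ref{zz}), the following sketch uses a pair of functions in the spirit of Bernfeld's Example 3.8 (an illustrative reconstruction, not the original example): each branch has a divergent reciprocal integral, while the sum does not.

```python
# With g1(t) = t on [2k, 2k+1) and t^3 on [2k+1, 2k+2), and g2 swapping the
# two branches, each of int 1/g1 and int 1/g2 diverges (they dominate a
# harmonic tail), while g1 + g2 = t + t^3 has a convergent reciprocal
# integral on [2, infinity), so (zz) fails although (cov1)-(cov2) hold.

def g1(t):
    return t if int(t) % 2 == 0 else t**3

n, a, b = 400_000, 2.0, 2000.0
h = (b - a) / n
I1 = 0.0     # int_a^b dt / g1(t)          (midpoint rule)
Isum = 0.0   # int_a^b dt / (g1 + g2) = int_a^b dt / (t + t^3)
for k in range(n):
    t = a + (k + 0.5) * h
    I1 += h / g1(t)
    Isum += h / (t + t**3)

assert I1 > 3.0       # ~ (1/2) log(b): unbounded as b -> infinity
assert Isum < 0.125   # bounded by int_2^inf dt/t^3 = 1/8
```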
\section{NOTATIONS AND PRELIMINARIES}
We work under the following assumptions:
(P1)\quad $\sigma _{1},\sigma _{2},p_{1},p_{2}:\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) $ are continuous functions;
(C1)\quad $f_{1},f_{2}:\left[ 0,\infty \right) \times \left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) $ are continuous and increasing, $f_{1}\left( 0,0\right) \geq 0$, $f_{2}\left( 0,0\right) \geq 0$ and $f_{1}\left( s_{1},s_{2}\right) >0$, $f_{2}\left( s_{1},s_{2}\right) >0$ whenever $s_{1},s_{2}>0$;
(C2)\quad there exist continuous and increasing functions $h_{1}$, $h_{2}:\left[ 0,\infty \right) \times \left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) $ and $\overline{f}_{1}$, $\overline{f}_{2}:\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) $ such that
\begin{eqnarray}
f_{1}\left( t_{1},t_{1}\cdot s_{1}\right) &\leq &h_{1}\left( t_{1},t_{1}\right) \cdot \overline{f}_{1}\left( s_{1}\right) \text{, }\forall s_{1}\geq 1\text{ and }\forall \text{ }t_{1}\geq M_{1}a_{1}, \label{c22} \\
f_{2}\left( t_{2},t_{2}\cdot s_{2}\right) &\leq &h_{2}\left( t_{2},t_{2}\right) \cdot \overline{f}_{2}\left( s_{2}\right) \text{, }\forall s_{2}\geq 1\text{ and }\forall \text{ }t_{2}\geq M_{2}a_{2}, \label{c222}
\end{eqnarray}
where $a_{1},a_{2}\in \left( 0,\infty \right) $, $M_{1}\geq \max \left\{ 1,\frac{1}{a_{1}}\right\} $ and $M_{2}\geq \max \left\{ 1,\frac{1}{a_{2}}\right\} $.
In order to state our existence theorems, we set $\Psi _{i}\left( t\right) :=t\phi _{i}\left( t\right) $ (strictly increasing by (O2), so that $\Psi _{i}^{-1}$ is well defined) and introduce the following notation:
\begin{eqnarray*}
Z\left( r\right) &=&\int_{a_{1}+a_{2}}^{r}\frac{1}{\overline{\theta }_{1}\left( f_{1}\left( t,t\right) \right) +\overline{\theta }_{2}\left( f_{2}\left( t,t\right) \right) }dt\text{, }\mathcal{H}_{i}\left( r\right) =\int_{a_{i}}^{r}\frac{1}{\overline{\theta }_{i}\left( h_{i}\left( t,M_{i}t\right) \right) }dt\text{, } \\
\xi _{i}\left( t\right) &=&t^{N-1}e^{\int_{0}^{t}\sigma _{i}\left( s\right) ds}\text{, }P_{i}\left( r\right) =\int_{0}^{r}\Psi _{i}^{-1}\left( \frac{1}{\xi _{i}\left( z\right) }\int_{0}^{z}\xi _{i}\left( t\right) p_{i}\left( t\right) dt\right) dz\text{,} \\
\overline{P}_{i}\left( r\right) &=&\int_{0}^{r}\Psi _{i}^{-1}\left( \frac{1}{\xi _{i}\left( t\right) }\int_{0}^{t}\xi _{i}\left( s\right) p_{i}\left( s\right) \overline{f}_{i}\left( 1+Z^{-1}\left( P_{1}\left( s\right) +P_{2}\left( s\right) \right) \right) ds\right) dt\text{, } \\
\underline{P}_{1}\left( r\right) &=&\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left( a_{1},a_{2}+\underline{\theta }_{2}(f_{2}\left( a_{1},a_{2}\right) )P_{2}\left( s\right) \right) ds\right) dt\text{, } \\
\underline{P}_{2}\left( r\right) &=&\int_{0}^{r}\Psi _{2}^{-1}\left( \frac{1}{\xi _{2}\left( t\right) }\int_{0}^{t}\xi _{2}\left( s\right) p_{2}\left( s\right) f_{2}\left( a_{1}+\underline{\theta }_{1}(f_{1}\left( a_{1},a_{2}\right) )P_{1}\left( s\right) ,a_{2}\right) ds\right) dt\text{, } \\
\mathcal{H}_{i}\left( \infty \right) &=&\lim_{s\rightarrow \infty }\mathcal{H}_{i}\left( s\right) \text{, }P_{i}\left( \infty \right) =\lim_{r\rightarrow \infty }P_{i}\left( r\right) \text{, \ }i=1,2, \\
\overline{P}_{i}\left( \infty \right) &=&\lim_{r\rightarrow \infty }\overline{P}_{i}\left( r\right) \text{, }\underline{P}_{i}\left( \infty \right) =\lim_{r\rightarrow \infty }\underline{P}_{i}\left( r\right) \text{.}
\end{eqnarray*}
Some remarks are now in order on the preliminaries stated in this and the
previous section.
\begin{remark}
A simple example of $f_{1}$ and $f_{2}$ satisfying (C2) is given by $f_{1}\left( u,v\right) =u^{\beta _{1}}v^{\alpha _{1}}$ and $f_{2}\left( u,v\right) =u^{\beta _{2}}v^{\alpha _{2}}$ with $\alpha _{1}$, $\beta _{1}$, $\alpha _{2}$, $\beta _{2}\in \left[ 0,\infty \right) $, $\alpha _{1}^{2}+\beta _{1}^{2}\neq 0$ and $\alpha _{2}^{2}+\beta _{2}^{2}\neq 0$.
\end{remark}
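For this power-type example, (C2) in fact holds with equality: $f_{1}(t,ts)=t^{\alpha _{1}+\beta _{1}}s^{\alpha _{1}}$, so one may take $h_{1}(t,t)=t^{\alpha _{1}+\beta _{1}}$ and $\overline{f}_{1}(s)=s^{\alpha _{1}}$. A small numerical sketch (the exponents below are illustrative choices):

```python
import random

# Check of (c22) for f1(u, v) = u^b1 * v^a1: f1(t, t*s) = t^(a1+b1) * s^a1,
# i.e. h1(t, t) = t^(a1 + b1) and fbar1(s) = s^a1 (sample exponents).
random.seed(0)
a1, b1 = 1.5, 0.5

for _ in range(1000):
    t = random.uniform(1.0, 10.0)     # plays the role of t1 >= M1 * a1
    s = random.uniform(1.0, 10.0)     # s1 >= 1
    f1 = t**b1 * (t * s)**a1
    assert abs(f1 - t**(a1 + b1) * s**a1) < 1e-9 * f1
```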
For the proof of the next remark, we refer the reader to \cite[Lemma 2.1]{FK}.
\begin{remark}
Suppose $\phi _{i}$ ($i=1,2$) satisfy (O1), (O2), (O3) and (O4). Then
\begin{equation}
\underline{\theta }_{i}(s_{1})\Psi _{i}^{-1}(s_{2})\leq \Psi _{i}^{-1}(s_{1}s_{2})\leq \overline{\theta }_{i}(s_{1})\Psi _{i}^{-1}(s_{2})\text{ for all }s_{1},s_{2}>0, \label{ineq}
\end{equation}
where $\underline{\theta }_{i}(t)=\min \left\{ t^{1/m_{i}},t^{1/l_{i}}\right\} $, $\overline{\theta }_{i}(t)=\max \left\{ t^{1/m_{i}},t^{1/l_{i}}\right\} $.
\end{remark}
The reader is referred to Krasnosel'skii and Rutickii \cite{KR} (see also
Rao and Ren \cite{RAO}) for a thorough treatment of the assumptions (C2) and
(\ref{ineq}).
\begin{remark}
If $P_{i}\left( \infty \right) =\infty $ then $\underline{P}_{i}\left( \infty \right) =\infty $ and $\overline{P}_{i}\left( \infty \right) =\infty $. On the other hand, if $\underline{P}_{i}\left( \infty \right) =\infty $ or $\overline{P}_{i}\left( \infty \right) =\infty $ then we can have one of the following:
\begin{equation*}
\begin{array}{cc}
1. & P_{1}\left( \infty \right) <\infty \text{ and }P_{2}\left( \infty
\right) =\infty , \\
2. & P_{1}\left( \infty \right) =\infty \text{ and }P_{2}\left( \infty
\right) <\infty , \\
3. & P_{1}\left( \infty \right) =\infty \text{ and }P_{2}\left( \infty
\right) =\infty
\end{array}
\end{equation*}
(see \cite{L1} for an example in this direction).
\end{remark}
\section{STATEMENTS AND PROOFS OF THE THEOREMS}
Our main objective in this work is to prove the following result:
\begin{theorem}
\label{th1}The system (\ref{11}) has one positive radial solution $\left( u,v\right) \in C^{1}\left( \left[ 0,\infty \right) \right) \times C^{1}\left( \left[ 0,\infty \right) \right) $ given that $\mathcal{H}_{1}\left( \infty \right) =\mathcal{H}_{2}\left( \infty \right) =\infty $ and \textrm{(P1)}, \textrm{(C1)}, \textrm{(C2)} hold true. Moreover, if $\underline{P}_{1}\left( \infty \right) =\infty $ and $\underline{P}_{2}\left( \infty \right) =\infty $ then
\begin{equation*}
\lim_{r\rightarrow \infty }u\left( r\right) =\infty \text{ and }
\lim_{r\rightarrow \infty }v\left( r\right) =\infty .
\end{equation*}
\end{theorem}
\subparagraph{\textbf{Proof of Theorem \protect\ref{th1}:}}
We start by showing that (\ref{11}) has positive radial solutions. To this
end, observe that radial solutions of the system
\begin{equation}
\left\{
\begin{array}{l}
\left( \phi _{1}(u^{\prime })u^{\prime }\right) ^{\prime }+\frac{N-1}{r}\phi
_{1}(u^{\prime })u^{\prime }+\sigma _{1}\left( r\right) \phi _{1}(u^{\prime
})u^{\prime }=p_{1}\left( r\right) f_{1}\left( u\left( r\right) ,v\left(
r\right) \right) \text{, }r\geq 0, \\
\left( \phi _{2}(v^{\prime })v^{\prime }\right) ^{\prime }+\frac{N-1}{r}\phi
_{2}(v^{\prime })v^{\prime }+\sigma _{2}\left( r\right) \phi _{2}(v^{\prime
})v^{\prime }=p_{2}\left( r\right) f_{2}\left( u\left( r\right) ,v\left(
r\right) \right) \text{, }r\geq 0, \\
u^{\prime },v^{\prime }\geq 0\text{ on }\left[ 0,\infty \right) \\
u\left( 0\right) =a_{1}\text{, }v\left( 0\right) =a_{2}
\end{array}
\right. \label{ss1}
\end{equation}
solve (\ref{11}). By the radial symmetry of $\left( u,v\right) $ and a
standard integration procedure, we rewrite the system (\ref{ss1}) as
\begin{equation}
\left\{
\begin{array}{l}
u\left( r\right) =a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left( u\left( s\right) ,v\left( s\right) \right) ds\right) dt\text{, }r\geq 0, \\
v\left( r\right) =a_{2}+\int_{0}^{r}\Psi _{2}^{-1}\left( \frac{1}{\xi _{2}\left( t\right) }\int_{0}^{t}\xi _{2}\left( s\right) p_{2}\left( s\right) f_{2}\left( u\left( s\right) ,v\left( s\right) \right) ds\right) dt\text{, }r\geq 0.
\end{array}
\right. \label{ss}
\end{equation}
The solution $\left( u,v\right) $ can be constructed by the following
approximation scheme: define $u_{0}=a_{1}$, $v_{0}=a_{2}$ and let $\left\{ \left( u_{n},v_{n}\right) \right\} _{n\geq 1}$ on $\left[ 0,\infty \right) \times \left[ 0,\infty \right) $ be given by
\begin{equation}
\left\{
\begin{array}{l}
u_{n}\left( r\right) =a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi
_{1}\left( t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left(
s\right) f_{1}\left( u_{n-1}\left( s\right) ,v_{n-1}\left( s\right) \right)
ds\right) dt\text{, }r\geq 0, \\
v_{n}\left( r\right) =a_{2}+\int_{0}^{r}\Psi _{2}^{-1}\left( \frac{1}{\xi
_{2}\left( t\right) }\int_{0}^{t}\xi _{2}\left( s\right) p_{2}\left(
s\right) f_{2}\left( u_{n-1}\left( s\right) ,v_{n-1}\left( s\right) \right)
ds\right) dt\text{, }r\geq 0
\end{array}
\right. \label{recs}
\end{equation}
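The scheme (\ref{recs}) can be illustrated numerically. The following is a minimal sketch in the classical-Laplacian case $\phi _{i}\equiv 1$ (so $\Psi _{i}^{-1}(s)=s$), with all data chosen only for the demonstration: $N=3$, $\sigma _{i}=0$ (hence $\xi _{i}(t)=t^{N-1}$), $p_{i}(r)=e^{-r}$, $f_{1}(u,v)=\sqrt{v}$, $f_{2}(u,v)=\sqrt{u}$ and $a_{1}=a_{2}=1$; it checks that the iterates are nondecreasing in $n$, as proved below.

```python
import math

# Riemann-sum implementation of one step of the scheme (recs) for the sample
# data above (all values are illustrative assumptions, not from the paper).
N, n, R = 3, 2000, 10.0
h = R / n
r = [(k + 1) * h for k in range(n)]
xi = [x ** (N - 1) for x in r]           # xi_i(t) = t^{N-1} since sigma_i = 0
p = [math.exp(-x) for x in r]

def step(w_other, a, f):
    # a + int_0^r (1/xi(t)) int_0^t xi(s) p(s) f(w_other(s)) ds dt
    out, inner, acc = [], 0.0, 0.0
    for k in range(n):
        inner += xi[k] * p[k] * f(w_other[k]) * h
        acc += inner / xi[k] * h
        out.append(a + acc)
    return out

u = [1.0] * n
v = [1.0] * n
for _ in range(20):
    u_new = step(v, 1.0, math.sqrt)      # f1 depends only on v here
    v_new = step(u, 1.0, math.sqrt)      # f2 depends only on u here
    assert all(un >= uo for un, uo in zip(u_new, u))   # nondecreasing in n
    assert all(vn >= vo for vn, vo in zip(v_new, v))
    u, v = u_new, v_new

assert all(b2 >= a2 for a2, b2 in zip(u, u[1:]))       # nondecreasing in r
```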
We show that $\left\{ u_{n}\right\} _{n\geq 0}$ and $\left\{ v_{n}\right\}
_{n\geq 0}$ are nondecreasing on $\left[ 0,\infty \right) $. To see this,
express
\begin{eqnarray*}
u_{1}\left( r\right) &=&a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi
_{1}\left( t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left(
s\right) f_{1}\left( u_{0}\left( s\right) ,v_{0}\left( s\right) \right)
ds\right) dt \\
&=&a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( t\right) }
\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left(
a_{1},a_{2}\right) ds\right) dt \\
&\leq &a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left(
t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left( s\right)
f_{1}\left( u_{1}\left( s\right) ,v_{1}\left( s\right) \right) ds\right)
dt=u_{2}\left( r\right) .
\end{eqnarray*}
This proves that $u_{1}\left( r\right) \leq u_{2}\left( r\right) $.
Similarly, $v_{1}\left( r\right) \leq v_{2}\left( r\right) $. By an
induction argument we get
\begin{equation*}
u_{n}\left( r\right) \leq u_{n+1}\left( r\right) \text{ for any }n\in
\mathbb{N}\text{ and }r\in \left[ 0,\infty \right) ,
\end{equation*}
and
\begin{equation*}
v_{n}\left( r\right) \leq v_{n+1}\left( r\right) \text{ for any }n\in
\mathbb{N}\text{ and }r\in \left[ 0,\infty \right) .
\end{equation*}
Let us now prove that the non-decreasing sequences $\left\{ u_{n}\right\}
_{n\geq 0}$ and $\left\{ v_{n}\right\} _{n\geq 0}$ are bounded from above on
bounded sets. By the monotonicity of $\left\{ u_{n}\right\} _{n\geq 0}$ and
$\left\{ v_{n}\right\} _{n\geq 0}$ one gets
\begin{eqnarray}
\left[ \xi _{1}\left( r\right) \phi _{1}(u_{n}^{\prime }\left( r\right)
)u_{n}^{\prime }\left( r\right) \right] ^{\prime } &=&\xi _{1}\left(
r\right) p_{1}\left( r\right) f_{1}\left( u_{n-1}\left( r\right)
,v_{n-1}\left( r\right) \right) \notag \\
&\leq &\xi _{1}\left( r\right) p_{1}\left( r\right) f_{1}\left( u_{n}\left(
r\right) ,v_{n}\left( r\right) \right) , \label{gen1} \\
\left[ \xi _{2}\left( r\right) \phi _{2}(v_{n}^{\prime }\left( r\right)
)v_{n}^{\prime }\left( r\right) \right] ^{\prime } &\leq &\xi _{2}\left(
r\right) p_{2}\left( r\right) f_{2}\left( u_{n}\left( r\right) ,v_{n}\left(
r\right) \right) . \label{gen2}
\end{eqnarray}
Integrating the above inequalities and using (\ref{ineq}) yields that
\begin{eqnarray*}
u_{n}^{\prime }\left( r\right) &\leq &\Psi _{1}^{-1}\left( \frac{1}{\xi
_{1}\left( r\right) }\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left(
s\right) f_{1}\left( u_{n}\left( s\right) ,v_{n}\left( s\right) \right)
ds\right) \\
&\leq &\Psi _{1}^{-1}\left( \frac{f_{1}\left( u_{n}\left( r\right)
,v_{n}\left( r\right) \right) }{\xi _{1}\left( r\right) }\int_{0}^{r}\xi
_{1}\left( s\right) p_{1}\left( s\right) ds\right) \\
&\leq &\overline{\theta }_{1}\left( f_{1}\left( u_{n}\left( r\right)
,v_{n}\left( r\right) \right) \right) \Psi _{1}^{-1}\left( \frac{1}{\xi
_{1}\left( r\right) }\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left(
s\right) ds\right) \\
&\leq &\overline{\theta }_{1}\left( f_{1}\left( u_{n}\left( r\right) +v_{n}\left( r\right) ,u_{n}\left( r\right) +v_{n}\left( r\right) \right) \right) P_{1}^{\prime }\left( r\right) ,
\end{eqnarray*}
and
\begin{equation*}
v_{n}^{\prime }\left( r\right) \leq \overline{\theta }_{2}\left( f_{2}\left(
u_{n}\left( r\right) +v_{n}\left( r\right) ,u_{n}\left( r\right)
+v_{n}\left( r\right) \right) \right) P_{2}^{\prime }\left( r\right) .
\end{equation*}
It follows from these last inequalities that
\begin{equation}
\frac{\left( u_{n}\left( r\right) +v_{n}\left( r\right) \right) ^{\prime }}{\left( \overline{\theta }_{1}\left( f_{1}\right) +\overline{\theta }_{2}\left( f_{2}\right) \right) \left( u_{n}\left( r\right) +v_{n}\left( r\right) ,u_{n}\left( r\right) +v_{n}\left( r\right) \right) }\leq P_{1}^{\prime }\left( r\right) +P_{2}^{\prime }\left( r\right) , \label{mat}
\end{equation}
from which we obtain
\begin{equation*}
\int_{a_{1}+a_{2}}^{u_{n}\left( r\right) +v_{n}\left( r\right) }\frac{1}{\overline{\theta }_{1}\left( f_{1}\left( t,t\right) \right) +\overline{\theta }_{2}\left( f_{2}\left( t,t\right) \right) }dt\leq P_{1}\left( r\right) +P_{2}\left( r\right) .
\end{equation*}
Now we have
\begin{equation}
Z\left( u_{n}\left( r\right) +v_{n}\left( r\right) \right) \leq P_{1}\left(
r\right) +P_{2}\left( r\right) , \label{zc1}
\end{equation}
which will play a basic role in the proof of our main results. The
inequality (\ref{zc1}) can be rewritten as
\begin{equation}
u_{n}\left( r\right) +v_{n}\left( r\right) \leq Z^{-1}\left( P_{1}\left(
r\right) +P_{2}\left( r\right) \right) . \label{zc2}
\end{equation}
This follows from the fact that $Z$ is a bijection with the inverse
function $Z^{-1}$ strictly increasing on $\left[ 0,Z\left( \infty \right)
\right) $. Let $M_{1}\geq \max \left\{ 1,\frac{1}{a_{1}}\right\} $ and
$M_{2}\geq \max \left\{ 1,\frac{1}{a_{2}}\right\} $. The next step is to
integrate (\ref{gen1}) from $0$ to $r$; bearing in mind (\ref{c22}), we
find
\begin{eqnarray}
\left( u_{n}\left( r\right) \right) ^{\prime } &\leq &\Psi _{1}^{-1}\left(
\frac{1}{\xi _{1}\left( r\right) }\int_{0}^{r}\xi _{1}\left( s\right)
p_{1}\left( s\right) f_{1}\left( u_{n}\left( s\right) ,v_{n}\left( s\right)
\right) ds\right) \notag \\
&\leq &\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( r\right) }
\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left(
u_{n}\left( s\right) ,2u_{n}\left( s\right) +v_{n}\left( s\right) \right)
ds\right) \notag \\
&\leq &\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( r\right) }
\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left(
u_{n}\left( s\right) ,u_{n}\left( s\right) +Z^{-1}\left( P_{1}\left(
s\right) +P_{2}\left( s\right) \right) \right) ds\right) \notag \\
&=&\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( r\right) }\int_{0}^{r}\xi
_{1}\left( s\right) p_{1}\left( s\right) f_{1}\left( u_{n}\left( s\right)
,u_{n}\left( s\right) (1+\frac{1}{u_{n}\left( s\right) }Z^{-1}\left(
P_{1}\left( s\right) +P_{2}\left( s\right) \right) )\right) ds\right)
\notag \\
&\leq &\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( r\right) }
\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left(
u_{n}\left( s\right) ,u_{n}\left( s\right) (1+\frac{1}{a_{1}}Z^{-1}\left(
P_{1}\left( s\right) +P_{2}\left( s\right) \right) )\right) ds\right)
\label{exin} \\
&\leq &\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( r\right) }
\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left(
u_{n}\left( s\right) ,M_{1}\left( 1+Z^{-1}\left( P_{1}\left( s\right)
+P_{2}\left( s\right) \right) \right) \right) ds\right) \notag \\
&\leq &\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( r\right) }h_{1}\left(
u_{n}\left( r\right) ,M_{1}u_{n}\left( r\right) \right) \int_{0}^{r}\xi
_{1}\left( s\right) p_{1}\left( s\right) \overline{f}_{1}\left(
1+Z^{-1}\left( P_{1}\left( s\right) +P_{2}\left( s\right) \right) \right)
ds\right) \notag \\
&\leq &\overline{\theta }_{1}\left( h_{1}\left( u_{n}\left( r\right)
,M_{1}u_{n}\left( r\right) \right) \right) \Psi _{1}^{-1}\left( \frac{1}{\xi
_{1}\left( r\right) }\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left(
s\right) \overline{f}_{1}\left( 1+Z^{-1}\left( P_{1}\left( s\right)
+P_{2}\left( s\right) \right) \right) ds\right) \notag \\
&=&\overline{\theta }_{1}\left( h_{1}\left( u_{n}\left( r\right)
,M_{1}u_{n}\left( r\right) \right) \right) \overline{P}_{1}^{\prime }\left(
r\right) . \notag
\end{eqnarray}
Dividing the inequality (\ref{exin}) by $\overline{\theta }_{1}\left(
h_{1}\left( u_{n}\left( r\right) ,M_{1}u_{n}\left( r\right) \right) \right) $
we see that
\begin{equation}
\frac{\left( u_{n}\left( r\right) \right) ^{\prime }}{\overline{\theta }
_{1}\left( h_{1}\left( u_{n}\left( r\right) ,M_{1}u_{n}\left( r\right)
\right) \right) }\leq \overline{P}_{1}^{\prime }\left( r\right) .
\label{mat2}
\end{equation}
Integrating (\ref{mat2}) from $0$ to $r$, we have
\begin{equation*}
\int_{a_{1}}^{u_{n}\left( r\right) }\frac{1}{\overline{\theta }_{1}\left(
h_{1}\left( t,M_{1}t\right) \right) }dt\leq \overline{P}_{1}\left( r\right)
\text{ }
\end{equation*}
which is the same as
\begin{equation}
\mathcal{H}_{1}\left( u_{n}\left( r\right) \right) \leq \overline{P}_{1}\left( r\right) . \label{ints}
\end{equation}
Now, we can easily see that $\mathcal{H}_{1}$ is a bijection with the inverse
function $\mathcal{H}_{1}^{-1}$ strictly increasing on $\left[ 0,\mathcal{H}_{1}\left( \infty \right) \right) $. Combining this with the previous
inequality leads to
\begin{equation}
u_{n}\left( r\right) \leq \mathcal{H}_{1}^{-1}\left( \overline{P}_{1}\left(
r\right) \right) . \label{int}
\end{equation}
Returning to $\left\{ v_{n}\left( r\right) \right\} _{n\geq 0}$, one can
show that
\begin{equation*}
\left( v_{n}\left( r\right) \right) ^{\prime }\leq \overline{\theta }
_{2}\left( h_{2}\left( v_{n}\left( r\right) ,M_{2}v_{n}\left( r\right)
\right) \right) \overline{P}_{2}^{\prime }\left( r\right) .
\end{equation*}
Integrating this ordinary differential inequality we get
\begin{equation*}
\mathcal{H}_{2}\left( v_{n}\left( r\right) \right)
=\int_{a_{2}}^{v_{n}\left( r\right) }\frac{1}{\overline{\theta }_{2}\left(
h_{2}\left( t,M_{2}t\right) \right) }dt\leq \overline{P}_{2}\left( r\right) .
\end{equation*}
From this inequality, we derive
\begin{equation}
v_{n}\left( r\right) \leq \mathcal{H}_{2}^{-1}\left( \overline{P}_{2}\left(
r\right) \right) . \label{int2}
\end{equation}
In summary, we have found upper bounds for $\left\{ u_{n}\right\} _{n\geq 0}$
and $\left\{ v_{n}\right\} _{n\geq 0}$ which depend on $r$. Now let
us complete the proof of Theorem \ref{th1}. We prove that the sequences
$\left\{ u_{n}\right\} _{n\geq 0}$ and $\left\{ v_{n}\right\} _{n\geq 0}$ are
bounded and equicontinuous on $\left[ 0,c_{0}\right] $ for arbitrary $c_{0}>0$. Indeed, since
\begin{equation*}
\left( u_{n}\left( r\right) \right) ^{^{\prime }}\geq 0\text{ and }\left(
v_{n}\left( r\right) \right) ^{^{\prime }}\geq 0\text{ for all }r\geq 0,
\end{equation*}
it follows that
\begin{equation*}
u_{n}\left( r\right) \leq u_{n}\left( c_{0}\right) \leq C_{1}\text{ and }v_{n}\left( r\right) \leq v_{n}\left( c_{0}\right) \leq C_{2}\text{ on }\left[ 0,c_{0}\right] .
\end{equation*}
Here $C_{1}=\mathcal{H}_{1}^{-1}\left( \overline{P}_{1}\left( c_{0}\right)
\right) $ and $C_{2}=\mathcal{H}_{2}^{-1}\left( \overline{P}_{2}\left(
c_{0}\right) \right) $ are positive constants. Recall that $\left\{
u_{n}\right\} _{n\geq 0}$ and $\left\{ v_{n}\right\} _{n\geq 0}$ are bounded
on $\left[ 0,c_{0}\right] $ for arbitrary $c_{0}>0$. Using this fact, we
show that the same is true of $\left( u_{n}\left( r\right) \right) ^{\prime }
$ and $\left( v_{n}\left( r\right) \right) ^{\prime }$. Indeed, for any $r\geq 0$,
\begin{eqnarray*}
\left( u_{n}\left( r\right) \right) ^{\prime } &=&\Psi _{1}^{-1}\left( \frac{
1}{\xi _{1}\left( r\right) }\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left(
s\right) f_{1}\left( u_{n-1}\left( s\right) ,v_{n-1}\left( s\right) \right)
ds\right) \\
&\leq &\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left( r\right) }
\int_{0}^{r}\xi _{1}\left( s\right) p_{1}\left( s\right) f_{1}\left(
u_{n}\left( s\right) ,v_{n}\left( s\right) \right) ds\right) \\
&\leq &\Psi _{1}^{-1}\left( \left\Vert p_{1}\right\Vert _{\infty
}f_{1}\left( C_{1},C_{2}\right) \frac{1}{\xi _{1}\left( r\right)
\int_{0}^{r}\xi _{1}\left( s\right) ds\right) \\
&\leq &\Psi _{1}^{-1}\left( \left\Vert p_{1}\right\Vert _{\infty
}f_{1}\left( C_{1},C_{2}\right) \int_{0}^{r}ds\right) \\
&\leq &\Psi _{1}^{-1}\left( \left\Vert p_{1}\right\Vert _{\infty
}f_{1}\left( C_{1},C_{2}\right) c_{0}\right) \text{ on }\left[ 0,c_{0}\right]
.
\end{eqnarray*}
Similar arguments show that
\begin{eqnarray*}
\left( v_{n}\left( r\right) \right) ^{\prime } &=&\Psi _{2}^{-1}\left( \frac{
1}{\xi _{2}\left( r\right) }\int_{0}^{r}\xi _{2}\left( s\right) p_{2}\left(
s\right) f_{2}\left( u_{n-1}\left( s\right) ,v_{n-1}\left( s\right) \right)
ds\right) \\
&\leq &\Psi _{2}^{-1}\left( \left\Vert p_{2}\right\Vert _{\infty
}f_{2}\left( C_{1},C_{2}\right) c_{0}\right) \text{ on }\left[ 0,c_{0}\right]
.
\end{eqnarray*
It remains to prove that $\left\{ u_{n}\right\} _{n\geq 0}$ and $\left\{
v_{n}\right\} _{n\geq 0}$ are equicontinuous on $\left[ 0,c_{0}\right] $ for
arbitrary $c_{0}>0$. Let $\varepsilon _{1}$, $\varepsilon _{2}>0$. To verify
equicontinuity on $\left[ 0,c_{0}\right] $, observe that
\begin{eqnarray*}
\left\vert u_{n}\left( x\right) -u_{n}\left( y\right) \right\vert
&=&\left\vert \left( u_{n}\left( \xi _{1}\right) \right) ^{\prime
}\right\vert \left\vert x-y\right\vert \leq \Psi _{1}^{-1}\left( \left\Vert
p_{1}\right\Vert _{\infty }f_{1}\left( C_{1},C_{2}\right) c_{0}\right)
\left\vert x-y\right\vert , \\
\left\vert v_{n}\left( x\right) -v_{n}\left( y\right) \right\vert
&=&\left\vert \left( v_{n}\left( \xi _{2}\right) \right) ^{\prime
}\right\vert \left\vert x-y\right\vert \leq \Psi _{2}^{-1}\left( \left\Vert
p_{2}\right\Vert _{\infty }f_{2}\left( C_{1},C_{2}\right) c_{0}\right)
\left\vert x-y\right\vert ,
\end{eqnarray*}
for all $n\in \mathbb{N}$ and all $x,y\in \left[ 0,c_{0}\right] $ and for
$\xi _{1}$, $\xi _{2}$ the constants from the mean value theorem. So it
suffices to take
\begin{equation*}
\delta _{1}=\frac{\varepsilon _{1}}{\Psi _{1}^{-1}\left( \left\Vert
p_{1}\right\Vert _{\infty }f_{1}\left( C_{1},C_{2}\right) c_{0}\right) }
\text{ and }\delta _{2}=\frac{\varepsilon _{2}}{\Psi _{2}^{-1}\left(
\left\Vert p_{2}\right\Vert _{\infty }f_{2}\left( C_{1},C_{2}\right)
c_{0}\right) },
\end{equation*}
to see that $\left\{ u_{n}\right\} _{n\geq 0}$ and $\left\{ v_{n}\right\}
_{n\geq 0}$ are equicontinuous on $\left[ 0,c_{0}\right] $. In particular,
it follows from the Arzela--Ascoli theorem that there exists a function $
u\in C\left( \left[ 0,c_{0}\right] \right) $ and a subsequence $N_{1}$ of $
\mathbb{N}^{\ast }$ with $u_{n}\left( r\right) $ converging uniformly to $u$
on $\left[ 0,c_{0}\right] $ as $n\rightarrow \infty $ through $N_{1}$. By
the same token there exists a function $v\in C\left( \left[ 0,c_{0}\right]
\right) $ and a subsequence $N_{2}$ of $\mathbb{N}^{\ast }$ with $
v_{n}\left( r\right) $ converging uniformly to $v$ on $\left[ 0,c_{0}\right]
$ as $n\rightarrow \infty $ through $N_{2}$. Thus $\left\{ \left(
u_{n}\left( r\right) ,v_{n}\left( r\right) \right) \right\} _{n\in N_{2}}$
converges uniformly on $\left[ 0,c_{0}\right] $ to $\left( u,v\right) \in
C\left( \left[ 0,c_{0}\right] \right) \times C\left( \left[ 0,c_{0}\right]
\right) $ through $N_{2}$ (see L\"{u}-O'Regan-Agarwal \cite{LU}). The limit
function $\left( u,v\right) $ constructed in this way will be nonnegative,
radially symmetric and nondecreasing with respect to $r$ and is a solution
of system (\ref{11}). Moreover, the radial solutions of (\ref{11}) with $
u\left( 0\right) =a_{1},$ $v\left( 0\right) =a_{2}$ satisfy
\begin{eqnarray}
u\left( r\right) &=&a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi
_{1}\left( t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left(
s\right) f_{1}\left( u\left( s\right) ,v\left( s\right) \right) ds\right) dt,
\text{ }r\geq 0, \label{eq1} \\
v\left( r\right) &=&a_{2}+\int_{0}^{r}\Psi _{2}^{-1}\left( \frac{1}{\xi
_{2}\left( t\right) }\int_{0}^{t}\xi _{2}\left( s\right) p_{2}\left(
s\right) f_{2}\left( u\left( s\right) ,v\left( s\right) \right) ds\right) dt,
\text{ }r\geq 0. \label{eq2}
\end{eqnarray}
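Although the argument is purely analytic, the monotone iteration behind (\ref{eq1}) and (\ref{eq2}) is easy to run numerically. The sketch below is ours and not part of the paper: it assumes the model data $\Psi _{1}=\Psi _{2}=\mathrm{id}$, $\xi _{1}\left( r\right) =\xi _{2}\left( r\right) =r$, $p_{1}=p_{2}\equiv 1$, $f_{1}\left( u,v\right) =f_{2}\left( u,v\right) =\sqrt{uv}$ and $a_{1}=a_{2}=1$. By symmetry $u=v$ and the limit solves $u''+u'/r=u$, $u\left( 0\right) =1$, $u'\left( 0\right) =0$, i.e., $u=I_{0}$, the modified Bessel function, so the iteration should approach $I_{0}(1)\approx 1.26607$ at $r=1$.

```python
import math

def cumtrapz(ys, h):
    """Cumulative trapezoidal integral of samples ys on a uniform grid of step h."""
    out = [0.0]
    for i in range(1, len(ys)):
        out.append(out[-1] + 0.5 * (ys[i - 1] + ys[i]) * h)
    return out

def iterate(a1, a2, R=1.0, M=400, steps=60):
    """Monotone iteration (u_n, v_n) for the integral system with model data (our choice)."""
    h = R / M
    r = [i * h for i in range(M + 1)]
    f = lambda u, v: math.sqrt(u * v)  # f_1 = f_2: positive and nondecreasing
    u, v = [a1] * (M + 1), [a2] * (M + 1)
    for _ in range(steps):
        inner = cumtrapz([r[i] * f(u[i], v[i]) for i in range(M + 1)], h)
        # (1/xi(r)) * inner integral; the ratio tends to 0 as r -> 0
        g = [inner[i] / r[i] if i else 0.0 for i in range(M + 1)]
        u = [a1 + w for w in cumtrapz(g, h)]
        v = [a2 + w for w in cumtrapz(g, h)]
    return r, u, v

r, u, v = iterate(1.0, 1.0)
print(u[-1])  # approximately I_0(1) = 1.26607
```

As in the proof, the iterates are nondecreasing in $r$ and increase monotonically towards the fixed point of the integral operator.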
In the case $\underline{P}_{1}\left( \infty \right) =\underline{P}_{2}\left(
\infty \right) =\infty $, we observe that
\begin{eqnarray}
u\left( r\right) &=&a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi
_{1}\left( t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left(
s\right) f_{1}\left( u\left( s\right) ,v\left( s\right) \right) ds\right) dt
\notag \\
&\geq &a_{1}+\int_{0}^{r}\Psi _{1}^{-1}\left( \frac{1}{\xi _{1}\left(
t\right) }\int_{0}^{t}\xi _{1}\left( s\right) p_{1}\left( s\right)
f_{1}\left( a_{1},a_{2}+\underline{\theta }_{2}(f_{2}\left(
a_{1},a_{2}\right) )P_{2}\left( s\right) \right) ds\right) dt \label{ints1}
\\
&=&a_{1}+\underline{P}_{1}\left( r\right) . \notag
\end{eqnarray}
We repeat the argument applied in the proof of (\ref{ints1}) to obtain
\begin{eqnarray}
v\left( r\right) &=&a_{2}+\int_{0}^{r}\Psi _{2}^{-1}\left( \frac{1}{\xi
_{2}\left( t\right) }\int_{0}^{t}\xi _{2}\left( s\right) p_{2}\left(
s\right) f_{2}\left( u\left( s\right) ,v\left( s\right) \right) ds\right) dt
\notag \\
&\geq &a_{2}+\int_{0}^{r}\Psi _{2}^{-1}\left( \frac{1}{\xi _{2}\left(
t\right) }\int_{0}^{t}\xi _{2}\left( s\right) p_{2}\left( s\right)
f_{2}\left( a_{1}+\underline{\theta }_{1}(f_{1}\left( a_{1},a_{2}\right)
)P_{1}\left( s\right) ,a_{2}\right) ds\right) dt \label{ints2} \\
&=&a_{2}+\underline{P}_{2}\left( r\right) . \notag
\end{eqnarray}
By taking limits in (\ref{ints1}) and (\ref{ints2}), we obtain
\begin{equation*}
\lim_{r\rightarrow \infty }u\left( r\right) =\infty \text{ and
\lim_{r\rightarrow \infty }v\left( r\right) =\infty .
\end{equation*}
Consequently, $\left( u,v\right) $ is an entire large solution of (\ref{11}).
The next purpose of the paper is to give a sufficient condition to obtain an
entire bounded solution to (\ref{11}). Our result in this case is the
following:
\begin{theorem}
\label{th2}The system (\ref{11}) has one positive radial solution $\left(
u,v\right) \in C^{1}\left( \left[ 0,\infty \right) \right) \times
C^{1}\left( \left[ 0,\infty \right) \right) $ given that\textit{\ }$\mathcal{
H}_{1}\left( \infty \right) =\mathcal{H}_{2}\left( \infty \right) =\infty $
and \textrm{(P1)}, \textrm{(C1)}, \textrm{(C2)} hold true. Moreover, if
$\overline{P}_{1}\left( \infty \right) <\infty $ and $\overline{P}_{2}\left(
\infty \right) <\infty $ then
\begin{equation*}
\lim_{r\rightarrow \infty }u\left( r\right) <\infty \text{ and
\lim_{r\rightarrow \infty }v\left( r\right) <\infty .
\end{equation*}
\end{theorem}
\subparagraph{\textbf{Proof of Theorem \protect\ref{th2}:}}
The existence part is proved in Theorem \ref{th1}. Assume $\overline{P}
_{1}\left( \infty \right) <\infty $ and $\overline{P}_{2}\left( \infty
\right) <\infty $. Proceeding as in the proof of (\ref{int}) and (\ref{int2}
) with the integral equations (\ref{eq1}) and (\ref{eq2}), one gets the
estimate
\begin{equation*}
u\left( r\right) \leq \mathcal{H}_{1}^{-1}\left( \overline{P}_{1}\left(
\infty \right) \right) <\infty \text{ and }v\left( r\right) \leq \mathcal{H}
_{2}^{-1}\left( \overline{P}_{2}\left( \infty \right) \right) <\infty \text{
for all }r\geq 0.
\end{equation*}
Thus $\left( u,v\right) $ is a positive entire bounded solution of the
system (\ref{11}).
Concerning the existence of semifinite entire large solutions to (\ref{11}),
we have the following:
\begin{theorem}
\label{th3}The system (\ref{11}) has one positive radial solution $\left(
u,v\right) \in C^{1}\left( \left[ 0,\infty \right) \right) \times
C^{1}\left( \left[ 0,\infty \right) \right) $ given that $\mathcal{H}
_{1}\left( \infty \right) =\mathcal{H}_{2}\left( \infty \right) =\infty $
and \textrm{(P1)}, \textrm{(C1)}, \textrm{(C2)} hold true. Moreover, the
following hold:
1)\quad If $\overline{P}_{1}\left( \infty \right) <\infty $ and $\underline{P
}_{2}\left( \infty \right) =\infty $ then
\begin{equation*}
\lim_{r\rightarrow \infty }u\left( r\right) <\infty \text{ and
\lim_{r\rightarrow \infty }v\left( r\right) =\infty .
\end{equation*}
2)\quad If $\underline{P}_{1}\left( \infty \right) =\infty $ and $\overline{P
}_{2}\left( \infty \right) <\infty $ then
\begin{equation*}
\lim_{r\rightarrow \infty }u\left( r\right) =\infty \text{ and
\lim_{r\rightarrow \infty }v\left( r\right) <\infty .
\end{equation*}
\end{theorem}
\subparagraph{\textbf{Proof of Theorem \protect\ref{th3}:}}
The existence part is proved in Theorem \ref{th1}.
\textbf{1):} As in the proof of Theorem \ref{th1} and Theorem \ref{th2}, we
have
\begin{equation*}
u\left( r\right) \leq \mathcal{H}_{1}^{-1}\left( \overline{P}_{1}\left(
\infty \right) \right) <\infty \text{ and }v\left( r\right) \geq a_{2}+
\underline{P}_{2}\left( r\right) .
\end{equation*}
Observing that $\overline{P}_{1}\left( \infty \right) <\infty $ and
\underline{P}_{2}\left( \infty \right) =\infty $ the above relations yield
\begin{equation*}
\lim_{r\rightarrow \infty }u\left( r\right) <\infty \text{ and
\lim_{r\rightarrow \infty }v\left( r\right) =\infty \text{.}
\end{equation*}
This completes the proof.
\textbf{2): }Arguing as above, we obtain
\begin{equation}
u\left( r\right) \geq a_{1}+\underline{P}_{1}\left( r\right) \text{ and }
v\left( r\right) \leq \mathcal{H}_{2}^{-1}\left( \overline{P}_{2}\left(
r\right) \right) . \label{t2}
\end{equation}
Our conclusion follows now by letting $r\rightarrow \infty $ in (\ref{t2}).
We now propose a more refined question concerning the solutions of system
(\ref{11}). In analogy with Theorems \ref{th1}-\ref{th3}, we can also prove
the following three theorems. The first is the following:
\begin{theorem}
\label{th4}The system (\ref{11}) has one positive bounded radial solution
$\left( u,v\right) \in C^{1}\left( \left[ 0,\infty \right) \right) \times
C^{1}\left( \left[ 0,\infty \right) \right) $ given that $\overline{P}
_{1}\left( \infty \right) <\mathcal{H}_{1}\left( \infty \right) <\infty $,
$\overline{P}_{2}\left( \infty \right) <\mathcal{H}_{2}\left( \infty \right)
<\infty $, \textrm{(P1), (C1),\ (C2)} hold true. Moreover,
\begin{equation*}
\left\{
\begin{array}{l}
a_{1}+\underline{P}_{1}\left( r\right) \leq u\left( r\right) \leq \mathcal{H}
_{1}^{-1}\left( \overline{P}_{1}\left( r\right) \right) , \\
a_{2}+\underline{P}_{2}\left( r\right) \leq v\left( r\right) \leq \mathcal{H}
_{2}^{-1}\left( \overline{P}_{2}\left( r\right) \right)
\end{array}
\right.
\end{equation*}
\end{theorem}
\subparagraph{\textbf{Proof of Theorem \protect\ref{th4}: }}
The existence part is proved in Theorem \ref{th1}. Next, by a simple
calculation together with (\ref{ints}) and the conditions of the theorem we
obtain
\begin{equation*}
\mathcal{H}_{1}\left( u_{n}\left( r\right) \right) \leq \overline{P}
_{1}\left( \infty \right) <\mathcal{H}_{1}\left( \infty \right) <\infty
\text{ and }v_{n}\left( r\right) \leq \mathcal{H}_{2}^{-1}\left( \overline{P}
_{2}\left( \infty \right) \right) <\infty .
\end{equation*}
On the other hand, since $\mathcal{H}_{1}^{-1}$ is strictly increasing on
$\left[ 0,\mathcal{H}_{1}\left( \infty \right) \right) $, we find that
\begin{equation*}
u_{n}\left( r\right) \leq \mathcal{H}_{1}^{-1}\left( \overline{P}_{1}\left(
\infty \right) \right) <\infty ,
\end{equation*}
and then the non-decreasing sequences $\left\{ u_{n}\right\} _{n\geq 0}$ and
$\left\{ v_{n}\right\} _{n\geq 0}$ are bounded above for all $r\geq 0$ and
all $n$. Now we use this observation to conclude
\begin{equation*}
\left( u_{n}\left( r\right) ,v_{n}\left( r\right) \right) \overset{
n\rightarrow \infty }{\rightarrow }\left( u\left( r\right) ,v\left( r\right)
\right)
\end{equation*}
and then the limit functions $u$ and $v$ are positive entire bounded radial
solutions of system (\ref{11}). This completes the proof.
\begin{theorem}
\label{th5}Assume \textrm{(P1), (C1) }and\textrm{\ (C2) }hold true. Then the
following hold:
i)\quad The system (\ref{11}) has one positive radial solution $\left(
u,v\right) \in C^{1}\left( \left[ 0,\infty \right) \right) \times
C^{1}\left( \left[ 0,\infty \right) \right) $ such that $\lim_{r\rightarrow
\infty }u\left( r\right) =\infty $ and $\lim_{r\rightarrow \infty }v\left(
r\right) <\infty $ given that $\mathcal{H}_{1}\left( \infty \right) =\infty
$, $\underline{P}_{1}\left( \infty \right) =\infty $ and $\overline{P}
_{2}\left( \infty \right) <\mathcal{H}_{2}\left( \infty \right) <\infty $.
ii)\quad The system (\ref{11}) has one positive radial solution $\left(
u,v\right) \in C^{1}\left( \left[ 0,\infty \right) \right) \times
C^{1}\left( \left[ 0,\infty \right) \right) $ such that $\lim_{r\rightarrow
\infty }u\left( r\right) <\infty $ and $\lim_{r\rightarrow \infty }v\left(
r\right) =\infty $ given that $\overline{P}_{1}\left( \infty \right) <
\mathcal{H}_{1}\left( \infty \right) <\infty $ and $\mathcal{H}_{2}\left(
\infty \right) =\infty $, $\underline{P}_{2}\left( \infty \right) =\infty $.
\end{theorem}
\subparagraph{\textbf{Proof of Theorem \protect\ref{th5}:}}
The proof for these cases is similar to the above and is therefore omitted.
\section{Introduction and main result}
The concept of Poissonian pair correlations has its origin in quantum mechanics, where the spacings of energy levels of integrable systems were studied. See for example \cite{not9} and the references therein for detailed information on that topic. Rudnick and Sarnak first studied this concept from a purely mathematical point of view and over the years the topic has attracted wide attention, see e.g., \cite{ not5, not10, not1, not2, not3}. Recently, Aistleitner, Larcher and Lewko (see \cite{not6}) established a strong link between the concept of Poissonian pair correlations and the additive energy of a finite set of integers, a notion that plays an important role in many mathematical fields, e.g., in additive combinatorics. Roughly speaking, they proved that if the first $N$ elements of an increasing sequence of distinct integers $(a_n)_{n \in {\mathbb N}}$ have an arbitrarily small energy saving, then $(\lbrace a_n \alpha \rbrace)_{n \in {\mathbb N}}$ has Poissonian pair correlations for almost all $\alpha$. This result implies the metrical Poissonian pair correlation property for lacunary sequences as well. In this paper the authors also raised the question whether an increasing sequence of distinct integers with maximal order of additive energy can have Poissonian pair correlations for almost all $\alpha$. Jean Bourgain showed that the answer to this question is negative, see the appendix of \cite{not6} for details and a second problem which was also solved by Bourgain. Recently, the results of Bourgain have been further extended, see \cite{not9, not14, not15, not16}. \\
Let $\| \cdot \|$ denote the distance to the nearest integer. A sequence $(x_n)_{n \in {\mathbb N}}$ of real numbers in $[0,1)$ has Poissonian pair correlations if
\begin{equation}\label{eq:pc1}
\lim_{N \to \infty} \frac{1}{N} \# \left\lbrace 1 \leq l \neq m \leq N: \| x_l - x_m \| \leq \frac{s}{N} \right\rbrace = 2s
\end{equation}
for every $s \geq 0$. Due to a result by Grepstad and Larcher \cite{not8} (see also \cite{not7, not12}), we know that a sequence which satisfies property (\ref{eq:pc1}) is also uniformly distributed in $[0,1)$, i.e., it satisfies
\begin{equation*}
\lim_{N \to \infty} \frac{1}{N} \# \lbrace 1 \leq n \leq N: x_n \in [a,b) \rbrace = b-a
\end{equation*}
for all $0 \leq a < b \leq 1$. Note that the converse does not hold in general. For instance, the Kronecker sequence $(\lbrace n\alpha \rbrace)_{n \in {\mathbb N}}$ does not have this property for any real $\alpha$, a fact that follows from a continued fraction argument or from the main theorem in \cite{not19} in combination with the famous Three Gap Theorem, see \cite{not4}. Poissonian pair correlation is a typical property of a sequence. Random sequences, i.e., almost all sequences, have Poissonian pair correlations. Nevertheless, it seems to be extremely difficult to give explicit examples of sequences with Poissonian pair correlations. We note that $(\lbrace \sqrt{n} \rbrace)_{n \in {\mathbb N}}$ has Poissonian pair correlations, \cite{not17} (see \cite{not18} for another explicit construction). Apart from that -- to the best of our knowledge -- no other explicit examples are known. In particular, until now we do not know any single explicit construction of a real number $\alpha$ such that the sequence of the form $(\lbrace a_n \alpha \rbrace)_{n \in {\mathbb N}}$ has Poissonian pair correlations.\\
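The limit (\ref{eq:pc1}) can be probed numerically. The following sketch (the function name is ours) evaluates the left-hand side of (\ref{eq:pc1}) for i.i.d. uniform points, which almost surely have Poissonian pair correlations, so the printed value should be close to $2s=2$:

```python
import random

def pair_correlation_stat(xs, s):
    """(1/N) * #{ 1 <= l != m <= N : ||x_l - x_m|| <= s/N }, counting ordered pairs."""
    n = len(xs)
    count = 0
    for l in range(n):
        for m in range(n):
            if l == m:
                continue
            d = abs(xs[l] - xs[m])
            d = min(d, 1.0 - d)  # distance to the nearest integer (torus distance)
            if d <= s / n:
                count += 1
    return count / n

random.seed(1)
xs = [random.random() for _ in range(1000)]
print(pair_correlation_stat(xs, 1.0))  # close to 2s = 2
```

For a fixed realization the value fluctuates around $2s$ with an error of order $N^{-1/2}$, which is consistent with the almost-sure statement.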
We recall that the sequence $(\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$ has Poissonian pair correlations for almost all $\alpha$. In this note, we study the distribution of the pair correlations of the sequence $(\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$, where $\alpha$ is the Champernowne constant in base $2$, i.e., $\alpha = 0.1101110010111011 \ldots_2$. It is a well-known fact that the Champernowne constant in base $2$ is normal to base $2$. Moreover, we know that the sequence $(\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$ is uniformly distributed modulo $1$ if and only if $\alpha$ is normal, see e.g., \cite{not11}. If we want to investigate whether the distribution of the pair correlations for some explicitly given sequence is Poissonian, i.e., satisfies property (\ref{eq:pc1}), the sequence has to be uniformly distributed modulo $1$. Therefore, if we investigate the distribution of the spacings between the sequence elements of $(\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$, the only reasonable choice for $\alpha$ is a normal number. We obtain the following result.
\begin{theorem}
The sequence $(\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$ where $\alpha$ is the Champernowne constant in base $2$, i.e., $\alpha = 0.1101110010111011 \ldots_2$ \textbf{does not} have Poissonian pair correlations.
\end{theorem}
This paper was initiated by the conjecture of G.\ Larcher (mentioned during a personal discussion) that all normal numbers are Poissonian, due to the lacunarity of $(2^n)_{n \in {\mathbb N}}$. To make it more tangible why this conjecture is reasonable, we recall that Kronecker sequences are not Poissonian for any $\alpha$ and $(\lbrace \alpha n^d\rbrace)_{n \in {\mathbb N}}$, $d \geq 2$, is Poissonian for almost all $\alpha$, whereby it is known that $\alpha$ has to satisfy some Diophantine condition, see e.g., \cite{not3}. Hence, one would expect the sequence $(\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$ to have the Poissonian property for all normal numbers $\alpha$, as it shows less structure than the Kronecker and polynomial sequences. The motivation to study the sequence described in Theorem 1 was to find the first explicit example of a sequence having Poissonian pair correlations. At least our result allows us to deduce immediately that the sequence $(\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$ \textbf{cannot} have Poissonian pair correlations \textbf{for all} normal numbers $\alpha$. \\
To prove Theorem 1 we use elementary combinatorics. We give a short outline of the proof. Let $e, d$ be two integers, where $d=2^e$ is understood to be very large compared to $e$. Further, we set $s=1$ and $N=2^{d+e}$ in (\ref{eq:pc1}). The reason for choosing $N$ in such a manner is the following. The Champernowne constant is the concatenation of the numbers which have a digit expansion (with a leading 1) in base $2$ of length $1, 2, 3, \ldots, d, \ldots$ and so forth. In order to account for all blocks of words of length $1, \ldots, d$, we have to choose $N$ large enough, i.e., in our case at the beginning of the block of words having length $d+1$. Note that the length of the block containing the words of length $d$ is $d2^{d-1}=2^{d+e-1}$ and for a very large $e$ all previous blocks of words with length $1, \ldots, d-1$ have in total approximately this length. We then count the occurrence of bit patterns (in the block of words of length $d$), which correspond to shifts of $\alpha$ having a distance (i.e., the Euclidean distance on $\mathbb{R}$) $<1/N$ (we will henceforth abbreviate to simply saying that the patterns have this distance). If we have two patterns which match in the first $d+e$ bits or which are of the form $\underbrace{a_1 a_2 \ldots 0 1 \ldots 1}_{d+e} b_1 b_2 b_3 \ldots$ and $\underbrace{a_1 a_2 \ldots 1 0 \ldots 0}_{d+e} c_1 c_2 c_3 \ldots $ with $c_1 c_2 c_3 \ldots < b_1 b_2 b_3 \ldots$, then their distance is $< 1/N$. It turns out that already the number of pairs matching in the first $d+e$ bits (in the block of words of length $d$) yields too large a contribution. The second case is studied in the appendix. Though the number of pairs (with distance $<1/N$) is small compared to the first case, it is of interest in its own right to see how to count the occurrence of such patterns.
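For experimentation, the objects appearing in this outline are easy to generate. The helper sketch below (our own naming) builds the binary Champernowne digit string and the full block of words of length $d$; the first $16$ digits reproduce $\alpha = 0.1101110010111011 \ldots_2$ from above:

```python
def champernowne_bits(n_words):
    """Concatenate the binary expansions (with leading 1) of 1, 2, ..., n_words."""
    return "".join(format(k, "b") for k in range(1, n_words + 1))

def block_of_length(d):
    """The block of all words of length d: binary expansions of 2^(d-1), ..., 2^d - 1."""
    return "".join(format(k, "b") for k in range(2 ** (d - 1), 2 ** d))

print(champernowne_bits(8)[:16])  # 1101110010111011
print(block_of_length(3))         # 100101110111
print(len(block_of_length(10)))   # 10 * 2**9 = 5120
```

The last line checks the block-length count $d2^{d-1}$ used in the choice of $N$.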
\section{Proof of the main Theorem}
\begin{proof}
Let $s=1$ and set $N=2^{d+e}$ where $d$ and $e$ are defined as in the previous section (at first we will not use the relation between $d$ and $e$, though).
Let a bit pattern $a_1\dots a_w$ be given where $w=d+e$, $e>0$. We are aiming to count
the occurrences of the pattern in the full block
$$c_{0,1}\dots c_{0,d}\dots c_{2^{d-1}-1,1}\dots c_{2^{d-1}-1,d},$$
and put $c_{n}:=2^{d-1}+n = \sum_{i=1}^d c_{n,i}2^{d-i}$. Note that $c_{i,1}=1$ for $i=0, \ldots, 2^{d-1}-1$.
That is, the pattern has an overlap of $e$ bits to the word length $d$.
The overlap $e$ is understood to be small.
First, we investigate the patterns where the first $e$ bits
match the last ones, i.e., $a_i=a_{d+i}$ for $i=1,\dots,e$.
We denote the index before the start of a possible matching word by
$z\geq0$, i.e., if a match occurs then there is an $n$ such that
$$a_{z+1}a_{z+2}\dots=c_{n,1}c_{n,2}\dots$$ and at least one of
$$a_{z }a_{z-1}\dots=c_{n-1,d}c_{n-1,d-1}\dots,$$
$$a_{z+d+1}a_{z+d+2}\dots=c_{n+1,1}c_{n+1,2}\dots.$$
\textbf {Basic Fact (BF1):} for a match to occur, $a_{z}$ must not
equal $a_{z+d}$ since these bits correspond to the least significant
bits of consecutive digit expansions $c_{n-1},c_n$.
\textbf{BF2:}
As a first consequence of BF1, $z$ must be zero or greater
than $e$ since otherwise $a_z=a_{z+d}$ and similarly $z$ must be at
most $d$, else $a_{z-d}=a_{z}$, i.e., $z\in\{0,e+1,\dots,d\}$.
\textbf{BF3:}
Furthermore, for a match with $z>0$ to occur it is necessary that
$a_{z+1}=1$ and at least one zero occurs in the sequence $a_{e+1}\dots
a_z$. This excludes subpatterns of the forms \[ a_{e+1}\dots a_{z+1} =
1\dots10 \text{ or } 1\dots11 , \] which cannot occur due to the
fact that in this case $c_n=c_{n-1}+1$ has carries affecting $a_d\dots
a_{d+e}$.
We now make case distinctions according to the number $k$ of ones in
the `middle block' $a_{e+1}\dots a_d$.
If $k=0$ the pattern can occur in the full block only if $z=0$ and
$a_{z+1}=c_{n,1}=1$; for $z>0$ this cannot happen since $a_{z+1}$ lies in the
middle block, and for $z=0$ it fails if $a_1=0$.
If $k=1$ this type of pattern (or `meta-pattern') can occur
in the case $a_{e+1}=1$ only if $a_1=1$ and $z=0$; BF3 forbids
$z>0$ and for $z=0$ again $a_{z+1}=a_1=1$ is necessary. If the $1$
appears later in the middle block, again $z=0$ is possible, if $a_1=1$
or $z=j$ if $j+1$ is the index of the $1$. This gives
\begin{itemize}
\item $2^{e-1} + 2^{e-1}(d-e-1)$ patterns occurring only one time
\item $2^{e-1}(d-e-1)$ patterns occurring two times.
\end{itemize}
Let us also look at the case $k=2$:
first, $a_{e+1}=a_{e+2}=1$ by BF2 again necessitates $z=0$ and $a_1=1$,
and can occur only in one match. If $a_{e+1}=1\neq a_{e+2}$ there are
one or two possible matches in dependence of $a_1=0$ or $1$. Finally,
two or three possible matches can happen if both ones occur later in
the middle block. The tally thus is:
\begin{itemize}
\item one match: $2^{e-1}(1+(d-e-2))$ patterns
\item two matches: $2^{e-1}((d-e-2)+\binom{d-e-1}{2})$ patterns
\item three matches: $2^{e-1}\binom{d-e-1}{2}$ patterns
\end{itemize}
We can now present the general case $2<k<d-e$:
\begin{itemize}
\item $a_1=0, a_{e+1}=1,a_{e+2}=0$:
we have $2^{e-1} \binom{d-e-2}{k-1} $
patterns having $k-1$ matches
\item $a_1=0, a_{e+1}=a_{e+2}=1$:
let $a_{e+1}=\dots=a_{e+j}=1\neq a_{e+j+1}$, i.e., there are $j$
consecutive ones at the start of the middle block followed by a zero.
Then there are $k-j$ ones left to distribute on $d-e-j-1$ places.
We have a match for each of those ones, so
there are\\ $2^{e-1} \binom{d-e-j-1}{k-j}$ patterns having $k-j$
matches, where $j=1,\dots,k$.
\item $a_1=0, a_{e+1}=0$: we have $2^{e-1} \binom{d-e-1}{k} $
patterns having $k$ matches
\item $a_1=1, a_{e+1}=0$: we have $2^{e-1}\binom{d-e-1}{k}$
patterns having $k+1$ matches
\item $a_1=1, a_{e+1}=1$:
let $a_{e+1}=\dots=a_{e+j}=1\neq a_{e+j+1}$, i.e., there are $j$
consecutive ones at the start of the middle block followed by a zero.
Then there are $k-j$ ones left to distribute on $d-e-j-1$ places.
We have a match for each of those ones plus one attributed to $z=0$, i.e.,
there are\\ $2^{e-1} \binom{d-e-j-1}{k-j}$ patterns having $k-j+1$
matches, where $j=1,\dots,k$.
\end{itemize}
The case $k=d-e$ has only patterns matching just once.
Putting everything together, we get the following formula for the number of
pairs $c_{n_1,i_1},c_{n_2,i_2}$ such that there is a match in
(at least) $w$ bits. Note that the pairs are ordered.
\begin{align}
2^{e} \sum_{k=1}^{d-e-1}
\Bigg(&
\sum_{j=0}^{k-1} \binom{k-j}{2} \binom{d-e-j-1}{k-j} + \nonumber \\
&
\sum_{j=0}^{k-1} \binom{k-j+1}{2} \binom{d-e-j-1}{k-j}
\Bigg) \nonumber
\\=
2^{e} \sum_{k=1}^{d-e-1}
&
\sum_{j=0}^{k-1} {(k-j)}^{2} \binom{d-e-j-1}{k-j} \nonumber
\end{align}
By Stirling's formula, $\binom{2n}{n} \gg \frac{2^{2n}}{\sqrt{n}}$, and considering $j=0$ and $k=\lfloor \frac{d-e-1}{2} \rfloor$ in the above sum, we obtain a contribution of magnitude
\begin{equation*}
2^ek^2 \binom{d-e-1}{k} \gg \frac{d^2}{\sqrt{d}}2^d = \sqrt{d} 2^{d+e}.
\end{equation*}
Therefore, if we divide this amount of pairs by $N = 2^{d+e}$ (recall that we set $d=2^e$) and consider $e \to \infty$, we obtain $+\infty$ in the limit and deduce that the pair correlations distribution cannot be asymptotically Poissonian. \\
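The divergence can also be observed by evaluating the double sum directly. In the sketch below (our naming) the ratio of the pair count to $N=2^{d+e}$ with $d=2^{e}$ grows with $e$ and already exceeds $2s=2$ at $e=5$:

```python
import math

def matching_pairs(d, e):
    """The closed formula 2^e * sum_k sum_j (k-j)^2 * C(d-e-j-1, k-j) from above."""
    total = 0
    for k in range(1, d - e):          # k = 1, ..., d-e-1
        for j in range(k):             # j = 0, ..., k-1
            total += (k - j) ** 2 * math.comb(d - e - j - 1, k - j)
    return 2 ** e * total

ratios = [matching_pairs(2 ** e, e) / 2 ** (2 ** e + e) for e in range(2, 6)]
print(ratios)  # strictly increasing; the last value exceeds 2
```

For instance, $d=8$, $e=3$ gives $2^{3}\cdot 111=888$ pairs, i.e., a ratio of $888/2^{11}\approx 0.43$, while $e=5$ already gives a ratio of about $5.1$.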
For the sake of completeness, we study two further types of patterns. We will see that these two structures of patterns yield a negligible amount of pairs.
The next type of pattern is where the matching $c_n$ ends in a string
of ones, inducing a chain of carries for $c_{n+1}$. I.e., there are
$j_0,j_1$, $1\leq j_0 \leq e < j_1\leq d-1$ such that
\[
a_{j_0}a_{j_0+1}\dots a_e a_{e+1}\dots a_{j_1}a_{j_1+1} =
01\dots11\dots10
\]
and $a_i=a_{d+i}$ for $1\leq i<j_0$, $a_i=1-a_{d+i}$ for $j_0\leq
i\leq e$. Again, a possible matching $c_n$ can obviously not start
with an index earlier than $e+1$ since then inevitably mismatches
$a_i\neq a_{d+i}$ that cannot be accounted for by carries occur. But
then each of the consecutive ones can be taken as start of a
$c_n$-block, i.e., $z=e,\dots,j_1-1$ are all possible, giving $j_1-e$
matches. Given $j_0,j_1$ there are $2^{j_0-1+\,d-j_1-1}$ such patterns.
For the case $j_1=d$ there are $d-e$ matches as well, plus an
additional one if $a_1=1$. Both subcases have $2^{j_0-2}$ according
patterns for $j_0\geq2$ and additionally there is one further case
for $j_0=1$ with $d-e$ matches, the pattern $01^{d}0^{e-1}$.
The number of ordered pairs thus equals:
\begin{align*}
2 \Bigg( &\sum_{j_0=1}^{e} \sum_{j_1=e+1}^{d-1}
\binom{j_1-e}{2} 2^{j_0+d-j_1-2} \\&
+ \left( \binom{d-e}{2}+\binom{d-e+1}2\right)
\sum_{j_0=2}^e 2^{j_0-2}
+ \binom{d-e}2 \Bigg) \\
= \frac{2^e-1}{2^{e-1}}&(2^d-2^e)-(d-e)2^e
\quad <\quad 2^{d+1}.
\end{align*}
The next type of pattern, which only yields a negligible amount of relevant pairs, is the one where $a_1 \ldots a_e =1 \ldots 1$ and $a_{e+1} \ldots a_{z+1} = 1 \ldots 1$, where $z \in \lbrace e, \ldots, d-1 \rbrace$. As a consequence thereof $a_{d+1} \ldots a_{d+e} = 0 \ldots 0$. Hence, we have
\begin{equation*}
\sum_{i=3}^{d-e} \binom{i-1}{2} = \frac{1}{6} (d-e-2)(d-e-1)(d-e)
\end{equation*}
pairs with distance $ < 1/N$.
\end{proof}
\begin{remark}
In the proof we have only studied the case where a fixed bit pattern of length $w$ overlaps two words of length $d$. Of course, an overlap of the pattern with three words might also occur, but these cases yield a small number of pairs with prescribed distance. Therefore we have omitted the exact study of these structures. If the relative number of pairs in the block of words of length $d$ had been less than $2s$, then a study of the occurrence of the pattern in the block of words of length $d-1$ (and so forth) would have been necessary.
\end{remark}
\begin{remark}
The techniques from above can of course be adapted to any other base $b$, i.e., we can conclude that the Champernowne constant in base $b \geq 2$ (note that the Champernowne constant in base $b$ is normal to base $b$) is not Poissonian.
\end{remark}
\section{Open Problems and Outlook}
In this section we first want to state an open problem, which involves the notion of weak pair correlations (introduced by Steinerberger in \cite{not12}), a concept that relaxes the requirements of (\ref{eq:pc1}). \\
\textbf{Problem 1:} Does the sequence $(x_n)_{n \in {\mathbb N}}= (\lbrace 2^n \alpha \rbrace)_{n \in {\mathbb N}}$, where $\alpha$ is the Champernowne constant in base $2$, satisfy the notion of weak pair correlation, i.e., is there an $0 <\beta < 1$, such that
\begin{equation*}
\lim_{N \to \infty} \frac{1}{N^{2- \beta}} \# \left\lbrace 1 \leq l \neq m \leq N: \| x_l - x_m \| \leq \frac{s}{N^{\beta}} \right\rbrace = 2s
\end{equation*}
for every $s \geq 0$? \\
Further, we still need to find an explicit construction of an $\alpha$ such that a sequence of the form $(\lbrace a_n \alpha \rbrace)_{n \in {\mathbb N}}$ has Poissonian pair correlations and maybe criteria which relax the definition of Poissonian pair correlations, e.g., that it possibly suffices to show that (\ref{eq:pc1}) holds for $s \in \mathbb{N}$ only. A possible approach would be to modify the Champernowne constant in a certain way, e.g., by shifts, such that we avoid the situation that we have too many patterns where the first and last $e$ bits match.
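As a first numerical step towards Problem 1, the weak statistic can be sampled directly. For i.i.d. uniform points its expectation equals $2s\left( 1-1/N\right) $ for every fixed $0<\beta <1$, so the printed value in the sketch below (our naming) should be close to $2s$:

```python
import random

def weak_pair_stat(xs, s, beta):
    """(1/N^(2-beta)) * #{ l != m : ||x_l - x_m|| <= s/N^beta }, ordered pairs."""
    n = len(xs)
    thresh = s / n ** beta
    count = 0
    for l in range(n):
        for m in range(n):
            if l == m:
                continue
            d = abs(xs[l] - xs[m])
            d = min(d, 1.0 - d)  # torus distance
            if d <= thresh:
                count += 1
    return count / n ** (2 - beta)

random.seed(7)
xs = [random.random() for _ in range(1000)]
print(weak_pair_stat(xs, 1.0, 0.5))  # close to 2s = 2
```

Whether this value stays close to $2s$ for the sequence from Theorem 1 and some $0<\beta <1$ is exactly the content of Problem 1.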
\section{Appendix}
Though the results presented here are not needed for the proof of Theorem 1, they give additional interesting information about the pair correlation structure of the Champernowne constant; therefore, we add them as an appendix. \\
In the previous section we have counted the occurrence of a bit pattern $a_1 \ldots a_w$ in the full block of words of length $d$. Now, we consider patterns of the form $b:=a_1 a_2 \ldots a_w b_1b_2b_3 \ldots = \underbrace{a_1 a_2 \ldots a_j 0 1 \ldots 1}_{w} b_1 b_2 b_3 \ldots$ and $c:=a^{'}_1 a^{'}_2 \ldots a^{'}_w c_1c_2c_3 \ldots = \underbrace{a_1 a_2 \ldots a_j 1 0 \ldots 0}_{w} c_1 c_2 c_3 \ldots$, with $b_1 b_2 b_3 \ldots > c_1 c_2 c_3 \ldots$. These two types of bit words also have a distance less than $1/N$. \\
We only study the case $b_1=1$ and $c_1=0$ in detail. The ideas used for the special case can be readily generalized. Therefore, we are aiming at counting the occurrences of bit blocks of the form $B: = \underbrace{a_1 a_2 \ldots a_j 0 1 \ldots 1}_{w} 1$ (in the full block of words of length $d$) and the ones of the form $C: =\underbrace{a_1 a_2 \ldots a_j 1 0 \ldots 0}_{w} 0$. \\
\begin{coro}
The patterns of the form $B$ and $C$ yield for $j=d$ at least
\begin{equation}\label{eq:eq122}
2^{d-e-1}(d-e-5)
\end{equation}
pairs with distance less than $1/N$. For $j > d$ we obtain at least
\begin{equation}\label{eq:eq123}
2^{-1 - e} (2^e-2) (2^{2 + e} + 2^d d + 2^{1 + e} d -
2^d e - 2^{1 + e} e - 2^{2 + d})
\end{equation}
pairs with distance less than $1/N$.
\end{coro}
\begin{proof}
We start by studying the occurrence of the first pattern. Note that we first consider the case where the first and the last $e$ bits of $\underbrace{a_1 a_2 \ldots a_j 0 1 \ldots 1}_{w}$ match. In the following, we distinguish several cases, depending on the position of the index $j$. Later, we will see that the only relevant cases are the ones where $j \geq d$. Thus, we will examine only those in more detail.
\begin{itemize}
\item $j=d$:
\begin{itemize}
\item First, let $a_{e+1} =1$ and due to $j=d$, $a_1a_2 \ldots a_e = a_{d+1}a_{d+2} \ldots a_{d+e} = 0 1 \ldots 1$. Let $k$ be the number of ones in the block $a_{e+1} \ldots a_{d}$. Then, we obtain
\begin{equation*}
\sum_{k=2}^{d-e-1} \sum_{l=1}^{k-1}(k-l) \binom{d-e-l-1}{k-l}
\end{equation*}
matches.
\item Consider now $a_{e+1}=0$. If there exists $z \leq d$ with $a_{e+1} a_{e+2} \ldots a_{z} = 01 \ldots 1$, then this case yields
\begin{equation*}
\sum_{k=1}^{d-e-1} \sum_{l=1}^{k} l \binom{d-e-l-2}{k-l}
\end{equation*}
matches.
\end{itemize}
\item $j>d$:
\begin{itemize}
\item Let $a_{e+1}=1$. We have the structure (the first and last $e$ digits are again equal) $a_1 \ldots a_e = a_{d+1} \ldots a_j a_{j+1} \ldots a_{d+e} = a_{d+1} \ldots a_{j} 0 \ldots 1$. In total there are
\begin{equation*}
2^{j-d-1} \sum_{k=2}^{d-e-1} \sum_{l=1}^{k} \Bigg[ (k-l) \binom{d-e-l-1}{k-l} + (k-l+1)\binom{d-e-l-1}{k-l} \Bigg]
\end{equation*}
matches.
\item Let $a_{e+1}=0$. Here, we get (similar to above)
\begin{equation*}
2^{j-d} \Bigg( \sum_{k=1}^{d-e-1} \sum_{l=1}^{k} l \binom{d-e-l-2}{k-l} \Bigg)
\end{equation*}
matches.
\end{itemize}
\end{itemize}
A similar study can be carried out for the structure of the second pattern mentioned at the beginning. \\ \\
It remains to check whether the above cases allow starting and end blocks of the form $a_1 \ldots a_e = a_1 \ldots a_{m-1} 0 1 \ldots 1$ and $a_{d+1} \ldots a_{d+e} = a_1 \ldots a_{m-1} 1 0 \ldots 0$, respectively. If $d > j > e$ or $j \leq e$, then this cannot happen. The case $j \geq d$ allows starting and end blocks of this form for the second pattern. \\ \\
Above, we have investigated how often (and for which cases) one of the two patterns $B$ and $C$ occurs. It remains to analyse how many matches of the respective patterns agree in the first $j$ digits. The only relevant cases are where $j \geq d$.
\begin{itemize}
\item $j=d$: Here, for the pattern $B$, we have the structure \\ $a_1a_2 \ldots a_e = a_{d+1}a_{d+2} \ldots a_{d+e} = 0 1 \ldots 1$. For the pattern $C$ we have the feasible structure $a_1a_2 \ldots a_e = 0 1 \ldots 1$ and $a_{d+1}a_{d+2} \ldots a_{d+e} = 1 0 \ldots 0$. That is, we obtain
\begin{equation*}
2 \sum_{k=2}^{d-e-1} \sum_{l=1}^{k-1} (l-1)(k-l) \binom{d-e-l-1}{k-l}
\end{equation*}
pairs with distance $< 1/N$. Note that the last equation can be simplified to (\ref{eq:eq122}).
\item $j > d$: Here, we therefore get
\begin{align*}
2^{j-d} \sum_{k=2}^{d-e-1} \sum_{l=1}^{k} \Bigg[ &(l-1)(k-l) \binom{d-e-l-1}{k-l} \\
&+ (l-1)(k-l+1)\binom{d-e-l-1}{k-l} \Bigg] \nonumber
\end{align*}
pairs with distance $< 1/N$. Summation for $d+1 \leq j \leq d+e-1$ yields (\ref{eq:eq123}). \\
To make this formula more tangible, we note that for the first pattern we have shifts according to the number of ones after the first zero in the middle block $a_{e+1} \ldots a_{d}$. If $a_{d+1}=1$, then we have one additional shift. The second pattern allows (due to the necessary carry and $c_1=0$) $l-1$ shifts, where $l$ denotes the number of ones at the beginning of the block $a_{e+1} \ldots a_{d}$.
\end{itemize}
\end{proof}
\noindent
\textbf{Acknowledgements.} We would like to thank Gerhard Larcher for his valuable comments, suggestions and his general assistance during the writing of this paper.
\section{Introduction}
In this paper, we study the moduli space of logarithmic connections of rank $2$ on $\mathbb{P}^1 \setminus \{ t_1, \dots, t_n \}$ with fixed spectral data.
These moduli spaces have been studied from various points of view.
For example, they occur as spaces of initial conditions for Garnier systems (\cite{IIS}).
In a recent paper \cite{S},
C.\ Simpson studied some of the topological structures of related moduli spaces in the context of problems such as WKB theory and the $P = W$ conjecture.
Our interest in the subject of moduli spaces comes from its relation to the Geometric Langlands Correspondence.
In \cite{A}, D.\ Arinkin proved such a correspondence in a special case, by using the geometry of the moduli space of such connections on $\mathbb{P}^1 \setminus \{ t_1, \dots, t_4 \}$.
For $n \geq 5$, this moduli space has not been studied in detail, since its dimension $2(n-3)$ is larger than $4$.
In this work, by using canonical coordinates introduced by apparent singularities, we are able to reduce the problem to that of the geometry of surfaces (see \S \ref{Apparent singularities}).
\subsection*{The logarithmic connections.}
Fix points $t_1, \dots, t_n \in \mathbb{P}^1$ ($t_i \neq t_j$ for $i \neq j$), and set $D = t_1 + \cdots + t_n$.
We consider pairs $(E, \nabla)$, where $E$ is a rank $2$ vector bundle on $\mathbb{P}^1$ and $\nabla : E \ra E \otimes \Omega^1_{\mathbb{P}^1}(D)$ is a connection having simple poles supported by $D$.
At each pole $t_i$ ($i = 1, \dots, n$), the residue of $\nabla$ has two eigenvalues $\{ \nu_i^+, \nu_i^- \}$; they satisfy the Fuchs relation $d + \sum_i(\nu_i^+ + \nu_i^-)= 0$, where $d := \deg(E)$.
Moreover, we can naturally introduce parabolic structures $\mbox{\boldmath $l$} = \{l_i \}_{1 \leq i \leq n}$ such that $l_i$ is a one-dimensional subspace of $E_{t_i}$ which corresponds to an eigenspace of the residue of $\nabla$ at $t_i$ with the eigenvalue $\nu_i^+$.
Note that, when $\nu_i^+ \neq \nu_i^-$, the parabolic structure $\mbox{\boldmath $l$}$ is determined by the connection $(E, \nabla)$.
Fixing spectral data $\bnu = (\nu_i^{\pm})$ with integral sum $-d$, by introducing the weight $\mbox{\boldmath $w$}$ for stability, one can construct the moduli space $M^{\mbox{\boldmath $w$}}(\mbox{\boldmath $t$}, \bnu, d)$ of $\mbox{\boldmath $w$}$-stable $\bnu$-parabolic connections $(E, \nabla, \mbox{\boldmath $l$})$ of degree $d$ using Geometric Invariant Theory, and the moduli space $M^{\mbox{\boldmath $w$}}(\mbox{\boldmath $t$}, \bnu, d)$ turns out to be a smooth irreducible quasi-projective variety of dimension $2(n-3)$ (see \cite{IIS} for details).
We note that, when $\sum_{i = 1}^n \nu_i^{\epsilon_i} \not\in \mathbb{Z}$, for any choice $(\epsilon_i) \in \{ +, - \}^n$, every parabolic connection $(E, \nabla, \mbox{\boldmath $l$})$ is irreducible, and thus stable for any weight $\mbox{\boldmath $w$}$; the moduli space $M^{\mbox{\boldmath $w$}}(\mbox{\boldmath $t$}, \bnu, d)$ does not depend on the choice of weights $\mbox{\boldmath $w$}$ in that case.
These moduli spaces occur as spaces of initial conditions for Garnier systems, the case $n = 4$ corresponding to the Painlev\'e VI equation.
Such differential equations are nothing but isomonodromic deformations for linear connections.
By suitable transformations, we may normalize $\bnu$ as
\begin{equation*}
\begin{cases}
\nu_i^{\pm}=\pm \nu_i& (i=1,\ldots,n-1)\\
\nu_n^+=-d - \nu_n \\
\nu_n^-= \nu_n,
\end{cases}
\end{equation*}
for some $(\nu_1,\ldots,\nu_n) \in \mathbb{C}^{n}$.
Denote by $\cM(d)$ the moduli stack of $\bnu$-$\mathfrak{sl}_2$-parabolic connections of degree $d$ and by $M(d)$ its coarse moduli space.
By the above normalization, we have a natural isomorphism $M(d) \simeq M^{\mbox{\boldmath $w$}}(\mbox{\boldmath $t$}, \bnu, d)$ (see \cite{IIS}).
Moreover, $M(d)$ has a natural compactification $\overline{M(d)}$, which is the moduli space of $\lambda$-$\bnu$-parabolic connections $(E, \nabla_{\lambda}, \lambda \in \mathbb{C})$ over $\mathbb{P}^1$.
(Note that the moduli space $M(d)$ is nothing but the moduli space of $(\nu_1,\ldots,\nu_n)$-bundles on $\mathbb{P}^1$ treated in \cite{AL} and \cite{Obl}, and $\overline{M(d)}$ is the moduli space of $\epsilon$-bundles on $\mathbb{P}^1$ in \cite{A}).
We should mention that P.\ Boalch has a number of related works concerned with the case of meromorphic connections with irregular singularities.
We refer to \cite{B}, for example.
\subsection*{Main Results.}
\begin{Thm}\label{coh of M}
\textit{
Let $\cM(d)$ be the moduli stack of $\bnu$-$\mathfrak{sl}_2$-parabolic connections of degree $d$.
Then we have\\
$$H^i(\mathcal{M}(d), \mathcal{O}_{\mathcal{M}(d)}) = \begin{cases}
\mathbb{C}, & i = 0, \\
0, & i > 0.
\end{cases} $$}
\end{Thm}
\section{Preliminaries}
\subsection{$\mathfrak{sl}_2$-connections.}
We introduce $\mathfrak{sl}_2$-connections.
Fix complex numbers $\nu_1, \dots, \nu_n \in \mathbb{C}$.
Suppose that $\nu_1 \cdots \nu_n \neq 0$ and
\begin{equation*}\label{generic condition 2}
\sum^n_{i=1} \epsilon_i \nu_i \notin \bZ
\end{equation*}
for any $(\epsilon_i) \in \{+, - \}^n$.
\begin{Def}
A \textit{$\bnu$-$\mathfrak{sl}_2$-parabolic connection on $\bP^1$} is a triplet $(E, \nabla,\varphi)$
such that
\begin{enumerate}
\item[(1)] $E$ is a rank $2$ vector bundle of degree $d$ on $\bP^1$,
\item[(2)] $\nabla\colon E \ra E\otimes \Omega^1_{\bP^1}(D)$ is a connection, where $D := t_1 + \cdots +t_n$,
\item[(3)] $\varphi\colon \bigwedge^2 E \simeq \cO_{\bP^1}(d)$ is a horizontal isomorphism,
\item[(4)] the residue $\res_{t_i}(\nabla)$ of the connection $\nabla$ at $t_i$ has eigenvalues $\nu_i^{\pm}$ for each $i$ ($1\le i \le n$).
\end{enumerate}
We call $\bnu = (\nu_i^{\pm})_{1 \leq i \leq n}$ {\it local exponents}.
\end{Def}
There exists a one dimensional subspace $l_i \subset E_{t_i}$ on which $\res_{t_i}(\nabla)$ acts as multiplication by $\nu_i^+$.
For generic $\bnu$, the parabolic direction $l_i$ is nothing but the eigenspace for $\res_{t_i}(\nabla)$ with respect to $\nu_i^+$ so that the parabolic data $\mbox{\boldmath $l$} = \{ l_i \}$ is uniquely determined by the connection $(E, \nabla, \varphi)$ itself.
In this paper, it is enough to consider the case where $d = -1$.
By suitable transformations, we may put
\begin{equation*}
\nu^{\pm}_i := \pm \nu_i \ \ (i=1,\ldots, n-1 ),\ \nu^+_n:=1-\nu_n,\ \nu^-_n := \nu_n \rlap{.}
\end{equation*}
Denote by $\cM(d)$ the moduli stack of $\bnu$-$\mathfrak{sl}_2$-parabolic connections on $\bP^1$,
and by $M(d)$ its coarse moduli space.
This moduli space is a smooth, irreducible quasi-projective algebraic variety of dimension $2(n-3)$ (\cite[Theorem 2.1]{IIS}).
Recall that $\cM(d)$ has a natural compactification $\overline{\cM(d)}$ which is the moduli stack of $\lambda$-$\bnu$-parabolic connections $(E, \nabla_{\lambda}, \varphi, \lambda \in \mathbb{C})$ over $\mathbb{P}^1$.
(Note that, in \cite{A}, $\lambda$-$\bnu$-parabolic connections are called $\epsilon$-bundles.)
Then, under the condition that $(E, \nabla, \varphi)$ is irreducible, Arinkin showed that the moduli stack $\overline{\cM(d)}$ is a complete smooth Deligne-Mumford stack \cite[Theorem 1]{A}.
Moreover, he also showed that the $\lambda = 0$ locus $\cM(d)_H \subset \overline{\cM(d)}$, which is the moduli stack of parabolic Higgs bundles, is also a smooth algebraic stack.
On the other hand, as remarked in the proof of \cite[Proposition 7]{A}, the coarse moduli space $\overline{M(d)}$ corresponding to $\overline{\cM(d)}$ is not smooth: it has quotient singularities.
As for the possible smooth compactification by $\phi$-parabolic-connections, one may refer to \cite{IIS}.
\subsection{Lower and upper modifications.}
In this subsection, following \cite[\S 2]{Obl}, we describe the lower and upper modifications.
Let $E$ be an algebraic vector bundle on $\bP^1$ of rank $2$ and of degree $d$.
Fix a point $t \in \bP^1$.
Let $l \subset E_t$ be a one-dimensional subspace.
\begin{Def}
We call
\begin{equation*}
(t, l)^{\text{low}}(E) := \{ s \in E \mid s(t) \in l \}, \quad (t, l)^{\text{up}}(E) := (t, l)^{\text{low}}(E) \otimes \cO_{\mathbb{P}^1}(t)
\end{equation*}
\textit{the lower and upper modifications} of $E$, respectively.
\end{Def}
The lower and upper modifications provide the exact sequences
\begin{equation*}
0\lra (t, l)^{\text{low}}(E) \lra E \lra E_{t}/l \lra 0,
\end{equation*}
\begin{equation*}
0\lra E \lra (t, l)^{\text{up}}(E) \lra l \otimes \cO_{\mathbb{P}^1}(t) \lra 0,
\end{equation*}
respectively.
In other words, we change our bundle by rescaling the basis of sections in the neighborhood of a point $t$ as follows:
given a local decomposition $V=l\oplus l'$ of $E\simeq V \otimes \cO$,
we take the local basis $\{ s_1(z), s_2(z) \}$ with $l\otimes \cO \simeq \langle s_1(z) \rangle$ and $l'\otimes \cO \simeq \langle s_2(z) \rangle$.
Then the basis of the lower modification $(t, l)^{\text{low}}$ of the bundle is generated by the sections $\{ s_1(z),(z-t) s_{2}(z)\}$,
and the upper one $(t, l)^{\text{up}}$ is given by $\{ (z-t)^{-1}s_1(z), s_{2}(z)\}$.
Consequently, in the punctured neighborhood, we may represent the actions of the modifications by the following gluing matrices
\begin{equation*}
(t, l)^{\text{low}}=\left(
\begin{array}{ll}
1 & 0 \\
0 & (z-t)
\end{array}
\right),\quad
(t, l)^{\text{up}}=\left(
\begin{array}{ll}
(z-t)^{-1} & 0 \\
0 & 1
\end{array}
\right).
\end{equation*}
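These gluing matrices can be checked symbolically. The following toy verification (using sympy; not part of the text's argument) confirms that the determinant degree drops or rises by one, matching properties (1) and (4) below, and that the upper modification is the lower one twisted by the transition function of $\cO_{\mathbb{P}^1}(t)$, as in the definition.

```python
import sympy as sp

z, t = sp.symbols('z t')

low = sp.Matrix([[1, 0], [0, z - t]])        # gluing matrix of (t, l)^low
up  = sp.Matrix([[1 / (z - t), 0], [0, 1]])  # gluing matrix of (t, l)^up

# det changes by a simple zero / pole at t, so deg(det E) shifts by -1 / +1.
assert sp.simplify(low.det() - (z - t)) == 0
assert sp.simplify(up.det() - 1 / (z - t)) == 0

# (t, l)^up(E) = (t, l)^low(E) tensored with O(t): on gluing matrices,
# "up" is "low" rescaled by the transition function (z - t)^{-1}.
assert sp.simplify(up - low / (z - t)) == sp.zeros(2, 2)
```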
For the parabolic bundle $(E, \mbox{\boldmath $l$})$, we recall the geometrical properties of these modifications.
Denote by $\mathbb{P}(E, \mbox{\boldmath $l$})$ the projectivization of the parabolic bundle $(E, \mbox{\boldmath $l$})$.
It consists of the projective bundle $\mathbb{P}E$ together with a parabolic point $l_i$ in the fiber $F$ of each $t_i$.
In this situation, the lower and the upper modifications of $E$ are birational transformations of the total space $\text{tot}(\mathbb{P}E)$: these are the blowing-ups of the point $l_i \in \mathbb{P}E$ followed by the contraction of the total transform $\widetilde{F}$ of the fiber $F$.
The point resulting from this contraction gives the new parabolic direction $l'_i$.
We recall their properties in the following proposition:
\begin{Prop}
\textit{
Let $(E, \mbox{\boldmath $l$})$ be a parabolic bundle over $(\mathbb{P}^1, \mbox{\boldmath $t$} = \{ t_i \})$. Then the parabolic bundle $(E', \mbox{\boldmath $l$}') = (t_i,l_i)^{\text{low}}(E)$ satisfies the following properties:}
\begin{enumerate}
\item[(1)] \textit{$\det (E', \mbox{\boldmath $l$}') = \det (E, \mbox{\boldmath $l$}) \otimes \cO_{\mathbb{P}^1}(-t_i)$.}
\item[(2)] \textit{If $L \subset E$ is a line subbundle passing through $l_i$, its image by $(t_i,l_i)^{\text{low}}$ is a subbundle $L' \simeq L$ of $(t_i,l_i)^{\text{low}}(E)$ not passing through $l'_i$.}
\item[(3)] \textit{If $L \subset E$ is a line subbundle not passing through $l_i$, its image by $(t_i,l_i)^{\text{low}}$ is a subbundle $L' \simeq L \otimes \cO_{\mathbb{P}^1}(-t_i)$ of $(t_i,l_i)^{\text{low}}(E)$ passing through $l'_i$.}
\end{enumerate}
\textit{For the upper modification, the parabolic bundle $(E'', \mbox{\boldmath $l$}'') = (t_i,l_i)^{\text{up}}(E)$ satisfies:}
\begin{enumerate}
\item[(4)] \textit{$\det (E'', \mbox{\boldmath $l$}'') = \det (E, \mbox{\boldmath $l$}) \otimes \cO_{\mathbb{P}^1}(t_i)$.}
\item[(5)] \textit{If $L \subset E$ is a line subbundle passing through $l_i$, its image by $(t_i,l_i)^{\text{up}}$ is a subbundle $L' \simeq L \otimes \cO_{\mathbb{P}^1}(t_i)$ of $(t_i,l_i)^{\text{up}}(E)$ not passing through $l''_i$.}
\item[(6)] \textit{If $L \subset E$ is a line subbundle not passing through $l_i$, its image by $(t_i,l_i)^{\text{up}}$ is a subbundle $L' \simeq L$ of $(t_i,l_i)^{\text{up}}(E)$ passing through $l''_i$.}
\end{enumerate}
\end{Prop}
For a $\bnu$-$\mathfrak{sl}_2$-parabolic connection $(E, \nabla, \varphi)$, the lower modification of $E$ gives the new connection $\nabla'$ which is deduced from the action of $\nabla$ on the subsheaf $(t_i,l_i)^{\text{low}}(E) \subset E$, and, over $t_i$, local exponents are changed by
$$ (\nu_i^+, \nu_i^-)' = (\nu_i^- + 1, \nu_i^+) \ \ \text{(and other} \ \nu_j^{\pm}\ \text{are left unchanged for} \ j \neq i). $$
The lower modification gives us a morphism of moduli spaces $M(d) \ra M(d-1)$.
The upper modification defines the inverse map, and therefore, we have $M(d) \simeq M(d-1)$.
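As a sanity check on this exponent shift, one can gauge a diagonal local model of the connection by the lower-modification matrix; the model below (simple pole at $t_i = 0$, diagonal residue) is an illustrative assumption, not the general case. The new residue has eigenvalues $\{\nu^+, \nu^- + 1\}$, in accordance with $(\nu_i^+, \nu_i^-)' = (\nu_i^- + 1, \nu_i^+)$ as a set.

```python
import sympy as sp

z, nu_p, nu_m = sp.symbols('z nu_p nu_m')

# Diagonal local model at t_i = 0: nabla = d + A(z) dz with a simple pole.
A = sp.Matrix([[nu_p, 0], [0, nu_m]]) / z

# Gauge by the lower-modification gluing matrix g = diag(1, z):
g = sp.Matrix([[1, 0], [0, z]])
A_new = g.inv() * A * g + g.inv() * g.diff(z)

# Residue of the transformed connection at z = 0:
residue = sp.simplify(A_new * z)
assert set(residue.eigenvals()) == {nu_p, nu_m + 1}
```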
\subsection{Hirzebruch surfaces and the blowing-ups.}\label{Hirzebruch blowing up}
To describe the moduli space $M(-1)$, we introduce some blowing-ups of the Hirzebruch surface $\mathbb{F}_{n-2}$.
Put $L :=\Omega^1_{\mathbb{P}^1}(D)$.
We consider the surface $\mathbb{F}_{n-2}$ as the total space of $\mathbb{P}(\cO_{\mathbb{P}^1} \oplus L)$.
Denote by $s_{\infty}$ the section defined by $L$.
The complement $\mathbb{F}_{n-2} \setminus s_{\infty}$ is naturally identified with the total space of $L$.
In particular, the affine part of the fiber $F_i$ over $t_i$ has the natural chart
$\res_{t_i} \colon F_i \setminus s_{\infty} \xrightarrow{\sim} \mathbb{C}$ given by the residue of sections of $L$.
We define two points $\hat{\nu}_i^{\pm} \in F_i$ by $\res_{t_i}(\hat{\nu}_i^{\pm}) = \nu_i^{\pm}$.
Denote by $\widetilde{\mathbb{F}_{n-2}} := \Bl_{\hat{\nu}_i^{\pm}} \mathbb{F}_{n-2}$ the blowing-up of $\mathbb{F}_{n-2}$ at $\hat{\nu}_i^{\pm}$ for each $i = 1, \dots, n$, by $\widetilde{s}_{\infty}, \widetilde{F}_i$ the strict transforms,
and by $E^{\pm}_i$ the exceptional curves at $(t_i, \hat{\nu}_i^{\pm})$.
Set
$$ \cK'_n := \widetilde{\mathbb{F}_{n-2}} \setminus (\widetilde{s}_{\infty} \cup \widetilde{F}_1 \cup \cdots \cup \widetilde{F}_n).$$
We denote by $\cK_n$ the image of $\cK'_n$ under the projection $\cK'_n \rightarrow \mathbb{F}_{n-2} \setminus s_{\infty}$.
\subsection{Apparent singularities and the dual parameters.}\label{Apparent singularities}
Let $(E,\nabla,\varphi) \in M(-1)$.
We can define the \textit{apparent singularities of $(E,\nabla,\varphi) \in M(-1)$} as follows:
we fix a section $s \in H^0(\bP^1, E)$.
For the section $s$, we define the following composition
\begin{equation*}
\cO_{\bP^1} \xrightarrow{\ s\ } E \xrightarrow{\ \nabla\ } E \otimes L \lra (E/\cO_{\bP^1}) \otimes L.
\end{equation*}
The composition $\cO_{\bP^1}\ra (E/\cO_{\bP^1}) \otimes L$ is an $\cO_{\bP^1}$-morphism, which is injective.
Then we can define a subsheaf $F^0\subset E $ such that $\cO_{\bP^1} \ra (F^0/\cO_{\bP^1}) \otimes L$ is an isomorphism.
By the isomorphism $F^0/\cO_{\bP^1} \simeq L^{-1}$, we have $F^0 \simeq \cO_{\bP^1} \oplus L^{-1}$.
Therefore, we have the following exact sequence
\begin{equation*}\label{ES of App for conn}
0 \lra \cO_{\bP^1} \oplus L^{-1} \lra E \lra T_A \lra 0,
\end{equation*}
where $T_A$ is a torsion sheaf.
By the Riemann-Roch theorem, the torsion sheaf $T_A$ has length $n-3$.
\begin{Def}
For $(E,\nabla , \varphi) \in M(-1)$ and a nonzero section $s \in H^0(\bP^1,E)$,
we call the support of $T_A$ {\it the apparent singularities of a $\bnu$-$\mathfrak{sl}_2$-parabolic connection with a cyclic vector $(E,\nabla , \varphi,[s])$}.
\end{Def}
Now, we consider the following stratification of $M(-1)$.
By the irreducibility of $(E, \nabla, \varphi) \in M(-1)$, we have the following proposition.
\begin{Prop}\label{bundle type}
\textit{
For $(E ,\nabla ,\varphi) \in M(-1)$,
we have
\begin{equation*}
E\simeq \cO(k) \oplus\cO(-k-1) \text{ where } 0\le k \le \left[ \frac{n-3}{2} \right].
\end{equation*}
}
\end{Prop}
Denote by $M(-1)^k$ the subvariety of $M(-1)$ where $E\simeq \cO(k) \oplus\cO(-k-1)$.
Then
\begin{equation*}
M(-1) = M(-1)^0\cup \cdots \cup M(-1)^{[ (n-3)/2]}.
\end{equation*}
Note that the stratum $M(-1)^0$ is a Zariski open dense set of $M(-1)$.
For $(E,\nabla , \varphi) \in M(-1)^0$, we define \textit{dual parameters} as follows:
put $U_0 := \mathbb{P}^1 \setminus \{ \infty \}, U_{\infty} := \mathbb{P}^1 \setminus \{ 0 \}$.
Let $z$ and $w$ be coordinates on $U_0$ and $U_{\infty}$, respectively.
Put
$$ \omega_z := \frac{dz}{\prod_{i = 1}^n(z - t_i)} \quad
\text{and} \quad
R_0 :=\left(
\begin{array}{clcl}
1 & 0 \\
0 & \frac{1}{z}
\end{array}
\right). $$
Since $E \simeq \cO_{\mathbb{P}^1} \oplus \cO_{\mathbb{P}^1}(-1)$, we can denote the connection $\nabla$ by
\begin{equation*}
\nabla =
\begin{cases}
d+ A_z^0 \otimes \omega_z & \text{on } U_0 \\
d+ R_0^{-1}dR_0 + R_0^{-1}(A_z^0 \otimes \omega_z )R_0 & \text{on } U_{\infty},
\end{cases}\quad \text{where }\
A_z^0 :=\left(
\begin{array}{clcl}
f_{11}^{(n-2)}(z) & f_{12}^{(n-1)}(z) \\
f_{21}^{(n-3)}(z) & -f_{11}^{(n-2)}(z)
\end{array}
\right).
\end{equation*}
Note that the zeros of the polynomial $f_{21}^{(n-3)}(z)$ are the apparent singularities of $(E,\nabla , \varphi)$.
We denote by $\{ q_1,\ldots,q_{n-3} \}$ the apparent singularities.
We put $p_i := f_{11}^{(n-2)}(q_i) \in L_{q_i}$.
We call $\{ p_1 ,\ldots, p_{n-3} \}$ the \textit{dual parameters} of $(E,\nabla , \varphi) \in M(-1)^0$.
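As an illustration of this recipe (with made-up polynomial entries, not coming from an actual connection in $M(-1)^0$), one can read off the $q_i$ and $p_i$ symbolically for $n = 5$, where $\deg f_{21}^{(n-3)} = 2$ and $\deg f_{11}^{(n-2)} = 3$.

```python
import sympy as sp

z = sp.symbols('z')

# Hypothetical entries of the connection matrix A_z^0 for n = 5:
f11 = z**3 - 2*z   # plays the role of f_11^{(n-2)}, degree n - 2 = 3
f21 = z**2 - 1     # plays the role of f_21^{(n-3)}, degree n - 3 = 2

# Apparent singularities: zeros of f21; dual parameters: p_i = f11(q_i).
qs = sorted(sp.solve(sp.Eq(f21, 0), z), key=lambda r: float(r))
ps = [f11.subs(z, q) for q in qs]

assert qs == [-1, 1]
assert ps == [1, -1]   # f11(-1) = 1, f11(1) = -1
```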
\section{Geometric description of $M(-1)^0$}
Let $\cK_n'$ be the Zariski open set of the blowing-up of the Hirzebruch surface of degree $n-2$ defined in subsection \ref{Hirzebruch blowing up},
and let $\cK'_n \ra \cK_n$ be the contraction defined there.
Then we can define the map
\begin{equation}\label{M0 to Sym}
\begin{aligned}
M(-1)^0 &\lra \mathrm{Sym}^{n-3}(\cK_n) \\
(E,\nabla, \varphi) &\longmapsto \{(q_1,p_1),\ldots,(q_{n-3},p_{n-3}) \},
\end{aligned}
\end{equation}
which was constructed in \cite[\S 3]{Obl}.
We consider the composite of the Hilbert-Chow morphism and the blowing-up
\begin{equation*}
\Hilb^{n-3}(\cK'_n) \lra \mathrm{Sym}^{n-3}(\cK'_n) \lra \mathrm{Sym}^{n-3}(\cK_n) \rlap{,}
\end{equation*}
where $\cK_n' \ra \cK_n$ is the blowing-up defined in subsection \ref{Hirzebruch blowing up}.
We have the following proposition.
\begin{Prop}[\cite{KS} Theorem 5.2]\label{injective M0}
\textit{
We can extend the map (\ref{M0 to Sym}) to
$$ M(-1)^0 \longrightarrow \Hilb^{n-3}(\cK'_n), $$
and this map is injective.
}
\end{Prop}
Suppose $n = 5$.
We denote by $Z \subset \mathrm{Sym}^2(\cK'_5)$ the proper pre-image of $\{ (q_1, p_1), (q_1, -p_1) \} \subset \mathrm{Sym}^2(\cK'_5)$ under the blowing-up $\mathrm{Sym}^2(\cK'_5) \ra \mathrm{Sym}^2(\cK_5)$, and by $\widetilde{Z} \subset \Hilb^2(\cK'_5)$ the proper pre-image of $Z$ under the Hilbert-Chow morphism $\Hilb^2(\cK'_5) \ra \mathrm{Sym}^2(\cK'_5)$.
Denote by
\begin{equation}
\widetilde{\Hilb}^2(\cK'_5) \ra \Hilb^2(\cK'_5)
\end{equation}
the blowing-up along $\widetilde{Z}$, and by $\widehat{Z}$ the strict transform of $\widetilde{Z}$.
We also denote by $(\cK'_5 \times \cK'_5)^{\sim}$ the blowing-up of $\cK'_5 \times \cK'_5$ along the ideal $(q_1 - q_2, p_1 - p_2)$, and by $(\cK'_5 \times \cK'_5)^{\approx}$ the blowing-up of $(\cK'_5 \times \cK'_5)^{\sim}$ along the ideal $(q_1 - q_2, p_1 + p_2)$.
Then $\Hilb^2(\cK'_5) = (\cK'_5 \times \cK'_5)^{\sim}/\mathfrak{S}_2$ and $\widetilde{\Hilb}^2(\cK'_5) = (\cK'_5 \times \cK'_5)^{\approx}/\mathfrak{S}_2$.
Now, using the above description, we define another important blowing-up of the Hirzebruch surface $\mathbb{F}_3$.
Fix $q_1 \in \mathbb{P}^1 \setminus \{ t_1, \cdots, t_5 \}$ and denote by $F_6$ the fiber over $q_1$.
We denote by $(\mathbb{F}_3)^{\approx}$ the blowing-up of $\widetilde{\mathbb{F}_3}$ at two points $\{(q_1, p_1), (q_1, -p_1) \}$
(when $p_1 = 0$, the two centers coincide, and we blow up twice at $(q_1, 0)$).
Set
$$ \cK'_{5, q_1} := (\mathbb{F}_3)^{\approx} \setminus (\widetilde{s}_{\infty} \cup \widetilde{F}_1\cup \cdots \cup \widetilde{F}_6), $$
where $\widetilde{F}_6$ is the strict transform of $F_6$.
We denote by $E^{\pm}_6$ the exceptional curves at $(q_1, \pm p_1)$, and denote by $\cK_{5, q_1}$ the image of $\cK'_{5, q_1}$ under the projection $\cK'_{5, q_1} \rightarrow \mathbb{F}_{3} \setminus s_{\infty}$.
\section{Geometric description of $\cK'_{5, q}$}
In this section, for the sake of simplicity, we write $\cK'_{5, q}$ for $\cK'_{5, q_1}$.
\begin{Prop}\label{second coh of K}
\textit{
Let $\cF$ be any quasi-coherent sheaf on $\cK'_{5,q}$.
Then $H^i(\cK'_{5, q}, \cF) = 0$ for $i \geq 2$.
}
\end{Prop}
\begin{proof}
Let $Q$ be a projective line doubled at the six points $\{t_1, \dots, t_5, q \}$.
We can define a natural projection $\pi \colon \cK'_{5, q} \rightarrow Q$.
Moreover, this map is an affine bundle, and in particular an affine morphism; hence $H^i(\cK'_{5, q}, \cF) \simeq H^i(Q, \pi_*\cF)$, which vanishes for $i \geq 2$ since $Q$ is one-dimensional.
\end{proof}
Set $D_{q} := 2\widetilde{s}_{\infty} + \widetilde{F}_1 + \cdots + \widetilde{F}_6$.
Then
\begin{equation}\label{deg N 0}
(D_q, D_q) = (D_q, \widetilde{s}_{\infty}) = (D_q, \widetilde{F}_i) = 0.
\end{equation}
We also have $K := K_{(\mathbb{F}_3)^{\approx}} = -2\widetilde{s}_{\infty} - \sum_{i = 1}^5 \widetilde{F}_i + E_6^+ + E_6^-$.
By the Riemann-Roch theorem, we have
$$ \chi(\cO_{D_q}) = -\frac{D_q(D_q + K)}{2} =-1. $$
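The relations (\ref{deg N 0}) and the value $\chi(\cO_{D_q}) = -1$ can be verified numerically from the intersection table of $(\mathbb{F}_3)^{\approx}$. The intersection numbers used below ($\widetilde{s}_{\infty}^2 = -3$, $\widetilde{F}_i^2 = -2$ since each fiber is blown up at two points, $\widetilde{s}_{\infty} \cdot \widetilde{F}_i = 1$, $\widetilde{F}_6 \cdot E_6^{\pm} = 1$) are the standard conventions for this blow-up and are stated here as assumptions.

```python
import numpy as np

# Basis order: [s~, F~1, ..., F~6, E6+, E6-].
M = np.zeros((9, 9), dtype=int)
M[0, 0] = -3                 # s~^2 = -3
for i in range(1, 7):
    M[0, i] = M[i, 0] = 1    # s~ . F~i = 1
    M[i, i] = -2             # each fiber blown up at two points
M[7, 7] = M[8, 8] = -1       # exceptional curves over (q_1, +-p_1)
M[6, 7] = M[7, 6] = 1        # F~6 . E6+
M[6, 8] = M[8, 6] = 1        # F~6 . E6-

D = np.array([2, 1, 1, 1, 1, 1, 1, 0, 0])        # D_q = 2 s~ + F~1 + ... + F~6
K = np.array([-2, -1, -1, -1, -1, -1, 0, 1, 1])  # canonical class

v = D @ M
assert v @ D == 0                        # (D_q, D_q) = 0
assert all(v[i] == 0 for i in range(7))  # (D_q, s~) = (D_q, F~i) = 0
chi = -(v @ (D + K)) // 2
assert chi == -1                         # chi(O_{D_q}) = -1
```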
This implies the following statement.
\begin{Prop}\label{prop11}
\textit{
Let $\cE$ be a locally free sheaf on $D_q$ of rank $r$. Then
$$ \chi(\cE) = 2\deg(\cE|_{\widetilde{s}_{\infty}}) + \sum_{i = 1}^6 \deg(\cE|_{\widetilde{F}_i}) - r. $$
}
\end{Prop}
\begin{proof}
This follows from the Riemann-Roch theorem for an embedded curve (cf. \cite[Chapter 2, Theorem 3.1]{BHPV}).
\end{proof}
\begin{Lem}\label{lem8}
\textit{
Let $\cE$ be a nontrivial invertible sheaf on $D_q$ such that
$\deg (\cE |_{\widetilde{s}_{\infty}}) = 0$, and either $\deg ( \cE |_{\widetilde{F}_i} ) = 0$ for all $i$, or one of the numbers $\deg ( \cE |_{\widetilde{F}_i} )$ is $-1$, another one is $1$, and the remaining four equal zero.
Then $H^i(D_q, \cE) = 0$ for $i \neq 1$, and $H^1(D_q, \cE) = \mathbb{C}$.
}
\end{Lem}
\begin{proof}
By Proposition \ref{prop11}, we have $\chi(\cE) = -1$. Therefore, it is enough to prove that $H^0(D_q, \cE) = 0$.
Assume the contrary. Let $f \in H^0 (D_q, \cE)$, $f \neq 0$.
Now $\chi (\cE) = \chi ( \mathcal{O}_{D_q})$, and $\cE \not\simeq \mathcal{O}_{D_q}$, so $f$ is zero on one of the irreducible components of $D_q$.
We take $\widetilde{F}_1$ to be this component.
We may assume that $\deg (\mathcal{E}|_{\widetilde{F}_i} ) \leq 0$ for $i \neq 1$.
The closed subscheme $D'_q := \widetilde{s}_{\infty} + \sum_{i \neq 1} \widetilde{F}_i \subset D_q$ is reduced and connected.
Besides this, $\cE|_{D'_q}$ has nonpositive degree on any irreducible component of $D'_q$.
Therefore, either $f|_{D'_q} = 0$, or $f|_{D'_q}$ has no zero.
The second case is impossible: $f$ vanishes on $\widetilde{F}_1$, hence at the intersection point of $\widetilde{F}_1$ with $\widetilde{s}_{\infty} \subset D'_q$.
Therefore, $f \in \ker(H^0(D_q, \cE) \rightarrow H^0(D'_q, \cE))$.
In other words, $f \in H^0(D_q, \cE \otimes \mathcal{I}_{D'_q})$, where $\mathcal{I}_{D'_q} := \{ \tilde{f} \in \mathcal{O}_{D_q} \ |\ \tilde{f}|_{D'_q} = 0 \}$ is the sheaf of ideals of $D'_q$.
We have $\mathcal{I}_{D'_q} = \mathcal{O}_{(\mathbb{F}_3)^{\approx}}(-D'_q)/\mathcal{O}_{(\mathbb{F}_3)^{\approx}}(-D_q)$, and $\supp \mathcal{I}_{D'_q} = \widetilde{s}_{\infty} + \widetilde{F}_1$.
Hence, $\deg (\mathcal{I}_{D'_q}|_{\widetilde{F}_1}) = \deg (\mathcal{O}_{(\mathbb{F}_3)^{\approx}}(-D'_q)|_{\widetilde{F}_1}) = -1$.
Therefore, $\deg ((\cE \otimes \mathcal{I}_{D'_q})|_{\widetilde{F}_1}) = \deg (\cE|_{\widetilde{F}_1}) -1 \leq 0$.
In the same way, $\deg ((\cE \otimes \mathcal{I}_{D'_q})|_{\widetilde{s}_{\infty}}) = \deg (\cE|_{\widetilde{s}_{\infty}}) -1 = -1$.
Since $\cE \otimes \mathcal{I}_{D'_q}$ is an invertible sheaf on the connected reduced scheme $\widetilde{s}_{\infty} + \widetilde{F}_1$,
this implies $f \in H^0(D_q, \cE \otimes \mathcal{I}_{D'_q}) = 0$.
\end{proof}
Set $\Pic^{0}(D_q) := \{ \cE \in \Pic(D_q) | \deg (\cE | _{\widetilde{s}_{\infty}}) = 0, \ \deg ( \cE | _{\widetilde{F}_i}) = 0 \ \text{for all} \ i \}$.
\begin{Prop}\label{prop12}
\textit{
$$\Pic^0(D_q) \simeq \mathbb{A}^2\rlap{.}$$
}
\end{Prop}
\begin{proof}
Set $D_q^{red} := \widetilde{s}_{\infty} + \sum_{i=1}^{6} \widetilde{F}_i \subset D_q $.
Then $\Pic^0(D_q) = \ker (\Pic (D_q) \rightarrow \Pic ( D_q^{red} ) ) $.
Set $\mathcal{O}' := \ker ( \mathcal{O}_{D_q}^{*} \rightarrow \mathcal{O}_{D_q^{red}}^{*})$.
Then the exact sequence $ 0 \rightarrow \mathcal{O}' \rightarrow \mathcal{O}_{D_q}^{*} \rightarrow \mathcal{O}_{D_q^{red}}^{*} \rightarrow 1 $ defines an isomorphism
$H^1(D_q, \mathcal{O}') \xrightarrow{\sim} \Pic^0(D_q)$.
However, $\mathcal{O}'$ is a locally free $\mathcal{O}_{\widetilde{s}_{\infty}}$-module which satisfies $\deg (\mathcal{O}') = -(\widetilde{s}_{\infty} , D_q^{red}) = -3$.
Hence $\Pic^0(D_q)$ is a $2$-dimensional $\mathbb{C}$-space.
\end{proof}
\begin{Lem}\label{lem10}
\textit{
The sheaf $\cN_{D_q} := \cO_{(\mathbb{F}_3)^{\approx}}(D_q)|_{D_q}$ is not trivial.
}
\end{Lem}
\begin{proof}
Assume the contrary.
Let $\sigma \in H^0(D_q, \cN_{D_q})$ be a global section of $\cN_{D_q}$ with no zeros.
Since $(\mathbb{F}_3)^{\approx}$ is a smooth rational projective variety, $H^1((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) = 0$,
and therefore $\sigma \in H^0(D_q, \cN_{D_q}) = H^0((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}(D_q)/\cO_{(\mathbb{F}_3)^{\approx}})$
can be lifted to $s \in H^0 ((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}(D_q))$.
Then $(s)$ is an effective divisor equivalent to $D_q$; since $\sigma$ has no zeros, $(s)$ is disjoint from $D_q$, so $C :=\supp (s) \subset \cK'_{5, q}$.
Denote by $C'$ the image of $C$ under the blowing-down $\cK'_{5, q} \ra \cK_{5, q}$.
Then $C' \sim 2s_{\infty} + \sum_{i = 1}^6 F_i$.
Now let $f(x, y)$ be a local equation for $C'$ on some local chart.
Then we can write $f(x, y) = y^2 + a_1(x)y + a_2(x)$, where $\deg a_i(x) = 3i$.
By definition, $C'$ passes through $(t_i, \hat{\nu}_i^+)$ and $(q, p_1)$ with multiplicity $1$.
Since we put $\hat{\nu}_i^{\pm} = \prod_{j \neq i} (t_i - t_j)\nu_i^{\pm}$, where
$$\nu^{\pm}_i := \pm \nu_i \ \ (i=1,\ldots, 4 ),\ \nu^+_5:=1-\nu_5,\ \nu^-_5 := \nu_5\rlap{,}$$
by Vieta's formula, $a_1(x)$ satisfies $a_1(t_i) = 0$ for $i = 1, \dots, 4$ and $a_1(q) = 0$.
Since $\deg a_1(x) = 3$ and the five points $t_1, \dots, t_4, q$ are distinct, this implies $a_1(x) \equiv 0$. However, then $0 = (1-\nu_5) + \nu_5 = 1$, which is a contradiction.
\end{proof}
\begin{Prop}\label{vanish of N}
\textit{ For $k \neq 0$,
$H^i(D_q, (\cN_{D_q})^{\otimes k}) = 0$ if $i \neq 1$ and $H^1(D_q, (\cN_{D_q})^{\otimes k}) = \mathbb{C}$.}
\end{Prop}
\begin{proof}
By (\ref{deg N 0}), we have $\cN_{D_q} \in \Pic^0(D_q)$.
Lemma $\ref{lem10}$ and Proposition $\ref{prop12}$ imply $(\cN_{D_q})^{\otimes k} \not\simeq \cO_{D_q}$ for $k \neq 0$.
Lemma $\ref{lem8}$ completes the proof.
\end{proof}
\begin{Cor}\label{coh of O}
\textit{
$$ H^i(\cK'_{5, q}, \cO_{\cK'_{5, q}}) = \begin{cases}
\mathbb{C}, & i = 0, \\
H_{D_q}^2((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}), &i =1 \\
0, & i > 1.
\end{cases} $$
}
\end{Cor}
\begin{proof}
By local cohomology theory, we have the long exact sequence
\begin{equation*}
\begin{split}
0 &\ra H_{D_q}^0((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \ra H^0((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \ra H^0(\cK'_{5, q}, \cO_{\cK'_{5, q}}) \\
&\ra H_{D_q}^1((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \ra H^1((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \ra H^1(\cK'_{5, q}, \cO_{\cK'_{5, q}}) \\
&\ra H_{D_q}^2((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \ra H^2((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \ra H^2(\cK'_{5, q}, \cO_{\cK'_{5, q}}) \\
&\ra H_{D_q}^3((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \ra 0.
\end{split}
\end{equation*}
and $H_{D_q}^i((\mathbb{F}_3)^{\approx}, \cO_{(\mathbb{F}_3)^{\approx}}) \simeq \ilim[k]\Ext^i (\cO_{kD_q}, \cO_{(\mathbb{F}_3)^{\approx}}) \simeq \ilim[k]H^{i-1}((\mathbb{F}_3)^{\approx}, \cN_{kD_q})$.
The statement follows from Proposition \ref{vanish of N} and the rationality of $(\mathbb{F}_3)^{\approx}$.
\end{proof}
\subsection*{Special case: $q_1 \in \{ t_1, \dots, t_5 \}$ }
For the sake of simplicity, we may assume that $q_1 = t_1$.
Then, $(q_1, p_1)$ lies on one of the two exceptional curves $E_1^{\pm}$ at $(t_1, \hat{\nu}_1^{\pm})$.
Suppose that $(q_1, p_1)$ is on $E_1^+$.
We consider the blowing-up of $\widetilde{\mathbb{F}_3}$ at the two points $\{ (q_1, p_1), (q_1, -p_1) \}$.
We denote by $\widetilde{E}_1^+$ the strict transform of $E_1^+$.
In this situation, set
$$ \cK'_{5, q_1} := \Bl_{\{(q_1, p_1), (q_1, -p_1) \}} \widetilde{\mathbb{F}_3} \setminus (\widetilde{s}_{\infty} \cup \widetilde{F}_1 \cup \cdots \cup \widetilde{F}_5 \cup \widetilde{E}_1^+). $$
We will show a result analogous to Corollary $\ref{coh of O}$.
Instead of considering $\cK'_{5, q_1}$, we will consider the following surface:
$$ \cL := \widetilde{\mathbb{F}_3} \setminus (\widetilde{s}_{\infty} \cup \widetilde{F}_1 \cup \cdots \cup \widetilde{F}_5 \cup E_1^+).$$
\begin{Prop}\label{coh of L}
\textit{
$$ H^i(\cL, \cO_{\cL}) = \begin{cases}
\mathbb{C}, & i = 0, \\
0, & i > 0.
\end{cases} $$
}
\end{Prop}
\begin{proof}
In $\widetilde{\mathbb{F}_3}$, we have that $E_1^+$ is a $(-1)$-curve, and hence we contract this curve.
Then $\widetilde{F}_1$ becomes a $(-1)$-curve, and we also contract this curve.
As a result, we have the blowing-ups of $\mathbb{F}_2$ at $8$ points, and we have to compute the cohomology of the surface
$$ \cL' := \Bl_{\{8pts\}} \mathbb{F}_2 \setminus (\widetilde{s}_{\infty} \cup \widetilde{F}_2 \cup \cdots \cup \widetilde{F}_5).$$
This is the same situation as \cite[Theorem 2 (iii)]{AL}, and the statement is proved.
\end{proof}
The surface $\cK'_{5, q_1}$ differs from $\cL$ only by adding the two points $\{ (q_1, p_1), (q_1, -p_1) \}$, blowing up at these points, and removing the corresponding points again.
These operations do not change the cohomology $H^i(\cO)$.
\section{Proof of Theorem \ref{coh of M}}\label{section proof}
By the same argument as in Proposition $\ref{second coh of K}$, we have
\begin{Prop}\label{second coh of K5}
\textit{
Let $\cF$ be any quasi-coherent sheaf on $\cK'_{5}$.
Then $H^i(\cK'_{5}, \cF) = 0$ for $i \geq 2$.
}
\end{Prop}
Since $D^{red} := \widetilde{s}_{\infty} + \widetilde{F}_1 + \cdots + \widetilde{F}_5 \subset \widetilde{\mathbb{F}_3}$ is contractible, we have the following lemma.
\begin{Lem}\label{coh of K5}
\textit{$H^i(\cK'_5, \cO_{\cK'_5}) = \begin{cases}
\mathbb{C}, & i = 0, \\
H^2_m(A), & i = 1, \\
0, & i \geq 2,
\end{cases} $\\
where $(A, \mathfrak{m})$ is a local ring such that $\dim(A_{\mathfrak{m}}) = 2$.}
\end{Lem}
\begin{proof}
Let $\pi : \widetilde{\mathbb{F}_3} \ra S$ be a map onto a rational surface $S$ which contracts the divisor $D^{red} \subset \widetilde{\mathbb{F}_3}$ to the rational singular point $\{p \} \subset S$.
Set $U := S \setminus \{ p \}$. Then we have the long exact sequence
\begin{equation*}
\begin{split}
0 &\ra H_p^0(S, \cO_S) \ra H^0(S, \cO_S) \ra H^0(U, \cO_U)\\
&\ra H_p^1(S, \cO_S) \ra H^1(S, \cO_S) \ra H^1(U, \cO_U)\\
&\ra H_p^2(S, \cO_S) \ra H^2(S, \cO_S) \ra H^2(U, \cO_U)\\
&\ra H_p^3(S, \cO_S) \ra 0.
\end{split}
\end{equation*}
By excision isomorphism, we have $H_p^i(S, \cO_S) = H_p^i(V, \cO_V)$, where $V = \Spec(A)$ and $\{p\}$ corresponds to the maximal ideal $\mathfrak{m}$ of $A$.
Since $V$ is affine, this cohomology is equal to $H_{\mathfrak{m}}^i(A)$.
Now it is straightforward to see that $\dim(A_{\mathfrak{m}}) = \depth_{\mathfrak{m}}(A) = 2$.
Therefore we have $H_{\mathfrak{m}}^i(A) = 0$ for $i \neq 2$, and $H^1(U, \cO_U) \simeq H_{\mathfrak{m}}^2(A) \neq 0$
(see, for example, \cite{H} p.217 exercise 3.4(b)).
\end{proof}
\begin{proof}[Proof of Theorem \ref{coh of M}]
We may assume that $d = -1$.
Set $\widehat{M(-1)}_Z := M(-1)^0 \cup \widehat{Z}$.
By Proposition \ref{injective M0}, we have injective maps $\iota : M(-1)^0 \hookrightarrow \Hilb^2(\cK'_5)$ and $\hat{\iota} : \widehat{M(-1)}_Z \hookrightarrow \widetilde{\Hilb}^2(\cK'_5)$.
We define the blowing-up parameter $\lambda_-$ by $p_1 + p_2= \lambda_- (q_1 - q_2)$.
Set $T := \widetilde{\Hilb}^2(\cK'_5) \setminus \widehat{M(-1)}_Z$. For a vector bundle $\cF$ on $\widetilde{\Hilb}^2(\cK'_5)$,
\begin{equation*}
\begin{split}
H^i(\widehat{M(-1)}_Z, \cF|_{\widehat{M(-1)}_Z}) &= H^i(\widetilde{\Hilb}^2(\cK'_5), \hat{\iota}_*\hat{\iota}^* \cF) \\
&= \varinjlim H^i(\widetilde{\Hilb}^2(\cK'_5), \cF(kT)).\\
\end{split}
\end{equation*}
To compute $H^i(\widetilde{\Hilb}^2(\cK'_5), \cF(kT))$, consider $H^i((\cK'_5 \times \cK'_5)^{\approx}, \cF(kT'))$, where
$T'$ is defined by $ ( \lambda_- = \infty )$.
We can define a map
\begin{equation*}
\begin{aligned}
f : (\cK'_5 \times \cK'_5)^{\approx} \setminus T' &\lra \cK'_5 \\
(q_1, p_1, q_2, p_2) &\longmapsto (q_1,p_1),
\end{aligned}
\end{equation*}
and the fiber is $f^{-1}(\{(q_1, p_1)\}) \simeq \cK'_{5, q_1}$.
By Leray's spectral sequence, we have
\begin{equation*}
H^i((\cK'_5 \times \cK'_5)^{\approx} \setminus T', \cF) \simeq \bigoplus_{p + q = i} H^p(\cK'_5, R^qf_* \cF).
\end{equation*}
Using the base change theorem, we have
$ (R^qf_* \cF)_{(q_1, p_1)} \simeq H^q(\cK'_{5, q_1}, \cF_{(q_1, p_1)})$.
Hence, Theorem \ref{coh of M} follows from Corollary \ref{coh of O}, Proposition \ref{second coh of K5} and Lemma \ref{coh of K5} as follows:
we have
$$H^i((\cK'_5 \times \cK'_5)^{\approx} \setminus T', \cO) = \begin{cases}
\mathbb{C}, & i = 0, \\
H^2_{\mathfrak{m}}(A) \oplus H^0(\cK'_5, R^1f_*\cO), & i = 1, \\
H^1(\cK'_5, R^1f_*\cO), &i=2, \\
0, & i > 2\rlap{.}
\end{cases} $$
Moreover, the action of $\mathfrak{S}_2$ on $H^i((\cK'_5 \times \cK'_5)^{\approx} \setminus T', \cO)$ is nontrivial.
Therefore,
$$H^i(\widehat{M(-1)}_Z, \cO_{\widehat{M(-1)}_Z}) = \begin{cases}
\mathbb{C}, & i = 0, \\
0, & i > 0\rlap{.}
\end{cases}$$
Since $\codim_{\Hilb^2(\cK'_5)}(\widetilde{Z}) = 2$, and $M(-1)^1 = M(-1) \setminus M(-1)^0 \simeq \mathbb{A}^2$ (see \cite{Obl}), we have
\begin{equation*}
\begin{split}
H^i(\widehat{M(-1)}_Z, \cO_{\widehat{M(-1)}_Z}) &= H^i(M(-1)^0 \cup \widetilde{Z}, \cO)\\
&= H^i(M(-1)^0, \cO_{M(-1)^0})\\
&= H^i(M(-1), \cO_{M(-1)}) .
\end{split}
\end{equation*}
\end{proof}
\subsection*{Acknowledgements}
I am very grateful to Professor Masa-Hiko Saito for his constant attention to this work and for his warm encouragement.
I would also like to thank Doctor Arata Komyo for his numerous stimulating discussions and Professor Frank Loray for his hospitality at Universit\'{e} de Rennes 1.
\section{Introduction}
\label{intro}
In this paper we discuss the relation between the chiral symmetry breaking
in the two-dimensional 't Hooft model \cite{tHoof1}
and the heavy-light meson mass spectrum.
The action of the version of the 't Hooft model we will consider
is
\beq
S = \int d^2 x\left[ -\frac{1}{4} G_{\mu\nu}^aG_{\mu\nu}^a
+
\sum_{f=1,2} \bar \psi_f ( iD \hspace{-0.37cm}\not \,\,\,\, -m_f )\psi_f\right]
\end{equation}
where
$G_{\mu\nu}^a$ is the gluon field strength tensor, the index $a$ runs
from $1$ to $N^2 -1 $, and $N$ is the number of colors,
$$
N\rightarrow\infty\, .
$$
The subscript $f$ marks quarks of different flavors. The quarks
are assumed to belong to the fundamental representation of the
gauge group SU$(N)$.
Moreover, in our consideration we will assume that $m_2\to\infty$,
so that the second quark will play the role of a static force center, while
$m_1\to 0$ so that the first quark is massless. The theory then possesses two U(1) symmetries,
generated by the vector and axial currents, $\bar\psi_1\gamma^\mu\psi_1$ and
$\bar\psi_1\gamma^\mu\gamma^5\psi_1$, respectively.
The axial symmetry is spontaneously broken (see below).
The coupling constant $g$ has dimension of mass, and
in the large-$N$ limit scales as
\beq
\lambda \equiv \frac{g^2 N}{4\pi} = {\rm const}\, .
\end{equation}
The constant $\lambda$ is referred to as the 't Hooft coupling.
The very fact of confinement is obvious in this model since in two
dimensions the Coulomb potential generated by the static color source
(i.e. the infinitely heavy quark at the origin) grows linearly with
separation. The model was solved in the
light-cone formalism by 't Hooft \cite{tHoof1} and further
developed along the same lines in Refs. \cite{Call1,Einh1}.
The spectrum of the light-light mesons and the light-cone
wave functions were obtained from the 't Hooft equation,
an integral equation, supplemented by certain boundary
conditions, well studied in the literature (for a review see e.g. \cite{Hornbostel:1988ne}).
In the light-cone formalism one chooses the light-cone gauge
condition,
$$
A_- = 0 \, .
$$
The light-cone time derivative of $A_+$ does not appear in
$G_{\mu\nu}$; hence, $A_+$ is a non-dynamical degree of freedom
which can be eliminated through the equations of
motion. In the large-$N$ limit the only surviving diagrams
are ladders and rainbows. The 't Hooft equation
for the bound state built from the quark of the first flavor and
anti-quark of the second flavor has the form
\beq
\left( \frac{m_1^2}{x} + \frac{m_2^2}{1-x} - M^2\right) \phi (x)
=2 \lambda \,\,\int_0^1\,
\frac{\phi (y) - \phi (x) }{(x-
y)^2}dy
\, ,
\label{thooftequ}
\end{equation}
where $x$ is the first quark's share of the total (light-cone) momentum
of the composite meson with mass $M$.
If we deal with massless (anti)quarks in the equation above ($m_1=m_2=0$),
Eq. (\ref{thooftequ}) has a massless-meson solution (``pion'' with $M=0$)
which is known
exactly. The corresponding light-cone wave function is $x$-independent,
$ \phi (x) = $const. The existence of the massless pion implies \cite{Zhit1},
through the standard current algebra relations, a non-vanishing
quark condensate \footnote{
Generally speaking, in two dimensions any continuous (e.g. chiral)
symmetry cannot be spontaneously broken (which is known as the
Mermin-Wagner-Coleman theorem).
This is because massless Goldstone bosons would bring a long-range
infrared divergence for $d=2$. However,
at $N_c=\infty$ the self-interaction of the Goldstone bosons
vanishes (they also decouple from all other mesons)
and, consequently, at $N_c=\infty$ the chiral symmetry can be, and
indeed is, spontaneously broken in the 't Hooft model.}
$
\langle \bar \psi \psi \rangle$ proportional to $
- N \sqrt \lambda$, see also
\cite{Ming1,Lenz1,Burk1}.
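The exactness of the constant-$\phi$ pion survives even a naive discretization of Eq. (\ref{thooftequ}). The sketch below is our illustration, not part of the original analysis (Python; units $\lambda = 1$, and the grid size is an arbitrary choice): dropping the $y = x$ point on a midpoint grid turns the subtracted kernel into a graph Laplacian, which annihilates $\phi = {\rm const}$ exactly, so the massless ``pion'' is reproduced to machine precision.

```python
import numpy as np

def light_light_masses(lam=1.0, n=300, nlev=4):
    # Midpoint grid on (0, 1); dropping the y = x point and keeping the
    # phi(y) - phi(x) subtraction leaves a graph-Laplacian matrix.
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)          # dummy value; diagonal is zeroed below
    K = h / d**2                      # quadrature weight times kernel
    np.fill_diagonal(K, 0.0)
    # M^2 phi_i = -2 lam sum_j K_ij (phi_j - phi_i)  ->  H = 2 lam (D - K)
    H = 2.0 * lam * (np.diag(K.sum(axis=1)) - K)
    return np.linalg.eigvalsh(0.5 * (H + H.T))[:nlev]   # lowest M_n^2
```

The zero mode is exact by construction (the Laplacian kills constants), while the excited $M_n^2$ converge only slowly with the grid, since a uniform grid does not resolve the endpoint behavior of the wave functions.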
The problem
is that this chiral condensate is not seen directly in the light-cone
consideration, as is usual in light-cone analyses
of vacuum condensates.
The chiral condensate on the light cone
is buried somewhere in zero modes and boundary conditions.
Indeed, if one tries to extract the quark condensate
directly from the light-cone quark Green's function given by 't~Hooft,
one obtains
\beq
\langle \bar \psi \psi \rangle \propto \lim_{x\rightarrow 0}\, {\rm Tr}
\left\{ S(x,0)\right\}\, ,
\label{GF}
\end{equation}
where $S(x,0)$ is the massless quark Green's function
describing the quark propagation from the point 0 to
the point $x$. The
right-hand side vanishes after taking the trace, since this
Green's function is linear in the $\gamma$ matrices.
Our task is not only to reveal the chiral condensate (this had already been done
by shifting slightly away from the light cone \cite{Lenz1}, or from the solution
of the gap equation in the laboratory frame \cite{Bars1}), but also to analyze its impact on the spectrum
of bound mesons. In order to keep a closed-form integral equation
\`a la 't Hooft as the spectral equation we
have to focus on a system of an infinitely heavy anti-quark at rest
at the origin and a dynamical quark of mass $m_1\to 0$ bound by a linearly growing
potential, i.e. the heavy-light quark system. The bound quark is ultra-relativistic,
and dynamical details of its binding crucially depend on the
chiral condensate (see below).
At the same time, the system in question can be considered in the laboratory frame
(as opposed to the light-cone consideration). The static infinitely heavy (anti)quark suppresses
the so called $Z$ graphs in much the same way as the
transition to the light cone in the case of two massless (anti)quarks.
The absence of the $Z$ graphs is necessary to
keep the spectral equation in closed form.
The above integral equation applies to
the one-particle wave function in the momentum space.
It can be readily obtained from the general analysis
of \cite{Bars1} in the limit $m_2\to \infty$ and $m_1\to 0$. We will briefly review the derivation below.
Another aspect, to be addressed below, is
the relation with the ``original'' light-cone
spectral equation for the heavy-light system, which we will refer to as the
't Hooft-like equation. It was obtained \cite{Bur,Zhit2} from the general light-cone 't~Hooft equation valid for arbitrary
$m_{1,2}$ in the limit $m_2\to\infty$ and $m_1\to 0$.
In fact, we deal with {\em two different} one-particle
equations. One of them is just a limiting case of the 't Hooft equation,
and applies to the light-cone wave function,
which depends on $x$ ($0\leq x\leq 1$). Within this approach the (massless quark)
condensate vanishes.
At the same time, our laboratory-frame equation has the condensate built in.
It is the spectral equation for $\phi(p)$ where $p$ is the light-quark momentum in the laboratory frame.
In deriving these two equations one uses two distinct limiting procedures.
To obtain the 't Hooft-like equation one first sends the momentum to infinity, keeping the quark masses fixed, and then sends one of the quark masses to infinity.
At the same time, when one works in the laboratory frame, one keeps the total momentum fixed and sends the
quark mass to infinity from the very beginning. Generally speaking, these two limits need not be commutative.
Our analysis will demonstrate that the above two equations are, in fact, {\em isospectral};
i.e. the limiting procedures are interchangeable, with no obstructions.
Surprisingly, the
laboratory-frame equation for $\phi (p)$ {\em formally} becomes identical to
the 't Hooft-like equation for
$\varphi (\xi )$ (see Eq. (\ref{HQLtHooft})) upon substituting into the laboratory-frame equation
a ``wrong'' solution for the chiral angle (i.e. a singular solution with no chiral symmetry breaking) and rescaling the overall
energy scale. This curious coincidence has no obvious physical reason; at least, we were unable to
find such a reason.
The heavy-light systems in the 't Hooft model were considered
previously, in an applied context, e.g. in Ref. \cite{Grin1}. In this work
the original light-cone 't Hooft equation was numerically solved at large values of
$m_2/\sqrt\lambda$. As was mentioned, in the 't Hooft-like equation the limit
$m_2/\sqrt\lambda \longrightarrow \infty$ is taken {\em before} solving the 't~Hooft equation.
The appropriate limiting procedure was implemented in \cite{Bur,Zhit2}.
Note that when the heavy-light meson is boosted (to put it on the light cone)
the total momentum of the meson is shared between quarks
proportionally to their masses. Therefore, the heavy quark will have
$x$ very close to unity while the light quark's share will be close to
zero. The width of the $x$ distribution will be proportional to
$\sqrt\lambda/m_2\to 0$. This fact was noted long ago
\cite{Bjor1}, and was later extensively exploited in phenomenology.
The light-cone wave function will have an infinitely narrow support
in the limit $\sqrt\lambda/m_2\rightarrow 0$ unless we rescale the variable $x$,
so that
the corresponding distribution does not shrink to a delta function but
is, rather, characterized by a constant width.
The appropriate rescaling laws are as follows \cite{Bur,Zhit2}:
\begin{eqnarray}
x &=&1- \frac{\sqrt{2 \lambda}}{m_2}\xi\, ,\nonumber\\[2mm]
M &=& m_2 + {\mathcal E}\, ,
\nonumber\\[2mm]
\phi (x ) &=& \sqrt{m_2 (2 \lambda )^{-1/2}}\,\, \varphi (\xi )\, ,
\end{eqnarray}
where $m_2$ is to be sent to infinity while ${\mathcal E}$ is
kept fixed (i.e. ${\mathcal E}$ is the mass of the bound state after the subtraction of the
mechanical mass of the infinitely heavy anti-quark).
Then the light-cone 't Hooft equation takes the form
\begin{equation}
2 {\cal E} \varphi(\xi) = \sqrt{2 \lambda}\, \xi\, \varphi(\xi) - \sqrt{2 \lambda}
\int_0^\infty \,
\frac{\varphi(\tilde{\xi}) - \varphi(\xi)}{(\tilde{\xi} -
\xi)^2}d\tilde{\xi}\, .
\label{HQLtHooft}
\end{equation}
The boundary conditions in this equation are as follows:
\begin{equation}
\varphi(\xi \rightarrow 0) \rightarrow {\rm const}\, ,~~\varphi(\xi \rightarrow \infty) \rightarrow 0\, . \label{boundcondZ}
\end{equation}
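Equation (\ref{HQLtHooft}) lends itself to a simple numerical treatment. The sketch below is our illustration (the grid size, the cutoff $\xi_{\max}$ and the value of $\lambda$ are arbitrary choices): the subtraction $\varphi(\tilde\xi) - \varphi(\xi)$ makes the discretized double-pole kernel a positive graph Laplacian, so the matrix is manifestly positive definite and all binding energies ${\mathcal E}_n$ come out positive; since the whole operator carries the overall factor $\sqrt{2\lambda}$, the spectrum scales as ${\mathcal E}_n \propto \sqrt{\lambda}$.

```python
import numpy as np

def heavy_light_lc_spectrum(lam=0.5, n=400, xi_max=40.0, nlev=6):
    # Midpoint grid on (0, xi_max); the phi(xi~) - phi(xi) subtraction
    # yields a graph Laplacian L, so that
    #   2 E phi = sqrt(2 lam) [ diag(xi) + L ] phi
    # has a manifestly positive low-lying spectrum.
    h = xi_max / n
    xi = (np.arange(n) + 0.5) * h
    d = xi[:, None] - xi[None, :]
    np.fill_diagonal(d, 1.0)          # dummy; diagonal zeroed below
    K = h / d**2
    np.fill_diagonal(K, 0.0)
    H = np.sqrt(2.0 * lam) * (np.diag(xi + K.sum(axis=1)) - K)
    return 0.5 * np.linalg.eigvalsh(0.5 * (H + H.T))[:nlev]   # E_n
```

Only the lowest levels, well inside the box $\xi < \xi_{\max}$, are meaningful at a given grid resolution.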
Our main results can be summarized as follows.
We solve the heavy-light system in the laboratory frame using
the Coulomb (axial) gauge. As the first step we solve the
gap equation and obtain the required quark Green function.
Given this quark Green function we are in position to solve
the Bethe--Salpeter equation. Both the single-quark Green function
(the quark condensate follows straightforwardly from the quark
Green function) and the meson spectrum manifestly
exhibit dynamical chiral symmetry breaking. Then we solve the
same system on the light cone by integrating (numerically) the 't Hooft-like equation.
We obtain exactly the same spectrum even though the dynamical equations
in both cases have very different physical meaning, and there is no gap equation
on the light cone. Dynamical chiral symmetry breaking is manifest through
the absence of parity doubling in the spectrum in both cases, but
in the laboratory frame this chiral symmetry breaking is also clearly
seen through the nonzero quark condensate in the vacuum. While
all the intermediate color-nonsinglet quantities, such as the
quark Green function, manifestly depend on the reference frame and on
the gauge-fixing condition, the spectrum of the color-singlet
system is independent of the choice of the quantization scheme,
of the reference frame and of the gauge condition.
In Section \ref{ChiralSym} we briefly review the chiral symmetry breaking and solution of the associated gap equation in the
laboratory frame. In Section \ref{hsmes} we discuss the spectral equation for the heavy-light mesons in the
laboratory frame and on the light cone. Numerical solutions are presented. Section \ref{concl} briefly summarizes our results and conclusions.
\section{\label{ChiralSym} Chiral symmetry breaking in vacuum}
\subsection{The gap equation}
In the laboratory frame, the axial (Coulomb) gauge condition
\begin{equation}
A_1 = 0
\label{gaugefix}
\end{equation}
is convenient.
The derivation of the bound state equation is carried
out in two steps, see \cite{Bars1} for details. First one
needs to obtain the
quark Green's function for the massless quark. Its self-energy
is saturated in the large-$N$ limit by the
rainbow graphs.
To introduce necessary notation it is convenient to start,
however, from the one-loop graph
presented in Fig.~\ref{tf13}.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.3\linewidth]{thirteen.pdf}
\caption{\small
Quark self-energy at one loop.}
\label{tf13}
\end{center}
\end{figure}
We will denote the quark self-energy by $-i \Sigma $, so that
the quark Green's function is
\beqn
G_{ij}(p_0,p) &=&\int d^2 x\, e^{ip_\mu x^\mu}\,\left\langle T\left\{
\psi (x)\, \bar\psi (0)\right\}\right\rangle \nonumber\\[3mm]
&=& \frac{i}{\not\! p -m -\Sigma}\,,
\label{qgfo}
\end{eqnarray}
where the mass parameter $m$ is arbitrary (real and positive) for the time being.
In the $A_1=0$ gauge
$\Sigma$ depends only on the
spatial component of the quark momentum $p$, not on $p^0$.
In calculating the graph of Fig.
\ref{tf13} we benefit from the fact that only $D_{00}$ is non-vanishing,
and perform the integral over the time component of the
loop momentum using residues. In this way we arrive at
\begin{widetext}
\beqn
\Sigma (p) &=&\frac{\lambda}{2}\left\{
-2\gamma^1\left[\frac{p}{m^2+p^2}+\frac{m^2}{2(m^2+p^2)^{3/2}}
\ln\frac{\sqrt{m^2+p^2} +p}{\sqrt{m^2+p^2} -p}\right]\right.\nonumber\\[4mm]
&-&
m\left.\left[\frac{2}{m^2+p^2}-\frac{p}{(m^2+p^2)^{3/2}}
\ln\frac{\sqrt{m^2+p^2} +p}{\sqrt{m^2+p^2} -p}\right]
\right\}.
\label{qgfu}
\end{eqnarray}
\end{widetext}
Now we see that
(i) The loop expansion parameter is $\lambda/(m^2+p^2)$;
it explodes at $m,p <\sqrt\lambda$, so that summation
of the infinite series is necessary;
(ii) In the $A_1=0$ gauge $\Sigma$ depends only on the spatial
component of momentum; (iii) Its general Lorentz structure is
\beq
\Sigma (p) = A(p) + B(p) \,\gamma^1\,,
\end{equation}
where $A$ and $B$ are some real functions of $p$ (for real $p$). From Eq.~(\ref{qgfo})
we see that the combination we will be dealing with
in the quark Green's function is
\beq
m +p\, \gamma^1 +A(p) + B(p) \,\gamma^1\,.
\label{dvap}
\end{equation}
Usually $A$ and $B$ are traded for two other functions,
which parametrize the quark Green's function in a more convenient way. Namely,
\beqn
&& E_p \equiv \sqrt{(m+A(p))^2+(p+B(p))^2}\,,\nonumber\\[3mm]
&& m+A(p) = E_p\,\cos\theta_p\,,\nonumber\\[3mm] && p+B(p)= E_p\,\sin\theta_p\,,
\label{dvapp}
\end{eqnarray}
where for consistency one should demand $E_p$ to be {\em positive} for all real $p$.
The angle $\theta_p$ is referred to as the Bogoliubov angle, or, more commonly,
the chiral angle. The exact quark Green's function can now be rewritten
as
\beq
G = i\,\,\frac{p^0\gamma^0 - E_p\,\sin\theta_p\,\gamma^1 + E_p\,\cos\theta_p}{p_0^2-E_p^2+i\varepsilon}\,.
\label{egfp}
\end{equation}
Closed-form exact equations can be obtained for
$E_p$ and $\theta_p$ due to the fact that in the 't Hooft limit
the quark self-energy is saturated by ``rainbow graphs''.
An example of the rainbow graph is depicted in Fig.~\ref{tf14}.
Intersections of the gluon lines and insertions of the internal quark loops are
forbidden, and so are the gluon lines on the other side of the quark line. This diagrammatic structure implies an equation
depicted in Fig.~\ref{tf15}, where the bold solid line denotes the exact Green's function
(\ref{egfp}).
Algebraically
\beq
\Sigma (p) = \frac{i\,\lambda}{2 \pi}\,\Xint-\,\frac{d^2k}{(p-k)^2}\,
\gamma^0 G(k)\gamma^0\,.
\label{SigmaPV}
\end{equation}
It is easy to see that this equation sums up
the infinite sequence of the rainbow graphs in its entirety.
In Eq. (\ref{SigmaPV}) a principal value of the integral on the right-hand side is assumed.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=0.3\linewidth]{ch8_thtfigfourteen.pdf}
\caption{\small
An example of the rainbow graph in $\Sigma (p)$. }
\label{tf14}
\end{center}
\end{figure}
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=0.3\linewidth]{ch8_thtfigfifteen.pdf}
\caption{\small
Exact equation for $\Sigma (p)$ summing all rainbow graphs. The bold solid line
is the exact quark propagator (\ref{egfp}).}
\label{tf15}
\end{center}
\end{figure}
Using
(\ref{egfp}) and performing integration over $k^0$, the time component
of the loop momentum, by virtue of residues, it is not difficult to obtain
\beq
\Sigma (p) =\frac{\lambda}{2}\,\Xint- dk\left\{\gamma^1\,\sin\theta_k\,\frac{1}{(p-k)^2}+ \cos\theta_k\,\frac{1}{(p-k)^2}
\right\}\,,
\end{equation}
which implies, in turn,
\beqn
&& A(p) = E_p\cos\theta_p - m = \frac{\lambda}{2}\,\Xint- dk\, \cos\theta_k\,\frac{1}{(p-k)^2}\,,\nonumber\\[3mm]
&& B(p) = E_p\sin\theta_p - p = \frac{\lambda}{2}\,\Xint- dk\, \sin\theta_k\,\frac{1}{(p-k)^2}
\,.
\label{eqs30}
\end{eqnarray}
This should be supplemented by the boundary conditions
\beq
\theta_p \to\left\{\begin{array}{ll}
\frac{\pi}{2}\quad{\rm at} \quad p\to \infty ,\\[2mm]
-\frac{\pi}{2}\quad{\rm at} \quad p\to -\infty ,
\end{array}
\right.
\label{eqs31}
\end{equation}
determined by the free-quark limit.
The integrals (\ref{SigmaPV}) -- (\ref{eqs30}) contain a singularity at $p = k$,
so a regularization is required. We use the principal value regularization.
This set of equations, called the gap or the Schwinger--Dyson equation, was first obtained by Bars and Green \cite{Bars1}.
Multiplying the first equation by $\sin\theta_p$ and the second by
$\cos\theta_p$, and subtracting one from the other, one gets
an integral equation for the chiral angle, namely,
\beq
p\,\cos\theta_p - m\,\sin\theta_p = \frac{\lambda}{2}\,\int dk\, \sin
(\theta_p-\theta_k)\,\frac{1}{(p-k)^2}\,.
\label{chirang}
\end{equation}
The latter equation, in contrast to (\ref{SigmaPV}) -- (\ref{eqs30}),
does not contain a singularity at $p = k$.
Assuming that the chiral angle is found in the limit $m=0$
from
\beq
p\,\cos\theta_p = \frac{\lambda}{2}\,\int dk\, \sin
(\theta_p-\theta_k)\,\frac{1}{(p-k)^2}\,,
\label{chirangm}
\end{equation}
one can get $E_p$ from the equation
\beq
E_p = p\,\sin\theta_p +
\frac{\lambda}{2}\,\Xint- dk\, \cos (\theta_p-\theta_k)\,\frac{1}{(p-k)^2}\,.
\label{chirangmp}
\end{equation}
An immediate consequence is that $\theta_p$ is an odd function of $p$,
while $E_p$ is even.
By solving the gap equation one obtains the chiral angle $\theta_p$ and both
dressing functions $A(p)$ and $B(p)$. In the chiral limit $m=0$ the chiral
symmetry breaking part of the quark Green function is $A(p)$. Consequently
a nonzero $A(p)$ signals dynamical chiral symmetry breaking in the vacuum.
It is an intrinsically non-perturbative effect that cannot be obtained
within the perturbation theory.
\subsection{A wrong solution}
Upon examining Eq.~(\ref{chirangm}) it is not difficult to
guess an analytic solution,
\beq
\theta_p = \frac{\pi}{2}\, {\rm sign}\,p\,,
\label{unsst}
\end{equation}
where ${\rm sign}\,p$ is the sign function,
$$
{\rm sign}\,p =\vartheta (p) - \vartheta (-p)\,.
$$
The solution (\ref{unsst}) is singular. If nevertheless we use it, then
substituting (\ref{unsst}) in Eq.~(\ref{chirangmp})
one obtains
\beq
E_p = |p|-\frac{\lambda}{|p|}\,\,.
\label{unsse}
\end{equation}
The above result shows that the analytic solution (\ref{unsst}) is unphysical. This is obvious from the
fact that $E_p$ becomes negative at $|p|<\sqrt\lambda$.
This feature of the solution (\ref{unsse}) --- negativity at small
$|p|$ --- cannot be amended by a change of the infrared regularization. See also \cite{NefKalash}.
The unphysical solution (\ref{unsst}) leads to a vanishing quark condensate, as will be clear from Eq.
(\ref{psibarpsii}). We will return to the unphysical solution later, after discussing the (nonsingular) physical solution.
\subsection{Physical solution}
A solution that leads to a nonvanishing condensate
has the form depicted in Fig. \ref{changle}.
It is smooth everywhere.
At $|p|\ll\sqrt\lambda$ it is linear in $p$. Its asymptotic approach to
$\pm\pi/2$ at
$|p|\gg\sqrt\lambda$ will be discussed later.
Now, let us calculate the chiral condensate,
the vacuum expectation value $\langle\bar\psi\psi\rangle$,
\beq
\langle\bar\psi\psi\rangle = -\,{\rm Tr}\, \int \frac{d^2p}{(2\pi)^2}\,G(p_0,p)\, ,
\label{psibarpsi}
\end{equation}
where Tr stands for both traces, with respect to color and Lorentz indices, and the quark Green
function $G(p_0,p)$ is defined in Eq.~(\ref{egfp}). Taking the trace and performing the $p_0$ integration we arrive at
\beq
\langle\bar\psi\psi\rangle = - N\,\int \frac{dp}{2\pi}\,\cos\theta_p\,.
\label{psibarpsii}
\end{equation}
For the singular solution (\ref {unsst}) the above quark condensate vanishes since
$\cos\theta_p\equiv 0$. However, for the physical smooth solution depicted in
Fig. \ref{changle}
the quark condensate does not vanish,
\beq
\langle\bar\psi\psi\rangle = - \frac{N}{\sqrt 6}\,\sqrt\lambda\,.
\label{zhit}
\end{equation}
Equation~(\ref{psibarpsii}),
in conjunction with (\ref{chirangm}),
allows us to determine the leading preasymptotic correction to
$\theta_p$ at $|p|\gg \sqrt\lambda$. Indeed, in this limit
the right-hand side of
Eq.~(\ref{chirangm}) reduces to (at $p>0$)
\beq
\frac{\lambda}{2 p^2}\, \int dk\, \sin\left(\frac{\pi}{2}-\theta_k\right)=
\frac{\lambda}{2 p^2}\, \int dk\, \cos\theta_k\,,
\end{equation}
while the left-hand side
\beq
p\, \sin\left(\frac{\pi}{2}-\theta_p\right)\to p\, \left(\frac{\pi}{2}-\theta_p\right).
\end{equation}
This implies, in turn, that
\beq
\theta_p =\frac{\pi}{2}\, {\rm sign}\, p - \frac{\pi}{\sqrt 6}\left(\frac{\sqrt\lambda}{p}
\right)^3+ ...\,,\qquad |p|\gg\sqrt\lambda\,.
\end{equation}
At the same time, from Eq.~(\ref{chirangmp}) we deduce that there is no $p^{-3}$ correction in
$E/|p|$, the leading correction is of order of $\lambda^3/p^6$.
\subsection{\label{NGapSolSec} Numerical solution of the gap equation and an alternative scheme of regularization}
The gauge choice (\ref{gaugefix}) for the model (1) ensures the existence of only
one non-trivial component of the gluon propagator:
\begin{eqnarray}
\nonumber
D_{01}^{ab}(x_0 - y_0, x - y) = D_{11}^{ab} (x_0 - y_0, x - y) = 0\, , \\[2mm]
D_{00}^{ab}(x_0 - y_0, x - y) = -\frac{i}{2}\delta^{ab} |x - y| \delta(x_0 - y_0)\, .
\label{Dprop}
\end{eqnarray}
$D_{00}^{ab}(x_0 - y_0, x - y)$ corresponds to an instantaneous linear confining potential.
All loop integrals calculated with a linear potential diverge in the infrared region,
hence one has to introduce an infrared regularization. This can be done in a number of ways.
In previous sections we used a principal value regularization.
Here we apply an alternative regularization,
which suppresses the small momenta of the linear potential
by introducing a cutoff parameter into the propagator in the momentum representation.
We define the propagator in the momentum representation as
\begin{equation}
D_{00}^{ab}(x_0 - y_0, p) = i \frac{\delta^{ab} \delta(x_0 - y_0)}{p^2 + \mu_{IR}^2}\, .
\label{scdef}
\end{equation}
Then in the final answer for the color-singlet quantities the infrared limit $\mu_{IR} \rightarrow 0$ must be taken.
In the regularization scheme defined by (\ref{scdef}) the expression for the self-energy operator
(\ref{SigmaPV}) turns into
\begin{eqnarray}
\Sigma(p) &=& \frac{\lambda}{2} \int dk \left[ \gamma^{1} \sin\theta_k \frac{1}{(p - k)^2 + \mu_{IR}^2}
\right. \nonumber\\
&+& \left. \cos\theta_k \frac{1}{(p - k)^2 + \mu_{IR}^2} \right]\, .
\label{Sigmamu}
\end{eqnarray}
Using the representation of the delta-function
\begin{equation}
\delta(x) = \lim_{\mu_{IR} \rightarrow 0} \,\frac{1}{\pi} \,\frac{\mu_{IR}}{x^2 + \mu_{IR}^2}\, , \label{deltaf}
\end{equation}
it is easy to see that the self-energy defined in (\ref{Sigmamu}) diverges at $\mu_{IR} \rightarrow 0$ as:
\begin{equation}
\lim_{\mu_{IR} \rightarrow 0} \Sigma(p) = \frac{\lambda \pi}{2 \mu_{IR}} \sin\theta_p \gamma^1 +
\frac{\lambda \pi}{2 \mu_{IR}} \cos\theta_p + {\rm a~finite~part}\,.
\label{Sigmalim}
\end{equation}
The self-energy operator defined in (\ref{SigmaPV}) via the principal value regularization is always finite.
This is also true for the energy of a single quark which, being regularized through
(\ref{scdef}), takes the form
\beq
E_p = p\,\sin\theta_p +
\frac{\lambda}{2}\,\int dk\, \cos (\theta_p-\theta_k)\,\frac{1}{(p-k)^2 + \mu_{IR}^2} \, .
\label{Epmu}
\end{equation}
$ E_p$ diverges at $\mu_{IR} \rightarrow 0$ as
\beq
\lim_{\mu_{IR} \rightarrow 0} E_p = \frac{\lambda \pi}{2 \mu_{IR}} + {\rm finite~terms}\, , \label{Epbeh}
\end{equation}
while with the principal value regularization it is always finite.
For any other color-nonsinglet quantity one has the same situation.
This circumstance reflects the confining properties of the 't Hooft model.
Confinement means that only observable color-singlet quantities have finite well-defined
values, which should not depend on the infrared regularization scheme. The color-nonsinglet quantities
are not observable and manifestly depend on the regularization choice. Our present regularization
is convenient in the sense that it explicitly removes all color-nonsinglet objects from the physical Hilbert
space since they are all infrared divergent. At the same time this infrared divergence exactly cancels in all color-singlet
observable quantities, such as the meson spectrum, the chiral angle and the quark condensate.
The color-singlet quantities are finite and do not depend on the choice of the regularization.
In the following we show that the infrared divergences exactly cancel in the gap equation,
written in the form
\begin{equation}
A(p)\,\sin\theta_p - [B(p) + p]\,\cos\theta_p = 0\, ,
\label{ABgapeq}
\end{equation}
where $A(p)$ and $B(p)$ in the regularization scheme (\ref{scdef}) are
\begin{eqnarray}
\nonumber
A(p) = \frac{\lambda}{2} \int dk \frac{\cos\theta_k}{(p - k)^2 + \mu_{IR}^2}\, ,\\
B(p) = \frac{\lambda}{2} \int dk \frac{\sin\theta_k}{(p - k)^2 + \mu_{IR}^2}\, .
\label{ABdefmu}
\end{eqnarray}
Using the representation of the delta function (\ref{deltaf}) we obtain at $\mu_{IR} \rightarrow 0$:
\begin{eqnarray}
\nonumber
A(p) = \frac{\lambda \pi}{2 \mu_{IR}} \cos\theta_p + A_{\rm finite}(p)\, ,\\
B(p) = \frac{\lambda \pi}{2 \mu_{IR}} \sin\theta_p + B_{\rm finite}(p)\, .
\label{ABfindivp}
\end{eqnarray}
Note that in (\ref{ABgapeq}) all divergences exactly cancel
and
\begin{equation}
\tan\theta_p = \frac{B(p) + p}{A(p)} = \frac{B_{\rm finite}(p) + p}{A_{\rm finite}(p)}\,.
\label{tanth}
\end{equation}
Equation (\ref{ABgapeq}) can be solved at exceedingly small but finite values of $\mu_{IR}$;
then extrapolation to the limit $\mu_{IR} \rightarrow 0$ must be performed. The equation is solved iteratively,
with special care taken in the numerical integration in the vicinity of $p = k$.
The resulting chiral angle is consistent with
previous studies \cite{Ming1,Lenz1} and is presented in Fig. \ref{changle}.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=0.8\linewidth]{changle.pdf}
\caption{\small
Numerical solution of the gap equation for the Bogoliubov angle $\theta(p)$; $p$ is in units of $\sqrt{\lambda}$.
The variable $x$ comes from the change of variable $p = \tan(x)$.
\label{changle}
\end{center}
\end{figure}
\section{\label{hsmes} The heavy-light mesons}
\subsection{Equation for the heavy-light mesons}
The Bethe--Salpeter equation for the heavy-light mesons in the laboratory frame
follows in a straightforward manner from \cite{Bars1}, by taking the limit $m_2\to\infty$ in the coupled equations
derived there, which untangles them.
The corresponding Bethe--Salpeter equation was obtained e.g. in Refs. \cite{NefKalash,msh};
an alternative derivation can be found in the text \cite{ShifmanTB}.
It has the form
\beqn
{\cal E} \phi (p) &=& p\,\sin \theta_p\, \,\phi(p) \nonumber\\[3mm]
&-&\lambda\,\int\, \frac{dk}{(p-k)^2}\left[ \cos\frac{\theta_p-\theta_k}{2}
\,\phi (k) \right.
\nonumber\\[3mm]
&-&
\left.\left(\cos\frac{\theta_p-\theta_k}{2}
\right)^2 \,\,\phi(p)\right] .
\label{bettereq}
\end{eqnarray}
It is not difficult to derive the boundary conditions on $\phi(p)$ and
some properties of the wave function:
(i) it can be taken real, nonsingular, and either symmetric or
antisymmetric under $p\to -p$,
$$
\phi (-p) =\pm \phi (p);
$$
and
(ii) at large $|p|$
\beq
\phi (p ) \sim \left\{
\begin{array}{c}
\frac{1}{|p|^3}\quad \mbox{symmetric levels}\,,
\\[3mm]
\frac{1}{p^4}\quad \mbox{antisymmetric levels}
\end{array}
\right.
.\label{boundcond}
\end{equation}
This asymptotic behavior is necessary
to guarantee the cancellation of the leading (at large $p$) term on the right-hand side
of Eq.~(\ref{bettereq}).
Knowing the numerical solution for the chiral angle $\theta_p$, we can solve Eq. (\ref{bettereq}).
For the numerical solution it is convenient to use the regularization (\ref{scdef});
Eq. (\ref{bettereq}) then takes the form
\beqn
{\cal E} \phi(p) &=& p \sin\theta_p \phi(p) \nonumber\\[3mm]
&-& \lambda \int \frac{dk}{(p - k)^2 + \mu_{IR}^2}
\left[
\cos\frac{\theta_p - \theta_k}{2}~\phi(k)
\right.
\nonumber\\[3mm]
&-&
\left.
\left(\cos\frac{\theta_p - \theta_k}{2}\right)^2 \phi(p)
\right]
\, . \label{bettereqmu}
\end{eqnarray}
Considering (\ref{bettereqmu}) at $\mu_{IR} \rightarrow 0$, one can see that all
infrared divergences cancel each other:
\begin{equation}
{\cal E} \phi(p) = p \sin\theta_p \phi(p) - \frac{\lambda \pi}{\mu_{IR}}\phi(p) + \frac{\lambda \pi}{\mu_{IR}}\phi(p)
+ {\rm a~finite~part}\, . \label{bsecanc}
\end{equation}
We solve Eq. (\ref{bettereqmu}) variationally by expanding the unknown wave function in the basis
\begin{equation}
\phi(p) = \sum_{i = 1}^{N} C_i \chi_i(p)\, . \label{chibasis}
\end{equation}
For the symmetric levels, we choose a basis in the form $$\chi_i(p) = \exp(-\alpha_i p^2)$$
while for antisymmetric $$\chi_i(p) = p\, \exp(-\alpha_i p^2)\,.$$
A relatively small number of Gaussians
is required for a sufficiently accurate expansion.
Given the above basis, Eq. (\ref{bettereqmu}) transforms into a system of linear equations
\begin{widetext}
\begin{eqnarray}
\nonumber
{\cal E} \sum_{i = 1}^{N} C_i \chi_i(p) &=& p\, \sin\theta_p \sum_{i = 1}^{N} C_i \chi_i(p) \\
\nonumber\\[3mm]
&-& \lambda \int \frac{dk}{(p - k)^2 + \mu_{IR}^2}
\left[
\cos\frac{\theta_p - \theta_k}{2}~\sum_{i = 1}^{N} C_i \chi_i(k)
-
\left(\cos\frac{\theta_p - \theta_k}{2}\right)^2 \sum_{i = 1}^{N} C_i \chi_i(p)
\right]\, .
\label{beqmus}
\end{eqnarray}
\end{widetext}
Multiplying (\ref{beqmus}) by $\chi_j(p)$ and integrating over $p$, we obtain the generalized eigenvalue problem:
\begin{equation}
\nonumber {\cal E} D \vec{C_{n}} = (A + B)\vec{C_{n}}\, ,
\end{equation}
where
\begin{eqnarray}
D_{ij} &=& \int dp\, \chi_i(p) \chi_j(p)\, ,
\nonumber\\[3mm]
A_{ij} &=& \int dp\, p \sin\theta_p \chi_i(p) \chi_j(p)\, ,\label{geigenpr}\\[3mm]
B_{ij} &=& -\lambda\int dp \int \frac{dk}{(p - k)^2 + \mu_{IR}^2}
\left[
\cos\frac{\theta_p - \theta_k}{2}~\chi_i(k) \chi_j(p)
\right.
\nonumber\\
&-&
\left.
\left(\cos\frac{\theta_p - \theta_k}{2}\right)^2 \chi_i(p) \chi_j(p)
\right]\, .\nonumber
\end{eqnarray}
Energy levels obtained by solving the problem (\ref{geigenpr})
are shown in Table \ref{tablmes} and in Fig. \ref{spectrum};
the corresponding wave functions are shown in Figs. \ref{wpion} and \ref{wsigma}.
All wave functions are normalized by the condition $\int dp\, \phi^{2}(p) = 1$.
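The variational procedure can be sketched as follows (units $\lambda=1$). Since the sketch must be self-contained, an illustrative odd ansatz replaces the numerical chiral angle, so the resulting numbers are not the physical spectrum; the point is the construction of the matrices $D$, $A$, $B$ and the reduction of the generalized eigenvalue problem to an ordinary symmetric one via a Cholesky factorization of the overlap matrix.

```python
import numpy as np

# Variational solve of the regularized Bethe--Salpeter equation, lambda = 1.
# NOTE: theta below is an illustrative odd ansatz, NOT the gap-equation
# solution, so the eigenvalues only exercise the linear algebra.
lam, mu2 = 1.0, 0.1**2
pg, h = np.linspace(-20.0, 20.0, 400, retstep=True)   # quadrature grid
theta = 0.5 * np.pi * np.tanh(pg)                     # model chiral angle (assumption)

alphas = 0.5 * 2.0**np.arange(6)                      # Gaussian exponents alpha_i
chi = np.exp(-alphas[:, None] * pg[None, :]**2)       # symmetric basis chi_i(p)

cosf = np.cos(0.5 * (theta[:, None] - theta[None, :]))   # cos((theta_p - theta_k)/2)
Kker = 1.0 / ((pg[:, None] - pg[None, :])**2 + mu2)      # regularized kernel

D = h * (chi @ chi.T)                                  # overlap matrix D_ij
A = h * (chi @ (pg * np.sin(theta) * chi).T)           # A_ij
S = (Kker * cosf**2).sum(axis=1)                       # sum_k K cos^2 at each p
B = -lam * h * h * ((chi @ (Kker * cosf) @ chi.T) - (chi * S) @ chi.T)

# generalized problem E D c = (A + B) c  ->  ordinary symmetric problem
L = np.linalg.cholesky(D)
Linv = np.linalg.inv(L)
E = np.linalg.eigvalsh(Linv @ (A + B) @ Linv.T)        # ascending eigenvalues
```

With the physical chiral angle inserted for `theta`, the same construction yields the levels of Table \ref{tablmes}.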
\begin{center}
\begin{table}
\caption{Energy levels of the heavy-light hadrons in units of $\sqrt{\lambda}$}
\label{tablmes}
\begin{center}
~\newline~\newline
\begin{tabular}{|c|c|c|}
\hline
$n$ & $P = -$ & $P = +$ \\ \hline
$0$ & $1.161$ & $3.043$\\
$1$ & $4.300$ & $5.286$\\
$2$ & $6.126$ & $6.868$\\
$3$ & $7.540$ & $8.159$\\
$4$ & $8.734$ & $9.276$\\
$5$ & $9.789$ & $10.27$\\
$6$ & $10.74$ & $11.18$\\ \hline
\end{tabular}
\end{center}
\end{table}
\end{center}
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=0.4\linewidth]{spectrum.pdf}
\caption{\small
Spectrum of the heavy-light mesons in units of $\sqrt{\lambda}$.}
\label{spectrum}
\end{center}
\end{figure}
\begin{figure}[!htpb]
\includegraphics[width=\linewidth]{pion.pdf}
\caption{\small Wave functions of mesons with negative parity (i.e. with the ``symmetric'' relative-motion wave function).
The wave function $\phi_{n}(p)$ is in units of $\lambda^{-1/4}$ and the momentum $p$ is in units of $\sqrt{\lambda}$.}
\label{wpion}
\end{figure}
\begin{figure}[!htpb]
\includegraphics[width=1\linewidth]{sigma.pdf}
\caption{\small Wave functions of mesons with positive parity (i.e. with the ``antisymmetric'' relative-motion wave function).
The wave function $\phi_{n}(p)$ is in units of $\lambda^{-1/4}$ and the momentum $p$ is in units of $\sqrt{\lambda}$.}
\label{wsigma}
\end{figure}
\newpage
\subsection{The heavy-light mesons on the light cone}
Now we deal with the 't Hooft-like equation (\ref{HQLtHooft}).
In order to solve it numerically we split the integral into two parts
\begin{eqnarray}
2 {\cal E}_{m}\varphi_m(\xi) &=& \sqrt{2 \lambda}\, \xi\, \varphi_m(\xi) \nonumber\\[3mm]
&-&\sqrt{2 \lambda}\, \lim_{\epsilon \rightarrow 0}
\left(
\int_{0}^{\xi - \epsilon} \frac{\varphi_m(\tilde{\xi}) - \varphi_m(\xi)}{(\tilde{\xi} - \xi)^2} d\tilde{\xi} \right. \nonumber\\[3mm]
&+&
\left.\int_{\xi + \epsilon}^{\infty} \frac{\varphi_m(\tilde{\xi}) - \varphi_m(\xi)}{(\tilde{\xi} - \xi)^2} d\tilde{\xi}
\right)
\, . \label{Zhitneps}
\end{eqnarray}
Alternatively, the 't Hooft-like equation can be solved with the definition (\ref{scdef}).
It then takes the form
\begin{equation}
2 {\cal E}_{m}\varphi_m(\xi) = \sqrt{2 \lambda}\, \xi\, \varphi_m(\xi) - \sqrt{2 \lambda} \int_{0}^{\infty}
\frac{\varphi_m(\tilde{\xi}) - \varphi_m(\xi)}{(\tilde{\xi} - \xi)^2 + \mu_{IR}^2} d\tilde{\xi}\, , \label{Zhitnmu}
\end{equation}
where $\mu_{IR} \rightarrow 0$ is assumed.
Both equations (\ref{Zhitneps}) and (\ref{Zhitnmu}) were solved numerically in much the same way
as Eq.~(\ref{bettereqmu}).
The results in both cases (\ref{Zhitneps}) and (\ref{Zhitnmu}) coincide. The spectrum is identical
to that following from the laboratory-frame equation (\ref{bettereqmu}), see Fig.~\ref{spectrum}.
The light-cone wave functions are normalized by the condition $\int d\xi\, \varphi_{m}^{2}(\xi) = 1$ and presented in Fig. \ref{ZhitnPlot}.
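A minimal grid discretization of Eq. (\ref{Zhitnmu}) looks as follows (units $\lambda = 1$). The half-line is truncated at a finite $\xi_{\rm max}$ and $\mu_{IR}$ is tied to the grid spacing, which is much cruder than the procedure used for the results above, so only qualitative features of the spectrum (positive, nondegenerate, ordered levels) should be trusted from this sketch.

```python
import numpy as np

# Grid discretization of the regularized 't Hooft-like equation, lambda = 1:
#   2 E_m phi(xi) = sqrt(2) [ xi phi(xi) - sum_b K_ab (phi_b - phi_a) ].
# The xi' = xi point drops out of both pieces of the subtracted integral.
xi, h = np.linspace(0.0, 60.0, 600, retstep=True)
K = h / ((xi[:, None] - xi[None, :])**2 + h**2)       # kernel quadrature weights

H = -np.sqrt(2.0) * K                                 # off-diagonal part
np.fill_diagonal(H, np.sqrt(2.0) * (xi + K.sum(axis=1) - np.diag(K)))
Em = 0.5 * np.linalg.eigvalsh(H)                      # eigenvalues of H are 2 E_m
```

The discrete quadratic form is manifestly positive, $\sqrt{2}\,[\sum_a \xi_a\phi_a^2 + \frac12\sum_{a\neq b}K_{ab}(\phi_a-\phi_b)^2]$, so all levels come out positive, as they must.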
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=1\linewidth]{waveZhitnitsky.pdf}
\caption{\small
Wave functions of mesons obtained from the 't Hooft-like equation.
Even values of $m$ correspond to the negative-parity mesons and odd values of $m$ to the positive-parity mesons.
Both the wave functions $\varphi_{m}(\xi)$ and the variable $\xi$ are dimensionless.}
\label{ZhitnPlot}
\end{center}
\end{figure}
\subsection{Equation (\ref{bettereq}) with the unphysical chiral angle vs. the
't Hooft-like equation}
\label{unphis}
It is well known that the chiral condensate cannot be directly captured if one works on the light cone.
At the same time, the chiral symmetry breaking is seen indirectly, through the absence of the parity degeneracy
in the spectrum of physical mesons. The situation in our laboratory-frame construction is totally different.
The nonsingular solution for $\theta_p$, see Section \ref{NGapSolSec}, immediately produces
$\langle\bar\psi\psi\rangle\neq 0$, see Eq. (\ref{psibarpsii}). As a result, naturally, all $P$-odd states split from $P$-even.
The singular solution (\ref{unsst}) would lead to $\langle\bar\psi\psi\rangle =0\, .$ If, using (\ref{unsst}), we could
obtain a consistent laboratory-frame Bethe--Salpeter equation, with a proper Foldy--Wouthuysen
transformation, it would have produced a parity-degenerate meson spectrum,
in full accord with general theorems. However, (\ref{unsst}) implies (\ref{unsse}), which obviously
precludes the use of (\ref{unsst}) in the Bethe--Salpeter equation
because of negativity of the solution (\ref{unsse}) at small $|p|$.
Physically it means that the chiral symmetry
is {\em a priori} broken in the 't Hooft model. Trying to restore it by brute force, insisting on the
chirally symmetric vacuum, we see that the bound state equation for hadrons in the rest frame
is not defined, and no consistent solution for the hadronic spectrum exists.
Nevertheless, let us perform this incorrect and illegitimate operation, and see what happens. Below
we examine a strange construct, namely, Eq.~(\ref{bettereq}) with the {\em singular} (unphysical) chiral angle, i.e.
we replace $\theta_{p,k}$ in Eq.~(\ref{bettereq}) by (\ref{unsst}). This is no longer a legitimate
laboratory-frame Bethe--Salpeter equation. But it has a miraculous feature.
For positive values of $p$ we get
\beq
{\cal E} \phi (p) = p\, \phi(p)
-\lambda\,\int_0^\infty\, \frac{dk}{(p-k)^2}\left[
\,\phi (k) -
\,\,\phi(p)\right] .
\label{bettequ}
\end{equation}
Next, we introduce dimensionless variables (marked by tildes)
\beq
p =\sqrt{\lambda}\, \tilde{p}\,,\qquad k =\sqrt{\lambda} \,\tilde{k}\,.
\label{dv}
\end{equation}
The wave functions are to be understood now as functions depending on $\tilde{p},\,\,\tilde{k}$ rather than
$p,\,\,k $, although we will keep using the same notation $\phi$.
Then, in terms of these dimensionless variables, Eq.~(\ref{bettequ}) takes the form
\beq
{\cal E} \phi (p) = \sqrt{\lambda} \,\tilde{p}\, \phi(\tilde{p})
-\sqrt\lambda\,\int_0^\infty\, \frac{d\tilde{k}}{(\tilde{p}-\tilde{k})^2}\left[
\,\phi (\tilde{k}) -
\,\,\phi(\tilde{p})\right] .
\label{bettequp}
\end{equation}
Compare it with Eq. (\ref{Zhitneps}) or (\ref{Zhitnmu}). We observe, with surprise, that Eq. (\ref{bettequp}) is
identical to (\ref{Zhitneps}), up to a renaming of the integration variables and rescaling
\beq
{\cal E} \to \sqrt{2} {\cal E}_m\, .
\end{equation}
Thus, the laboratory frame Bethe--Salpeter equation with the {\em wrong} chiral angle
and the boundary conditions inappropriate for the laboratory frame equation
\footnote
{
The laboratory frame Bethe--Salpeter equation requires vanishing of the odd wave functions at $p=0$, while the wave functions of the
't Hooft-like equation do not vanish at $\xi=0$.
}
reproduces the spectrum of the ({\em correct}) 't Hooft-like light-cone equation
up to an overall energy scale which is off by a factor of $1/\sqrt{2}$. In particular, the ratios of the energy levels
following from (\ref{bettequ}) are correct. The physical reason for this coincidence remains puzzling.
~\newline
\section{\label{concl} Conclusions}
We studied the heavy-light mesons in $(1+1)$-dimensional QCD in the 't Hooft limit, with the emphasis on the
impact of the chiral symmetry breaking both on the spectrum and wave functions. To this end we compared
two alternative quantization schemes: laboratory frame Bethe--Salpeter equation with a nontrivial chiral angle
and the light-cone 't Hooft-like equation which has no direct information on the chiral condensate in the vacuum.
The two distinct limiting procedures leading to these respective equations are not {\em a priori} interchangeable.
First, we solved the system in the laboratory frame using the Coulomb (axial) gauge.
The solution proceeds in two steps. One begins with the solution of the gap
Green's function as well as the quark condensate in the vacuum. Chiral symmetry is manifestly dynamically broken
in the vacuum. Then one solves the Bethe--Salpeter equation determining the odd and even wave functions and the spectrum.
Chiral symmetry is broken in the spectrum too. The spectral results are independent
of the gauge choice and of the infrared regularization scheme.
Second, we solved the same system on the light cone. In this case there is no analog of the gap equation,
and the vacuum is trivial. Nevertheless, the chiral symmetry is broken in the observable
spectrum.
Needless to say, all wave functions are totally different (they depend on variables which have very different meanings in the two schemes).
While dynamical equations on the light cone and in the laboratory frame (with the Coulomb gauge)
look very different, the results for the spectra are the same. We demonstrated this numerically;
the question of explicitly finding an appropriate unitary transformation between both schemes remains open.
A curious fact was observed {\em en route}.
The
laboratory frame equation for $\phi (p)$ becomes identical to the 't
Hooft-like equation for
$\varphi (\xi )$ (see Eq. (\ref{HQLtHooft})) upon substituting into the laboratory-frame equation
a singular (nonphysical) solution for the chiral angle, with a simultaneous rescaling of the overall
energy scale.
{\bf Acknowledgements}
L.Ya.G. and V.K.S. acknowledge support from the Austrian Science
Fund (FWF) through the grant P21970-N16. M.S. is grateful to
Frieder Lenz and Michael Thies for useful discussions. The work of M.S.
is supported in part by DOE grant DE-FG02-94ER40823.
\section{Introduction}
Using the quantum field theory in curved spacetime, Hawking \cite{hawking1}
found that the collapsing black hole will lead, at late times, to a radiation
of particles in all modes of the quantum field, with characteristic thermal
spectrum at a temperature $1/8\pi M$. It is generally believed that the pair
productions occur inside and outside the horizon of the black hole and tunnel
across the horizon. Later, with the help of Feynman's
\cite{feyman} path-integral method of quantum mechanics, he found
\cite{hawking2} that the probability of emission of particles from the past
horizon is not the same as the probability of absorption into the future
horizon. The ratio between them for the Schwarzschild black hole is of the
form $\Gamma_{out}=e^{- E/8\pi M}\Gamma_{in}$; for the
Reissner-Nordstr\"{o}m black hole it is
$\Gamma_{out}=e^{-(E-qV_+)/T_R}\Gamma_{in}$; for the Kerr black hole it is
$\Gamma_{out}=e^{-(E-m\Omega_+)/T_K}\Gamma_{in}$. These discoveries
have attracted a lot of interest
\cite{sri,sh1,sh2,ag,man,man1,wil1,wil2,wil3,wil4,berger,medved,chendeyou,
jingyi,jiang,zz1,zz, par,par1,par2,arz,sqw,sqw1,sqw2,ajm,pm1,pm}.
Enlightened by the path-integral method, K. Srinivasan {\it et al}
\cite{sri,sh1,sh2} used Landau's \cite{landau} complex-paths method to derive the
radiance without using the Kruskal extension. They treated the radiance as
tunnelling across the singularity and took the wave functions in the semiclassical
approximation as $\exp[\frac{i}{\hbar}I(r,t)]$, where $I$ is the classical
action, which can be expanded in powers of $\hbar/i$. To lowest order, $I$
satisfies the relativistic Hamilton-Jacobi equation, which admits a solution
$I_\pm=-Et\pm W(r)+J(x^i)$, where ``$+$'' refers to outgoing particles and ``$-$''
to incoming particles. $I$ has a pole at the horizon $r=r_+$, and the
probabilities for the particles are $\Gamma_{out}\sim e^{-2\text{Im}I_+},~~
\Gamma_{in}\sim e^{-2\text{Im}I_-}$.
This complex-path method has become known as the Hamilton-Jacobi method after
being developed further by Angheben {\it et al} \cite{ag} and Mann {\it et al}
\cite{man,man1}. To ensure that the probability is normalized, they used the
boundary condition for incoming particles, which fall behind the horizon
along classically permitted trajectories, i.e. $I=-Et+W(r)+J(x^i)+K$, where
$K$ is a complex normalizing constant. So the total probability is
\begin{eqnarray}\label{gamma2} \Gamma=\Gamma_{out}\sim
e^{-2[\text{Im}I_+-\text{Im}I_-]}.
\end{eqnarray}
However, this method can only be used when the background
spacetime is considered fixed, so that energy conservation is not
enforced during the emission process. Some efforts to extend this method
to dynamical geometries have been made \cite{medved,chendeyou}, but these
generalizations remain somewhat crude.
Following the usual thermodynamic approach, we divide the emission process into many
infinitely small segments, each of which can be treated as a quasi-static
process: the background spacetime can be treated as fixed and an equilibrium
temperature exists. Thus in every segment we can apply the Hamilton-Jacobi
method. In different segments the instantaneous event horizon is
different. We obtain the action $I_i$ in every small time interval after the
particle has tunneled across the instantaneous horizon. To get the final action
$I$, the change $\Delta I_i$ between successive instantaneous actions should be
considered. Then $I=\sum \Delta I_i\sim \int dI$. After integrating over the
whole process, we obtain the thermal spectrum incorporating the effects of
the back-reaction on the background spacetime, which is the same as that
obtained by the null geodesic method proposed in \cite{wil1}.
There are two different approaches used to model the tunneling process.
The first to be developed was the null geodesic method of Parikh and
Wilczek \cite{wil1}. The other is the Hamilton-Jacobi method. The null
geodesic method has a couple of unpleasant features: (i) it strongly
relies on a very specific choice of (regular-across-the-horizon) coordinates,
and (ii) it turns upside down the relationship between Hawking radiation
and back-reaction \cite{vanzo}. The Hamilton-Jacobi method can cope with both
issues. Therefore our method can be applied to any sort of horizon and
particle.
We also study the change of the total entropy of the system, including the black hole
and the radiating particles, by investigating where the particle energy comes
from. This explains why the entropy change of the black hole can be obtained by
probing its radiating particles. Our result is that the change of the total
entropy $\Delta S$ is very small but nonzero, which differs from
Ref. \cite{wil1}, where $\Delta S=0$. In
Ref. \cite{jingyi}, the authors argue that the null geodesic method is only
suitable for reversible processes, while the actual emission process is
irreversible and may lose information. Here we argue that the
Hamilton-Jacobi method is suitable for the irreversible process and that
very little information is lost in the emission process.
Our paper is organized as follows. Section
II treats the radiation of neutral scalar particles from the Reissner-Nordstr\"{o}m
black hole and the entropy change of the black hole and radiating particles.
Section III is devoted to charged scalar particles radiating from the
Reissner-Nordstr\"{o}m black hole. Section IV discusses charged
scalar particles radiating from the Kerr-Newman black hole.
In Section V, the radiation of charged Dirac particles from the Kerr-Newman
black hole is investigated. Section VI is a summary.
\section{Hawking radiation for Reissner-Nordstr\"{o}m black hole from neutral scalar particles radiation}
The line element of the charged Reissner-Nordstr\"{o}m black hole is described by
\begin{eqnarray}\label{rn}
ds^2=-\frac{\Delta(r)}{r^2}dt^2+\frac{r^2}{\Delta(r)}dr^2+r^2(d\theta^2+\sin^2\theta
d\varphi^2),
\end{eqnarray}where $\Delta(r)=r^2-2Mr+Q^2$. When a neutral particle
with energy $E$ tunnels out across the horizon, the black hole mass
$M$ is decreased to $M-E$ due to energy conservation;
that is to say, the background spacetime is affected by the back-reaction of
the emitted particle, $g^{\mu\nu}(r(M))\rightarrow g^{\mu\nu}(r(M-E))$.
However, because of the quantum uncertainty principle, the approximation of a
discontinuous jump from $M$ to $M-E$ for the black hole mass seems too crude.
Rather, it requires a ``gradual'' transition (relative to
whatever time scale is characteristic of the radiation process). Therefore we
divide this process into many infinitely small segments, during the $i$th of
which the particle obtains energy $\Delta\omega_i$, where
$\Delta\omega_i=\omega_i-\omega_{i-1}\ll E$. These segments can be treated as
quasi-static processes and handled by the Hamilton-Jacobi method.
A particle of instantaneous energy $\omega_i$ effectively ``sees'' a
spacetime metric of the form
\begin{eqnarray}\label{backreaction}
ds^2=-\frac{\Delta(r(M-\omega_i))}{r^2}dt^2+\frac{r^2}
{\Delta(r(M-\omega_i))}dr^2+r^2(d\theta^2+\sin^2\theta d\varphi^2).
\end{eqnarray}
In the following subsection, we use Hamilton-Jacobi method to study Hawking
radiation incorporating back-reaction as tunneling, then the entropy change
of the whole system of black hole and radiating particles in the next
subsection.
\subsection{The tunneling process}
Now we divide the tunneling time $t$ into infinitely small pieces $t_i$ and
use the Hamilton-Jacobi method to study these infinitely small
processes. The WKB approximation wave function is
\begin{eqnarray}\label{wave}
\phi(t_i,r,\theta,\varphi)=\exp\Big[\frac{i}{\hbar}I_i(t_i,r,\theta,\varphi)
+I'_1(t_i,r,\theta,\varphi) +\mathcal{O}(\hbar)\Big].
\end{eqnarray}The Klein-Gordon equation is
\begin{eqnarray}\label{kg}
&&\frac{1}
{\sqrt{-g(r(M-\omega_i))}}\partial_{\mu}\Big(\sqrt{-g(r(M-\omega_i))}
\;g^{\mu\nu}(r(M-\omega_i))
\partial_{\nu}\phi\Big)-\frac{u^2}{\hbar^2}\phi =0.
\end{eqnarray}
Substituting Eq. (\ref{wave}) into (\ref{kg}), to leading order in $\hbar$ one
gets the following relativistic Hamilton-Jacobi equation
\begin{eqnarray}\label{hj}
g^{\mu\nu}(r(M-\omega_i))(\partial_{\mu} I_i\partial_{\nu} I_i)+u^2=0,
\end{eqnarray}which admits a solution of the form
\begin{eqnarray}I_i=-\omega_i t_i+W_i(r)+J_i(\theta,
\varphi)+K_i\;,\end{eqnarray}
where $K_i$ is a complex
constant normalizing the action function.
Substituting Eq. (\ref{backreaction}) into
(\ref{hj}) yields
\begin{eqnarray}
W_{i\pm}(r)&=&\pm \int\frac{r^2dr}{\Delta(r(M-\omega_i))}
\sqrt{\omega_i^2-\frac{\Delta(r(M-\omega_i))}{r^2}(u^2+g^{pk}J_pJ_k)}\;,
\nonumber
\end{eqnarray}where $J_p=\partial_p I_i$, $p=\theta,\varphi$.
One of the above solutions
corresponds to scalar particles moving
away from the black hole (``$+$'', outgoing) and the other
to particles moving toward the black hole (``$-$'', incoming).
Imaginary parts of the action can only come from the vicinity of the pole at
the horizon. Integrating around the pole, the imaginary parts are
\begin{eqnarray}
\text{Im}I_{i\pm}=\text{Im}W_{i\pm}(r)= \pm
f(M-\omega_i)\;\omega_i+\text{Im}K_i\;,\;\;f(M-\omega_i)=\frac{\pi
r_+'^2(M-\omega_i)} {2\sqrt{(M-\omega_i)^2-Q^2}}, \nonumber\\
\end{eqnarray}
where $r'_+(M-\omega_i)=M-\omega_i
+\sqrt{(M-\omega_i)^2-Q^2}$ is the instantaneous horizon. Therefore the
action of the particle tunneled across the $i$th instantaneous horizon is
\begin{eqnarray}I_i=-\omega_i t_i+if(M-\omega_i)\;\omega_i+K_i\;.\end{eqnarray}
When
the energy of the particle gradually approaches $E$, its action is $I$. To
obtain it, we should consider the change $\Delta I_i$ between the $i$th and $(i-1)$th
instantaneous actions,
\begin{eqnarray}\Delta I_i=-\Delta\omega_i\; t_i+if(M-\omega_i)\;\Delta\omega_i
+\Delta K_i\;.\end{eqnarray} Then
\begin{eqnarray}I=\sum \Delta I_i=\int_0^E\big[-t\,d\omega+if(M-\omega)\;d\omega\big]
+K\;.\end{eqnarray}
The imaginary part of the action is then given by
\begin{eqnarray}\label{integral}\text{Im}I=\int^{E}_{0}
\frac{\pi r_+'^2} {2\sqrt{(M-\omega)^2-Q^2}}d\omega+\text{Im}K.\end{eqnarray}
The imaginary
parts of the action of the tunneled particle are
\begin{eqnarray}
\text{Im}I_\pm
&=&\pm\int^{E}_{0}\frac{\pi(M-\omega+\sqrt{(M-\omega)^2-Q^2})^2}
{2\sqrt{(M-\omega)^2-Q^2}}d\omega+\text{Im} K \nonumber\\
&=&\pm \frac{\pi}{2}\big[2E(M-\frac{E}{2})-(M-E)\sqrt{(M-E)^2-Q^2}
+M\sqrt{M^2-Q^2}\;\big]+\text{Im} K.\nonumber\\
\end{eqnarray}
Using Eq.
(\ref{gamma2}), the emission probability is
\begin{eqnarray}\label{bhentropy}
\Gamma
=e^{-2\pi\big(2E(M-\frac{E}{2})-(M-E)\sqrt{(M-E)^2-Q^2}
+M\sqrt{M^2-Q^2}\;\big)}=e^{\Delta S_{B-H}},
\end{eqnarray}
where $\Delta S_{B-H}$ is the change of the Bekenstein-Hawking entropy of the
black hole. This result is the same as in \cite{wil1}. In the following
subsection, we study the entropy change of the whole system and why the
emission probability (\ref{bhentropy}) is related to the entropy change of
the black hole $\Delta S_{B-H}$.
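The closed-form result above is easy to cross-check numerically: one integrates (\ref{integral}) by the trapezoidal rule and compares both with the bracket in the exponent of (\ref{bhentropy}) and with $\Delta S_{B-H}=\pi\big(r_+'^2(M-E)-r_+^2(M)\big)$, using $\Gamma=e^{-2({\rm Im}I_+-{\rm Im}I_-)}=e^{-4\,{\rm Im}W_+}$. The sample values of $M$, $Q$, $E$ below are our arbitrary non-extremal choices (units $G=c=\hbar=1$).

```python
import numpy as np

# Cross-check of the neutral-particle emission probability, G = c = hbar = 1.
# Sample values are arbitrary non-extremal choices (our assumption).
M, Q, E = 1.0, 0.4, 0.3

w = np.linspace(0.0, E, 200001)
s = np.sqrt((M - w)**2 - Q**2)
rp = (M - w) + s                                   # instantaneous horizon r'_+
f = np.pi * rp**2 / (2.0 * s)                      # integrand of Eq. (integral)
h = w[1] - w[0]
ImI_num = h * (f.sum() - 0.5 * (f[0] + f[-1]))     # trapezoidal rule

ImI_closed = 0.5 * np.pi * (2.0 * E * (M - 0.5 * E)
                            - (M - E) * np.sqrt((M - E)**2 - Q**2)
                            + M * np.sqrt(M**2 - Q**2))

# Bekenstein-Hawking entropy change: Gamma = exp(-4 Im W_+) = exp(Delta S)
rp0 = M + np.sqrt(M**2 - Q**2)
rpf = (M - E) + np.sqrt((M - E)**2 - Q**2)
dS_BH = np.pi * (rpf**2 - rp0**2)
```

The identity $-4\,{\rm Im}W_+=\Delta S_{B-H}$ holds algebraically, since $r_+^2=2M^2-Q^2+2M\sqrt{M^2-Q^2}$.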
\subsection{The total entropy change of the whole system}
How does the particle obtain its energy? If we consider the black hole and its
radiation as an isolated system, then the particle energy can only come from
absorbing an amount of heat $\widetilde{Q}$ from the black hole. Because of vacuum
fluctuations near the event horizon, at some time the black hole temperature
approaches $T'(M-\omega)$, which is higher than the particle temperature
$T'(M)$, where $\omega$ is the energy from the vacuum fluctuation. Thus heat
can spontaneously transfer from the black hole to the particle.
In the first
segment, where the particle energy goes $0\rightarrow\omega_1$, the
black hole entropy decreases,
\begin{eqnarray}\Delta
S'_1=-\widetilde{Q}_1/T'(M-\omega_1),\end{eqnarray}
where $S'$ is the black hole
entropy, $T'(M-\omega_1)=\frac{1}{2\pi}\sqrt{(M-\omega_1)^2-Q^2}
/r_+'^2(M-\omega_1)$. The particle entropy $S''$ increases,
\begin{eqnarray}\Delta S''_1=\widetilde{Q}_1/T'(M),\end{eqnarray}
and the entropy of the system increases
\begin{eqnarray}\Delta
S_1=\Delta S''_1+\Delta
S'_1=\widetilde{Q}_1/T'(M)-\widetilde{Q}_1/T'(M-\omega_1)>0.\end{eqnarray} This
shows that the radiation process is irreversible, and $\omega_1$ satisfies
\begin{eqnarray}\Delta\omega_1=\omega_1-0=-T'(M-\omega_1)\Delta S'_1\;.\end{eqnarray}
It is easy to see that we can indeed obtain the entropy change of the black
hole by probing the energy of its radiated particles, and that the
emission probability (\ref{bhentropy}) is indeed related to the entropy
change of the black hole $\Delta S_{B-H}$. At the end of the first segment,
the particle temperature has approached $T'(M-\omega_1)$ and that of the black
hole $T'(M-\omega_2)$.
In the second segment, where the particle energy goes $\omega_1\rightarrow\omega_2$, the
particle absorbs heat $\Delta\omega_2=\omega_2-\omega_1=-T'(M-\omega_2)\Delta
S'_2$, and the increase of the system entropy is
\begin{eqnarray}\Delta
S_2=\Delta S''_2+\Delta S'_2=\Delta\omega_2/T'(M-\omega_1)-\Delta\omega_2
/T'(M-\omega_2).\end{eqnarray}
When the energy of the particle has approached $E$, the black hole
temperature
is $T'(M-E)$,
and the total increase of the system entropy is
\begin{eqnarray}\label{entropy}\Delta
S=\Delta S_1+\Delta
S_2+\cdots=\Delta\omega/T'(M)-\Delta\omega/T'(M-E)<\Delta\omega/T'(M)\end{eqnarray}
under the condition that $\Delta\omega_1=\Delta\omega_2=\cdots=\Delta\omega$. Since
$\Delta\omega\ll E$, the total increase of the system entropy is very
small but nonzero. This agrees with Ref. \cite{zurek}, but differs
from Ref. \cite{wil1}, in which $\Delta S=0$ ({\it in the limit that the
emitted particle carries away the entire mass and charge of the black hole,
there are exp$(S_{B-H})$ states in total and the probability of finding a
shell containing all of the mass of the black hole is proportional to
exp$(-S_{B-H})$}).
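The bookkeeping behind Eq. (\ref{entropy}) is easy to verify numerically: with equal steps $\Delta\omega$ the sum of the segment contributions telescopes to $\Delta\omega/T'(M)-\Delta\omega/T'(M-E)$, which is positive but much smaller than $\Delta\omega/T'(M)$. The sample values below are our arbitrary non-extremal choices.

```python
import numpy as np

# Segment-by-segment bookkeeping of the total entropy change, with equal
# energy steps dw; sample values are arbitrary non-extremal choices.
def T(m, Q):
    s = np.sqrt(m * m - Q * Q)
    return s / (2.0 * np.pi * (m + s)**2)          # instantaneous temperature T'

M, Q, E, n = 1.0, 0.3, 0.05, 1000
dw = E / n
omega = dw * np.arange(1, n + 1)                   # omega_i after segment i
# segment i: particle at T'(M - omega_{i-1}), black hole at T'(M - omega_i)
dS = dw / T(M - omega + dw, Q) - dw / T(M - omega, Q)
total = dS.sum()                                   # telescopes to dw/T'(M) - dw/T'(M-E)
```

Far from extremality $T'$ grows as the mass decreases, so every segment contributes a small positive amount and the total stays far below $\Delta\omega/T'(M)$.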
\section{Hawking radiation for Reissner-Nordstr\"{o}m black hole from charged
scalar particles radiation}
A particle of instantaneous energy $\omega_i$ and charge $q_i'$
effectively ``sees'' a spacetime metric of the form
\begin{eqnarray}\label{charged}
ds^2=-\frac{\Delta(r(M-\omega_i,Q-q_i'))}{r^2}dt^2+\frac{r^2}
{\Delta(r(M-\omega_i,Q-q_i'))}dr^2+r^2(d\theta^2+\sin^2\theta d\varphi^2).
\nonumber\\
\end{eqnarray}
The charged Klein-Gordon equation is
\begin{eqnarray}\label{kg2}
&&\frac{(\partial_{\mu}-iq_i'A_\mu)\Big(\sqrt{-g(r(M-\omega_i,Q-q_i'))}
\;g^{\mu\nu}(r(M-\omega_i,Q-q_i')) (\partial_{\nu}-iq_i'A_\nu)\phi\Big)}
{\sqrt{-g(r(M-\omega_i,Q-q_i'))}} -\frac{u^2}{\hbar^2}\phi=0,\nonumber\\
\end{eqnarray}
where $A_{{\mu}}=(-(Q-q_i')/r,0,0,0)$ is the potential of the electromagnetic
field of
the background spacetime.
Substituting Eq. (\ref{wave}) into (\ref{kg2}), to leading order in $\hbar$ one
gets the following relativistic Hamilton-Jacobi equation
\begin{eqnarray}\label{hj2}
g^{\mu\nu}(r(M-\omega_i,Q-q_i'))(\partial_{\mu} I_i\partial_{\nu}
I_i+q_i'^2A_\mu A_\nu-2q_i'A_\mu\partial_\nu I_i)+u^2=0.
\end{eqnarray}
Substituting Eq. (\ref{charged}) into
(\ref{hj2}) yields
\begin{eqnarray}
W_{i\pm}(r)&=&\pm \int\frac{r^2dr}{\Delta(r(M-\omega_i,Q-q_i'))}
\sqrt{\big[\omega_i-\frac{q_i'(Q-q_i')}{r}\big]^2
-\frac{\Delta(r(M-\omega_i,Q-q_i'))}{r^2}(u^2+g^{pk}J_pJ_k)}\;. \nonumber
\end{eqnarray}
Integrating around the pole, the imaginary parts are
\begin{eqnarray}
\text{Im}I_{i\pm}=\text{Im}W_{i\pm}(r)= \pm\frac{\pi
r_+'^2(M-\omega_i,\;Q-q_i')}
{2\sqrt{(M-\omega_i)^2-(Q-q_i')^2}}\Big(\omega_i-q_i'V'_+(M-\omega_i,\;Q-q_i')\Big)
+\text{Im}K,\nonumber\\
\end{eqnarray}
where $V'_+(M-\omega_i,\;Q-q_i')=(Q-q_i')/r'_+(M-\omega_i,\;Q-q_i')$ is the instantaneous
electric potential at the outer horizon,
and $r'_+(M-\omega_i,\;Q-q_i')=M-\omega_i
+\sqrt{(M-\omega_i)^2-(Q-q_i')^2}$ is the instantaneous horizon. The action
of the particle tunneled across the $i$th instantaneous horizon is
\begin{eqnarray}I_i=-\omega_i t_i+i\frac{\pi r_+'^2}
{2\sqrt{(M-\omega_i)^2-(Q-q_i')^2}}(\omega_i-q_i'V'_+)+K_i\;,\end{eqnarray}
and the change between the $i$th and $(i-1)$th instantaneous actions $\Delta
I_i$
\begin{eqnarray}\Delta I_i=-\Delta\omega_i\; t_i+i\frac{\pi r_+'^2}
{2\sqrt{(M-\omega_i)^2-(Q-q_i')^2}}(\Delta\omega_i-\Delta q_i'V'_+) +\Delta
K_i\;,\end{eqnarray}where $\Delta q_i'=q_i'-q'_{i-1}$.
When
the energy of the particle gradually approaches $E$, its action is
\begin{eqnarray}\label{integral4}I=\sum \Delta I_i=\int_0^E-t\,d\omega
+i\int_{(0,0)}^{(E,q)}\frac{\pi r_+'^2}
{2\sqrt{(M-\omega)^2-(Q-q')^2}}(d\omega-V'_+dq') +K\;.\nonumber\\
\end{eqnarray}
Using Eq. (\ref{integral4}), the imaginary
parts of the action of the tunneled particle are
\begin{eqnarray}
\text{Im}I_\pm
&=&\pm\int^{(E,q)}_{(0,0)}\frac{\pi(M-\omega+\sqrt{(M-\omega)^2-(Q-q')^2})^2}
{2\sqrt{(M-\omega)^2-(Q-q')^2}}(d\omega-V_+'dq')+\text{Im} K \nonumber\\
&=&\pm \frac{\pi}{2}\big[2E(M-\frac{E}{2})-(M-E)\sqrt{(M-E)^2-(Q-q)^2}
\nonumber\\
&&+M\sqrt{M^2-Q^2}-Qq+\frac{1}{2}q^2\;\big]+\text{Im} K.
\end{eqnarray}
Using Eq.
(\ref{gamma2}), the emission probability is
\begin{eqnarray}
\Gamma
=e^{-2\pi\big(2E(M-\frac{E}{2})-(M-E)\sqrt{(M-E)^2-(Q-q)^2}
+M\sqrt{M^2-Q^2}-Qq+\frac{1}{2}q^2\;\big)}=e^{\Delta S_{B-H}},\nonumber\\
\end{eqnarray}
which is the same as in \cite{zz1}.
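As a cross-check of the charged case, the differential in (\ref{integral4}) is exact, so the line integral may be evaluated numerically along the straight path $(\omega,q')=t\,(E,q)$, $t\in[0,1]$, and $-4\,{\rm Im}W_+$ compared with the Bekenstein-Hawking entropy change. The sample values below are our arbitrary non-extremal choices (units $G=c=\hbar=1$).

```python
import numpy as np

# Charged-particle cross-check: integrate f domega - f V'_+ dq' along the
# straight path (omega, q') = t (E, q); exactness of the differential makes
# the result path-independent.  Sample values are arbitrary non-extremal.
M, Q, E, q = 1.0, 0.5, 0.2, 0.1

t = np.linspace(0.0, 1.0, 400001)
x, y = M - t * E, Q - t * q                        # x = M - omega, y = Q - q'
s = np.sqrt(x * x - y * y)
rp = x + s                                         # instantaneous outer horizon
f = np.pi * rp**2 / (2.0 * s)
integrand = f * E - f * (y / rp) * q               # (f, -f V'_+) . (E, q)
h = t[1] - t[0]
ImW = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

rp0 = M + np.sqrt(M**2 - Q**2)
rpf = (M - E) + np.sqrt((M - E)**2 - (Q - q)**2)
dS_BH = np.pi * (rpf**2 - rp0**2)                  # Gamma = exp(-4 Im W_+) = exp(dS)
```

The agreement of $-4\,{\rm Im}W_+$ with $\Delta S_{B-H}$ confirms that the emission probability is controlled by the change of the horizon area.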
Now we turn to the acquisition of the particle energy. At some time, the black hole
temperature rises from $T'(M,\;Q)$ to $T'(M-\omega_1,\; Q-q_1')$ and its horizon
shrinks from $r_+'(M,\;Q)$ to $r_+'(M-\omega_1,\; Q-q_1')$, which leads to an
increase of the electric potential of the black hole, $V_+'(M,\;Q)\rightarrow
V_+'(M-\omega_1,\; Q-q_1')$, where $T'(M-\omega_1,\;
Q-q_1')=\frac{1}{2\pi}\sqrt{(M-\omega_1)^2-(Q-q_1')^2}/
r_+'^2(M-\omega_1,\;Q-q_1')$. Then heat and electric charge
flow to the particle, which leads to a further increase of the black
hole temperature and electric potential. The particle energy comes from two
sources: the internal energy of the black hole and the electric potential energy
$q'_iV'_+$. The energy step is
\begin{eqnarray}\Delta\omega_i=\omega_i-\omega_{i-1}=-
T'(M-\omega_i)\Delta S_i'+V'_+\Delta q_i'\;,\end{eqnarray}
where $\Delta q_i'=q_i'-q'_{i-1}$.
\section{Hawking radiation for Kerr-Newman black hole from charged
scalar particles radiation} The ``no hair'' theorem states that all
information about the collapsing body is lost from the outside region apart
from three conserved quantities: the mass, the angular momentum and the
electric charge; hence the final state of most collapsing stars is a Kerr-Newman
black hole. In Boyer-Lindquist coordinates, its line element in four-dimensional
spacetime is described by
\begin{eqnarray}\label{kn}
&&ds^2=-\left(1-\frac{2Mr}{\rho^2}\right)dt^2-\frac{4Mra\sin^2\theta}
{\rho^2}dtd\varphi+\frac{\rho^2}{\triangle}dr^2\nonumber\\
&&+\rho^2d\theta^2+\left(r^2+a^2+\frac{2Mra^2\sin^2\theta}{\rho^2}\right)\sin^2\theta
d\varphi^2,
\end{eqnarray}with
\begin{eqnarray}\nonumber
&&\rho^2=r^2+a^2\cos^2\theta,~~~~ \triangle=r^2-2Mr+a^2+Q^2=(r-r_+)(r-r_-)\nonumber\\
&&r_+=M+\sqrt{M^2-a^2-Q^2},~~~~r_-=M-\sqrt{M^2-a^2-Q^2},
\end{eqnarray}where $M$ is the mass of the black hole and $a=J/M$ is the angular momentum
parameter; $r_-$ and $r_+$ are the inner and event horizons; its
electromagnetic field potential is
$A_{{\mu}}=(-Qr/\rho^2,~0,~0,~Qra\sin^2\theta/\rho^2)$. When a particle with
energy $E$, electric charge $q$ and angular momentum $j$ tunnels out
across the horizon, the black-hole mass, electric charge and angular
momentum $M,\;Q,\;J$ decrease to $M-E,\;Q-q,\;J-j$ by energy,
electric charge and angular momentum conservation.
We again divide this process into infinitely many small
segments, during the $i$th of which the particle obtains energy
$\Delta\omega_i$, electric charge $\Delta q_i'$ and angular momentum
$\Delta j'_i$, where $\Delta j'_i=j'_i-j'_{i-1}$. These segments can be
treated as quasi-static processes and handled by the Hamilton-Jacobi method.
A particle of instantaneous energy $\omega_i$, charge $q_i'$ and angular
momentum $j'_i$ effectively sees a spacetime metric of the form
\begin{eqnarray}\label{knp}
&&ds^2=-\left(1-\frac{2(M-\omega_i)r}{\tilde{\rho}^2}\right)dt^2
-\frac{4(M-\omega_i)r\tilde{a}_i\sin^2\theta}
{\tilde{\rho}^2}dtd\varphi+\frac{\tilde{\rho}^2}{\triangle(r(M-\omega_i,
Q-q'_i,J-j_i'))}dr^2\nonumber\\
&&+\tilde{\rho}^2d\theta^2+\left(r^2+\tilde{a}_i^2+\frac{2(M-\omega_i)r\tilde{a}_i^2
\sin^2\theta}{\tilde{\rho}^2}\right)\sin^2\theta d\varphi^2,
\end{eqnarray}where $\tilde{a}_i=(J-j_i')/(M-\omega_i)$,
$\tilde{\rho}^2=r^2+\tilde{a}_i^2\cos^2\theta$.
We divide the tunneling time $t$ and the rotation angle $\varphi$ into
infinitely small pieces $t_i,\;\varphi_i$ and use the Hamilton-Jacobi method.
By energy and angular momentum conservation, the instantaneous action must
be of the form
\begin{eqnarray}I_i=-\omega_i t_i+W_i(r)+j_i'\varphi_i+\tilde{J_i}(\theta)+K_i.\end{eqnarray} Substituting the line
element (\ref{knp}) into the Hamilton-Jacobi equation (\ref{hj2}), one finds
\begin{eqnarray}\label{separation}
&&\triangle^2W_i'^2(r)+\Big[\triangle u^2r^2-\tilde{a}_i^2 \widetilde{j_i'}^2
+4(M-\omega_i )r\tilde{a}_i \widetilde{j_i'}\widetilde{\omega}_i
-(r^2+\tilde{a}_i^2)^2\widetilde{\omega}_i^2\Big]+\triangle\lambda=0,\nonumber\\
&&~~~~\lambda=\Big[\tilde{a}_i^2\sin^2\theta
\widetilde{\omega}_i^2+\tilde{J_i}'^2(\theta)+\frac{ \widetilde{j_i'}^2}
{\sin^2\theta}+u^2\tilde{a}_i^2\cos^2\theta\Big],
\end{eqnarray}
where \begin{eqnarray}
\widetilde{\omega}_i=\omega_i-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2},
~~\widetilde{j_i'}=j_i'-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\tilde{a}_i\sin^2\theta.
\end{eqnarray}
Then the imaginary parts of the action function are
\begin{eqnarray}\label{ww3}
\text{Im}I_{i\pm}&=&\pm \text{Im}\left[\int dr
\frac{1}{\triangle}\sqrt{(r^2+\tilde{a}_i^2)^2\big(\widetilde{\omega}_i
-\frac{\widetilde{j_i'}\tilde{a}_i}{r^2+\tilde{a}_i^2}\big)^2
-\triangle(u^2r^2+\lambda-2\widetilde{j_i'}\tilde{a}_i\widetilde{\omega}_i)}
\;\right]+\text{Im}K_i\nonumber\\
&=&\pi\frac{r'^{2}_++\tilde{a}_i^2}{2(r_+'-M+\omega_i)}
\left(\widetilde{\omega}_{i+}
-\frac{\widetilde{j_i'}_+\tilde{a}_i}{r^{'2}_++\tilde{a}_i^2}\right)+\text{Im}K_i
\nonumber\\
&=&\pi\frac{r'^{2}_++\tilde{a}_i^2}{2(r_+'-M+\omega_i)}\left(\omega_i
-\frac{q_i'(Q-q_i')r'_+}{r'^{2}_++\tilde{a}_i^2}
-\frac{j_i'\tilde{a}_i}{r'^{2}_++\tilde{a}_i^2}\right)+\text{Im}K_i\nonumber\\
&=&\pi\frac{r'^{2}_++\tilde{a}_i^2}{2(r_+'-M+\omega_i)}\left(\omega_i-q_i'V'_+
-j_i'\Omega'_+\right)+\text{Im}K_i,
\end{eqnarray}
where $r'_+(M-\omega_i,\;Q-q_i',\;J-j'_i)=M-\omega_i+\sqrt{(M-\omega_i)^2
-(Q-q_i')^2-\tilde{a}_i^2}$, $V'_+(M-\omega_i,\;Q-q_i',\;J-j'_i)
=(Q-q_i')r'_+/(r'^{2}_++\tilde{a}_i^2)$
is the electromagnetic potential on the horizon,
$\Omega'_+(M-\omega_i,\;Q-q_i',\;J-j'_i)=\tilde{a}_i/(r'^{2}_++\tilde{a}_i^2)$
is the dragging velocity of the horizon.
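These are the standard horizon quantities of black-hole thermodynamics: with the instantaneous temperature $T'=\frac{1}{2\pi}\sqrt{(M-\omega_i)^2-(Q-q_i')^2-\tilde{a}_i^2}/(r_+'^2+\tilde{a}_i^2)$ and the Bekenstein-Hawking entropy $S'=\pi(r_+'^2+\tilde{a}_i^2)$, they satisfy the first law
\begin{eqnarray}
d(M-\omega_i)=T'\,dS'+V'_+\,d(Q-q_i')+\Omega'_+\,d(J-j_i'),\nonumber
\end{eqnarray}
which is what makes the quasi-static bookkeeping of the tunneling process consistent.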
When the energy, charge and angular momentum of the particle gradually
approach $E,\;q,\;j$, the imaginary part of its action is obtained
from
\begin{eqnarray}\label{integral2}\text{Im}I=\int^{(E,q,j)}_{(0,0,0)}
\pi\frac{r'^{2}_++\tilde{a}_i^2}{2(r_+'-M+\omega)}(d\omega-V'_+dq'
-\Omega_+'dj')+\text{Im}K.\end{eqnarray}
Using Eq. (\ref{integral2}), the imaginary
parts of the action of the tunneled particle are
\begin{eqnarray}
\text{Im}I_\pm &=&\pm
\frac{\pi}{2}\big[2E(M-\frac{E}{2})-(M-E)\sqrt{(M-E)^2-(Q-q)^2-\tilde{a}^2}
\nonumber\\
&&+M\sqrt{M^2-Q^2-a^2}-Qq+\frac{1}{2}q^2\;\big]
+\text{Im} K,
\end{eqnarray}where $\tilde{a}=(J-j)/(M-E)$.
Using Eq. (\ref{gamma2}), the emission probability is
\begin{eqnarray}
\Gamma
=e^{-2\pi\big(2E(M-\frac{E}{2})-(M-E)\sqrt{(M-E)^2-(Q-q)^2-\tilde{a}^2}
+M\sqrt{M^2-Q^2-a^2}-Qq+\frac{1}{2}q^2\;\big)}=e^{\Delta
S_{B-H}}.\nonumber
\end{eqnarray}
If we set $j=Ea$ as a special case, then $\tilde{a}=a$, and this result is the
same as in \cite{zz,jiang}.
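For reference, with the Bekenstein-Hawking entropy $S_{B-H}=\pi(r_+^2+a^2)$ and $r_+=M+\sqrt{M^2-Q^2-a^2}$, the entropy change expands as
\begin{eqnarray}
\Delta S_{B-H}&=&\pi\big[(r_+'^2+\tilde{a}^2)-(r_+^2+a^2)\big]\nonumber\\
&=&-2\pi\Big[2E(M-\frac{E}{2})-(M-E)\sqrt{(M-E)^2-(Q-q)^2-\tilde{a}^2}
+M\sqrt{M^2-Q^2-a^2}-Qq+\frac{1}{2}q^2\Big],\nonumber
\end{eqnarray}
which reduces to the Reissner-Nordstr\"{o}m expression when $a=\tilde{a}=0$.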
We now turn to how the particle acquires its energy near the horizon of the
Kerr-Newman black hole. At some moment, the black-hole temperature rises from
$T'(M,\;Q,\;J)$ to $T'(M-\omega_1,\; Q-q_1',\;J-j'_1)$ and its horizon shrinks
from $r_+'(M,\;Q,\;J)$ to $r_+'(M-\omega_1,\; Q-q_1',\;J-j'_1)$, which leads
to an increase of the electric potential and the angular velocity of the black
hole, $V_+'(M,\;Q,\;J)\rightarrow V_+'(M-\omega_1,\; Q-q_1',\;J-j'_1)$,
$\Omega_+'(M,\;Q,\;J)\rightarrow \Omega_+'(M-\omega_1,\; Q-q_1',\;J-j'_1)$,
where $T'(M-\omega_1,\;
Q-q_1',\;J-j'_1)=\frac{1}{2\pi}\sqrt{(M-\omega_1)^2-(Q-q_1')^2-\tilde{a}_1^2}/
(r_+'^2+\tilde{a}_1^2)$. Heat, electric charge and angular momentum then flow
to the particle, which leads to a further increase of the black-hole
temperature, electric potential and angular velocity. The particle energy
comes from three sources: the black hole's internal energy, the electric
potential energy $q'_iV'_+$ and the rotational kinetic energy
$\Omega_+'\Delta j_i'$, so that
\begin{eqnarray}\Delta\omega_i=-T'(M-\omega_i)\Delta S_i'+V'_+\Delta q_i'
+\Omega_+'\Delta j_i'\;.\end{eqnarray}
\section{Hawking radiation for Kerr-Newman black hole from charged Dirac particles tunnelling }
In this section, we extend the Hamilton-Jacobi method to the Dirac field. The
charged Dirac equation is\begin{eqnarray}\label{dirac} &&\left[\gamma^\alpha
e^{{\mu}}_\alpha(\partial_{{\mu}}+\Gamma_{{\mu}}-iq_i'A_{\mu})
+\frac{u}{\hbar}\right] \psi=0,
\end{eqnarray}
with
\begin{eqnarray}
&&\Gamma_{{\mu}}=\frac{1}{8}[\gamma^a,\gamma^b]e^{{\nu}}_ae_{b{{\nu}};{{\mu}}},
\nonumber \end{eqnarray}
where $\gamma^a$ are the Dirac matrices and $e^{{\mu}}_a$ is the
inverse tetrad defined by
$\{e_a^{{\mu}}\gamma^a,~e_b^{{\nu}}\gamma^b\}=2g^{{{\mu}}{{\nu}}}
\times 1$. For the Kerr-Newman metric (\ref{knp}) in Boyer-Lindquist coordinates, the nonzero tetrad elements can be
taken as
\begin{eqnarray}\label{tetrad}
&&e_0^{t_i}=\frac{1}{\sqrt{1-\frac{2(M-\omega_i)r}{\tilde{\rho}^2}}},
~~e^{t_i}_3=-\frac{2(M-\omega_i)r\tilde{a}_i\sin\theta}{\tilde{\rho}
\sqrt{\triangle^2-\tilde{a}_i^2\sin^2\theta\triangle}},\nonumber\\
&&e^r_1=\frac{\sqrt{\triangle}}{\tilde{\rho}},~~~~
e^\theta_2=\frac{1}{\tilde{\rho}},
~~e^{\varphi_i}_3=\frac{\sqrt{\triangle-\tilde{a}_i^2\sin^2\theta}}
{\tilde{\rho}\sin\theta\sqrt{\triangle}},\nonumber
\end{eqnarray}where $\triangle=r^2+\tilde{a}_i^2-2(M-\omega_i)r+(Q-q_i')^2$.
We employ the following ansatz for the Dirac field
\begin{eqnarray}\label{psi}
&&\psi_{i\uparrow}=\bigg(\begin{array}{c}A(t_i,r,\theta,\varphi_i)
\xi_\uparrow\\
B(t_i,r,\theta,\varphi_i)\xi_\uparrow\end{array}\bigg)
\exp\big(\frac{i}{\hbar}I_{i\uparrow}(t_i,r,\theta,\varphi_i)\big)
=\left(\begin{array}{c}A(t_i,r,\theta,\varphi_i)\\ 0\\
B(t_i,r,\theta,\varphi_i)\\0\end{array}\right)
\exp\big(\frac{i}{\hbar}I_{i\uparrow}(t_i,r,\theta,\varphi_i)\big),\nonumber\\
&&\psi_{i\downarrow}=\bigg(\begin{array}{c}C(t_i,r,\theta,\varphi_i)
\xi_\downarrow\\
D(t_i,r,\theta,\varphi_i)\xi_\downarrow\end{array}\bigg)
\exp\big(\frac{i}{\hbar}I_{i\downarrow}(t_i,r,\theta,\varphi_i)\big)
=\left(\begin{array}{c}0\\ C(t_i,r,\theta,\varphi_i)\\
0\\D(t_i,r,\theta,\varphi_i)\end{array}\right)
\exp\big(\frac{i}{\hbar}I_{i\downarrow}(t_i,r,\theta,\varphi_i)\big),\nonumber\\
\end{eqnarray}
where ``$\uparrow$'' and ``$\downarrow$'' represent the spin up and spin down
cases, and $\xi_{\uparrow}$ and $\xi_{\downarrow}$ are the eigenvectors of
$\sigma^3$. Inserting Eqs. (\ref{tetrad}) and (\ref{psi}) into the Dirac
equation (\ref{dirac}) and employing $I_{i\uparrow}=-\omega_i
t_i+W_i(r)+j_i'\varphi_i+\tilde{J_i}(\theta)+K_i$, to the lowest order in
$\hbar$ we obtain\begin{eqnarray}\label{aa}
&& -e^{t_i}_0A\Big(\omega_i-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\Big)
+e^r_1BW_i'(r)
+u A=0,\\\label{bb}&&e^{t_i}_0B
\Big(\omega_i-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\Big)-e^r_1AW_i'(r)
+u B=0, \\
\label{cc}
&&
B\left[-ie^{t_i}_3\Big(\omega_i-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}
\Big)+e^\theta_2\tilde{J_i}'(\theta)+ie^{\varphi_i}_3
\Big(j_i'-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\tilde{a}_i\sin^2\theta\Big)
\right]=0,\nonumber\\
\\\label{dd}&&
-A\left[-ie^{t_i}_3\Big(\omega_i-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}
\Big)+e^\theta_2\tilde{J_i}'(\theta)+ie^{\varphi_i}_3
\Big(j_i'-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\tilde{a}_i
\sin^2\theta\Big)\right]=0,\nonumber\\
\end{eqnarray}
where we consider only the positive-frequency contributions
without loss of generality. From the above four equations, we can
obtain\begin{eqnarray} &&\frac{\triangle}{\tilde{\rho}^2}W_i'^2(r)+u^2
-\frac{\Big(\omega_i-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\Big)^2}{\tilde{\rho}^2\triangle}
\left[\frac{4(M-\omega_i)^2r^2\tilde{a}_i^2\sin^2\theta}
{\triangle-\tilde{a}_i^2\sin^2\theta}+(r^2+\tilde{a}_i^2)^2-\triangle
\tilde{a}_i^2\sin^2\theta\right]=0,\nonumber\\
&&\frac{1}{\tilde{\rho}^2}\left[\tilde{J_i}'^2(\theta)
+\frac{\Big(j_i'-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\tilde{a}_i
\sin^2\theta\Big)^2
(\triangle-\tilde{a}_i^2\sin^2\theta)}{\triangle\sin^2\theta}
+\frac{4(M-\omega_i)^2r^2\tilde{a}_i^2\Big(\omega_i-\frac{q_i'(Q-q_i')r}
{\tilde{\rho}^2}\Big)^2
\sin^2\theta}{\triangle(\triangle-\tilde{a}_i^2\sin^2\theta)}\right.
\nonumber\\
&& \left.
+2\Big(j_i'-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\tilde{a}_i\sin^2\theta\Big)
\Big(\omega_i-\frac{q_i'(Q-q_i')r}{\tilde{\rho}^2}\Big)
\frac{2(M-\omega_i)r\tilde{a}_i}{\triangle}\right]=0.
\end{eqnarray}
Combining the above two equations, we find
\begin{eqnarray}
&&\triangle^2W_i'^2(r)-(r^2+\tilde{a}_i^2)^2\widetilde{\omega}_i^2
-\tilde{a}_i^2 \widetilde{j_i'}^2 +4(M-\omega_i )r\tilde{a}_i
\widetilde{j_i'}\widetilde{\omega}_i\nonumber\\
&&+ \triangle\left[\tilde{J_i}'^2(\theta)+\frac{ \widetilde{j_i'}^2}
{\sin^2\theta}+u^2r^2+u^2\tilde{a}_i^2\cos^2\theta- \tilde{a}_i^2\sin^2\theta
\widetilde{\omega}_i^2\right]=0,\nonumber
\end{eqnarray}which is the same as Eq. (\ref{separation}),
so the Hawking radiation is recovered again. This result can be interpreted
as the statement that black holes radiate particles of
different spin weights at the same
temperature.
\section{summary}
Within the Hamilton-Jacobi framework, we have discussed the radiance of
Reissner-Nordstr\"{o}m and Kerr-Newman black holes with back-reaction, for
neutral scalar, charged scalar and Dirac particles.
The Hamilton-Jacobi method by itself can only model the case in which the
background geometry is considered fixed. To handle the back-reaction of the
radiating particles, we first divide the radiation time into a series of
infinitely small pieces, each of which can be treated as a quasi-static
process. Then, using the Hamilton-Jacobi method, we obtain the instantaneous
action of the particle tunneling across the instantaneous black-hole horizon.
To get the total action, we compute the changes between the $i$th and
$(i-1)$th instantaneous actions and sum them. The final result is the same as
that obtained via the null geodesic method.
The physical picture of this process is clear. Due to vacuum fluctuations
near the horizon, a virtual particle pair is created. The negative-energy
particle is absorbed by the black hole, so the black-hole mass decreases
while its temperature, electric potential and angular velocity increase.
Heat, electric charge and angular momentum can then spontaneously transfer
from the black hole to the positive-energy particle. This process further
increases the black-hole temperature, electric potential and angular
velocity and drives further transfer. When the particle has obtained enough
energy, it can escape to infinity and become visible to distant observers.
We have also studied the change of the total entropy of the system,
including the black hole and the radiating particles, and explained why the
entropy change of the black hole can be obtained by probing its radiating
particles. Our result is that the difference of the total entropy $\Delta S$
is very small and can be ignored.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China
under Grant No. 11247013; Hunan Provincial NSFC Grant No. 11JJ3014; the
Scientific Research Fund of Hunan Provincial Education Department
No. 11B067; the Foundation for the Author of Hunan Provincial Excellent
Doctoral Dissertation No. YB2012B034; and the Aid Program for Science and
Technology Innovative Research Teams in Higher Educational Institutions of
Hunan Province.
\section{Introduction}
This paper deals with the initial-value problem for a coupled system of generalized Korteweg--de Vries (gKdV) equations
\begin{equation}\label{p01}
\left\{\begin{array}{l}
u_{t}+\partial_{x}^{3} u+\partial_{x}\left(u^{p} v^{p+1}\right)=0 \\
v_{t}+\partial_{x}^{3} v+\partial_{x}\left(u^{p+1} v^{p}\right)=0, \quad x, t \in \mathbb{R}, p \in \mathbb{Z}^{+} \\
u(x, 0)=u_{0}(x), \quad v(x, 0)=v_{0}(x),
\end{array}\right.
\end{equation}
where the unknowns $ u = u(x, t),~ v = v(x, t)$ and the initial data $( u_{0}(x), v_{0}(x) ) $ are real-valued.\\
This system is a special case of a broad class of nonlinear evolution equations studied by M. Ablowitz \cite{17}; it arises in physical problems describing the strong interaction of two-dimensional long internal gravity waves.\\
For $ p=1 $, the system reduces to a coupled system of modified KdV (mKdV) equations
\begin{equation}\label{p08}
\left\{\begin{array}{l}
u_{t}+\partial_{x}^{3} u+\partial_{x}\left(u v^{2}\right)=0 \\
v_{t}+\partial_{x}^{3} v+\partial_{x}\left(u^{2} v\right)=0, \quad x, t \in \mathbb{R} \\
u(x, 0)=u_{0}(x), \quad v(x, 0)=v_{0}(x).
\end{array}\right.
\end{equation}
Here, local well-posedness was proved in $H^s$, $s \geq \frac{1}{4}$, and global well-posedness for $ s \geq 1 $. In addition, M. Panthee improved this result by extending the solution to any time interval $[0, T] $ for $ s > \frac{4}{9}$.\\
The authors in \cite{20} studied local well-posedness in $ (H^{s} \times H^{s} ) $ with $ s > -\frac{1}{2} $ for a system of modified Korteweg--de Vries-type equations
\begin{equation}
\left\{\begin{array}{l}
u_{t}+\partial_{x}^{3} u+\partial_{x}\left(u^{2} v^{3}\right)=0, \\
v_{t}+ \alpha \partial_{x}^{3} v+\partial_{x}\left(u^{3} v^{2}\right)=0, \quad x, t \in \mathbb{R} \\
u(x, 0)=u_{0}(x), \quad v(x, 0)=v_{0}(x),
\end{array}\right.
\end{equation}
where $ 0 < \alpha < 1 $ and $ (u_{0}, v_{0}) $ is given in the low-regularity Sobolev spaces $ (H^{s} \times H^{s} )$; if $ \alpha = 1 $, the authors obtained local well-posedness for $ s \geqslant \frac{1}{4}$.\\
In \cite{7}, the problem $ (\ref{p01}) $ was studied and local and global well-posedness results with $ (u_{0}, v_{0}) \in H^{s} \times H^{s} $, $ s\geqslant 1 $ and $ p \geqslant 1 $ were shown. Global well-posedness was obtained by using the following conserved quantities satisfied by the flow of $ (\ref{p01}) $:
$$ \int_{\mathbb{R} }u\, dx, \quad\int_{\mathbb{R} } v\, dx, \quad \frac{1}{2} \int_{\mathbb{R}}(u^{2}+v^{2})\, dx
\quad \textit{and}
\quad \frac{1}{2} \int_{\mathbb{R} } \left( u_{x}^{2}+v_{x}^{2} -\frac{2}{p+1} u^{p+1} v^{p+1}\right) dx.$$
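In particular, the $L^{2}$-type conservation law can be checked directly for smooth solutions decaying at infinity: integrating by parts,
$$\frac{d}{dt}\,\frac{1}{2}\int_{\mathbb{R}}(u^{2}+v^{2})\,dx
=\int_{\mathbb{R}}\big(u_{x}u^{p}v^{p+1}+v_{x}u^{p+1}v^{p}\big)\,dx
=\frac{1}{p+1}\int_{\mathbb{R}}\partial_{x}\big(u^{p+1}v^{p+1}\big)\,dx=0,$$
since $\int_{\mathbb{R}}u\,\partial_{x}^{3}u\,dx=\int_{\mathbb{R}}v\,\partial_{x}^{3}v\,dx=0$.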
In addition, the authors showed the existence and nonlinear stability of solitary wave solutions. The study of stability for solitary wave solutions follows from the abstract results of Grillakis; for more details, see \cite{9,4,14,18}.\\
For $ p =2 $, the system turns into the coupled system of generalized Korteweg--de Vries (gKdV) equations
\begin{equation}
\left\{\begin{array}{l}\label{p07}
u_{t}+\partial_{x}^{3} u+\partial_{x}\left(u^{2} v^{3}\right)=0, \\
v_{t}+\partial_{x}^{3} v+\partial_{x}\left(u^{3} v^{2}\right)=0, \quad x, t \in \mathbb{R} \\
u(x, 0)=u_{0}(x), \quad v(x, 0)=v_{0}(x).
\end{array}\right.
\end{equation}
Panthee and Scialom \cite{5} investigated some well-posedness issues for Eq. $ (\ref{p07}) $
in $H^{s} \times H^{s}$ and proved local and global well-posedness for $ s\geqslant 0$.\\
For related problems in analytic Gevrey spaces, we review the $ 2D $ results of M. Shan and L. Zhang \cite{21}, where the authors proved that the following Cauchy problem associated with the $2D $ generalized Zakharov-Kuznetsov equation
\begin{equation}
\left\{\begin{array}{l}\label{p098}
u_{t}+ (\partial_{x}^{3} +\partial_{y}^{3} ) u + (\partial_{x} + \partial_{y} ) u^{p+1} =0, \\
u(0, x,y)=u_{0}(x, y),
\end{array}\right.
\end{equation}
has analytic solutions in a strip of the complex plane, and they gave algebraic lower bounds on the width of this strip. \\
Bona and Gruji\'c \cite{22} showed the well-posedness of a KdV-type Boussinesq
system
\begin{equation}
\left\{\begin{array}{l}\label{p09}
u_{t}+ v_{x} + u u_{x} + v_{xxx} = 0 \\
v_{t}+ u_{x} + ( u v)_{x} + u_{xxx} = 0.
\end{array}\right.
\end{equation}
Another approach in this direction appears in the series of papers by A. Boukarou et {\em al.} \cite{BoukarouA2,Boukarou2,Boukarou3,Boukarou4,Boukarou6,Boukarouarxiv,Boukarou5}.\\
Motivated by the previous results, we consider our main problem with initial data analytic on a band in the complex plane and obtain a solution for all time. We also show that the width of this band decreases at most algebraically in time. \\
This paper is a continuation of our previous results and is structured as follows. In Section $ 1$, we give a short historical review, motivate the paper, and state our main results, which we prove later (local and global well-posedness of Eq. $ \eqref{p01}$). In Section $ 2$, we present some definitions and the necessary function spaces, such as the analytic Gevrey spaces $ \mathcal{G}_{\rho, s} $ and the analytic Bourgain spaces $X_{\rho, s, b} $. In Section $3$, we prove the linear and bilinear estimates needed for the main results. In Section $4$, we prove local and global well-posedness and then obtain the lower bound. \\
\begin{theorem}\label{the1.2}
Let $ s > \frac{3}{2} $, $ p \geq 1 $ and $ \rho > 0 $. Then for initial data $(u_0,v_0)\in
\mathcal{G}_{\rho, s}\times \mathcal{G}_{\rho, s}$ there exists a positive time $ T $ such that the initial-value problem (\ref{p01}) is well-posed in the space
$$C \left( \left[ 0, T \right];\mathcal{G}_{\rho, s} \right)\times C \left( \left[ 0, T \right];\mathcal{G}_{\rho, s} \right). $$
\end{theorem}
\begin{theorem}\label{th03}
Let $ \rho_{0} > 0 $ and $ s > \frac{3}{2} $, let $ T \geq t_{0} $, and suppose that the solution $ (u,v) $
given by Theorem \ref{the1.2} extends globally in time. Then we have
$$ (u,v)\in C ([0, 2T ], \mathcal{G}_{\rho(T ) /2, s} )\times C ([0, 2T ], \mathcal{G}_{\rho(T ) /2, s} ),$$
where $ \rho(T ) $ is given by
$$ \rho (T) = \min \left\lbrace \rho_{1}, K T ^{-2p^{2} -6p-1} \right\rbrace $$
for some constant $ K > 0 $.
\end{theorem}
\section{Preliminary estimates and Function spaces}
We denote by $\widehat{ u}$ the Fourier transform of $ u $, defined as
$$
\widehat{ u}(\zeta) = \frac{1}{\sqrt{2\pi }}\int _{-\infty}^{+ \infty} u(x) e^{-ix \zeta} dx. $$
For a function $ u(x, t) $ of two variables, we have
$$
\widehat{u}^{x} (\zeta, t) = \frac{1}{\sqrt{2\pi }}\int _{-\infty}^{+ \infty} u(x, t) e^{-ix \zeta} dx,
$$
and
$$
\widehat{u} (\zeta,\eta ) = \frac{1}{2\pi }\int _{-\infty}^{+ \infty} \int _{-\infty}^{+ \infty} u(x, t) e^{-ix \zeta} e^{-it \eta} dx d\eta.
$$
We note that the operators $ A, \Lambda $ and $ F_{\kappa }$ are defined as
\[
\widehat{Au} (\zeta,\eta ) = \left( 1+ | \zeta | \right)\widehat{u}(\zeta,\eta );
\]
\[
\widehat{\Lambda u} (\zeta,\eta ) = \left( 1+ | \eta | \right)\widehat{u}(\zeta,\eta );
\]
\[
\widehat{F_{\kappa} }(\zeta, \eta ) = \frac{f(\zeta, \eta )}{\left(1+| \eta - \zeta^{3} |\right)^{\kappa} }.
\]
The mixed $ L^{p}- L^{q} $-norm is defined by
$$
\| u \|_{L^{p} L^{q} } = \left( \int _{-\infty}^{+ \infty} \left( \int _{-\infty}^{+ \infty} |u(x, t)|^{q } dt\right)^{\frac{p}{q} } dx \right)^{\frac{1}{p} }.
$$
The analytic Gevrey class $ \mathcal{G}_{\rho,s}$ was introduced by Foias and Temam \cite{T1}; its norm is given by
\[
\left\|u_{0}\right\|_{\mathcal{G}_{\rho, s}}=\|\mathrm{e}^{ \rho(1+|\zeta|)}(1+|\zeta|)^{s} \widehat{u_{0}}(\zeta) \|_{L^{2}_{\zeta}}.
\]
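Since the exponential weight dominates any polynomial weight, for $0<\rho'<\rho$ one has the embedding $\mathcal{G}_{\rho, s}\hookrightarrow\mathcal{G}_{\rho', s'}$ for every $s'\in\mathbb{R}$: indeed,
\[
(1+|\zeta|)^{s'-s}\,\mathrm{e}^{-(\rho-\rho')(1+|\zeta|)}\leq C,
\qquad\text{so}\qquad
\|u_{0}\|_{\mathcal{G}_{\rho', s'}}\leq C\,\|u_{0}\|_{\mathcal{G}_{\rho, s}}.
\]
In particular, for $\rho=0$ the space $\mathcal{G}_{0,s}$ reduces to the usual Sobolev space $H^{s}$ (with an equivalent norm).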
For $ s \in \mathbb{R} $, $b \in[-1,1]$ and $ \rho > 0 $, we denote by $ X_{\rho,s,b}$ the space equipped with the norm $\|\cdot\|_{\rho,s, b}$ given by
\[
\| u \|_{X_{\rho,s,b}} =
\bigg\| e^{\rho ( 1+ |\zeta |)} (1+| \zeta | )^{s} (1+\left|\eta-\zeta^{3}\right|)^{b} \hat{u}( \zeta,\eta ) \bigg\|_{L^{2}_{\zeta, \eta}}.
\]
For $\rho=0$, $X_{\rho, s, b}$ coincides with the space $X_{s, b}$ introduced by Bourgain \cite{L1} and by Kenig, Ponce and Vega \cite{A1}. The norm of $X_{s, b}$ is denoted by $\|\cdot\|_{s, b}$ and given by
\[
\| u \|_{X_{s,b}} =
\bigg\| (1+| \zeta | )^{s} (1+\left|\eta-\zeta^{3}\right|)^{b} \hat{u}( \zeta,\eta ) \bigg\|_{L^{2}_{\zeta, \eta}}.
\]
\section{Linear and Multilinear Estimates}
In this section, we shall deduce several estimates to be used in the proof of Theorem \ref{the1.2}.
\begin{lemma}\label{lem3}
Let $ 0 < \sigma < \rho $ and $ n \in \mathbb{N} $. Then, we have
\begin{align*}
\sup_{\substack{ x+iy \in S_{\rho- \sigma } }} \vert \partial^{n}_{x} u(x+iy) \vert \leq C \Vert u \Vert_{\mathcal{G}_{\rho}},
\end{align*}
where $ C $ is a constant depending on $ \sigma $ and $ n $.
\end{lemma}
\begin{lemma}\label{lem2}
Let $ b > \frac{1}{2} $,~~$ s \in \mathbb{R}$
~~and $ \rho \geq 0 $, then for all $ T > 0 $,~
we have
\begin{equation*}
X_{\rho, s, b } \hookrightarrow C\left([0,T], \mathcal{G}_{\rho, s} \right).
\end{equation*}
\end{lemma}
\begin{proof}
We define the operator $ \Theta $ by\\
$$ \widehat{\Theta u}^{x}(\zeta, t) = e^{\rho (1+| \zeta |)}\widehat{u}^{x}(\zeta, t), $$
which satisfies
\begin{equation*}
\| u \|_{X_{\rho, s, b}} = \| \Theta u \|_{X_{ s, b}},
\end{equation*}
and
\begin{equation*}
\| u \|_{\mathcal{G}_{\rho, s }} = \| \Theta u \|_{H^{ s}}.
\end{equation*}
We observe that $ \Theta u $ belongs to $ C([0,T ], H^{s})$ and for
some $ C > 0 $ we have
\begin{equation*}
\| \Theta u \|_{C\left([0,T], H^{s} \right)} \leq C ~\| \Theta u \|_{X_{ s, b}}.
\end{equation*}
Thus, it follows that $u \in C\left([0,T], \mathcal{G}_{\rho, s} \right) $ and
\begin{equation*}
\| u \|_{C\left([0,T], \mathcal{G}_{\rho, s}\right)} \leq C ~\| u \|_{X_{ \rho, s, b}}.
\end{equation*}
\end{proof}
By Duhamel's formula, we may write the solution of (\ref{p01}) as
\begin{equation*}\left\{
\begin{array}{l}
u(x,t) = W(t)u_{0}(x ) - \displaystyle\int_{0}^{t} W(t-t^{\prime} )w_{1}(x, t^{\prime})dt^{\prime}, \\ \\
v(x,t)= W(t)v_{0} (x ) - \displaystyle\int_{0}^{t} W(t-t^{\prime})w_{2}(x, t^{\prime})dt^{\prime},
\end{array}\right.
\end{equation*} \\
where $W(t)= e^{-t\partial_{x}^{3}}$, $w_{1} = \partial_{x}\left(u^{p} v^{p+1}\right) $ and
$w_{2} = \partial_{x}\left(u^{p+1} v^{p}\right)$.\\
Next, we localize in the time variable by means of a cut-off function
$ \psi \in C_{0}^{\infty} (-2,2)$ with
$ 0\leq \psi(t) \leq 1 $ and $ \psi(t) =1 $
on $[-1,1]$. For
$0 < T < 1$ we define $\psi_{T}(t)= \psi(\frac{t}{T}) $.
We consider the operators $ \varXi $ and $ \Gamma $ given by
\begin{equation}\label{p5}
\left\{\begin{array}{l}
\varXi (t) = \psi (t) W(t)u_{0}- \psi_{T}(t) \displaystyle \int_{0}^{t} W(t-t^{\prime})w_{1}(t^{\prime}) dt^{\prime} \\ \\
\Gamma(t)= \psi (t)W(t)v_{0}- \psi_{T}(t)\displaystyle\int_{0}^{t} W(t-t^{\prime})w_{2}(t^{\prime}) dt^{\prime}.
\end{array}\right.
\end{equation}
We start with the following useful Lemma.
\begin{lemma}\label{lem1}\cite{A1,13}
Let $ \rho \geq 0 $, $b > \frac{1}{2}$, $ b-1 < b' < 0 $, and $ 0 < T \leq 1 $. Then there exists a constant $ c $ such that the following estimates hold:
\begin{equation}\label{eq1}
\| \psi(t) W(t)u_{0}\|_{\rho, s, b} \leq c T^{\frac{1}{2}} \| u_{0} \|_{\mathcal{G}_{\rho, s }},
\quad \quad
\| \psi(t) W(t)v_{0}\|_{\rho, s, b} \leq c T^{\frac{1}{2}} \| v_{0} \|_{\mathcal{G}_{\rho, s }},
\end{equation}
and
\begin{equation}\label{eq3}
\| \psi_{T}(t) u\|_{\rho, s, b } \leq c \| u\|_{\rho, s, b},
\quad \quad
\| \psi_{T}(t) v\|_{\rho, s, b} \leq c \| v\|_{\rho, s, b},
\end{equation}
and
\begin{equation}\label{eq4}
\| \psi_{T}(t) \int_{0}^{t} W(t-s)w(s) ds\|_{\rho, s, b} \leq c T\| w\|_{\rho, s, b'}.
\end{equation}
\end{lemma}
\begin{lemma}\label{2.3}
(\cite{13,4})
Let $ s $ and $ \kappa $ be given. There is a constant $ C $ depending on $ s $ and $\kappa $ such that
\begin{equation}\label{102}
\textit{If} \quad \kappa > \frac{1}{4}, \quad \textit{then} \quad
\| A^{\frac{1}{2}} F_{\kappa} \|_{L_{x}^{4} L_{t}^{2} } \leq C \| f \|_{L^{2}_{\zeta} L^{2}_{\eta} },
\end{equation}
\begin{equation}\label{103}
\textit{If} \quad \kappa > \frac{1}{4},\quad \textit{then} \quad
\| A F_{\kappa} \|_{L_{x}^{\infty} L_{t}^{2} } \leq C \| f \|_{L^{2}_{\zeta} L^{2}_{\eta} },
\end{equation}
\begin{equation}\label{104}
\textit{If} \quad \kappa > \frac{1}{2} \quad \textit{and}\quad s >3\kappa ,\quad\textit{then} \quad
\| A^{-s} F_{\kappa} \|_{L_{x}^{2} L_{t}^{\infty} } \leq C \| f \|_{L^{2}_{\zeta} L^{2}_{\eta} },
\end{equation}
\begin{equation}\label{105}
\textit{If} \quad \kappa > \frac{1}{2} \quad \textit{and}\quad s >\frac{1}{4},\quad\textit{then} \quad
\| A^{-s} F_{\kappa} \|_{L_{x}^{4} L_{t}^{\infty} } \leq C \| f \|_{L^{2}_{\zeta} L^{2}_{\eta} },
\end{equation}
\begin{equation}\label{106}
\textit{If} \quad \kappa > \frac{1}{2} \quad \textit{and} \quad s > \frac{1}{2}, \quad\textit{then} \quad
\| A^{-s} F_{\kappa} \|_{L_{x}^{\infty} L_{t}^{\infty} } \leq C \| f \|_{L^{2}_{\zeta} L^{2}_{\eta} }.
\end{equation}
\end{lemma}
\begin{lemma}\label{1.3}
Let $ b >\frac{1}{2} $, $ b ' < - \frac{1}{4} $, and $ s \geq 3b $. Let $ p \in \mathbb{N}$ and suppose $ u_{1},...,u_{p+1}, v_{1},..., v_{p+1} \in X_{\rho, s, b}$. Then there exists a constants $ c $ such that
\begin{equation}\label{eq25}
\| \partial_{x}\prod _{i=1}^{p} u_{i} \prod _{j=1}^{p+1} v_{j} \|_{\rho, s, b'}\leq
C \prod _{i=1}^{p}\| u_{i} \|_{\rho, s, b} \prod _{j=1}^{p+1}\| v_{j} \|_{\rho, s, b},
\end{equation}
\begin{equation}\label{eq28}
\| \partial_{x}\prod _{i=1}^{p+1} u_{i} \prod _{j=1}^{p} v_{j} \|_{\rho, s, b'}\leq
C \prod _{i=1}^{p+1}\| u_{i} \|_{\rho, s, b} \prod _{j=1}^{p}\| v_{j} \|_{\rho, s, b}.
\end{equation}
\end{lemma}
\begin{proof}
First of all, for $ i= 1,2,...,p+1$ and $j=1,2,...,p+1 $, we define
\begin{align*}
f_{i} (\zeta, \eta ) = (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b} e^{\rho (1+ | \zeta |)} | \widehat{u_{i} }(\zeta,\eta )| \\
g_{j} (\zeta, \eta ) = (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b} e^{\rho (1+ | \zeta |) }| \widehat{v_{j} }(\zeta,\eta )|.
\end{align*}
The proof is first given for the case $ p= 1$, after which the proof for general $ p $ will be more transparent; that is, we prove
\begin{equation*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'}\leq
C \| u_{1} \|_{\rho, s, b} \|v_{1} \|_{\rho, s, b} \|v_{2} \|_{\rho, s, b}
\end{equation*}
\begin{equation*}
\| \partial_{x} u_{1} u_{2} v_{1} \|_{\rho, s, b'}\leq
C \| u_{1} \|_{\rho, s, b} \|u_{2} \|_{\rho, s, b} \|v_{1} \|_{\rho, s, b}.
\end{equation*}
We have
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'}&= \left\| (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b'} e^{\rho (1+| \zeta |)} | \widehat{\partial_{x} u_{1} v_{1} v_{2} }(\zeta,\eta )| \right\|_{L^{2}_{\zeta} L^{2}_{\eta} }
\\ \\& = \left\| (1+ | \zeta |)^{s} e^{\rho (1+| \zeta |)}(1+ |\eta - \zeta^{3} |)^{b'} | \zeta| ~~ | \widehat{u_{1} v_{1} v_{2} }(\zeta,\eta )| \right\|_{{L^{2}_{\zeta} L^{2}_{\eta} }}\\ \\&
= \left\| (1+ | \zeta |)^{s}e^{\rho (1+| \zeta |)} (1+ |\eta - \zeta^{3} |)^{b'} | \zeta|~~ | \widehat{u_{1}}\ast \widehat{v_{1}} \ast \widehat{v_{2}}(\zeta,\eta )| \right\|_{L^{2}_{\zeta} L^{2}_{\eta} } \\ \\&
= \| (1+ | \zeta |)^{s}e^{\rho (1+| \zeta |)} (1+ |\eta - \zeta^{3} |)^{b'} | \zeta | \int_{\mathbb{R}^{4}} \widehat{u_{1}}(\zeta_{1},\eta_{1} ) \widehat{v_{1}}(\zeta-\zeta_{2},\eta - \eta_{2} )\\ \\& \quad \times
\widehat{v_{2}}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1} )| d\zeta_{1} d\eta_{1} d\zeta_{2} d\eta_{2} \|_{{L^{2}_{\zeta} L^{2}_{\eta} }}\\ \\&
= \| (1+ | \zeta |)^{s} e^{\rho (1+| \zeta |)}(1+ |\eta - \zeta^{3} |)^{b'} | \zeta | \int_{\mathbb{R}^{4}} ~~
\left (\frac{(1+ | \zeta_{1} |)^{-s} e^{-\rho (1+ | \zeta_{1} |)} f_{1}(\zeta_{1},\eta_{1} )}{(1+ | \eta_{1} - \zeta_{1}^{3} |)^{b}}\right) \\ \\& \quad \times
\left (\frac{(1+ | \zeta-\zeta_{2} |)^{-s} e^{-\rho (1+ |\zeta-\zeta_{2} |)} g_{1}(\zeta-\zeta_{2},\eta - \eta_{2})}{(1+ | ( \eta - \eta_{2} ) - (\zeta-\zeta_{2})^{3} |)^{b}} \right)\\ \\&\quad \times
\left (\frac{(1+ | \zeta_{2} -\zeta_{1} |)^{-s} e^{-\rho (1+ | \zeta_{2} -\zeta_{1}|)} g_{2}(\zeta_{2} -\zeta_{1},\eta_{2}-\eta_{1} )}{(1+ |\eta_{2}-\eta_{1} -(\zeta_{2} -\zeta_{1} )^{3} |)^{b}}\right) d \mu \|_{{L^{2}_{\zeta} L^{2}_{\eta} }},
\end{align*}
where $ d \mu = d\zeta_{1} d\eta_{1} d\zeta_{2} d\eta_{2} d\zeta d\eta$. \\
We prove this estimate by duality: let $ m(\zeta,\eta) $ be a positive function in $ L^{2}( \mathbb{R}^{2})$ with norm $ \| m\|_{ L^{2}( \mathbb{R}^{2})}=1 $; then
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leqslant \int_{\mathbb{R}^{6}}\frac{e^{\rho (1+| \zeta |)}(1+ | \zeta |)^{s} | \zeta | m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~~~
\frac{e^{-\rho (1+ | \zeta_{1} |)}(1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}} \\ \\&
\frac{e^{-\rho (1+ | \zeta-\zeta_{2} |)}(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}~~
\\ \\&
\frac{e^{-\rho (1+ | \zeta_{2}-\zeta_{1} |)}(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d \mu.
\end{align*}
Since $\zeta=\zeta_{1}+(\zeta-\zeta_{2})+(\zeta_{2}-\zeta_{1})$, the triangle inequality gives
\begin{equation*}
| \zeta| \leq | \zeta_{1}| + | \zeta- \zeta_{2}| + | \zeta_{2}- \zeta_{1}|, \quad
\text{and hence} \quad
e^{\rho (1+| \zeta |)} \leq e^{\rho (1+ | \zeta_{1} |)} \times e^{\rho (1+ | \zeta-\zeta_{2} |)} \times e^{\rho (1+ | \zeta_{2}-\zeta_{1} |)}.
\end{equation*}
Then
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leqslant \int_{\mathbb{R}^{6}}\frac{(1+ | \zeta |)^{s} | \zeta | m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}
\frac{(1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}} \\ \\&
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d\mu.
\end{align*}
Now, split the Fourier space into six regions as follows:
\begin{enumerate}\label{ta14}
\item $|\zeta-\zeta_{2} | \leq | \zeta_{2}-\zeta_{1} |\leq | \zeta_{1} | $
\item $| \zeta-\zeta_{2} | \leq| \zeta_{1} | \leq | \zeta_{2}-\zeta_{1} | $
\item $ | \zeta_{1} | \leq | \zeta_{2}-\zeta_{1} |\leq| \zeta-\zeta_{2} | $
\item $ | \zeta_{1} | \leq | \zeta-\zeta_{2} | \leq | \zeta_{2}-\zeta_{1} | $
\item $ | \zeta_{2}-\zeta_{1} | \leq | \zeta-\zeta_{2} | \leq | \zeta_{1} | $
\item $ | \zeta_{2}-\zeta_{1} | \leq | \zeta_{1} | \leq | \zeta-\zeta_{2} |. $
\end{enumerate}
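These six regions cover all of Fourier space: the convolution constraint fixes three interacting frequencies whose sum is the output frequency, and the cases above are exactly the $3! = 6$ orderings of their magnitudes:

```latex
% The three convolution frequencies reconstruct \zeta:
\begin{equation*}
\zeta = \zeta_{1} + (\zeta - \zeta_{2}) + (\zeta_{2} - \zeta_{1}),
\end{equation*}
% so every point of \mathbb{R}^{6} lies in at least one of the six regions
% determined by ordering |\zeta_{1}|, |\zeta - \zeta_{2}|, |\zeta_{2} - \zeta_{1}|.
```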
We begin with case $ (1) $:
$$
|\zeta-\zeta_{2} | \leq | \zeta_{2}-\zeta_{1} |\leq | \zeta_{1} |.
$$
Then
\begin{equation}\label{eq55}
(1+ |\zeta-\zeta_{2} |)^{-s} \geq (1+| \zeta_{2}-\zeta_{1} | )^{-s} \geq (1+ | \zeta_{1} | )^{-s},
\end{equation}
and we distinguish the cases $ | \zeta| \geq 1 $
and $ | \zeta| \leq 1 $. \\
First, suppose that $ | \zeta| \geq 1 $; then
$$ (1+ | \zeta| )^{s} \leq (| \zeta|+ | \zeta| )^{s}= 2^{s}(| \zeta|)^{s} = C (| \zeta|)^{s}. $$
By the last inequality and $ (\ref{eq55})$, we obtain
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leqslant\int_{\mathbb{R}^{6}}\frac{(1+ | \zeta |)^{s} | \zeta | m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~
\frac{(1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}~
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}\\& \quad \times
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}}d\mu
\\&
\leq C
\int_{\mathbb{R}^{6}}\frac{( | \zeta |)^{s} | \zeta | m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~
\frac{(1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}~
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}~
\\& \quad \times
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d\mu \\&
\leq C
\int_{\mathbb{R}^{6}}\frac{( | \zeta |)^{s+1} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~~
\frac{(1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}~
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}~
\\& \quad \times
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d\mu,
\end{align*}
then, by the pointwise bounds established below,
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leq C
\int_{\mathbb{R}^{6}}\frac{( | \zeta |)^{\frac{1}{2}} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}
\frac{(1+ | \zeta_{1} |)^{\frac{1}{2}} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}
\\&
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d\mu.
\end{align*}
By
$$
|\zeta-\zeta_{2} | \leq | \zeta_{2}-\zeta_{1} |\leq | \zeta_{1} |,
$$
and
\begin{align*}
| \zeta |^{s+1}(1+| \zeta_{1} |)^{-s}& = | \zeta |^{s+1} | \zeta_{1} |^{-s} | \zeta_{1} |^{s} (1+| \zeta_{1} |)^{-s}
\leq | \zeta |^{s+1} | \zeta_{1} |^{-s} \frac{|\zeta_{1} |^{s}}{(1+| \zeta_{1} |)^{s} } \leq | \zeta |^{s+1} | \zeta_{1} |^{-s},
\end{align*}
and
\begin{align*}
| \zeta |^{s+1} ~ | \zeta_{1} |^{-s}& = | \zeta |^{\frac{1}{2}} | \zeta_{1} |^{\frac{1}{2}} | \zeta |^{s+ \frac{1}{2} }| \zeta_{1} |^{-s- \frac{1}{2}} \\ &\leq c | \zeta |^{\frac{1}{2}} | \zeta_{1} |^{\frac{1}{2}} \left( | \zeta-\zeta_{2}|^{s+ \frac{1}{2} } + | \zeta_{2}-\zeta_{1}|^{s+ \frac{1}{2} } + | \zeta_{1}|^{s+ \frac{1}{2}} \right)| \zeta_{1} |^{-s- \frac{1}{2}} \\
&\leq c | \zeta |^{\frac{1}{2}} | \zeta_{1} |^{\frac{1}{2}} \left( |\zeta_{1}|^{s+ \frac{1}{2} } + | \zeta_{1}|^{s+ \frac{1}{2} } + | \zeta_{1}|^{s+ \frac{1}{2}} \right)| \zeta_{1} |^{-s- \frac{1}{2}} \\
& \leq c | \zeta |^{\frac{1}{2}} | \zeta_{1} |^{\frac{1}{2}} \left( 3 |\zeta_{1} |^{s+\frac{1}{2}}~~| \zeta_{1} |^{-s- \frac{1}{2}} \right) \\ &
\leq C | \zeta |^{\frac{1}{2}} | \zeta_{1} |^{\frac{1}{2}}.
\end{align*}
We set
\begin{align*}
\widehat{A^{\frac{1}{2}} M_{-b'}}(\zeta,\eta)& = \frac{( | \zeta |)^{\frac{1}{2}} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}\\
\widehat{A^{\frac{1}{2}} F_{b}}(\zeta_{1},\eta_{1}) &= \frac{(1+ | \zeta_{1} |)^{\frac{1}{2}} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}\\
\widehat{A^{-s} G^{1}_{b}}(\zeta-\zeta_{2},\eta-\eta_{2}) &=\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}\\
\widehat{A^{-s} G^{2}_{b}}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})&= \frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}},
\end{align*}
and
\begin{align*}
&\int_{\mathbb{R}^{6}}\frac{( | \zeta |)^{\frac{1}{2}} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~~~~
\frac{(1+ | \zeta_{1} |)^{\frac{1}{2}} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}~
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d\mu\\&
= \int_{\mathbb{R}^{6}} \widehat{A^{\frac{1}{2}} M_{-b'}}(\zeta,\eta) \widehat{A^{\frac{1}{2}} F_{b}}(\zeta_{1},\eta_{1})
\widehat{A^{-s} G^{1}_{b}}(\zeta-\zeta_{2},\eta-\eta_{2})
\widehat{A^{-s} G^{2}_{b}}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})
d\mu\\&
= \int_{\mathbb{R}^{2}} \left( \widehat{A^{\frac{1}{2}} M_{-b'}}(\zeta,\eta) \right) \left( \int_{\mathbb{R}^{4}} \widehat{A^{\frac{1}{2}} F_{b}}(\zeta_{1},\eta_{1})
\widehat{A^{-s} G^{1}_{b}}(\zeta-\zeta_{2},\eta-\eta_{2})
\widehat{A^{-s} G^{2}_{b}}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1}) d\zeta_{1} d\eta_{1} d\zeta_{2} d\eta_{2} \right)
d\zeta d\eta
\\&= \int_{\mathbb{R}^{2}} \left( \widehat{A^{\frac{1}{2}} M_{-b'}}(\zeta,\eta) \right) \left( \left( \widehat{A^{\frac{1}{2}} F_{b}}\ast
\widehat{A^{-s} G^{1}_{b}}\ast
\widehat{A^{-s} G^{2}_{b}} \right) (\zeta,\eta) \right)
d\zeta d\eta
\\& = \int_{\mathbb{R}^{2}} \left( \widehat{A^{\frac{1}{2}} M_{-b'}}(\zeta,\eta) \right) \left( \left( \widehat{A^{\frac{1}{2}} F_{b}. A^{-s} G^{1}_{b}. A^{-s} G^{2}_{b}}\right) (\zeta,\eta) \right)
d\zeta d\eta
\\& = \int_{\mathbb{R}^{2}} A^{\frac{1}{2}} M_{-b'}(x,t) \left( A^{\frac{1}{2}} F_{b}. A^{-s} G^{1}_{b}. A^{-s} G^{2}_{b} \right) (x,t) dx dt.
\end{align*}
We set
\begin{align*}
h_{1}(x,t) &= A^{\frac{1}{2}} M_{-b'}(x,t) \\
h_{2}(x,t) &= A^{\frac{1}{2}} F_{b}(x,t)\\
h_{3}(x,t) &= A^{-s} G^{1}_{b} (x,t)\\
h_{4}(x,t) &=A^{-s} G^{2}_{b}(x,t),
\end{align*}
then
\begin{align*}
\bigg|\int_{\mathbb{R}^{2}} A^{\frac{1}{2}} M_{-b'}(x,t) \left( A^{\frac{1}{2}} F_{b}. A^{-s} G^{1}_{b}. A^{-s} G^{2}_{b} \right) (x,t) dx dt\bigg| &= \bigg|\int_{\mathbb{R}^{2}} h_{1}(x,t)\, h_{2}(x,t)\, h_{3}(x,t)\, h_{4}(x,t) dx dt\bigg|\\&
\leq \bigg| \int_{\mathbb{R}^{2}}\left( h_{1}(x,t)\, h_{2}(x,t) \right) \left( \sup_{\substack{ t \in [0, T]}} h_{3}(x,t) \sup_{\substack{ t \in [0, T]
}} h_{4}(x,t) \right) dx dt\bigg|.
\end{align*}
By using H\"older's inequality in the variables $ x $ and $ t $,
\begin{align*}
& \bigg|\int_{\mathbb{R}^{2}}\left( h_{1}(x,t). h_{2}(x,t) \right) \left( \sup_{\substack{ t \in [0, T]}} h_{3}(x,t) \sup_{\substack{ t \in [0, T]
}} h_{4}(x,t) \right) dx dt\bigg| \\ &
\leq
\| h_{1}(x,t) \|_{L_{x}^{4} L_{t}^{2} } \| h_{2}(x,t) \|_{L_{x}^{4} L_{t}^{2} } \| h_{3}(x,t) \|_{L_{x}^{2} L_{t}^{\infty}} \| h_{4}(x,t) \|_{L_{x}^{\infty} L_{t}^{\infty} }
\\&
= \| A^{\frac{1}{2}} M_{-b'} \|_{L_{x}^{4} L_{t}^{2} } \| A^{\frac{1}{2}} F_{b} \|_{L_{x}^{4} L_{t}^{2} } \| A^{-s}
G^{1}_{b} \|_{L_{x}^{2} L_{t}^{\infty}} \| A^{-s} G^{2}_{b} \|_{L_{x}^{\infty} L_{t}^{\infty} }.
\end{align*}
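As a sanity check, the mixed-norm H\"older exponents used above are admissible in each variable separately:

```latex
% In x: L^{4}_{x}, L^{4}_{x}, L^{2}_{x}, L^{\infty}_{x};  in t: L^{2}_{t}, L^{2}_{t}, L^{\infty}_{t}, L^{\infty}_{t}.
\begin{equation*}
\frac{1}{4} + \frac{1}{4} + \frac{1}{2} + \frac{1}{\infty} = 1
\quad \text{and} \quad
\frac{1}{2} + \frac{1}{2} + \frac{1}{\infty} + \frac{1}{\infty} = 1.
\end{equation*}
```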
Then
\begin{equation*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'}
\leq c \| A^{\frac{1}{2}} M_{-b'} \|_{L_{x}^{4} L_{t}^{2} } \| A^{\frac{1}{2}} F_{b} \|_{L_{x}^{4} L_{t}^{2} } \| A^{-s}
G^{1}_{b} \|_{L_{x}^{2} L_{t}^{\infty}} \| A^{-s} G^{2}_{b} \|_{L_{x}^{\infty} L_{t}^{\infty} }.
\end{equation*}
Hence by Lemma \ref{2.3}
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leq c \| m \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| f_{1} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| g_{1} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| g_{2} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \\ &
\leq c\| u_{1}\|_{\rho, s, b} \| v_{1} \|_{\rho, s, b} \| v_{2} \|_{\rho, s, b}.
\end{align*}
Secondly, for the case $ |\zeta| \leq 1 $, we have
\begin{align*}
(1+ | \zeta |)^{s} | \zeta | (1+ | \zeta_{1} |)^{-s} &= (1+ | \zeta |)^{\frac{1}{2}} (1+ | \zeta_{1} |)^{\frac{1}{2}}
(1+ | \zeta_{1} |)^{-s-\frac{1}{2}}(1+ | \zeta |)^{s-\frac{1}{2}} | \zeta | \\
&\leq (1+ | \zeta |)^{\frac{1}{2}} (1+ | \zeta_{1} |)^{\frac{1}{2}}
(1+ | \zeta_{1} |)^{-s-\frac{1}{2}}(1+ | \zeta |)^{s-\frac{1}{2}} ( 1+ |\zeta | )
\\
&\leq (1+ | \zeta |)^{\frac{1}{2}} (1+ | \zeta_{1} |)^{\frac{1}{2}}(1+ | \zeta_{1} |)^{-s-\frac{1}{2}}
(1+ | \zeta |)^{s+\frac{1}{2}}
\\
& \leq (1+ | \zeta |)^{\frac{1}{2}} (1+ | \zeta_{1} |)^{\frac{1}{2}}
(1+ | \zeta_{1} |)^{-s-\frac{1}{2}} (1+ | \zeta_{1} | + | \zeta - \zeta_{2} | + | \zeta_{2} - \zeta_{1} | )^{s+\frac{1}{2}}
\\& \leq (1+ | \zeta |)^{\frac{1}{2}} (1+ | \zeta_{1} |)^{\frac{1}{2}}
(1+ | \zeta_{1} |)^{-s-\frac{1}{2}} ( 3(1+ | \zeta_{1} | ) )^{s+\frac{1}{2}}
\\
& \leq C (1+ | \zeta |)^{\frac{1}{2}} (1+ | \zeta_{1} |)^{\frac{1}{2}},
\end{align*}
then
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leqslant \int_{\mathbb{R}^{6}}\frac{(1+ | \zeta |)^{s} | \zeta | m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~
\frac{(1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}~
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}~\\& \quad\times
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}}d\mu
\\&
\leq C
\int_{\mathbb{R}^{6}}\frac{ (1+ | \zeta |)^{\frac{1}{2}} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~
\frac{(1+ | \zeta_{1} |)^{\frac{1}{2}} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}~
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}~
\\& \quad\times
\frac{(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d\mu.
\end{align*}
Then, by the inner product, we have
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leq c \langle \widehat{A^{\frac{1}{2}} M_{-b'}}; \widehat{A^{\frac{1}{2}} F_{b}}\star \widehat{A^{-s} G^{1}_{b}} \star \widehat{A^{-s} G^{2}_{b}}\rangle \\ &
\leq c \langle \widehat{A^{\frac{1}{2}} M_{-b'} }; \widehat{ A^{\frac{1}{2}} F_{b}~. A^{-s} G^{1}_{b}~A^{-s}G^{2}_{b}} \rangle \\ &
\leq c\langle A^{\frac{1}{2}} M_{-b'}; A^{\frac{1}{2}} F_{b}. A^{-s} G^{1}_{b}A^{-s} G^{2}_{b} \rangle \\ &
\leq c \| A^{\frac{1}{2}} M_{-b'} \|_{L_{x}^{4} L_{t}^{2} } \| A^{\frac{1}{2}} F_{b} \|_{L_{x}^{4} L_{t}^{2} } \| A^{-s}
G^{1}_{b} \|_{L_{x}^{2} L_{t}^{\infty}} \| A^{-s} G^{2}_{b} \|_{L_{x}^{\infty} L_{t}^{\infty} }.
\end{align*}
Hence by Lemma \ref{2.3}
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'} & \leq c \| m \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| f \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| g_{1} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| g_{2} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \\ &
\leq c \| u_{1}\|_{\rho, s, b} \| v_{1} \|_{\rho, s, b} \| v_{2} \|_{\rho, s, b}.
\end{align*}
In the same way, we prove the inequality in the other five regions.\\
The case $ p \geq 2 $ is virtually identical; the only difference is that we need to split the Fourier space into $((2p+1)+1)!$ regions.\\
We prove that
\begin{align*}
\| \partial_{x}\prod _{i=1}^{p} u_{i} \prod _{j=1}^{p+1} v_{j} \|_{\rho, s, b'}\leq
C \prod _{i=1}^{p}\| u_{i} \|_{\rho, s, b}. \prod _{j=1}^{p+1}\| v_{j} \|_{\rho, s, b}.
\end{align*}
We have
\begin{align*}
\| \partial_{x}\prod _{i=1}^{p} u_{i} \prod _{j=1}^{p+1} v_{j} \|_{\rho, s, b'} & = \| (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b'} e^{\rho (1+| \zeta |)} | \widehat{ \partial_{x}\prod _{i=1}^{p} u_{i} \prod _{j=1}^{p+1} v_{j} }(\zeta, \eta ) | \|_{L^{2}_{\zeta} L^{2}_{\eta} },
\\ &= \| (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b'} e^{\rho (1+| \zeta |)}
| \zeta | ~ | \prod _{i=1}^{p} \widehat{ u_{i} }\star \prod _{j=1}^{p+1} \widehat{v_{j} } (\zeta, \eta ) | \|_{L^{2}_{\zeta} L^{2}_{\eta} }.
\end{align*}
In the same way, using the inner product, we have
\begin{align*}
\| \partial_{x}\prod _{i=1}^{p} u_{i} \prod _{j=1}^{p+1} v_{j} \|_{\rho, s, b'} & \leq c \langle \widehat{A^{\frac{1}{2}} M_{-b'}}; \widehat{A^{\frac{1}{2}} F_{b}}\star \widehat{A^{-s} G^{1}_{b}} \star \prod _{i=1}^{p} \widehat{A^{-s} F^{i}_{b}} \star \prod _{j=1}^{p}\widehat{A^{-s} G^{j}_{b}}\rangle \\ &
\leq c \langle \widehat{A^{\frac{1}{2}} M_{-b'}}; \widehat{A^{\frac{1}{2}} F_{b}. A^{-s} G^{1}_{b} \prod _{i=1}^{p} A^{-s} F^{i}_{b}\prod _{j=1}^{p}A^{-s} G^{j}_{b}}\rangle \\ &
\leq c \| A^{\frac{1}{2}} M_{-b'} \|_{L_{x}^{4} L_{t}^{2} } \| A^{\frac{1}{2}} F_{b} \|_{L_{x}^{4} L_{t}^{2} } \| A^{-s} G^{1}_{b} \|_{L_{x}^{2} L_{t}^{\infty}} \| \prod _{i=1}^{p}A^{-s}
F^{i}_{b} \|_{L_{x}^{\infty} L_{t}^{\infty}} \| \prod _{j=1}^{p} A^{-s} G^{j}_{b} \|_{L_{x}^{\infty} L_{t}^{\infty} } \\ &
\leq c \prod _{i=1}^{p}\| u_{i} \|_{\rho, s, b}. \prod _{j=1}^{p+1}\| v_{j} \|_{\rho, s, b}.
\end{align*}
\end{proof}
\begin{lemma}
Let $\rho > 0 $, $ s \geq 3b $, $ b >\frac{1}{2} $, and $ b ' < - \frac{1}{4} $. Let $ p \in \mathbb{N}$ and suppose that $ u_{1},\dots, u_{p+1}, v_{1},\dots,v_{p+1} \in X_{\rho, s, b}$. Then there exist constants $ C $ and $ c $ such that
\begin{equation*}
\| \partial_{x}\prod _{i=1}^{p} u_{i} \prod _{j=1}^{p+1} v_{j} \|_{\rho, s, b'}\leq
C \prod _{i=1}^{p}\| u_{i} \|_{ s, b}. \prod _{j=1}^{p+1}\| v_{j} \|_{ s, b} + c \prod _{i=1}^{p}\| u_{i} \|_{ \rho, s, b}. \prod _{j=1}^{p+1}\| v_{j} \|_{\rho, s, b},
\end{equation*}
\begin{equation*}
\| \partial_{x}\prod _{i=1}^{p+1} u_{i} \prod _{j=1}^{p} v_{j} \|_{\rho, s, b'}\leq
C \prod _{i=1}^{p+1}\| u_{i} \|_{ s, b}. \prod _{j=1}^{p}\| v_{j} \|_{ s, b} + c \prod _{i=1}^{p+1}\| u_{i} \|_{ \rho, s, b}. \prod _{j=1}^{p}\| v_{j} \|_{\rho, s, b}.
\end{equation*}
\end{lemma}
\begin{proof}
We begin with the case $ p = 1 $; that is, we prove that
\begin{equation}
\| \partial_{x} ( u_{1} v_{1} v_{2} ) \|_{\rho, s, b'}\leq
C \| u_{1} \|_{ s, b}. \| v_{1} \|_{ s, b} \| v_{2} \|_{ s, b} + c \| u_{1} \|_{ \rho, s, b}. \| v_{1} \|_{\rho, s, b} \| v_{2} \|_{\rho, s, b}.
\end{equation}
We define
$$
f_{i} (\zeta, \eta ) = (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b} e^{\rho (1+ | \zeta |)} | \widehat{u_{i} }(\zeta,\eta )|, $$
$$
g_{j} (\zeta, \eta ) = (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b} e^{\rho (1+ | \zeta |) }| \widehat{v_{j} }(\zeta,\eta )|. $$
Then
\begin{align*}
\| \partial_{x} u_{1} v_{1} v_{2} \|_{\rho, s, b'}&= \| (1+ | \zeta |)^{s} (1+ |\eta - \zeta^{3} |)^{b'} e^{\rho (1+| \zeta |)} | \widehat{\partial_{x} u_{1} v_{1} v_{2} }(\zeta,\eta )| \|_{L^{2}_{\zeta} L^{2}_{\eta} } \\ \\ &
= \| (1+ | \zeta |)^{s}e^{\rho (1+| \zeta |)} (1+ |\eta - \zeta^{3} |)^{b'} ~ | \zeta |~~ |\widehat{u_{1}}\ast \widehat{v_{1}} \ast \widehat{v_{2}}(\zeta,\eta )| \|_{_{L^{2}_{\zeta} L^{2}_{\eta} }}
\\ \\&
= \| (1+ | \zeta |)^{s}e^{\rho (1+| \zeta |)} (1+ |\eta - \zeta^{3} |)^{b'} ~ | \zeta | ~ \int_{\mathbb{R}^{4}} \widehat{u_{1}}(\zeta_{1},\eta_{1} ) \widehat{v_{1}}(\zeta-\zeta_{2},\eta - \eta_{2} )
\\ \\&\quad \times \widehat{v_{2}}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1} )| d\zeta_{1} d\eta_{1} d\zeta_{2} d\eta_{2} \|_{_{L^{2}_{\zeta} L^{2}_{\eta} }}
\\ \\ & = \| (1+ | \zeta |)^{s} e^{\rho (1+| \zeta |)}(1+ |\eta - \zeta^{3} |)^{b'} | \zeta | \int_{\mathbb{R}^{4}} ~~
\frac{(1+ | \zeta_{1} |)^{-s} e^{-\rho (1+ | \zeta_{1} |)} \widehat{f_{1}}(\zeta_{1},\eta_{1} )}{(1+ | \eta_{1} - \zeta_{1}^{3} |)^{b}} \\ \\& \quad \times
\frac{(1+ | \zeta-\zeta_{2} |)^{-s} e^{-\rho (1+ |\zeta-\zeta_{2} |)} \widehat{g_{1}}(\zeta-\zeta_{2},\eta - \eta_{2})}{(1+ | ( \eta - \eta_{2} ) - (\zeta-\zeta_{2})^{3} |)^{b}} \\ \\& \quad \times
\frac{(1+ | \zeta_{2} -\zeta_{1} |)^{-s} e^{-\rho (1+ | \zeta_{2} -\zeta_{1}|)} \widehat{g_{2}}(\zeta_{2} -\zeta_{1},\eta_{2}-\eta_{1} )}{(1+ |\eta_{2}-\eta_{1} -(\zeta_{2} -\zeta_{1} )^{3} |)^{b}} d \mu \|_{{L^{2}_{\zeta} L^{2}_{\eta} }}.
\end{align*}
We prove this estimate by duality. Let $ m(\zeta,\eta) $ be a positive function in $ L^{2}( \mathbb{R}^{2})$ with norm $ \| m \|_{ L^{2}( \mathbb{R}^{2})}=1 $; we then estimate
\begin{align*}
&\int_{\mathbb{R}^{6}}\frac{e^{\rho (1+| \zeta |)}(1+ | \zeta |)^{s} | \zeta | m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}
\frac{e^{-\rho (1+ | \zeta_{1} |)}(1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}} \\ \\ &
\frac{e^{-\rho (1+ | \zeta-\zeta_{2} |)}(1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}~~~~
\frac{e^{-\rho (1+ | \zeta_{2}-\zeta_{1} |)}(1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}}d\mu.
\end{align*}
Using the inequality
\begin{equation*}
e^{\rho (1+| \zeta |)} \leq e + \rho^{\frac{1}{2}} e^{\rho (1+ | \zeta |)} (1+ | \zeta |)^{\frac{1}{2}}.
\end{equation*}
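One way to obtain this inequality is from the elementary bound $ e^{y} \leq 1 + \sqrt{y}\, e^{y} $, valid for all $ y \geq 0 $:

```latex
% For 0 \leq y \leq 1:  e^{y} - 1 = \int_{0}^{y} e^{t}\, dt \leq y\, e^{y} \leq \sqrt{y}\, e^{y};
% for y \geq 1:         e^{y} - 1 \leq e^{y} \leq \sqrt{y}\, e^{y}.
% Taking y = \rho (1+|\zeta|) gives
\begin{equation*}
e^{\rho (1+|\zeta|)} \leq 1 + \rho^{\frac{1}{2}} (1+|\zeta|)^{\frac{1}{2}}\, e^{\rho (1+|\zeta|)}
\leq e + \rho^{\frac{1}{2}} (1+|\zeta|)^{\frac{1}{2}}\, e^{\rho (1+|\zeta|)}.
\end{equation*}
```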
Then
\begin{align*}
&\int_{\mathbb{R}^{6}}\frac{ e^{\rho (1+ | \zeta |)} (1+ | \zeta |)^{1+s} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}
\frac{ e^{-\rho (1+ |\zeta_{1} |)} (1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}\\ \\&
\frac{ e^{-\rho (1+ |\zeta-\zeta_{2} |)} (1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}\frac{ e^{-\rho (1+ |\zeta_{2}-\zeta_{1} |)} (1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1}-(\zeta_{2}-\zeta_{1})^{3} |)^{b}}\, d\mu\\ \\&
\leq I +I^{\prime},
\end{align*}
where
\begin{align*}
I + I^{\prime}& = e \sup_{m \in B}
\int_{\mathbb{R}^{6}}\frac{ (1+ | \zeta |)^{1+s} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~~~~
\frac{ e^{-\rho (1+ |\zeta_{1} |)} (1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}\\ \\& \quad \times
\frac{ e^{-\rho (1+ |\zeta-\zeta_{2} |)} (1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}
\frac{ e^{-\rho (1+ |\zeta_{2}-\zeta_{1} |)} (1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}}d\mu
\\ \\& \quad + \rho^{\frac{1}{2}} \sup_{m \in B}
\int_{\mathbb{R}^{6}}\frac{ e^{\rho (1+ | \zeta |)} (1+ | \zeta |)^{\frac{1}{2}} (1+ | \zeta |)^{1+s} m(\zeta,\eta)} {(1+ |\eta - \zeta^{3} |)^{-b'}}~~ \frac{ e^{-\rho (1+ |\zeta_{1} |)} (1+ | \zeta_{1} |)^{-s} f_{1} (\zeta_{1},\eta_{1})} {(1+ |\eta_{1} - \zeta_{1}^{3} |)^{b}}\\ \\& \quad \times
\frac{ e^{-\rho (1+ |\zeta-\zeta_{2} |)} (1+ | \zeta-\zeta_{2} |)^{-s} g_{1}(\zeta-\zeta_{2},\eta-\eta_{2})} {(1+ |\eta -\eta_{2} - (\zeta-\zeta_{2})^{3} |)^{b}}
\frac{ e^{-\rho (1+ |\zeta_{2}-\zeta_{1} |)} (1+ | \zeta_{2}-\zeta_{1} |)^{-s} g_{2}(\zeta_{2}-\zeta_{1},\eta_{2}-\eta_{1})} {(1+ |\eta_{2}-\eta_{1} - (\zeta_{2}-\zeta_{1})^{3} |)^{b}} d\mu.
\end{align*}
Now, split the Fourier space into six regions (the same division as in \ref{ta14}). We begin with case $ (1) $
$
\left( |\zeta-\zeta_{2} | \leq | \zeta_{2}-\zeta_{1} |\leq | \zeta_{1} | \right).$ The part of the integral
$ I $ corresponding to this region can be dominated, taking the supremum over all $m$ in $B$ and using duality, by the inner product:
\begin{equation*}
\begin{array}{l}
I \leq c \langle \widehat{A^{\frac{1}{2}} M_{-b'}}; \widehat{ e^{-\rho A } A F_{b}}\star \widehat{ e^{-\rho A } A^{-s} G^{1}_{b}} \star \widehat{ e^{-\rho A} A^{-s} G^{2}_{b}}\rangle \\ \\
\leq c \langle \widehat{A^{\frac{1}{2}} M_{-b'}}; \widehat{ e^{-\rho A} A F_{b}~. ~ e^{-\rho A} A^{-s} G^{1}_{b} ~. ~ e^{-\rho A } A^{-s} G^{2}_{b}}\rangle \\ \\
\leq c \langle A^{\frac{1}{2}} M_{-b'}; e^{-\rho A } A F_{b} ~. ~ e^{-\rho A} A^{-s} G^{1}_{b} ~.~ e^{-\rho A } A^{-s} G^{2}_{b}\rangle \\ \\
\leq c \| A^{\frac{1}{2}} M_{-b'} \|_{L_{x}^{4} L_{t}^{2} } \| e^{-\rho A } A F_{b} \|_{L_{x}^{\infty} L_{t}^{2} } \| e^{-\rho A } A^{-s}
G^{1}_{b} \|_{L_{x}^{2} L_{t}^{\infty}} \| e^{-\rho A } A^{-s} G^{2}_{b} \|_{L_{x}^{4} L_{t}^{\infty} }.
\end{array}
\end{equation*}
Hence by Lemma \ref{2.3}
\begin{equation*}
\begin{array}{l}
I \leq c \| m \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| e^{-\rho A } f_{1} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| e^{-\rho A } g_{1} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| e^{-\rho A } g_{2} \|_{L_{\zeta}^{2} L_{\eta}^{2} }
\leq c \| u_{1}\|_{ s, b} \| v_{1} \|_{ s, b} \| v_{2} \|_{ s, b}.
\end{array}
\end{equation*}
In the same way we treat the second part, that is, the integral $ I^{\prime} $, using the inequality
$$ e^{\rho (1+| \zeta |)} \leq e^{\rho (1+ | \zeta_{1} |)} \times e^{\rho (1+ | \zeta-\zeta_{2} |)} \times e^{\rho (1+ | \zeta_{2}-\zeta_{1} |)},
$$
we find
\begin{equation*}
\begin{array}{l}
I^{\prime} \leq c \rho^{\frac{1}{2}}\sup_{\substack{m \in B }}\| A^{\frac{1}{2}} M_{-b'} \|_{L_{x}^{4} L_{t}^{2} } \| A^{\frac{1}{2}} F_{b} \|_{L_{x}^{4} L_{t}^{2} } \| A^{-s}
G^{1}_{b} \|_{L_{x}^{2} L_{t}^{\infty}} \| A^{-s} G^{2}_{b} \|_{L_{x}^{\infty} L_{t}^{\infty} } \\ \\
\leq c \| m\|_{L_{\zeta}^{2} L_{\eta}^{2} } \| f_{1} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| g_{1} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \| g_{2} \|_{L_{\zeta}^{2} L_{\eta}^{2} } \\ \\
\leq c \| u_{1}\|_{\rho, s, b} \| v_{1} \|_{\rho, s, b} \| v_{2} \|_{\rho, s, b}.
\end{array}
\end{equation*}
The other five cases follow by symmetry.\\
For the case $ p > 2 $, the same scheme of estimation applies, with additional factors of the form
$$ \| A^{-s}(G_{b}^{i} )\, A^{-s}(F_{b}^{i} ) \|_{L_{x}^{\infty} L_{t}^{\infty}}. $$
We deal with the remaining parts in the same way.
\end{proof}
\section{Proof of Theorem \ref{the1.2}}
{\bf Existence of solution.} We define
\begin{equation*}
\mathcal{B}_{\rho, s, b} = X_{\rho, s, b} \times X_{\rho, s, b}, \quad \quad
\mathcal{N}^{\rho, s}= \mathcal{G}_{\rho, s} \times \mathcal{G}_{\rho, s},
\end{equation*}
\begin{equation*}
\| (u, v) \|_{\mathcal{B}_{\rho, s, b}} = \max \{\| u \|_{\rho, s, b}; \| v \|_{\rho, s, b} \} \quad \text{and} \quad \| (u_0, v_0) \|_{\mathcal{N}^{\rho, s}} = \max \{\|u_0 \|_{\mathcal{G}_{\rho, s}}; \| v_0 \|_{\mathcal{G}_{\rho, s}} \}.
\end{equation*}
\begin{lemma}
Let $ s\geq 0 $, $ \rho \geq 0 $, $ b > \frac{1}{2} $ and $ T \in ( 0, 1)$.
Then, for all $ (u_{0}, v_{0}) \in \mathcal{N}^{\rho, s} $, the map \\
$ \varXi \times \Gamma: \mathbb{B}(0,R) \longrightarrow \mathbb{B}(0,R) $ is a contraction, where $ \mathbb{B} (0, R)$ is given by
\begin{equation*}
\mathbb{B}(0,R) = \{ (u,v) \in \mathcal{B}_{\rho, s, b};\quad \| (u, v) \|_{\mathcal{B}_{\rho, s, b}} \leq R \} \quad \text{where} \quad
R= 2C\| (u_{0}, v_{0}) \|_{\mathcal{N}^{\rho, s}}.
\end{equation*}
\end{lemma}
\begin{proof}
First we prove that $ \varXi \times \Gamma $
maps $ \mathbb{B}(0,R) $ into itself:
\begin{align*}
\| \varXi [u,v](t) \|_{\rho, s, b}
&= \| \psi (t) W(t)u_{0}- \psi_{T}(t)\int_{0}^{t} W(t-t^{\prime})w_{1}(t^{\prime}) dt^{\prime} \|_{\rho, s, b} \\
&\leq \| \psi (t) W(t)u_{0} \|_{\rho, s, b}+ \| \psi_{T}(t)\int_{0}^{t} W(t-t^{\prime})w_{1}(t^{\prime}) dt^{\prime} \|_{\rho, s, b}\\
&\leq C \| u_{0} \|_{\mathcal{G}_{\rho, s}} + CT^{1-b+b'}
\| w_{1}(t^{\prime}) \|_{\rho, s, b'}\\
&= C \| u_{0} \|_{\mathcal{G}_{\rho, s}} + CT^{1-b+b'}
\| \partial_{x}\left(u^{p} v^{p+1}\right)\|_{\rho, s, b'}.
\end{align*}
We use Lemma \ref{1.3} to have
\begin{align*}
\| \partial_{x} u^{p} v^{p+1}\|_{\rho, s, b'}& \leq C \| u \|^{p}_{\rho, s, b} \| v \|^{p+1}_{\rho, s, b}.
\end{align*}
Then
\begin{align*}
\| \varXi [u,v](t) \|_{\rho, s, b}
&\leq C \| u_{0} \|_{\mathcal{G}_{\rho, s}} + CT^{1-b+b'}
\| u \|^{p}_{\rho, s, b} \| v\|^{p+1}_{\rho, s, b} \\
&\leq C \max \left( \| u_{0} \|_{\mathcal{G}_{\rho, s}}, \| v_{0} \|_{\mathcal{G}_{\rho, s}}\right) + CT^{1-b+b'}
\max \left( \| u \|_{\rho, s, b}, \| v \|_{\rho, s, b}\right)^{p} \\& \quad \times \max \left( \| u \|_{\rho, s, b}, \| v \|_{\rho, s, b}\right)^{p+1}\\
&\leq C \max \left( \| u_{0} \|_{\mathcal{G}_{\rho, s}}, \| v_{0} \|_{\mathcal{G}_{\rho, s}}\right) + CT^{1-b+b'}
\max \left( \| u \|_{\rho, s, b}, \| v \|_{\rho, s, b}\right)^{2p+1}.
\end{align*}
The estimates for the second term $ \Gamma $ are similar.
\begin{align*}
\| \Gamma[u,v](t) \|_{\rho, s, b} &\leq C \| ( u_{0}, v_{0}) \|_{\mathcal{N}^{\rho, s}}
+ CT^{1-b+b'}
\left( \| (u, v) \|_{\mathcal{B}_{\rho, s, b}}\right)^{2p+1}.
\end{align*}
Combining the two estimates, we obtain
\begin{align*}
\| \varXi [u,v](t), \Gamma[u,v](t) \|_{\mathcal{B}_{\rho, s, b}} &\leq C \| ( u_{0}, v_{0}) \|_{\mathcal{N}^{\rho, s}}
+ CT^{1-b+b'}
\left( \| (u, v) \|_{\mathcal{B}_{\rho, s, b}}\right)^{2p+1} \\
& \leq \frac{R}{2} +T^{\epsilon} C R^{2p+1}.
\end{align*}
We choose $T$ sufficiently small, with $ \epsilon = 1-b+b' > 0 $, such that
\begin{equation*}
T^{\epsilon} \leq \frac{1}{4CR^{2p}}.
\end{equation*}
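With this choice of $T$ and $ R = 2C\| (u_{0}, v_{0}) \|_{\mathcal{N}^{\rho, s}} $, the closing arithmetic reads:

```latex
\begin{equation*}
C \| (u_{0}, v_{0}) \|_{\mathcal{N}^{\rho, s}} + C T^{\epsilon} R^{2p+1}
\leq \frac{R}{2} + \frac{C R^{2p+1}}{4 C R^{2p}}
= \frac{R}{2} + \frac{R}{4}
\leq R.
\end{equation*}
```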
Hence
\begin{equation*}
\| \varXi [u, v](t), \Gamma[u, v](t) \|_{\mathcal{B}_{\rho, s, b}} \leq R, \quad \forall
(u, v) \in \mathbb{B}(0, R).
\end{equation*}
Secondly, we prove that the map $ \varXi \times \Gamma: \mathbb{B}(0,R) \longrightarrow \mathbb{B}(0,R) $ is a contraction.\\
To this end, let $ (u, v) \in \mathbb{B}(0,R) $ and $ (u^{*}, v^{*}) \in \mathbb{B}(0,R) $. Then
\begin{align*}
\| \varXi [u,v](t) - \varXi [u^{*},v^{*}](t) \|_{\rho, s, b} &= \| \psi_{T}(t)\int_{0}^{t} W(t-t^{\prime}) \partial_{x}\left(u^{p} v^{p+1}-u^{*p} v^{*p+1} \right)dt^{\prime}\|_{\rho, s, b} \\ \\
&= \| \psi_{T}(t)\int_{0}^{t} W(t-t^{\prime}) \partial_{x}\left[(u^{p}-u^{*p}) v^{p+1}+u^{*p} (v^{p+1}-v^{*p+1}) \right]dt^{\prime}\|_{\rho, s, b}.
\end{align*}
We use Lemma \ref{1.3} to obtain
\begin{equation*}\label{p4}
\begin{array}{l}
\| \partial_{x}\left(u^{p} - u^{*p}\right)v^{p+1} \|_{\rho, s, b'} \leq C \| u^{p} - u^{*p}\|_{\rho, s, b} \| v^{p+1} \|_{\rho, s, b}
\\ \\
\| \partial_{x} u^{*p} \left( v^{p+1}- v^{*p+1} \right)\|_{\rho, s, b'} \leq C \| u^{*p} \|_{\rho, s, b} \| \left( v^{p+1}- v^{*p+1} \right)\|_{\rho, s, b}.
\end{array}
\end{equation*}
According to Lemma \ref{1.3}, we have \\
\begin{equation*}
\begin{array}{l}
\| \left(u^{p} - u^{*p}\right) \|_{\rho, s, b} \leq C \| \left(u - u^{*}\right)\|_{\rho, s, b} R^{p-1}
\\ \\
\| \left( v^{p+1}- v^{*p+1} \right)\|_{\rho, s, b}\leq C \| \left( v- v^{*} \right)\|_{\rho, s, b} R^{p}.
\end{array}
\end{equation*}
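These power-difference bounds can be seen, for instance, through the algebraic factorization below, together with the product estimates in $X_{\rho,s,b}$ that the proof uses implicitly (each remaining factor has norm at most $R$ on $\mathbb{B}(0,R)$):

```latex
% Telescoping factorization of a difference of powers:
\begin{equation*}
a^{p} - b^{p} = (a - b) \sum_{k=0}^{p-1} a^{k}\, b^{p-1-k},
\end{equation*}
% applied with (a,b) = (u, u^{*}) for the first bound, and with
% (a,b) = (v, v^{*}) and p replaced by p+1 for the second; the p-1
% (resp. p) remaining factors are each bounded in norm by R.
```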
Then
\begin{equation*}
\begin{array}{ll}
\| \partial_{x}\left(u^{p} - u^{*p}\right)v^{p+1} \|_{\rho, s, b'}
& \leq C \| \left(u^{p} - u^{*p}\right)\|_{\rho, s, b} \| v \|^{p+1}_{\rho, s, b} \\ \\
&\leq C \| \left(u - u^{*}\right)\|_{\rho, s, b} R^{p-1} R^{p+1}
\\ \\
&\leq CR^{2p} \max \left( \| u - u^{*}\|_{\rho, s, b}, \| v- v^{*} \|_{\rho, s, b} \right)\\ \\
&= CR^{2p} \| (u- u^{*},v- v^{*}) \|_{\mathcal{B}_{\rho, s, b}},
\end{array}
\end{equation*}
and
\begin{equation*}\label{p7}
\begin{array}{ll}
\| \partial_{x} u^{*p} \left( v^{p+1}- v^{*p+1} \right)\|_{\rho, s, b'}
\leq C R^{2p} \| (u- u^{*},v- v^{*}) \|_{\mathcal{B}_{\rho, s, b}},
\end{array}
\end{equation*}
and
\begin{equation*}
\| \varXi [u,v](t) - \varXi [u^{*},v^{*}](t) \|_{\rho, s, b} \leq 2CT^{1-b+b'}R^{2p} \| (u- u^{*},v- v^{*}) \|_{\mathcal{B}_{\rho, s, b}},
\end{equation*}
\begin{equation*}
\| \Gamma [u,v](t) - \Gamma [u^{*},v^{*}](t) \|_{\rho, s, b} \leq 2CT^{1-b+b'}R^{2p} \| (u- u^{*},v- v^{*}) \|_{\mathcal{B}_{\rho, s, b}}.
\end{equation*}
Combining these estimates, we obtain
\begin{align*}
\| \varXi [u,v](t) - \varXi [u^{*},v^{*}](t), \Gamma [u,v](t) - \Gamma [u^{*},v^{*}](t)\|_{\mathcal{B}_{\rho, s, b}} \\
\leq 2 C ~T^{\epsilon} R^{2p} \| (u- u^{*}, v- v^{*})\|_{\mathcal{B}_{\rho, s, b}}.
\end{align*}
Since $ T^{\epsilon} \leq \frac{1}{4CR^{2p}}$, we have
\begin{align*}
\| \varXi [u,v](t) - \varXi [u^{*},v^{*}](t), \Gamma [u,v](t) - \Gamma [u^{*},v^{*}](t)\|_{\mathcal{B}_{\rho, s, b}} \\
\leq \frac{1}{2} \| (u- u^{*}, v- v^{*}) \|_{\mathcal{B}_{\rho, s, b}}.
\end{align*}
Since the map $ \varXi \times \Gamma: \mathbb{B}(0,R) \longrightarrow \mathbb{B}(0,R) $ is a contraction, it follows that it has a unique fixed point $(u,v)$ in $\mathbb{B}(0, R)$.
\end{proof}
The rest of the proof follows a standard argument.
\section{Large time estimates on the radius of analyticity.}
\begin{lemma}\label{lem23}
Let $ s > \frac{3}{2} $, $ \rho > 0 $, $ T\geq 1 $ and $ b \in [-1, 1 ]$. We suppose that $ ( u,v ) $ is a solution of $ (\ref{p01} )$ on the time interval $ [0, 2T ]$. Then there exists a constant $ C $ such that
\begin{equation}\label{eq250}
\| (\psi_{T}(t) u(.,t), \psi_{T}(t) v(.,t)) \|_{\mathcal{B}_{s,b}}\leq C T^{\frac{1}{2}} \left( 1+ \lambda_{T} (u,v) \right)^{2p+1},
\end{equation}
and
\begin{equation}\label{eq201}
\| (\psi_{T}(t) u(.,t), \psi_{T}(t) v(.,t)) \|_{\mathcal{B}_{\rho,s,b}}\leq CT^{\frac{1}{2}} \left( 1+\kappa_{T} (u,v) \right)^{{2p+1}},
\end{equation}
with
\begin{equation*}
\lambda_{T} (u,v)= \sup_{\substack{ t \in [0, 2T]
}} \left( \| (u, v) \|_{\mathcal{N}^{s+1}} \right) \quad \textit{and} \quad \kappa_{T} (u,v)= \sup_{\substack{ t \in [0, 2T]
}} \left( \| (u, v) \|_{\mathcal{N}^{\rho, s+1}} \right),
\end{equation*}
where $\mathcal{N}^{s}=H^{s}\times H^{s}$ and $\mathcal{B}_{s,b}=X_{s,b}\times X_{s,b} $.
\end{lemma}
\begin{proof} We have
\begin{align*}
\| \psi_{T}(t) u(x,t) \|^{2}_{s,b} & =\int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{-\infty}^{+\infty} \bigg| \Lambda^{b} \left( e^{-it\zeta^{3}} \psi_{T}(t) \widehat{u}^{x}(\zeta,t) \right)\bigg|^{2} dt d \zeta.
\end{align*}
By using the inequality
$$ |\Lambda^{b}v(x,t)| \leq c \left( |v(x,t)| + |\partial_{t}v(x,t)| \right), $$
we get
\begin{align*}
\| \psi_{T}(t) u(.,t) \|^{2}_{s,b} &\leq c \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{-\infty}^{+\infty}\bigg | \left( e^{-it\zeta ^{3}} \psi_{T}(t) \widehat{u}^{x} (\zeta,t) \right)\bigg|^{2} dt d \zeta
\\& \quad
+ c \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{-\infty}^{+\infty} \bigg|\partial_{t} \left( e^{-it\zeta ^{3}} \psi_{T}(t) \widehat{u}^{x} (\zeta,t) \right)\bigg|^{2} dt d \zeta.
\end{align*}
We have
\begin{align*}
\partial_{t} \left( e^{-it\zeta ^{3}} \psi_{T}(t) \widehat{u}^{x} (\zeta,t) \right)= &\frac{1}{T}\psi^{\prime}_{ T}(t) e^{-it\zeta ^{3}}\widehat{u}^{x} (\zeta,t) + \psi_{ T}(t) ( -i\zeta ^{3})e^{-it\zeta ^{3}}\widehat{u}^{x} (\zeta,t) \\ & + \psi_{ T}(t) e^{-it\zeta ^{3}}\widehat{u}^{x}_{t} (\zeta,t),
\end{align*}
and
\begin{equation*}
u_{t}= -\partial_{x}^{3} u-\partial_{x}\left(u^{p} v^{p+1}\right).
\end{equation*}
Then
\begin{align*}
\widehat{u}^{x}_{t} (\zeta,t) & = - \widehat{\partial^{3}_{x}u }^{x} (\zeta,t) - \widehat{\partial_{x}\left(u^{p} v^{p+1}\right)}^{x} (\zeta,t)
\\&
= i\zeta^{3}\widehat{u}^{x}(\zeta,t) -i \zeta \widehat{\left(u^{p} v^{p+1}\right) }^{x}(\zeta,t).
\end{align*}
So
\begin{equation*}
\partial_{t} \left( e^{-it\zeta ^{3}} \psi_{T}(t) \widehat{ u }^{x}(\zeta,t) \right)= \frac{1}{T}\psi^{\prime}_{ T}(t) e^{-it\zeta ^{3}} \widehat{u}^{x} (\zeta,t) - \psi_{ T}(t) e^{-it\zeta ^{3}} i \zeta \widehat{ ( u^{p} v^{p+1} ) }^{x} (\zeta,t),
\end{equation*}
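Here the two linear phase contributions cancel exactly,
\begin{equation*}
\psi_{ T}(t) ( -i\zeta^{3}) e^{-it\zeta^{3}} \widehat{u}^{x}(\zeta,t) + \psi_{ T}(t) e^{-it\zeta^{3}} \left( i\zeta^{3} \widehat{u}^{x}(\zeta,t) \right) = 0,
\end{equation*}
so only the derivative of the cutoff and the nonlinear term survive.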
and
\begin{align*}
\| \psi_{T}(t) u(.,t) \|^{2}_{s,b}& = \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{-\infty}^{+\infty}\bigg | \Lambda^{b} \left( e^{-it\zeta^{3}} \psi_{T}(t) \widehat{u}^{x} (\zeta,t) \right)\bigg |^{2} dt d \zeta
\\&
\leq c \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{-\infty}^{+\infty} \bigg | \left( e^{-it\zeta ^{3}} \psi_{T}(t) \widehat{u}^{x} (\zeta,t) \right)\bigg |^{2} dt d \zeta
\\& \quad
+ c \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{-\infty}^{+\infty} \bigg | \left( e^{-it\zeta ^{3}} \frac{1}{T}\psi^{\prime}_{T}(t) \widehat{u}^{x} (\zeta,t) \right)\bigg |^{2} dt d \zeta
\\& \quad
+ c \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{-\infty}^{+\infty} \bigg | \left( e^{-it\zeta^{3}} \psi_{T}(t)(i \zeta ) \widehat{u^{p} v^{p+1}}^{x} (\zeta,t) \right)\bigg |^{2} dt d \zeta
\\
&\leq 2 c \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \int_{0}^{2T} | \widehat{u}^{x} (\zeta,t) |^{2} dt d \zeta
+ c \int_{-\infty}^{+\infty} \left(1+| \zeta | \right)^{2s} \\& \quad \times \int_{0}^{2T} \bigg | |\zeta | \widehat{ u^{p} v^{p+1} }^{x} (\zeta,t) \bigg |^{2} dt d \zeta,
\end{align*}
and
\begin{align*}
\| \psi_{T}(t) u(.,t) \|^{2}_{s,b}&\leq 4 c T \sup_{\substack{t \in [0, 2T]}}\| u(., t)\|^{2}_{H^{s}} + 2cT \sup_{\substack{ t \in [0, 2T]}}( \| u^{p}v^{p+1} \|^{2}_{H^{s+1}})
\\ &
\leq 4 c T \sup_{\substack{t \in [0, 2T]}}\| u(., t)\|^{2}_{H^{s}} + 2cT \sup_{\substack{ t \in [0, 2T]}} (\| u^{p} \|_{H^{s+1}} \| v^{p+1} \|_{H^{s+1}} )^{2}
\\ &
\leq 4 c T \sup_{\substack{t \in [0, 2T]}}(\| (u, v)\|_{\mathcal{N}^{s}} )^{2} + 2cT \sup_{\substack{ t \in [0, 2T]}} ((\| (u, v )\|_{\mathcal{N}^{s+1}} )^{2p+1} )^{2},
\end{align*}
and
\begin{equation*}
\| \psi_{T}(t) u(.,t) \|_{s,b} \leq cT^{\frac{1}{2}} \left( 1+\lambda_{T} (u,v) \right)^{2p+1},
\end{equation*}
where
$$ \lambda_{T} (u,v) = \sup_{\substack{ t \in [0, 2T]}} (\| (u, v )\|_{\mathcal{N}^{s+1}} ). $$
Similarly,
$$ \| \psi_{T}(t) v(.,t) \|_{s,b} \leq cT^{\frac{1}{2}} \left( 1+\lambda_{T} (u,v) \right)^{2p+1}, $$
and
\begin{align*}
\| (\psi_{T}(t) u(.,t), \psi_{T}(t) v(.,t)) \|_{s,b} & \leq 2c~T^{\frac{1}{2}} \left( 1+\lambda_{T} (u,v) \right)^{2p+1}\\
&\leq CT^{\frac{1}{2}} \left( 1+\lambda_{T} (u,v) \right)^{2p+1}.
\end{align*}
This completes the proof.
\end{proof}
To prove Theorem \ref{th03}, we define a sequence of approximations to $ (\ref{p01})$ as follows:
\begin{equation}
\left\{\begin{array}{l}\label{eq10}
u^{n}_{t}+\partial_{x}^{3} u^{n}=-\partial_{x}\left(( \rho_{n}\ast \psi_{T} u^{n})^{p} ( \rho_{n}\ast \psi_{T} v^{n})^{p+1}\right), \\
v^{n}_{t}+\partial_{x}^{3} v^{n}= -\partial_{x}\left(( \rho_{n}\ast \psi_{T} u^{n})^{p+1} ( \rho_{n}\ast \psi_{T} v^{n})^{p}\right), \quad x, t \in \mathbb{R}, p \in \mathbb{Z}^{+} \\
u^{n}(x, 0)=u_{0}(x), \quad v^{n}(x, 0)=v_{0}(x),
\end{array}\right.
\end{equation}
where $ T > 0 $, $ n \in \mathbb{N}$
and $ \rho_{n} $ is defined as
\begin{equation*}
\widehat{\rho}_{n}(\zeta) =
\left\{\begin{array}{l}
0, \quad | \zeta | \geq 2n
\\
1, \quad | \zeta | \leq n,
\end{array}\right.
\end{equation*}
where $\widehat{\rho}_{n} $ is smooth and monotone on $ ( n, 2n)$.
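For concreteness, one admissible transition profile can be sketched as follows; the cubic (smoothstep) transition below is merely one convenient choice (only $C^{1}$ rather than $C^{\infty}$), not the profile fixed by the text, and the function name is ours.

```python
import numpy as np

def rho_hat(zeta, n):
    """Frequency cutoff symbol: equals 1 for |zeta| <= n, 0 for |zeta| >= 2n,
    and decreases monotonically across the band (n, 2n).
    The cubic smoothstep transition used here is only C^1; a bump-function
    transition would give a C-infinity symbol."""
    a = np.abs(np.asarray(zeta, dtype=float))
    # normalized position within the transition band (n, 2n)
    s = np.clip((a - n) / n, 0.0, 1.0)
    # decreasing smoothstep from 1 down to 0
    return 1.0 - (3.0 * s**2 - 2.0 * s**3)
```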
\begin{lemma}\label{lem043}
Let $ s \geq 0 $ and $ (u_{0},v_{0} ) \in \mathcal{N}^{s} $, and assume that $( u, v) $ is a solution of $ (\ref{p01} )$ with initial data $ (u_{0},v_{0} )$. Then for $ n \in \mathbb{N}$, we have
\begin{itemize}
\item $ (u^{n}, v^{n} )$ is in $ C([0,2T], H^{s} ) \times C([0,2T], H^{s} ) $, and the sequence $ \{ (u^{n },v^{n} )\} $ converges to $ (u,v) $ in
$ C([0,T], H^{s} ) \times C([0,T], H^{s} ) $.
\item The estimate in Lemma \ref{lem23} holds for $ (u^{n},v^{n} ) $ uniformly in $ n $.
\item If $ (u_{0}, v_{0} ) \in \mathcal{N}^{\rho, s} $ for $ \rho > 0 $, then
the same results hold in $ C([0,T], \mathcal{G}_{\rho, s} ) \times C([0,T], \mathcal{G}_{\rho, s} ). $
\end{itemize}
\end{lemma}
\begin{lemma}\label{lem4} (\cite{130})
Let $ (u, v) $ be a solution of $(\ref{p01}) $ with initial data $(u_{0},v_{0} )\in \mathcal{N}^{\rho_{0}, s+1}$, where $ \rho_{0} > 0 $, $ s >\frac{3}{2} $ and $\eta > 0 $. Then
\begin{align*}
\sup_{\substack{ t \in [0, 2\eta]}} \Vert (u(., t), v(., t)) \Vert_{\mathcal{N}^{\rho(t), s+1}} \leq \Vert (u_{0},v_{0} ) \Vert_{\mathcal{N}^{\rho_{0}, s+1}} + C \eta^{\frac{1}{2}} \sup_{\substack{ t \in [0, 2\eta]}} \Vert (u(., t), v(., t)) \Vert^{(2p+2)/2}_{ \mathcal{N}^{s+1}},
\end{align*}
with $ \rho (t)= \rho_{0} e^{-\gamma (t)} $
and $\gamma (t)$ is defined as
$$ \gamma (t)= \int_{0}^{t} \left( k_{1} + k_{2} \int_{0}^{t^{\prime}} \Vert \left( u(., t^{\prime \prime} ), v(., t^{\prime \prime}) \right) \Vert^{2p+2}_{ \mathcal{N}^{s+1}} dt^{\prime \prime} \right)^{2p} dt^{\prime}, $$
where
$$ k_{1}= \Vert (u_{0},v_{0} ) \Vert^{2}_{\mathcal{N}^{\rho_{0}, s+1}} , $$ and $ k_{2} $ is a constant.
\end{lemma}
\begin{proposition}\label{pro02}
Let $ \rho_{0} > 0 $, $ p \geqslant 1 $, $ T \geqslant 1$ and $ s > 3b $, and assume that $( u, v) $
is a solution of $ (\ref{p01}) $ in $ C \left( [0, 2T], H^{s+1} \right) \times C \left( [0, 2T], H^{s+1} \right) $
with $ ( u_{0}, v_{0} ) \in \mathcal{G}_{\rho_{0}, s+1} \times \mathcal{G}_{\rho_{0}, s+1} $. Then there exists $ \rho_{1} < \rho_{0} $ such that
$$ \lbrace (\Psi_{T} u^{n}, \Psi_{T} v^{n}) \rbrace \quad \text{is bounded in} \quad \mathcal{B}_{\rho (t), s,b}, $$
with
$$ \rho (t) \leq \min \lbrace \rho_{1},K T ^{-2p^{2} -6p-1} \rbrace . $$
\end{proposition}
\begin{proof}
We have
\begin{align}
\psi_{T}(t) u^{n} = \psi_{T}(t) W(t) u_{0}- \psi_{T}(t) \int_{0}^{t} W(t-t^{\prime}) \partial_{x}\left(( \rho_{n}\ast \psi_{T} u^{n})^{p+1} ( \rho_{n}\ast \psi_{T} v^{n})^{p}\right)(t^{\prime}) dt^{\prime},
\end{align}
where $ t \in (0, \infty )$. We will show that $\Psi_{T} u^{n} \in X_{\rho,s,b} $ for all $ n \in \mathbb{N}$. \\
We have
\begin{align*}
\| \psi_{T}(t) u^{n} \|_{\rho, s,b} &\leq \| \psi_{T}(t) W(t) u_{0} \|_{\rho, s,b} + \| \psi_{T}(t) \int_{0}^{t} \partial_{x}\left( ( \rho_{n}\ast \psi_{T} u^{n})^{p+1} ( \rho_{n}\ast \psi_{T} v^{n})^{p} \right) \|_{\rho, s,b}\\&
\leq c T^{\frac{1}{2}} \| u_{0} \|_{\mathcal{G}_{\rho, s} } +c T \| \partial_{x}\left( ( \rho_{n}\ast \psi_{T} u^{n})^{p+1} (\rho_{n}\ast \psi_{T} v^{n})^{p}\right) \|_{\rho, s,b'}
\\ &\leq c T^{\frac{1}{2}} \| u_{0} \|_{\mathcal{G}_{\rho, s} } +c T \left( \| \psi_{T} u^{n}\|_{s,b}^{p+1} \| \psi_{T} v^{n}\|_{s,b}^{p} +
\rho^{\frac{1}{2}} \| \psi_{T} u^{n}\|_{ \rho,s,b}^{p+1} \| \psi_{T} v^{n}\|_{\rho,s,b}^{p} \right).
\end{align*}
For $ 0 < \rho < \rho_{0} $ and $ b' = b-1+ \epsilon ' $
with $ \epsilon '> 0 $, we use Lemma \ref{lem23} to obtain
$$ \| \psi_{T}(t) u^{n} \|_{ s,b} \leq c T^{\frac{1}{2}} (1+\alpha_{T}(u^{n}, v^{n} ) )^{2p+1}
\leq 2 c T^{\frac{1}{2}} (1+\alpha_{T}(u,v ) )^{2p+1}, $$
and
$$ \| \psi_{T}(t) v^{n} \|_{ s,b} \leq c T^{\frac{1}{2}} (1+\alpha_{T}(u^{n}, v^{n} ) )^{2p+1}
\leq 2 c T^{\frac{1}{2}} (1+\alpha_{T}(u,v ) )^{2p+1}. $$
Then
\begin{align*}
\| \psi_{T}(t) u^{n} \|_{\rho(t), s,b} & \leq c T^{\frac{1}{2}} \| u_{0} \|_{\mathcal{G}_{\rho (t), s} }+ c T ^{\frac{2p+3}{2}} (1+\alpha_{T}(u,v ) )^{(2p+1)^{2}} +
~ c ~T^{\frac{1}{2}} (\rho(t))^{\frac{1}{2}} \| \psi_{T} u^{n}\|_{ \rho (t),s,b}^{p+1} \| \psi_{T} v^{n}\|_{\rho(t),s,b}^{p}
\\& \leq c T^{\frac{1}{2}} \| ( u_{0}, v_{0} ) \|_{\mathcal{N}^{\rho(t), s} }+ c T ^{\frac{2p+3}{2}} (1+\alpha_{T}(u,v ) )^{(2p+1)^{2}} + c T^{\frac{1}{2}} \rho (t)^{\frac{1}{2}} \| (\Psi_{T} u^{n}, \Psi_{T} v^{n})\|_{\mathcal{B}_{ \rho(t),s,b}}^{2p+1}.
\end{align*}
The above estimate holds for $ T \geq 1 $.\\
{\bf In the case $ T = 1 $}, by using Lemma \ref{lem23} and Lemma \ref{lem4}, we have
\begin{equation*}
\| (\psi_{T}(t) u(.,t), \psi_{T}(t) v(.,t)) \|_{\mathcal{B}_{\rho(1),s,b}}\leq ~c~ T^{\frac{1}{2}} \left( 1+\kappa_{T} (u,v) \right)^{{2p+1}},
\end{equation*}
where
\begin{equation*}
\kappa_{T} (u,v)= \sup_{\substack{ t \in [0, 2]
}} \left( \| (u, v ) \|_{\mathcal{N}^{\rho, s+1}} \right),
\end{equation*}
\begin{align*}
\| \psi_{1}(t) u^{n} \|_{\rho (1), s,b} & \leq c \left( 1+ \sup_{\substack{ t \in [0, 2]
}} \left( \| (u^{n}, v^{n} ) \|_{\mathcal{N}^{\rho(1), s+1}} \right) \right)^{2p+1} \\&
\leq 2c \left( 1+ \sup_{\substack{ t \in [0, 2]
}} \left( \| (u, v ) \|_{\mathcal{N}^{\rho(1), s+1}} \right) \right)^{2p+1} \\&
\leq 2cc_1 \left( 1+ \| ( u_{0}, v_{0} )\|^{2p+1}_{\mathcal{N}^{\rho(1), s+1} } + \sup_{\substack{t \in [0, 2]}}
\left( \| (u, v ) \|_{\mathcal{N}^{ s+1}} \right)^{((2p+2)(2p+1))/2} \right).
\end{align*}
We assume that
$$ M^{\ast} = 2cc_1 \left( 1+ \| ( u_{0}, v_{0} )\|^{2p+1}_{\mathcal{N}^{\rho(1), s+1} } + \sup_{\substack{t \in [0, 2]}}
\left( \| (u, v) \|_{\mathcal{N}^{ s+1}} \right)^{((2p+2)(2p+1))/2} \right). $$
Then
\begin{eqnarray}
\| \psi_{T}(t) u^{n} \|_{\rho(t), s,b}
&\leq& M^{\ast} + c T^{\frac{1}{2}} \| ( u_{0}, v_{0} ) \|_{\mathcal{N}^{\rho_{0}, s} }+ c T ^{\frac{p+3}{2}} (1+\alpha_{T}(u,v ) )^{(2p+1)^{2}} \nonumber\\
&+& c T^{\frac{1}{2}} \rho(t)^{\frac{1}{2}} \| ( \Psi_{T} u^{n}, \Psi_{T} v^{n})\|_{\mathcal{B}_{ \rho(t),s,b}}^{2p+1},\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\| ( \psi_{T}(t) u^{n},\psi_{T}(t) v^{n} ) \|_{\mathcal{B}_{\rho(t), s,b}}
&\leq& M^{\ast} + c T^{\frac{1}{2}} \| ( u_{0}, v_{0} ) \|_{\mathcal{N}^{\rho_{0}, s} }+ c T ^{\frac{p+3}{2}} (1+\alpha_{T}(u,v ) )^{(2p+1)^{2}} \nonumber\\
&+& c T^{\frac{1}{2}} \rho(t)^{\frac{1}{2}} \| (\Psi_{T} u^{n},\Psi_{T} v^{n})\|_{\mathcal{B}_{ \rho(t),s,b}}^{2p+1}.\nonumber
\end{eqnarray}
{For $ T \geq 1 $}, $ \rho(t) \leq \rho_{1} \leq \rho_{0} $, and for large enough $n$, we define the new variables
\begin{align*}
& y=y (T)= \| (\psi_{T}(t) u^{n}, \psi_{T}(t) v^{n}) \|_{\mathcal{B}_{\rho(t), s,b}}, \\&
x = x(T)= M^{\ast} + c T^{\frac{1}{2}} \| ( u_{0}, v_{0} ) \|_{\mathcal{N}^{\rho_{0}, s} }+ c T ^{\frac{p+3}{2}} (1+\alpha_{T}(u,v ) )^{(2p+1)^{2}}, \\ &
d= d(T)= c T^{1/2}.
\end{align*}
Then
$$ y \leq x+ d \rho (T)^{\frac{1}{2}} y^{2p+1}. $$
We define
$$\rho (T) = \frac{a^{2}}{d^{2} x^{4p} 2^{4p}}. $$
Then
$$ y \leq x+ d \rho (T)^{\frac{1}{2}} y^{2p+1} \leq x+ d (\frac{a^{2}}{d^{2} x^{4p} 2^{4p}})^{\frac{1}{2}} y^{2p+1} \leq x +(\frac{a}{ (2 x)^{2p} }) y^{2p+1} $$
$$ \Longrightarrow y \leq x +a \left( \frac{y}{ 2 x}\right) ^{2p} y \Longrightarrow \frac{y}{2x} \leq \frac{1}{2} +a (\frac{y}{ 2 x})^{2p+1}. $$
We define $ h(t) = \frac{y(t)}{ 2 x(t)}$. Then
$$ h (1- a h^{2p} ) \leq \frac{1}{2}. $$
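To see why the admissible values of $h$ split into two regions, note for instance that if $a < \frac{1}{2}$ then $h = 1$ is excluded:
\begin{equation*}
g(h) := h \left( 1 - a h^{2p} \right), \qquad g(1) = 1 - a > \frac{1}{2},
\end{equation*}
and since $g$ is continuous, a whole neighborhood of $h = 1$ violates the constraint $g(h) \leq \frac{1}{2}$; this is what produces the thresholds $m' < 1 < M'$.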
Choosing $ a $ small enough (for any fixed $ p $), there exist $ M' $ and $ m' $
such that
$$ \frac{1}{2} < m' < 1 < M', $$
and
$$ h \leq m' \quad \text{or} \quad h \geq M'. $$
As $ \| (\psi_{T}(t) u^{n}, \psi_{T}(t) v^{n}) \|_{\mathcal{B}_{\rho(t), s,b}} $ is a continuous function of $ T \geq 1$, we have
$$ h(t) \leq m' < 1 \Longrightarrow y(t) \leq 2x(t), $$
which means that
$$ \| (\psi_{T}(t) u^{n}, \psi_{T}(t) v^{n}) \|_{\mathcal{B}_{\rho(t), s,b}} \leq 2 x. $$
Then
$$ \lbrace \Psi_{T} u^{n} \rbrace \quad \text{and} \quad \lbrace \Psi_{T} v^{n} \rbrace \quad \textit{are bounded in } X_{\rho (t), s,b}. $$
On the other hand, we have
\begin{equation}
\left\{\begin{array}{l}
\rho (t) < \rho_{1}\\
\rho (t)= \frac{a^{2}}{d^{2} x^{4p} 2^{4p}}.
\end{array}\right.
\end{equation}
Since
\begin{align*}
x^{4p} = (x(T))^{4p}&=\left( M^{\ast} + c T^{\frac{1}{2}} \| ( u_{0}, v_{0} ) \|_{\mathcal{N}^{\rho_{0}, s} }+ c T ^{\frac{p+3}{2}} (1+\alpha_{T}(u,v )) \right)^{4p}\\& \geq
\left( c T^{\frac{1}{2}} \| ( u_{0}, v_{0} ) \|_{\mathcal{N}^{\rho_{0}, s} }+ c T ^{\frac{p+3}{2}} (1+\alpha_{T}(u,v )) \right)^{4p}
\\& \geq
T^{\frac{4p}{2}} \left( c \| ( u_{0}, v_{0} ) \|_{\mathcal{N}^{\rho_{0}, s} }+ c T ^{\frac{p+2}{2}} (1+\alpha_{T}(u,v )) \right)^{4p}\\& \geq
T ^{2p} \left( c T ^{\frac{p+2}{2}} (1+\alpha_{T}(u,v )) \right)^{4p} \\ &
= T ^{2p^{2} +6p } (1+\alpha_{T}(u,v ))^{4p}.
\end{align*}
Then
\begin{align}
x^{-4p} \leqslant T ^{-2p^{2} -6p } (1+\alpha_{T}(u,v))^{-4p},
\end{align}
and
\begin{align*}
\rho (t) & = \frac{a^{2}}{d^{2} x^{4p} 2^{4p}}= \dfrac{a^{2}}{ c^{2} T x^{4p} 2^{4p} }= \dfrac{a^{2}T^{-1} }{ c^{2} x^{4p} 2^{4p} } \leqslant \dfrac{a^{2}T^{-1} T ^{-2p^{2} -6p } }{ c^{2} (1+\alpha_{T}(u,v ))^{4p} 2^{4p} } \\ \\ \rho (t) &
\leqslant \dfrac{a^{2} }{ c^{2} (1+\alpha_{T}(u,v ))^{4p} 2^{4p} } T ^{-2p^{2} -6p-1} = K T ^{-2p^{2} -6p-1},
\end{align*}
where
$$ K = \dfrac{a^{2} }{ c^{2} (1+\alpha_{T}(u,v ))^{4p} 2^{4p} }, $$
and
$$ \rho (t) = \min \left\lbrace \rho_{1}, K T ^{-2p^{2} -6p-1} \right\rbrace. $$
\end{proof}
We are now in a position to prove Theorem \ref{th03}.
\begin{proof}[Proof of Theorem \ref{th03}]
We have $ ( u_{0}, v_{0} ) \in \mathcal{N}^{\rho_{0}, s+1} $,
then by Theorem \ref{the1.2}, we obtain
$$ ( u, v ) \in C([0, T^{*}], \mathcal{G}_{\rho_{0}, s+1} ) \times C([0, T^{*}],\mathcal{G}_{\rho_{0}, s+1} ). $$
We prove that
$$ (u,v )\in C \left( [0, T], \mathcal{G}_{\frac{\rho (t)}{2}, s+1} \right) \times C \left( [0, T],\mathcal{G}_{\frac{\rho (t)}{2}, s+1} \right). $$
If $ T^{*} = \infty,$ we are done.\\
If $ T^{*} < \infty $, it remains to prove that
$$ (u,v )\in C \left( [0, T], \mathcal{G}_{\frac{\rho (t)}{2}, s+1} \right) \times C \left( [0, T], \mathcal{G}_{\frac{\rho (t)}{2}, s+1} \right), \quad \forall ~~ T \geqslant T^{*}. $$
From Proposition \ref{pro02}, the sequence $ \lbrace ( u^{n}, v^{n} ) \rbrace $ of solutions of $(\ref{eq10}) $ with initial data $ (u_{0}, v_{0} ) $ is bounded in $ \mathcal{G}_{\rho (t), s} $ uniformly on $ [0, T ] $.\\
By using Lemma \ref{lem2}, since $ (u^{n}, v^{n} )$ satisfies $(\ref{eq10} )$, we obtain that
\begin{align*}
( \partial_{t} u^{n},\partial_{t} v^{n} ),
\quad
( \partial_{x} u^{n}, \partial_{x} v^{n} ),
\quad
( \partial_{x}^{3} u^{n}, \partial_{x}^{3} v^{n} )
\quad \textit{are uniformly bounded on the strip} \quad G_{\frac{\rho(t)}{2}, s}.
\end{align*}
Then
\begin{align*}
( \partial_{t} u^{n},\partial_{t} v^{n} ),
\quad
( \partial_{x} u^{n}, \partial_{x} v^{n} ),
\quad
( \partial_{x}^{3} u^{n}, \partial_{x}^{3} v^{n} )
\quad \textit{are equicontinuous families on the strip} \quad G_{\frac{\rho(t)}{2}, s}.
\end{align*}
Then, we can extract a subsequence (still denoted $ \lbrace ( u^{n}, v^{n} ) \rbrace $) converging uniformly on compact subsets of $ (0, T ) \times G_{\frac{\rho(t)}{2}, s } $ to a smooth function $ ( \tilde{u},\tilde{v} )$, and
\begin{equation*}
( \partial_{t} u^{n},\partial_{t} v^{n} ),
\quad
( \partial_{x} u^{n}, \partial_{x} v^{n} ),
\quad
( \partial_{x}^{3} u^{n}, \partial_{x}^{3} v^{n} ) \quad
\textit{converge uniformly on compact subsets of} \quad (0, T ) \times G_{\frac{\rho(t)}{2}, s }.
\end{equation*}
Passing to the limit in $ (\ref{eq10} ) $, we obtain that $ (\tilde{u}, \tilde{v}) $ is a smooth extension of $ (u,v)$.\\
Since each $ ( u^{n}, v^{n} ) $ is analytic in $ G_{\frac{\rho(t)}{2}, s } $ and converges to $ (\tilde{u}, \tilde{v})$, the limit $(\tilde{u}, \tilde{v}) $ is analytic in $ G_{\frac{\rho(t)}{2}, s }$. On the other hand, since $ \lbrace ( u^{n}, v^{n} ) \rbrace $ is bounded in $ G_{\frac{\rho(t)}{2}, s } $ uniformly on $ [0, T ],$ we have
$$ \tilde{u} \equiv u \in L^{\infty} ( (0, T),\mathcal{G}_{\frac{\rho(t)}{2}} ) \quad \text{and} \quad \tilde{v} \equiv v \in L^{\infty} ( (0, T),\mathcal{G}_{\frac{\rho(t)}{2}} ), $$
then
$$ u \in C( (0, T),\mathcal{G}_{\frac{\rho(t)}{2}} ) \quad \text{and} \quad v \in C ( (0, T),\mathcal{G}_{\frac{\rho(t)}{2}} ). $$
\end{proof}
\subsection{MIT Push Dataset}
The experiments are conducted as follows. For a given object, material, COM, and pushing-side setting, a total of around 20 straight push trajectories are used to test online learning. The 20 trajectories consist of different pushing points and pushing angles. For offline training, each setting has around 50 straight push trajectories. Each offline and online trajectory contains around 500 data points, collected at 250 Hz. The cases we consider include three objects (rect1, rect2, rect3); four materials (abs, delrin, plywood, pu); two COM positions (center and UR $ = (0.01m, 0.01m)$); and two pushing sides (front, left). See \cite{yu2016more} for more details about the dataset.
\hspace{1mm}
\subsubsection{Exp M1: different materials }
We consider the rect2 object and do offline and online training with different materials. The goal is to verify whether our model is able to adjust to different friction coefficients online.
Note that in the first case the online setting also appears in the offline data. However, the online-learned model still achieves much better performance than the fixed model.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp1_MIT.png}
\caption{
Exp M1. (delrin+abs)/abs, delrin/abs, plywood/pu
}
\end{figure}
\subsubsection{Exp M2: manual online COM offsets }
We consider the rect2 object and add a manual COM offset of $(0.01m, 0.01m)$ to the object position online.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp2_MIT.png}
\caption{ Exp M2. abs center/abs UR, delrin center/delrin UR}
\end{figure}
\subsubsection{Exp M3: different objects}
We consider different objects for online and offline training. All cases are on the abs material.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp3_MIT.png}
\caption{Exp M3. rect1/rect3, rect2/rect3}
\end{figure}
\subsubsection{Exp M4: different pushing sides}
We consider different pushing sides for online and offline training. The object is rect3 and the materials are plywood and abs, respectively.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp4_MIT.png}
\caption{Exp M4. plywood front/plywood left, abs front/abs left}
\end{figure}
\subsubsection{Exp M5: different objects, materials, pushing sides, COM offsets}
We consider different objects, materials, pushing sides, and COM offsets for offline and online training.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp5_MIT.png}
\caption{Exp M5. rect2, delrin, front, center/rect3, abs, left, UR
}
\end{figure}
\begin{table}
\centering
\scriptsize
\setlength{\tabcolsep}{0.3em}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|p{8mm}||p{7mm}|p{7mm}|p{7mm}|p{7mm}||p{7mm}|p{7mm}|p{7mm}|p{7mm}|}
\hline
\multirow{2}{*}{Exps} & \multicolumn{4}{c||}{Positional Losses} & %
\multicolumn{4}{c|}{Rotational Losses}\\
\cline{2-9}
& Offline NN & Offline & Fixed & Online & Offline NN & Offline & Fixed & Online
\\
\hline
\multirow{3}{*}{Exp M1}
& 0.392 & 0.398 & 0.418 & 0.315 & 0.148 & 0.162 & 0.295 & 0.106
\\
\cline{2-9}
& 0.390 & 0.406 & 0.419 & 0.347 & 0.150 & 0.153 & 0.313 & 0.160
\\
\cline{2-9}
& 0.382& 0.376 & 0.642 & 0.554 & 0.269 & 0.247 & 0.981 & 0.492
\\
\hline
\multirow{2}{*}{Exp M2}
& 0.303 & 0.321 & 0.677 & 0.320 & 0.155 & 0.202 & 0.564 & 0.145
\\
\cline{2-9}
& 0.395 & 0.410 & 0.707 & 0.398 & 0.151 & 0.172 & 0.453 & 0.212
\\
\hline
\multirow{2}{*}{Exp M3}
& 0.388 & 0.388 & 1.718 & 0.562 & 0.255 & 0.209 & 0.394 & 0.176
\\
\cline{2-9}
& 0.307 & 0.322 & 0.827 & 0.380 & 0.161 & 0.144 & 0.196 & 0.088
\\
\hline
\multirow{2}{*}{Exp M4}
& 0.383 & 0.381 & 0.607 & 0.364 & 0.148 & 0.148 & 0.480 & 0.176
\\
\cline{2-9}
& 0.425 & 0.419 & 0.828 & 0.477 & 0.167 & 0.164 & 0.670 & 0.212
\\
\hline
\multirow{2}{*}{Exp M5}
& 0.391 & 0.400 & 0.989 & 0.506 & 0.149 & 0.156 & 0.987 & 0.353
\\
\cline{2-9}
& 0.378 & 0.381 & 0.631 & 0.476 & 0.265 & 0.246 & 0.778 & 0.462
\\
\hline
\end{tabular}
\caption{Experiments of the MIT push dataset.}
\label{table:MIT}
\end{table}
\subsection{TurtleBot3 Experiments}
See Figure \ref{fig:turtlebot} for our TurtleBot3 setting.
The experiments are conducted as follows. For a given object, material, COM and pushing side setting, a total of nine straight push trajectories are used to test online learning. These trajectories are a combination of different pushing points and pushing angles. For offline training, each setting has 27 straight push trajectories. Each offline and online trajectory contains around 100 data points, with a data collection frequency of 4 Hz. In order to make a reasonable prediction, we set the prediction horizon to 2 seconds, predicting the object's position and rotation 2 seconds ahead of the current time frame.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Turtlebot_setup.png}
\caption{From left to right: TurtleBot3, box1, box2, and an opened box with a stationary object.
We track the robot and object positions and orientations by their ArUco markers using a ceiling camera.
We consider two different surface materials, paper (original) and plastic for the bottom of the boxes, and three COM positions, center, upper right corner (UR), lower right corner (LR), done by moving a stationary object in the box.
}
\label{fig:turtlebot}
\end{figure}
\subsubsection{Exp R1: different materials}
We consider box1 and do offline and online training with different materials.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp1_Real.png}
\caption{Exp R1. paper/plastic, plastic/paper
}
\end{figure}
\subsubsection{Exp R2: different COMs}
We train with different COM settings online, with either paper or plastic. Note that in two of the Exp R1 and R2 settings the online loss for the fixed model also decreases. This is actually not due to learning, but to the rotational loss caused by the large initial rotational movement in these two cases.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp2_Real.png}
\caption{Exp R2. paper center/paper LR, paper center/paper UR
}
\end{figure}
\subsubsection{Exp R3: different objects}
We consider different objects for online and offline training on either paper or plastic.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{exp3_Real.png}
\caption{Exp R3. paper box1/box2, plastic box1/box2
}
\end{figure}
\subsubsection{Exp R4: different pushing sides}
We consider different pushing sides for online and offline training with box1.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{exp4_Real.png}
\caption{Exp R4. paper front/paper right, plastic front/plastic right}
\end{figure}
\subsubsection{Exp R5: all different}
See the online prediction video.
\begin{table}
\centering
\scriptsize
\setlength{\tabcolsep}{0.3em}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|p{8mm}||p{7mm}|p{7mm}|p{7mm}|p{7mm}||p{7mm}|p{7mm}|p{7mm}|p{7mm}|}
\hline
\multirow{2}{*}{Exps} & \multicolumn{4}{c||}{Positional Losses} & %
\multicolumn{4}{c|}{Rotational Losses}\\
\cline{2-9}
& Offline NN & Offline & Fixed & Online & Offline NN & Offline & Fixed & Online
\\
\hline
\multirow{3}{*}{Exp R1}
& 0.116 & 0.117 & 0.441 & 0.287 & 0.319 & 0.307 & 0.264 & 0.292
\\
\cline{2-9}
& 0.105 & 0.112 & 0.176 & 0.118 & 0.183 & 0.219 & 0.279 & 0.236
\\
\cline{2-9}
& 0.382& 0.376 & 0.642 & 0.554 & 0.269 & 0.247 & 0.981 & 0.492
\\
\hline
\multirow{3}{*}{Exp R2}
& 0.160 & 0.120 & 0.186 & 0.123 & 0.320 & 0.316 & 0.331 & 0.321
\\
\cline{2-9}
& 0.138 & 0.126 & 0.145 & 0.121 & 0.282 & 0.307 & 0.523 & 0.435
\\
\cline{2-9}
& 0.102 & 0.109 & 0.190 & 0.148 & 0.207 & 0.194 & 0.268 & 0.201
\\
\hline
\multirow{2}{*}{Exp R3}
& 0.118 & 0.138 & 0.449 & 0.316 & 0.281 & 0.325 & 0.453 & 0.312
\\
\cline{2-9}
& 0.098 & 0.099 & 0.419 & 0.213 & 0.178 & 0.212 & 0.701 & 0.455
\\
\hline
\multirow{2}{*}{Exp R4}
& 0.123 & 0.128 & 0.483 & 0.269 & 0.291 & 0.327 & 0.603 & 0.498
\\
\cline{2-9}
& 0.105 & 0.104 & 0.490 & 0.232 & 0.182 & 0.218 & 0.924 & 0.388
\\
\hline
\end{tabular}
\caption{Experiments by TurtleBot3.}
\label{table:turtlebot}
\end{table}
\subsection{Related Work}
Data-driven approaches are popular in pushing prediction, and multiple learning methods have been proposed to train different prediction models.
Gaussian approximation is a basic learning tool which has been used to predict final object poses from initial pushing orientations \cite{mericcli2015push}.
Regular regression and density estimation methods have also been used to train push prediction models \cite{kopicki2017learning} which outperform analytical models.
To further improve predictions on planar pushing, other types of models have been considered in the literature, including physics-based force-motion models \cite{zhou2016convex}, local Markov decision process (MDP) models \cite{wang2017focused}, and heteroscedastic Gaussian process models \cite{bauza2017probabilistic}.
More recently, advances in deep learning have drawn attention to using neural network models for physical interactions.
SE3-Nets \cite{byravan2017se3} use deep neural networks to
directly predict rigid body motions from point cloud data.
Image prediction based deep learning models \cite{agrawal2016learning, finn2016unsupervised, xie2019improvisation} also show success in several manipulation tasks including planar pushing.
These deep learning models generally improve the prediction accuracy on specific distributions of the given dataset, but they often have generalizability issues for unseen data distributions.
One approach to improve the transferability of neural networks is to use hybrid architectures that combine analytical and data-driven models \cite{kloss2017combining, ajay2018augmenting}. This approach benefits from the expressiveness of data-driven models and the generalizability of analytical methods.
Our combined push prediction model is inspired by the generalization abilities of these hybrid models. We further push beyond generalization to online adaptation and design models that take advantage of both offline and online training.
There are other deep learning based online adaptation methods using either recurrent neural networks \cite{li2018push} or meta learning \cite{nagabandi2018deep}. Although these methods seem appealing, they require complex training procedures that can often only be carried out in simulation.
In comparison, our combined model and online learning method provide a simple though effective online adaptation scheme.
\section{Introduction}
\input{introduction}
\section{Preliminaries}
\input{preliminaries}
\section{Learning Method and Prediction Model}
\input{method}
\section{Experiments}
\input{experiments}
\section{Conclusion}
\input{discussion}
\newpage
\bibliographystyle{IEEEtran}
\subsection{Online Learning for Planar Push Prediction}
When a pushing dataset is available, data-driven approaches can learn prediction functions offline.
Offline trained functions may perform well for situations similar to the collected dataset, but they usually have generalizability issues with unseen cases.
In planar pushing, many important factors can vary in different scenarios. For example, different objects have different weights, friction coefficients, and different COM positions.
Furthermore, these pushing-related factors are often not available to the robot before it actually pushes the object.
In most cases, the only way to infer these properties is to observe the online pushing results. Therefore, online learning is essential to perform accurate push predictions.
Taking advantage of both online and offline training, we consider an online learning setting with offline pre-training.
In particular, we split the prediction function parameter into $\theta = (\theta_{\text{offline}}, \theta_{\text{online}})$.
The offline component $\theta_{\text{offline}}$ is trained offline with a pre-collected dataset while
the online component $\theta_{\text{online}}$ will be learned online to adapt to new pushing trajectories.
The idea is that the offline component $\theta_{\text{offline}}$ can be high-dimensional to improve the expressiveness of the prediction model. On the other hand, the online component $\theta_{\text{online}}$ can be designed to be low-dimensional for fast online adaptation.
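As an illustration (not the paper's exact architecture), the split can be sketched with a linear map standing in for the high-dimensional offline network and a 2-vector (e.g. a COM offset correction) as the online parameter; all names and shapes below are assumptions made for the sketch.

```python
import numpy as np

class SplitPushModel:
    """Prediction model f_(theta_offline, theta_online).

    theta_offline: high-dimensional, trained once on the offline dataset
                   (here a linear map, standing in for a neural network).
    theta_online:  low-dimensional, adapted during pushing
                   (here a 2-vector, e.g. a COM offset correction)."""

    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.theta_offline = rng.normal(scale=0.1, size=(out_dim, in_dim))
        self.theta_online = np.zeros(2)  # plays the role of theta_online(0)

    def predict(self, x):
        # the online part corrects the robot's relative position (assumed
        # to be the first two input coordinates) before the offline map runs
        x_corr = np.array(x, dtype=float)
        x_corr[:2] = x_corr[:2] + self.theta_online
        return self.theta_offline @ x_corr
```

Only `theta_online` would be touched during the online phase, which keeps each adaptation step cheap.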
Suppose we have a dataset $D$ consisting of data points of the form $(x,y)$ where $x = (\mathbf p^o_r , \mathbf u^o)$ and $y = ( \Delta {\mathbf p}_o, \Delta \omega_o)$.
Then the goal of the offline pre-training phase is to find optimal offline parameter
\begin{align}
\theta_{\text{offline}}^* = \argmin_{\theta_{\text{offline}}}
\frac{1}{|D|} \sum_{(x,y) \in D} \ell ( f_{(\theta_{\text{offline}}, \theta_{\text{online}}(0))}(x), y).
\label{eq:offline_training}
\end{align}
Note that we fix the online parameter to an initial value $\theta_{\text{online}}(0)$ in offline training.
An optional scheme is to also train the online parameter $\theta_{\text{online}}$ offline, but this may require additional hyperparameter tuning.
When it comes to the online situation, we have a pre-trained prediction function
$f_{(\theta_{\text{offline}}^*, \theta_{\text{online}}(0))}$ at time $0$.
The main idea of online learning is to adjust the online parameter $\theta_{\text{online}}(t)$ adapting to the pushing outcomes at each time $t$.
Let $x(t) = (\mathbf p^o_r(t) , \mathbf u^o(t))$ and $y(t) = ( \Delta {\mathbf p}_o(t), \Delta \omega_o(t))$ be the pushing outcome to be predicted.
Then the push prediction at this time is $f_{(\theta_{\text{offline}}^*, \theta_{\text{online}}(t))} ( x(t) )$ with a prediction loss
$\ell ( f_{(\theta_{\text{offline}}^*, \theta_{\text{online}}(t))} ( x(t) ), y(t) )$.
After the pushing outcome $y(t)$ is observed at time $t+1$, we can update the online parameter by
\begin{align}
\theta_{\text{online}} (t+1) = \argmin_{\theta_{\text{online}}}
\ell ( f_{(\theta_{\text{offline}}^*, \theta_{\text{online}})}(x(t)), y(t)).
\label{eq:online_training}
\end{align}
The overall online learning algorithm is described below.
\begin{algorithm}[H]
\caption{Online Learning for Push Prediction}
\label{alg:overall}
\begin{algorithmic}
\STATE \textbf{Inputs}:
\STATE \hspace{\algorithmicindent} Dataset $D$ of pushing trajectories
\STATE \hspace{\algorithmicindent} Prediction model $f_{(\theta_{\text{offline}}, \theta_{\text{online}})}$
\STATE \hspace{\algorithmicindent} Initial online parameter $\theta_{\text{online}}(0)$
\STATE \hspace{\algorithmicindent} Loss function $\ell$
\STATE Train offline parameter $\theta_{\text{offline}}^*$ by \eqref{eq:offline_training}
\FOR{$t = 0, 1, 2, \dots$}
\STATE Observe $x(t)$ and $y(t)$ online
\STATE Update online parameter $\theta_{\text{online}}(t+1)$ by \eqref{eq:online_training}
\ENDFOR{}
\end{algorithmic}
\end{algorithm}
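A single iteration of the loop above can be sketched as follows for a generic predictor with a frozen offline part; the central-difference gradient step is an illustrative stand-in for solving \eqref{eq:online_training} (a practical implementation would use autodiff or an analytic update), and the function names are ours.

```python
import numpy as np

def online_update(theta_online, predict, x, y, lr=0.05, eps=1e-4):
    """One online-learning step: theta_offline stays frozen inside
    `predict`, and only the low-dimensional theta_online is moved by a
    central-difference gradient step on the squared prediction loss."""
    def loss(th):
        err = predict(x, th) - y
        return float(err @ err)

    grad = np.zeros_like(theta_online, dtype=float)
    for i in range(theta_online.size):
        d = np.zeros_like(grad)
        d[i] = eps
        grad[i] = (loss(theta_online + d) - loss(theta_online - d)) / (2 * eps)
    return theta_online - lr * grad
```

Running this repeatedly on the stream of observed pushes implements the for-loop of Algorithm \ref{alg:overall}; because `theta_online` is low-dimensional, each step is cheap enough to run at the data-collection rate.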
To apply this online learning framework, we need to have a prediction model $f_{(\theta_{\text{offline}}, \theta_{\text{online}})}$ with a high-dimensional offline parameter $\theta_{\text{offline}}$ and a low-dimensional online parameter $\theta_{\text{online}}$.
To do this, we will make use of analytical models as low-dimensional online components, and a data-driven model for the high-dimensional offline component.
These components will be constructed in the next subsections.
\subsection{Center of Mass Corrections}
One challenge in push prediction is to know the center of mass (COM) of the object. The object center $\mathbf p_o$ can be used as an approximation of the COM, but in many situations it has an offset from the true COM.
Therefore, we propose an analytical procedure to correct COM as shown in Figure \ref{fig:com_correction}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.8, every node/.style={scale=0.8}]
\def\bx{0}
\def\by{0}
\def\bw{5}
\def\bh{2.5}
\pgfmathsetmacro\comx{\bx-1.0}
\pgfmathsetmacro\comy{\by-0.5}
\def\rx{1}
\def\ry{-2.2}
\pgfmathsetmacro\radius{-\ry-\bh/2}
\node[draw, rectangle,minimum width=\bw cm, minimum height=\bh cm, line width=0.5pt ]
at (\bx, \by) (box) {};
\node[circle, fill, scale=.5, label=right:{$\mathbf p_o$}] at (\bx, \by) (po) {};
\node[circle, fill, scale=.5, label={[red]270:{COM}}, red] at (\comx, \comy) (com) {};
\draw[arrow] (com) --node[above] {$\mathbf v$} (po);
\node[draw, circle, minimum height=2*\radius cm, line width=0.5pt]
at (\rx,\ry) (robot) {};
\node[circle, fill, scale=.5] at (\rx, \ry) (pr) {};
\draw[arrow] (po) --node[right, pos=0.75] {$\mathbf p_r^o$} (pr);
\draw[arrow, red] (com) --node[left, pos=0.65] {$\mathbf p_r^{o, \text{corrected}}$} (pr);
\def\dx{0.1}
\def\dy{0.6}
\node[circle, fill, scale=.5, red] at (\comx + \dx, \comy + \dy) (dcom) {};
\draw[arrow, dashed, red] (com) --node[left]{$\Delta$COM } (dcom);
\node[circle, fill, scale=.5] at (\comx + \dx + 0.89, \comy + \dy + 0.86) (dp) {};
\draw[arrow, dashed, red] (dcom) --node[left]{$\mathbf R_{\Delta \omega_o} \mathbf v$} (dp);
\draw[arrow, dashed] (po) --node[right]{$\Delta \mathbf p_o$} (dp);
\node[draw, rectangle, minimum width=\bw cm, minimum height=\bh cm, line width=0.5pt,
dashed, rotate around={15:(dp)}]
at (dp) (boxrotated) {};
\node at (\bx+\bw/2-\dx, \by+\bh/2) (w) {};
\draw[arrow, dashed] (w.120) arc (0:50:1cm) node[left, midway] {$\Delta \omega_o$};
\end{tikzpicture}
\caption{Center of Mass Corrections}
\label{fig:com_correction}
\end{center}
\end{figure}
Consider $\omega_o = 0$ without loss of generality. Let $\mathbf v = \mathbf p_o - \text{COM} \in \mathbb{R}^2$ be the offset vector from the true COM to the object center in the object's frame. Then the robot's relative position to the object should be corrected to its relative position to the COM by
\begin{align}
\mathbf p_r^{o, \text{corrected}} =
\mathbf p_r - \text{COM} = \mathbf p_r^o + \mathbf v
\label{eq:position_correction}
\end{align}
For the push prediction, note that the offset between the object center and COM after pushing is the rotated vector $\mathbf R_{\Delta \omega_o} \mathbf v$.
Suppose the COM motion is $\Delta$COM after the push; then the motion of the object center is given by
\begin{align}
\Delta \mathbf p_o = & \mathbf p_o(t+1) - \mathbf p_o(t)
\notag \\
= & \text{COM}(t+1) + \mathbf R_{\Delta \omega_o} \mathbf v - \mathbf p_o(t)
\notag \\
= & \Delta\text{COM} + (\mathbf R_{\Delta \omega_o} - \mathbf I) \mathbf v
\label{eq:motion_correction}
\end{align}
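The two corrections \eqref{eq:position_correction} and \eqref{eq:motion_correction} amount to a shift by the offset vector $\mathbf v$ and a rotation of that offset. A minimal numerical sketch (the helper names are ours, not the paper's; NumPy is assumed):

```python
import numpy as np

def rot(angle):
    """2D rotation matrix R_angle."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def correct_position(p_r_o, v):
    """Position correction: shift the robot's relative position from the
    object center to the (estimated) COM by adding the offset vector v."""
    return np.asarray(p_r_o) + np.asarray(v)

def correct_motion(delta_com, delta_omega, v):
    """Motion correction: the object-center motion equals the COM motion
    plus the displacement of the rotated offset, (R - I) v."""
    v = np.asarray(v)
    return np.asarray(delta_com) + (rot(delta_omega) - np.eye(2)) @ v
```

For example, a quarter-turn of an object whose COM does not move still displaces the object center, which is exactly the $(\mathbf R_{\Delta\omega_o} - \mathbf I)\mathbf v$ term.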
\subsection{Contact Point Prediction for Physical Model}
Since we want to apply the physical model $F_{\text{physical}}$, we need to predict the contact point $\mathbf c$ and the contact motion $\mathbf u_c$.
As discussed earlier, predicting the contact point and motion is one of the key challenges in analyzing pushing behaviors. Therefore, we take advantage of data-driven ideas to train a model for the complex contact interactions.
We achieve this by using a feedforward neural network $\phi_{\theta_{\text{offline}}}$ parameterized by an offline parameter $\theta_{\text{offline}}$.
In particular, this neural network will take the corrected relative robot position and motion as input, and output the predicted contact point and motion
$\phi_{\theta_{\text{offline}}}(\mathbf p_r^{o, \text{corrected}}, \mathbf u_r^o) = (\mathbf c, \mathbf u_c)$.
\subsection{Combined Push Prediction Model}
As shown in Figure \ref{fig:combined_model}, putting together the neural network $\phi_{\theta_{\text{offline}}}$, the physical model $F_{\text{physical}}$, and the COM corrections \eqref{eq:position_correction}-\eqref{eq:motion_correction}, we get the combined push prediction model:
\begin{align}
f_{(\theta_{\text{offline}}, \theta_{\text{online}})}(\mathbf p^o_r, \mathbf u^o_r) = (\Delta \mathbf p_o, \Delta \omega_o)
\end{align}
In the prediction model, we have a high-dimensional offline parameter $\theta_{\text{offline}}$ from the neural network which can improve the expressive power of the model. From the analytical components $F_{\text{physical}}$ and the COM corrections \eqref{eq:position_correction}-\eqref{eq:motion_correction}, we have a low-dimensional online parameter $\theta_{\text{online}} = (\mathbf v, h) \in \mathbb{R}^3$.
Since the online parameter $\theta_{\text{online}}$ has a low dimension, it can be quickly trained to adapt to online data.
One important feature of the model is that all of its components are differentiable. This allows us to perform both offline and online training in an end-to-end manner.
Note that it is commonly observed in deep models that end-to-end training can further exploit the expressive power by letting the network determine its own state representation.
This also implies that each intermediate variable may be trained to behave differently than its analytical role.
\begin{figure}
\begin{center}
\begin{tikzpicture}[x=1cm,y=1cm, scale=0.75, every node/.style={scale=0.75}]
\node at (0, 0) (pr) {$\mathbf p^o_r$};
\node at (2, 0) (ur) {$\mathbf u^o_r$};
\node[draw, rectangle, minimum width=3 cm, minimum height=0.8 cm, line width=0.5pt ]
at (0, - 1) (cominput) {Correction \eqref{eq:position_correction}};
\node[draw, rectangle, minimum width=5 cm, minimum height=0.8 cm, line width=0.5pt ]
at (1 , -2.5) (NN) {Neural Network $\phi_{\theta_{\text{offline}}}$};
\node[draw, rectangle, minimum width=5 cm, minimum height=0.8 cm, line width=0.5pt ]
at (1, -4) (physical) {Physical Model $F_{\text{physical}}$};
\draw[arrow] (pr) -- (cominput);
\draw[arrow] (cominput.south) --node[right]{$\mathbf p_r^{o, \text{corrected}}$} (pr.south|-NN.north);
\draw[arrow] (ur) -- (ur.south|-NN.north);
\draw[arrow] (pr.south|-NN.south) --node[right]{$\mathbf c$} (pr.south|-physical.north);
\draw[arrow] (ur.south|-NN.south) --node[right]{$\mathbf u_c$} (ur.south|-physical.north);
\node at (0, -6.5) (dpo) {$\Delta \mathbf p_o$};
\node at (2, -6.5) (domega) {$\Delta \omega_o$};
\node[draw, rectangle, minimum width=3 cm, minimum height=0.8 cm, line width=0.5pt ]
at (0, - 5.5) (comoutput) {Correction \eqref{eq:motion_correction}};
\draw[arrow] (comoutput) -- (dpo);
\draw[arrow] (physical.south-|comoutput.north) --node[right]{$\Delta$COM} (comoutput.north);
\draw[arrow] (domega.north|-physical.south) -- (domega.north);
\draw[arrow] (domega.north|-physical.south) |- (comoutput.east);
\node[draw, circle, minimum height=1.2 cm, line width=0.5pt ]
at (-3, - 2.5) (thetaoff) {$\theta_{\text{offline}}$};
\node[draw, circle, minimum height=1.2 cm, line width=0.5pt ]
at (-3, - 4) (h) {$h$};
\node[draw, circle, minimum height=1.2 cm, line width=0.5pt ]
at (-3, - 1) (v) {$\mathbf v$};
\draw[arrow, dashed] (thetaoff) -- (NN);
\draw[arrow, dashed] (h) -- (physical);
\draw[arrow, dashed] (v) -- (cominput);
\draw[dashed] (v) -| (-4, - 5.5);
\draw[arrow, dashed] (-4, - 5.5) -- (comoutput);
\end{tikzpicture}
\caption{Combined Push Prediction Model $f_{(\theta_{\text{offline}}, \theta_{\text{online}})}$}
\label{fig:combined_model}
\end{center}
\end{figure}
\subsection{Planar Push Prediction}
We consider the prediction problem of the pushing behavior between a robot and an object on a surface.
At each time $t$, we observe the (center) position $\mathbf p_o(t) = (\mathbf p_{o, x}(t), \mathbf p_{o, y}(t)) \in \mathbb R^2$ and orientation $\omega_o(t) \in [0, 2\pi)$ of the object,
the position $\mathbf p_r(t) = (\mathbf p_{r, x}(t), \mathbf p_{r, y}(t)) \in \mathbb{R}^2$ of the robot, and the robot's motion command $\mathbf u_r(t) = (\mathbf u_{r,x}(t), \mathbf u_{r,y}(t)) \in \mathbb R^2$.
The robot will then move according to $\mathbf u_r(t)$ to $\mathbf p_r(t+1) = \mathbf p_r(t) + \mathbf u_r(t)$ and push the object along its way.
Given these observations, our goal is to predict the object's position $\mathbf p_o(t+1)$ and orientation $\omega_o(t+1)$ at time $t+1$, after the object is pushed by the robot.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.8, every node/.style={scale=0.8}]
\def\bx{0}
\def\by{0}
\def\bw{3.6}
\def\bh{2}
\def\rx{1}
\def\ry{-2}
\pgfmathsetmacro\radius{-\ry-\bh/2}
\def\ux{1}
\def\uy{1}
\def\ucx{0.7}
\def\ucy{1}
\def\dx{0.5}
\def\dy{0.8}
\node[draw, rectangle,minimum width=\bw cm, minimum height=\bh cm, line width=0.5pt ]
at (\bx, \by) (box) {};
\node[circle, fill, scale=.5, label=left:{$\mathbf p_o$}] at (\bx, \by) (po) {};
\node[draw, circle, minimum height=2*\radius cm, line width=0.5pt]
at (\rx,\ry) (robot) {};
\node[circle, fill, scale=.5] at (\rx, \ry) (pr) {};
\draw[arrow] (po) --node[left, pos=0.85] {$\mathbf p_r^o$} (pr);
\node[circle, fill, scale=.5] at (\rx, \ry + \radius) (c) {};
\draw[arrow] (po) --node[right, pos=0.75] {$\mathbf c$} (c);
\node at (\rx + \ux, \ry + \uy) (u) {};
\draw[arrow] (pr) --node[below]{$\mathbf u_r^o$} (u);
\node at (\rx + \ucx, \ry + \radius + \ucy) (v) {};
\draw[arrow] (c) --node[right]{$\mathbf u_c$} (v);
\node at (\bx + \dx, \by + \dy) (dp) {};
\draw[arrow, dashed] (po) --node[left]{$\Delta \mathbf p_o$} (dp);
\node at (\bx+\bw/2, \by+\bh/2) (w) {};
\draw[arrow, dashed] (w.120) arc (0:50:1cm) node[left, midway] {$\Delta \omega_o$};
\node[draw, rectangle, minimum width=\bw cm, minimum height=\bh cm, line width=0.5pt,
dashed, rotate around={15:(dp)}]
at (dp) (boxrotated) {};
\end{tikzpicture}
\caption{Planar Pushing}
\label{fig:pushing}
\end{center}
\end{figure}
Since the goal is to predict the object's movement, we transform the observations and predictions into the object's current frame. Specifically, let $\mathbf p^o_r(t)$ and $\mathbf u_r^o(t)$ denote the relative position and motion of the robot, and $\Delta \mathbf p_o(t)$ and $\Delta \omega_o(t)$ be the relative change in position and orientation of the object. They satisfy the following equations:
\begin{align}
& \mathbf p^o_r(t) = \mathbf{R}_{(- \omega_o(t))} (\mathbf p_r(t) - \mathbf p_o(t))
\\
& \mathbf u_r^o(t) = \mathbf{R}_{(- \omega_o(t))} \mathbf u_r(t)
\\
& \Delta \mathbf p_o(t) = \mathbf{R}_{(- \omega_o(t))} (\mathbf p_o(t + 1) - \mathbf p_o(t))
\\
& \Delta \omega_o(t) = \omega_o(t+1) - \omega_o(t)
\end{align}
where $\mathbf{R}_{\omega}$ denotes the rotation matrix by the angle $\omega$.
Figure \ref{fig:pushing} illustrates the planar pushing interaction between the robot and the object. For notational simplicity, we will drop the time index $t$ in the rest of the paper when it is clear from context.
Then the goal of push prediction is to find a prediction function $f_\theta$, parameterized by $\theta$, that predicts
$f_\theta ( \mathbf p^o_r , \mathbf u_r^o)
= ( \hat{\Delta} \mathbf p_o, \hat{\Delta} \omega_o)$.
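The four frame-change relations above can be collected into a single transformation routine. This is an illustrative sketch (not the authors' code); it simply rotates world-frame quantities into the object's current frame:

```python
import numpy as np

def rot(angle):
    """2D rotation matrix R_angle."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def to_object_frame(p_o, omega_o, p_o_next, omega_o_next, p_r, u_r):
    """Transform world-frame observations into the object's current frame:
    relative robot position/motion and relative object motion/rotation."""
    R = rot(-omega_o)  # rotate by -omega_o into the object frame
    p_r_o = R @ (np.asarray(p_r) - np.asarray(p_o))
    u_r_o = R @ np.asarray(u_r)
    dp_o = R @ (np.asarray(p_o_next) - np.asarray(p_o))
    domega_o = omega_o_next - omega_o
    return p_r_o, u_r_o, dp_o, domega_o
```

Working in the object frame makes the learned model invariant to the object's absolute pose, so trajectories recorded anywhere on the surface become comparable training samples.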
\subsection{Performance Metric}
To evaluate the prediction performance, we use the standard metric of normalized mean square error (NMSE) for both positional and rotational losses. The same metric was also used in data-driven pushing models \cite{bauza2017probabilistic}.
To compute NMSE, we define positional loss functions $\ell_{\text{pos},x}$, $\ell_{\text{pos},y}$, and a rotational loss function $\ell_{\text{rot}}$ by
\begin{align}
& \ell_{\text{pos}, i} (\hat{\Delta} \mathbf p_o, \Delta {\mathbf p}_o ) =
\frac{1}{\sigma_{\Delta \mathbf p_o}^2} (\hat{\Delta} \mathbf p_{o,i} - \Delta \mathbf p_{o,i})^2, \, i = x, y
\\
& \ell_{\text{rot}} ( \hat{\Delta} \omega_o ,\Delta \omega_o)
= \frac{1}{\sigma_{\Delta \omega_o}^2}(\hat{\Delta} \omega_o - \Delta \omega_o)^2
\end{align}
where $\sigma_{\Delta \mathbf p_o}$ and $\sigma_{\Delta \omega_o}$ are the standard deviations for $\Delta \mathbf p_o$ and $\Delta \omega_o$.
Positional and rotational NMSEs are given by
\begin{align}
&\text{NMSE}_{\text{pos}} \! =\! \frac{1}{2}\mathbb E \big[
\ell_{\text{pos}, x} (\hat{\Delta} \mathbf p_o, \Delta {\mathbf p}_o )
\! + \! \ell_{\text{pos}, y} (\hat{\Delta} \mathbf p_o, \Delta {\mathbf p}_o )
\big]
\\
& \text{NMSE}_{\text{rot}} = \mathbb E \big[
\ell_{\text{rot}} ( \hat{\Delta} \omega_o ,\Delta \omega_o) \big]
\end{align}
We also define the overall loss function $\ell$ as
\begin{align}
&\ell((\hat{\Delta} \mathbf p_o, \hat{\Delta} \omega_o),\, (\Delta {\mathbf p}_o, \Delta \omega_o) )
= \ell_{\text{pos}, x} (\hat{\Delta} \mathbf p_o, \Delta {\mathbf p}_o )
\notag\\
& \qquad + \ell_{\text{pos}, y} (\hat{\Delta} \mathbf p_o, \Delta {\mathbf p}_o )
+ \ell_{\text{rot}} ( \hat{\Delta} \omega_o ,\Delta \omega_o)
\end{align}
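The NMSE metrics can be computed directly from their definitions. In this sketch (our own helper, not the paper's code) the normalizing variances $\sigma_{\Delta \mathbf p_o}^2$ and $\sigma_{\Delta \omega_o}^2$ are estimated empirically from the ground-truth motions, which is an assumption on our part:

```python
import numpy as np

def nmse(pred_dp, true_dp, pred_domega, true_domega):
    """Positional and rotational NMSE over a batch; each squared error is
    normalized by the empirical variance of the true motion."""
    pred_dp, true_dp = np.asarray(pred_dp), np.asarray(true_dp)
    var_pos = np.var(true_dp)      # sigma^2 for Delta p_o (both components)
    var_rot = np.var(true_domega)  # sigma^2 for Delta omega_o
    # averaging over both position components matches (1/2) E[l_x + l_y]
    nmse_pos = np.mean((pred_dp - true_dp) ** 2) / var_pos
    nmse_rot = np.mean((np.asarray(pred_domega) - true_domega) ** 2) / var_rot
    return nmse_pos, nmse_rot
```

With this normalization, the trivial predictor that always outputs the mean motion scores an NMSE of exactly 1, so values well below 1 indicate that the model captures real structure.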
\subsection{Physical Model}
Suppose the center of mass (COM) of the object is at the object center such that $\mathbf p_o = \text{COM} = (\text{COM}_x, \text{COM}_y)$. Let $\mathbf c = (\mathbf c_x, \mathbf c_y)$ be the pushing contact point in the object's frame, and
let $\mathbf u_c = (\mathbf u_{c,x}, \mathbf u_{c,y})$ be the motion (in the object's frame) of the contact point being pushed by the robot.
When $\mathbf c$, $\mathbf u_c$ and a friction-related parameter $h$ are available, the physical model of pushing dynamics \cite{lynch1992manipulation} gives
\begin{align}
& \Delta\text{COM}_x =
\frac{ (h^2 + \mathbf c_x^2) \mathbf u_{c,x} + \mathbf c_x \mathbf c_y \mathbf u_{c,y}}
{h^2 + \mathbf c_x^2 + \mathbf c_y^2 }
\label{eq:push_dynamics_x}
\\
& \Delta\text{COM}_y =
\frac{ (h^2 + \mathbf c_y^2) \mathbf u_{c,y} + \mathbf c_x \mathbf c_y \mathbf u_{c,x} }
{h^2 + \mathbf c_x^2 + \mathbf c_y^2 }
\label{eq:push_dynamics_y}
\\
& \Delta\omega_o= \frac{ \mathbf c_x \Delta\text{COM}_y - \mathbf c_y \Delta\text{COM}_x}{h^2}
\label{eq:push_dynamics_omega}
\end{align}
We use $F_{\text{physical}}$ to denote this physical model \eqref{eq:push_dynamics_x}-\eqref{eq:push_dynamics_omega} as
$F_{\text{physical}}(\mathbf c, \mathbf u_c) = (\Delta\text{COM}, \Delta\omega_o)$.
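Equations \eqref{eq:push_dynamics_x}-\eqref{eq:push_dynamics_omega} translate line by line into code. The following sketch (the function name is ours) also illustrates a useful sanity check: a contact motion directed through the COM produces pure translation and no rotation.

```python
import numpy as np

def F_physical(c, u_c, h):
    """Pushing dynamics of Lynch (1992): map contact point c and contact
    motion u_c (both in the object frame) to the COM motion and the object
    rotation, given the friction-related parameter h."""
    cx, cy = c
    ucx, ucy = u_c
    denom = h ** 2 + cx ** 2 + cy ** 2
    d_com_x = ((h ** 2 + cx ** 2) * ucx + cx * cy * ucy) / denom
    d_com_y = ((h ** 2 + cy ** 2) * ucy + cx * cy * ucx) / denom
    d_omega = (cx * d_com_y - cy * d_com_x) / h ** 2
    return np.array([d_com_x, d_com_y]), d_omega
```

In the combined model, $h$ is part of the online parameter $\theta_{\text{online}}$, so it is adapted from streamed data rather than measured.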
One may attempt to directly use this model as a prediction function, but the physical model is subject to several limitations:
(1) The COM of the object may not be at its center $\mathbf p_o$. For example, when the object is a box containing items with different weights, its COM usually differs from its geometric center.
(2) It's difficult to determine the exact contact point $\mathbf c$ between the robot and the object from their positions. Moreover, a push can be either sticking or slipping. For a sticking push we simply have $\mathbf u_c = \mathbf u_r$. However, in the slipping case, $\mathbf u_c$ will depend on the complex friction interaction between the robot and the object.
(3) The physical model requires the knowledge of a friction-related parameter $h$, but this parameter varies across contact surfaces and is usually unknown for unseen objects.
Therefore, directly applying the physical model may result in inaccurate predictions.
\section{Introduction and main results}\label{section:intro}
Given a bounded Lipschitz planar domain $\Omega \subset \bb{R}^2$, a Riemannian manifold $\mathcal N \subset \bb{R}^\nu$, and $g \in W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$, a minimizing $2$-harmonic map $u \in W^{1,2}(\Omega,\mathcal N) \doteq W^{1,2}(\Omega,\bb{R}^\nu)\cap \{\text{for a.e. } x \in \Omega, u(x)\in \mathcal N\}$ is a minimizer of
\begin{equation}
\label{eq_beiG7eijie3aexecaiti4lei}
\inf \left\{\int_\Omega \frac{|Du|^2}{2} : \begin{matrix}
u \in W^{1,2}(\Omega,\mathcal N) \\ \Tr_{\partial\Omega}u = g
\end{matrix}\right\}.
\end{equation}
If $W^{1,2}_g(\Omega,\mathcal N) \doteq W^{1,2}(\Omega,\mathcal N) \cap \{\Tr_{\partial\Omega}u = g\}$ contains at least one map, a minimizer does exist by the direct method of the calculus of variations. However, in general, due to topological obstructions \cite{bethuel1995extensions}\cite[Section 6.3]{vanschaftingen2021sobolev}, it can happen that $W^{1,2}_g(\Omega,\mathcal N) = \Oset$ when $\mathcal N$ is not simply connected.
Meanwhile, when $p \in (1,2)$, a minimizing $p$-harmonic map $u_p \in W^{1,p}(\Omega,\mathcal N)$, \emph{i.e.} a minimizer of
\begin{equation}
\label{eq_zaequae8choGheitohv7ir6y}
\inf \left\{\int_\Omega \frac{|Du|^p}{p} : \begin{matrix}
u \in W^{1,p}(\Omega,\mathcal N) \\ \Tr_{\partial\Omega}u = g
\end{matrix}\right\}
\end{equation}
always exists. This follows from an extension theorem of Robert \scauthor{Hardt} and Fang-Hua \scauthor{Lin} \cite{hardt1987mappings} (see proposition \ref{thm:HLthm} below) that asserts the existence of at least one map realizing the constraints.
Following \scauthor{Hardt} and \scauthor{Lin} \cite{hardt1995singularities} (see also Daniel \scauthor{Stern} \cite{stern2018p}) in the case of the circle $\mathcal N = \mathbb S^1$, we want to construct $2$-harmonic mappings as the limit of \(p\)--harmonic maps as \(p \nearrow 2\) for a general manifold. Other non-simply connected targets for harmonic mappings appear in several contexts: projective plane in liquid crystal models \cite[Section 1.A]{brezis1986harmonic}, the group of rotations in elasticity (Cosserat materials) \cite{Neff2004geometrically} and quotient of the group of rotations by discrete subgroups in computer graphics (frame-fields) \cite{beaufort2017computing,macq2020ginzburg}.
We point out that by the embedding theorem of John \scauthor{Nash} \cite{nash_imbedding_1956} any Riemannian manifold $\mathcal N$ ---compact or not--- can be embedded as a closed set (see Olaf \textsc{Müller} \cite{muller_note_2009}) of some Euclidean space $\bb{R}^\nu$ and that one can define the Sobolev spaces $W^{1,p}(\Omega,\mathcal N)$ intrinsically, \textit{i.e.} independently of the choice of a closed embedding (see Alexandra \scauthor{Convent} and Jean \scauthor{Van Schaftingen} \cite{convent_intrinsic_2016}).
Since we are mainly interested in the case where the infimum in \eqref{eq_beiG7eijie3aexecaiti4lei} is infinite, the infimum in \eqref{eq_zaequae8choGheitohv7ir6y} should blow up as \(p \nearrow 2\).
Our first result describes the asymptotic behavior of the infimum of \eqref{eq_zaequae8choGheitohv7ir6y}.
\begin{theoremA}\label{thm:esgjksg}
Let $\Omega\subset \bb{R}^2$ be a bounded Lipschitz domain and $\mathcal N$ a compact Riemannian manifold. For each $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$, if, for each $p \in (1,2)$, $u_p \in W^{1,p}(\Omega,\mathcal N)$ is a minimizing $p$-harmonic map with trace $\Tr_{\partial\Omega}u_p = g$, then
\begin{equation}\label{eq:limforder}
\lim_{p\nearrow 2}(2 - p)\int_\Omega \frac{|Du_p|^p}{p} = \mathcal E_{\mathrm{sg}}^{1,2}(g) \in \{0\} \cup \Big [\frac{\sys(\mathcal N)^2}{4\pi}, +\infty\Big ).
\end{equation}
\end{theoremA}
The \emph{systole} $\sys(\mathcal N)$ of the manifold $\mathcal N$ is the least length of a non-contractible map $\mathbb S^1 \to \mathcal N$ (see definition \ref{eq:defsys}; see \cite{pu1952some,gromov1983filling,berger1993systoles} for early appearances in the literature). If the manifold $\mathcal N$ is simply connected ($1$-connected or $\pi_1(\mathcal N) \simeq \{0\}$), the right hand side of \eqref{eq:limforder} is understood to be zero. Every compact manifold has positive systole.
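To illustrate the order of magnitude in theorem \ref{thm:esgjksg}, consider the unit disc $\Omega = \B(0;1)$, the circle $\mathcal N = \mathbb S^1$ and the identity boundary datum $g = \mathrm{id}_{\mathbb S^1}$. The following classical computation is included only as an illustration and is not needed in the sequel: the competitor $u_p(x) = x/\abs{x}$ belongs to $W^{1,p}_g(\B(0;1),\mathbb S^1)$ for every $p \in (1,2)$ and satisfies $\abs{Du_p(x)} = 1/\abs{x}$, whence

```latex
\[
  (2-p)\int_{\B(0;1)} \frac{\abs{Du_p}^p}{p}
  = \frac{2-p}{p}\, 2\pi \int_0^1 r^{1-p} \,\d r
  = \frac{2\pi}{p}
  \;\xrightarrow[p \nearrow 2]{}\; \pi
  = \frac{\sys(\mathbb S^1)^2}{4\pi}.
\]
```

Since $g$ admits no $\mathbb S^1$-valued extension to the disc, the limit in \eqref{eq:limforder} cannot vanish, so it lies in $[\pi, +\infty)$; the competitor above bounds it from above by $\pi$, and hence the limit equals exactly the lower bound $\sys(\mathbb S^1)^2/4\pi = \pi$ for this degree-one datum.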
By \emph{Riemannian manifold}, we mean complete connected smooth Riemannian manifold without boundary and of finite dimension.
The \emph{singular energy} $\mathcal E_{\mathrm{sg}}^{1,2}(g)$ introduced by Antonin \textsc{Monteil}, Rémy \textsc{Rodiac} and Jean \textsc{Van Schaftingen} \cite{monteil2021renormalised} quantifies the nontriviality of the free homotopy class of the map $g$:
\begin{equation}
\label{eq_ooj9Iex8Hahr4aimohweoyif}
\mathcal E_{\mathrm{sg}}^{1,2}(g) = \inf\left \{ \frac{1}{4\pi}\sum_{i = 1}^k \int_{\mathbb S^1}|\gamma_i'|^2 : \begin{matrix} u \in C^{1}(\Omega \setminus \bigcup_{i = 1}^k \B(a_i;\rho), \mathcal N), \rho > 0 \text{ small } \\ \gamma_i \in C^1(\mathbb S^1,\mathcal N) \text{ geodesic homotopic to } u|_{\partial \B(a_i;\rho)} \\
u|_{\partial \Omega} \text{ homotopic to } g\end{matrix} \right\}
\end{equation}
If $\sys(\mathcal N) > 0$, the singular energy vanishes if and only if there exists $u \in W^{1,2}(\Omega,\mathcal N)$ such that $\Tr_{\partial \Omega}u = g$, as explained in section \ref{subsec:upper_bound}. The singular energy is defined using the minimal length of geodesics in $\mathcal N$ in chosen homotopy classes (see definition \ref{def:esg}), where the homotopy is understood in the sense of $\mathrm{VMO}$ (vanishing mean oscillation, see \cite{brezis_nirenberg_1995,brezis_nirenberg_1996}).
The singular energy only depends on the homotopy class of $g$: if $g_1,g_2 \in W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$ are freely homotopic, one has $\mathcal E_{\mathrm{sg}}^{1,2}(g_1) = \mathcal E_{\mathrm{sg}}^{1,2}(g_2)$.
Theorem \ref{thm:esgjksg} is closely related to the extension of trace for Sobolev mappings of \scauthor{Hardt} and \scauthor{Lin} (see \cite[Section 6]{hardt1987mappings} for compact $\mathcal N$; see \cite{vanschaftingen2021sobolev} for the general case):
\begin{proposition}\label{thm:HLthm}
Let $\Omega\subset \bb{R}^2$ be a bounded Lipschitz domain and $\mathcal N$ a Riemannian manifold. There exists a constant $C > 0$ depending on $\mathcal N$ and $\Omega$ such that for all $p \in [1,2)$ and \(g \in W^{\sfrac{1}{p},p}(\partial \Omega, \mathcal N)\), there exists $u \in W^{1,p}(\Omega,\mathcal N)$ such that
\[
(2 - p) \int_{\Omega} \frac{\vert Du\vert^p}{p} \le C
\iint_{\partial \Omega \times \partial \Omega}
\frac{\d_{\mathcal N} (g (x), g (y))^p}{\abs{x - y}^p} \d x \d y.
\]
\end{proposition}
Briefly, we will write that $W^{1,p}_g(\Omega,\mathcal N) \neq \Oset$ for $p \in [1,2)$. We emphasize that, for $p = 2$, it can still happen that $W^{1,2}_g(\Omega,\mathcal N) = \Oset$.
We next describe the asymptotics of families of \(p\)--harmonic maps.
\begin{theoremA}\label{thm:convofmaps}
Let $\Omega\subset \bb{R}^2$ be a bounded Lipschitz domain, $\mathcal N$ a compact Riemannian manifold, $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$ and $p_n \in (1,2)$ an increasing sequence converging to $2$.\\
If $u_{p_n} \in W^{1,p_n}(\Omega, \mathcal N)$ is a sequence of minimizing $p_n$-harmonic maps of trace $\Tr_{\partial \Omega}u_{p_n} = g$, then there exists a renormalizable harmonic map $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ with trace $\Tr_{\partial \Omega}u_* = g$ such that up to some subsequence
\[ u_{p_n} \xrightarrow{n \to +\infty} u_*
\] almost everywhere,
and there exist \(\kappa\leq 4\pi\mathcal E_{\mathrm{sg}}^{1,2}(g)/\sys(\mathcal N)^2\) distinct points $\{a_i\}_{i = 1,\dots,\kappa} \subset \Omega$ such that
\begin{enumerate}[(i)]
\item for \(\rho > 0\) small enough $u_*|_{\Omega\setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho)}
\in W^{1, 2} (\Omega\setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho), \mathcal N)$ is a minimizing harmonic map with respect to its own boundary condition,
\item the map $u_*$ minimizes the renormalized energy of mappings
\begin{equation}\label{eq:poitnskkjg}
\lim_{\rho \searrow 0}\int_{\Omega\setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho)} \frac{|Du_*|^2}{2}- \mathcal E_{\mathrm{sg}}^{1,2}(g)\log \frac{1}{\rho} + \mathrm H ([u_*,a_i])_{i = 1,\dots,\kappa},
\end{equation}
\item the charges \(([u_*, a_i])_{i = 1, \dotsc, \kappa}\) and the points $(a_i)_{i = 1,\dots,\kappa}$ minimize
the renormalized energy of configuration of points
\begin{multline}
\mathcal{E}^{1, 2}_{\mathrm{geom}, \gamma_1, \dotsc, \gamma_\kappa} ([u_*,a_1], \dotsc, [u_*, a_\kappa])
+ \mathrm H ([u_*, a_i])_{i = 1,\dots,\kappa}\\
=
\inf\left\{
\mathcal{E}^{1, 2}_{\mathrm{geom}, \gamma_1, \dotsc, \gamma_\kappa} (x_1, \dotsc, x_\kappa) + \mathrm H (\gamma_i)_{i = 1,\dots,\kappa}:\begin{matrix}
x_1, \dotsc, x_\kappa \in \Omega\\
\gamma_1,\dotsc, \gamma_\kappa \text{ achieve the infimum in \eqref{eq_ooj9Iex8Hahr4aimohweoyif}}
\end{matrix}
\right\}.
\end{multline}
\end{enumerate}
\end{theoremA}
\emph{Renormalizable maps} $v \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ were introduced in \cite{monteil2021renormalised} (see definition \ref{def:Eren}) and are measurable maps $v : \Omega \to \mathcal N$ for which
\[
\mathcal E_{\ren}^{1,2}(v) = \lim_{\rho \searrow 0}\int_{\Omega\setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho)} \frac{|Dv|^2}{2}- \mathcal E_{\mathrm{sg}}^{1,2}(g)\log \frac{1}{\rho}
\]
is finite. Here and after $|Dv|$ refers to the Frobenius norm of the weak derivative $D v$.
The term $\mathrm H ([u_*,a_i])_{i = 1,\dots,\kappa}$ in theorem \ref{thm:convofmaps} is defined in \eqref{eq:defofH} in proposition \ref{prop:upperBound} and only depends on the homotopy class of $u_*$ \emph{near} each $a_i$, $i =1,\dots,\kappa$. That is, for $i =1,\dots,\kappa$ and $\rho > 0$ small, $\Tr_{\mathbb S^1(a_i;\rho)}u_*$ defines a map in $W^{\sfrac{1}{2},2}(\mathbb S^1(a_i;\rho),\mathcal N) \subset \mathrm{VMO}(\mathbb S^1(a_i;\rho),\mathcal N)$ which has a well-defined homotopy class $[u_*,a_i]$, on which $\mathrm H ([u_*,a_i])_{i = 1,\dots,\kappa}$ depends (see \eqref{eq:nationpoint} for details).
The geometrical renormalized energy $\mathcal{E}^{1, 2}_{\mathrm{geom}}$ is defined in \eqref{eq_def_renorm_top}.
Theorem \ref{thm:convofmaps} generalizes Robert \scauthor{Hardt} and Fang-Hua \scauthor{Lin}’s result obtained for the circle $\mathcal N = \bb S^1$ \cite{hardt1995singularities}; we relax both the assumption that the domain is simply connected and the assumption that the target is the unit circle. It will also appear that our method is quite different and robust enough to describe almost minimizers (also known as quasiminimizers).
Other approaches to deal with the emptiness of the $W^{1,2}_g(\Omega, \mathcal N)$-class in dimension 2 exist: for instance, the well-studied Ginzburg--Landau family of functionals appeared before the $p$-harmonic map relaxation in the literature (see Fabrice \scauthor{Bethuel}, Haïm \scauthor{Brezis} and Frédéric \scauthor{Hélein} \cite{bethuel1994ginzburg} and references therein for the unit circle; more recently for general manifolds \cite{canevari2021topological,monteil2020ginzburg}): for every $\varepsilon > 0$, one considers a minimizer $u_\varepsilon \in W^{1,2}_g(\Omega,\bb{R}^\nu)$ of
\begin{equation*}
\inf \left \{ \int_\Omega \frac{|Du|^2}{2} + \frac{F(u)}{\varepsilon^2} : u \in W_g^{1,2}(\Omega, \bb{R}^\nu)\right\}
\end{equation*}
where $F(u)$ behaves as $\dist(u, \mathcal N)^2$ near $\mathcal N$ (see \cite{monteil2020ginzburg} for detailed assumptions on $F$) and $\mathcal N$ is a connected compact submanifold of $\bb{R}^\nu$, and one studies the limit of $(u_\varepsilon)_{\varepsilon>0}$ when $\varepsilon \searrow 0$. More precisely, one gets \emph{minimizing renormalizable singular harmonic maps} introduced in \cite{monteil2021renormalised}, and the convergence holds in various topologies (see \cite{monteil2020ginzburg}). If we write $u_* = \lim_{\varepsilon \searrow 0}u_\varepsilon \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$, \scauthor{Monteil}, \scauthor{Rodiac} and \scauthor{Van Schaftingen} show that $u_*$ minimizes
a related renormalized energy which is the sum of a contribution depending on the charges and the location of the singularities, and another contribution depending on the charges and the potential $F$.
Interestingly, the \emph{renormalized energy} $\mathcal E_{\ren}^{1,2}(u_*)$ is common to both relaxations; see \cite{monteil2020ginzburg} for details.
The Ginzburg--Landau functional has an extrinsic character, as one needs to embed the manifold $\mathcal N$ isometrically into a Euclidean space $\bb{R}^\nu$ and to choose a potential $F$ before beginning the asymptotic analysis. Due to the nature of the Ginzburg--Landau functional, the convergence is different: in \cite{monteil2020ginzburg}, it is shown that $Du_\varepsilon$ converges strongly in $L^2$ away from the singularities of $u_*$ (compare with the convergence in theorem \ref{thm:convofmaps}).
\medskip
In order to perform the analysis described in theorems \ref{thm:esgjksg} and \ref{thm:convofmaps}, we adapt the approach that was used for studying the Ginzburg--Landau family of functionals in \cite{monteil2021renormalised}, and thereby we do not rely on the regularity of $p$-harmonic maps. Our method is therefore different from \scauthor{Hardt} and \scauthor{Lin}'s \cite{hardt1995singularities}, which heavily uses the structure of the circle $\mathcal N = \mathbb S^1$, estimates on liftings to its universal covering, as well as monotonicity formulas.
Building an \emph{upper} and a \emph{lower bound} on the \emph{renormalized energy} $\mathcal E_{\ren}^{1,2}$ (see respectively propositions \ref{prop:upperBound} and \ref{prop:compactnessofboundedseqs}), we can treat the case of quasiminimizers of the $p$-Dirichlet functional \emph{in the spirit of $\Gamma$-convergence} (also known as $G$-convergence or variational convergence) through a variational approach (see proposition \ref{prop:conv_of_min}).
The scheme of the proof of the lower bound follows \scauthor{Jerrard} \cite{jerrard_lower_1999} and \scauthor{Sandier} \cite{sandier_lower_1998}, as it was adapted to general target manifolds \cite{monteil2020ginzburg}, by an \emph{amalgamation/merging ball lemma} (see lemma \ref{lemma:mergin_ball_lemma}) that we made more explicit for our purposes (see proposition \ref{prop:circleconstruction}).
We finally strengthen the result for minimizers in proposition \ref{prop:conv_of_min}.
Our approach also has the advantage of yielding some improved estimates in the Marcinkiewicz space $L^{p,\infty}(\Omega)$ (or weak Lebesgue space, weak-$L^p$ space or Lorentz space \cite{marcinkiewicz1939interpolation}\cite[Chapter 5]{castillo2016introductory}), which consists of the measurable maps $v : \Omega \to \bb{R}$ satisfying \(\sup_{t>0}t^p \,\mathrm{vol}{\{|v|>t\}\cap \Omega } <+\infty\). Here and after, $\,\mathrm{vol}{A}$ refers to the $2$-dimensional Hausdorff measure of a measurable set $A \subset \bb{R}^2$; equivalently \cite[Theorem 4.1.1]{attouch2014variational}, it corresponds to the Lebesgue measure on $\bb{R}^2$. We obtain (see proposition \ref{prop:mixedboundedness})
\begin{equation}
\label{eq_ooB6EiteiMeegai8bohkie2t}
\varlimsup_{p\to 2} \sup_{t>0}t^{p} \,\mathrm{vol}{\{|Du_{p}|>t\}\cap \Omega } <+\infty,
\end{equation}
which is some kind of
\emph{asymptotic upper bound in $L^{p,\infty}(\Omega)$} for the Frobenius norm of the derivative of $p$-harmonic maps when $p\nearrow 2$;
in view of theorem \ref{thm:esgjksg}, the corresponding strong \(L^p\) bound fails when \(
W^{1,2}_g(\Omega,\mathcal N)= \emptyset\).
Bounds similar to \eqref{eq_ooB6EiteiMeegai8bohkie2t} have been obtained for the Ginzburg--Landau relaxation for harmonic maps into the circle \cite{MR2381162} and into a compact submanifold \cite{monteil2020ginzburg}.
The compactness condition on the manifold $\mathcal N$ can in fact be weakened in our analysis. This covers some non-compact manifolds such as the cylinder $\mathbb S^1 \times \bb{R}$, but not all of them; we refer to \eqref{eq:sdfjk} for a non-compact manifold which is not covered by our analysis. We devote section \ref{sec:whattodononcomapct} to the non-compact cases covered by our analysis.
\section{Setting: Quantifying energies and other relevant quantities}\label{section:energies}
In this section we describe the length $\lambda$, the length spectrum, the singular energy, the renormalized energy and the relations between them. All homotopies are assumed to be \emph{free} and are understood in the sense of $\mathrm{VMO}$ (see \cite{brezis_nirenberg_1995,brezis_nirenberg_1996}).
\subsection{Length spectrum and systole}\label{subsec:lengsocheg}
Let $\gamma \in \mathrm{VMO}(\mathbb{S}^1, \mathcal N)$ be a map of vanishing mean oscillation \cite{brezis_nirenberg_1995} whose (essential) image lies in a Riemannian manifold $\mathcal N$. Following \cite{monteil2021renormalised}, we define the \emph{length of its (free) homotopy class} by
\begin{equation}
\label{eq_aeShi9aureeshug4unoh9cio}
\lambda(\gamma) = \inf \{\ell(\tilde\gamma) : \tilde \gamma \in W^{1,1}(\mathbb{S}^1, \mathcal N) \text{ is homotopic to } \gamma \}
\end{equation}
where $\ell(\tilde \gamma)$ is the Riemannian length of the closed curve $\tilde \gamma$.
In the definition, the homotopy refers to a (free) homotopy in the class of vanishing mean oscillation $\mathrm{VMO}(\mathbb{S}^1, \mathcal N)$.
Moreover, every map in $\mathrm{VMO}(\mathbb{S}^1, \mathcal N)$ is (freely) homotopic in $\mathrm{VMO}(\mathbb{S}^1, \mathcal N)$ to a smooth map, hence to a $W^{1,1}$-map. Therefore, the infimum is finite.
The length $\lambda$ depends only on the $\mathrm{VMO}$--homotopy class of $\gamma$: if $\gamma_1$ is homotopic to $\gamma_2$ then $\lambda(\gamma_1) = \lambda(\gamma_2)$.
If $\mathcal N = \mathbb{S}^1$, $\lambda(\gamma) = 2\pi \abs{\deg \gamma}$, where \(\deg \gamma\) is the topological degree of the map $\gamma \in \mathrm{VMO}(\mathbb{S}^1, \mathbb{S}^1)$. If the manifold has the form $\mathcal N = \mathcal N_1 \times \mathcal N_2$, then one writes $\gamma = (\gamma_1, \gamma_2)$ and observes that $\lambda_{\mathcal N}(\gamma)^2 = \lambda_{\mathcal N_1}(\gamma_1)^2 + \lambda_{\mathcal N_2}(\gamma_2)^2$.
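For instance, combining the two previous observations, for the torus $\mathcal N = \mathbb{S}^1 \times \mathbb{S}^1$ endowed with the product metric and $\gamma(z) = (z^{d_1}, z^{d_2})$ with $d_1, d_2 \in \mathbb{Z}$, one gets
\begin{equation*}
\lambda(\gamma) = 2\pi\sqrt{d_1^2 + d_2^2}.
\end{equation*}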
We now recall a well-known construction for the topological degree, which we adapt to the length $\lambda$.
If $u \in W^{1,2}_\mathrm{loc}(\B(a;\rho)\setminus\{a\},\mathcal N)$, it is possible to define the «homotopy of $u$ near $a$», denoted $[u,a]$. If $\gamma_a : \mathbb S^1 \to \B(a;\rho)\setminus\{a\}$ is a smooth curve isotopic to $z \in \mathbb S^1 \mapsto a + \rho z$, then $\Tr_{\gamma_a}u = \Tr_{\mathbb S^1}(u\circ \gamma_a)$ has a well-defined homotopy class in $\mathrm{VMO}$ that does not depend on the choice of the curve $\gamma_a$. This equivalence class is denoted $[u,a]$. As $\gamma^t_a : z \in \mathbb{S}^1 \mapsto a + t\rho z$ is a family of admissible curves indexed by $t \in (0,1)$, we may define
\begin{equation}\label{eq:nationpoint}
\lambda([u,a]) \doteq \lambda(\Tr_{\mathbb{S}^1}u(a + r \cdot))
\end{equation}
for small $r > 0$. Any curve $\gamma_a$ with the aforementioned properties then satisfies $ \lambda([u,a]) = \lambda(\Tr_{\mathbb{S}^1}(u\circ \gamma_a))$, and this definition does not depend on $\rho >0$.
\begin{definition}
The \emph{length spectrum} is the set of nonnegative real numbers $\{\lambda(\gamma) : \gamma \in W^{\sfrac{1}{2},2}(\mathbb{S}^1, \mathcal N)\}$.
\end{definition}
As our definition of smooth manifold implies that $\mathcal N$ is second countable, we obtain that this set is countable \cite[Proposition 1.16]{MR2954043} but can have accumulation points. In the compact case, it can be shown \cite[Proposition 3.2]{monteil2021renormalised} that
\begin{lemma}\label{lemma:discretspec}
If the Riemannian manifold $\mathcal N$ is compact, the length spectrum is a discrete set of the real line.
\end{lemma}
The compactness assumption is not necessary: the Euclidean space $\bb{R}^\mu$ and the infinite cylinder $\mathcal N \times \bb{R}$ (where $\mathcal N$ is a compact manifold) have discrete length spectra but are not compact.
The \emph{systole} of the manifold $\mathcal N$ is defined by
\begin{equation}\label{eq:defsys}
\sys(\mathcal N) \doteq \inf \{\lambda(\gamma) : \gamma \in \mathrm{VMO}(\mathbb{S}^1, \mathcal N)\text{ is not contractible}\}.
\end{equation}
We set $\sys(\mathcal N) = +\infty$ if the manifold $\mathcal N$ is simply connected. If $\mathcal N$ is compact, the systole is the first nonzero value of the length spectrum; this fact is sometimes used as a definition of the systole.
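For instance, for the unit circle, a flat rectangular torus and the (simply connected) round sphere, one has
\begin{equation*}
\sys(\mathbb S^1) = 2\pi, \qquad \sys\big(\bb{R}^2/(L_1\mathbb Z \times L_2 \mathbb Z)\big) = \min(L_1,L_2), \qquad \sys(\mathbb S^2) = +\infty.
\end{equation*}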
Our results heavily rely on the assumption that the manifold $\mathcal N$ satisfies $\sys(\mathcal N) >0$. This covers Euclidean spaces $\bb{R}^\mu$, compact manifolds and infinite cylinders $\mathcal N \times \bb{R}^\mu$ (with $\mathcal N$ compact). It does not cover infinite horns such as
\begin{equation}\label{eq:doesnotcover}
\mathcal N = \{(v \e^t, t) : t \in \bb{R}, v \in \mathbb S^1\} \subset \bb{R}^3.
\end{equation}
In the homotopy class of $\mathbb S^1 \xhookrightarrow{} \mathbb S^1 \times \{0\} \subset \mathcal N$, the infimum \eqref{eq_aeShi9aureeshug4unoh9cio} is equal to zero and is not achieved; hence, $\sys(\mathcal N) = 0$.
Here is a first link between the $p$-Dirichlet energy and the length $\lambda$.
\begin{lemma}[Local lower bound on circles]\label{lemma:loc_lower_bound_circle}
If $p \in [1,\infty)$ and if \(u \in W^{1, p} (\partial \B(a;\rho), \mathcal{N})\), with \(\B(a;\rho) \subset \bb{R}^2\), then
\begin{equation}
\label{eq:circle_ineq}
\frac{\lambda(u)^p}{p(2\pi\rho)^{p - 1}} \leq \int_{\partial \B(a;\rho)}\frac{\abs{u'}^p}{p}
\end{equation}
\end{lemma}
In the statement of lemma \ref{lemma:loc_lower_bound_circle}, $u'$ refers to the tangential derivative along the circle \(\partial \B(a;\rho)\).
\begin{proof}[Proof of lemma \ref{lemma:loc_lower_bound_circle}]
By Hölder's inequality,
\begin{equation*}
\frac{1}{\HH^1(\partial \B(a;\rho))^{p - 1}}\bigg (\int_{\partial \B(a;\rho)}\abs{u'} \bigg )^p \leq \int_{\partial \B(a;\rho)}\abs{u'}^p.
\end{equation*}
Since the map $u$ is admissible in the greatest lower bound \eqref{eq_aeShi9aureeshug4unoh9cio} that defines $\lambda$, we get the conclusion \eqref{eq:circle_ineq} as $\HH^1(\partial \B(a;\rho)) = 2\pi \rho$.
\end{proof}
The following proposition will play the role of an infinitesimal lower bound. It generalizes \cite[Lemma 2.3]{hardt1995singularities}.
\begin{proposition}\label{prop:loc_lower_bound}
Let $0 < \sigma < \rho$ and $u \in W^{1,2}(\B(a;\rho) \setminus\B(a;\sigma), \mathcal N)$. Then, if $p \in [1,\infty) \setminus \{2\}$,
\begin{equation}\label{eq:loc_lower_bound_1}
\frac{\big ( \rho^{2 - p} - \sigma^{2 - p}\big ) \lambda(\Tr_{\partial \B(a;\rho)}u)^p}{(2\pi)^{p - 1} p(2 -p)} \leq \int_{\B(a;\rho) \setminus \B(a;\sigma)}\frac{|Du|^p}{p}.
\end{equation}
\end{proposition}
\begin{proof}[Proof of proposition \ref{prop:loc_lower_bound}]
Integrating the estimate of lemma \ref{lemma:loc_lower_bound_circle} over $(\sigma,\rho)$, we get
\begin{equation*}
\begin{split}
\int_{\B(a;\rho) \setminus \B(a;\sigma)}\frac{|Du|^p}{p} &\geq \int_\sigma^\rho \frac{1}{p(2\pi r )^{p - 1}}\lambda(\Tr_{\mathbb{S}^1}u(a + r\cdot))^p \d r = \frac{\rho^{2 - p} - \sigma^{2 - p}}{ 2 - p}\frac{\lambda(\Tr_{\partial \B(a;\rho)}u)^p}{(2\pi)^{p - 1} p}
\end{split}
\end{equation*}
by homotopy invariance.
\end{proof}
Letting \(\sigma \searrow 0\) in \eqref{eq:loc_lower_bound_1}, we get that if $p \in [1,2)$ and $u \in W^{1,2}_{\mathrm{loc}}(\B(a;\rho) \setminus\{a\}, \mathcal N)$,
\begin{equation}\label{eq:loc_lower_bound_2}
\frac{\rho^{2 - p} \lambda([u,a])^p}{(2\pi)^{p - 1} p(2 -p)} \leq \int_{\B(a;\rho)}\frac{|Du|^p}{p}.
\end{equation}
Letting $p \nearrow 2$ in \eqref{eq:loc_lower_bound_1} yields
\begin{equation}\label{eq:loc_lower_bound_1_p2}
\frac{\lambda(\Tr_{\partial \B(a;\rho)}u)^2}{4\pi }\log \frac{\rho}{\sigma} \leq \int_{\B(a;\rho) \setminus \B(a;\sigma)}\frac{|Du|^2}{2}.
\end{equation}
In the case of the circle $\mathcal N = \mathbb S^1$, this estimate is known as \emph{lower bounds on annuli} (see \cite[Lemma 1.1]{sandier_lower_1998}).
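The estimate \eqref{eq:loc_lower_bound_1_p2} is sharp: for the map $u(x) = \sfrac{x}{|x|}$ on the annulus $\B(0;\rho)\setminus \B(0;\sigma)$ with values into $\mathbb S^1$, one has $|Du(x)| = 1/\abs{x}$ and $\lambda(\Tr_{\partial \B(0;\rho)}u) = 2\pi$, so that
\begin{equation*}
\int_{\B(0;\rho)\setminus\B(0;\sigma)} \frac{|Du|^2}{2} = \int_\sigma^\rho \frac{2\pi r}{2 r^2}\,\d r = \pi \log\frac{\rho}{\sigma} = \frac{\lambda(\Tr_{\partial \B(0;\rho)}u)^2}{4\pi}\log\frac{\rho}{\sigma}.
\end{equation*}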
\begin{corollary}\label{coro:regularity} Let $\mathcal N$ be a Riemannian manifold of positive systole.
Let $p \in [1,\infty)\setminus \{2\}$ and $0 \leq \sigma < \rho$ and $u \in W^{1,p}\cap W^{1,2}_{\mathrm{loc}}(\B(a;\rho) \setminus\B(a;\sigma), \mathcal N)$. If \begin{equation*}
\int_{\B(a;\rho) \setminus \B(a;\sigma)}\frac{|Du|^p}{p} < \frac{\big ( \rho^{2 - p} - \sigma^{2 - p}\big )\sys(\mathcal N)^p}{(2\pi)^{p - 1} p(2 -p)},
\end{equation*}
then $\Tr_{\partial \B(a;r)}u$ is homotopic to a constant function for each $r \in (\sigma,\rho)$.
\end{corollary}
\begin{proof}[Proof of corollary \ref{coro:regularity}]
The assumption implies that $\lambda(\Tr_{\partial \B(a;r)}u) < \sys(\mathcal N)$, which yields $\lambda(\Tr_{\partial \B(a;r)}u) = 0$ and hence the conclusion.
\end{proof}
Observe that \eqref{eq:loc_lower_bound_2} fails at $p = 2$. However, at the Marcinkiewicz scale, we observe the following.
\begin{corollary}\label{coro:lorentzscale}
Let $\Omega \subset \bb{R}^2$ be an open subset of the plane. Let $a_1, \dots,a_k \in \Omega$ be points. If $u \in W_{\mathrm{loc}}^{1,2}(\Omega \setminus \{a_i\}_{i = 1,\dots,k},\mathcal N)$, then
\begin{equation}
\label{eq_quohw2xo8Laokoona4aihith}
\sum_{i = 1}^k\frac{\lambda([u,a_i])^2}{4\pi }\leq \varlimsup_{t \to \infty}t^2 \,\mathrm{vol}(\{|Du| \geq t \} \cap \Omega) .
\end{equation}
\end{corollary}
We observe that at this scale no constant depending on the domain $\Omega$ appears.
\begin{proof}[Proof of corollary \ref{coro:lorentzscale}]
Given \(t_0>0\), we define
\[ M(t_0) \doteq \sup_{t > t_0} t^2 \,\mathrm{vol}(\{|Du| \geq t\} \cap \Omega).
\]
We may assume that $M(t_0) < \infty$; otherwise, there is nothing to prove.
There exists a $\rho > 0$ such that
\( G_\rho \doteq
\bigcup_{i = 1}^k \B (a_i;\rho \lambda ([u, a_i])) \subset \Omega
\), the disks $\B (a_i;\rho \lambda ([u, a_i]))$ are disjoint
and
\begin{equation}\label{eq:cjkdgoiuj}
\,\mathrm{vol}(G_\rho) = \pi \rho^2 \sum_{i = 1}^k \lambda ([u, a_i])^2 \leq \frac{M(t_0)}{t_0^2}.
\end{equation}
Summing \eqref{eq:loc_lower_bound_2} with \(p = 1\) over the $k$ singularities, we obtain
\begin{equation}\label{eq:loc_lower_bound_2_summed}
\begin{split}
\sum_{i = 1}^k \rho \lambda([u, a_i])^2 & \leq \int_{G_\rho} |Du|\\
& = \int_0^\infty \,\mathrm{vol}(\{|D u| \geq t\} \cap G_\rho) \d t \leq \int_{0}^\infty \min\Big(\,\mathrm{vol}(G_\rho), \frac{M(t_0)}{t^2}\Big) \d t,
\end{split}
\end{equation}
by the choice \eqref{eq:cjkdgoiuj}. Since $\int_0^\infty \min(A, M(t_0)/t^2) \d t = 2\sqrt{A\,M(t_0)}$ for every $A > 0$, we deduce
\[
\rho\sum_{i = 1}^k \lambda([u, a_i])^2 \leq 2 \Big [ M(t_0) \pi \rho^2\sum_{i = 1}^k \lambda([u,a_i])^2\Big ]^\frac{1}{2},
\]
that is, $\sum_{i = 1}^k \lambda([u, a_i])^2 \leq 4\pi M(t_0)$; the conclusion \eqref{eq_quohw2xo8Laokoona4aihith} follows by letting $t_0 \to \infty$.
\end{proof}
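The map $u_h(x) = \sfrac{x}{|x|}$, with $\Omega = \B$ the unit ball and $\mathcal N = \mathbb S^1$, shows that \eqref{eq_quohw2xo8Laokoona4aihith} is sharp: since $|Du_h(x)| = 1/\abs{x}$ and $\lambda([u_h,0]) = 2\pi$, one has, for every $t \geq 1$,
\begin{equation*}
t^2 \,\mathrm{vol}(\{|Du_h| \geq t \} \cap \B) = t^2 \cdot \frac{\pi}{t^2} = \pi = \frac{\lambda([u_h,0])^2}{4\pi}.
\end{equation*}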
\subsection{Singular energy of the boundary data, \texorpdfstring{$\mathcal E_{\mathrm{sg}}^{1,p}$}{Esg1p}}
The following concept was introduced in \cite{monteil2021renormalised} and extends $\lambda$ to boundary data on general domains.
\begin{definition}[Topological resolution] Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$, a Riemannian manifold $\mathcal N$ and $g \in \mathrm{VMO}(\partial \Omega, \mathcal N)$. A finite family of closed curves $\gamma_i \in \mathrm{VMO}(\mathbb S^1,\mathcal N)$, $i = 1, \dots,k$, with \(k \in \bb{N} = \{0,1,2,\dots\}\), called \emph{charges}, constitutes a \emph{topological resolution} of the map $g$
whenever there exist $k$ non-intersecting closed balls $\B(a_i;\rho) \subset \Omega$ with \(\rho > 0\) and a map
\begin{equation*}
u \in W^{1,2}(\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho), \mathcal N)
\end{equation*}
such that $\Tr_{\partial \Omega}u$ and $g$ are homotopic in $\mathrm{VMO}(\partial \Omega, \mathcal N)$ and $\Tr_{\mathbb{S}^1}u(a_i + \rho \cdot )$ and $\gamma_i$ are homotopic in $\mathrm{VMO}(\mathbb S^1,\mathcal N)$ for $i = 1,\dots,k$.
\end{definition}
We allow $k = 0$ in the definition. In that case the topological resolution is said to be \emph{trivial} and the associated map $u \in W^{1,2}(\Omega,\mathcal N)$ is an extension of $g$.
We refer to the points $a_i\in\Omega$, $i = 1,\dots,k$, as \emph{singularities}.
If for some $j = 1,\dots,k$ it happens that $\lambda(\gamma_j) = 0$ then, provided that $\sys(\mathcal N) > 0$, the curve $\gamma_j$ is contractible and there exists $w \in W^{1,2}(\B(a_j;\rho),\mathcal N)$ such that $\Tr_{\partial \B(a_j;\rho)}w = \Tr_{\partial \B(a_j;\rho)}u$, so that one can replace in the definition $u$ by a new map
\begin{equation}\label{eq:toBiterated}
\bar u \in W^{1,2}(\Omega \setminus \bigcup_{\substack{i = 1\\ i\neq j}}^k\B(a_i;\rho), \mathcal N)
\end{equation}
where $\bar u = u$ on $\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho)$ and $\bar u = w$ on $\B(a_j;\rho)$.
\begin{definition}[Singular energy, $\mathcal E_{\mathrm{sg}}^{1,p}$]
\label{def:esg}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a Riemannian manifold $\mathcal N$, $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$ and $p \geq 1$. We set
\begin{equation*}
\mathcal E_{\mathrm{sg}}^{1,p}(g) \doteq \inf \left\{\sum_{i =1}^k\frac{\lambda(\gamma_i)^{p}}{p(2\pi)^{p-1} } : k \in \bb{N}, (\gamma_i)_{i = 1,\dots,k} \text{ is a topological resolution of $g$}\right\}.
\end{equation*}
\end{definition}
On the one hand, if the systole $\sys(\mathcal N)$ is positive, then $\mathcal E_{\mathrm{sg}}^{1,p}(g) = 0$ if
and only if there exists a map $u \in W^{1,2}_g(\Omega,\mathcal N)$ that extends
the map $g$; this follows by iterating \eqref{eq:toBiterated}. On the other hand,
if $\sys(\mathcal N) > 0$ and $\mathcal E_{\mathrm{sg}}^{1,p}(g) > 0$, then
\begin{equation}
\label{eq_ius6Cei3Tahwae2ahpoh5Iph}
\mathcal E_{\mathrm{sg}}^{1,p}(g) \geq \frac{\sys(\mathcal N)^p }{p(2\pi)^{p-1}}.
\end{equation}
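As an illustration, let $\Omega = \B$ be the unit ball of $\bb{R}^2$, $\mathcal N = \mathbb{S}^1$ and $g(z) = z^d$ with $d \in \mathbb{Z}$. A topological resolution of $g$ then consists of charges homotopic to $z \mapsto z^{d_i}$ with $d_1 + \dots + d_k = d$, and conversely every such family is a topological resolution; since $\lambda(\gamma_i) = 2\pi \abs{d_i}$,
\begin{equation*}
\mathcal E_{\mathrm{sg}}^{1,p}(g) = \inf\Big\{ \frac{2\pi}{p}\sum_{i = 1}^k \abs{d_i}^p : d_1 + \dots + d_k = d \Big\} = \frac{2\pi \abs{d}}{p},
\end{equation*}
because $\sum_{i} \abs{d_i}^p \geq \sum_i \abs{d_i} \geq \abs{d}$ for $p \geq 1$, with equality when every $d_i \in \{0, \mathrm{sgn}\, d\}$; in particular $\mathcal E_{\mathrm{sg}}^{1,2}(g) = \pi \abs{d}$.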
We also have the following decreasing property of the singular energy $\mathcal E_{\mathrm{sg}}^{1,p}$ of importance in the circle construction (see proposition \ref{prop:circleconstruction}).
\begin{proposition}[Decreasing property of the singular energy]\label{prop:decreasing}
If two Lipschitz bounded domains satisfy $\Omega_0 \subset \Omega_1$ and $u \in W^{1,2}(\Omega_1 \setminus \Omega_0, \mathcal N)$, then $\mathcal E_{\mathrm{sg}}^{1,p}(\Tr_{\partial \Omega_0}u) \geq \mathcal E_{\mathrm{sg}}^{1,p}(\Tr_{\partial \Omega_1}u)$.
\end{proposition}
\begin{proof}
See \cite[Proposition 2.6]{monteil2021renormalised}.
\end{proof}
The positivity of the systole yields a continuity statement in $p$ of the $p$-singular energy:
\begin{lemma}[Continuity in $p$ of $\mathcal E_{\mathrm{sg}}^{1,p}$]\label{lemma:continuite_of_esgp}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$, a Riemannian manifold $\mathcal N$ and $g \in \mathrm{VMO}(\partial \Omega, \mathcal N)$. If \( \sys(\mathcal N) > 0\), then
\begin{equation}\label{eq:esg1pasafuncofp}
p \in [1,\infty) \mapsto \mathcal E_{\mathrm{sg}}^{1,p}(g)
\end{equation}
is locally bounded and locally Lipschitz continuous.
\end{lemma}
The positive systole assumption is crucial as it allows us to state an equivalence between $\ell^p(\bb{N})$ and $\ell^q(\bb{N})$ norms of the vector $(\lambda_1,\dots,\lambda_k)$, see \eqref{eq:comprasion}.
\begin{proof}[Proof of lemma \ref{lemma:continuite_of_esgp}]
Set $1 \leq p<q < \infty $, $c_p \doteq 1/((2\pi)^{p-1} p)$ and $f(p) \doteq \mathcal E_{\mathrm{sg}}^{1,p}(g)/c_p$.
For every \(\lambda_i \in \{\lambda(\gamma) : \gamma \in \mathrm{VMO}(\mathbb{S}^1, \mathcal N)\}\), $i = 1,\dots,k$, we have either \(\lambda_i \ge \sys(\mathcal N)\) or \(\lambda_i = 0\), and thus \(\sys(\mathcal N)^{q - p} \lambda_i^p \le \lambda_i^q\), so that
\begin{equation}\label{eq:comprasion}
\sys(\mathcal N)^{q - p} \sum_{i = 1}^k \lambda_i^p \leq \sum_{i = 1}^k \lambda_i^q \leq \Big [\sum_{i = 1}^k \lambda_i^p\Big]^{\frac{q}{p}}
\end{equation}
which implies
\[
\sys(\mathcal N)^{q - p}f(p) \leq f(q) \leq (f(p))^{q/p}
\]
so that $f$ is locally bounded. Moreover,
\begin{equation}\label{eq:toLipconst}
(\sys(\mathcal N)^{q - p} - 1)f(p) \leq f(q) - f(p) \leq f(p)\big(f(p)^{\frac{q - p}{p}} - 1\big)
\end{equation}
so that $f$ is locally Lipschitz continuous, since either $f$ vanishes identically or $f(p) \geq \sys(\mathcal N)^{p} > 0$ for every $p \in [1,\infty)$.
\end{proof}
Using \eqref{eq:toLipconst}, it is possible to estimate the local Lipschitz constant of \eqref{eq:esg1pasafuncofp}. If $I \subset [1,\infty)$ is compact and $M \doteq \sup_{p \in I}(2\pi)^{p - 1}p\,\mathcal E_{\mathrm{sg}}^{1,p}(g)$, then
\begin{equation}\label{eq:lipestimate}
[(2\pi)^{p - 1}p\mathcal E_{\mathrm{sg}}^{1,p}(g)]_{\mathrm{Lip}(I)} \leq \max(\big|\sys(\mathcal N)\log \sys(\mathcal N)\big |,\big|M\log M\big |).
\end{equation}
Aside from the continuity statement in $p$ of the map \eqref{eq:esg1pasafuncofp}, the singular energy $\mathcal E_{\mathrm{sg}}^{1,p}(g)$ is locally constant in $W^{\sfrac{1}{2},2}$:
\begin{lemma}
Given $g \in W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$, there exists $\delta = \delta(g,\mathcal N) > 0$ such that if $h \in W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$ satisfies $ \| g - h\|_{W^{\sfrac{1}{2},2}(\partial \Omega)} \leq \delta$ then $ \mathcal E_{\mathrm{sg}}^{1,p}(g) = \mathcal E_{\mathrm{sg}}^{1,p}(h)$.
\end{lemma}
This follows from the fact that, given $g \in W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$, there exists $\delta = \delta(g) > 0$ such that if $h\in W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$ satisfies $\| g - h\|_{W^{\sfrac{1}{2},2}} \leq \delta $, then $g$ and $h$ are homotopic (for a proof of this fact, see \cite[Lemma A.19]{brezis_nirenberg_1995} combined with the embedding $W^{\sfrac{1}{2},2}\subset \mathrm{VMO}$), and from the homotopy invariance of $\mathcal E_{\mathrm{sg}}^{1,p}(g)$.
\begin{definition}[Minimal topological resolution]\label{def:mintopres}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$, a Riemannian manifold $\mathcal N$ and $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$.
A topological resolution of the map $g$ is a \emph{minimal topological resolution} of $g$ whenever its charges $(\gamma_i)_{i = 1,\dots,k}$ satisfy \(\lambda (\gamma_i) > 0\) and
\begin{equation*}
\mathcal E_{\mathrm{sg}}^{1,2}(g) =\sum_{i =1}^k\frac{\lambda(\gamma_i)^{2}}{4\pi}>0.
\end{equation*}
\end{definition}
By \eqref{eq_ius6Cei3Tahwae2ahpoh5Iph}, if $\sys(\mathcal N) > 0$, the number $k$ of singularities of a minimal topological resolution satisfies \[
k \leq 4\pi \mathcal E_{\mathrm{sg}}^{1,2}(g)/\sys(\mathcal N)^{2}.
\]
\begin{definition}[Atomicity] \label{def:atomicity}
A map $\gamma \in W^{\sfrac{1}{2},2}(\mathbb{S}^1,\mathcal N)$ is said to be \emph{atomic} if
\begin{equation*}
\mathcal E_{\mathrm{sg}}^{1,2}(\gamma) =\frac{\lambda(\gamma)^{2}}{4\pi}.
\end{equation*}
\end{definition}
For instance, when $\mathcal N = \mathbb S^1$, the identity map $\mathrm{Id}_{\mathbb{S}^1} : z \in \mathbb{S}^1 \mapsto z \in \mathbb S^1$ and its conjugate are atomic maps.
We conclude this section with the following observation.
\begin{lemma}[Atomicity in minimal topological resolutions]\label{lemma:atomicityinminialtop}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$, a Riemannian manifold $\mathcal N$ and $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$ as well as a minimal topological resolution $(\gamma_i)_{i = 1,\dots,k}$ of the map $g$. For each $i = 1,\dots,k$ the map $\gamma_i$ is atomic.
\end{lemma}
Here and in the sequel we use the notation
\begin{equation}\label{eq:non-intersecting_radius}
\rho(\{a_i\}_{i = 1,\dots,\kappa}) \doteq \min\{\dist(\partial \Omega, a_i), |a_i - a_j|: i \neq j, \, i,j = 1,\dots,\kappa\}
\end{equation}
for a quantity that will play the role of a non-intersecting radius of balls.
\begin{proof}[Proof of lemma \ref{lemma:atomicityinminialtop}]
Fix $j \in \{1,\dots,k\}$ and $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,k}))$. Let $u$ be the map that carries the topological resolution. By definition of the singular energy,
\begin{equation*}
\begin{split}
\mathcal E_{\mathrm{sg}}^{1,2}(g)
= \sum_{i =1}^k\frac{\lambda(\gamma_i)^{2}}{4\pi }
&\geq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial (\bigcup_{i \neq j} \B(a_i;\rho))}u) + \frac{\lambda(\gamma_j)^{2}}{4\pi} \\
& \geq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial (\bigcup_{i \neq j} \B(a_i;\rho))}u) + \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \B(a_j;\rho)}u) \geq \mathcal E_{\mathrm{sg}}^{1,2}(g).
\end{split}
\end{equation*}
This forces \( \frac{\lambda(\gamma_j)^{2}}{4\pi} = \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \B(a_j;\rho)}u) = \mathcal E_{\mathrm{sg}}^{1,2}(\gamma_j)\).
\end{proof}
\subsection{Renormalized energy of a mapping, \texorpdfstring{$\mathcal E_{\ren}^{1,2}$}{Eren12}}
As $p \nearrow 2$, minimizing $p$-harmonic maps converge to minimizers of the following renormalized energy, introduced in \cite{monteil2021renormalised}:
\begin{definition}[Renormalized energy and renormalizable Sobolev maps]\label{def:Eren}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a Riemannian manifold $\mathcal N$. A measurable map $u : \Omega \to \mathcal N$ is said to be \emph{renormalizable} whenever there exist $k \in \bb{N}$ distinct points $a_1, \dots, a_k \in \Omega$ such that $u \in W^{1,2}(\Omega \setminus \{a_1, \dots, a_k\},\mathcal N)$, $\lambda([u,a_i])>0$ for every $i = 1,\dots,k$, and its \emph{renormalized energy}
\begin{equation}
\label{eq_tojie4OoLoos8ho5Oorua1Sh}
\mathcal E_{\ren}^{1,2}(u) \doteq \varliminf_{\rho \searrow 0}\int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho)}\frac{|Du|^2}{2} - \sum_{i = 1}^k\frac{\lambda([u,a_i])^2}{4\pi}\log\frac{1}{\rho}
\end{equation}
is finite.
\end{definition}
For example, if $\Omega = \B$ is the unit ball of $\bb{R}^2$, $\mathcal N = \mathbb{S}^1$ and $g = \Id_{\mathbb{S}^1} : \mathbb{S}^1 \doteq \partial \B \to \mathbb{S}^1$ is the identity map, then the \emph{vortex map} (or \emph{hedgehog map}) $u_h(x) = \sfrac{x}{|x|}$ belongs to the set $W^{1,2}_{\mathrm{ren}}(\B ,\mathbb{S}^1)$ of renormalizable maps. Moreover, $\mathcal E_{\ren}^{1,2}(u_h) = 0$.
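Indeed, $|Du_h(x)| = 1/\abs{x}$ and $\lambda([u_h,0]) = 2\pi$, so that for every $\rho \in (0,1)$,
\begin{equation*}
\int_{\B \setminus \B(0;\rho)} \frac{|Du_h|^2}{2} = \pi \log \frac{1}{\rho} = \frac{\lambda([u_h,0])^2}{4\pi} \log\frac{1}{\rho},
\end{equation*}
and the two terms in \eqref{eq_tojie4OoLoos8ho5Oorua1Sh} cancel.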
We point out that the existence of such maps is not granted when the manifold $\mathcal N$ is not compact; see proposition \ref{prop:wrenpeutetreempty}.
The inferior limit $\varliminf$ in \eqref{eq_tojie4OoLoos8ho5Oorua1Sh} is actually a limit, as
\begin{equation}\label{eq:decreasingLemma}\displaystyle\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,k})) \mapsto \int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho)}\frac{|Du|^2}{2} - \sum_{i = 1}^k\frac{\lambda([u,a_i])^2}{4\pi}\log\frac{1}{\rho}
\end{equation}
is non-increasing; see \cite[Proposition 2.10]{monteil2021renormalised} for a proof that only assumes $\sys(\mathcal N) > 0$.
We finally give an expression of the renormalized energy that does not involve a limit. It splits the expression of $\mathcal E_{\ren}^{1,2}$ into two parts (an $L^2$-part and a renormalized part).
\begin{proposition}[Integral representation of the renormalized energy]\label{prop:polarCoordEren}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a Riemannian manifold $\mathcal N$. If $u \in W^{1,2}_{\mathrm{ren}}(\Omega,\mathcal N)$ is a renormalizable map associated to the distinct points $a_i \in\Omega$, $i = 1,\dots,k$, then, for every $\sigma \in (0,\rho(\{a_i\}_{i = 1,\dots,k}))$
\begin{multline}\label{eq:erenPolarCoord}
\mathcal E_{\ren}^{1,2}(u) + \sum_{i = 1}^k \frac{\lambda([u,a_i])^2}{4\pi}\log \frac{1}{\sigma} \\= \int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\sigma)}\frac{|Du|^2}{2} + \sum_{i =1}^k \int_0^\sigma\left [\int_{\partial \B(a_i;r)}\frac{|Du|^2}{2}\d \HH^1 - \frac{\lambda([u, a_i])^2}{4\pi r} \right] \d r.
\end{multline}
Conversely, if $u \in W^{1,2}_{\mathrm{loc}}(\Omega \setminus \{a_i\}_{i = 1,\dots,k},\mathcal N)$ and there exists a $\sigma \in (0,\rho(\{a_i\}_{i = 1,\dots,k}))$ such that the right-hand side of \eqref{eq:erenPolarCoord} is finite, then $u \in W^{1,2}_{\mathrm{ren}}(\Omega,\mathcal N)$.
\end{proposition}
We recall that the non-intersection radius $\rho(\{a_i\}_{i = 1,\dots,k})$ was defined in \eqref{eq:non-intersecting_radius}.
\begin{proof}[Proof of proposition \ref{prop:polarCoordEren}]
By the existence of the limit in the expression of the renormalized energy (see \eqref{eq:decreasingLemma}), we have
\begin{equation*}
\begin{split}
\mathcal E_{\ren}^{1,2}(u) &+ \sum_{i = 1}^k \frac{\lambda([u, a_i])^2}{4\pi}\log\frac{1}{\sigma} \\
&= \lim_{\rho \searrow 0} \int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\sigma)}\frac{|Du|^2}{2} + \sum_{i = 1}^k \left [ \int_{\B(a_i;\sigma) \setminus \B(a_i;\rho)} \frac{|Du|^2}{2} - \frac{\lambda([u, a_i])^2}{4\pi}\log\frac{\sigma}{\rho}\right ] \\
&= \int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\sigma)}\frac{|Du|^2}{2} + \sum_{i = 1}^k\lim_{\rho \searrow 0} \int_\rho^{\sigma} \left[\int_{\partial \B(a_i;r)}\frac{|Du|^2}{2}\d \mathscr H^1 - \frac{\lambda([u, a_i])^2}{4\pi r}\right]\d r.
\end{split}
\end{equation*}
The terms in brackets are non-negative by lemma \ref{lemma:loc_lower_bound_circle}. As a consequence, Levi's monotone convergence theorem applies. The converse statement follows from the same computation.
\end{proof}
When $p\in [1,2)$, we introduce the following $p$-renormalized energy
\begin{equation}
\label{eq_caerieNuoGheich6fohquooz}
\begin{split}
\mathcal E_{\ren}^{1,p}(u) &\doteq \int_{\Omega}\frac{|Du|^p}{p} - \sum_{i = 1}^k\frac{\lambda([u,a_i])^p}{(2\pi)^{p - 1} p}\frac{1}{2 - p} \\
&= \lim_{\rho \searrow 0}\int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho)}\frac{|Du|^p}{p} - \sum_{i = 1}^k\frac{\lambda([u,a_i])^p}{(2\pi)^{p - 1} p}\frac{1 - \rho^{2 - p}}{2 - p},
\end{split}
\end{equation}
defined for maps $u \in W^{1,2}_{\mathrm{ren}}(\Omega,\mathcal N)$. In \cite[Theorem 5.1]{monteil2021renormalised}, it is shown that if $u \in W^{1,2}_{\mathrm{ren}}(\Omega, \mathcal N)$ then $Du \in L^{2,\infty}(\Omega)$, see also corollary \ref{coro:erenweakL2}.
In particular $W^{1,2}_{\mathrm{ren}}(\Omega, \mathcal N) \subset W^{1,p}(\Omega,\mathcal N)$ for $p \in [1,2)$ since the weak-$L^2$ (Marcinkiewicz) space satisfies $L^{2,\infty}(\Omega) \subset L^p(\Omega)$, see \textit{e.g.} \cite[Theorem 5.9]{castillo2016introductory}.
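As an example, for the vortex map $u_h(x) = \sfrac{x}{|x|}$ on the unit ball $\B$, a direct computation gives, for $p \in [1,2)$,
\begin{equation*}
\int_{\B}\frac{|Du_h|^p}{p} = \int_0^1 \frac{2\pi r}{p\, r^{p}}\, \d r = \frac{2\pi}{p(2 - p)} = \frac{\lambda([u_h,0])^p}{(2\pi)^{p - 1}p}\frac{1}{2 - p},
\end{equation*}
so that $\mathcal E_{\ren}^{1,p}(u_h) = 0$ for every $p \in [1,2)$, in accordance with proposition \ref{prop:limitErenpToEren} and the identity $\mathcal E_{\ren}^{1,2}(u_h) = 0$.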
\begin{proposition}\label{prop:limitErenpToEren}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a Riemannian manifold $\mathcal N$ with $\sys(\mathcal N) > 0$.
If $u \in W^{1,2}_{\mathrm{ren}}(\Omega,\mathcal N)$ then
\begin{equation*}
\mathcal E_{\ren}^{1,p}(u) \xrightarrow{p \nearrow 2} \mathcal E_{\ren}^{1,2}(u).
\end{equation*}
\end{proposition}
The proof of proposition~\ref{prop:limitErenpToEren} relies on the following Hölder-type estimate:
\begin{lemma}\label{lemma:usefulmajoration}
Let $p \in [1,2)$ and $u \in W^{1,2}_{\mathrm{loc}}(\B(a;\rho)\setminus\{a\},\mathcal N)$. If $\lambda([u,a]) > 0$, then, for every $r \in (0,\rho)$,
\begin{equation}\label{eq:leb_hat}
\int_{\partial \B(a;r)}\frac{|Du|^p}{p}\d \HH^1 - \frac{\lambda([u, a])^p}{p(2\pi r)^{p - 1}} \leq \Big (\frac{2\pi r}{\lambda([u,a])} \Big )^{2 - p} \left [\int_{\partial \B(a;r)}\frac{|Du|^2}{2}\d \HH^1 - \frac{\lambda([u, a])^2}{4\pi r} \right ]
\end{equation}
and
\begin{equation}
\int_{\B(a;\rho)}\frac{|Du|^p}{p} - \frac{\lambda([u,a])^p}{p(2\pi |x - a|)^{p}} \d x \leq \Bigl(\frac{2\pi \rho}{\lambda([u,a])}\Bigr)^{2 - p} \int_{\B(a;\rho)}\frac{|Du|^2}{2}- \frac{\lambda([u,a])^2}{8\pi^2 |x - a|^2} \d x
\end{equation}
provided the right-hand sides are finite.
\end{lemma}
\begin{proof}[Proof of lemma \ref{lemma:usefulmajoration}]
By Young's inequality with the conjugate exponents $\sfrac{2}{p}$ and $\sfrac{2}{(2 - p)}$,
\begin{equation*}
|Du|^p \Big ( \frac{\lambda([u,a])}{2\pi r}\Big)^{2 - p} \leq \frac{p}{2} |Du|^2 + \Bigl(1 - \frac{p}{2}\Bigr) \Big ( \frac{\lambda([u,a])}{2\pi r}\Big)^{2}.
\end{equation*}
Hence,
\begin{equation*}
\frac{|Du|^p}{p} - \frac{1}{p}\Big ( \frac{\lambda([u,a])}{2\pi r}\Big)^{p}
\leq \Big (\frac{2\pi r}{\lambda([u,a])} \Big )^{2 - p} \left [ \frac{|Du|^2}{2} - \frac{1}{2}\Big ( \frac{\lambda([u,a])}{2\pi r}\Big)^{2} \right ].
\end{equation*}
Integrating on $\partial \B(a;r)$, we obtain \eqref{eq:leb_hat}.
The second conclusion follows by integrating \eqref{eq:leb_hat} in $r \in (0,\rho)$, since $r \mapsto (2\pi r/\lambda([u,a]))^{2 - p}$ is non-decreasing and the bracketed integrals over circles are non-negative by lemma \ref{lemma:loc_lower_bound_circle}.
\end{proof}
\begin{proof}[Proof of proposition \ref{prop:limitErenpToEren}]
Let $u \in W^{1,2}_{\mathrm{ren}}(\Omega,\mathcal N)$ and let $a_1,\dots,a_k \in \Omega$ be the distinct points associated with it (see definition \ref{def:Eren}). Fix $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,k}))$.
By Young's inequality,
\begin{equation}\label{eq:lebHat}
\frac{|Du|^p}{p} \leq \frac{|Du|^2}{2} + \frac{2 - p}{2p} \leq \frac{|Du|^2}{2} + \frac{1}{2}.
\end{equation}
By Lebesgue's dominated convergence theorem with \eqref{eq:lebHat} as a Lebesgue dominant, since \(\Omega \setminus\bigcup_{i = 1}^k \B(a_i;\rho)\) has finite Lebesgue measure, we get
\begin{equation}\label{eq:loindespoints}
\int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho)}\frac{|Du|^p}{p} \xrightarrow{p \nearrow 2} \int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho)}\frac{|Du|^2}{2}.
\end{equation}
We now study the other part of the renormalized energy, applying Lebesgue's dominated convergence theorem with a dominant provided by \eqref{eq:leb_hat} in lemma \ref{lemma:usefulmajoration}.
Since, by Fubini's theorem, $|Du| \in L^2(\partial \B(a_i;r))$ for almost every $r \in (0,\rho)$, Lebesgue's dominated convergence theorem on such circles, with the dominant \eqref{eq:lebHat}, yields for almost every $r \in (0,\rho)$,
\begin{equation}\label{eq:ae_conv}
\int_{\partial \B(a_i;r)}\frac{|Du|^p}{p} \xrightarrow{p \nearrow 2} \int_{\partial \B(a_i;r)}\frac{|Du|^2}{2}.
\end{equation}
In view of the bound \eqref{eq:leb_hat} and of the convergence \eqref{eq:ae_conv}, we get by Lebesgue's dominated convergence theorem that
\begin{equation}\label{eq:presdespouints}
\int_0^\rho \int_{\partial \B(a_i;r)}\frac{|Du|^p}{p}\d \HH^1 - \frac{\lambda([u,a_i])^p}{p (2\pi r)^{p - 1}} \d r \xrightarrow{p \nearrow 2} \int_{0}^\rho \int_{\partial \B(a_i;r)}\frac{|Du|^2}{2}\d \HH^1 - \frac{\lambda([u,a_i])^2}{4\pi r} \d r.
\end{equation}
Combining \eqref{eq:loindespoints} and \eqref{eq:presdespouints} and proposition \ref{prop:polarCoordEren}, we get that $\mathcal E_{\ren}^{1,p}(u) \to \mathcal E_{\ren}^{1,2}(u)$ when $p\nearrow 2$.
\end{proof}
\subsection{Existence of renormalizable mappings}
If the manifold $\mathcal N$ is compact, there always exists a renormalizable mapping (see proposition \ref{prop:tralalala}), while in the non-compact case we exhibit a manifold on which every renormalizable mapping has a boundary trace homotopic to a constant (see proposition \ref{prop:wrenpeutetreempty}).
\begin{proposition}\label{prop:tralalala}
Let $\Omega \subset \bb{R}^2$ be a Lipschitz domain, $\mathcal N$ be a Riemannian manifold and $g \in W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$. If the manifold $\mathcal N$ is compact, there exists $u \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ such that $\Tr_{\partial \Omega}u = g$.
\end{proposition}
We recall that in the present paper every Riemannian manifold is assumed to be connected as explained in the introduction (section \ref{section:intro}).
\begin{proof}[Proof of proposition \ref{prop:tralalala}]
Taking $\delta>0$ sufficiently small, the set $\Omega_\delta \doteq\{x \in \overline{\Omega} : \dist(x,\partial \Omega)\leq \delta\}$ is bi-Lipschitz homeomorphic to $\partial \Omega \times [0,1]$. As every map in $W^{\sfrac{1}{2},2}(\partial \Omega,\mathcal N)$ is homotopic to a smooth map, we realize this homotopy on $\Omega_\delta \simeq \partial \Omega \times [0,1]$. We may therefore assume that $g$ is smooth.
We may also assume that $\Omega$ is simply connected: by the connectedness of $\mathcal N$, one can connect the values of $g$ on the different components of $\partial \Omega$ by paths in $\mathcal N$, realized over segments in $\Omega$. Taking a finite number of such segments, one partitions $\Omega$ into simply connected rooms delimited by the union of $\partial \Omega$ and these segments.
Assuming that $\Omega$ is simply connected, $\partial \Omega$ is homeomorphic to the circle $\mathbb S^1$. By compactness of the manifold $\mathcal N$, there exists a smooth constant-speed map $\gamma : \mathbb S^1 \to \mathcal N$ such that $\gamma$ is homotopic to $g$ and \(\lambda(g) = \ell(\gamma) = 2\pi \|\gamma'\|_\infty\). This is an instance of the existence of constant-speed geodesics in each homotopy class (see \cite[Proposition 6.28]{lee2018riem}); if $g$ is homotopic to a constant, $\gamma$ is taken to be constant. There exist a small $\varepsilon> 0$ and $a \in \Omega$ such that $\Omega\setminus \B(a;\varepsilon)$ is homeomorphic to $\mathbb S^1 \times [0,1]$, and we realize the homotopy between $g$ and $\gamma$ there.
Combining the homotopies described above, we obtain a map $u\in C^\infty(\Omega\setminus \B(a;\varepsilon), \mathcal N)\cap W^{1,2}(\Omega\setminus \B(a;\varepsilon), \mathcal N)$. We further define, for all $x \in \B(a;\varepsilon)$,
\[
u(x) \doteq \gamma\Big (\frac{x - a}{|x - a|}\Big )
\]
and observe that $|D u(x)| = \|\gamma'\|_\infty / |x - a|$ so that for $\rho \in (0,\varepsilon)$
\[
\int_{\B(a;\varepsilon)\setminus\B(a;\rho)}|D u(x)|^2 = \frac{\lambda([g])^2}{2\pi}\log\frac{\varepsilon}{\rho},
\]
showing the renormalized character of $u$ near $a$.
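For the reader's convenience, the identity follows from a computation in polar coordinates centered at $a$: since $\lambda([g]) = \ell(\gamma) = 2\pi\|\gamma'\|_\infty$,
\[
\int_{\B(a;\varepsilon)\setminus\B(a;\rho)}|D u|^2 = \int_\rho^\varepsilon \int_0^{2\pi} \frac{\|\gamma'\|_\infty^2}{r^2}\, r \d\theta \d r = 2\pi \|\gamma'\|_\infty^2 \log\frac{\varepsilon}{\rho} = \frac{\lambda([g])^2}{2\pi}\log\frac{\varepsilon}{\rho}.
\]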
\end{proof}
\begin{proposition} \label{prop:wrenpeutetreempty}
Let $\Omega \subset \bb{R}^2$ be a Lipschitz simply connected domain and $(\mathcal N, g)$ be the Riemannian manifold defined by $\mathcal N \doteq \mathbb S^1 \times \bb{R} \subset \bb{R}^3$ with metric defined for every $(v,t) \in \mathbb S^1 \times \bb{R}$ and \(h \in T_{(v, t)} (\mathbb S^1 \times \bb{R}) \subseteq \bb{R}^3\) by
\begin{equation}
g_{(v,t)}(h) \doteq (1 + f(t))\lvert h \rvert^2 \text{ where }
f(t) \doteq \frac{1}{\sqrt{1 + t^2}}.\label{eq:sdfjk}
\end{equation}
If $u \in W^{1,2}_{\mathrm{ren}}(\Omega,\mathcal N)$, then $\Tr_{\partial \Omega}u$ is homotopic to a constant.
\end{proposition}
In proposition \ref{prop:wrenpeutetreempty}, the manifold $\mathcal N$ is not presented as an isometrically embedded submanifold of $\bb{R}^3$. The proof relies on the fact that $f$ is not integrable in the complement of any compact set.
\begin{proof}[Proof of proposition \ref{prop:wrenpeutetreempty}]
The proof first reduces to the case of $\Omega = \B_1$ by a change of variable in the Dirichlet energy.
Assume by contradiction that there is a map $u \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ such that $\Tr_{\partial \Omega}u$ is not homotopic to a constant. Then there exist $a \in \Omega$ and $\sigma > 0$ such that $u \in W^{1,2}_\mathrm{loc}(\B(a;\sigma)\setminus \{a\},\mathcal N)$.
We may further assume that $a = 0$, $\sigma = 1$ and $\Tr_{\mathbb S^1}u$ is not homotopic to a constant.
We finally write $\lambda \doteq \lambda([u,0]) = 2\pi\,\lvert \deg ([u,0])\rvert$.
We denote by $\pi_{\mathbb S^1} : \mathbb S^1 \times \bb{R} \to \mathbb S^1$ the projection on the circle and by $\pi_{\bb{R}} : \mathbb S^1 \times \bb{R} \to \bb{R}$ the projection on the real line, and we claim that $\pi_{\bb{R}} \circ u$ is unbounded.
If $\gamma \in W^{1,2}(\mathbb S^1,\mathcal N)$ is such that $\pi_{\mathbb S^1} \circ \gamma$ is not homotopic to a constant, then by lemma \ref{lemma:loc_lower_bound_circle},
\begin{align*}
\int_{\mathbb S^1}|\gamma'|_g^2 &\geq \int_{\mathbb S^1}(1 + f \circ \pi_{\bb{R}} )|(\pi_{\mathbb S^1} \circ \gamma)'|^2 + \int_{\mathbb S^1} |(\pi_{\bb{R}} \circ \gamma)'|^2 \\
&\geq \frac{\lambda^2}{2\pi}(1 + \inf_{\mathbb S^1}f\circ \pi_{\bb{R}} \circ \gamma) + \frac{1}{2\pi}(\sup_{\mathbb S^1}|\pi_{\bb{R}} \circ \gamma| - \inf_{\mathbb S^1} |\pi_{\bb{R}} \circ \gamma|)^2.
\end{align*}
Let us fix $t>0$. If
\[
\int_{\mathbb S^1}|\gamma'|_g^2 \leq \frac{\lambda^2}{2\pi} + t
\]
then
\(
t \geq\frac{\lambda^2}{2\pi}\inf_{\mathbb S^1}f\circ \pi_{\bb{R}} \circ \gamma + \frac{1}{2\pi}(\sup_{\mathbb S^1}|\pi_{\bb{R}} \circ \gamma| - \inf_{\mathbb S^1} |\pi_{\bb{R}} \circ \gamma|)^2,
\)
meaning that
\[
\sup_{\mathbb S^1} |\pi_{\bb{R}} \circ \gamma|^2
\geq \Bigl (\frac{\lambda^2}{2\pi t}\Bigr)^2 - 1
\text{ and }
\inf_{\mathbb S^1} |\pi_{\bb{R}} \circ \gamma| \geq \sup_{\mathbb S^1}|\pi_{\bb{R}} \circ \gamma|-\sqrt{2\pi t} \geq D(t)
\] where $D(t) \doteq ( (\frac{\lambda^2}{2\pi t} )^2 - 1)_+^\frac{1}{2} -\sqrt{2\pi t}.$
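For the reader's convenience, let us detail how these estimates follow. Since $f(s) = (1 + s^2)^{-\sfrac{1}{2}}$ is even and decreasing in $|s|$, one has
\[
\frac{2\pi t}{\lambda^2} \geq \inf_{\mathbb S^1}f\circ \pi_{\bb{R}} \circ \gamma = \frac{1}{\sqrt{1 + \sup_{\mathbb S^1}|\pi_{\bb{R}} \circ \gamma|^2}},
\]
so that $1 + \sup_{\mathbb S^1}|\pi_{\bb{R}} \circ \gamma|^2 \geq (\frac{\lambda^2}{2\pi t})^2$, while the bound on the oscillation gives $\sup_{\mathbb S^1}|\pi_{\bb{R}} \circ \gamma| - \inf_{\mathbb S^1}|\pi_{\bb{R}} \circ \gamma| \leq \sqrt{2\pi t}$.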
Since $u \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ there exists $M>0$ such that for all $\rho >0$ there exists $r\in (\rho,1)$ satisfying
\[
\log \frac{1}{\rho} \int_{\mathbb S^1}|(u(r\cdot))'|_g^2 \leq\int_{\B_1\setminus\B_\rho} \frac{|Du|^2_g}{2} \leq M + \frac{\lambda^2}{2\pi} \log \frac{1}{\rho}.
\]
We therefore deduce that for each $\rho \in (0,1)$ there exists $r\in (\rho,1)$ such that
\[
\inf_{\mathbb S^1} |\pi_{\bb{R}} \circ u(r\cdot)| \geq D\Big(\frac{M}{\log \frac{1}{\rho}}\Big)
\]
which implies the uniform divergence of the map $u$, as the right-hand side tends to infinity when $\rho$ approaches zero.
We now claim that the set $\pi_{\bb{R}} \circ u(\B_1\setminus \B_\rho)$ contains the whole interval $[0, \inf_{\mathbb S^1}|\pi_{\bb{R}}\circ u(r\cdot)|]$. By approximation, we assume that the map is continuous on $\B_1\setminus\B_\rho$: this follows from \cite{bousquet2017density} and the fact that $\mathcal N$ satisfies the \emph{trimming property} \cite[Proposition 6.3]{bousquet2017density}; we leave the details to the reader.
Assume there exists a point $x \in \mathbb S^1 \times \bb{R}$ such that $x \not\in u(\overline{\B_1\setminus \B_\rho})$. We then observe that $(\mathbb S^1 \times [0,1])\setminus \{a\}$ (where $a\in \mathbb S^1\times(0,1)$) is homotopy equivalent to two copies of $\mathbb{S}^1$ joined by a segment. Using this homotopy one can retract the image $u(\overline{\B_1\setminus \B_\rho})$ and construct a continuous mapping
\(\mathbb S^1\times [0,1] \simeq \B_1\setminus \B_\rho \to \mathcal N \xrightarrow{\pi_{\mathbb S^1}} \mathbb S^1\)
that is a homotopy between two maps of nonzero degree at $t = 0,1$ but is constant near $t = 1/2$, which is impossible.
Since $|Du|_g^2 = |Du|^2 + (f \circ\pi_{\bb{R}}\circ u) |Du|^2$
and $|Du|^2 \geq 2|\partial_1 u \times_{\bb{R}^3} \partial_2 u|$, we obtain by the area formula \cite[Theorem 1.6]{giaquinta2006area} that
\begin{align*}
\int_{\B_1 \setminus\B_\rho} \frac{|Du|^2_g}{2} - \frac{\lambda^2}{2\pi}\log\frac{1}{\rho} &\geq \int_{\B_1 \setminus\B_\rho}(f\circ \pi_{\bb{R}} \circ u)\,|\partial_1 u \wedge \partial_2 u| \\
&\geq \int_{\mathcal N} f\circ \pi_{\bb{R}}(y)\,\mathcal H^0\big((\B_1 \setminus\B_\rho) \cap {u^{-1}(\{y\})}\big) \d y \\
&\geq \int_{0}^{\inf_{\mathbb S^1}|\pi_{\bb{R}}\circ u(r\cdot)|} f(y)\d y
\end{align*}
since the image $\pi_{\bb{R}} \circ u(\B_1\setminus \B_\rho)$ contains the interval $[0, \inf_{\mathbb S^1}|\pi_{\bb{R}}\circ u(r\cdot)|]$.
When $\rho \searrow 0$, the left-hand side remains bounded since the map $u$ is renormalizable, while by the choice of $f \not\in L^1(\bb{R})$ the right-hand side is unbounded; this contradicts the fact that $u \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$.
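Indeed, with the choice \eqref{eq:sdfjk} the divergence of the right-hand side is explicit:
\[
\int_0^T f(t)\d t = \int_0^T\frac{\d t}{\sqrt{1 + t^2}} = \log\big(T + \sqrt{1 + T^2}\big) \to +\infty \quad \text{as } T \to +\infty.
\]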
\end{proof}
\subsection{Minimizing renormalizable singular harmonic maps}
In this section we recall the notion of minimizing renormalizable singular harmonic map.
\begin{definition}\label{def:mineren}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a Riemannian manifold $\mathcal N$ with $\sys(\mathcal N) > 0$. A map $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ associated with $k$ distinct points $\{a_i\}_{i = 1,\dots,k} \subset \Omega$ is a \emph{minimal renormalizable singular harmonic map} (see \cite[Definition 7.8]{monteil2021renormalised}) whenever for every map $v \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ associated with $k$ distinct points $\{b_i\}_{i = 1,\dots,k} \subset \Omega$ such that $\Tr_{\partial\Omega}u_* = \Tr_{\partial\Omega}v$ and as sets \[\{([u_*,a_1],a_1),\dots, ([u_*,a_k],a_k)\} = \{([v,b_1],b_1),\dots, ([v,b_k],b_k)\},\] we have \( \mathcal E_{\ren}^{1,2}(u_*) \leq \mathcal E_{\ren}^{1,2}(v).\)
\end{definition}
When the manifold $\mathcal N$ is compact, minimal renormalizable maps enjoy the following properties. Our analysis does not rely on these properties.
\begin{proposition}\label{prop:reg_of_ren_map}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a compact Riemannian manifold $\mathcal N$. Let $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$, associated with $k$ points $\{a_i\}_{i = 1,\dots,k} \subset \Omega$, be a \emph{minimizing renormalizable singular harmonic map}. Then
\begin{enumerate}[(i)]
\item $u_* \in C^\infty(\Omega\setminus \{a_i\}_{i = 1,\dots,k},\mathcal N)$,
\item \label{item:behviooir_neara pont} for each $i = 1,\dots,k$, $\sup_{x \in \B(a_i;\rho(\{a_i\}_{i = 1,\dots,k}))}|x - a_i||Du_*(x)| < +\infty$,
\item for each $\rho \in (0, \rho(\{a_i\}_{i = 1,\dots,k}))$, $u_*$ is a minimizing harmonic map on $\Omega \setminus \bigcup_{i = 1}^k \B(a_i;\rho)$ with respect to its own boundary conditions provided $\Omega \setminus \bigcup_{i = 1}^k \B(a_i;\rho)$ is a Lipschitz domain.
\end{enumerate}
\end{proposition}
\subsection{Renormalized energy of configuration of points}
Following \cite[(2.8) and (2.11)]{monteil2021renormalised}, one can define the following renormalized energy of a configuration of points for a topological resolution $(\gamma_1,\dots,\gamma_k)$ of $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$
\begin{multline}
\label{eq_def_renorm_top}
\mathcal{E}^{1, 2}_{\mathrm{top}, g, \gamma_1, \dots, \gamma_k} (a_1, \dotsc, a_k)\\
\doteq \inf
\Bigg\{\int_{\Omega \setminus \bigcup_{i = 1}^k \B(a_i; \rho)}
\frac{\abs{D u}^2}{2} - \sum_{i = 1}^k \frac{\lambda([u, a_i])^2}{4 \pi} \log \frac{1}{\rho}: \\
\begin{matrix} \rho \in (0, \rho(\{a_i\}_{i = 1,\dots,k})) \\
u \in W^{1, 2} (\Omega \setminus \textstyle \bigcup_{i = 1}^k \B(a_i;\rho), \mathcal N), \\
\Tr_{\partial \Omega} u = g, \;
\Tr_{\mathbb{S}^1} u (a_i + \rho \cdot) \text{ is homotopic to } \gamma_i
\end{matrix}
\Bigg \}.
\end{multline}
The renormalized energy is a locally Lipschitz function of configurations of distinct points, which is bounded from below when the singularities \([u, a_1], \dotsc, [u, a_k]\) form a minimal topological resolution of the boundary condition \(g\).
The renormalized energy of the configuration formed by the singularities provides a lower bound on the renormalized energy of a mapping \cite[Proposition 7.2]{monteil2021renormalised}.
\begin{proposition}
\label{proposition_ren_map_to_pts} Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a compact Riemannian manifold $\mathcal N$.
If $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$, if $\{a_i\}_{i = 1,\dots,k} \subset \Omega$ are the associated distinct singular points and if \(g = \Tr_{\partial \Omega} u_*\), then
\begin{equation*}
\mathcal{E}^{1, 2}_{\mathrm{top}, g, \gamma_1, \dotsc, \gamma_k} (a_1, \dotsc, a_k)
\le \mathcal{E}^{1, 2}_{\mathrm{ren}} (u_*)
\end{equation*}
where $\gamma_i \in [u_*,a_i]$, $i = 1,\dots,k$, are constant speed geodesics.
\end{proposition}
Conversely, given any configuration of points and choice of geodesics there exists a singular Sobolev map having homotopic singularities \cite[Proposition 8.2]{monteil2021renormalised}.
\begin{proposition}\label{proposition_ren_map_to_ptsII} Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a compact Riemannian manifold $\mathcal N$.
Given $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$, distinct points \(a_1, \dotsc, a_k \in \Omega\) and geodesics \(\gamma_1, \dotsc, \gamma_k : \mathbb{S}^1 \to \mathcal N\) forming a topological resolution of \(g\), there exists \(u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)\) with trace \(\Tr_{\partial \Omega} u_* = g\) and associated singular points \(a_1, \dotsc, a_k\) such that \(\gamma_i\) and \([u_*, a_i]\) are homotopic and
\begin{equation*}
\mathcal{E}^{1, 2}_{\mathrm{ren}} (u_*)
\le \mathcal{E}^{1, 2}_{\mathrm{top}, g, \gamma_1, \dotsc, \gamma_k} (a_1, \dotsc, a_k).
\end{equation*}
\end{proposition}
\section{Upper bound}\label{sec:upper_bound}
\label{subsec:upper_bound} Having introduced the renormalized energy, we now give the upper bound on sequences of minimizing $p$-harmonic maps.
\begin{proposition}[Upper bound]\label{prop:upperBound}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$, a Riemannian manifold $\mathcal N$ and $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$.
For all renormalizable $u \in W^{1,2}_{\mathrm{ren}}(\Omega,\mathcal N)$ associated to distinct points $a_i \in \Omega$, $i = 1,\dots,k$, if $\{([u,a_i],a_i)\}_{i = 1,\dots,k}$ is a minimal topological resolution of $\Tr_{\partial \Omega}u$, then
\begin{equation}\label{eq:upperBoundII}
\varlimsup_{p \nearrow 2}\left [ \int_\Omega\frac{|D u|^p}{p} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u)}{2 - p}\right ] \leq \mathcal E_{\ren}^{1,2}(u) + \mathrm{H}([u,a_i])_{i = 1,\dots,k}
\end{equation}
where
\begin{equation}\label{eq:defofH}
\mathrm{H}([u,a_i])_{i = 1,\dots,k} \doteq \sum_{i = 1}^k \frac{\lambda([u, a_i])^2}{8\pi} \Bigg [1+\log\Big (\frac{2\pi}{\lambda([u, a_i])}\Big )^2\Bigg ].
\end{equation}
\end{proposition}
\scauthor{Hardt} and \scauthor{Lin} established this result in the case of the circle $\mathcal N = \mathbb{S}^1$ \cite[Theorem 2.10]{hardt1995singularities}.
We observe that, when $\mathcal N = \mathbb{S}^1$, the singularities of a minimal topological resolution either all satisfy $\mathrm{deg}([u,a_i]) = 1$ or all satisfy $\mathrm{deg}([u,a_i]) =-1$, so that $\lambda([u,a_i]) = 2 \pi$ and, for every $k \in \bb{N}$, $\mathrm H([u,a_i])_{i = 1,\dots,k} = k \pi/2$.
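Indeed, substituting $\lambda([u,a_i]) = 2\pi$ into \eqref{eq:defofH} gives
\[
\mathrm H([u,a_i])_{i = 1,\dots,k} = \sum_{i = 1}^k \frac{(2\pi)^2}{8\pi}\Big[1 + \log 1\Big] = k\,\frac{\pi}{2}.
\]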
When $\Omega$ is simply connected, one further knows that $k = \lvert\deg (g, \partial\Omega)\rvert$.
The assumptions of proposition \ref{prop:upperBound} are consistent with the existence of minimizing $p$-harmonic maps $u_p \in W^{1,p}_g(\Omega,\mathcal N)$ (see proposition \ref{thm:HLthm}) and the proposition implies the first order bound
\begin{equation}\label{eq:fistorderupperbound}
\varlimsup_{p \nearrow 2}(2 - p)\int_\Omega \frac{|D u_p|^p}{p} \leq \sum_{i = 1}^k\frac{\lambda([u, a_i])^2}{4\pi}.
\end{equation}
In fact, by proposition \ref{prop:compactnessthm},
\begin{equation}\label{eq:fistordehfjkmdvkrupperbound}
\lim_{p \nearrow 2}(2 - p)\int_\Omega \frac{|D u_p|^p}{p} = \mathcal E_{\mathrm{sg}}^{1,2}(g).
\end{equation}
In particular the limit in \eqref{eq:fistordehfjkmdvkrupperbound} only depends on the homotopy class of $g$.
\begin{proof}[Proof of proposition \ref{prop:upperBound}]
We obtain
\begin{equation}\label{eq:upperBound}
\varlimsup_{p \nearrow 2}\left [ \int_\Omega \frac{|D u|^p}{p} - \frac{1}{2 - p}\sum_{i = 1}^k \frac{\lambda([u, a_i])^p}{(2\pi)^{p - 1} p}\right ] \leq \mathcal E_{\ren}^{1,2}(u)
\end{equation}
by proposition \ref{prop:limitErenpToEren} and the fact that $W^{1,2}_{\mathrm{ren},g}(\Omega,\mathcal N) \subset W^{1,p}_g(\Omega,\mathcal N)$.
To obtain \eqref{eq:upperBoundII} we subtract $\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u)/(2 - p)$ and use the fact that
\[
\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega} u) = \sum_{i = 1}^k \frac{\lambda([u, a_i])^2}{4\pi}
\]
and
\begin{multline}\label{eq:technicallimit}
\lim_{p \nearrow 2}\frac{1}{2 - p}\left [\sum_{i = 1}^k \frac{\lambda([u, a_i])^p}{(2\pi)^{p - 1} p} - \sum_{i = 1}^k \frac{\lambda([u, a_i])^2}{4\pi}\right ] \\= \frac{1}{2}\left [\sum_{i = 1}^k \frac{\lambda([u, a_i])^2}{4\pi} - \sum_{i = 1}^k \frac{\lambda([u, a_i])^2}{2\pi}\log\frac{\lambda([u, a_i])}{2\pi} \right]. \qedhere
\end{multline}
\end{proof}
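For the reader's convenience, the limit \eqref{eq:technicallimit} follows from a derivative computation. Setting $\phi(p) \doteq \lambda^p/((2\pi)^{p - 1}p)$ with $\lambda \doteq \lambda([u,a_i])$, one has $\log \phi(p) = p \log \lambda - (p - 1)\log 2\pi - \log p$, hence
\[
\lim_{p \nearrow 2}\frac{\phi(p) - \phi(2)}{2 - p} = -\phi'(2) = -\frac{\lambda^2}{4\pi}\Big(\log\frac{\lambda}{2\pi} - \frac{1}{2}\Big) = \frac{1}{2}\Big(\frac{\lambda^2}{4\pi} - \frac{\lambda^2}{2\pi}\log\frac{\lambda}{2\pi}\Big);
\]
summing over $i = 1,\dots,k$ yields \eqref{eq:technicallimit}.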
The compactness assumption on the manifold $\mathcal N$ in proposition \ref{prop:upperBound} can in fact be weakened to the condition that there exists at least one renormalizable map $u$ of trace $g$, \textit{i.e.}\ $W^{1,2}_{\mathrm{ren},g}(\Omega,\mathcal N) \neq \Oset$. Every compact manifold $\mathcal N$ satisfies $W^{1,2}_{\mathrm{ren},g}(\Omega,\mathcal N) \neq \Oset$ (proposition \ref{prop:tralalala}) while there exist non-compact manifolds carrying no renormalizable mappings (proposition \ref{prop:wrenpeutetreempty}).
\section{Lower bound}\label{section:lower_bound}
In order to state the proposition that will give us the lower bound we need to extend maps $u \in W^{1,p}(\Omega, \mathcal N)$ of trace $g \in W^{\sfrac{1}{2},2}(\partial\Omega,\mathcal N)$ to a larger domain $\Omega_\delta$.
\subsection{Nonlinear extension of Sobolev maps}
Unlike in the linear case $W^{\sfrac{1}{2},2}(\partial \Omega,\bb{R}^\nu)$, the trace operator is only locally surjective, in the following sense (see \cite{bethuel1995extensions}):
\begin{proposition}\label{prop:non-surjectivity_of_the_trace}
Fix an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$ and a Riemannian manifold $\mathcal N$.
There exists $\delta = \delta(\partial\Omega) > 0$, depending only on the boundary $\partial\Omega$, such that for every boundary datum $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$ one can construct a map $U \in W^{1,2}(\partial \Omega_\delta,\mathcal N)$ satisfying $\Tr_{\partial \Omega}U = g$.
\end{proposition}
Here $\partial \Omega_\delta$ means $\{x \in \bb{R}^2 : \dist(x,\partial \Omega) < \delta\}$. The proof of this proposition is a variant of \cite[Theorem 1]{mironescu_trace_2020}. We point out that there is no quantitative estimate on \(U\) \cite{vanschaftingen2021sobolev}.
Given the previous proposition, one can extend a map $u \in W^{1,p}_g(\Omega,\mathcal N)$, $p \in [1,2]$, to a map $\Bar{u} \in W^{1,p}_g(\Omega_\delta,\mathcal N)$ where $\Omega_\delta \doteq \Omega \cup \{x \in \bb{R}^2 : \dist(x,\partial \Omega) < \delta\}$. Indeed, given the map $U$ of proposition \ref{prop:non-surjectivity_of_the_trace} we set
\begin{equation}\label{eq:extension_of_maps}
\bar u \doteq \begin{cases}u & \text{ on } \Omega \\
U\big|_{\Omega_\delta \setminus \Omega}& \text{ on } \Omega_\delta \setminus \Omega.
\end{cases}
\end{equation}
Along the boundary $\partial \Omega$ the traces of \(U\) and \(u\) coincide. Hence, by the integration by parts formula, the extended map $\bar u$ possesses a weak derivative on the extended domain $\Omega_\delta$ \cite{vanschaftingen2021sobolev}. For the sake of simplicity we will denote the extended map by $u$ instead of $\bar u$. We note that this extension is not canonical, as the map $U$ is in general not unique.
\subsection{Approximation by smooth maps except at finitely many points}
The dense class we will use takes its roots in the work of Fabrice \textsc{Bethuel} \cite[Theorem 2]{bethuel_approximation_1991} (see also \cite{vanschaftingen2021sobolev}).
The following density result is due to Pierre \textsc{Bousquet}, Augusto \textsc{Ponce} and Jean \textsc{Van Schaftingen} \cite{bousquet_strong_2015,bousquet2017density} (see also \cite{ponce_closure_2009}).
\begin{proposition}\label{prop:density_of_the_R_class}
Fix $p \in (1,2)$, an open bounded Lipschitz domain $\Omega \subset \bb{R}^2$, a Riemannian manifold $\mathcal N$ and a map $u \in W^{1,p}(\Omega,\mathcal N)$. Assume that there exists an open set $\omega \subset\subset \Omega$ with $u \in W^{1,2}(\Omega\setminus \bar\omega,\mathcal N)$.
Then, for every $\varepsilon> 0$ and every $\theta \in (0,1)$, setting
\[\omega_\theta \doteq \{x\in \Omega : \dist(\omega,x) < \theta \dist(\omega,\partial\Omega)\},\] there exists a map $v \in W^{1,p}(\omega_\theta,\mathcal N)$ such that
\begin{enumerate}[(i)]
\item there exist points $\{a_i\}_{i = 1,\dots,k}\subset \omega_\theta$ ($k\in \bb{N}$) such that $v \in C^\infty(\omega_\theta\setminus\{a_i\}_{i = 1,\dots,k},\mathcal N)$,
\item $\|u - v\|_{W^{1,p}(\omega_\theta)} \leq \varepsilon$,
\item $\Tr_{\partial \omega_\theta}u$ and $\Tr_{\partial \omega_\theta}v$ are homotopic.
\end{enumerate}
\end{proposition}
We point out that no compactness assumption is made on the manifold $\mathcal N$.
\begin{proof}[Sketch of the proof of proposition \ref{prop:density_of_the_R_class}]
Assuming first that $u \in L^\infty \cap W^{1,p}(\omega,\mathcal N)$,
we observe that the proof \cite{bethuel_approximation_1991,bousquet_strong_2015,ponce_closure_2009} of the density of maps that are smooth except at a finite number of points (elements of $C^\infty(\omega\setminus \{a_i\}_{i = 1,\dots,k},\mathcal N)$) in $L^\infty \cap W^{1,p}(\omega,\mathcal N)$ works by considering small disks $\B(a_n;\rho_n)$ and mollifying adaptively, depending on whether the mean oscillation of \(u\) on the disk is small or not.
The smallness condition is always satisfied if $u \in W^{1,2}(\B(a_n;\rho_n),\mathcal N)$.
In our case this implies that the modification of $u$ in $\Omega\setminus \omega_\theta$ for some small $\theta \in (0,1)$ will not change the homotopy class of the trace.
By \cite[Theorem 1]{bousquet2017density}, as $p \not\in \bb{N}$, $L^\infty \cap W^{1,p}(\Omega,\mathcal N)$ is dense in $W^{1,p}(\Omega,\mathcal N)$, and its proof shows that the modification done on $u$ preserves the $W^{1,2}$-character on open sets. By this fact, first approximating $u$ by bounded maps, we may assume that $u \in L^\infty \cap W^{1,p}(\Omega,\mathcal N)$ and $u \in W^{1,2}(\Omega\setminus \bar\omega,\mathcal N)$.
\end{proof}
\subsection{Expansion of circles method}
We now describe the expansion of circles method of Robert \scauthor{Jerrard} \cite{jerrard_lower_1999} (see also Etienne \scauthor{Sandier} \cite{sandier_lower_1998}) that we adapt to the $p$-energy. We recall that $\B(a;\rho)$ denotes the open disk of radius $\rho \geq 0$ centered at $a \in \bb{R}^2$.
In particular we warn the reader that the empty set can be written $\B(a;0)$.
\begin{proposition}\label{prop:circleconstruction}
Let $a_1,\dots,a_k \in \Omega$ where $k \in \bb{N}\setminus\{0\}$ and $u\in W^{1,p}(\Omega,\mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\bar\Omega\setminus\{a_i\}_{i = 1,\dots,k},\mathcal N)$. If $\dist(\{a_i\}_{i = 1,\dots,k}, \partial \Omega) > 0$, for every $\delta \in (0,\dist(\{a_i\}_{i = 1,\dots,k},\partial \Omega)]$ there exists a collection $\mathcal S$ of circles $\mathbb S^1(a;\rho) \subset \Omega$ such that
$\cup \mathcal S$ is a finite union of disjoint annuli, \emph{i.e.}\ sets of the form $\A(c;\rho,\sigma) = \B(c;\sigma)\setminus\B(c;\rho) \subset \Omega$. More precisely, for some centers $c_i \in \Omega$ and radii $0 \leq \underline{r}_i< \overline{r}_i$, $i = 1,\dots, N$,
\begin{equation}\label{eq:annuli}
\cup \mathcal S = \bigcup_{i = 1}^{N}\{\mathbb S^1(c_i;r) : \underline{r}_i\leq r <\overline{r}_i\}
\end{equation}
and the union is disjoint. Moreover,
\begin{enumerate}[(i)]
\item \label{item:sum_prop} $\cup \mathcal S$ is contained in a finite number of disks $\B_1,\dots,\B_l$ such that \(\displaystyle \sum_{i = 1}^l \mathrm{diam}(\B_i) = \delta\) and $\partial \B_i \in \mathcal S$ for each $i = 1,\dots,l$,
\item \label{item:lesai}\label{item:localestimate} for every $\varepsilon \in (0,\delta]$ there exists a finite number of circles $\mathbb S^1(c_i;\rho_i)\in \mathcal S$, $i = 1,\dots,n$, such that $\displaystyle\sum_{i = 1}^{n} \rho_i = \varepsilon$ and $\displaystyle\{a_i\}_{i = 1,\dots,k} \subset \bigcup_{i = 1}^n \B(c_i;\rho_i)$,
\item for any subcollection $\{\mathbb S^1(c^*_i;\rho^*_i)\}_{i = 1,\dots,k_*} \subset \{\mathbb S^1(c_i;\rho_i)\}_{i = 1,\dots,n}$, there exists a disjoint collection of annuli $\{\A(c_i;\underline{r}_i,\overline{r}_i)\}_{i = 1, \dotsc, n}$ with $\bigcup_{i = 1}^{n}\A(c_i;\underline{r}_i,\overline{r}_i) \subset \bigcup_{i = 1}^{k_*} \B(c_i^*; \rho_i^*)$, such that
\begin{equation}\label{eq:propcirlcinequlaitu}
\Big [\sum_{i = 1}^{k_*}\mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\mathbb S^1(c^*_i;\rho^*_i)}u)\Big ]^{p - 1}\Big [\sum_{i = 1}^{k_*}\rho^*_i\Big ]^{2 - p} \leq (2 - p)\sum_{i =1}^n\int_{\underline{r}_i}^{\overline{r}_i}\mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\mathbb S^1(c_i;r)}u)^{p - 1} \frac{\d r}{r^{p - 1}}.
\end{equation}
\end{enumerate}
\end{proposition}
\emph{Lattice structure of $\mathcal S$.} In fact, the proof of proposition \ref{prop:circleconstruction} shows that $\mathcal S$ has a finite lattice structure, namely a finite union of oriented trees whose roots are the elements $\{a_i\}_{i = 1,\dots,k}$ (see item \ref{item:lesai}); the tops of the trees are the circles $\partial \B_i$ of item \ref{item:sum_prop} of the proposition.
An element of the lattice corresponds to a circle of the collection $\mathcal S$.
The orientation encodes the following partial order relation: $\mathbb S^1(a;\rho) \preccurlyeq \mathbb S^1(b;\sigma)$ if and only if $\B(a;\rho) \subset \B(b;\sigma)$.
Edges on the lattice correspond to a linear parametrized subfamily of $\mathcal S$: each edge is described by $\{\mathbb S^1(a,s\rho) : s \in (t_*,t^*)\}$ for some $a \in \Omega$, $0 < t_* < t^* < \infty$; these are the annuli of \eqref{eq:annuli}.
The tree structure carries some topological information: given an element $\mathbb S^1(a_*;\rho_*) \in \mathcal S$ of the oriented lattice and any family $F$ of elements of the lattice such that each $\mathbb S^1(a;\rho) \in F$ satisfies $\mathbb S^1(a;\rho) \preccurlyeq \mathbb S^1(a_*;\rho_*)$, and such that for each $a_i \in \B(a_*;\rho_*)$ there exists a unique $\mathbb S^1(b;\sigma) \in F$ with $a_i \in \B(b;\sigma)$,
then $(\Tr_{\mathbb S^1}u(a + \rho \cdot))_{\mathbb S^1(a;\rho) \in F}$ is a topological resolution of $\Tr_{\mathbb S^1}u(a_* + \rho_* \cdot)$.
\emph{Particular cases.} We discuss two particular cases of proposition \ref{prop:circleconstruction}.
\begin{itemize}[--]
\item \emph{Single singularity.} Let $a \in \Omega$ and $u\in W^{1,p}(\Omega,\mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\bar\Omega\setminus\{a\},\mathcal N)$. Then for $\delta \in (0,\dist(\{a\},\partial\Omega))$, proposition \ref{prop:circleconstruction} yields $\mathcal S = \{\mathbb S^1(a;r) : r \in [0,\delta]\}$ and $\cup \mathcal S = \B(a;\delta) \subset \Omega$. In this case \eqref{eq:propcirlcinequlaitu} is in fact an equality.
\item \emph{Pair of singularities}.
Let $a,b \in \Omega$ and $u\in W^{1,p}(\Omega,\mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\bar\Omega\setminus\{a,b\},\mathcal N)$. Then for $\delta \in (0,\dist(\{a,b\},\partial\Omega))$, proposition \ref{prop:circleconstruction} yields a collection of circles whose union consists of two disks centered at $a$ and $b$ respectively and an annulus surrounding the two disks, whose radii are chosen so that \eqref{eq:propcirlcinequlaitu} holds.
\end{itemize}
We record the following corollary that extends proposition \ref{prop:loc_lower_bound} to general domains.
\begin{corollary}\label{coro:lower_bound}
Let $a_1,\dots,a_k \in \Omega$ and $u\in W^{1,p}(\Omega,\mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\bar\Omega\setminus\{a_i\}_{i = 1,\dots,k},\mathcal N)$. There exist $l \in \bb{N}$ disjoint disks $\B_i$, $i = 1,\dots,l$, whose radii sum to $\delta \in (0,\dist(\{a_i\}_{i = 1,\dots,k},\partial \Omega)]$ and
\begin{equation*}
\delta^{2 - p}\mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\partial \Omega}u)^{p - 1} \leq (2 - p)\frac{(p - 1)^{p-1}}{(2\pi /p)^{2 - p}}\int_{\bigcup_{i = 1}^l\B_i}\frac{|Du|^p}{p}.
\end{equation*}
\end{corollary}
\begin{proof}[Proof of corollary \ref{coro:lower_bound}]
We define \(\mathcal B\) to be the collection of disks given by proposition \ref{prop:circleconstruction} \ref{item:sum_prop}; by construction the disks are disjoint. Using proposition \ref{prop:circleconstruction}\ref{item:localestimate} with the collection of assertion \ref{item:sum_prop}, together with proposition \ref{prop:loc_lower_bound} and proposition \ref{prop:decreasing}, we get the desired result.
\end{proof}
\begin{lemma}[\emph{Merging disks lemma}]\label{lemma:mergin_ball_lemma}
Let $\mathcal B_*$ be a finite collection of closed disks in $\bb{R}^2$. There exists a finite collection $\mathcal B^*$ of closed disks ---called the merged collection--- such that
\begin{enumerate}[(i)]
\item two distinct disks $\B_1,\B_2 \in \mathcal B^*$ satisfy $\B_1 \cap \B_2 = \Oset$,
\item $\mathcal B^*$ covers $\mathcal B_*$: for every \(\B \in \mathcal B_*\), there exists \(\B{}' \in \mathcal B^*\) such that \(\B \subseteq \B{}'\),
\item the sum of the radii is conserved \emph{i.e.} \(\displaystyle \sum_{\B(a;\rho) \in \mathcal B_*}\rho = \sum_{\B(a;\rho) \in \mathcal B^*}\rho.\)
\end{enumerate}
\end{lemma}
The proof of the lemma is well-known, see \emph{e.g.}\ \cite[p.~386]{sandier_lower_1998} and \cite[Lemma 3.1]{jerrard_lower_1999}.
\begin{proof}[Proof of lemma \ref{lemma:mergin_ball_lemma}]
For the first part of the proof assume that there are $k = 2$ intersecting disks $\B(a_1;\rho_1)$ and $\B(a_2;\rho_2)$. We then set \[
\mathcal B^* = \{\B(c;\rho_1 + \rho_2)\} \text{ where } c \doteq \frac{\rho_1a_1 + \rho_2a_2}{\rho_1 + \rho_2}.
\]
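Since the two disks intersect, $|a_1 - a_2| \leq \rho_1 + \rho_2$, and one checks that the merged disk indeed covers both: for every $x \in \B(a_1;\rho_1)$,
\[
|x - c| \leq |x - a_1| + |a_1 - c| \leq \rho_1 + \frac{\rho_2}{\rho_1 + \rho_2}|a_1 - a_2| \leq \rho_1 + \rho_2,
\]
and symmetrically for $\B(a_2;\rho_2)$; the sum of the radii is conserved by construction.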
If $k \geq 3$ one proceeds by induction.
\end{proof}
\begin{proof}[Proof of proposition \ref{prop:circleconstruction}]
We fix $u\in W^{1,p}(\Omega,\mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\bar\Omega\setminus\{a_i\}_{i = 1,\dots,k},\mathcal N)$.
For simplicity we write
$\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))$ for $\mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\mathbb S^1(a;\rho)}u)$ and $\mathcal E_{\mathrm{sg}}^{1,p'}(a)$ for $\mathcal E_{\mathrm{sg}}^{1,p'}([u,a])$.
Fix $t_0 > 0$ to be chosen later. We set
\begin{equation}\label{eq:deltatomuch}
\mathrm S(0) \doteq\bigcup_{0 < s < t_0}\mathcal S_s \quad \text{ with } \quad \mathcal S_s \doteq \bigcup_{i = 1}^k\{\mathbb S^1(a_i;s\, \mathcal E_{\mathrm{sg}}^{1,p'}(a_i))\}.
\end{equation}
We choose the largest $t_0 > 0$ such that the collection of circles satisfies for each $s \in (0,t_0)$,
\begin{enumerate}[(I)]
\item \label{item:maxdelta} the sum of the radii of the circles in the collection $\mathcal S_s$ does not exceed $\delta$,
\item \label{item:disjointness} $\mathcal S_s$ is a pairwise disjoint collection of circles, agglomerated in disjoint annuli.
\end{enumerate}
If \ref{item:maxdelta} fails at $t_0$ then we stop the process.
Note that for all $s \in (0,t_0)$ every circle $\mathbb S^1(a_s;\rho_s) \in \mathcal S_s$ satisfies
\begin{equation}\label{eq:equalitytimeradii}
\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a_s;\rho_s))s =\rho_s.
\end{equation}
If at time $t_0$ some circles intersect, by lemma \ref{lemma:mergin_ball_lemma}, we merge them and we obtain new circles $\{\mathbb S^1(a_{t_0};\rho_{t_0})\}$ such that
\begin{equation}\label{eq:inequalitytimeradii}
\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a_{t_0};\rho_{t_0}))\,t_0 \leq \rho_{t_0}
\end{equation}
by the monotonicity of the singular energy (proposition \ref{prop:decreasing}).
The collection $\{\mathbb S^1(a_{t_0};\rho_{t_0})\}$ is the disjoint union of the collection $\mathcal S_{t_0}^=$ of circles for which \eqref{eq:equalitytimeradii} holds at $t_0$ and the collection $\mathcal S_{t_0}^<$ of those circles that satisfy \eqref{eq:inequalitytimeradii} with a strict inequality sign.
The latter collection has the property that each of its circles can be decomposed into a finite number of circles for which \eqref{eq:equalitytimeradii} holds; indeed, before merging, \eqref{eq:equalitytimeradii} held for each circle. We gather those circles in a collection that we denote $\mathcal S_{t_0,\mathrm{sub}}$. We define $\mathcal S_{t_0} \doteq \{\mathbb S^1(a_{t_0};\rho_{t_0})\} = \mathcal S_{t_0}^< \cup \mathcal S_{t_0}^=$ and $\mathcal S_{t_0^-} = \mathcal S_{t_0}^< \cup \mathcal S_{t_0,\mathrm{sub}}$.
Next we define
\begin{multline}\label{eq:inductionstep}
\mathrm S(1) \doteq\bigcup_{t_0 < s < t_1}\mathcal S_s \quad \text{ with } \quad \mathcal S_s \doteq \mathcal S_{t_0}^< \cup \mathcal S_s^= \quad \\\text{ where } \quad \mathcal S_s^= \doteq \bigcup\{\mathbb S^1(a_{t_0}; s\, \mathcal E_{\mathrm{sg}}^{1,p'}( \mathbb S^1(a_{t_0};\rho_{t_0}))) : \mathbb S^1(a_{t_0};\rho_{t_0}) \in \mathcal S_{t_0}^=\}.
\end{multline}
where $t_1 > t_0$ is the largest number such that \ref{item:disjointness} and \ref{item:maxdelta} hold for $s\in(t_0,t_1)$ and
\begin{enumerate}[resume]
\item[(III)] \label{item:strict} $\mathcal E_{\mathrm{sg}}^{1,p'}( \mathbb S^1(a_{t_0};\rho_{t_0}))s < \rho_{t_0}$ for all $\mathbb S^1( a_{t_0};\rho_{t_0}) \in \mathcal S_{t_0}^<$.
\end{enumerate}
If \ref{item:maxdelta} occurs at $t_1$, we stop the construction.
If at $t_1$ \ref{item:disjointness} occurs, we may repeat the merging process, distinguishing in $\mathcal S_{t_1} = \mathcal S_{t_0}^= \cup \mathcal S_{t_0}^<$ the circles that satisfy \eqref{eq:inequalitytimeradii} with an equality from those that satisfy it with a strict inequality.
If \eqref{eq:equalitytimeradii} becomes satisfied at $t_1$ for some circles $\{\mathbb S^1(a_j;\rho_j)\}_{j = 1,\dots,l}$, we set $\mathcal S^<_{t_1} = \mathcal S^<_{t_0}\setminus \{\mathbb S^1(a_j;\rho_j)\}_{j = 1,\dots,l}$ and redefine $\mathcal S^=_{t_1}$ to be $\mathcal S^=_{t_1}\cup \{\mathbb S^1(a_j;\rho_j)\}_{j = 1,\dots,l}$.
It is then possible to reiterate the construction \eqref{eq:inductionstep}.
After a finite number $N$ of iterations the construction of $\mathrm S(0),\mathrm S(1),\dots,\mathrm S(N)$ stops and we obtain a $T > 0$ such that
\begin{enumerate}[(a)]
\item \label{item:sum} the sum of the radii of the circles in $\mathcal S_T$ is equal to $\delta$.
\item \label{item:topres} for each $t \in (0,T]$, $(\Tr_{\mathbb S^1(a;\rho)}u)_{\mathbb S^1(a;\rho) \in \mathcal S_t}$ is a topological resolution of $\Tr_{\partial \Omega}u$.
\item \label{item:c} for each $t \in (0,T]$, every circle $\mathbb S^1(a_t;\rho_t) \in \mathcal S_t$ satisfies
either (1) $\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a_t;\rho_t))t =\rho_t$,
or (2) there exist $k_t \in \bb{N}$ circles $\{\mathbb S^1(a_t^i;\rho_t^i)\}_{i = 1,\dots,k_t}$ such that $\sum_{i = 1}^{k_t}\rho_{t}^i = \rho_t$, $\bigcup_{i = 1}^{k_t}\mathbb S^1(a_t^i;\rho_t^i) \subset \B(a_t;\rho_t)$, $(\Tr_{\mathbb S^1(a_t^i;\rho_t^i)}u)_{i = 1,\dots,k_t}$ is a topological resolution of $\Tr_{\mathbb S^1(a_t;\rho_t)}u$ and $\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a_t^i;\rho_t^i))t =\rho_t^i$ for each $i = 1,\dots,k_t$.
\item for each $0<s<t<T$, $\mathcal S_s \preccurlyeq \mathcal S_t$ in the sense that for each circle $\mathbb S^1(a,\rho) \in \mathcal S_s$ there exists $\mathbb S^1(a',\rho') \in \mathcal S_t$ such that $\mathbb S^1(a,\rho) \subset \B(a';\rho')$.
\end{enumerate}
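Let us record a consequence of the construction that is used in the intermediate value argument below; this verification is only a sketch, under the assumption that the merging of lemma \ref{lemma:mergin_ball_lemma} preserves the total radius. Between two consecutive critical times $t_j < t_{j+1}$, \eqref{eq:inductionstep} gives

```latex
R(s) \doteq \sum_{\mathbb S^1(a;\rho) \in \mathcal S_s} \rho
= \sum_{\mathbb S^1(a;\rho) \in \mathcal S_{t_j}^<} \rho
+ s \sum_{\mathbb S^1(a_{t_j};\rho_{t_j}) \in \mathcal S_{t_j}^=}
\mathcal E_{\mathrm{sg}}^{1,p'}\big(\mathbb S^1(a_{t_j};\rho_{t_j})\big),
\qquad s \in (t_j, t_{j+1}),
```

so $t \mapsto R(t)$ is affine and nondecreasing between critical times, continuous across them, and satisfies $R(0^+) = 0$ and $R(T) = \delta$ by \ref{item:sum}.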
\emph{The local estimate}. We prove an intermediate and localized version of \eqref{eq:propcirlcinequlaitu}. Fix some $\mathbb S^1(a^*;r^*) \in \mathcal S$. We let $\bar T$ be the largest $t > 0$ such that $\mathcal S_t \cap \B(a^*;r^*) \neq \Oset$.
By construction $\mathcal S_{\bar T} \cap \B(a^*;r^*)$ consists of a finite disjoint collection of annuli $\mathcal A = \{\A(a^i;r_i,r^i)\}_{i = 1,\dots,n_*}$. For simplicity we denote
\begin{equation*}
\int_{\bigcup \mathcal A}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;r))^{p - 1} \frac{\d r}{r^{p - 1}} \doteq \sum_{i = 1}^{n_*} \int_{r_i}^{r^i}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a^i;r))^{p - 1} \frac{\d r}{r^{p - 1}}.
\end{equation*}
We have by change of variables
\begin{multline*}
\int_{\bigcup \mathcal A}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;r))^{p - 1} \frac{\d r}{r^{p - 1}} = \sum_{j = 1}^N \sum_{\mathbb S^1(a;\rho)\in \B(a^*;r^*)\cap \mathcal S_{t_j}}\int_{t_j}^{t_{j+1}}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho_j))\frac{\d s}{s^{p -1}}\\ \geq \sum_{j = 1}^N \int_{t_j}^{t_{j+1}}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a^*;r^*))\chi_{[0,\bar T]}\frac{\d s}{s^{p -1}}
\end{multline*}
by definition of the singular energy.
We thus have
\begin{equation*}
\int_{\bigcup \mathcal A}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;r))^{p - 1} \frac{\d r}{r^{p - 1}} \geq \mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a^*;r^*))\int_0^{\bar T}\frac{\d s}{s^{p -1}} = \mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a^*;r^*))\frac{{\bar T}^{2 -p}}{2 - p}.
\end{equation*}
\emph{The estimate \eqref{eq:propcirlcinequlaitu} in \ref{item:localestimate}}.
For every $\varepsilon \in (0,\delta]$ there exists a finite number of circles $\mathbb S^1(c_i;\rho_i)\in \mathcal S$, $i = 1,\dots,n$, such that $\sum_{i = 1}^{n} \rho_i = \varepsilon$ and $\{a_i\}_{i = 1,\dots,k} \subset \bigcup_{i = 1}^n \B(c_i;\rho_i)$.
This is because $t \mapsto \sum_{\mathbb S^1(a;\rho) \in \mathcal S_t } \rho$ is increasing and continuous, equal to zero at $t = 0$ and to $\delta$ at $t = T$; the intermediate value theorem thus yields the existence of a $\bar T \in (0,T]$ such that $\mathcal S_{\bar T} = \{\mathbb S^1(c_i;\rho_i)\}_{i = 1,\dots,n}$ and \(\sum_{i = 1}^n \rho_i = \varepsilon\). By \ref{item:c}(2) we can assume that for each $i = 1,\dots,n$,
\begin{equation*}
\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i;\rho_i))\bar T =\rho_i.
\end{equation*}
Let us consider an arbitrary subcollection $\{\mathbb S^1(c_i^*;\rho_i^*)\}_{i = 1,\dots,k_*} \subset \{\mathbb S^1(c_i;\rho_i)\}_{i = 1,\dots,n}$. By the preceding paragraph there exist $N$ disjoint collections $\mathcal A_j$ of cardinality $n_j$ such that \[\mathcal A_j = \{\A(c_i; \underline{r}_i,\overline{r}^i)\}_{i = 1,\dots,n_j}\]
and therefore
\begin{align*}
\sum_{i = 1}^{N}\int_{\underline r_i}^{\overline r^i}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i;r))^{p - 1} \frac{\d r}{r^{p - 1}} &= \int_{\bigcup_{i = 1}^{N}\bigcup \mathcal A_i}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;r))^{p - 1} \frac{\d r}{r^{p - 1}}\\
&\geq \sum_{i = 1}^{k_*}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i^*;\rho_i^*))\frac{{\bar T}^{2 -p}}{2 - p}\\
&= \Big (\sum_{i = 1}^{k_*}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i^*;\rho_i^*))\Big )^{p - 1}\frac{\displaystyle\Big (\sum_{i = 1}^{k_*}\rho_i^*\Big )^{2 -p}}{2 - p}
\end{align*}
where $N = \sum_{j = 1}^nn_j$ and the last equality follows from the choice of the disks.
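For the reader's convenience, the last equality can be checked directly: since $\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i^*;\rho_i^*))\,\bar T = \rho_i^*$ for each $i = 1,\dots,k_*$, summing over $i$ gives

```latex
\bar T
= \frac{\displaystyle\sum_{i = 1}^{k_*}\rho_i^*}{\displaystyle\sum_{i = 1}^{k_*}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i^*;\rho_i^*))},
\quad\text{so that}\quad
\sum_{i = 1}^{k_*}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i^*;\rho_i^*))\,\frac{\bar T^{2 - p}}{2 - p}
= \Big(\sum_{i = 1}^{k_*}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i^*;\rho_i^*))\Big)^{p - 1}
\frac{\Big(\displaystyle\sum_{i = 1}^{k_*}\rho_i^*\Big)^{2 - p}}{2 - p}.
```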
\end{proof}
\subsection{Going to the Marcinkiewicz scale}
In this section we prove the following proposition. We recall that $A_+ = \max(0,A)$.
\begin{proposition}[Mixed Marcinkiewicz estimate]\label{prop:mixedlorentz} Fix $p \in (1,2)$. Let $u \in W^{1,2}_{\mathrm{loc}}(\Omega\setminus \{a_i\}_{i = 1,\dots,k}, \mathcal N) \cap W^{1,p}(\Omega, \mathcal N)$ where $\{a_i\}_{i = 1,\dots,k} \subset \Omega$. Let also $\delta \in (0,\dist(\{a_i\}_{i = 1,\dots,k},\partial \Omega)]$.
There exists a finite collection $\mathcal B$ of disjoint disks $\B \subset \Omega$ such that the sum of their radii does not exceed $\delta$, and a nonnegative measurable function $U :\Omega \to \bb{R}$ supported in $\bigcup_{\B \in \mathcal B}\B$ such that
\begin{multline}\label{eq:lorentzI}
\frac{p - 1}{p}\int_{\bigcup_{\B\in \mathcal B}\B}(|Du| - U)_+^p
+\frac{(2 \pi \delta)^{2 - p}}{p^{2 - p}(p - 1)^{p - 1}} \Big (\sum_{\B\in \mathcal B} \mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\partial \B}u)\Big )^{p - 1} \\\leq \frac{(3-p)p}{2}\int_{\bigcup_{\B\in \mathcal B}\B}\frac{|Du|^p}{p}.
\end{multline}
Moreover, \(U \in \{0\} \cup [\sys(\mathcal N)/(2 \pi \delta),\infty)\) almost everywhere,
\begin{equation}
\label{eq:weak-Lp}\sup_{t > 0}t^p |\{U > t \}| \leq
\frac{(2 - p) p^{p - 1}}{2(p - 1)^{p - 1}(2 \pi)^{2 - p}} \int_{\Omega}\frac{|Du|^p}{p},
\end{equation}
and
\begin{equation}
\label{item:perstuff}
\sup_{t > 0} t^{p - 1}\mathrm{Per}(\{U > t\}) \leq\frac{(2 - p)(2\pi)^{3 - p} (p')^{p - 1}}{\sys(\mathcal N)} \int_{\Omega}\frac{|Du|^p}{p}.
\end{equation}
\end{proposition}
The interest of \eqref{eq:weak-Lp}--\eqref{item:perstuff} is that, combined with the first-order upper bound \eqref{eq:fistorderupperbound}, it asserts that any family of minimizing $p$-harmonic maps $(u_p)_{p\in (1,2)}$ which is smooth outside a finite number of points and subject to $\Tr_{\partial \Omega}u_p = g \in W^{\sfrac{1}{2},2}(\partial\Omega,\mathcal N)$ satisfies
\begin{equation*}
\varlimsup_{p \nearrow 2}\sup_{t > 0}t^p |\{U_p > t \}| \leq 4\pi\varlimsup_{p \nearrow 2}(2 - p)\int_{\Omega}\frac{|Du_p|^p}{p} \leq 4\pi \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u),
\end{equation*}
where $U_p$ is the map given by proposition \ref{prop:mixedlorentz}. Consequences of this fact are described in proposition \ref{prop:mixedboundedness}. The perimeter $\mathrm{Per}$ of the level set $\{U > t\}$ is defined through the $\mathrm{BV}$-gradient of the characteristic function of the level set \cite[Section 7.4]{willem2013functional}:
\[
\mathrm{Per}(\{U > t\}) \doteq \int_{\Omega}|D \chi_{\{U > t\}}|.
\]
To prove the announced bound we will first establish an estimate concerning real numbers (lemma \ref{lemma:alg_lemma_lorentz}). We will then integrate it over $\partial \B(a;\rho)$ in proposition \ref{prop:intermediateintegraloncircleestimate} and then, with the help of the circle construction of proposition \ref{prop:circleconstruction}, we will choose appropriately the annuli on which we integrate. The function $U$ is explicitly given in the proof, see \eqref{eq:theFunctU}.
\begin{lemma}\label{lemma:alg_lemma_lorentz}
If $a,b \geq 0$ and $p \in [1,2]$, then
\begin{equation*}
\frac{a^p}{p} + a^{p - 1}(b - a) + \big( 1 - \frac 1 p \big)(b - a)_+^p \leq \frac{3 -p}{2}b^p.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of lemma \ref{lemma:alg_lemma_lorentz}]
Considering the function $t \in [0,1] \mapsto ((1-t)a + tb)^p$, we have by the integral form of the Taylor expansion
\begin{equation}\label{eq:eq_alg_I}
b^p = a^p + pa^{p - 1}(b - a) + p (p - 1) \int_0^1\frac{(b-a)^2(1 - t)}{|(1 - t)a + tb|^{2 - p}}\d t.
\end{equation}
We check that
\begin{equation}\label{eq:eq_alg_II}
\frac{(b-a)^2}{|(1 - t)a + tb|^{2 - p}} \geq \frac{(b-a)_+^2}{b^{2 - p}} \text{ and } \int_0^1 (1 - t) \d t = \frac{1}{2}.
\end{equation}
By Young's inequality, we have
\begin{equation}\label{eq:eq_alg_III}
(b- a)_+^p = \frac{(b- a)_+^p }{b^{\frac{p(2 - p)}{2}}}b^{\frac{p(2 - p)}{2}} \leq \frac{p}{2}\frac{(b - a)_+^2}{b^{2 - p}} +\frac{(2 - p)}{2}b^p
\end{equation}
as \(1 = 1/(2/p) + 1/(2/(2-p)).\)
Combining \eqref{eq:eq_alg_I}--\eqref{eq:eq_alg_II} with \eqref{eq:eq_alg_III} multiplied by $(p - 1)/p$, we get the conclusion.
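In detail: dividing \eqref{eq:eq_alg_I} by $p$, bounding the integral term from below by \eqref{eq:eq_alg_II}, and adding \eqref{eq:eq_alg_III} multiplied by $(p - 1)/p$ yields

```latex
\frac{a^p}{p} + a^{p - 1}(b - a) + \Big(1 - \frac{1}{p}\Big)(b - a)_+^p
\leq \frac{b^p}{p} + \frac{(p - 1)(2 - p)}{2p}\,b^p
= \frac{2 + (p - 1)(2 - p)}{2p}\,b^p
= \frac{3 - p}{2}\,b^p.
```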
\end{proof}
\begin{proposition}\label{prop:intermediateintegraloncircleestimate}
Let $a \in \bb{R}^2$ and $0 <\rho < \sigma$.
If $u \in W^{1,2}(\B(a;\sigma)\setminus\B(a;\rho),\mathcal N)$, then we have for every $r \in (\rho,\sigma)$ and for any $0 \leq \eta \leq \lambda(\Tr_{\partial \B(a;\rho)}u)$
\begin{equation}\label{eq:propgoingtolorentz}
\frac{\eta^p}{p(2\pi r)^{p - 1}} + \Big (1 - \frac{1}{p}\Big ) \int_{\partial\B(a;r)}\Big (|Du| - \frac{\eta}{2\pi r}\Big )_+^p \leq \frac{(3 - p)p}{2}\int_{\partial\B(a;r)}\frac{|Du|^p}{p}.
\end{equation}
\end{proposition}
\begin{proof}[Proof of proposition \ref{prop:intermediateintegraloncircleestimate}]
From lemma \ref{lemma:alg_lemma_lorentz}, we get, setting $b = |Du|$,
\begin{multline*}
\frac{a^p}{p}(2\pi r) + a^{p - 1}\int_{\partial\B(a;r)}(|Du| - a) + \Big (1 - \frac{1}{p}\Big ) \int_{\partial\B(a;r)}\big (|Du| - a\big )_+^p \\\leq \frac{(3 - p)p}{2}\int_{\partial\B(a;r)}\frac{|Du|^p}{p}.
\end{multline*}
We now observe by the topological lower bound on the Sobolev energy on circles (lemma \ref{lemma:loc_lower_bound_circle}) that
\begin{equation*}
\int_{\partial\B(a;r)}|Du| \geq \lambda(\Tr_{\mathbb S^1}u(a+\rho\cdot)) \geq \eta = \int_{\partial\B(a;r)} \frac{\eta}{2\pi r}.
\end{equation*}
Taking $a = \eta/(2\pi r)$, we get the conclusion.
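Indeed, with this choice the first term of the left-hand side evaluates exactly to the one in \eqref{eq:propgoingtolorentz}, while the middle term is nonnegative by the preceding display:

```latex
\frac{a^p}{p}\,(2\pi r)
= \frac{1}{p}\Big(\frac{\eta}{2\pi r}\Big)^{p}(2\pi r)
= \frac{\eta^p}{p\,(2\pi r)^{p - 1}},
\qquad
a^{p - 1}\int_{\partial \B(a;r)}\big(|Du| - a\big) \geq 0.
```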
\end{proof}
\begin{proof}[Proof of proposition \ref{prop:mixedlorentz}]
From proposition \ref{prop:circleconstruction} applied with $\delta = \dist(\{a_i\}_{i = 1,\dots,k},\partial\Omega)$, we get the existence of a collection of circles $\mathcal S$, considered as a disjoint collection $\mathcal A$ of annuli. We write $\mathcal A = \{\A(c_i;\underline{r}_i,\overline{r}^i)\}_{i = 1,\dots,N}$. Next, we define for all $x \in \Omega$
\begin{equation}\label{eq:theFunctU}
U(x) \doteq \sup \left \{ \frac{\Big ((2\pi)^{p' - 1}p'\, \mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\partial \B(a;r)}u)\Big )^{\frac{1}{p'}}}{2\pi r} : \mathbb S^1(a;r) \in \mathcal S, x \in \B(a;r)\right \}
\end{equation}
with the convention $\sup \Oset = 0$.
If $x \in \A(c_i;\underline{r}_i, \overline{r}^i)$ then by definition~\ref{def:esg}
\begin{equation*}
U(x) = \frac{\Big ((2\pi)^{p' - 1}p' \,\mathcal E_{\mathrm{sg}}^{1, p'}(\Tr_{\partial \B(c_i;\underline{r}_i)}u)\Big )^{\frac{1}{p'}}}{2\pi |x - c_i|}
\le \frac{\lambda(\Tr_{\partial \B(c_i; \underline{r}_i)}u)}{2 \pi \abs{x - c_i}}.
\end{equation*}
Integrating \eqref{eq:propgoingtolorentz} (proposition \ref{prop:intermediateintegraloncircleestimate}) over the annuli generated by the circles $\mathbb S^1(a;\rho) \in \mathcal S$ of proposition \ref{prop:circleconstruction}, we get
\begin{multline*}
\frac{(3-p)p}{2}\int_{\cup\mathcal S}\frac{|Du|^p}{p}\\
\geq \frac{(2 \pi)^{2 - p}}{p^{2 - p}(p - 1)^{p - 1}} \sum_{i = 1}^{N}\int_{\underline{r}_i}^{\overline{r}^i}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(c_i;r))^{p - 1} \frac{\d r}{r^{p - 1}} + \frac{p - 1}{p}\int_{\cup\mathcal S}(|Du| - U)_+^p.
\end{multline*}
We define \(\mathcal B \doteq \{\B_i : i = 1,\dots,l\}\) where the disks are those of assertion \ref{item:sum_prop} of proposition \ref{prop:circleconstruction}.
By construction the disks are disjoint; by the choice of the circles of proposition \ref{prop:circleconstruction}, and by adding the integral of $|Du|^p$ over $\bigcup\mathcal B\setminus\cup \mathcal S$ to the right-hand side, we get the announced result \eqref{eq:lorentzI}.
\emph{Weak-$L^p$ estimate \eqref{eq:weak-Lp}.}
By construction of $U$,
\begin{equation*}
\{U > t \} = \bigcup \mathcal B
\end{equation*}
where the union runs over a finite family $\mathcal B$ (depending on $t$) of pairwise disjoint disks $\B(a;\rho)$ such that
\begin{equation}
\label{eq_laeph4Oosah5theitha6taej}
\rho t \leq \frac{\big((2\pi)^{p' - 1}p' \, \mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))\big)^{\frac{1}{p'}}}{2 \pi}.
\end{equation}
Moreover, one has
\begin{equation}
\label{eq_waingairi6bou9aeti8OiJ0v}
\rho^{2 - p}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))^{p - 1} \leq (2 - p) \int_{\B(a;\rho)}\frac{|Du|^p}{p}.
\end{equation}
Hence,
\begin{align*}
t^p |\{U > t \}| &\leq \pi \sum_{\B(a;\rho) \in \mathcal B} t^p \rho^2 \\
&\leq \frac{((2\pi)^{p' - 1}p')^{p - 1} \pi}{(2 \pi)^p} \sum_{\B(a;\rho) \in \mathcal B} \rho^{2-p} \mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))^{p - 1} \\
&\leq \frac{(2 - p) p^{p - 1}}{2(p - 1)^{p - 1}(2 \pi)^{2 - p}}
\int_{\Omega}\frac{|Du|^p}{p}.
\end{align*}
\emph{Perimeter estimate.}
With \(\mathcal{B}\) as above, we have
\begin{multline}\label{item:bvbvbv}
\mathrm{Per}(\{U > t\}) = \sum_{\B (a;\rho) \in \mathcal B} 2 \pi \rho \\= 2\pi \sum_{\B(a;\rho) \in \mathcal B} \rho^{2 - p}\mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))^{p - 1}\Big (\frac{\rho}{ \mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))}\Big )^{p - 1}.
\end{multline}
On the other hand, by \eqref{eq_laeph4Oosah5theitha6taej} and by \eqref{eq_ius6Cei3Tahwae2ahpoh5Iph}
\begin{equation*}
\frac{\rho}{ \mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))}\le \frac{((2\pi)^{p' - 1}p')^{\frac{1}{p'}}}{2 \pi t\, \mathcal E_{\mathrm{sg}}^{1,p'}(\mathbb S^1(a;\rho))^{\frac{1}{p}}}
\leq \frac{p'(2\pi)^{p' - 2}}{t \sys(\mathcal N)^\frac{1}{p - 1}}
\end{equation*}
and thus \eqref{item:bvbvbv} becomes in view of \eqref{eq_waingairi6bou9aeti8OiJ0v}
\begin{equation*}
\mathrm{Per}(\{U > t\}) \leq \frac{(2 - p)(2\pi)^{3 - p} (p')^{p - 1}}{t^{p - 1}\sys(\mathcal N)} \int_{\Omega}\frac{|Du|^p}{p}. \qedhere
\end{equation*}
\end{proof}
\section{Convergence of bounded sequences}\label{section:conv_bnd_seqs}
We establish a compactness result which will be applied to sequences of $p$-harmonic mappings as \(p \nearrow 2\).
\begin{proposition}[Compactness theorem]\label{prop:compactnessthm}
Let $\Omega\subset \bb{R}^2$ be a bounded Lipschitz domain, $\mathcal N$ a Riemannian manifold with $\sys(\mathcal N)> 0$ and $(p_n)_n$ a sequence in the interval \((1, 2)\) converging to $2$. Let $(u_n)_n$ be a sequence of maps such that $u_n \in W^{1,p_n}(\Omega, \mathcal N)$ and $\Tr_{\partial \Omega}u_n = \Tr_{\partial \Omega}u_0 \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$ for all $n$. If
\begin{equation}\label{eq:upperBoundCompactnessthm}
\sup_{n}\left [ \int_\Omega \frac{|Du_n|^{p_n}}{p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)}{2 - p_n}\right ] < +\infty,
\end{equation}
then, up to some unrelabeled subsequence, $(u_n)_n$ converges almost everywhere to some measurable $u_* : \Omega \to \mathcal N$.\\
Moreover,
\begin{enumerate}[(i)]
\item \label{item:firsitonfg} the map $u_*\in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ satisfies $\Tr_{\partial \Omega}u_* = \Tr_{\partial \Omega}u_0$. Calling $\{a_i\}_{i = 1,\dots,\kappa} \subset \Omega$ the associated singular points of the renormalizable map $u_*$, we have that for all $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$, \((\Tr_{\partial \B(a_i;\rho)}u_*(a_i + \rho \cdot))_{i = 1,\dots,\kappa}\) is a $2$-minimal topological resolution of $\Tr_{\partial \Omega}u_*$, \emph{i.e.}
\begin{equation*}
\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial\Omega}u_*)= \sum_{i = 1}^\kappa \frac{\lambda([u_*, a_i])^2}{4\pi},
\end{equation*}
and, moreover, $\lambda([u_*,a_i])>0$ for all $i = 1,\dots,\kappa$,
\item \label{item:boundedseqcompact} for all $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$, $(Du_n)_n$ is uniformly integrable on $\Omega\setminus\bigcup_{i = 1}^\kappa\B(a_i;\rho)$ and for all $q \in [1,\infty)$,
\begin{equation*}
\varlimsup_{n\to\infty} \int_{\Omega\setminus\bigcup_{i = 1}^\kappa\B(a_i;\rho)}|u_n|^q + \int_{\Omega\setminus\bigcup_{i = 1}^\kappa\B(a_i;\rho)}|Du_n|^{p_n}< +\infty.
\end{equation*}
\item \label{item:propnarrow} narrowly as measures on $\Omega$, \begin{equation*}
(2 - p_n)\frac{|Du_n|^{p_n}}{p_n} \xrightarrow{n \to\infty} \sum_{i = 1}^\kappa \frac{\lambda([u_*,a_i])^2}{4\pi}\delta_{a_i}
\end{equation*}
\item one has
\label{item::scicompactnesslog}
\begin{equation*}\label{eq:scicompactnesslog}
\begin{split}
\varliminf_{n \to \infty} \int_{ \Omega } \frac{|Du_n|^{p_n}}{p_n} -\frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_n)}{2 - p_n}
&\geq \mathcal E_{\ren}^{1,2}(u_*) + \mathrm{H}([u_*,a_i])_{i = 1,\dots,\kappa}\\
&\geq \mathcal{E}^{1, 2}_{\mathrm{top},g, [u_*, a_1], \dotsc, [u_*, a_\kappa]} (a_1, \dotsc, a_\kappa) + \mathrm{H}([u_*,a_i])_{i = 1,\dots,\kappa}.
\end{split}
\end{equation*}
\item \label{item:lsc}
for each $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$,
\begin{equation}\label{eq:presdessingu}
\varliminf_{n \to \infty} \int_{\B(a_i;\rho)}\frac{|Du_n|^{p_n}}{p_n} - \frac{\lambda([u_*,a_i])^{p_n}}{(2\pi)^{p_n}p_n|\cdot - a_i|^{p_n} }
\ge
\int_{\B(a_i;\rho)} \frac{|Du_*|^2}{2}\ - \frac{\lambda([u_*,a_i])^{2}}{8\pi^2 |\cdot - a_i|^2}.
\end{equation}
\end{enumerate}
\end{proposition}
Recall that the quantity $\mathrm{H}$ was defined in proposition \ref{prop:upperBound} and that the non-intersection radius $\rho(\{a_i\}_{i = 1,\dots,\kappa})$ was defined in \eqref{eq:non-intersecting_radius}.
By item \ref{item:boundedseqcompact}, we obtain that for all $\rho > 0$ and $p\in(1,2)$, $Du_n \to Du_*$ weakly in $L^p(\Omega\setminus\bigcup_{i = 1}^\kappa\B(a_i;\rho), \bb{R}^{\nu \times 2})$. This implies that $Du_n \to Du_*$ in the sense of distributions on $\Omega \setminus \{a_i\}_{i = 1,\dots,\kappa}$.
The narrow convergence in \ref{item:propnarrow} is equivalent (see \cite[Proposition 4.2.4]{attouch2014variational}) to the fact that for any bounded and continuous function $\xi : \Omega \to \bb{R}$, \[(2 - p_n)\int_\Omega \xi\, \frac{|Du_n|^{p_n}}{p_n} \xrightarrow{n \to\infty} \sum_{i = 1}^\kappa \frac{\lambda([u_*,a_i])^2}{4\pi}\xi(a_i). \] Narrow convergence can also be understood as the combination of weak convergence of measures and the convergence of the total mass: \[(2 - p_n)\int_\Omega \frac{|Du_n|^{p_n}}{p_n} \xrightarrow{n \to\infty} \sum_{i = 1}^\kappa \frac{\lambda([u_*,a_i])^2}{4\pi} = \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_*).\]
\subsection{Convergence of \texorpdfstring{$L^p$}{Lᵖ}-bounded sequences when \texorpdfstring{$p\nearrow 2$}{p↗2}}\label{subsec:t5dsgb5a}
In this section \ref{subsec:t5dsgb5a}, we prove a Fatou-type result (see \eqref{eq:fatouWeak}) and a compactness result (see proposition \ref{prop:compactnessofboundedseqs}) for sequences bounded in the sense of \eqref{eq:boundedLptrhoif}.
\begin{proposition}\label{coro:fatouannulie} Let $\Omega \subset \bb{R}^2$ be an open and bounded set, $(p_n)_n$ be an increasing sequence converging to $2$ and $(u_n)_n$ be a sequence of maps $u_n \in W^{1,{p_n}}(\Omega)$ such that $u_n \to u_*$ almost everywhere. If
\begin{equation}\label{eq:boundedLptrhoif}
\sup_n \int_{\Omega}|Du_n|^{p_n} < +\infty,
\end{equation}
then
\begin{equation}\label{eq:fatouWeak}
\int_{\Omega}|Du_*|^{2} \leq \varliminf_{n \to \infty}\int_{\Omega}|Du_n|^{p_n}.
\end{equation}
Moreover, for every $a \in \Omega$ and almost every $\rho \in (0,\dist(a,\partial \Omega))$,
\begin{equation*}\int_{\partial \B(a;\rho)}|Du_*|^{2}\d \HH^1 \leq \varliminf_{n \to \infty}\int_{\partial \B(a;\rho)}|Du_n|^{p_n}\d \HH^1.
\end{equation*}
\end{proposition}
The second part of proposition \ref{coro:fatouannulie} is based on lemma \ref{lemma:weakconvergenceimpliesfatouonaecicle} below, which disintegrates on circles the lower semicontinuity provided by weak convergence. Its proof is classical, based on the Fubini--Tonelli theorem and Fatou's lemma, and we shall omit it.
\begin{lemma}\label{lemma:weakconvergenceimpliesfatouonaecicle}
Let $\Omega \subset \bb{R}^m$ be an open and bounded set, $p \in (1,\infty)$. Let $u_n \to u$ weakly in $W^{1,p}(\Omega)$. Then, for every $a \in \Omega$ and almost every $\rho \in (0,\dist(a,\partial \Omega))$,
\begin{equation*}
\int_{\partial \B(a;\rho)}|Du|^p \d \HH^1 \leq \varliminf_{n\to \infty}\int_{\partial \B(a;\rho)}|Du_n|^p \d \HH^1
\end{equation*}
and there exists a subsequence $(u_{n'})$, depending on $a$ and $\rho$, such that $u_{n'} \to u$ $\HH^1$-almost everywhere on $\partial \B(a;\rho)$ and $(u_{n'})$ is bounded in $W^{1,p}(\partial \B(a;\rho))$.
\end{lemma}
\begin{proof}[Proof of proposition \ref{coro:fatouannulie}]
We only prove the second assertion. This is lemma \ref{lemma:weakconvergenceimpliesfatouonaecicle} combined with the observation that
\begin{align*}
\int_{\partial \B(a;\rho)}|Du_*|^{2}\d \HH^1& = \lim_{p \nearrow 2}\int_{\partial \B(a;\rho)}|Du_*|^{p}\d \HH^1 \leq \varliminf_{p \nearrow 2}\varliminf_{n \to \infty}\int_{\partial \B(a;\rho)}|Du_n|^{p}\d \HH^1 \\
&\leq \varliminf_{p \nearrow 2}\varliminf_{n \to \infty}\HH^1({\partial \B(a;\rho)})^{\frac{p_n - p}{p_n}}\left (\int_{\partial \B(a;\rho)}|Du_n|^{p_n} \d \HH^1\right )^{\frac{p}{p_n}}
\\
&\leq \varliminf_{n \to \infty}\int_{\partial \B(a;\rho)}|Du_n|^{p_n}\d \HH^1.\qedhere
\end{align*}
\end{proof}
In the proof of proposition \ref{prop:compactnessthm}, we will use the following routine result.
\begin{proposition}
\label{prop:compactnessofboundedseqs}
Let $\Omega$ be an open Lipschitz bounded domain of $\bb{R}^d$.
Let $(u_{n})_n$ be a sequence of maps $u_n \in W^{1,p_n}(\Omega, \bb{R}^\nu)$, all sharing the same trace on $\partial \Omega$.
Let, for each $m$, $\mathcal B_m$ be a finite collection of disjoint disks contained in $\Omega$ whose unions form a decreasing sequence of sets satisfying $\cup\mathcal B_m \downarrow \{a_1,\dots,a_k\} \subset \Omega$.
Let $p_n \in [1,2)$ be such that $p_n \nearrow 2$.\\
If, for each $m$,
\begin{equation*}
\sup_n \int_{\Omega \setminus \cup \mathcal B_m}|Du_n|^{p_n} < +\infty,
\end{equation*}
then, up to some unrelabelled subsequence, $(u_n)$ converges almost everywhere to a map $u \in W^{1,2}_{\mathrm{loc}}(\Omega \setminus \{a_i\}_{i = 1,\dots,k},\bb{R}^\nu)$. Moreover, for every $\rho > 0$, the subsequence satisfies
\begin{equation}\label{eq:Lqatouslesq}
\varlimsup_{n\to\infty}\int_{\Omega \setminus \bigcup_{i = 1}^k \B(a_i;\rho)}|u_n|^{p_n} + |Du_n|^{p_n} < +\infty.
\end{equation}
\end{proposition}
We will deduce proposition \ref{prop:compactnessofboundedseqs} from the following lemmata.
\begin{lemma}\label{lemma:bouche-trou}
For each $m$, it is possible to change the values of each $u_{n}$ on $\cup \mathcal B_m$ in order to get a sequence $\bar u_{n} \in W^{1,p_n}(\Omega, \bb{R}^\nu)$ that verifies
\begin{equation*}
\sup_n \int_{\Omega}|D\bar u_n|^{p_n} < \infty.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of lemma \ref{lemma:bouche-trou}]
It follows from (linear) trace theory and its estimates \cite{gagliardo1957caratterizzazioni}.
\end{proof}
\begin{lemma}\label{lemma:compactnessWholes}
Let $\Omega \subset \bb{R}^2$ be an open set. Let $p_n \in [1,2)$ be an increasing sequence such that $p_n \nearrow 2$. Assume a sequence $v_n \in W^{1,p_n}(\Omega, \bb{R}^\nu)$ verifies
\begin{equation*}
\sup_n \int_{\Omega}|D v_n|^{p_n} < \infty \text{ and } v_n -v_0 \in W^{1,p_0}_0(\Omega,\bb{R}^\nu)
\end{equation*}
for each $n$.
Then, it admits an unrelabelled subsequence, still denoted $(v_n)$, such that $v_n \to v$ almost everywhere. Moreover, $v \in W^{1,2}(\Omega,\bb{R}^\nu)$ and, for each $p \in [1,2)$,
\begin{equation}\label{eq:boundLpq}
\sup_n\int_\Omega |v_n|^p + |Dv_n|^p < +\infty.
\end{equation}
\end{lemma}
\begin{proof}[Proof of lemma \ref{lemma:compactnessWholes}]
As $p_n \nearrow 2$, for every $q \in [1,2)$, the tail of the sequence $(v_n)_n$ is bounded in $W^{1,q}(\Omega,\bb{R}^\nu)$ by the Poincaré inequality. Hence, for some $q \in [1,2)$, we extract a subsequence of $(v_n)$, still denoted $(v_n)$, such that $v_n \to v$ almost everywhere and in $L^q(\Omega,\bb{R}^\nu)$, and $Dv_n \to Dv$ weakly in $L^q(\Omega,\bb{R}^{\nu\times 2})$. We next consider a sequence $q_m \nearrow 2$ and extract, by a Cantor diagonal argument, a further subsequence such that $v_n \to v$ in $L^{q_m}(\Omega,\bb{R}^\nu)$ and $Dv_n \to Dv$ weakly in $L^{q_m}(\Omega,\bb{R}^{\nu\times 2})$ for each $m$; thus for each $p \in [1,2)$ we have
\begin{equation}\label{eq:fkljjhguvfs}
\sup_n \int_\Omega |Dv_n|^p < +\infty.
\end{equation}
The fact that $Dv \in L^2(\Omega,\bb{R}^{2\times \nu})$ then follows from a variant of \cite[Theorem 6.1.7]{willem2013functional}.
The estimate \eqref{eq:boundLpq} is implied by the weak convergence.
\end{proof}
\begin{proof}[Proof of proposition \ref{prop:compactnessofboundedseqs}]
Lemmata \ref{lemma:bouche-trou} and \ref{lemma:compactnessWholes} with a Cantor diagonal argument yield the conclusion.
\end{proof}
\subsection{Proof of proposition \ref{prop:compactnessthm}}
We will make use of the following lemma from \cite[Lemma 6.2]{monteil2021renormalised}.
\begin{lemma}\label{lemma:ainomega}
Let $\Omega \subset \bb{R}^2$, $a \in \bb{R}^2$ and $0 < \sigma < \tau$. If $u \in W^{1,2}(\B(a;\tau)\setminus \B(a;\sigma), \mathcal N)$, then,
\begin{multline*}
\int_{(\B(a;\tau) \setminus \B(a;\sigma)) \cap \Omega} \frac{|Du|^2}{2} \geq \frac{\lambda^2(\Tr_{\mathbb S^1}u(a + \tau\cdot))}{4\pi \nu_{\tau}^\sigma(a)}\log \frac{\tau}{\sigma} \times \\\left [1 - \frac{\sqrt{2\pi}}{\lambda(\Tr_{\mathbb S^1}u(a + \tau\cdot))}\Big ( \frac{1}{\log \frac{\tau}{\sigma}}\int_{\B(a;\tau) \setminus (\B(a;\sigma) \cup \Omega)} |Du|^2 \Big)^{\frac{1}{2}} \right]^2
\end{multline*}
where
\begin{equation}\label{eq:cjhksjfgivh}
\nu_{\tau}^\sigma(a) = \frac{1}{2\pi \log \frac{\tau}{\sigma}}\int_{(\B(a;\tau) \setminus \B(a;\sigma)) \cap \Omega}\frac{\d x}{|x - a|^2} \leq 1.
\end{equation}
\end{lemma}
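In particular, if $\B(a;\tau)\setminus \B(a;\sigma) \subset \Omega$, then $\nu_\tau^\sigma(a) = 1$ by \eqref{eq:cjhksjfgivh}, the set $\B(a;\tau)\setminus(\B(a;\sigma)\cup \Omega)$ is empty, the bracket equals $1$, and lemma \ref{lemma:ainomega} reduces to the classical logarithmic lower bound

```latex
\int_{\B(a;\tau) \setminus \B(a;\sigma)} \frac{|Du|^2}{2}
\;\geq\; \frac{\lambda^2(\Tr_{\mathbb S^1}u(a + \tau\,\cdot))}{4\pi}\,\log \frac{\tau}{\sigma}.
```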
\begin{proof}[Proof of proposition \ref{prop:compactnessthm}]
In the first part of the proof we assume that
\begin{equation*}
u_n \in W^{1,p_n}(\Omega,\mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\bar\Omega\setminus \{a_i^n\}_{i = 1,\dots,k^n},\mathcal N),
\end{equation*}
where \(k^n \in \bb{N}\) and $\{a_i^n\}_{i = 1,\dots,k^n} \subset \Omega$ are such that
\begin{equation*}
\delta = \inf_n\dist(\{a_i^n\}_{i = 1,\dots,k^n},\partial\Omega) >0.
\end{equation*}
We will treat the general case by density (see section \ref{subsub:density}).
By our assumption \eqref{eq:upperBoundCompactnessthm},
\begin{equation}\label{eq:upperBoundCompactnessthmlimsupfirstorder}
\varlimsup_{n\to \infty}(2 - p_n) \int_\Omega \frac{|Du_n|^{p_n}}{p_n} \leq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0).
\end{equation}
\subsubsection{Convergence of the disks}
Let $(\eta_m)_m$ be a decreasing sequence such that $\eta_m \in (0,\delta)$ for each $m$ and $\eta_m \searrow 0$. For each $m,n$ we apply proposition \ref{prop:circleconstruction} and get the existence of a finite collection of disjoint disks $\mathcal B_{m,n}$ contained in $\Omega$ such that the sum of the diameters is $\eta_m$ (more details are explained in corollary \ref{coro:lower_bound}) and
\begin{equation}\label{eq:borninf0}
(2-p_n) \int_{\bigcup_{\B \in \mathcal B_{m,n}}\B}\frac{|Du_n|^{p_n}}{p_n} \geq \Big (\sum_{\B \in \mathcal B_{m,n}}\mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial \B}u_n) \Big)^{p_n -1} \eta_m^{2 - p_n}.
\end{equation}
We thus have, by lemma \ref{lemma:continuite_of_esgp}, \eqref{eq:upperBoundCompactnessthmlimsupfirstorder} and \eqref{eq:borninf0},
\begin{align*}
\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0) &= \lim_{n \to \infty}\mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial \Omega}u_0)^{p_n - 1}\\&\leq \varlimsup_{n \to \infty}\Big(\sum_{\B \in \mathcal B_{m,n}}\mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial \B}u_n)\Big)^{p_n - 1} \\&\leq \varlimsup_{n \to \infty} (2 - p_n) \int_{\Omega}|Du_n|^{p_n} \leq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0).
\end{align*}
Thus,
\begin{equation}\label{eq:borninf2}
\mathcal E_{\mathrm{sg}}^{1,2}(\mathrm{Tr}} \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\scauthor}[1]{\textsc{#1}_{\partial \Omega}u_0) = \lim_{n \to \infty} (2 - p_n) \int_{\Omega}|Du_n|^{p_n}.
\end{equation}
Let $\mathcal B_{m,n}^{\mathrm{Top}}$ be the subcollection of disks $\B$ of $\mathcal B_{m,n}$ with the property that \(\mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial \B}u_n) > 0\).
In view of \eqref{eq_ius6Cei3Tahwae2ahpoh5Iph}, these disks satisfy the stronger bound
\begin{equation}\label{eq:systolecrucial}
\mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial \B}u_n) \geq \frac{\sys(\mathcal N)^{\frac{p_n}{p_n - 1}}}{(2\pi)^{p_n' - 1}p_n'} > 0.
\end{equation}
Hence, by \eqref{eq:borninf0}, the number of elements of this collection satisfies
\begin{equation}\label{eq:bornesup}
\varlimsup_{n \to \infty}\# \mathcal B_{m,n}^{\mathrm{Top}} \leq \frac{4\pi\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)}{\sys(\mathcal N)^2}.
\end{equation}
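For the reader's convenience, we make explicit the elementary limit behind \eqref{eq:bornesup}: since $(p_n' - 1)(p_n - 1) = 1$ and $p_n' \to 2$,
\begin{equation*}
\Big(\frac{\sys(\mathcal N)^{\frac{p_n}{p_n - 1}}}{(2\pi)^{p_n' - 1}p_n'}\Big)^{p_n - 1} = \frac{\sys(\mathcal N)^{p_n}}{(2\pi)^{(p_n' - 1)(p_n - 1)}(p_n')^{p_n - 1}} \xrightarrow{n\to\infty} \frac{\sys(\mathcal N)^{2}}{4\pi},
\end{equation*}
so that summing \eqref{eq:systolecrucial} over $\mathcal B_{m,n}^{\mathrm{Top}}$ and invoking \eqref{eq:borninf0} yields \eqref{eq:bornesup}.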
Hence, up to a subsequence in $n$, the collection $\mathcal B^{\mathrm{Top}}_{m,n}$ converges to a limit collection $\mathcal B^{\mathrm{Top}}_{m}$ for each $m$, in the sense that $\# \mathcal B_{m,n}^{\mathrm{Top}} \to \# \mathcal B_{m}$ and the vectors of centers and radii of the disks converge to those of the limit collection. By a Cantor diagonal argument we may ensure that the subsequence does not depend on $m$. Then \eqref{eq:bornesup} also holds for $\#\mathcal B_m$. Repeating the argument, we get, up to a further subsequence, a limit collection $\mathcal B$ such that $\mathcal B_m \to \mathcal B$. As $\eta_m \searrow 0$, this collection consists of \(\kappa \doteq \# \mathcal B\) points $\{a_i\}_{i = 1,\dots,\kappa} \subset \bar \Omega$ such that \[\dist(\{a_i\}_{i = 1,\dots,\kappa},\partial \Omega)\ge \delta.\]
\subsubsection{Uniform bound away from singularities and convergence to \texorpdfstring{$u_*$}{u*}}
For each $m,n$, we observe that
\begin{align*}
\int_{\Omega \setminus \bigcup \mathcal B_{m,n}}\frac{|Du_n|^{p_n}}{p_n} &= \int_{\Omega}\frac{|Du_n|^{p_n}}{p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial\Omega}u_0)}{2 - p_n} \\
&\quad\quad+ \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial\Omega}u_0) - \mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial\Omega}u_0)^{p_n - 1}}{2 - p_n} \\
&\quad\quad+ \mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial\Omega}u_0)^{p_n - 1}\frac{1 - \eta_{m}^{2-p_n}}{2 - p_n} \\
&\quad\quad+ \left [ \frac{\mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial\Omega}u_0)^{p_n - 1}\eta_{m}^{2-p_n}}{2 - p_n} - \int_{\bigcup \mathcal B_{m,n}}\frac{|Du_n|^{p_n}}{p_n}\right].
\end{align*}
Hence, by our assumption \eqref{eq:upperBoundCompactnessthm}, the local Lipschitz property of the singular energy (lemma \ref{lemma:continuite_of_esgp}) and the lower bound \eqref{eq:borninf0},
\begin{equation}\label{eq:bigmajoration}
\varlimsup_{n\to\infty}\int_{\Omega \setminus \bigcup \mathcal B_{m,n}}\frac{|Du_n|^{p_n}}{p_n} \leq \Lambda + \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial\Omega}u_0)\log\frac{1}{\eta_m},
\end{equation}
where we write for future use
\begin{equation}\label{eq:limsuploindesingu}
\Lambda \doteq \varlimsup_{n\to\infty}\left [ \int_\Omega \frac{|Du_n|^{p_n}}{p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)}{2 - p_n}\right ] + \varlimsup_{p \nearrow 2}\frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial\Omega}u_0) - \mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\partial\Omega}u_0)^{p - 1}}{2 - p}.
\end{equation}
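The logarithmic term in \eqref{eq:bigmajoration} stems from the elementary limit
\begin{equation*}
\lim_{p \nearrow 2}\frac{1 - \eta_m^{2 - p}}{2 - p} = \log\frac{1}{\eta_m},
\end{equation*}
applied to the third term of the decomposition preceding \eqref{eq:bigmajoration}, while the bracketed term there is handled by means of the lower bound \eqref{eq:borninf0}.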
From \eqref{eq:bigmajoration}, by proposition \ref{prop:compactnessofboundedseqs} we get a limit map $u_* \in W^{1,2}(\Omega \setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho),\mathcal N)$ for every $\rho \in (0,\bar \rho)$ such that, up to a subsequence independent of $m$, $u_n \to u_*$ as described by the proposition. This proves \ref{item:boundedseqcompact}.
In the sequel, we will use
\[
\bar \rho \doteq \rho(\{a_i\}_{i = 1,\dots,\kappa})
\]
for the non-intersection radius defined in \eqref{eq:non-intersecting_radius}.
By \eqref{eq:limsuploindesingu} for $m$ large enough,
\begin{equation*}
\int_{\Omega \setminus \bigcup_{i = 1}^\kappa\B(a_i;2\eta_m)}\frac{|Du_*|^2}{2} \leq \varlimsup_{n\to\infty}\int_{\Omega \setminus \bigcup \mathcal B_{m,n}}\frac{|Du_n|^{p_n}}{p_n} \leq \Lambda + \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)\log \frac{1}{\eta_m}.
\end{equation*}
Hence, using definition \ref{def:esg} of the singular energy and lemma \ref{lemma:ainomega},
\begin{align}\label{eq:ineqrestopopt}
\Gamma\log\frac{\bar \rho}{2\eta_m} \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_*)&\leq \Gamma\log\frac{\bar \rho}{2\eta_m}\sum_{i = 1}^\kappa \frac{\lambda(\Tr_{\partial \mathbb{S}^1}u_*(a_i + \bar \rho \cdot ))^2}{4\pi \nu_{\bar \rho}^{2\eta_m}(a_i)}\\&\notag\leq \Lambda + \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)\log \frac{1}{\eta_m},
\end{align}
where we have set
\begin{equation*}
\Gamma \doteq \Big [1 - \frac{\sqrt{2\pi}}{\sys(\mathcal N)}\Big (\log \frac{\bar \rho}{2\eta_m} \int_{\{x \in \Omega : \dist(\partial \Omega,x)<\delta\}} |Du_*|^2 \Big)^{\frac{1}{2}} \Big]^2
\end{equation*}
and $\nu_{\bar \rho}^{2\eta_m}(a_i)$ is given by \eqref{eq:cjhksjfgivh}.
From \eqref{eq:ineqrestopopt}, we get in the limit $m\to \infty$,
\begin{equation}\label{eq:topres}
\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_*)\leq \sum_{i = 1}^\kappa \frac{\lambda(\Tr_{\partial \mathbb{S}^1}u_*(a_i + \bar \rho \cdot ))^2}{4\pi}\leq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0).
\end{equation}
But \(\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0) = \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_*)\) as $\Tr_{\partial \Omega}u_* = \Tr_{\partial \Omega}u_0$ by assumption.
Hence we deduce that $\nu_{\bar \rho}^{2\eta_m}(a_i) = 1$ for each $i = 1,\dots,\kappa$ and that equality holds in \eqref{eq:topres}, which implies that $(\Tr_{\partial \mathbb{S}^1}u_*(a_i + \bar \rho \cdot) )_{i = 1,\dots,\kappa}$ is a minimal topological resolution of $\Tr_{\partial \Omega}u_0$, \emph{i.e.}
\begin{equation}\label{eq:equalitylambdaesg}
\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_*) = \sum_{i = 1}^\kappa \frac{\lambda(\Tr_{\partial \mathbb{S}^1}u_*(a_i + \bar \rho \cdot ))^2}{4\pi}
\end{equation}
and
\begin{equation}\label{eq:lmqlkfjmkfd}
\dist(\{a_i\}_{i = 1,\dots,\kappa},\partial \Omega)>\delta.
\end{equation}
\subsubsection{Narrow convergence \ref{item:propnarrow}}
By \eqref{eq:bigmajoration} and the convergence results (see proposition \ref{prop:compactnessofboundedseqs}\ref{item:boundedseqcompact}), we have $u_n \to u_*$ almost everywhere and we may further assume that \[(2 - p_n)|Du_n|^{p_n} \to \sum_{i = 1}^\kappa \alpha_i \delta_{a_i}\] weakly as measures, where $\alpha_i \geq 0$. Moreover, \[\sum_{i = 1}^\kappa \alpha_i = \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)\] by \eqref{eq:borninf2}. By the choice of the disks given by proposition \ref{prop:loc_lower_bound}, we have for each $i = 1,\dots,\kappa$, for large enough $m$,
\begin{equation*}
\alpha_i = \varlimsup_{n\to\infty}(2 - p_n) \int_{\B(a_i;\bar \rho)}\frac{|Du_n|^{p_n}}{p_n} \geq \varlimsup_{n\to\infty}\frac{\lambda(\Tr_{\partial \B(a_i;2\eta_m)}u_n)^2}{4\pi}.
\end{equation*}
We consider a subsequence such that the superior limit is an actual limit for each $i = 1,\dots,\kappa$. By lemma \ref{lemma:weakconvergenceimpliesfatouonaecicle}, there exists $r \in (2 \eta_m,2\eta_{m+ 1})$ such that for a subsequence $u_{n^r_k}$ depending on $r$ one has $u_{n^r_k} \rightharpoonup u_*$ weakly in $W^{1,p}(\partial \B(a_i;r))$.
Using the Sobolev embedding and the Arzelà--Ascoli compactness criterion, $u_{n^r_k} \to u_*$ uniformly on $\partial \B(a_i;r)$. Hence for $k$ large enough $\Tr_{\partial \B(a_i;r)}u_{n^r_k}$ is homotopic to $\Tr_{\partial \B(a_i;r)}u_*$.
We deduce that for $k$ and $m$ large enough $\lambda(\Tr_{\partial \B(a_i;2\eta_m)}u_{n^r_k}) = \lambda(\Tr_{\partial \B(a_i;r)}u_{n^r_k}) = \lambda([u_*,a_i])$ and \[\alpha_i\geq \frac{ \lambda(\Tr_{\partial\B(a_i;2\eta_m)}u_*)^2}{4\pi} = \frac{\lambda([u_*,a_i])^2}{4\pi} \geq \mathcal E_{\mathrm{sg}}^{1,2}([u_*,a_i]), \]
which, combined with the fact that $(\Tr_{\partial \mathbb{S}^1}u_*(a_i + \bar \rho \cdot ))_{i = 1,\dots,\kappa}$ is a minimal topological resolution of $\Tr_{\partial \Omega}u_0$, implies that \[\alpha_i = \frac{\lambda([u_*,a_i])^2}{4\pi} = \mathcal E_{\mathrm{sg}}^{1,2}([u_*,a_i]).\]
This proves the weak convergence of measures. The narrow convergence \ref{item:propnarrow} then follows from \eqref{eq:borninf2}.
\subsubsection{Lower semicontinuity statements}
We now observe that, by corollary \ref{coro:fatouannulie}, for each $i = 1,\dots,\kappa$ and almost every $r \in (0,\bar \rho)$,
\begin{equation}\label{eq:fatourannuli}
\int_{\partial \B(a_i;r)}|Du_*|^{2} \leq \varliminf_{n \to \infty}\int_{\partial \B(a_i;r)}|Du_n|^{p_n}.
\end{equation}
In order to prove \eqref{eq:presdessingu}, we note that for all $\rho \in (0,\bar \rho)$, we have by \eqref{eq:fatourannuli}
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{ \nonumber\label{eq:qshjkdgf56sdfg64s56}
\int_0^\rho \int_{\partial \B(a_i;r)} \frac{|Du_*|^2}{2}\d \HH^1 - \frac{\lambda(\Tr_{\partial \B(a_i;r)}u_*)^{2}}{4\pi r} \d r }\\ \quad
& \leq & \int_0^\rho\varliminf_{n \to \infty}\left [\int_{\partial \B(a_i;r)} \frac{|Du_n|^{p_n}}{p_n}\d \HH^1 - \frac{\lambda(\Tr_{\partial \B(a_i;r)}u_*)^{p_n}(2\pi r)^{2 - p_n}}{2p_n\pi r}\right ] \d r \nonumber\\
& \leq & \varliminf_{n \to \infty}\int_0^\rho\left [\int_{\partial \B(a_i;r)} \frac{|Du_n|^{p_n}}{p_n}\d \HH^1 - \frac{\lambda(\Tr_{\partial \B(a_i;r)}u_*)^{p_n}(2\pi r)^{2 - p_n}}{2p_n\pi r}\right ] \d r\nonumber \\
&= &\varliminf_{n \to \infty} \int_{\B(a_i;\rho)}\frac{|Du_n|^{p_n}}{p_n} - \frac{\lambda(\Tr_{\partial \B(a_i;\rho)}u_*)^{p_n}}{2p_n\pi }\frac{(2\pi \rho)^{2 - p_n}}{2 - p_n} \nonumber.
\end{IEEEeqnarray}
We prove \ref{eq:scicompactnesslog}. By the integral representation of the renormalized energy, proposition \ref{prop:polarCoordEren}, we have
\begin{IEEEeqnarray}{rCl}
\nonumber \mathcal E_{\ren}^{1,2}(u_*) &=& \int_{ \Omega \setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho)} \frac{|Du_*|^2}{2} - \sum_{i = 1}^{\kappa}\frac{\lambda(\Tr_{\partial \B(a_i;\rho)}u_*)^2}{4\pi}\ln \frac{1}{\rho} \\
&&\nonumber\quad\quad + \sum_{i = 1}^{\kappa}\int_0^\rho \int_{\partial \B(a_i;r)} \frac{|Du_*|^2}{2}\d \HH^1 - \frac{\lambda(\Tr_{\partial \B(a_i;r)}u_*)^2}{4\pi r} \d r \\
& \nonumber\leq & \varliminf_{n \to \infty} \int_{ \Omega \setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho)} \frac{|Du_n|^{p_n}}{p_n} \\
&&\nonumber \quad\quad + \varliminf_{n\to \infty} \sum_{i = 1}^\kappa\frac{\lambda(\Tr_{\partial \B(a_i;\rho)}u_*)^{p_n}}{2p_n \pi}\frac{(2\pi\rho)^{2 - p_n} - (2\pi )^{2 - p_n}}{2 - p_n}\\
&&\nonumber \quad\quad + \sum_{i = 1}^{\kappa}\varliminf_{n \to \infty} \int_{\B(a_i;\rho)}\frac{|Du_n|^{p_n}}{p_n} - \frac{\lambda(\Tr_{\partial \B(a_i;\rho)}u_*)^{p_n}}{2p_n\pi }\frac{(2\pi\rho)^{2 - p_n}}{2 - p_n} \\
& \leq & \varliminf_{n \to \infty} \int_{ \Omega } \frac{|Du_n|^{p_n}}{p_n} -\sum_{i = 1}^\kappa\frac{\lambda(\Tr_{\partial \B(a_i;\rho)}u_*)^{p_n}}{2p_n \pi}\frac{(2\pi)^{2 - p_n}}{2 - p_n}.\label{eq:precksmdlgjkqh}
\end{IEEEeqnarray}
Subtracting and adding $\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_n)/(2 - p_n)$ in \eqref{eq:precksmdlgjkqh} and using the equality \eqref{eq:equalitylambdaesg}, we obtain the first inequality in \ref{eq:scicompactnesslog} (see also \eqref{eq:technicallimit}), whereas the second inequality in \ref{eq:scicompactnesslog} follows from proposition~\ref{proposition_ren_map_to_pts}.
\subsubsection{Density argument}\label{subsub:density}
If $u_n \in W^{1,p_n}(\Omega, \mathcal N)$ and $\Tr_{\partial \Omega}u_n = \Tr_{\partial \Omega}u_0 \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$, we argue by density. By \eqref{eq:extension_of_maps}, we may assume, considering a larger domain, that \[u_n \in W^{1,p_n}(\Omega', \mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\{x \in \Omega' : \dist(x,\partial \Omega')< \delta\},\mathcal N)\] for some $\delta >0$ and $u_n = u_0$ on $\{x \in \Omega' : \dist(x,\partial \Omega')< \delta\}$. Then, by proposition \ref{prop:density_of_the_R_class}, we obtain for each $n$ a finite set $\{a_i^n\}_{i = 1,\dots,k^n} \subset \Omega'$ satisfying $\dist(\{a_i^n\}_{i = 1,\dots,k^n},\partial\Omega')> \delta$ such that
\begin{equation*}
\bar u_n \in W^{1,p_n}(\Omega', \mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\Omega' \setminus \{a_i^n\}_{i = 1,\dots,k^n},\mathcal N),
\end{equation*}
$(\bar u_n - u_n) \to 0$ almost everywhere in \(\Omega'\), $\|\bar u_n - u_n\|_{W^{1,p_n}(\Omega',\mathcal N)} \to 0$ and in particular
\begin{equation*}
\int_{\Omega'} |D\bar u_n|^{p_n} - \int_{\Omega'} |D u_n|^{p_n} \xrightarrow{n\to\infty}0.
\end{equation*}
Under these conditions, the assumption \eqref{eq:upperBoundCompactnessthm} on $(u_n)_n$ implies
\begin{equation*}
\sup_{n}\left [ \int_{\Omega'} \frac{|D\bar u_n|^{p_n}}{p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega'}\bar u_0)}{2 - p_n}\right ] < +\infty.
\end{equation*}
By the proof above, we obtain \ref{item:firsitonfg}--\ref{item:lsc} by restriction from $\Omega'$ to $\Omega$ and by observing that
\begin{equation*}
\lim_{n\to\infty}\int_{\Omega'\setminus \Omega} |D\bar u_n|^{p_n} = \int_{\Omega'\setminus \Omega} |D\bar u_*|^{2}
\end{equation*}
follows from $\|\bar u_n - u_n\|_{W^{1,p_n}(\Omega',\mathcal N)} \to 0$. The condition $\dist(\{a_i\}_{i = 1,\dots,\kappa},\partial \Omega')> \delta$ (see \eqref{eq:lmqlkfjmkfd}) implies $\{a_i\}_{i = 1,\dots,\kappa} \subset \Omega$.
\end{proof}
\subsection{Mixed Marcinkiewicz estimates of bounded sequences}\label{sec:mixed}
We also obtain mixed Marcinkiewicz estimates for sequences with bounded renormalized \(p\)-energy.
\begin{proposition}\label{prop:mixedboundedness}
Let $\Omega\subset \bb{R}^2$ be a bounded Lipschitz domain, $\mathcal N$ a Riemannian manifold with $\sys(\mathcal N)> 0$ and let $(p_n)_n$ be a sequence in $(1,2)$ with $p_n \nearrow 2$. Let $(u_n)_n$ be a sequence of maps such that $u_n \in W^{1,p_n}(\Omega, \mathcal N)$ and $\Tr_{\partial \Omega}u_n = \Tr_{\partial \Omega}u_0 \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$. If $u_0 \in W^{1,2}(\Omega_\delta\setminus\Omega,\mathcal N)$ for some $\delta > 0$, $u_n = u_0$ on $\Omega_\delta\setminus\Omega$ and
\begin{equation}\label{eq:upperBoundCompactnessthmII}
\Lambda \doteq \varlimsup_{n\to \infty}\left [ \int_\Omega \frac{|Du_n|^{p_n}}{p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)}{2 - p_n}\right ] < +\infty,
\end{equation}
then, for each $n$, there exists a measurable $U_n : \Omega \to \{0\} \cup [\sys(\mathcal N)/(2 \pi \delta),\infty)$ such that
\begin{align} \label{eq:plusdjU}
\varlimsup_{n \to \infty}\int_{\Omega}\frac{(|Du_n| - U_n)_+^{p_n}}{p_n} &\leq \Lambda +\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)\Big ( \big [\mathcal E_{\mathrm{sg}}^{1,p}(\Tr_{\partial\Omega}u_0) \big]_{\mathrm{Lip}([3/2,2])}+ \log \frac{1}{\delta}\Big )\\
\label{eq:plusdjUII} \varlimsup_{n \to \infty}\sup_{t > 0}t^{p_n} \,\mathrm{vol}{\{U_n > t\}} &\leq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0), \\
\label{eq:plusdjUIII}\varlimsup_{n \to \infty}\sup_{t>0}t^{p_n - 1}\mathrm{Per}(\{U_n > t\}) &\leq \frac{4\pi}{\sys(\mathcal N)} \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0).
\end{align}
\end{proposition}
In \eqref{eq:plusdjU}, $\big[\mathcal E_{\mathrm{sg}}^{1,p}(\Tr_{\partial\Omega}u_0) \big]_{\mathrm{Lip}([3/2,2])}$ refers to the Lipschitz constant of the map $p \mapsto \mathcal E_{\mathrm{sg}}^{1,p}(\Tr_{\partial \Omega}u_0)$. Lemma \ref{lemma:continuite_of_esgp} guarantees that this quantity is finite and \eqref{eq:lipestimate} gives an estimate on it. Together, \eqref{eq:plusdjU} and \eqref{eq:plusdjUII} imply weak-$L^p$ boundedness of the sequence $(D u_n)_n$, see corollary \ref{coro:dfjhmlfd} below.
Under the conclusion of proposition \ref{prop:mixedboundedness}, we have the following lower semi-continuity statements (lemmas \ref{lemma:fatoirp}, \ref{coro:convergencofpropgrandU} and \ref{lemma:convergfatou}). As a corollary (see corollary \ref{coro:erenweakL2}), we obtain that the map $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ resulting from proposition \ref{prop:compactnessthm} has gradient in weak-$L^2$. We refer to \cite{monteil2021renormalised} for the original proof of this fact for a general manifold and to \cite{MR2381162} for the case of the circle.
We first record the following corollary, which implies the result \eqref{eq_ooB6EiteiMeegai8bohkie2t} mentioned in the introduction.
\begin{corollary} \label{coro:dfjhmlfd} Any sequence $(u_n)_n$ that satisfies the assumptions of proposition \ref{prop:mixedboundedness} verifies
\begin{multline*}
\varlimsup_{n\to\infty}\sup_{t>0}t^{p_n}\,\mathrm{vol}{\Omega \cap \{|Du_{n}|>2t\}} \\\leq \Lambda +\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)\Big ( \big [\mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\partial\Omega}u_0)^{p-1} \big]_{\mathrm{Lip}([3/2,2])}+ \log \frac{1}{\delta}\Big ) + 4\pi \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial\Omega}u_0).
\end{multline*}
\end{corollary}
\begin{proof}[Proof of corollary \ref{coro:dfjhmlfd}]
Noting that $|Du_{n}|\leq (|Du_n| - U_n)_+ + U_n$ where $U_n$ is the map given by proposition \ref{prop:mixedboundedness}, we have
\begin{multline*}
t^{p_n}\,\mathrm{vol}{\Omega \cap \{|Du_{n}|>2t\}} \\\leq t^{p_n}\,\mathrm{vol}{\Omega \cap \{(|Du_{n}| - U_n)_+>t\}} + t^{p_n}\,\mathrm{vol}{\Omega \cap \{U_n>t\}} \\
\leq \int_\Omega (|Du_{n}| - U_n)_+^{p_n} + t^{p_n}\,\mathrm{vol}{\Omega \cap \{U_n>t\}}.\qedhere
\end{multline*}
\end{proof}
\begin{lemma}\label{lemma:fatoirp}
Let us fix $\Omega \subset \bb{R}^m$ and $\{a_i\}_{i = 1,\dots,\kappa} \subset \Omega$. Consider two sequences $(U_n)_n$ and $(v_n)_n$ with, for each $n$, $U_n \in L^1(\Omega,\bb{R})$ and $v_n \in L^{p_n}(\Omega,\bb{R}^{m\times \nu})$, such that $U_n \to U$ a.e., $v_n \rightharpoonup v$ weakly in $L^p(\Omega\setminus\bigcup_{i = 1}^\kappa \B(a_i;\rho),\bb{R}^{m\times \nu})$ for every $p \in (1, 2)$ and $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$ and
\begin{equation*}
\sup_{n}\int_{\Omega}\frac{(|v_n| - U_n)_+^{p_n}}{p_n} < +\infty.
\end{equation*}
Then,
\begin{equation*}
\int_{\Omega}\frac{(|v| - U)_+^2}{2} \leq \varliminf_{n\to\infty} \int_{\Omega}\frac{(|v_n| - U_n)_+^{p_n}}{p_n}.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of lemma \ref{lemma:fatoirp}]
Let us define, for $s > 1$ and $p\in(1,2)$, the non-decreasing convex function $\Phi_{s,p} \in C^1(\bb{R},\bb{R})$ by
\begin{equation*}
\Phi_{s,p}(\xi) = \begin{cases}
0 & \text{ if } \xi \leq 0\\
\frac{\xi^p}{p} & \text{ if } \xi \in (0,s)\\
\frac{s^p}{p} + s^{p - 1}(\xi - s) & \text{ if } \xi \geq s
\end{cases}
\end{equation*}
for which one has
\begin{equation*}
\Phi_{s,p}'(\xi) = \begin{cases}
0 & \text{ if } \xi \leq 0\\
\xi^{p - 1} & \text{ if } \xi \in (0,s)\\
s^{p - 1} & \text{ if } \xi \geq s. \end{cases}
\end{equation*}
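One checks directly that $\Phi_{s,p}$ is indeed $C^1$ and convex: the two branches match to first order at the junction points, since
\begin{equation*}
\lim_{\xi \searrow 0} \frac{\xi^p}{p} = 0 = \lim_{\xi \searrow 0} \xi^{p - 1}, \qquad \lim_{\xi \nearrow s} \frac{\xi^p}{p} = \frac{s^p}{p}, \qquad \lim_{\xi \nearrow s} \xi^{p - 1} = s^{p - 1},
\end{equation*}
and $\Phi_{s,p}'$ is non-decreasing. Moreover, $\Phi_{s,p}(\xi) \leq \xi_+^p/p$ for every $\xi \in \bb{R}$.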
Since \(v\) is measurable, there exists $\zeta \in L^\infty$ such that $|\zeta| = 1$ and $\braket{v,\zeta} = |v|$.
By convexity,
\begin{multline*}
\Phi_{s,p}(|v| - U_n) = \Phi_{s,p}(\braket{v,\zeta} - U_n) \\
\leq \Phi_{s,p}(\braket{v_n,\zeta} - U_n) - \Phi_{s,p}'(\braket{v,\zeta} - U_n) [\braket{v_n- v,\zeta}].
\end{multline*}
Since \(v_n \rightharpoonup v\) weakly in \(L^q\) for some \(p < q < 2\), and since \(\Phi'_{s, p}(\braket{v,\zeta} - U_n)\) is bounded and converges almost everywhere to \(\Phi'_{s, p}(\braket{v,\zeta} - U)\), we have for all $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$
\begin{equation*}
\lim_{n \to \infty} \int_{\Omega \setminus \bigcup_{i = 1}^\kappa\B(a_i;\rho)} \Phi_{s,p}'(\braket{v,\zeta} - U_n) [\braket{v_n- v,\zeta}] = 0,
\end{equation*}
and thus
\begin{align*}
\int_{\Omega\setminus \bigcup_{i = 1}^\kappa\B(a_i;\rho)}\Phi_{s,p}(|v| - U)
&\leq \varliminf_{n \to \infty} \int_{\Omega\setminus \bigcup_{i = 1}^\kappa\B(a_i;\rho)}\Phi_{s,p}(|v| - U_n)\\
&\leq \varliminf_{n\to \infty}\int_{\Omega\setminus \bigcup_{i = 1}^\kappa\B(a_i;\rho)}\Phi_{s,p}(\braket{v_n,\zeta} - U_n) \\
&\leq \varliminf_{n\to \infty}\int_{\Omega}\frac{(|v_n| - U_n)_+^{p}}{p}\\
&\leq \varliminf_{n\to \infty}\int_{\Omega}\frac{(|v_n| - U_n)_+^{p_n}}{p_n}
\end{align*}
where we used Hölder inequality to obtain the last inequality. Next, letting $p\nearrow2$, $s\nearrow\infty$ and $\rho\searrow 0$, we conclude by Fatou's lemma.
\end{proof}
\begin{lemma}
\label{coro:convergencofpropgrandU}
Let $(p_n)_n$ be a sequence in $(1,2)$ converging to $p \in (1,2]$ and let $U_n : \Omega \to [0, \infty)$ be measurable maps such that $U_n\geq 1$. If
\begin{equation}\label{eq:assumtioneleimit}
\sup_{n} \sup_{t>1}t^{p_n - 1}\mathrm{Per}(\{U_n > t\}) < +\infty,
\end{equation}
then a subsequence of $(U_n)_n$ converges almost everywhere on $\Omega$ to a measurable map $U$. Moreover,
\[
\sup_{t>1}t^{p - 1}\mathrm{Per}(\{U > t\}) \leq \varliminf_{n\to \infty} \sup_{t>1}t^{p_n - 1}\mathrm{Per}(\{U_n > t\}).
\]
\end{lemma}
\begin{proof}[Proof of lemma \ref{coro:convergencofpropgrandU}]
We call $\Lambda$ the supremum in \eqref{eq:assumtioneleimit} and we define, for \(T \in \intvo{1}{\infty}\),
\(U_n^T \doteq T^{-1} \vee (T \wedge U_n)\).
By our assumption and the coarea formula \cite[Theorem 10.3.3]{attouch2014variational}, for $T> 1$,
\begin{align*}
\varlimsup_{n \to \infty}\int_\Omega |D(U_{n}^T)| &= \varlimsup_{n \to \infty}\int_{1/T}^T \mathrm{Per}(\{U_n > t\}) \d t\\
&\leq \Lambda\lim_{n \to \infty}\frac{T^{2 - p_n} - T^{p_n - 2}}{2 - p_n} = \Lambda\begin{cases}2 \log T &\text{ if } p = 2 \\ \displaystyle\frac{T^{2 - p} - T^{p - 2}}{2 - p} &\text{ if } p\in(1,2). \end{cases}
\end{align*}
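Here we used the elementary computation, valid for $p_n \in (1,2)$,
\begin{equation*}
\int_{1/T}^T t^{1 - p_n}\d t = \frac{T^{2 - p_n} - T^{p_n - 2}}{2 - p_n},
\end{equation*}
together with the bound $\mathrm{Per}(\{U_n > t\}) \leq \Lambda t^{1 - p_n}$, which follows from \eqref{eq:assumtioneleimit} for $t > 1$ and holds trivially for $t \in (1/T,1]$, since then $\{U_n > t\} = \Omega$ as $U_n \geq 1$, so that the relative perimeter in $\Omega$ vanishes.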
For fixed $T > 1$, as $U_n^T \leq T$, we obtain by the compact embedding for mappings of bounded variation ($\mathrm{BV}$) \cite[Theorem 10.1.4]{attouch2014variational}
and the partial converse to the dominated convergence theorem \cite[Proposition 4.2.10]{willem2013functional} that, up to some subsequence in $n$ that depends on $T$, $U_n^T$ converges in \(L^1\) and almost everywhere. Considering a sequence $T_n \to \infty$, a Cantor diagonal argument yields, after the extraction of a subsequence, that for all $T>1$, $U_n^T$ converges almost everywhere to $U^T$, where $U : \Omega \to \bb{R}$ is a measurable map and \(U^T \doteq T^{-1} \vee (T \wedge U)\).
Moreover we have
\begin{equation}
\int_{\Omega} \vert U_n^T - U^T\vert = \int_{1/T}^{T} \,\mathrm{vol}{\{U_n > t\} \Delta \{U > t\}} \d t
\end{equation}
where $\Delta$ refers to the symmetric difference of two sets.
Up to a subsequence we can assume that for almost every \(t \in (0, \infty)\),
\[
\lim_{n \to \infty} \,\mathrm{vol}{\{U_n > t\} \Delta \{U > t\}} = 0.
\]
By $\mathrm{BV}$-theory \cite[Theorem 7.3.2]{willem2013functional}, for almost every $t \in \intvo{0}{\infty}$,
\begin{equation}\label{eq:cjsudgk}
\mathrm{Per}(\{U > t\}) = \int_\Omega |D \chi_{\{U > t\}}| \leq \varliminf_{n \to \infty}\int_\Omega |D \chi_{\{U_{n} >t\}} | = \varliminf_{n \to \infty}\mathrm{Per}(\{U_n > t\}),
\end{equation}
and, for all $t> 0$, \(\mathrm{Per}(\{U > t\})\le \liminf_{s \searrow t} \mathrm{Per}(\{U > s\})\), which concludes the proof.
\end{proof}
\begin{lemma}
\label{lemma:convergfatou}
Let $(X,\mathrm{vol})$ be a measure space.
Assume that $U_n \to U$ in measure, $p_n \in [1,\infty)$ and $p_n \to p\in [1,\infty)$. Then, \begin{equation*}
\sup_{t>0}t^{p}\,\mathrm{vol}{\{|U| > t\}} \leq \varliminf_{n\to \infty}\sup_{t>0}t^{p_n}\,\mathrm{vol}{\{|U_n| > t\}}.
\end{equation*}
\end{lemma}
We omit the proof of lemma \ref{lemma:convergfatou}.
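We only note that it follows from the elementary observation that, for $t > 0$ and $\varepsilon \in (0,t)$, convergence in measure gives $\mathrm{vol}{\{|U| > t\}} \leq \varliminf_{n\to\infty}\,\mathrm{vol}{\{|U_n| > t - \varepsilon\}}$, whence
\begin{equation*}
t^{p}\,\mathrm{vol}{\{|U| > t\}} \leq \Big(\frac{t}{t - \varepsilon}\Big)^{p}\varliminf_{n\to\infty}(t - \varepsilon)^{p_n}\,\mathrm{vol}{\{|U_n| > t - \varepsilon\}},
\end{equation*}
and one concludes by letting $\varepsilon \searrow 0$ and taking the supremum over $t > 0$.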
\begin{proof}[Proof of proposition \ref{prop:mixedboundedness}]
As in the proof of proposition \ref{prop:compactnessofboundedseqs}, we assume without loss of generality that
\begin{equation*}
u_n \in W^{1,p_n}(\Omega,\mathcal N) \cap W^{1,2}_{\mathrm{loc}}(\bar\Omega\setminus \{a_i^n\}_{i = 1,\dots,k^n},\mathcal N)
\end{equation*}
where $\{a_i^n\}_{i = 1,\dots,k^n} \subset \Omega$ are such that
\begin{equation*}
\delta = \inf_n\dist(\{a_i^n\}_{i = 1,\dots,k^n},\partial\Omega) >0.
\end{equation*}
The general case follows by density as in section \ref{subsub:density} in the proof of proposition \ref{prop:compactnessofboundedseqs}.
We first note that
\begin{equation*}
\varlimsup_{n\to \infty}(2 - p_n)\int_\Omega \frac{|Du_n|^{p_n}}{p_n} \leq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0).
\end{equation*}
For each $n$ let $U_n : \Omega \to \{0\} \cup [\sys(\mathcal N)/(2 \pi \delta), \infty]$ be given by proposition \ref{prop:mixedlorentz}.
We then have
\begin{multline*}
\varlimsup_{n \to \infty}\frac{p_n - 1}{p_n}\int_{\Omega}(|Du_n| - U_n)_+^{p_n} \\\leq \varlimsup_{n \to \infty}\frac{(3-p_n)p_n}{2}\int_{\Omega}\frac{|Du_n|^{p_n}}{p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,p_n'}(\Tr_{\partial \Omega}u_0)^{p_n - 1}(2 \pi \delta)^{2 - p_n}}{(p_n - 1)^{p_n - 1} p_n^{2-p_n}}
\end{multline*}
and thus
\begin{multline*}
\varlimsup_{n \to \infty}\int_{\Omega}\frac{(|Du_n| - U_n)_+^{p_n}}{p_n} \leq \varlimsup_{n \to \infty}\left [\int_{\Omega}\frac{|Du_n|^{p_n}}{p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)}{2 - p_n}\right ] \\
+ \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0)\Big ( \big [\mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\partial\Omega}u_0)^{p-1} \big]_{\mathrm{Lip}([3/2,2])}+ \log \frac{1}{\delta}\Big ).
Lemma \ref{lemma:continuite_of_esgp} and the fact that $\sys(\mathcal N)>0$ imply that $\big [\mathcal E_{\mathrm{sg}}^{1,p'}(\Tr_{\partial\Omega}u_0)^{p-1} \big]_{\mathrm{Lip}([3/2,2])}$ is finite.
In addition, by \eqref{eq:weak-Lp}, \eqref{item:perstuff} and lemmas \ref{coro:convergencofpropgrandU} and \ref{lemma:convergfatou},
\begin{align*}
\varlimsup_{n \to \infty}\sup_{t > 0}t^{p_n} \,\mathrm{vol}{\{U_n > t \}} &\leq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0), \\
\varlimsup_{n \to \infty}\sup_{t>0}t^{p_n - 1}\mathrm{Per}(\{U_n > t\}) &\leq \frac{4\pi}{\sys(\mathcal N)}\mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u_0).\qedhere
\end{align*}
\end{proof}
As a corollary (see corollary \ref{coro:erenweakL2} below) of the result of section \ref{sec:mixed}, we obtain that the map $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ resulting from proposition \ref{prop:compactnessthm} has gradient in weak-$L^2$.
\begin{corollary}\label{coro:erenweakL2}
Let $\Omega\subset \bb{R}^2$ be a bounded Lipschitz domain and $\mathcal N$ a Riemannian manifold with $\sys(\mathcal N)> 0$. If $u \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$, then $|Du| \in L^{2,\infty}(\Omega,\bb{R})$. Moreover, there exists a positive $U \in L^{2,\infty}(\Omega,\bb{R})$ such that
\begin{equation*}
\sup_{t > 0}t^{2} \,\mathrm{vol}{\{U > t \}} \leq \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u) \text{ and }
\sup_{t>0}t\,\mathrm{Per}(\{U > t\}) \leq \frac{4\pi}{\sys(\mathcal N)} \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u)
\end{equation*}
and
\begin{multline*}
\int_{\Omega}\frac{(|D u| - U)_+^2}{2}
\leq \mathcal E_{\ren}^{1,2}(u) \\+ \mathcal E_{\mathrm{sg}}^{1,2}(\Tr_{\partial \Omega}u)\Big ( \big [\mathcal E_{\mathrm{sg}}^{1,{p'}}(\Tr_{\partial\Omega}u) \big]_{\mathrm{Lip}([3/2,2])}+ |\log \dist(\{a_i\}_{i = 1,\dots,k},\partial\Omega)|\Big )
\end{multline*}
where $\{a_i\}_{i = 1,\dots,k}\subset \Omega$ are the associated singularities of the renormalizable map $u$.
\end{corollary}
\begin{proof}[Proof of corollary \ref{coro:erenweakL2}]
The sequence defined by $u_n = u$ for all $n$ satisfies the assumption \eqref{eq:upperBoundCompactnessthmII} of proposition \ref{prop:mixedboundedness} by proposition \ref{prop:limitErenpToEren}.
Combining the estimates of proposition \ref{prop:mixedboundedness} with those of lemmas \ref{lemma:fatoirp}, \ref{coro:convergencofpropgrandU} and \ref{lemma:convergfatou}, we obtain the conclusion thanks to proposition \ref{prop:limitErenpToEren}.
\end{proof}
\section{Convergence of minimizers}\label{section:conv_of_mins}
\begin{proposition}\label{prop:conv_of_min}
Let $\Omega\subset \bb{R}^2$ be a bounded Lipschitz domain and $\mathcal N$ a compact Riemannian manifold.
Let $(p_n)_n$ be a sequence satisfying, for each $n$, $p_n \in (1,2)$ and let $(u_n)_n$ be a sequence of minimizing $p_n$-harmonic maps such that, for all $n$, $u_n \in W^{1,p_n}(\Omega, \mathcal N)$ and $\Tr_{\partial \Omega}u_n = \Tr_{\partial \Omega}u_0 \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$.
If $p_n \nearrow 2$, then up to some subsequence $(u_n)_n$ converges almost everywhere to a renormalizable map $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ of trace $\mathrm{Tr}} \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\scauthor}[1]{\textsc{#1}_{\partial \Omega}u_* = \mathrm{Tr}} \newcommand{\Tr}{\mathrm{Tr}} \newcommand{\scauthor}[1]{\textsc{#1}_{\partial \Omega}u_0$. We denote the associated singular points of the renormalizable map $u_*$ by $\{a_i\}_{i = 1,\dots,\kappa} \subset \Omega$. In addition to \ref{item:firsitonfg}--\ref{item:lsc} of proposition \ref{prop:compactnessthm}, we have
\begin{enumerate}[(i)] \setcounter{enumi}{4}
\item $Du_n \to Du_*$ almost everywhere in $\Omega$\label{item:dsjklkjvhsd} and
\label{eq:convergenceofthemass}
\begin{equation*}
\int_\Omega |Du_*|^{p_n} - \int_\Omega |Du_n|^{p_n}\xrightarrow{n\to \infty}0,
\end{equation*}
\item \label{item:propminlim}
\begin{equation*}
\begin{split}
\lim_{n \to \infty} \int_{ \Omega } \frac{|Du_n|^{p_n}}{p_n} -\frac{\mathcal E_{\mathrm{sg}}^{1,2}(\mathrm{Tr}_{\partial \Omega}u_n)}{2 - p_n} & = \mathcal E_{\ren}^{1,2}(u_*) + \mathrm{H}([u_*,a_i])_{i = 1,\dots,\kappa}\\
&=\mathcal{E}^{1, 2}_{\mathrm{top}, \gamma_1, \dotsc, \gamma_\kappa} ([u_*,a_1], \dotsc, [u_*, a_\kappa])
+ \mathrm H ([u_*, a_i])_{i = 1,\dots,\kappa},
\end{split}
\end{equation*}
\item \label{eq:minrenconvmin}
\( \mathcal E_{\ren}^{1,2}(u_*) + \mathrm{H}([u_*,a_i])_{i = 1,\dots,\kappa}
=
\inf\left\{\mathcal E_{\ren}^{1,2}(u) + \mathrm{H}([u,a_i])_{i = 1,\dots,\kappa} : \begin{matrix}u \in W^{1, 2}_{\mathrm{ren}} (\Omega, \mathcal N)\\
\mathrm{Tr}_{\partial \Omega} u = \mathrm{Tr}_{\partial \Omega} u_0
\end{matrix}\right\},\)
\item \label{eq:minrenconvgeom} the charges \(([u_*, a_i])_{i = 1, \dotsc, \kappa}\) and points $(a_i)_{i = 1,\dots,\kappa}$ minimize
the renormalized energy of the configuration of points
\begin{multline}
\mathcal{E}^{1, 2}_{\mathrm{top}, \gamma_1, \dotsc, \gamma_\kappa} ([u_*,a_1], \dotsc, [u_*, a_\kappa])
+ \mathrm H ([u_*, a_i])_{i = 1,\dots,\kappa}\\
=
\inf\Big \{
\mathcal{E}^{1, 2}_{\mathrm{top}, \gamma_1, \dotsc, \gamma_\kappa} (x_1, \dotsc, x_\kappa) + \mathrm H (\gamma_i)_{i = 1,\dots,\kappa}:\\\begin{matrix}
x_1, \dotsc, x_\kappa \in \Omega\\
(\gamma_1,\dotsc, \gamma_\kappa) \text{ is a minimal topological resolution }
\end{matrix}
\end{matrix}
\Big \}
\end{multline}
\item for each $i = 1,\dots,\kappa$, and $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$
\begin{equation}\label{eq:min_conv_trois}
\int_{\B(a_i;\rho)} \frac{|Du_n|^{p_n}}{p_n}- \frac{\lambda([u_*,a_i])^{p_n}}{(2\pi)^{p_n}p_n|\cdot - a_i|^{p_n}} \xrightarrow{n \to\infty} \int_{ \B(a_i;\rho)}\frac{|Du_*|^2}{2} - \frac{\lambda([u_*,a_i])^2}{8\pi^2 |\cdot - a_i|^2}.
\end{equation}
\end{enumerate}
\end{proposition}
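The $1/(2-p)$ blow-up subtracted in \ref{item:propminlim} can be checked by hand on the prototype vortex $u(x) = x/|x|$ with values in $\mathcal N = \mathbb S^1$: one has $|Du|(x) = 1/|x|$, so the $p$-energy of $u$ on the unit disk equals $2\pi/(p(2-p))$, and $(2-p)$ times it tends to $\pi = (2\pi)^2/(4\pi)$ as $p \nearrow 2$, the singular energy of a charge with $\lambda = 2\pi$. The following Python sketch (an illustration only, not part of the argument) verifies the gradient norm by finite differences and displays the limit:

```python
import math

def u(x, y):
    # Prototype vortex x/|x| with values in the unit circle.
    r = math.hypot(x, y)
    return (x / r, y / r)

def grad_norm_sq(x, y, h=1e-6):
    # Squared Frobenius norm of Du via central finite differences.
    s = 0.0
    for i in range(2):
        ux = (u(x + h, y)[i] - u(x - h, y)[i]) / (2 * h)
        uy = (u(x, y + h)[i] - u(x, y - h)[i]) / (2 * h)
        s += ux * ux + uy * uy
    return s

# |Du|^2 = 1/|x|^2: at (0.3, 0.4), |x| = 0.5, so |Du|^2 = 4.
assert abs(grad_norm_sq(0.3, 0.4) - 4.0) < 1e-4

# p-energy on B(0;1) is 2*pi/(p*(2 - p)); (2 - p) times it tends to pi.
for p in (1.9, 1.99, 1.999):
    E_p = 2 * math.pi / (p * (2 - p))
    print(p, (2 - p) * E_p)
```

The printed values approach $\pi \approx 3.1416$, the divergence rate per unit charge that the subtraction in \ref{item:propminlim} removes.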
The conclusion of proposition \ref{prop:conv_of_min} implies that $u_n \to u_*$ in $W^{1,p}(\Omega\setminus \bigcup_{i = 1}^k\B(a_i;\rho),\mathcal N)$, for each $p\in[1,2)$ and even more (see \eqref{eq:amotrer}):
\begin{equation}\label{eq:ciogdvdfhi}
\int_{\Omega\setminus \bigcup_{i = 1}^k\B(a_i;\rho)}|Du_n - Du_*|^{p_n} \xrightarrow{n \to\infty}0.
\end{equation}
We will refer to \eqref{eq:ciogdvdfhi} as strong convergence of the gradients. However, we cannot hope for
\begin{equation}
\label{eq_iecaupho6iechooM2thahPhi}
\varliminf_{n\to\infty}\int_\Omega |Du_n|^{p_n} <+\infty
\end{equation}
which would imply that $W^{1,2}_g(\Omega,\mathcal N)$ is not empty by an application of Fatou's lemma.
Note that even if
\begin{equation}\label{eq:ciogdvdfhiII}
\int_{\Omega}|Du_n - Du_*|^{p_n} \xrightarrow{n \to\infty}0
\end{equation} holds, it would not imply \eqref{eq_iecaupho6iechooM2thahPhi}. Therefore one could wonder if \[\int_{\B(a_i;\rho)}|Du_n - Du_*|^{p_n} \xrightarrow{n \to\infty}0\] holds true near the singularities $a_i$ of the map as \eqref{eq:ciogdvdfhi} already holds true. As the following calculation shows, it could be related with the rate of convergence of singularities of $p$-harmonic maps (in \cite{hardt1987mappings} it is shown that $p$-harmonic mappings are smooth outside a finite number of point) to the points $a_i$ of the renormalized maps. As \eqref{eq:min_conv_trois} suggests, the norm of a derivative of a $p$-harmonic map behaves as $1/|x - a_p|$ near one of its singularities $a_p$ and a renormalized map satisfy $\sup_{x \in \B(a;\rho)}|x - a||Du_*|(x) < \infty$ near on of its singularities $a$ (see proposition \ref{prop:reg_of_ren_map}\ref{item:behviooir_neara pont}). On the other hand we have
\begin{equation}\label{eq:inequlaity}
2^{1 - p}5^{p - 2}\pi\frac{|a_p|^{2 - p}}{2 - p} \leq \int_{\B(0;1)}\left | \frac{1}{|x|} - \frac{1}{|x - a_p|}\right |^p\d x \leq 2^{5}\pi\frac{|a_p|^{2 - p}}{2 - p}
\end{equation}
if $|a_p| < 1/2$.
\begin{proof}[Proof of \eqref{eq:inequlaity}]
Write $a = a_p$.
\textit{Lower bound.} For all $x \in \B(a;|a|/5)$, $|x - a| < |x|/4$ and
\[
\left | \frac{1}{|x|} - \frac{1}{|x - a|}\right |^2 \geq \frac{1}{|x - a|} \left ( \frac{1}{|x - a|} - \frac{2}{|x|}\right ) \geq \frac{1}{2|x - a|^2}
\]
and
\[
\int_{\B(a;|a|/5)}\left | \frac{1}{|x|} - \frac{1}{|x - a|}\right |^p\d x \geq \frac{2\pi|a|^{2 - p}}{5^{2 - p}2^p(2 - p)}.
\]
\textit{Upper bound.} Setting $a_t = (1 - t)\, 0 + t a$ for $t \in (0,1)$, for each $x \in \B(0;1) \setminus \B(a; 2|a|)$ there exists $t \in (0,1)$ such that
\[
\left | \frac{1}{|x|} - \frac{1}{|x - a|}\right | \leq \frac{|a|}{|x- a_t|^2}.
\]
Also,
\begin{align*}
\int_{\B(0;1)}\left | \frac{1}{|x|} - \frac{1}{|x - a|}\right |^p\d x & \leq |a|^p\int_{\B(0;1)\setminus \B(a_t;2|a|)}\frac{\d x}{|x- a_t|^{2p}} + 2^p\int_{\B(0;4|a|)}\frac{\d x}{|x|^p} \\
& \leq \frac{2^{4p}\pi |a|^{2 - p}}{2p - 2} + 2^{p + 1 + 2(2 - p)}\pi \frac{|a|^{2 - p }}{2 - p} . \qedhere
\end{align*}
\end{proof}
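A numerical sanity check of this two-sided bound at one sample exponent (a hypothetical midpoint-rule computation, not part of the argument; the code uses the conservative lower constant $2^{1-p}5^{p-2}\pi$):

```python
# Midpoint rule for the integral over B(0;1) at p = 1.5, a_p = (0.3, 0).
import numpy as np

p, a = 1.5, 0.3
h = 0.002
c = np.arange(-1 + h / 2, 1, h)   # cell centres; avoids both singularities
X, Y = np.meshgrid(c, c)
inside = X**2 + Y**2 <= 1         # restrict the grid to the unit disk
integrand = np.abs(1 / np.hypot(X, Y) - 1 / np.hypot(X - a, Y)) ** p
I = integrand[inside].sum() * h * h

# Conservative lower constant; upper constant 2^5 * pi as in the display.
lower = 2 ** (1 - p) * 5 ** (p - 2) * np.pi * a ** (2 - p) / (2 - p)
upper = 2 ** 5 * np.pi * a ** (2 - p) / (2 - p)
print(lower, I, upper)
assert lower <= I <= upper
```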
For the points $a_i$, $i = 1, \dots,\kappa$, of proposition \ref{prop:conv_of_min}, the following lemma (lemma \ref{lemma:convergenceonalsmosteiioeg}) implies that on almost every circle $\partial \B(a_i;\rho)$ and for all $i = 1, \dots,\kappa$, for $n$ large enough depending on $\rho$, $u_n|_{\partial \B(a_i;\rho)}$ is in the same homotopy class as $u_*|_{\partial \B(a_i;\rho)}$. Since $u_* \in W^{1,2}_\mathrm{loc}(\B(a_i;\rho(\{a_i\}_{i = 1,\dots,\kappa}))\setminus \{a_i\}_{i = 1,\dots,\kappa})$, we obtain that for $n$ large enough $u_n|_{\partial \B(a_i;\rho)} \in [u_*,a_i]$. In particular, for $n$ large enough, $\lambda(\mathrm{Tr}_{\partial \B(a_i;\rho)}u_n) = \lambda([u_*,a_i])$.
\begin{lemma}\label{lemma:convergenceonalsmosteiioeg}
Let $p \in [1,\infty)$ and let $\Omega \subset \bb{R}^2$ be open.
Let $(u_n)_n$ be a sequence converging to a map $u$ in $W^{1,p}(\Omega,\mathcal N)$. Then for all $a \in \Omega$ there exists an unrelabelled subsequence depending on $a$ such that for almost every $\rho \in (0,\dist(a,\partial \Omega))$, $u_n \to u$ uniformly on $\partial \B(a;\rho)$.
\end{lemma}
\begin{proof}[Proof of lemma \ref{lemma:convergenceonalsmosteiioeg}] Fix $a \in \Omega$.
Integration in polar coordinates combined with the partial converse of the Lebesgue dominated convergence theorem yields a subsequence of $(u_n)_n$ that converges in $W^{1,p}(\partial \B(a;\rho),\mathcal N)$ for almost every $\rho \in (0,\dist(a,\partial\Omega))$. By the Sobolev embedding in dimension $1$, along this subsequence $u_n \to u$ uniformly on $\partial \B(a;\rho)$.
\end{proof}
\subsection{Uniform convexity of the \texorpdfstring{$p\nearrow 2$}{p↗2} \texorpdfstring{$L^p$}{Lᵖ}-convergence}
We will obtain the strong convergence of the gradients using the following lemma, which concerns the notion of convergence given by
\begin{equation*}
\int_X |u_p - u|^p \xrightarrow{p \nearrow 2} 0.
\end{equation*}
For a fixed exponent, this would be a consequence of the \emph{uniform convexity} of Lebesgue spaces \cite{clarkson1936uniformly}\cite[Theorem 5.4.2]{willem2013functional}.
\begin{lemma}\label{lemma:uniformconvexitypq}
Let $X$ be a measure space. Let $v_p \in L^p(X,\bb{R}^\mu)$ for $p \in [1,\infty)$ and $v\in L^q(X,\bb{R}^\mu)$, for some $q \in (1,\infty)$. If
\begin{equation}
\lim_{p \to q}\int_X |v_p|^p=\int_X |v|^q \text{ and } \label{eq:weakSum} 2^q\int_X |v|^q \leq \varliminf_{p\to q}\int_X|v + v_p|^p,
\end{equation}
then
\begin{equation*}
\lim_{p \to q}\int_X |v_p - v|^p =0.
\end{equation*}
\end{lemma}
The proof of lemma \ref{lemma:uniformconvexitypq} relies on Hanner's inequality \cite{hanner1956uniform}\cite[Theorem 4.1.9(c)]{willem2013functional} (see \eqref{eq:hanner} in the proof).
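The inequality \eqref{eq:hanner} below is Hanner's inequality applied to the pair $f = v + v_n$, $g = v - v_n$, for which $\|f + g\|_{p_n}^{p_n} = 2^{p_n}\|v\|_{p_n}^{p_n}$ and $\|f - g\|_{p_n}^{p_n} = 2^{p_n}\|v_n\|_{p_n}^{p_n}$. A quick numerical check of Hanner's inequality itself on a discrete measure space (an illustration only):

```python
import random

def norm_p(v, p):
    # L^p norm on the discrete measure space {1, ..., len(v)}.
    return sum(abs(t) ** p for t in v) ** (1 / p)

# Hanner's inequality for 1 <= p <= 2:
# ||f+g||^p + ||f-g||^p >= (||f|| + ||g||)^p + | ||f|| - ||g|| |^p.
random.seed(0)
for _ in range(1000):
    p = random.uniform(1.0, 2.0)
    f = [random.gauss(0, 1) for _ in range(5)]
    g = [random.gauss(0, 1) for _ in range(5)]
    lhs = norm_p([s + t for s, t in zip(f, g)], p) ** p \
        + norm_p([s - t for s, t in zip(f, g)], p) ** p
    rhs = (norm_p(f, p) + norm_p(g, p)) ** p \
        + abs(norm_p(f, p) - norm_p(g, p)) ** p
    assert lhs >= rhs - 1e-9
```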
\begin{proof}[Proof of lemma \ref{lemma:uniformconvexitypq}]
We assume $q \in (1,2]$ and write $\|\cdot \|_r$ for $\| \cdot \|_{L^r(X,\bb{R}^\mu)}$. Since
\begin{equation*}
\varlimsup_{p\to q}\|v_p - v\|_p \leq 2\|v\|_q <\infty,
\end{equation*}
we consider a sequence $p_n \to q$ such that the superior limit is actually a limit, which we denote $\eta \geq 0$. We further assume that $p_n \leq q \leq 2$. For each $n$, Hanner's inequality for exponents below $2$ implies
\begin{equation}\label{eq:hanner}
2^{p_n} (\|v_n\|_{p_n}^{p_n} + \|v\|_{p_n}^{p_n}) \geq \big (\|v + v_n\|_{p_n} + \|v - v_n\|_{p_n}\big)^{p_n} + \big |\|v + v_n\|_{p_n} - \|v - v_n\|_{p_n}\big|^{p_n}.
\end{equation}
Letting $n \to \infty$, we obtain
\begin{equation*}
2\|v\|_{q}^{q} \geq \Big (\|v\|_q + \frac{\eta}{2}\Big)^q + \Big |\|v\|_q - \frac{\eta}{2}\Big|^q.
\end{equation*}
By lemma \ref{lemma:sublemma} below, this forces $\eta = 0$ when $\|v\|_q \neq 0$, as $\eta$ is nonnegative; when $\|v\|_q = 0$, the first assumption in \eqref{eq:weakSum} directly yields $\eta = 0$. If $q \geq 2$ or $p_n \geq q$, one proceeds in the same way, using Hanner's inequality for exponents above $2$ where needed.
\end{proof}
\begin{lemma}
\label{lemma:sublemma}
If $p \in (1,2]$ and if $x\geq 0$ satisfies $2 \geq (1 + x)^p + |1 - x|^p$, then $x = 0$.
\end{lemma}
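Lemma \ref{lemma:sublemma} reflects the strict convexity of $t \mapsto |t|^p$ for $p > 1$: the function $h(x) = (1 + x)^p + |1 - x|^p$ satisfies $h(0) = 2$ and $h(x) > 2$ for every $x > 0$. A numerical illustration (not a proof):

```python
# h(x) = (1 + x)^p + |1 - x|^p equals 2 at x = 0 and exceeds 2 for x > 0,
# so the constraint 2 >= h(x) in the lemma forces x = 0.
def h(x, p):
    return (1 + x) ** p + abs(1 - x) ** p

for p in (1.1, 1.5, 2.0):
    assert h(0.0, p) == 2.0
    for x in (1e-3, 0.1, 0.5, 1.0, 2.0, 10.0):
        assert h(x, p) > 2.0
```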
\subsection{Proof of convergence of minimizers (proposition \ref{prop:conv_of_min})}
\begin{proof}[Proof of proposition \ref{prop:conv_of_min}]
By proposition \ref{prop:upperBound}, the sequence $(u_n)_n$ satisfies the assumption of the compactness proposition \ref{prop:compactnessthm}. This shows the existence of the limit map $u_* \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ with trace $\mathrm{Tr}_{\partial \Omega}u_* = \mathrm{Tr}_{\partial \Omega}u_0$. We denote by $\{a_i\}_{i = 1,\dots,\kappa} \subset \Omega$ the associated singular points of the renormalizable map $u_*$.
Let $v \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$ be such that $\mathrm{Tr}_{\partial \Omega}v = \mathrm{Tr}_{\partial \Omega}u_0$ and such that its singularities $\{a_i^v\}_{i = 1,\dots,k} \subset \Omega$ satisfy, for all $\rho \in (0,\rho(\{a_i^v\}_{i = 1,\dots, k}))$,
\begin{equation*}
\mathcal E_{\mathrm{sg}}^{1,2}(\mathrm{Tr}_{\partial \Omega}v) = \sum_{i = 1}^k \frac{\lambda(\mathrm{Tr}_{\partial \B(a_i^v;\rho)}v)^2}{4\pi}.
\end{equation*}
Hence, by \ref{eq:scicompactnesslog} in proposition \ref{prop:compactnessthm}, the minimality assumption and the continuity proposition \ref{prop:limitErenpToEren}, we get
\begin{multline}\label{eq:scicompactnessmin}
\mathcal E_{\ren}^{1,2}(u_*) - \sum_{i = 1}^\kappa \frac{\lambda([u_*, a_i])^2}{4\pi}\log\frac{\lambda([u_*, a_i])}{2\pi} \\\leq \varliminf_{n \to \infty} \int_{ \Omega } \frac{|Du_n|^{p_n}}{p_n} -\frac{\mathcal E_{\mathrm{sg}}^{1,2}(\mathrm{Tr}_{\partial \Omega}u_n)}{2 - p_n} - \frac{\mathcal E_{\mathrm{sg}}^{1,2}(\mathrm{Tr}_{\partial \Omega} u_*)}{2} \\\leq \mathcal E_{\ren}^{1,2}(v)- \sum_{i = 1}^k \frac{\lambda([v, a_i^v])^2}{4\pi}\log\frac{\lambda([v, a_i^v])}{2\pi}.
\end{multline}
This shows that $u_*$ has the minimizing property \ref{eq:minrenconvmin}.
To get \ref{item:propminlim}, we combine the integral representation of the renormalized energy (proposition \ref{prop:polarCoordEren}) and \ref{eq:minrenconvmin} in proposition \ref{prop:compactnessthm}: for all $v \in W^{1,2}_\mathrm{ren}(\Omega,\mathcal N)$
\begin{multline}
\mathcal E_{\ren}^{1,2}(u_*) + \mathrm H([u_*,a_i])_{i = 1,\dots,\kappa} \leq \int_{\Omega \setminus \bigcup_{i = 1}^\kappa \B(a_i;\rho)} \frac{|Dv|^2}{2} - \log \frac{1}{\rho} \sum_{i = 1}^\kappa \frac{\lambda([v,a_i])^2}{4\pi} \\+ \sum_{i =1}^\kappa \int_0^\rho\left [\int_{\partial \B(a_i;r)}\frac{|Dv|^2}{2}\d \HH^1 - \frac{\lambda([v, a_i])^2}{4\pi r} \right] \d r.
\end{multline}
Minimizing over $u$ as in the definition \eqref{eq_def_renorm_top} of the renormalized geometrical energy and letting $\rho\searrow 0$, we obtain $\mathcal E_{\ren}^{1,2}(u_*) + \mathrm H([u_*,a_i])_{i = 1,\dots,\kappa} \leq \mathcal E^{1,2}_{\mathrm{top}, [u_*,a_1], \dotsc, [u_*,a_\kappa]}(v) + \mathrm H([u_*,a_i])_{i = 1,\dots,\kappa}$. The reverse inequality is given by proposition \ref{prop:compactnessthm}\ref{eq:scicompactnesslog}.
The assertion \ref{eq:minrenconvgeom} follows from the arbitrariness of $v$.
Let us now show that for all $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$,
\begin{equation} \label{eq:amotrer}
\varlimsup_{n \to \infty}\int_{\Omega \setminus \bigcup_{i = 1}^k \B(a_i;\rho)} |Du_{n}|^{p_n} \leq \int_{\Omega \setminus\bigcup_{i = 1}^k \B(a_i;\rho)} |Du_{*}|^{2}
\end{equation}
because then we will get the strong convergence of the gradients in the sense of \eqref{eq:ciogdvdfhi} in view of lemma \ref{lemma:uniformconvexitypq} and the weak convergence of $Du_n$ given by proposition \ref{prop:compactnessthm}.
This implies, by the partial converse of the Lebesgue dominated convergence theorem, that $Du_n \to Du_*$ almost everywhere.
Let us observe that, by the integral expression of the renormalized energy (proposition \ref{prop:polarCoordEren}) and \eqref{eq:scicompactnessmin} evaluated at $v = u_*$,
\begin{IEEEeqnarray}{rCl}
\int_{\Omega\setminus \bigcup_{i = 1}^k\B(a_i;\rho)}\frac{|Du_*|^2}{2} &=& \mathcal E_{\ren}^{1,2}(u_*) + \sum_{i = 1}^k \frac{\lambda([u_*,a_i])^2}{4\pi}\log\frac{1}{\rho} \\
\notag&&\quad\quad - \sum_{i = 1}^\kappa\int_0^\rho \left [\int_{\partial \B(a_i;r)}\frac{|Du_*|^2}{2}\d \HH^1 - \frac{\lambda([u_*,a_i])^2}{4\pi r}\right ]\d r \\
\notag&=&\label{eq:fjcàsdjkhgf}\lim_{n \to \infty}\int_\Omega \frac{|Du_n|^{p_n}}{p_n} - \sum_{i = 1}^\kappa \frac{\lambda([u_*,a_i])^{p_n}}{2\pi p_n }\frac{(2\pi)^{2 - p_n}}{2 - p_n} \\
\notag&&\quad\quad+ \lim_{n\to \infty}\sum_{i = 1}^\kappa \frac{\lambda([u_*,a_i])^{p_n}}{2 p_n\pi}\frac{(2\pi)^{2 - p_n} - (2\pi \rho)^{2 - p_n}}{2 - p_n}\\
\notag&&\quad\quad - \sum_{i = 1}^\kappa\int_0^\rho \left [\int_{\partial \B_r(a_i)}\frac{|Du_*|^2}{2}\d \HH^1 - \frac{\lambda([u_*,a_i])^2}{4\pi r}\right ]\d r.
\end{IEEEeqnarray}
Hence,
\begin{IEEEeqnarray}{rCl}\label{eq:limsupineq}
\int_{\Omega \setminus \bigcup_{i = 1}^k\B(a_i;\rho)}\frac{|Du_*|^2}{2}&\geq& \varlimsup_{n\to \infty}\int_{\Omega \setminus \bigcup_{i =1 }^k\B(a_i;\rho)}\frac{|Du_n|^{p_n}}{p_n} \\
&&\nonumber\quad\quad\quad + \varliminf_{n\to \infty} \int_{\bigcup_{i = 1}^\kappa\B(a_i;\rho)} \frac{|Du_n|^{p_n}}{p_n}- \sum_{i = 1}^\kappa\frac{\lambda([u_*,a_i])^{p_n}}{2p_n\pi }\frac{(2\pi \rho)^{2 - p_n}}{2 - p_n} \\
&&\nonumber\quad\quad\quad - \sum_{i = 1}^\kappa\int_0^\rho \left [\int_{\partial \B(a_i;r)}\frac{|Du_*|^2}{2}\d \HH^1 - \frac{\lambda([u_*,a_i])^2}{4\pi r}\right ]\d r. \\
&\geq&\nonumber \varlimsup_{n\to \infty}\int_{\Omega \setminus \bigcup_{i =1 }^k\B(a_i;\rho)}\frac{|Du_n|^{p_n}}{p_n},
\end{IEEEeqnarray}
where we used for the last inequality the limit \eqref{eq:presdessingu}.
Moreover, we have equality in \eqref{eq:limsupineq} by the weak convergence of the weak derivatives $Du_n$. Therefore, we deduce \eqref{eq:min_conv_trois}, as \eqref{eq:presdessingu} allows us to localize on balls $\B(a_i;\rho)$ for $\rho \in (0,\rho(\{a_i\}_{i = 1,\dots,\kappa}))$.
By proposition \ref{prop:limitErenpToEren}, and by \eqref{eq:limsupineq},
\begin{align}
\mathcal E_{\ren}^{1,2}(u_*) & = \lim_{n\to \infty}\int_\Omega \frac{|Du_*|^{p_n}}{p_n} - \sum_{i = 1}^\kappa\frac{\lambda([u_*,a_i])^{p_n}}{(2\pi)^{p_n - 1} p_n}\frac{1 - \rho^{2 - p_n}}{2 - p_n}, \\
& = \lim_{n \to \infty}\int_\Omega \frac{|Du_n|^{p_n}}{p_n} - \sum_{i = 1}^\kappa \frac{\lambda([u_*,a_i])^{p_n}}{(2\pi)^{p_n - 1} p_n }\frac{1 - \rho^{2 - p_n}}{2 - p_n}, \label{eq:dkshcsrjhg}
\end{align}
and thus we get \ref{eq:convergenceofthemass}, concluding the proof.
\end{proof}
\section{The case of non-compact manifolds}\label{sec:whattodononcomapct}
We point out the assumptions on a complete Riemannian manifold $\mathcal N$ under which the present method of proof applies without changing one iota.
\begin{enumerate}[(i)]
\item (\emph{Positive systole}) The manifold $\mathcal N$ should have a positive systole $\sys(\mathcal N)$. This is crucial to obtain the continuity of $\mathcal E_{\mathrm{sg}}^{1,p}(g)$ in $p$ (see lemma \ref{lemma:continuite_of_esgp}) and to count the balls in the proof of proposition \ref{prop:compactnessthm}, see \eqref{eq:systolecrucial}.
\item (\emph{Nonemptiness of $W^{1,2}_{\mathrm{ren},g}$}) For $g \in W^{\sfrac{1}{2},2}(\partial\Omega, \mathcal N)$, the set $W^{1,2}_{\mathrm{ren},g}(\Omega,\mathcal N)$ should not be empty; proposition \ref{prop:wrenpeutetreempty} shows that it can be empty in the non-compact case. This assumption guarantees the existence of competitors, see proposition \ref{prop:upperBound}. One can check that the manifold $\mathcal N \doteq \mathbb S^1 \times \bb{R}$ and the map $g = \mathrm{Id}_{\mathbb S^1}\times 0$ verify $W^{1,2}_{\mathrm{ren},g}(\B(0,1),\mathcal N) \neq \Oset$.
\item (\emph{$W^{1,2}$-extension of the boundary data}) There exists a $\delta > 0$ such that for the boundary datum $g \in W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$ one can construct a map $U \in W^{1,2}(\partial \Omega_\delta,\mathcal N)$ such that $\mathrm{Tr}_{\partial \Omega}U = g$, where $\partial \Omega_\delta$ denotes $\{x \in \bb{R}^2 : \dist(x,\partial \Omega) < \delta\}$. This replaces proposition \ref{prop:non-surjectivity_of_the_trace} and holds, for instance, if $g \in L^\infty\cap W^{\sfrac{1}{2},2}(\partial \Omega, \mathcal N)$ by proposition \ref{prop:non-surjectivity_of_the_trace}.
\end{enumerate}
\bibliographystyle{abbrv}
\section{Introduction}
PKS 2155-304 is the archetypical X-ray-selected BL Lac object (XBL). It is one of the
brightest BL Lacs at x-ray through optical wavelengths where it has a relatively featureless
continuum and displays rapid, large amplitude variability. This continuum is thought
to be direct synchrotron emission from a distribution of ultra-relativistic electrons which
extends to unusually high energies \cite{E1}.
The gamma-ray emission from PKS 2155-304 constitutes a second, separate, spectral component.
Observations with the EGRET telescope aboard the Compton Gamma Ray Observatory (CGRO) show that the
spectral energy distribution of this gamma-ray component must peak at energies above 10 GeV \cite{V1}. This,
plus the realization that the extention of the synchrotron component into the x-ray band meant that
ambient photons would be scattered to TeV energies, led to predictions \cite{V1,S1} that PKS 2155-304 would be a
detectable TeV gamma ray source.
The University of Durham group has recently reported the discovery of TeV gamma ray emission from
PKS 2155-304 \cite{C1,C2}. The TeV emission was detected in 1996 September and 1997 October/November, with
the largest fluxes being measured in 1997 November. During 1997 November, we detected a record high GeV gamma-ray
flux from PKS 2155-304 with CGRO/EGRET \cite{K1} and subsequently very high x-ray fluxes were measured
with BeppoSAX \cite{C3}. Here we report, for the first time, on the record x-ray fluxes measured with
the Rossi X-Ray Timing Explorer (RXTE) during the GeV/TeV outburst.
\section{X-Ray Observations}
In November 1997, after our detection of an extremely high GeV gamma-ray flux from
PKS 2155-304, we began a short series of target of opportunity observations with RXTE to
determine the x-ray properties of the source during the gamma-ray flare. Specifically, we made Proportional
Counter Array (PCA) and High Energy X-ray Timing Experiment (HEXTE) observations of nominal 2.5 ksec duration
on 20, 21, and 22 November 1997. Our analysis of those data indicate that the x-ray
flux on 20 and 21 November 1997 was $F(2-10\ $keV$)=2.3 \times 10^{-10}$ erg cm$^{-2}$s$^{-1}$.
By 17:00 UT on 22 November, when BeppoSAX measured its highest flux value \cite{C4}, the flux had slightly
dropped to $1.6 \times 10^{-10}$ erg cm$^{-2}$s$^{-1}$. The 20-21 November x-ray fluxes measured by RXTE
are the highest ever observed in the 2-10 keV band for PKS 2155-304.
The x-ray spectral shapes we measured during the 1997 November flare show downward curvature consistent
with the idea that the synchrotron component is rolling off at keV energies.
If we fix the column depth at the galactic value $n_H=1.36 \times 10^{20}$ cm$^{-2}$ \cite{L1},
then the 2.5-30.0 keV spectrum measured by the PCA on 20 November 1997 at 22:44-23:39 UT cannot be fit with a single
power law. However, a good fit is obtained if we use a broken power law
with a low energy photon index $\alpha_{L}=2.51(\pm0.08)$, a high-energy photon index $\alpha_{H}=3.04(\pm0.03)$, and a
break energy $E_{b}=4.01(\pm0.22)$ keV. The 21 November measurements taken at 15:16-15:41 UT show a similar
spectrum with $\alpha_{L}=2.72(\pm0.08)$, $\alpha_{H}=3.06(\pm0.04)$, and $E_{b}=4.33(\pm0.40)$\ keV.
While yielding a slightly lower flux, the 22 November measurements taken at 17:00-17:33 UT are also
well fit by a broken power law but with parameters: $\alpha_H=2.98(\pm0.03)$, $\alpha_L=2.20(+0.30/-1.41)$
and a break energy $E_b=3.50(\pm0.45)$ keV.
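These fits use a broken power-law photon model: $N(E) \propto E^{-\alpha_L}$ below the break and $\propto E^{-\alpha_H}$ above it, with the upper branch normalized so that the two branches meet at $E_b$. A short Python sketch of the model shape (the overall normalization $K$ is arbitrary here, not a fitted value):

```python
def broken_power_law(E, K, alpha_L, alpha_H, E_b):
    """Photon flux N(E) for a broken power law that is continuous at the
    break energy E_b (the normalization K is arbitrary in this sketch)."""
    if E < E_b:
        return K * E ** (-alpha_L)
    return K * E_b ** (alpha_H - alpha_L) * E ** (-alpha_H)

# Indices and break from the 20 November 1997 PCA fit; K = 1 is arbitrary.
pars = dict(K=1.0, alpha_L=2.51, alpha_H=3.04, E_b=4.01)

# The two branches agree at the break, so the model is continuous there.
lo = broken_power_law(pars["E_b"] - 1e-9, **pars)
hi = broken_power_law(pars["E_b"] + 1e-9, **pars)
assert abs(lo - hi) / lo < 1e-6
```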
To act as control observations, we examined PCA measurements of PKS 2155-304 made when either the
TeV or GeV gamma-ray fluxes were known to be low. During 30 December 1997-13 January 1998 we made
follow-up EGRET observations of the GeV emission from PKS 2155-304 and marginally detected GeV flux
at a level approximately a factor of four smaller than the November 1997 peak flux. Our
simultaneous PCA observations on 9-11 January 1998 measured 2-10 keV fluxes that had decreased
by a factor of seven from those observed on 20-21 November 1997. Unlike the November 1997 spectra,
the 2.5-15.0 keV spectrum measured on 9 January 1998 at 2:59-3:40 UT can be acceptably fit
($\chi^{2}_{\nu}=0.95$ for 32 d.o.f.) using the galactic column depth and a single power law
having photon index $\alpha=2.83(\pm0.04)$. Chadwick et al. \cite{C1,C2} report detection of significant TeV flux from
PKS 2155-304 in September 1996; however, the TeV flux apparently decreased and they were unable
to detect it in October or November 1996. While we do not have simultaneous TeV and x-ray observations,
the contemporaneous PCA observations made on 14 November 1996 show an x-ray flux which is a factor of
five smaller than those observed during the November 1997 TeV gamma-ray flare. Our observations are
therefore consistent with the pattern of correlated x-ray and gamma-ray flux outbursts observed
in the two well-studied TeV emitting XBLs, Mrk 421 and Mrk 501 \cite{B1}.
\begin{figure*}
\centerline{\epsfig{file=vestrand_fig1.ps,width=2.5in,height=3in,
bbllx=0pt, bblly=180pt,bburx=400pt,bbury=580pt,angle=-90}}
\vspace{3cm}
\caption{PKS 2155-304 x-ray count spectra and folded photon models derived from RXTE/PCA measurements
taken on 20 November 1997 (top) and 9 January 1998 (bottom).}
\label{f1}
\end{figure*}
Measurements taken with the All-Sky Monitor (ASM) aboard the RXTE satellite also suggest a correlation
between elevated x-ray and gamma-ray emission. While substantially less sensitive than the PCA, the broad
field of view of the ASM provides much better temporal coverage and is sensitive enough
to detect major flaring activity from PKS 2155-304. Comparison of monthly ASM counting rates derived
by averaging over days when TeV observations were made with 5 months of TeV gamma-ray monthly counting
rates indicates a positive correlation \cite{C1}. Our GeV gamma-ray measurements with EGRET suggest
that the correlation between gamma-ray and x-ray flux exists on even shorter timescales (see Figure 2). Subdivision
of the November 1997 EGRET observations indicates that the bulk of the $>$100 MeV emission was detected during
11-14 November. While the statistics are poor, measurements by the ASM hint at a strong x-ray flare
on 12-13 November simultaneous with the GeV flare and perhaps a second smaller flare on 19-20 November. Since the TeV gamma-ray
and pointed x-ray observations did not begin until the 19th and 20th respectively, we suspect that they
missed an even larger outburst on 12-13 November.
\begin{figure*}
\centerline{\epsfig{file=vestrand_fig2.ps,width=2.5in,height=3in,
bbllx=200pt, bblly=180pt,bburx=450pt,bbury=580pt,angle=90}}
\vspace{3cm}
\caption{A comparison of the average daily count rate in the 2-10 keV x-ray band measured by RXTE/ASM
with the gamma-ray flux measured above 100 MeV by CGRO/EGRET. Plotted are measurements of PKS 2155-304
taken during November 1997. The shaded horizontal bars indicate the time intervals when TeV gamma-ray and
pointed x-ray observations were made.}
\label{f2}
\end{figure*}
\section{Concluding Remarks}
The available PKS 2155-304 data show a correlation between the 2-10 keV x-ray outbursts and GeV/TeV
gamma-ray outbursts. The ASM on RXTE has demonstrated the importance of all-sky x-ray monitoring as a trigger for
gamma-ray studies of XBLs and other blazars. The utility of this technique is currently limited by the sensitivity
and duty cycle of the ASM. We expect that the launching of MOXE $-$ an all-sky x-ray monitor
which is a factor of four more sensitive than the ASM and has a duty cycle of nearly unity \cite{I1} $-$ and GLAST, the next generation
GeV gamma-ray telescope, in conjunction with the construction of the ground-based VERITAS array will initiate an exciting new era of
blazar study.
\section{Introduction}
Geometric topology is an inherently algorithmic subject, with
fundamental questions such as the \emph{homeomorphism problem}
(find an algorithm to determine whether two given spaces
are topologically equivalent) and the \emph{identification problem}
(find an algorithm to determine the topological name and/or structure
of a given space). Three-dimensional topology is of particular
interest, since in lower dimensions such problems become trivial
\cite{massey91}, and in higher dimensions they become unsolvable
\cite{markov60-insolubility}.
Throughout this paper we restrict our attention to \emph{closed
3-manifolds}. In essence, a closed $3$-manifold is a compact
$3$-dimensional topological space that locally looks like $\mathbb{R}^3$ at
every point.
Much recent progress has been made on algorithms in 3-manifold topology.
For example:
\begin{itemize}
\item Rubinstein gave an algorithm in 1992 for recognising
the simplest of all closed 3-manifolds, namely the
3-sphere \cite{rubinstein95-3sphere,rubinstein97-3sphere};
this algorithm has been refined several times since
\cite{burton09-quadoct,jaco03-0-efficiency,thompson94-thinposition}.
\item In 1995, Jaco and Tollefson gave an algorithm for breaking a
3-manifold down into a connected sum decomposition (essentially a
topological ``prime decomposition'')
\cite{jaco95-algorithms-decomposition}.
\item Perelman's proof of the geometrisation conjecture in 2002
finally resolved the general homeomorphism problem for 3-manifolds,
completing a programme initiated decades earlier by pioneers
such as Haken \cite{haken62-homeomorphism} and
Thurston \cite{thurston82-geometrisation}.
The full homeomorphism algorithm is a fusion of diverse
and complex components, including both the 3-sphere recognition and
connected sum decomposition algorithms above.
\end{itemize}
A recurring theme in these algorithms (and many others) is that they rely upon
\emph{normal surface theory}, a tool that allows us to convert difficult
topology problems into simpler linear programming problems.
In particular, we can search for an interesting surface within
a 3-manifold by (i)~constructing a high-dimensional polytope,
(ii)~enumerating the ``admissible'' vertices of this polytope,
and then (iii)~testing each admissible vertex to see whether it
encodes the interesting surface that we are searching for.\footnote{%
Some other algorithms (such as knot genus \cite{hass99-knotnp} and
Heegaard genus \cite{li10-genus})
replace step~(ii) with the more difficult enumeration of
a Hilbert basis for a polyhedral cone, yielding what are
known as \emph{fundamental surfaces}.}
The concept of an ``interesting surface'' depends on the application at
hand. For instance, in the connected sum decomposition algorithm we
search for embedded spheres within our 3-manifold;
in other algorithms we might search for non-trivial embedded discs
\cite{haken61-knot} or embedded incompressible surfaces \cite{jaco84-haken}.
However, in all of these applications the high-dimensional
polytope and its admissible vertices remain the same. That is,
the polytope vertex enumeration problem is a \emph{common component}
for all of these topological algorithms and many others besides.
Furthermore, this common vertex enumeration problem is in fact the
computational bottleneck for many of these algorithms
\cite{burton09-quadoct,burton09-ws}.
It is therefore important to improve the efficiency and
understand the complexity of this vertex enumeration problem,
since any improvements or results will have a widespread impact on
computational 3-manifold
topology as a whole. This impact also extends beyond three
dimensions---for instance, in \emph{4-manifold topology}, to
understand whether a given triangulation represents a 4-manifold
we require all of the complex machinery of 3-sphere recognition as
discussed above.
In general, polytope vertex enumeration is difficult.
The general problem is known to be NP-hard
\cite{dyer83-complexity,khachiyan08-hard},
and the range of available algorithms is matched by a range of
pathological cases that exploit their weaknesses \cite{avis97-howgood-compgeom}.
However, in our context we have two advantages:
\begin{itemize}
\item We are not dealing with an arbitrary polytope, but rather one
that derives from the machinery of normal surface theory; this polytope
is known as the \emph{projective solution space}. Such polytopes
have additional constraints on their dimensions and the equalities
and inequalities that define them.
\item We do not need to enumerate all vertices of the polytope, but
only the \emph{admissible vertices}. These are the vertices
that satisfy an additional family of non-linear constraints,
known as the \emph{quadrilateral constraints}.
\end{itemize}
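In standard coordinates, a normal surface in an $n$-tetrahedron triangulation is described by $7n$ non-negative coordinates (four triangle types and three quadrilateral types per tetrahedron), and the quadrilateral constraints require that at most one of the three quadrilateral coordinates be non-zero in each tetrahedron. The following Python sketch of the admissibility test (with a hypothetical flat encoding of the coordinate vector) makes the constraint concrete:

```python
def is_admissible(v, n):
    """Check the quadrilateral constraints for a standard coordinate
    vector v of length 7n: coordinates 7i..7i+3 are the triangle types
    and 7i+4..7i+6 the quadrilateral types of tetrahedron i
    (a hypothetical encoding chosen for this illustration)."""
    assert len(v) == 7 * n
    for i in range(n):
        quads = v[7 * i + 4 : 7 * i + 7]
        if sum(1 for q in quads if q != 0) > 1:
            return False
    return True

# Two tetrahedra with one non-zero quad type each: admissible.
assert is_admissible([1, 0, 0, 0, 2, 0, 0, 0, 1, 1, 0, 0, 3, 0], 2)
# Two distinct quad types in the first tetrahedron: not admissible.
assert not is_admissible([0, 0, 0, 0, 1, 1, 0] + [0] * 7, 2)
```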
These contextual advantages can be exploited in vertex enumeration
algorithms with great success; see
\cite{burton09-convert,burton09-quadoct,burton10-dd,tollefson98-quadspace}
for details. Nevertheless, the enumeration problem remains a difficult
one. In particular, Agol et~al.\ \cite{agol02-knotgenus} show that
determining knot genus---yet another problem that employs normal
surface theory---is in fact NP-complete.
In this paper we concern ourselves with the \emph{complexity} of the
enumeration problem. More specifically, we focus on the
\emph{number of admissible vertices} of the projective solution space,
which we denote by $\sigma$. This quantity is important for the
following reasons:
\begin{itemize}
\item The admissible vertex count $\sigma$ gives a lower bound for
the time complexity of vertex enumeration. Moreover, for the
quadrilateral-to-standard conversion algorithm (a key component of
the current state-of-the-art enumeration algorithm), there is
strong evidence to suggest that the running time is in fact a
low-degree polynomial in $\sigma$ \cite{burton09-convert}.
\item Each admissible vertex corresponds to a surface in our
3-manifold upon which we must run some subsequent test. For some
problems (such as Hakenness testing \cite{burton09-ws,jaco84-haken})
this test is extremely expensive, and so the number of admissible
vertices becomes a critical factor in the overall time complexity.
\end{itemize}
The input for a typical normal surface algorithm is a \emph{3-manifold
triangulation}, formed from $n$ tetrahedra by joining their $4n$ faces
together in pairs. We call $n$ the \emph{size} of the triangulation;
not only does $n$ represent the complexity of the input, but both the
dimension and the number of facets of the projective solution space are
linear in $n$.
The growth of $\sigma$ as a function of $n$ is currently
not well understood. The only general theoretical bound in the literature
is $\sigma \leq 128^n$, proven by Hass et~al.\ \cite{hass99-knotnp};
in the special case of a one-vertex triangulation
this has been improved to $\sigma \in O(15^n)$ \cite{burton10-dd}.
Very little is known about the growth of $\sigma$ in practice, though
initial observations suggest that $\sigma$ is in fact far smaller
\cite{burton09-convert}. For example, in the proof that the
Weber-Seifert dodecahedral space is non-Haken (one of the first
significant computer proofs to employ normal surface theory),
a ``typical'' triangulation of size $n=23$ is found to generate just
$\sigma=1751$ admissible vertices \cite{burton09-ws}.
In this paper we shed more light on the growth of $\sigma$, including
new theoretical bounds and comprehensive practical experimentation.
Following a brief outline of normal surface theory in
Section~\ref{s-prelim}, we present the following results:
\begin{itemize}
\item In Section~\ref{s-theory} we show that $\sigma \in O(\phi^{7n})$,
where $\phi$ is the golden ratio
$(1+\sqrt{5})/2$. This tightens the general theoretical
bound on $\sigma$ from $128^n$ to just over $O(29^n)$. We prove this
by extending McMullen's upper bound theorem
\cite{mcmullen70-ubt} to show that any convex polytope with
$k$ facets must have $O(\phi^k)$ vertices.
We push this bound from the other direction in Section~\ref{s-extreme} by
constructing an infinite family of 3-manifold triangulations
for which $\sigma = 17^{n/4} + n/4$. This yields the first known
family for which $\sigma$ is exponential in $n$, and disproves an
earlier conjecture of the author that $\sigma \in O(2^n)$.
By extending this family to all $n > 5$ we show that any theoretical upper
bound must grow at least as fast as
$\Omega(17^{n/4}) \simeq \Omega(2.03^n)$.
\item In Section~\ref{s-practice} we build a comprehensive census of
\emph{all} 3-manifold triangulations of size $n \leq 9$, and measure
$\sigma$ for each of the $\sim 150$~million triangulations that ensue.
We find a remarkably slow growth rate---for $n > 5$ the
worst cases are precisely the infinite family above, suggesting that
the lower limit of $\Omega(17^{n/4}) \simeq \Omega(2.03^n)$
may in fact be tight.
In the average case the mean $\overline{\scount}$ appears to grow even more slowly,
with an apparent growth rate below $\phi^n$ and a final mean of
just $\overline{\scount} \simeq 78.49$ for $n = 9$.
This analysis is the first of its kind, primarily because the complex
algorithms and software required for such a comprehensive study did
not exist until very recently \cite{burton07-nor10,burton09-convert}.
Previous censuses have focused on restricted classes of
triangulations (such as minimal triangulations of irreducible
or hyperbolic manifolds
\cite{burton07-nor10,callahan99-cuspedcensus,martelli01-or9,matveev05-or11}),
and previous measurements of $\sigma$ have been for isolated or ad-hoc
collections of cases \cite{burton09-convert,burton09-ws,matsumoto00-fig8}.
\end{itemize}
Throughout this paper we work with Haken's original formulation of
normal surface theory \cite{haken61-knot,haken62-homeomorphism}.
Tollefson defines an alternative formulation called \emph{quadrilateral
coordinates} \cite{tollefson98-quadspace},
which is only applicable for some problems but where the
polytope becomes much simpler. In quadrilateral coordinates an upper
bound of $\sigma \leq 4^n$ can be obtained through an
analysis of \emph{zero sets} \cite{burton10-dd}, but again the growth
rate is found to be significantly slower in practice. We address
quadrilateral coordinates in detail in the full version of this paper.
\section{Preliminaries} \label{s-prelim}
Throughout this paper we assume that we are working with a
\emph{3-manifold triangulation of size $n$}. By this we mean a collection
of $n$ tetrahedra, some of whose $4n$ faces are affinely identified (or
``glued together'') in pairs so that the resulting topological space
is a 3-manifold (possibly with boundary). If all $4n$ faces are
identified in $2n$ pairs then we obtain a closed 3-manifold; otherwise
we obtain a \emph{triangulation with boundary}, and the unidentified
faces become \emph{boundary faces}. Unless otherwise specified, all
triangulations in this paper are of closed 3-manifolds.
There is no need for a 3-manifold triangulation to be rigidly
embedded in some larger space---tetrahedra can be ``bent'' or
``stretched''. Moreover, we allow multiple vertices of the
same tetrahedron to be identified as a result of our face gluings,
and likewise with edges. This allows us to build
triangulations using very few tetrahedra, which becomes useful for computation.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{s2xs1}
\hspace{2cm}
\includegraphics[scale=0.5]{s2xs1-normal}
\caption{A 3-manifold triangulation and an embedded normal surface}
\label{fig-s2xs1}
\end{figure}
To illustrate, the left-hand diagram of Figure~\ref{fig-s2xs1} shows a
triangulation of the product space $S^2 \times S^1$ using just
$n=2$ tetrahedra---the back two faces of each tetrahedron are identified
with a twist, and the front two faces of the left tetrahedron are
identified directly with the front two faces of the right tetrahedron.
All eight vertices become identified together, and the 12 edges become
identified in three distinct classes (represented in the diagram by three
different types of arrowhead). We say that the resulting triangulation has
\emph{one vertex} and \emph{three edges}.
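The bookkeeping behind such identifications is mechanical: each face gluing
identifies triples of tetrahedron vertices, and the vertex classes of the
triangulation are the equivalence classes of the resulting relation. The
sketch below is a toy illustration only (it is not {\emph{Regina}}'s
implementation, and the gluing data shown is hypothetical rather than the
$S^2 \times S^1$ gluings above); it counts vertex classes with a union-find
structure over the $4n$ tetrahedron vertices:

```python
class DSU:
    """Union-find over the 4n tetrahedron vertices."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)


def count_vertex_classes(n_tet, gluings):
    """Count identified vertex classes of a triangulation.

    Each gluing is ((t1, verts1), (t2, verts2)), where verts1 and verts2
    are triples of vertex labels (0..3) spanning the two glued faces,
    listed so that verts1[i] is identified with verts2[i].
    """
    dsu = DSU(4 * n_tet)
    for (t1, v1), (t2, v2) in gluings:
        for a, b in zip(v1, v2):
            dsu.union(4 * t1 + a, 4 * t2 + b)
    return len({dsu.find(x) for x in range(4 * n_tet)})


# Toy data: glue face (0,1,2) of tetrahedron 0 directly to face (0,1,2)
# of tetrahedron 1; the 8 tetrahedron vertices fall into 5 classes.
print(count_vertex_classes(2, [((0, (0, 1, 2)), (1, (0, 1, 2)))]))  # 5
```

Edge classes can be counted the same way, by running the union-find over
(tetrahedron, edge) pairs instead of vertices.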
Normal surfaces were introduced by Kneser \cite{kneser29-normal}, and
further developed by Haken \cite{haken61-knot,haken62-homeomorphism}
for use in algorithms. A \emph{normal surface} is a
2-dimensional surface embedded within a 3-manifold triangulation
that meets each tetrahedron in a (possibly empty) collection of
\emph{triangles} and/or \emph{quadrilaterals}, as illustrated
in Figure~\ref{fig-normaldiscs}. For example, a normal surface within
our $S^2 \times S^1$ triangulation is shown on the right-hand side of
Figure~\ref{fig-s2xs1}; as a consequence of the tetrahedron gluings,
the six triangles and quadrilaterals join together to form a
2-dimensional sphere.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{normaldiscs}
\caption{Normal triangles and quadrilaterals within a tetrahedron}
\label{fig-normaldiscs}
\end{figure}
There are four distinct \emph{types} of triangle and three distinct
\emph{types} of quadrilateral within each tetrahedron (defined by which edges
of the tetrahedron they meet). The \emph{vector representation} of a
normal surface is a collection of $7n$ integers counting the number of
pieces of each type in each tetrahedron; from this vector in $\mathbb{R}^{7n}$
we can completely reconstruct the original surface. We treat surfaces
and their vectors interchangeably (so, for instance,
``adding'' two surfaces means adding their two vectors and
reconstructing a new surface from the result).
An early result of Haken is a set of necessary and sufficient conditions
for a vector to represent a normal surface: (i)~all coordinates must be
non-negative; (ii)~the vector must satisfy a set of linear homogeneous
equations (the \emph{matching equations}); and (iii)~there can be at most
one non-zero quadrilateral coordinate corresponding to each tetrahedron (the
\emph{quadrilateral constraints}). Vectors that satisfy all of these
conditions are called \emph{admissible}.
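For a concrete illustration, conditions (i)--(iii) can be tested mechanically
for any candidate vector. The following sketch is illustrative only: the
coordinate layout (four triangle types followed by three quadrilateral types
per tetrahedron) is an assumed convention, and the matching equations are
passed in as an arbitrary coefficient matrix rather than derived from a
triangulation.

```python
def is_admissible(v, matching_matrix, n):
    """Check Haken's three admissibility conditions for a vector v of
    length 7n.  Assumed layout: tetrahedron i contributes v[7i:7i+4]
    for its four triangle types and v[7i+4:7i+7] for its three
    quadrilateral types.  matching_matrix is a list of length-7n rows."""
    # (i) all coordinates non-negative
    if any(x < 0 for x in v):
        return False
    # (ii) the matching equations: a homogeneous linear system Mv = 0
    for row in matching_matrix:
        if sum(r * x for r, x in zip(row, v)) != 0:
            return False
    # (iii) quadrilateral constraints: at most one non-zero
    #       quadrilateral coordinate within each tetrahedron
    for i in range(n):
        quads = v[7 * i + 4 : 7 * i + 7]
        if sum(1 for q in quads if q != 0) > 1:
            return False
    return True
```

Note that condition (iii) is non-linear, which is why the admissible vectors
do not simply form a polytope of their own.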
Jaco and Oertel \cite{jaco84-haken} define the \emph{projective solution
space} to be the polytope in $\mathbb{R}^{7n}$
obtained as a cross-section of the cone defined by (i) and (ii) above.
A \emph{vertex normal surface} is a normal surface whose vector lies on an
extremal ray of this cone and is not a multiple of any smaller normal surface
vector.
The vertex normal surfaces are in bijection with the admissible vertices of
the projective solution space; we let $\sigma$ denote
the number of vertex normal surfaces, and we call $\sigma$ the
\emph{admissible vertex count}.
The enumeration of vertex normal surfaces is a critical component---and
often the computational bottleneck---of many important topological
algorithms. This is because one can often prove that,
if an interesting surface exists (such as an incompressible surface or
an essential sphere), then one must appear as a vertex normal surface.
See Hass et~al.\ \cite{hass99-knotnp}
for a more detailed introduction to normal surface theory and its
role in computational topology.
\section{Theoretical Bounds} \label{s-theory}
As noted in the introduction, the best bound known to date
for the admissible vertex count is $\sigma \leq 128^n$, proven by
Hass et~al.\ \cite{hass99-knotnp}. We begin by tightening this
exponential bound as follows:
\begin{theorem} \label{t-ubound}
Let $\phi = (1+\sqrt{5})/2$. Then the admissible vertex count
$\sigma$ is bounded above by $O(\phi^{7n}) \simeq O(29.03^n)$.
\end{theorem}
We prove this through a simple extension of McMullen's upper bound
theorem \cite{mcmullen70-ubt}. McMullen gives a tight bound on the
number of vertices for a convex polytope with $k$ facets and $d$
dimensions; we extend this here to a loose bound that covers all
possible dimensions.
\begin{lemma} \label{l-ubound-fib}
Let $F_0=0$, $F_1=1$, $F_2=1$, \ldots\ represent the Fibonacci
sequence, where $F_{i+2} = F_{i+1} + F_i$. Then for any $k \geq 3$,
a convex polytope with precisely $k$ facets has $\leq F_{k+1}$ vertices.
\end{lemma}
\begin{proof}
Suppose the polytope $P$ is $d$-dimensional with precisely $k$ facets.
Then McMullen's theorem (taken in dual form) shows that $P$ has at most
\begin{equation} \label{eqn-ubt}
\binom{k - \lfloor \frac{d+1}{2} \rfloor}{k - d} +
\binom{k - \lfloor \frac{d+2}{2} \rfloor}{k - d}
\quad = \quad
\binom{k - \lfloor\frac{d+1}{2}\rfloor}{d - \lfloor\frac{d+1}{2}\rfloor} +
\binom{k - \lfloor\frac{d+2}{2}\rfloor}{d - \lfloor\frac{d+2}{2}\rfloor}
\end{equation}
vertices.\footnote{This is the number of facets of the cyclic
$d$-dimensional polytope with $k$ vertices \cite{grunbaum03}.}
For even $d$ this can be rewritten as
$\binom{k-a}{a} + \binom{(k-2)-b}{b}$ for suitable integers $a,b$,
and for odd $d$ it can be rewritten as $2\binom{(k-1)-a}{a}$
for a suitable integer $a$.
We now claim that $\binom{k-a}{a} \leq F_k$ for any $k,a$ with
$k \geq 1$. This is easily established for $k=1,2$, and the full
claim follows from the inductive step
$\binom{k-a}{a} = \binom{k-1-a}{a} + \binom{k-1-a}{a-1}
= \binom{(k-1)-a}{a} + \binom{(k-2)-(a-1)}{a-1}
\leq F_{k-1} + F_{k-2} = F_k$.
From here our lemma is straightforward. If $d$ is even then
the number of vertices of $P$ is at most
$\binom{k-a}{a} + \binom{(k-2)-b}{b} \leq F_k + F_{k-2}
\leq F_k + F_{k-1} = F_{k+1}$,
and if $d$ is odd then the number of vertices is at most
$2\binom{(k-1)-a}{a} \leq 2 F_{k-1} \leq F_k + F_{k-1} = F_{k+1}$.
\end{proof}
Unlike McMullen's result, Lemma~\ref{l-ubound-fib} is not tight.
Nevertheless, it gives us a very good\footnote{Experimentation shows
that this asymptotic upper bound of $\phi^k \simeq 1.618^k$ is close
to optimal. If we maximise equation~(\ref{eqn-ubt})
over all $d$ for each $k=100,\ldots,200$, the maximum grows at a rate
of approximately $1.613^k$.}
asymptotic upper bound of $O(\phi^k)$,
which is enough to prove our main theorem.
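Both Lemma~\ref{l-ubound-fib} and the experiment described in the footnote
are easy to reproduce numerically. The following sketch (illustrative only)
maximises equation~(\ref{eqn-ubt}) over all dimensions $d$ and confirms that
the maximum never exceeds $F_{k+1}$:

```python
from math import comb

def fib(k):
    # Returns the Fibonacci number F_k, with F_0 = 0 and F_1 = 1.
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def ubt_bound(k, d):
    # McMullen's bound on the number of vertices of a
    # d-dimensional convex polytope with k facets.
    return (comb(k - (d + 1) // 2, k - d) +
            comb(k - (d + 2) // 2, k - d))

# For each k, maximise over all dimensions d and compare with F_{k+1}.
for k in range(3, 60):
    assert max(ubt_bound(k, d) for d in range(1, k + 1)) <= fib(k + 1)
```

The bound is tight at $k=3$ (a triangle has three facets and
$F_4 = 3$ vertices) but loosens as $k$ grows, in line with the footnote's
observed growth rate of roughly $1.613^k$.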
\begin{proof}[Proof of Theorem~\ref{t-ubound}]
The facets of the projective solution space in $\mathbb{R}^{7n}$
are defined by the $7n$ inequalities $x_1 \geq 0$, \ldots, $x_{7n} \geq 0$,
and so there are at most $7n$ facets in total. Lemma~\ref{l-ubound-fib}
then shows that the projective solution space has at most
$F_{7n+1}$ vertices, and so $\sigma \leq F_{7n+1}$.
Using the standard formula $F_k = \lfloor\phi^k/\sqrt{5}+\frac12\rfloor$
it follows that $\sigma \in O(\phi^{7n})$.
\end{proof}
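Since $\phi^7 \simeq 29.03 < 128$, the bound $F_{7n+1}$ improves on the
earlier bound of $128^n$ for every $n \geq 1$; a quick numerical check
(an illustrative sketch only):

```python
def fib(k):
    # Returns the Fibonacci number F_k, with F_0 = 0 and F_1 = 1.
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

# sigma <= F_{7n+1}, which grows like phi^{7n} ~ 29.03^n,
# strictly below the earlier bound of 128^n.
for n in range(1, 15):
    assert fib(7 * n + 1) < 128 ** n
```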
It is interesting to note that Theorem~\ref{t-ubound} makes no use of
admissibility---this suggests that, although the bound of $\phi^{7n}$
is a strong improvement on $128^n$, this bound is still very loose.
We confirm this through experimentation in Section~\ref{s-practice}.
Although we only consider closed 3-manifolds in this paper, it
should be noted that Theorem~\ref{t-ubound} and its proof apply equally
well to triangulations with boundary, and also to the
\emph{ideal triangulations} of Thurston \cite{thurston78-lectures}.
\section{Extreme Cases} \label{s-extreme}
Having tightened the upper bound from above, we now turn our attention
to limiting the upper bound from below. We do this by building
pathological triangulations for which
$\sigma \in \Theta(17^{n/4}) \simeq \Theta(2.03^n)$.
This growth rate shows that an exponential upper bound on $\sigma$ is
unavoidable, and furthermore disproves
an earlier conjecture of the author that $\sigma \in O(2^n)$.
We begin by describing 4-blocks, which are small building blocks that appear
repeatedly throughout our triangulations. Using these building blocks,
we then construct the family of pathological triangulations
$\mathcal{X}_1,\mathcal{X}_2,\ldots$.
\begin{defn}[4-block]
A \emph{4-block} is a triangulation with boundary,
built from the four tetrahedra $\Delta_1,\Delta_2,\Delta_3,\Delta_4$
using the following construction.
\begin{figure}[htb]
\centering
\includegraphics{pillow}
\caption{The two-tetrahedron triangular pillow
at the centre of a 4-block}
\label{fig-pillow}
\end{figure}
We begin by folding together two faces of $\Delta_1$, and then
wrapping $\Delta_2$ around the remaining two faces as illustrated
in Figure~\ref{fig-pillow}. This forms a \emph{triangular pillow}
with three vertices, three boundary edges, two internal edges,
and two boundary faces.
\begin{figure}[htb]
\centering
\includegraphics{block}
\caption{Building a 4-block from two tetrahedra and a
triangular pillow}
\label{fig-block}
\end{figure}
Next we fold together two faces of $\Delta_3$ and two faces of
$\Delta_4$, as illustrated in the leftmost column of
Figure~\ref{fig-block}. To finish, we join the pillow to
both $\Delta_3$ and $\Delta_4$ as illustrated in the central column
of Figure~\ref{fig-block}---the upper face $A_1B_1A_2$ of the pillow
is glued to the lower face $A_3B_2A_3$ of $\Delta_3$, and the
lower face $A_1B_1A_2$ of the pillow is glued to the upper face
$A_4B_3A_4$ of $\Delta_4$.
The final result is shown in the rightmost column of
Figure~\ref{fig-block}, with three boundary vertices and one
internal vertex. The triangular pillow is buried in the middle of
this structure, wrapped around the internal vertex; for simplicity
the two edges inside the pillow are not shown.
\end{defn}
\begin{defn}[Pathological triangulation $\mathcal{X}_k$]
For each integer $k \geq 1$, the \emph{pathological triangulation}
$\mathcal{X}_k$ is constructed from $n=4k$ tetrahedra in the following manner.
From these $4k$ tetrahedra we build $k$ distinct 4-blocks, labelled
$\mathcal{B}_1,\ldots,\mathcal{B}_k$. Within each 4-block $\mathcal{B}_i$ we label the
three boundary vertices $P_i,Q_i,R_i$, where $P_i$ sits between both
boundary triangles as illustrated in Figure~\ref{fig-path}.
\begin{figure}[htb]
\centering
\includegraphics{path}
\caption{Building the pathological triangulation $\mathcal{X}_k$
from $k$ distinct 4-blocks}
\label{fig-path}
\end{figure}
For each $i=1,\ldots,k$ we join blocks $\mathcal{B}_i$ and $\mathcal{B}_{i+1}$
as follows (where $\mathcal{B}_{k+1}$ is taken to mean $\mathcal{B}_1$).
Triangle $P_iP_iR_i$ is joined to triangle $Q_{i+1}P_{i+1}P_{i+1}$;
note that this is ``twisted'', not a direct gluing, since it maps
$P_i \leftrightarrow Q_{i+1}$ and $P_{i+1} \leftrightarrow R_i$.
There are in fact two ways this gluing can be performed (one a
reflection of the other); we resolve this ambiguity by orienting each
block consistently, and then choosing the gluing that preserves
orientation.
An effect of these gluings is to identify all of the $P_i$, $Q_i$
and $R_i$ to a single vertex, so that
$\mathcal{X}_k$ has $k+1$ vertices in total (counting also the
$k$ internal vertices, one from each original block).
\end{defn}
It is not immediately clear that each $\mathcal{X}_k$ is a 3-manifold triangulation (in
particular, that $\mathcal{X}_k$ looks like $\mathbb{R}^3$ in the vicinity of each
vertex). The following sequence of results proves this by showing that
every $\mathcal{X}_k$ is in fact a triangulation of the 3-sphere.
\begin{lemma} \label{l-block}
A 4-block is a triangulation of the 3-ball (i.e., the solid
3-dimensional ball), with
a boundary consisting of two triangles in the formation shown in
Figure~\ref{fig-blockbdry}.
\end{lemma}
\begin{figure}[htb]
\centering
\includegraphics{blockbdry}
\caption{A 3-ball whose boundary consists of two triangles}
\label{fig-blockbdry}
\end{figure}
\begin{proof}
This is evident from the construction in Figure~\ref{fig-block}.
It can also be verified computationally using the software
package {\emph{Regina}} \cite{regina}, which implements 3-sphere and
3-ball recognition \cite{burton04-regina}.
\end{proof}
\begin{lemma} \label{l-ball-join}
Let $\mathcal{T}_1$ and $\mathcal{T}_2$ each be triangulations of the 3-ball
with boundaries in the formation shown in Figure~\ref{fig-blockbdry}.
If we identify one boundary triangle of $\mathcal{T}_1$ with one boundary
triangle of $\mathcal{T}_2$ under any of the six possible identifications,
the result is always another triangulation of the 3-ball
with boundary in the formation shown in Figure~\ref{fig-blockbdry}.
\end{lemma}
\begin{lemma} \label{l-ball-wrap}
Let $\mathcal{T}$ be a triangulation of the 3-ball with boundary in the
formation shown in Figure~\ref{fig-blockbdry}. If we
identify the two boundary triangles under any of the three
possible orientation-preserving identifications, the result is
always a closed 3-manifold triangulation of the 3-sphere.
\end{lemma}
\begin{proof}
Both of these results are essentially properties of 3-manifolds,
not their underlying triangu\-lations---if they hold for any selection
of triangulations $\mathcal{T}_1,\mathcal{T}_2,\mathcal{T}$ then they must hold for all
such selections. We verify these results using {\emph{Regina}} by
choosing 4-blocks for our triangulations and testing all
six (respectively three) possible identifications.
\end{proof}
Since each $\mathcal{X}_k$ is built by joining
together 4-blocks along boundary triangles in an orientation-preserving
fashion, the following result follows
immediately from Lemmata~\ref{l-block}--\ref{l-ball-wrap}.
\begin{corollary}
For each $k \geq 1$, $\mathcal{X}_k$ is a closed 3-manifold triangulation
of the 3-sphere.
\end{corollary}
We turn our attention now to counting the vertex normal surfaces for
each triangulation $\mathcal{X}_k$. Recalling that $k=n/4$,
the following result shows that for these pathological triangulations
we have $\sigma \in \Theta(17^{n/4}) \simeq \Theta(2.03^n)$.
\begin{lemma} \label{l-worst-4k}
For each $k \geq 1$, $\mathcal{X}_k$ has precisely
$\sigma = 17^k + k$ vertex normal surfaces.
\end{lemma}
\begin{proof}
%
Consider a single 4-block with boundary vertices labelled $P,Q,R$
as before, and let $S$ denote the internal vertex.
Define $\alpha$, $\beta$ and $\gamma$ to be small loops on the
4-block boundary surrounding $P$, $Q$ and $R$ respectively,
as illustrated in Figure~\ref{fig-curves}.
\begin{figure}[htb]
\centering
\includegraphics{curves}
\caption{The curves $\alpha,\beta,\gamma$ on the boundary of a
4-block}
\label{fig-curves}
\end{figure}
Using the software package {\emph{Regina}}, we can construct the
projective solution space for this 4-block. There are
17 admissible vertices in total, corresponding to 17 vertex
normal surfaces: one with empty boundary, and 16 whose boundary
consists of some combination of $\alpha$, $\beta$ and $\gamma$.
These surfaces are summarised in Table~\ref{tab-dd-block},
and we label them $\s{a},\s{b},\ldots,\s{q}$ as shown.
\begin{table}[htb]
\small
\centering
\begin{tabular}{c|c|l}
\textbf{Label} & \textbf{Boundary} &
\multicolumn{1}{|c}{\textbf{Description}} \\
\hline
$\s{a}$ & --- & Small sphere around internal vertex $S$ \\
$\s{b}$ & $\baraw\phantom{{}+\bbraw+\bcraw}$ & Small disc around boundary vertex $P$ \\
$\s{c}$ & $\phantom{\baraw+{}}\bbraw\phantom{{}+\bcraw}$ & Small disc around boundary vertex $Q$ \\
$\s{d}$ & $\phantom{\baraw+\bbraw+{}}\bcraw$ & Small disc around boundary vertex $R$ \\
$\s{e}$ & $\baraw\phantom{{}+\bbraw+\bcraw}$ & Tube from $P$ to $S$, closed around $S$ \\
$\s{f}$ & $\phantom{\baraw+{}}\bbraw\phantom{{}+\bcraw}$ & Tube from $Q$ to $S$, closed around $S$ \\
$\s{g}$ & $\phantom{\baraw+\bbraw+{}}\bcraw$ & Tube from $R$ to $S$, closed around $S$ \\
$\s{h}$ & $\baraw+\bbraw\phantom{{}+\bcraw}$ & Tube from $P$ to $Q$ via $S$, open at both ends \\
$\s{i}$ & $\baraw\phantom{{}+\bbraw}+\bcraw$ & Tube from $P$ to $R$ via $S$, open at both ends \\
$\s{j}$ & $\phantom{\baraw+{}}\bbraw+\bcraw$ & Tube from $Q$ to $R$ via $S$, open at both ends \\
$\s{k}$ & $\baraw+\bbraw+\bcraw$ & Forked tube joining all of $P,Q,R$ via $S$, open
at all three ends \\
$\s{l}$ & $\baraw\phantom{{}+\bbraw+\bcraw}$ & Surface $\s{b}$ with large ``balloon'' disc attached
inside the pillow \\
$\s{m}$ & $\baraw\phantom{{}+\bbraw+\bcraw}$ & Surface $\s{b}$ with punctured torus attached
inside the pillow \\
$\s{n}$ & $\baraw\phantom{{}+\bbraw+\bcraw}$ & Surface $\s{e}$ with punctured torus attached
inside the pillow \\
$\s{o}$ & $\baraw+\bbraw\phantom{{}+\bcraw}$ & Surface $\s{h}$ with punctured torus attached
inside the pillow \\
$\s{p}$ & $\baraw\phantom{{}+\bbraw}+\bcraw$ & Surface $\s{i}$ with punctured torus attached
inside the pillow \\
$\s{q}$ & $\baraw+\bbraw+\bcraw$ & Surface $\s{k}$ with punctured torus attached
inside the pillow
\end{tabular}
\caption{The $17$ vertex normal surfaces within a 4-block}
\label{tab-dd-block}
\end{table}
It is important to note that $\s{a},\s{b},\ldots,\s{q}$ are all
\emph{compatible}; that is, no combination of their vectors can
ever violate the quadrilateral constraints.\footnote{This is
because, within each tetrahedron, we observe that two of the three
quadrilateral types never appear \emph{anywhere} amongst the surfaces
$\s{a},\s{b},\ldots,\s{q}$.}
This is an unusual
but extremely helpful state of affairs, since we can effectively
ignore the quadrilateral constraints from here onwards.
Now consider the full set of 4-blocks $\mathcal{B}_1,\ldots,\mathcal{B}_k$;
let $\s{a}_i,\s{b}_i,\ldots,\s{q}_i$ denote the corresponding surfaces in
$\mathcal{B}_i$, and let $\alpha_i,\beta_i,\gamma_i$ denote the
corresponding boundary curves.
Any normal surface in $\mathcal{X}_k$ is a union of normal
surfaces in $\mathcal{B}_1,\ldots,\mathcal{B}_k$, and hence can be expressed as
\begin{equation*}
(\lambda_{1,1}\, \s{a}_1 + \ldots + \lambda_{1,17}\, \s{q}_1) +
\ldots +
(\lambda_{k,1}\, \s{a}_k + \ldots + \lambda_{k,17}\, \s{q}_k)
\end{equation*}
for some family of constants
$\lambda_{1,1},\ldots,\lambda_{k,17} \geq 0$.
In this form, it can be shown\footnote{The argument uses the
facts that curves $\alpha_i,\beta_i,\gamma_i$ surround
vertices $P_i,Q_i,R_i$ respectively, and that all of these vertices
are identified together in the overall triangulation $\mathcal{X}_k$.}
that the matching equations for $\mathcal{X}_k$
reduce to the following statement:
\begin{quote}
There is some non-negative $\mu \in \mathbb{R}$ such that,
for every $i$, the sum
$\lambda_{i,1}\, \s{a}_i + \ldots + \lambda_{i,17}\, \s{q}_i$
has boundary $\mu\alpha_i+\mu\beta_i+\mu\gamma_i$.
\end{quote}
In other words, the portion of the overall surface within each
4-block $\mathcal{B}_i$ must have boundary
$\mu\alpha_i+\mu\beta_i+\mu\gamma_i$, where $\mu$ is independent of $i$.
Return now to a single 4-block with admissible
vertices $\s{a},\ldots,\s{q}$, and let
$\lambda_{1}\, \s{a} + \ldots + \lambda_{17}\, \s{q}$
be some point in the projective solution space for this 4-block.
We can ensure that the corresponding surface has boundary of the
form $\mu\alpha + \mu\beta + \mu\gamma$ by imposing the following
constraints:\footnote{Each line in these constraints corresponds to
a section of the \emph{Boundary} column in Table~\ref{tab-dd-block}.}
\[ \small
\begin{array}{l@{\quad}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}l@{}}
& \lambda_{2}+{} & & & \lambda_{5}+{} & & & \lambda_{8}+{} &
\lambda_{9}+{} & & \lambda_{11}+{} & \lambda_{12}+{} &
\lambda_{13}+{} & \lambda_{14}+{} & \lambda_{15}+{} &
\lambda_{16}+{} & \lambda_{17} \\
= & & \lambda_{3}+{} & & & \lambda_{6}+{} & & \lambda_{8}+{} &
& \lambda_{10}+{} & \lambda_{11}+{} & & & & \lambda_{15}+{} &
& \lambda_{17} \\
= & & & \lambda_{4}+{} & & & \lambda_{7}+{} & & \lambda_{9}+{} &
\lambda_{10}+{} & \lambda_{11}+{} & & & & & \lambda_{16}+{} &
\lambda_{17}
\end{array} \]
This has the effect of intersecting the original projective solution
space for the 4-block with two new hyperplanes.
A standard application of the filtered double description method
\cite{burton10-dd} shows that the resulting polytope has 18 admissible
vertices, described by the following 18 normal surfaces:
the original $\s{a}$ with no boundary, and 17 new surfaces%
\footnote{These are the six surfaces
$(\s{c}+\s{g},\ \s{d}+\s{f},\ \mathrm{or}\ \s{j}) +
(\s{b}\ \mathrm{or}\ \s{l})$, the five surfaces
$\s{c}+\s{d} + (\s{b},\ \s{e},\ \s{l},\ \s{m},\ \mathrm{or}\ \s{n})$,
and the six surfaces
$\s{c}+\s{i}$,
$\s{c}+\s{p}$,
$\s{d}+\s{h}$,
$\s{d}+\s{o}$,
$\s{k}$ and $\s{q}$.}
all with boundary $\alpha+\beta+\gamma$.
Within each block $\mathcal{B}_i$, we label these 17 new surfaces
$\s{v}_{i,1},\ldots,\s{v}_{i,17}$.
Given the formulation of the matching equations above, it follows
that the normal
surfaces in $\mathcal{X}_k$ are described completely by the
linear combinations
\[ \rho_{1,1}\, \s{v}_{1,1} + \ldots + \rho_{k,17}\, \s{v}_{k,17}
+ \eta_1 \s{a}_1 + \ldots + \eta_k \s{a}_k, \]
where each $\rho_{i,j},\eta_i \geq 0$ and where
$\sum_j \rho_{1,j} = \sum_j \rho_{2,j} = \ldots = \sum_j \rho_{k,j}$.
The full projective solution space for $\mathcal{X}_k$ therefore has
$17^k + k$ admissible vertices, corresponding to the
$k$ surfaces $\s{a}_1,\ldots,\s{a}_k$ and the $17^k$ combinations
$\s{v}_{1,j_1} + \s{v}_{2,j_2} + \ldots + \s{v}_{k,j_k}$ for
$j_1,j_2,\ldots,j_k \in \{1,\ldots,17\}$.
\end{proof}
The pathological triangulations $\mathcal{X}_1,\mathcal{X}_2,\ldots$ cover all
sizes of the form $n=4k$. We can generalise this construction to include
$n=4k+1$, $4k+2$ and $4k+3$ by replacing one of our 4-blocks with a
single ``exceptional'' block. The general constructions and analyses are
detailed in the full version of this paper, and the results are summarised
in the following theorem.
\begin{theorem} \label{t-worst-cases}
For every positive $n \neq 1,2,3,5$, there exists a closed 3-manifold
triangulation of size $n$ whose admissible vertex count is as follows:
\begin{equation} \label{eqn-worst-cases}
\begin{array}{ll@{\quad\Longrightarrow\quad}l}
n = 4k & (k \geq 1) & \sigma = 17^k + k \\
n = 4k+1 & (k \geq 2) & \sigma = 581 \cdot 17^{k-2} + k + 1 \\
n = 4k+2 & (k \geq 1) & \sigma = 69 \cdot 17^{k-1} + k \\
n = 4k+3 & (k \geq 1) & \sigma = 141 \cdot 17^{k-1} + k + 2
\end{array}
\end{equation}
\end{theorem}
Lemma~\ref{l-worst-4k} proves this result for the first
case $n=4k$. For an extra measure of verification,
equation~(\ref{eqn-worst-cases}) has been confirmed numerically for
all $n \leq 14$ by building the relevant triangulations and using
{\emph{Regina}} to enumerate all vertex normal surfaces.
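The case analysis of equation~(\ref{eqn-worst-cases}) is straightforward to
encode. The following sketch (illustrative only) evaluates the worst-case
count for a given $n$, and checks it against the census maxima reported in
Section~\ref{s-practice} (18, 70, 144, 291 and 584 for $n = 4, 6, 7, 8, 9$):

```python
def sigma_worst(n):
    """Worst-case admissible vertex count from the theorem above,
    defined for positive n not in {1, 2, 3, 5}."""
    k, r = divmod(n, 4)
    if r == 0 and k >= 1:
        return 17 ** k + k
    if r == 1 and k >= 2:
        return 581 * 17 ** (k - 2) + k + 1
    if r == 2 and k >= 1:
        return 69 * 17 ** (k - 1) + k
    if r == 3 and k >= 1:
        return 141 * 17 ** (k - 1) + k + 2
    raise ValueError("undefined for n = 1, 2, 3, 5")

# Consistency with the census maxima for n = 4 and 6..9:
for n, census_max in [(4, 18), (6, 70), (7, 144), (8, 291), (9, 584)]:
    assert sigma_worst(n) == census_max
```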
The main result of this section is the following limit on any upper
bound for $\sigma$, which follows immediately from
Theorem~\ref{t-worst-cases}. Moreover, as we discover in the following
section, there is reason to believe that this may in fact
give the tightest possible asymptotic bound.
\begin{corollary}
Any upper bound for the admissible vertex count $\sigma$
must grow at a rate of at least $\Omega(17^{n/4}) \simeq \Omega(2.03^n)$.
\end{corollary}
\section{Practical Growth} \label{s-practice}
We turn now to a comprehensive study of the admissible vertex count
$\sigma$ for real 3-manifold triangulations. The basis of this study
is a complete census of \emph{all} closed 3-manifold triangulations
of size $n \leq 9$. This is a significant undertaking, and such a
census has never been compiled before; the paper \cite{burton07-nor10}
details some of the sophisticated algorithms involved.
The result is a collection of $149\,676\,922$ triangulations, each
counted once up to \emph{isomorphism} (a relabelling of tetrahedra
and their vertices). It is worth noting that within this large collection of
triangulations there is a much smaller number of distinct 3-manifolds,
as indicated by the 3-manifold census data of Martelli and Petronio
\cite{martelli01-or9} and the author \cite{burton07-nor10}.
For each of these $\sim 150$ million triangulations we enumerate all
vertex normal surfaces using the algorithms described in
\cite{burton09-convert,burton10-dd}.
The resulting admissible vertex counts $\sigma$ are summarised in
Table~\ref{tab-stats}. All computations were performed using the
software package {\emph{Regina}} \cite{regina,burton04-regina}.
\begin{table}[htb]
\centering
\small
\newlength{\statwidth}
\settowidth{\statwidth}{Std dev}
\begin{tabular}{r|r|p{\statwidth}|p{\statwidth}|p{\statwidth}|p{\statwidth}}
\multicolumn{1}{c|}{Number of} & \multicolumn{1}{c|}{Number of} &
\multicolumn{4}{c}{Admissible vertex count ($\sigma$)} \\
\multicolumn{1}{c|}{tetrahedra ($n$)} &
\multicolumn{1}{c|}{triangulations} &
\multicolumn{1}{c|}{Mean} & \multicolumn{1}{c|}{Std dev} &
\multicolumn{1}{c|}{Min} & \multicolumn{1}{c}{Max} \\
\hline
1 & 4 & \hfill 2.00 & \hfill 0.71 & \hfill 1 & \hfill 3 \\
2 & 17 & \hfill 3.94 & \hfill 1.39 & \hfill 2 & \hfill 7 \\
3 & 81 & \hfill 5.49 & \hfill 1.97 & \hfill 2 & \hfill 11 \\
4 & 577 & \hfill 8.80 & \hfill 3.38 & \hfill 2 & \hfill 18 \\
5 & 5\,184 & \hfill 13.34 & \hfill 5.49 & \hfill 4 & \hfill 36 \\
6 & 57\,753 & \hfill 20.76 & \hfill 9.21 & \hfill 4 & \hfill 70 \\
7 & 722\,765 & \hfill 32.17 & \hfill 15.29 & \hfill 4 & \hfill 144 \\
8 & 9\,787\,509 & \hfill 50.20 & \hfill 25.52 & \hfill 4 & \hfill 291 \\
9 & 139\,103\,032 & \hfill 78.49 & \hfill 42.51 & \hfill 4 & \hfill 584
\end{tabular}
\caption{Summary of admissible vertex counts for all triangulations
($n \leq 9$)}
\label{tab-stats}
\end{table}
The figures that we see are remarkably small. For $n=9$ tetrahedra,
although Theorem~\ref{t-ubound} places the theoretical bound at
$\simeq O(29^n)$, we have just $584$ vertex normal surfaces in the worst
case. The mean admissible vertex count for $n=9$ is much smaller again,
evaluated at just $78.49$. The full distribution of all admissible vertex
counts for $n=9$ is shown in the left-hand graph of Figure~\ref{fig-graphs}.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{finite-std-hist-max}
\qquad
\includegraphics[scale=0.5]{finite-std-stats}
\caption{Aggregate results for admissible vertex counts}
\label{fig-graphs}
\end{figure}
Indeed, our pathological triangulations $\mathcal{X}_1,\mathcal{X}_2$
are the worst cases for $n=4,8$ respectively, giving the maximum
observed values of $\sigma = 17^1+1=18$ and $\sigma = 17^2+2=291$.
More generally, the pathological triangulations
of Theorem~\ref{t-worst-cases} give the maximum cases in our census
wherever they are defined (i.e., $n \neq 1,2,3,5$).
This leads us to the following general conjecture:
\begin{conjecture} \label{cj-worst}
For every positive $n \neq 1,2,3,5$, equation~(\ref{eqn-worst-cases})
gives a tight upper bound on the admissible vertex count $\sigma$.
As a consequence, we have $\sigma \in O(17^{n/4})$.
\end{conjecture}
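As a quick sanity check (our own sketch, not part of the paper's tooling), the conjectured worst-case formula can be compared against the census maxima of Table~\ref{tab-stats}. Here we read equation~(\ref{eqn-worst-cases}) for $4 \mid n$ as $\sigma = 17^{n/4} + n/4$, which is how the $n=4$ and $n=8$ instances above instantiate it; the function name is hypothetical.

```python
# Hypothetical reading of equation (eqn-worst-cases): for n divisible by 4,
# the pathological triangulations give sigma = 17^(n/4) + n/4.
def worst_case_sigma(n):
    assert n % 4 == 0
    return 17 ** (n // 4) + n // 4

# Maxima observed in the census (Table tab-stats) for n = 4 and n = 8.
census_max = {4: 18, 8: 291}
for n, observed in census_max.items():
    assert worst_case_sigma(n) == observed

print(worst_case_sigma(12))  # next case beyond the census: 17^3 + 3 = 4916
```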
The growth rate of $\sigma$ for $n=1,\ldots,9$ is illustrated in the
right-hand graph of
Figure~\ref{fig-graphs} (note that the vertical axis is plotted on a log
scale). The growth rate of the maximum $\sigma$ is roughly
$17^{n/4} \simeq 2.03^n$ as suggested above; the growth rate of the average
$\overline{\scount}$ is in the range $1.5^n$ to $1.6^n$. This is just
below the Fibonacci growth rate of $\phi^n \simeq 1.62^n$.
Indeed, if we let $\smean{n}$ denote the mean admissible vertex count
amongst all triangulations of size $n$, we find that
$\smean{n} < \smean{n-1} + \smean{n-2}$ throughout our census.
This leads us to our next general conjecture:
\begin{conjecture} \label{cj-mean}
For every $n \geq 3$, the mean admissible vertex count $\smean{n}$
satisfies the relation $\smean{n} < \smean{n-1} + \smean{n-2}$.
As a consequence, $\smean{n}$ is bounded above by $O(\phi^n)$
where $\phi = (1+\sqrt{5})/2$.
\end{conjecture}
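Conjecture~\ref{cj-mean} can likewise be checked directly against Table~\ref{tab-stats}. The following sketch (our own, with the table's means hard-coded) verifies the Fibonacci-type relation for $n=3,\ldots,9$, and also confirms that the average growth rate sits in the $1.5^n$--$1.6^n$ range quoted above.

```python
# Mean admissible vertex counts from Table (tab-stats), n = 1..9.
means = [2.00, 3.94, 5.49, 8.80, 13.34, 20.76, 32.17, 50.20, 78.49]

def fib_relation_holds(means):
    # mean(n) < mean(n-1) + mean(n-2) for every n >= 3
    return all(means[i] < means[i-1] + means[i-2] for i in range(2, len(means)))

assert fib_relation_holds(means)

# Geometric-mean growth ratio over n = 1..9: should lie in (1.5, 1.6).
growth = (means[-1] / means[0]) ** (1 / 8)
print(round(growth, 3))
```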
In particular, our census analysis gives us the following computational result:
\begin{theorem}
Conjectures~\ref{cj-worst} and \ref{cj-mean} are true for $n \leq 9$.
\end{theorem}
\section{Conclusions}
We have pushed the theoretical bounds on the admissible
vertex count $\sigma$ from both directions, and we have shown through an
exhaustive study of $\sim 150$ million triangulations that
$\sigma$ is surprisingly small in practice. We close with a brief
discussion of the implications of this study.
Most importantly, it suggests that topological algorithms that employ
normal surfaces might not be as infeasible as theory suggests.
Hints of this have already been seen with the quadrilateral-to-standard
conversion algorithm for normal surfaces \cite{burton09-convert}, which
(against theoretical expectations) appears to have a running time
polynomial in its output size.
In many fields, a census for size $n \leq 9$ might not seem large
enough for drawing conclusions and conjectures. However, there is
evidence elsewhere to suggest that 3-manifold triangulations are
flexible enough for important patterns to establish themselves for
very low $n$. For example, the papers \cite{burton07-nor7,matveev98-or6}
discuss several combinatorial patterns for $n \leq 6$; these patterns
have later been found to generalise well for larger $n$
\cite{burton07-nor10,martelli01-or9}, and some are now
proven in general \cite{jaco09-minimal-lens,jaco09-coverings}.
Finally, it is clear from this practical study that the theoretical
bounds on $\sigma$ still have much room for improvement. One
possible direction is to incorporate the quadrilateral constraints
directly into McMullen's theorem. This is difficult because the
quadrilateral constraints break convexity, but the outcome may be
significantly closer to the $O(17^{n/4})$ that we see in practice.
\section*{Acknowledgements}
The author is grateful to both the University of Victoria (Canada)
and the Victorian Partnership for Advanced Computing (Australia)
for the use of their excellent computing resources,
and to the Australian Research Council for their support
under the Discovery Projects funding scheme (project DP1094516).
\small
\bibliographystyle{amsplain}
\section{Introduction and Main Result}
Let $\mathcal{A}$ denote the class of all normalized analytic functions $f$ in the open unit disk
${\mathbb D}:=\{z\in\mathbb{C}:\,|z|<1\}$, i.e. $f$ has the Taylor series expansion
\begin{equation}\label{sec1-eqn1}
f(z)=z+\sum_{n=2}^\infty a_n z^n.
\end{equation}
The Taylor polynomial $s_n(z)=s_n(f)(z)$ of $f$ in $\mathcal{A}$, defined by,
$$s_n(z)=z+\sum_{k=2}^n a_k z^k
$$
is called the $n$-th {\em section/partial sum} of $f$.
Denote by $\mathcal{S}$ the class of {\em univalent} functions in $\mathcal{A}$.
A function $f\in\mathcal{A}$ is said to be {\em locally univalent}
at a point $z_0\in D\subset \mathbb{C}$ if it is univalent in some neighborhood of $z_0$;
equivalently, $f'(z_0)\neq 0$. A function $f\in \mathcal{A}$ is called {\em convex} if $f(\mathbb{D})$
is a convex domain. The set of all convex functions is denoted by $\mathcal{C}$. The functions $f\in\mathcal{C}$
are characterized by the well-known fact
$${\rm Re}\left(1+\frac{zf''(z)}{f'(z)}\right)>0,\quad |z|<1.
$$
In this article, we mainly focus on a class, denoted by $\mathcal{L}$, of all
locally univalent {\em odd} functions $f$ satisfying
\begin{equation}\label{sec1-eqn2}
{\operatorname{Re}\,}\left(1+\frac{zf''(z)}{f'(z)}\right)>-\frac{1}{2}, \quad z\in {\mathbb D}.
\end{equation}
Clearly, a function $f\in\mathcal{L}$ will have the Taylor series expansion
$f(z)=z+\sum_{n=2}^{\infty}{a_{2n-1} z^{2n-1}}$.
The function $f_0(z)=z/\sqrt{1-z^2}$ plays the role of an extremal function
for $\mathcal{L}$; see for instance \cite[p.~68, Theorem~2.6i]{MM00}.
This article is devoted to finding the largest disk $|z|<r$ in which every section
$s_{2n-1}(z)=z+\sum_{k=2}^na_{2k-1}z^{2k-1}$, of $f\in \mathcal{L}$, is convex;
that is, $s_{2n-1}$ satisfies
$${\rm Re}\left(1+\frac{zs_{2n-1}''(z)}{s_{2n-1}'(z)}\right)>0.
$$
Our main objective in this article is to prove
\medskip
\noindent
{\bf Main Theorem.} {\em Every section of a function in $\mathcal{L}$ is convex in the disk $|z|< \sqrt{2}/3$.
The radius $\sqrt{2}/3$ cannot be replaced by a greater one.}
\medskip
\noindent
This observation is also explained geometrically in Figure~\ref{fig1}
by considering the third partial sum, $s_{3,0}$, of the extremal function
$f_0$. We next discuss some motivational background of our problem.
\begin{figure}[H]
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=6.7cm]{RadiusSqrt2by3.eps}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\hspace*{-0.3cm}\includegraphics[height=5.3cm,width=6.7cm]{Radius2by3.eps}
\end{minipage}
\caption{The first figure shows convexity of the image domain $s_{3,0}(z)$
for $|z|<\sqrt{2}/3$, and the second figure
shows non-convexity of the image domain $s_{3,0}(z)$ for $|z|<2/3=:r_0~(r_0>\sqrt{2}/3)$.}\label{fig1}
\end{figure}
Considering odd univalent functions and studying classical problems
of univalent function theory such as (successive) coefficient bounds,
inverse functions, etc. are quite interesting and found throughout the
literature; see for instance \cite{Ke86,LZ84,Mil81,Ye05}.
In fact, an application of the Cauchy-Schwarz inequality shows that
Robertson's conjecture of 1936, namely that
$1+|c_3|^2+|c_5|^2+\cdots+|c_{2n-1}|^2\le n$ for
$n\ge 2$ and each odd function $f(z)=z+c_3z^3+c_5z^5+\cdots$ in $\mathcal{S}$,
implies the well-known Bieberbach conjecture \cite{Rob36}; see also \cite{Dur77}.
To the best of our knowledge, the study of radius properties for sections of odd univalent
functions is new; we could not find such results in the literature.
Note that a subclass denoted by $\mathcal{F}$, of the class, $\mathcal{K}$,
of close-to-convex functions, consisting of all locally univalent
functions $f\in\mathcal{A}$ satisfying
the condition (\ref{sec1-eqn2}) was considered in \cite{PSY14}. In this paper,
we consider functions from $\mathcal{F}$ that have odd Taylor coefficients.
Note that the following inclusion relations hold:
$$\mathcal{L}\subsetneq \mathcal{F}\subsetneq \mathcal{K}\subsetneq \mathcal{S}.
$$
The fact that functions in $\mathcal{F}$ are close-to-convex may be obtained
as a consequence of the result due to Kaplan (see \cite[p. 48, Theorem 2.18]{Dur83}).
In \cite{PSY14}, Ponnusamy et al.\ showed that every section of a function in the class $\mathcal{F}$ is convex in the disk $|z|<1/6$ and that the radius $1/6$ is best possible.
They conjectured that every section of
functions in the family $\mathcal{F}$ is univalent and close-to-convex in the disk $|z|<1/3$.
This conjecture has been recently settled by Bharanedhar and Ponnusamy in \cite[Theorem~1]{BP}.
The problem of finding the radius of univalence of sections of $f$ in $\mathcal{S}$
was first initiated by Szeg\"o in 1928.
According to the Szeg\"o theorem
\cite[Section~8.2, p. 243-246]{Dur83}, every section $s_n(z)$ of a function
$f\in \mathcal{S}$ is univalent in the disk $|z|<1/4$; see \cite{Sze28} for the original paper.
The radius $1/4$ is best
possible and can be verified from the second partial sum of the Koebe function
$k(z)=z/(1-z)^2$.
Determining the exact (largest) radius of univalence
$r_n$ of $s_n(z)$ ($f\in\mathcal{S}$) remains an open problem.
However, many other related problems
on sections have been solved for various geometric subclasses of $\mathcal{S}$,
eg. the classes $\mathcal{S}^*$, $\mathcal{C}$ and $\mathcal{K}$ of starlike, convex and
close-to-convex functions, respectively (see Duren \cite[\S8.2, p.241--246]{Dur83},
\cite{Goo83,Rob41,Rus72,Sma70} and the survey articles \cite{Ili79,Rav12}).
In \cite{Mac62}, MacGregor considered the class
$$\mathcal{R}=\{f\in \mathcal{A}:{\operatorname{Re}\,} f'(z)>0, z\in {\mathbb D}\}
$$
and proved that the partial sums $s_n(z)$ of $f\in \mathcal{R}$ are univalent
in $|z|<1/2$, where the radius $1/2$ is best possible. On the other hand, in \cite{Sin70},
Ram Singh obtained the best radius, $r=1/4$, of convexity for sections of functions in the class
$\mathcal{R}$. The reader can refer to \cite{PP74} for related information. Radius of
close-to-convexity of sections of close-to-convex functions is obtained in
\cite{Miki56}.
By the argument principle, it is clear that the $n$-th section $s_n(z)$ of an arbitrary
function in $\mathcal{S}$ is univalent in each fixed compact subdisk
$\overline{{\mathbb D}_r}:=\{z\in {\mathbb D}:|z|\le r\}$ $(r<1)$ of ${\mathbb D}$, provided that $n$ is sufficiently
large. In this way one can get univalent polynomials in $\mathcal{S}$ by setting
$p_n(z)= \frac{1}{r}s_n(rz)$. Consequently, the set of all univalent polynomials
is dense, in the topology of locally uniform convergence, in
$\mathcal{S}$. The radius of starlikeness of the partial sums $s_n(z)$ of $f\in\mathcal{S}^*$
was obtained by Robertson in \cite{Rob41}; (see also \cite[Theorem~2]{Sil88}) in the following form:
\medskip
\noindent
{\bf Theorem~A.} \cite{Rob41}
{\em If $f\in \mathcal{S}$ is either starlike, convex, typically-real, or convex in the direction of imaginary axis, then there
is an $N$ such that, for $n\ge N$, the partial sum $s_n(z)$ has the same property
in ${\mathbb D}_r:=\{z\in {\mathbb D}:|z|<r\}$, where $r\ge 1-3(\log n)/n$.}
\medskip
\noindent
However, Ruscheweyh in \cite{Rus88} proved a stronger result by showing that the partial
sums $s_n(z)$ of $f$ are indeed starlike in ${\mathbb D}_{1/4}$ for functions $f$ belonging not only
to $\mathcal{S}$ but also to the closed convex hull of $\mathcal{S}$.
Robertson \cite{Rob41} further showed that sections of the Koebe function $k(z)$ are univalent
in the disk $|z|<1-3 n^{-1} \log n$ for $n\ge 5$,
and that the constant $3$ cannot be replaced by a smaller constant. However, Bshouty
and Hengartner \cite{BH91} pointed out that the Koebe function is not extremal for
the radius of univalency of the partial sums of $f\in \mathcal{S}$. A well-known
theorem by Ruscheweyh and Sheil-Small \cite{RS73} on convolution allows us to conclude immediately
that if $f$ belongs to $\mathcal{C},\mathcal{S}^*,$ or $\mathcal{K}$, then its $n$-th section is
respectively convex, starlike, or close-to-convex in the disk
$|z|<1-3n^{-1} \log n$, for $n\ge 5$.
Silverman in \cite{Sil88} proved that the radius of starlikeness for sections of
functions in the convex family $\mathcal{C}$ is $(1/2n)^{1/n}$ for all $n$.
We suggest readers refer to \cite{PSY14,Rus72,Sma70,Sze28} and recent articles \cite{OP11,OP13,OP14,OPW13} for further interest on this topic. It is worth recalling that
radius properties of harmonic sections have recently been studied in \cite{KPV14,LS13-1,LS13-2,LS15,PS15}.
\section{Preparatory results}
In this section we derive some useful results to prove our main theorem.
\begin{lemma}\label{sec2-lem1}
If $f(z)=z+\sum_{n=2}^{\infty}{a_{2n-1} z^{2n-1}}\in \mathcal{L}$, then the following estimates are obtained:
\begin{itemize}
\item[\bf (a)] $|a_{2n-1}|\leq \frac{(2n-2)!}{2^{2n-2}(n-1)!^2}$ for $n\ge2$. The equality holds for
$$f_0(z)=\frac{z}{\sqrt{1-z^2}}
$$
or its rotation.
\medskip
\item[\bf (b)] $\left|\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\right|\leq \frac{3r^2}{1-r^2}$ for $|z|=r<1$. The inequality is sharp.
\medskip
\item[\bf (c)] $\frac{1}{(1+r^2)^{3/2}}\leq |f^{\prime}(z)|\leq \frac{1}{(1-r^2)^{3/2}}$ for $|z|=r<1$. The inequality is sharp.
\medskip
\item[\bf (d)] If $f(z)=s_{2n-1}(z)+\sigma_{2n-1}(z)$, with $\sigma_{2n-1}(z)=\sum_{k=n+1}^{\infty}{a_{2k-1} z^{2k-1}}$,
then for $|z|=r<1$ we have
$$|\sigma_{2n-1}^{\prime}(z)|\leq A(n,r)
~~\mbox{ and }~~
|z\sigma_{2n-1}^{\prime\prime}(z)|\leq B(n,r),
$$
where
$$A(n,r)=\sum_{k=n+1}^{\infty}\frac{(2k-1)!}{2^{2k-2}(k-1)!^2}r^{2k-2}
~~\mbox{ and }~~
B(n,r)=\sum_{k=n+1}^{\infty}\frac{(2k-2)(2k-1)!}{2^{2k-2}(k-1)!^2}r^{2k-2}.
$$
The ratio test guarantees that both series converge.
\end{itemize}
\end{lemma}
\begin{proof}
{\bf (a)} Set
\begin{equation}\label{Neweqn}
p(z)=1+\displaystyle\frac{2}{3}\left(\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\right).
\end{equation}
Clearly, $p(z)=1+\sum_{n=1}^{\infty}{p_n z^n}$ is analytic in ${\mathbb D}$ and ${\operatorname{Re}\,} p(z)>0$ there.
So, by Carath{\'e}odory Lemma, we obtain that $|p_n|\leq 2$ for all $n \ge 1$.
Putting the series expansions for $f'(z),~f''(z)$ and $p(z)$ in (\ref{Neweqn}) we get
$$\sum_{n=2}^{\infty}{(2n-1)(2n-2)a_{2n-1}z^{2n-1}}
=\frac{3}{2}\sum_{n=2}^{\infty}\left(\sum_{k=1}^{n-1} p_{2k-1}(2n-2k-1)a_{2n-2k-1}\right)z^{2n-2}
$$
$$+\frac{3}{2}\sum_{n=2}^{\infty}\left(\sum_{k=1}^{n-1} p_{2k}(2n-2k-1)a_{2n-2k-1}\right)z^{2n-1}.
$$
Equating the coefficients of $z^{2n-1}$ and $z^{2n-2}$ on both sides, we obtain
$$\sum_{k=1}^{n-1} p_{2k-1}(2n-2k-1)a_{2n-2k-1}=0
$$
and
\begin{equation}\label{sec2-eqn0}
(2n-1)(2n-2)a_{2n-1}=\frac{3}{2}\sum_{k=1}^{n-1} p_{2k}(2n-2k-1)a_{2n-2k-1},
\quad \mbox{ for all } n\ge 2.
\end{equation}
Hence,
\begin{equation}\label{sec2-eq1}
|a_{2n-1}|\leq\frac{3}{(2n-1)(2n-2)}\sum_{k=1}^{n-1}(2k-1)|a_{2k-1}|.
\end{equation}
For $n=2$, we can easily see that $|a_3|\leq 1/2$, and for $n=3$, we have
$$|a_5|\leq \frac{3}{20}(1+3|a_3|)\leq \frac{3}{8}.
$$
We now complete the proof by induction. Assume that $|a_{2k-1}|\leq \frac{(2k-2)!}{2^{2k-2}(k-1)!^2}$ for $k=2, 3, \ldots , n-1$. Then we deduce
from (\ref{sec2-eq1}) that
$$|a_{2n-1}|\leq\frac{3}{(2n-1)(2n-2)}\sum_{k=1}^{n-1} {\frac{(2k-1)!}{2^{2k-2}(k-1)!^2}}.
$$
To conclude the induction step, that is, to obtain
$$|a_{2n-1}|\leq \frac{(2n-2)!}{2^{2n-2}(n-1)!^2},
$$
it suffices to show that
$$\frac{3}{(2n-1)(2n-2)}\sum_{k=1}^{n-1} {\frac{(2k-1)!}{2^{2k-2}(k-1)!^2}}=\frac{(2n-2)!}{2^{2n-2}(n-1)!^2}
$$
or,
$$\sum_{k=1}^{n-1} {\frac{3(2k-1)!}{2^{2k-2}(k-1)!^2}}=\frac{(2n-2)(2n-1)!}{2^{2n-2}(n-1)!^2}.
$$
Again, we prove this by induction on $n$. It is easily seen that the identity holds for $n=2$.
Assuming that it holds for some $n\ge 2$, we have to prove that
$$\sum_{k=1}^{n} {\frac{3(2k-1)!}{2^{2k-2}(k-1)!^2}}=\frac{(2n)(2n+1)!}{2^{2n}(n)!^2},
$$
which is easy to see, since
$$\sum_{k=1}^{n} {\frac{3(2k-1)!}{2^{2k-2}(k-1)!^2}}=\frac{(2n-2)(2n-1)!}{2^{2n-2}(n-1)!^2}+\frac{3(2n-1)!}{2^{2n-2}(n-1)!^2}=\frac{(2n)(2n+1)!}{2^{2n}(n)!^2}.
$$
Hence, the proof is complete. For equality, it can easily be seen that
$$ f_0(z)=\frac{z}{\sqrt{1-z^2}}=z+\sum_{n=2}^{\infty} \frac{(2n-2)!}{2^{2n-2}(n-1)!^2}
z^{2n-1}
$$
belongs to $\mathcal{L}$.
The image of the unit disk ${\mathbb D}$ under $f_0$ is
shown in Figure~\ref{sec2-fig1} which indicates that $f_0({\mathbb D})$ is not convex.
\begin{figure}[H]
\includegraphics[height=10cm,width=7cm]{Figure_1.eps}
\caption{The image domain $f_0({\mathbb D})$, where $f_0(z)=\frac{z}{\sqrt{1-z^2}}$.}\label{sec2-fig1}
\end{figure}
{\bf (b)} We see from the definition of $\mathcal{L}$ that
$$1+\frac{zf^{\prime\prime}(z)}{f^\prime (z)}\prec \frac{1+2z^2}{1-z^2},\quad
\mbox{i.e.}, \frac{zf^{\prime\prime}(z)}{f^\prime (z)}\prec \frac{3z^2}{1-z^2}=:h(z),
$$
where $\prec$ denotes the usual subordination. The proof of (b) now follows easily.
{\bf (c)} Since
$$ \frac{zf^{\prime\prime}(z)}{f^\prime (z)}\prec h(z),
$$
it follows by the well-known subordination result due to Suffridge \cite{Suf70} that
$$f^{\prime}(z) \prec \exp\left(\int_0 ^ z {\frac{h(t)}{t}\mbox{d}t}\right)
=\exp\left(3\int_0^z {\frac{t}{1-t^2}\mbox{d}t}\right)=\frac{1}{(1-z^2)^{3/2}}.
$$
Hence, the proof of (c) follows.
{\bf (d)} By $(a)$, we see that
$$|\sigma_{2n-1}^{\prime}(z)|\leq \sum_{k=n+1}^{\infty}(2k-1)|a_{2k-1}|r^{2k-2}
\leq A(n,r)
$$
and
$$|z\sigma_{2n-1}^{\prime\prime}(z)|\leq \sum_{k=n+1}^{\infty}(2k-1)(2k-2)|a_{2k-1}|r^{2k-2}\leq B(n,r).
$$
The proof of our lemma is complete.
\end{proof}
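As a numerical cross-check of part (a) (our own sketch, not part of the proof), the Taylor coefficients of the extremal function $f_0(z)=z/\sqrt{1-z^2}$, which equal $\binom{2n-2}{n-1}/4^{n-1}$ by the binomial series, can be compared with the claimed bound $\frac{(2n-2)!}{2^{2n-2}(n-1)!^2}$:

```python
from math import comb, factorial

def bound_coeff(n):
    # claimed bound: |a_{2n-1}| <= (2n-2)! / (2^(2n-2) ((n-1)!)^2)
    return factorial(2*n - 2) / (4**(n - 1) * factorial(n - 1)**2)

def extremal_coeff(n):
    # Taylor coefficient of z/(1-z^2)^(1/2): a_{2n-1} = C(2n-2, n-1) / 4^(n-1)
    return comb(2*n - 2, n - 1) / 4**(n - 1)

for n in range(1, 30):
    assert abs(bound_coeff(n) - extremal_coeff(n)) < 1e-12

print(bound_coeff(2), bound_coeff(3))  # 0.5 and 0.375, i.e. |a_3|<=1/2, |a_5|<=3/8
```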
\section{Proof of the Main Theorem}
For an arbitrary $f(z)=z+\sum_{n=2}^{\infty}{a_{2n-1} z^{2n-1}}\in \mathcal{L}$, we first consider its third section $s_3(z)=z+a_3z^3$ of $f$. Simple computation shows
$$1+\frac{zs_3 ^{\prime\prime}(z)}{s_3^{\prime}(z)}=1+\frac{6a_3z^2}{1+3a_3z^2}.
$$
By using Lemma~\ref{sec2-lem1}(a), we have $|a_3|\le 1/2$ and hence
$${\operatorname{Re}\,}\left(1+\frac{zs_3 ^{\prime\prime}(z)}{s_3^{\prime}(z)}\right)\ge 1-\frac{6|a_3||z|^2}{1-3|a_3||z|^2} \ge 1-\frac{3|z|^2}{1-\frac{3}{2}|z|^2}
$$
which is positive for $|z|<\sqrt{2}/3$. Thus, $s_3(z)$ is convex in the disk $|z|<\sqrt{2}/3$. To show that the constant $\sqrt{2}/3$ is best possible, we consider the function $f_0(z)$ defined by
$$f_0(z)=\frac{z}{\sqrt{1-z^2}}.
$$
We denote by $s_{3,0}(z)$, the third partial sum $s_3(f_0)(z)$ of $f_0(z)$ so that $s_{3,0}(z)=z+(1/2)z^3$ and hence, we find
$$1+\frac{zs_{3,0} ^{\prime\prime}(z)}{s_{3,0}^{\prime}(z)}=\frac{2+9z^2}{2+3z^2}.
$$
This shows that
$${\operatorname{Re}\,}\left(1+\frac{zs_{3,0} ^{\prime\prime}(z)}{s_{3,0}^{\prime}(z)}\right)=0
$$
when $z^2=(-2/9) \mbox{ or } (-2/3)$ \quad i.e., when $|z|^2=(2/9) \mbox{ or } (2/3)$.
Hence, equality occurs on the circle $|z|=\sqrt{2}/3$, so this radius cannot be improved.
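As a purely numerical sanity check of this sharpness computation (our own sketch, not part of the proof), one can sample $\operatorname{Re}\left(1+zs_{3,0}''(z)/s_{3,0}'(z)\right)=\operatorname{Re}\left((2+9z^2)/(2+3z^2)\right)$ on the circle $|z|=\sqrt{2}/3$ and observe that its minimum is $0$, attained at $z=\pm i\sqrt{2}/3$ (i.e., $z^2=-2/9$), while its maximum $3/2$ occurs at $z=\pm\sqrt{2}/3$:

```python
import cmath
import math

def curvature_re(z):
    # Re(1 + z s''/s') for s(z) = z + z^3/2, which equals Re((2+9z^2)/(2+3z^2))
    return ((2 + 9*z*z) / (2 + 3*z*z)).real

r = math.sqrt(2) / 3
values = [curvature_re(r * cmath.exp(2j * math.pi * k / 10000))
          for k in range(10000)]
print(min(values))  # approximately 0, attained near z = +/- i*sqrt(2)/3
```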
Next, let us consider the case $n=3$.
Our aim in this case is to show that
$${\operatorname{Re}\,}\left(1+\frac{zs_5^{\prime\prime}(z)}{s_5^{\prime}(z)}\right)={\operatorname{Re}\,}\left(\frac{1+9a_3 z^2+25 a_5 z^4}{1+3a_3 z^2+5a_5 z^4}\right)>0
$$
for $|z|<\sqrt{2}/3$. Since the real part ${\operatorname{Re}\,}[(1+9a_3 z^2+25 a_5 z^4)/(1+3a_3 z^2+5a_5 z^4)]$ is harmonic in $|z|\leq \sqrt{2}/3$, it suffices to check that
$${\operatorname{Re}\,}\left(\frac{1+9a_3 z^2+25 a_5 z^4}{1+3a_3 z^2+5a_5 z^4}\right)>0
$$
for $|z|=\sqrt{2}/3$. Also we see that
$${\operatorname{Re}\,}\left(\frac{1+9a_3 z^2+25 a_5 z^4}{1+3a_3 z^2+5a_5 z^4}\right)=3-{\operatorname{Re}\,}\left(\frac{2-10 a_5 z^4}{1+3a_3 z^2+5a_5 z^4}\right)\ge 3-\left|\frac{2-10 a_5 z^4}{1+3a_3 z^2+5a_5 z^4}\right|
$$
and, so by considering a suitable rotation of $f(z)$, the proof reduces to $z=\sqrt{2}/3$; this means that it is enough to prove
$$\frac{3}{2}> \left|\frac{81-20 a_5}{81+54a_3 +20a_5}\right|.
$$
From (\ref{sec2-eqn0}), we have
$$ a_3=\frac{p_2}{4} \quad \mbox{and}\quad a_5=\left(\frac{3}{40}\right)\left(\frac{3}{4} p_2^2+p_4\right).
$$
Since $|p_2|\le 2$ and $|p_4|\le 2$, it is convenient to rewrite the last two relations as
$$a_3=\frac{\alpha}{2} \quad \mbox{and} \quad a_5=\frac{3}{40}(3\alpha^2+2\beta)
$$
for some $|\alpha|\le 1$ and $|\beta|\le 1$.
Substituting the values for $a_3$ and $a_5$, and applying the maximum principle in
the last inequality, it suffices to show the inequality
$$\frac{3}{2}\left|81+27\alpha+\frac{9\alpha^2}{2}+3\beta\right|> \left|81-\frac{9\alpha^2}{2}-3\beta\right|
$$
for $|\alpha|=1=|\beta|$. Finally, by the triangle inequality, the last inequality follows if we can show that
$$9\left|9+3\alpha+\frac{\alpha^2}{2}\right|-6\left|9-\frac{\alpha^2}{2}\right|>5
$$
which is easily seen to be equivalent to
$$9\left|9\overline{\alpha}+3+\frac{\alpha}{2}\right|-6\left|9\overline{\alpha}-\frac{\alpha}{2}\right|>5
$$
as $|\alpha|=1$. Write ${\operatorname{Re}\,} (\alpha)=x$. It remains to show that
$$T(x):=9\sqrt{18x^2+57x+\frac{325}{4}}-6\sqrt{\frac{361}{4}-18x^2}> 5
$$
for $-1\leq x\leq1$.
\begin{figure}[H]
\includegraphics{Figure_greaterthan_5.eps}
\caption{Graph of $T(x)$.}
\end{figure}
It suffices to show
$$9\sqrt{18x^2+57x+\frac{325}{4}}> 5+6\sqrt{\frac{361}{4}-18x^2}.
$$
Squaring both sides we have
$$ 2106 x^2+4617x+\frac{13229}{4} > 60\left(\sqrt{\frac{361}{4}-18x^2}\right).
$$
Again by squaring both sides we have
$$ \left(2106 x^2+4617x+\frac{13229}{4}\right)^2 > 3600\left(\frac{361}{4}-18x^2\right).
$$
After computing, it remains to show that $\phi(x)>0$, where
$$\phi(x)=ax^4+bx^3+cx^2+dx+e
$$
and the coefficients are
$$a=4435236, b=19446804, c=35311626, d=30539146.5, e=10613002.5625.
$$
Here we see that $\phi^{iv}(x)=24a>0$. Thus the function $\phi^{\prime\prime\prime}(x)$
is increasing in $-1\le x\le 1$ and hence $\phi^{\prime\prime\prime}(x)\ge \phi^{\prime\prime\prime}(-1)=10235160>0$. This implies $\phi^{\prime\prime}(x)$ is increasing. Hence $\phi^{\prime\prime}(x)\ge \phi^{\prime\prime}(-1)=7165260>0$.
Consequently, $\phi^{\prime}(x)$ is increasing and we have $\phi^{\prime}(x)\ge \phi^{\prime}(-1)=515362.5>0$. Finally we get, $\phi(x)$ is increasing and hence we have $\phi(x)>\phi(-1)=373914.0625>0$.
This completes the proof for $n=3$.
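The monotonicity chain above rests on a few large evaluations at $x=-1$; they can be cross-checked numerically (our own sketch, not part of the proof):

```python
# Coefficients of phi(x) as given above.
a, b, c, d, e = 4435236, 19446804, 35311626, 30539146.5, 10613002.5625

phi   = lambda x: a*x**4 + b*x**3 + c*x**2 + d*x + e
dphi  = lambda x: 4*a*x**3 + 3*b*x**2 + 2*c*x + d
d2phi = lambda x: 12*a*x**2 + 6*b*x + 2*c
d3phi = lambda x: 24*a*x + 6*b

# The values at x = -1 used in the monotonicity argument:
assert d3phi(-1) == 10235160
assert d2phi(-1) == 7165260
assert dphi(-1)  == 515362.5
assert phi(-1)   == 373914.0625

# phi is increasing on [-1, 1], so phi > 0 everywhere on the interval.
print(min(phi(-1 + 2*i/1000) for i in range(1001)))
```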
We next consider the general case $n\ge 4$. It suffices to show that
$${\operatorname{Re}\,}\left(1+\frac{zs_{2n-1}^{\prime\prime}}{s_{2n-1}^\prime}\right)>0 \quad \mbox{for} \quad |z|=r
$$
with $r=\sqrt{2}/3$ for all $n\ge 4$. From the maximum modulus principle, we shall then conclude that
$${\operatorname{Re}\,}\left(1+\frac{zs_{2n-1}^{\prime\prime}}{s_{2n-1}^\prime}\right)>0
$$
for $|z|<\sqrt{2}/3$. In other words, it suffices to verify the boundary
inequality at $r=\sqrt{2}/3$ for all $n\ge 4$.
By the same setting of $f(z)$ as in Lemma~\ref{sec2-lem1}(d),
it follows easily that
$$1+\frac{zs_{2n-1}^{\prime\prime}}{s_{2n-1}^\prime}=1+\frac{z(f^{\prime\prime}(z)-\sigma_{2n-1}^{\prime\prime}(z))}{f^{\prime}(z)-\sigma_{2n-1}^{\prime}(z)}=1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}+\frac{\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\sigma_{2n-1}^{\prime}(z)-z\sigma_{2n-1}^{\prime\prime}(z)}{f^{\prime}(z)-\sigma_{2n-1}^{\prime}(z)}
$$
or,
$${\operatorname{Re}\,}\left(1+\frac{zs_{2n-1}^{\prime\prime}}{s_{2n-1}^\prime}\right)\ge 1-\left|\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\right|-\frac{\left|\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\right||\sigma_{2n-1}^{\prime}(z)|+|z\sigma_{2n-1}^{\prime\prime}(z)|}{|f^{\prime}(z)|-|\sigma_{2n-1}^{\prime}(z)|}.
$$
Then by using Lemma~\ref{sec2-lem1}, we obtain
$${\operatorname{Re}\,}\left(1+\frac{zs_{2n-1}^{\prime\prime}}{s_{2n-1}^\prime}\right)\ge 1-\frac{3r^2}{1-r^2}-\frac{\left(\frac{3r^2}{1-r^2}\right)A(n,r)+B(n,r)}{\frac{1}{(1+r^2)^{(3/2)}}-A(n,r)}.
$$
Thus, we conclude that
$${\operatorname{Re}\,}\left(1+\frac{zs_{2n-1}^{\prime\prime}}{s_{2n-1}^\prime}\right)>0
$$
provided
$$\frac{1-4r^2}{1-r^2}-\frac{(1+r^2)^{3/2}}{1-r^2}\left(\frac{3r^2 A(n,r)+(1-r^2)B(n,r)}{1-(1+r^2)^{3/2}A(n,r)}\right)>0,
$$
or, equivalently
$$(1+r^2)^{3/2}\left(\frac{3r^2 A(n,r)+(1-r^2)B(n,r)}{1-(1+r^2)^{3/2}A(n,r)}\right)<1-4r^2.
$$
We show that the above relation holds for all $n\ge 4$ with $r=\sqrt{2}/3$.
The choice $r=\sqrt{2}/3$ brings the last inequality to the form
$$\left(\frac{11}{9}\right)^{3/2}\left(\frac{\frac{2}{3} A(n,\frac{\sqrt{2}}{3})+\frac{7}{9}B(n,
\frac{\sqrt{2}}{3})}{1-(\frac{11}{9})^{3/2}A(n,\frac{\sqrt{2}}{3})}\right)< \frac{1}{9}.
$$
Set
$$ C\left(n,\frac{\sqrt{2}}{3}\right):=1-\left(\frac{11}{9}\right)^{3/2}A\left(n,\frac{\sqrt{2}}{3}\right).
$$
We shall prove that $C\left(n,\frac{\sqrt{2}}{3}\right)>0$ for $n\ge 4$ i.e., $$A\left(n,\frac{\sqrt{2}}{3}\right)<\frac{27}{(11)^{3/2}}$$
and
$$A\left(n,\frac{\sqrt{2}}{3}\right)+B\left(n,\frac{\sqrt{2}}{3}\right)<\frac{27}{7\times (11)^{3/2}}\quad \mbox{ for } n\ge 4.
$$
If the last inequality is proved, then automatically the previous one follows.
Hence, it is enough to prove the last inequality. Now,
\begin{eqnarray*}
A(n,r)+B(n,r) &=& \sum_{k=n+1}^{\infty}\frac{(2k-1)(2k-1)!}{2^{2k-2}(k-1)!^2}(r^2)^{k-1}\\ &\le & \sum_{k=5}^{\infty}\frac{(2k-1)(2k-1)!}{2^{2k-2}(k-1)!^2}(r^2)^{k-1}\\
&= & \sum_{k=1}^{\infty}\frac{(2k-1)(2k-1)!}{2^{2k-2}(k-1)!^2}(r^2)^{k-1}
- \sum_{k=1}^{4}\frac{(2k-1)(2k-1)!}{2^{2k-2}(k-1)!^2}(r^2)^{k-1}\\
&=& \frac{1+2 r^2}{(1-r^2)^{5/2}}-\left(1+\frac{9}{2}r^2+\frac{75}{8}r^4+
\frac{245}{16}r^6 \right).
\end{eqnarray*}
Substituting the value $r=\sqrt{2}/3$, we obtain
$$A\left(n,\frac{\sqrt{2}}{3}\right)+B\left(n,\frac{\sqrt{2}}{3}\right) \le 0.076\cdots < 0.105\cdots = \frac{27}{7\times (11)^{3/2}}.
$$
This completes the proof of our main theorem.
\hfill{$\Box$}
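As a numerical cross-check of the final estimate (our own sketch, not part of the proof), the tail series bounding $A+B$ at $r=\sqrt{2}/3$ can be compared with the closed form and with the bound $27/(7\cdot 11^{3/2})$:

```python
import math

def tail(r2, kmax=60):
    # A(n,r)+B(n,r) <= sum_{k>=5} (2k-1)(2k-1)! / (2^(2k-2) ((k-1)!)^2) r^(2k-2)
    return sum((2*k - 1) * math.factorial(2*k - 1)
               / (4**(k - 1) * math.factorial(k - 1)**2) * r2**(k - 1)
               for k in range(5, kmax))

r2 = 2.0 / 9.0                         # r = sqrt(2)/3
closed = (1 + 2*r2) / (1 - r2)**2.5 \
         - (1 + (9/2)*r2 + (75/8)*r2**2 + (245/16)*r2**3)
bound = 27 / (7 * 11**1.5)

print(round(closed, 4), round(bound, 4))  # 0.0765 0.1057
assert abs(closed - tail(r2)) < 1e-9      # closed form matches the series
assert closed < bound                     # 0.076... < 0.105..., as claimed
```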
\vskip 1cm
\noindent
{\bf Acknowledgement.} The work of the first author is supported by University
Grants Commission, New Delhi (grant no. F.2-39/2011 (SA-I)).
\section{Introduction}
\IEEEPARstart{D}{ue} to the rapid proliferation of mobile phones, tablets, and other handheld devices, an increasing traffic demand is observed worldwide. This drastically increasing capacity demand has driven the evolution of cellular networks from macro base stations (BSs) deployment to small, pico, and even nano BSs. For instance, the 5G evolution for cellular networks dictates 1000 fold capacity improvement, which is expected to be fulfilled by an evolutionary heterogeneous network densification phase~\cite{1}. Deploying more BSs within the same geographical region shrinks the footprint of each BS, which increases the spatial spectral efficiency and offers more capacity. However, the foreseen capacity gains offered by network densification are achieved at the expense of increased handover (HO) rates. Such important negative impact of BS densification (i.e., HO rate) is usually overlooked~\cite{5G}. In addition to the HO signaling overhead, the HO procedure interrupts the data flow to the user due to link termination with the serving BS and link establishment with the target BS. Increasing the HO rate increases the frequency of such undesirable interruptions as well as the associated signaling overhead, which may diminish or can even nullify the foreseen network densification capacity gains. Consequently, any discussion about network densification is never complete without incorporating the corresponding HO cost.\\
The HO process by itself is a core element of cellular networks to support user mobility. Hence, HO management has always been a focal research point in the context of cellular networks (see \cite{trends} for GSM/CDMA and \cite{LTEsurvey2,HOsurvey} for LTE systems). Modeling and improving the handover performance has been extensively addressed in the cellular network literature. For instance, the cell dwell time is characterized in~\cite{celldwell} for the circular and hexagonal shaped cells. An analytical model, based on application-specific signal strength tuning mechanism is presented in~\cite{vertical-optimization} to help optimizing the vertical HOs. HO signaling overhead reduction algorithms are proposed in~\cite{zhangsignalling} for two tier networks and in~\cite{zhangCRAN} for cloud-RAN based heterogeneous networks. A HO management technique, based on self organizing maps is proposed in~\cite{kernel} to reduce unnecessary HOs for indoor users in two tier cellular networks. The authors in~\cite{HOdelay1} present a study to avoid unnecessary vertical HOs and reduce the overall packet delay for low speed users in two tier downlink cellular networks. HO delay for a single tier network is characterized in~\cite{HOdelayrelation}. However, none of the aforementioned studies tackle the interplay between HO cost and capacity gain as a function of the BS intensity.\\
Stochastic geometry, which is a widely accepted mathematical tool to model and analyze cellular networks, enables performance characterization in terms of the BS intensity as well as other physical layer parameters (see~\cite{6a} for a survey and~\cite{tutorial} for a tutorial). For cellular networks modeled via the Poisson point process (PPP),~\cite{jeff_rate} studies the throughput gains as a function of the BS intensity for static users. Such model is extended to the case of Poisson cluster process (PCP) in \cite{Ghrayeb}. The HO rate for PPP cellular networks in terms of the BS intensity is characterized in~\cite{Lin} for single-tier scenario and in~\cite{10a} for multi-tier scenario. The work in~\cite{10a} is extended to the case of PCP in~\cite{Hocluster}. The HO rate for the recently proposed Phantom cells is characterized in \cite{macro-assisted}. However, none of the aforementioned studies investigate the integrated effect of network densification (i.e., BS intensity) in terms of both the HO cost and the throughput gains. The HO negative impact on the average throughput is studied in~\cite{sadr2015handoff, zhangdelay, ge2015user}. However, none of~\cite{sadr2015handoff, zhangdelay, ge2015user} proposed a solution for the HO problem. The authors in~\cite{cu-split} proposed a control/data plane split architecture with macro BS anchoring to mitigate the HO effect on user throughput. However, the proposed solution in~\cite{cu-split} necessitates a massive network upgrade. The authors in~\cite{icc, globe} propose a simple HO skipping scheme that is compatible with the current cellular architecture to mitigate the HO cost in a single tier cellular network. Particularly,~\cite{icc, globe} advocate sacrificing the best BS association and skip some HOs along the user trajectory at high speeds to reduce the number of handovers. 
Such a skipping strategy has shown potential to improve the user throughput at high velocities despite sacrificing the always best connectivity strategy. The skipping scheme in~\cite{icc, globe} is extended to two-tier networks in~\cite{velocityaware}. However, the HO skipping schemes presented in~\cite{icc,globe,velocityaware} are topology agnostic, which may result in inefficient skipping decisions. Particularly,~\cite{icc,globe,velocityaware} advocate an alternate HO skipping approach in which the user skips associating to every other BS along its trajectory irrespective of the cell-size (coverage area) and/or the path of the trajectory through the cell as shown in Fig.~\ref{model}(scheme a). Consequently, there could be cases where the dwell time inside the cell of a skipped BS is larger than the dwell time inside a non-skipped BS. Articulated differently, the user may skip necessary HOs to BSs that cover large sections of the user trajectory. To this end, devising smarter HO skipping schemes remains an open problem for future cellular networks.\\
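To make the topology-aware idea concrete, the following toy sketch (our own illustration, not an algorithm from this paper) takes skipping decisions from estimated dwell times: a HO is skipped when the trajectory's chord through a cell is too short to amortize the HO delay. All names and numbers are hypothetical; units are meters and m/s ($27.8$ m/s $\approx 100$ km/h), and association is simple nearest-BS (Voronoi).

```python
import math

def dwell_aware_skips(bs_positions, trajectory, velocity, ho_delay, step=1.0):
    """Return indices of BSs worth skipping: cells whose estimated dwell
    time along the (known) trajectory is below the handover delay."""
    (x0, y0), (x1, y1) = trajectory
    n = int(math.hypot(x1 - x0, y1 - y0) / step)
    # nearest-BS index sampled along the trajectory
    assoc = [min(range(len(bs_positions)),
                 key=lambda j: (x0 + i/n*(x1-x0) - bs_positions[j][0])**2
                             + (y0 + i/n*(y1-y0) - bs_positions[j][1])**2)
             for i in range(n + 1)]
    # run-length encode: chord length through each visited cell -> dwell time
    skips, i = set(), 0
    while i <= n:
        j = i
        while j <= n and assoc[j] == assoc[i]:
            j += 1
        if (j - i) * step / velocity < ho_delay:
            skips.add(assoc[i])
        i = j
    return skips

# Three clustered BSs: the middle one covers only a ~10 m chord of the path,
# so its dwell time (~0.36 s) falls below a 0.5 s HO delay and it is skipped.
bs = [(50, 0), (490, 0), (500, 0), (510, 0), (950, 0)]
print(dwell_aware_skips(bs, ((0.0, 0.0), (1000.0, 0.0)),
                        velocity=27.8, ho_delay=0.5))  # → {2}
```

An alternating scheme, by contrast, would skip every other BS regardless of how long the user dwells in each cell.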
In this paper, we exploit topology awareness and user trajectory estimation to propose smart HO management schemes in single and two tier downlink cellular networks.\footnote{We assume that the network has the information about the user trajectory within the target BS footprint. In some cases like users riding monorails in downtowns, the user trajectory across the target BS footprint is fixed and known. For other cases, several studies including~\cite{trajectory1,trajectory} are conducted in the literature on the estimation of mobile user trajectory.} The proposed schemes account for the location of the trajectory within the cells and/or the cell-size to take the HO skipping decision. The performance of the proposed schemes is analyzed using tools from stochastic geometry. More specifically, we consider two network scenarios, namely, a single-tier cellular network abstracted by PPP and a two tier cellular network abstracted by PPP macro BSs overlaid with PCP small BSs. Then we study the impact of HO delay on user throughput in the two network scenarios and show the gains and effectiveness of the proposed schemes by Monte Carlo simulations. The results manifest the HO problem of the always best connectivity scheme at high speeds in dense cellular environments. The user throughput via the proposed skipping schemes outperforms the always best connected scheme at velocities starting from 30 km/h. Particularly, for BSs intensity of 50 BS/km$^2$, the proposed schemes show up to $8\%$ more rate gains with respect to (w.r.t.) best connectivity and $23\%$ w.r.t. alternating skipping over the user velocity of 100 km/h, which is the average monorail speed in downtown. More gains are expected at higher BSs intensities. Finally, several insights into the design of HO skipping and the effective range of velocities for each of the proposed skipping schemes are presented.\\
The rest of the paper is organized as follows. Section~II overviews the HO procedure and presents the proposed HO schemes. Section~III analyzes the performance metrics (e.g., coverage probability, HO cost, and average throughput) for the proposed HO skipping schemes in a single-tier network. Section~IV shows the significance of the proposed model in two-tier networks. Finally, the paper is concluded in Section~V.
\section{Overview of Handover Process}
HO is the process of changing the user equipment (UE) association with mobility such that the best serving BS is always selected. One popular and simple rule for determining the best serving BS is based on the received signal strength (RSS) level. That is, the UE changes its association if another BS provides a higher RSS than the serving BS, which may happen when the user moves away from the serving BS towards another BS. With the increasing heterogeneity of cellular networks, many other criteria are developed for selecting the best serving BS, which may include load balancing, delay, and throughput metrics~\cite{alpha-load,MIMO,Hesham-traffic}. Regardless of the selection rule, the UE will always change its association with mobility, and the HO rate increases with the BS density. Hence, the HO cost is always an increasing function of the BS density.\\
In general, HO is performed in three phases: initiation, preparation, and execution. During the initiation phase, the user reports reference signal measurements from neighboring BSs to the serving BS. For instance, the signal measurement report in 4G Long Term Evolution (LTE) includes, but is not limited to, reference signal received power (RSRP) and reference signal received quality (RSRQ) (see~\cite{LTEHO} for the HO procedure in LTE). Also, the HO may be initiated based on downlink and/or uplink signal measurement reports. In the next phase, which is the preparation phase, signaling is exchanged between the serving BS, target BS, and the admission controller. The admission controller makes a decision about the initiation of the HO based on network defined HO criteria. Once the HO criteria are met, the user releases the serving BS and attempts to synchronize and access the target BS using the random access channel (RACH). Upon synchronization with the target BS, the UE sends a confirmation message to notify the network that the HO is executed. The aforementioned HO procedure involves signaling overhead between the user, serving BS, target BS, and core network, which interrupts the data flow and decreases the throughput of the mobile user. The frequency at which such interruptions happen is a function of the relative values of the BS intensity and user velocity. The duration of each interruption, denoted as the HO delay and measured from the beginning of the initiation phase to the end of the execution phase, can be significant~\cite{14a}. Consequently, at high velocities and/or in dense cellular environments, it is desirable to decrease the frequency of such HO interruptions, which motivates the HO skipping scheme. Note that high mobility can exist in dense cellular environments, such as riding monorails or driving over elevated highways that go through downtowns.\\
\begin{figure}[!t]
\centering
\hspace{-0.5cm}
\includegraphics[width=1 \linewidth]{combinedlatest.pdf}
\small \caption{Voronoi tessellation of a PPP based single tier cellular network with black circles representing the BSs' locations in 30 km x 30 km grid. (a), (b), and (c) represent alternating, user location aware, and cell-size aware HO skipping schemes, respectively. Blue line represents user trajectory while green and red colors denote serving (circles) and skipped (cross) BSs' coverage areas, respectively.}
\label{model}
\end{figure}
HO skipping reduces the frequency at which the HO process is performed by sacrificing some of the best BS connectivity associations. Hence, it maintains longer service durations with the serving BSs and reduces the HO delay.
For instance, in an RSS based association scheme with universal frequency reuse, HO skipping sacrifices some best signal-to-interference-plus-noise-ratio (SINR) associations along the trajectory. When the user skips the HO to the BS providing the strongest signal, denoted as the blackout (BK) phase, the interference from the skipped BS may overwhelm the SINR. To improve the SINR during blackout, nearest BS interference cancellation (IC)~\cite{16a} and non-coherent cooperative BS service via coordinated multipoint (CoMP) transmission~\cite{3gpp, crancomp1, crancomp2} can be exploited.\footnote{Non-coherent CoMP is used because channel state information is hard to predict at high velocities.} In the cooperative BS service, the user can be jointly served by the serving BS and the next target BS. In addition to IC and CoMP, the performance of the skipping scheme can be further improved by reducing the blackout durations along the users' trajectories via smart skipping schemes. Different from the topology agnostic (i.e., alternating skipping) approach proposed in \cite{icc,globe,velocityaware}, this paper focuses on the following three novel variants of HO skipping. Note that all of the following skipping schemes assume that the trajectory within the target BS footprint is known via some trajectory estimation techniques~\cite{trajectory1,trajectory}.
\subsubsection{\bf Location-Aware HO Skipping}
The location aware HO skipping scheme accounts for the shortest distance between the user trajectory and the target BS to decide the HO skipping. That is, the user skips associating to the target BS if and only if the minimum distance between the user trajectory and the target BS exceeds a pre-defined threshold $L$. The threshold $L$ can be designed such that the user skips only the BSs in which the trajectory passes through the cell edge. The location aware HO skipping scheme is illustrated in Fig.~\ref{model}(scheme b).\\
\subsubsection{\bf Cell-Size Aware HO Skipping}
Cell-size aware HO skipping allows users to skip HOs to target BSs that have a footprint (i.e., service area) smaller than a pre-defined threshold $s$. Since the cell dwell time depends on the BS footprint size, the size aware skipping scheme aims at avoiding long blackout durations. Hence, it allows users to skip small sized cells and associate with large cells. The cell-size aware HO skipping scheme is illustrated in Fig.~\ref{model}(scheme c). Note that it is implicitly assumed that the service areas of all BSs are known to the network, which can be inferred from several network planning tools used by cellular operators, such as Aircom Asset~\cite{asset} and Mentum Planet~\cite{mentum}. Such tools take antenna characteristics, BS configuration, terrain, and clutter information into account to predict cell-sizes.\\
\subsubsection{\bf Hybrid HO Skipping}
Neither the location aware skipping nor the size aware skipping alone accurately reflects the true cell dwell time. Hence, combining both schemes gives a better inference about the cell dwell time, which can improve the HO skipping decisions and performance. Consequently, the hybrid HO skipping scheme combines both location awareness and cell-size awareness to decide which BS to skip. That is, it takes user location and cell area into account while making the decision for HO.\\
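As a concrete illustration, the three skipping rules reduce to simple predicates. The following is a minimal sketch, not the paper's implementation: the minimum trajectory-to-BS distance and the cell area are assumed to be supplied by the trajectory estimation and planning tools mentioned above, and whether the hybrid rule combines its two conditions with a logical AND (shown here) or OR is a tunable design choice.

```python
def skip_location_aware(min_dist, L):
    # Skip the target BS iff the trajectory never comes closer than L,
    # i.e., it only clips the cell edge.
    return min_dist > L

def skip_size_aware(cell_area, s):
    # Skip the target BS iff its footprint is smaller than s,
    # i.e., the expected dwell time is short.
    return cell_area < s

def skip_hybrid(min_dist, cell_area, L, s):
    # Combine both indicators of a short dwell time (AND is assumed).
    return skip_location_aware(min_dist, L) and skip_size_aware(cell_area, s)
```

Relaxing the thresholds $L$ and $s$ individually while requiring both conditions is one way to obtain the "more HO skips at comparable coverage" behavior reported below for the hybrid scheme.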
\section{Handover Skipping in Single Tier Networks}
In this section, we consider a single tier downlink cellular network, where the BSs' locations are modeled via a two-dimensional homogeneous PPP $\Phi$ of intensity $\lambda$. It is assumed that all BSs transmit with the same power $P$. A general path loss model with path loss exponent $\eta>2$ is assumed. Without loss of generality, we focus on a test mobile user and index all BSs in ascending order of their instantaneous distances from the test user. By the Slivnyak-Mecke theorem for the PPP~\cite{20a}, the performance of any other user in the network is equivalent to the performance of the test user. Defining $R_{k}$ as the distance from the test user to the $k^{th}$ BS, the inequalities $R_1 < R_2 < R_3 < \cdots$ always hold. Channel power gains are assumed to be i.i.d. with unit-mean exponential distributions (Rayleigh fading), i.e., $h\sim \exp(1)$. We consider a universal frequency reuse scheme and study the performance of one frequency channel. We consider user mobility with constant velocity $v$ over an arbitrarily long trajectory and assume that a HO is triggered when the user enters the Voronoi cell of the target BS. We first analyze the coverage probability for all HO skipping cases and then evaluate the HO cost and average throughput with the simulation parameters shown in Table~\ref{tab1}.
\begin{table}[!t]
\caption{\: Simulation parameters for PPP based cellular network}
\center
\vspace{-0.5cm}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{|c c c c|}
\hline
\rowcolor{cyan}
\multirow{-1}{*}{\textcolor{white}{\textbf{Parameter}}} & \multirow{-1}{*}{\textcolor{white}{\textbf{Value}} } &\multirow{-1}{*}{ \textcolor{white}{ \textbf{Parameter}}} & \multirow{-1}{*}{\textcolor{white}{ \textbf{Value }}} \\ \hline \hline
& & & \\
\multirow{-2}{*}{Overall Bandwidth $W$:} & \multirow{-2}{*}{10 MHz } & \multirow{-2}{*}{ Path loss exponent $\eta$: } & \multirow{-2}{*}{4} \\
\cellcolor{cyan!20!} & \cellcolor{cyan!20!} &\cellcolor{cyan!20!} & \cellcolor{cyan!20!} \\
\multirow{-2}{*}{\cellcolor{cyan!20!} BS intensity $\lambda$:} & \multirow{-2}{*}{\cellcolor{cyan!20!}50 BS/km$^{2}$} & \multirow{-2}{*}{\cellcolor{cyan!20!}HO delay $d$:} & \multirow{-2}{*}{\cellcolor{cyan!20!} 1 s} \\
& & & \\
\multirow{-2}{*}{Size Threshold $s$:} & \multirow{-2}{*}{ 1.28/$\lambda$ km$^2$} & \multirow{-2}{*}{Location Threshold $L$:} & \multirow{-2}{*}{ 2.3/$\lambda$ m} \\
\cellcolor{cyan!20!} & \cellcolor{cyan!20!} & \cellcolor{cyan!20!} &\cellcolor{cyan!20!} \\
\multirow{-2}{*}{\cellcolor{cyan!20!} Hybrid Thresholds $s$, $L$:} & \multirow{-2}{*}{\cellcolor{cyan!20!} 0.38/$\lambda$ km$^2$, 1.8/$\lambda$ m } & \multirow{-2}{*}{\cellcolor{cyan!20!} Tx Power $P$:} & \multirow{-2}{*}{\cellcolor{cyan!20!} 1 watt} \\ \hline
\end{tabular}
}
\vspace{3mm}
\label{tab1}
\end{table}
\subsection{Coverage Probability}
The coverage probability is defined as the probability that the received SINR by the test user exceeds a certain threshold $T$. The coverage probability for the best connected case is given by
\begin{eqnarray}
\mathcal{C}_{BC} &=& \mathbb{P} \left\{ \frac{P h_1 R_{1}^{-\eta}}{ \sum_{i\in \Phi \backslash b_1}P h_{i} R_{i}^{-\eta} + \sigma^2} > T \right\}
\label{C1}
\end{eqnarray}
\noindent where the nearest BS, denoted as $b_1$, is removed from the interfering BSs in \eqref{C1} because the serving BS does not contribute to the aggregate interference.
In the blackout case, the test user is not served from the nearest BS due to HO skipping. Instead, the test user maintains the association with the serving BS (which is not the nearest anymore) or handovers the connection to the next target BS depending on their relative distances during blackout. If CoMP is enabled, then the test user is simultaneously served by both the serving and the next target BSs during the blackout phase. Let $R_s$ and $R_t$ denote the distances from the test user to the serving BS (denoted as $b_s$) and next target BS (denoted as $b_t$) during the blackout phase. Then the coverage probabilities for the blackout case with IC capabilities without and with BS cooperation are given by $\mathcal{C}^{(1)}_{BK(IC)}$ and $\mathcal{C}^{(2)}_{BK(IC)}$, respectively.
\begin{eqnarray}
\hspace{0.4cm} \mathcal{C}^{(1)}_{BK(IC)}=\mathbb{P}\left\{\frac{P h_x \min(R_{s},R_t)^{-\eta}}{\sum_{i\in\Phi\backslash b_{1},b_{x}} Ph_{i} R_{i}^{-\eta}+\sigma^2} > T \right\}
\label{C2}
\end{eqnarray}
\noindent where the subscript $ x=s$ if $R_s<R_t$ and $ x=t$ otherwise.
\begin{eqnarray}
\mathcal{C}^{(2)}_{BK(IC)}=\mathbb{P} \left\{\frac{|\sqrt{P}g_{x}R_{s}^{-\eta/2}+\sqrt{P}g_{t} R_{t}^{-\eta/2}|^2}{\sum_{i\in\Phi\backslash b_{1},b_{s},b_{t}} Ph_{i} R_{i}^{-\eta}+\sigma^2}>T\right\}
\label{C3}
\end{eqnarray}
where $g_x$ and $g_t$ are zero-mean and unit-variance complex Gaussian channels to reflect the non-coherent CoMP transmission. Note that $b_1$ in~\eqref{C2} and~\eqref{C3} is the skipped BS whose signal power is eliminated from the aggregate interference by virtue of IC.
The coverage probability for the best connected scenario given in \eqref{C1} is mathematically characterized in~\cite{7a}. Furthermore, the coverage probabilities for the HO skipping (i.e., blackout) scenarios given in \eqref{C2} and \eqref{C3} are mathematically characterized in \cite{icc} and \cite{globe}. However, it is highly difficult to conduct a tractable analysis for the proposed HO skipping schemes due to the random shape of the Voronoi cell and the random location and orientation of the trajectory within it. Therefore, we obtain the coverage probabilities for the best connected and HO skipping cases via Monte Carlo simulations. The simulations in this paper follow~\cite{icc,globe,velocityaware}, where both the mathematical analysis and the simulations are used and validated against each other.
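For concreteness, the flavor of such a Monte Carlo evaluation for the best connected case in~\eqref{C1} can be sketched as follows. This is a simplified, essentially interference-limited sketch, not the exact simulator behind the figures; the parameter values follow Table~\ref{tab1}, and the window radius and run count are arbitrary choices of ours.

```python
import numpy as np

def coverage_best_connected(lam=50e-6, T_dB=0.0, eta=4.0, P=1.0,
                            noise=1e-30, radius=3000.0, runs=2000, seed=0):
    """Monte Carlo estimate of P{SINR of the nearest BS > T}.

    lam is the BS intensity in BS/m^2 (50 BS/km^2 = 50e-6 BS/m^2);
    the test user sits at the origin of a disk of the given radius.
    The tiny default noise keeps the SINR well defined.
    """
    rng = np.random.default_rng(seed)
    T = 10 ** (T_dB / 10)
    covered = 0
    for _ in range(runs):
        n = rng.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            continue
        # Conditioned on n, the PPP points are uniform in the disk.
        r = radius * np.sqrt(rng.random(n))
        h = rng.exponential(1.0, n)        # Rayleigh fading power gains
        rx = P * h * r ** (-eta)           # received powers
        k = int(np.argmin(r))              # nearest BS serves the user
        sinr = rx[k] / (rx.sum() - rx[k] + noise)
        covered += sinr > T
    return covered / runs
```

As a sanity check, in the interference-limited regime with $\eta=4$ the closed form of~\cite{7a} gives a coverage probability of $1/(1+\sqrt{T}\arctan\sqrt{T})\approx 0.56$ at $T=0$ dB, which this sketch reproduces to within Monte Carlo error.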
Figs.~\ref{CP1} and \ref{CP2} show the coverage probability plots for the best connected and HO skipping cases without and with BS cooperation, respectively. As expected, sacrificing the always best connectivity reduces the average coverage probability over the user trajectory. Nevertheless, employing a smart skipping scheme via location and size awareness can mitigate such coverage probability reduction. Furthermore, comparing the results in Figs.~\ref{CP1} and \ref{CP2} quantifies the contribution of BS cooperation to the coverage probability. Note that the hybrid scheme shown in Figs.~\ref{CP1} and \ref{CP2} uses more relaxed size and distance constraints than the location and size aware schemes, as shown in Table I. Hence, it achieves more HO skips with a coverage probability comparable to the location and size aware schemes. Note that the coverage probabilities shown in Figs.~\ref{CP1} and \ref{CP2} reflect only the negative impact of HO skipping. The next section incorporates the HO cost into the analysis in order to fairly assess HO skipping.
\begin{figure}[!t]
\centering
\includegraphics[width=0.7 \linewidth]{Coverage.pdf}
\small \caption{Coverage probability vs. SINR threshold for best connected and HO skipping cases.}
\label{CP1}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7 \linewidth]{CoverageIC.pdf}
\small \caption{Coverage probability vs. SINR threshold for best connected and HO skipping cases with CoMP transmission.}
\label{CP2}
\end{figure}
\subsection{Handover Cost}
This section evaluates the HO cost for the best connected and HO skipping cases. The HO cost $\mathcal{D}$ is defined in terms of the normalized HO delay, which is given by
\begin{align}
\mathcal{D}= \min(\mathcal{H}_t \times d,1)
\end{align}
\noindent where $\mathcal{H}_t$ is the handover rate per unit time and $d$ is the delay in seconds per handover. Hence, the handover cost $\mathcal{D}$ is a unit-less quantity used to quantify the fraction of time wasted without useful data transmission along the user trajectory, which is due to handover signaling and radio link switching between serving and target BSs. Note that if $\mathcal{H}_t \times d \geq1$, this means that the cell dwell time is less than the handover delay. Hence, the entire time is wasted in handover signaling without useful data transmission and $\mathcal{D}$ is set to one.
The HO rate for a PPP based single tier network is characterized in~\cite{10a} for a generic trajectory and mobility model as
\begin{align}
\mathcal{H}_t= \frac{4v}{\pi}\sqrt{\lambda}
\end{align}
In order to calculate the HO rate via simulations, we first calculate the number of HOs per unit length and then multiply it by the user velocity. The number of handovers per unit length is calculated by dividing the number of handovers by the trajectory length. Thus, $\mathcal{D}$ can be expressed as
\begin{align}
\mathcal{D}&= \mathcal{H}_l \times v \times d
\end{align}
\noindent where $\mathcal{H}_l$ is the number of HOs per unit length.
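As a sanity check on the numbers used below, the HO cost of the best connected scheme follows directly from $\mathcal{D}=\min(\mathcal{H}_t d, 1)$ with $\mathcal{H}_t = 4v\sqrt{\lambda}/\pi$. The following is a small illustrative helper of ours, using the Table~\ref{tab1} values as defaults:

```python
import math

def ho_cost(v_kmh, lam_per_km2=50.0, d=1.0):
    """HO cost D = min(H_t * d, 1), with H_t = 4 v sqrt(lambda) / pi."""
    v = v_kmh / 3.6                               # km/h -> m/s
    lam = lam_per_km2 * 1e-6                      # BS/km^2 -> BS/m^2
    ho_rate = 4 * v * math.sqrt(lam) / math.pi    # handovers per second
    return min(ho_rate * d, 1.0)
```

At 100 km/h with 50 BS/km$^2$ and a 1 s HO delay, roughly a quarter of the time is lost to HO signaling ($\mathcal{D}\approx 0.25$); around 400 km/h the cost saturates at one, i.e., the dwell time no longer exceeds the HO delay.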
Fig.~\ref{DHO} depicts the HO cost for the best connected and HO skipping schemes. Since the HO cost depends on the number of HOs, the best connectivity association shows a significant HO cost as compared to the HO skipping strategies. The alternating HO skipping shows the minimal handover cost as it results in the maximum number of HO skips. However, the alternating skipping is topology agnostic and can make inefficient skipping decisions. Such inefficient decisions can be reduced via location and size awareness at the expense of a higher HO cost (cf. Fig.~\ref{DHO}) but a better coverage probability (cf. Figs.~\ref{CP1} and \ref{CP2}). While Figs.~\ref{CP1} and \ref{CP2} focus on the negative impact of the skipping schemes, Fig.~\ref{DHO} focuses on their positive impact. In the next section, the integrated effect (i.e., both negative and positive) of the skipping schemes is assessed in the context of user throughput.
\begin{figure}[!t]
\centering
\includegraphics[width=0.7 \linewidth]{DHO.pdf}
\small \caption{Handover cost for conventional and HO skipping cases.}
\label{DHO}
\end{figure}
\subsection{Average Throughput}
Average throughput is a key performance indicator (KPI) for the cellular operators, which can be used to show the interplay between HO cost and capacity gain imposed by network densification.
In this section, we quantify the effect of HO rate, and the impact of each of HO skipping schemes, on the average user throughput. The average throughput, denoted as $\mathcal{T}$, is defined as:
\begin{eqnarray}
\mathcal{T} &=& W\mathcal{R}(1-\mathcal{D}).
\label{through}
\end{eqnarray}
where $W$ denotes the overall bandwidth and $\mathcal{R}$ represents the ergodic spectral efficiency, which is defined by Shannon capacity formula as
\begin{eqnarray}
\label{rates}
\mathcal{R}&=&\mathbb{E}\big(\ln\big(1+{\rm SINR}\big)\big)
\end{eqnarray}
\noindent Table~\ref{tab2} shows the spectral efficiencies for the best connected and HO skipping cases with and without IC capabilities, which are obtained via simulations using the definition in~\eqref{rates}. The spectral efficiencies given in Table~\ref{tab2} are used to obtain the throughput plots via \eqref{through}, as shown in Fig.~\ref{th}. The figure clearly shows the impact of the HO cost on the user throughput as the velocity increases. The figure also shows that the negative impact of the HO can be relieved using the skipping schemes. Particularly, the location aware HO skipping outperforms all other schemes at low velocities, i.e., around 30 km/h. With proper adjustment of the hybrid skipping scheme, it outperforms the best connected association at 45 km/h and the location aware scheme at 135 km/h. Note that the slope of the hybrid curve is tunable through the cell-size threshold $s$ and the minimum distance threshold $L$. Size aware HO skipping is the least effective among the proposed schemes, even though it shows considerable gains as compared to the topology agnostic (i.e., alternating HO skipping) scheme. Note that the size aware scheme is easier to implement than the location aware and hybrid schemes as it does not require complete information about the user trajectory in the target cell. Finally, the alternating HO skipping becomes comparable in performance with the other schemes at very high user velocities (beyond 280 km/h) because the HO cost is significant and requires a high number of skips to be mitigated.
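The throughput comparison of Fig.~\ref{th} can be reproduced in spirit from~\eqref{through}: each scheme pairs its spectral efficiency from Table~\ref{tab2} with its own (scheme-dependent) HO rate. A simplified sketch of ours for the best connected scheme, whose HO rate is $4v\sqrt{\lambda}/\pi$, is shown below; the skipping schemes would plug in their reduced HO rates instead.

```python
import math

def best_connected_throughput(v_kmh, R=1.49, W=10e6,
                              lam_per_km2=50.0, d=1.0):
    """Average throughput W * R * (1 - D) in nats/s.

    R is the ergodic spectral efficiency in nats/sec/Hz (Table II),
    W the bandwidth in Hz, lam the BS intensity, d the HO delay.
    """
    v = v_kmh / 3.6
    ho_rate = 4 * v * math.sqrt(lam_per_km2 * 1e-6) / math.pi
    D = min(ho_rate * d, 1.0)             # HO cost (fraction of time lost)
    return W * R * (1.0 - D)
```

At standstill this gives $W\mathcal{R}\approx 1.49\times 10^{7}$ nats/s; at 100 km/h the same scheme loses about a quarter of it, which is the decay visible in Fig.~\ref{th}.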
\begin{table}[!t]
\renewcommand{\arraystretch}{1.05}
\caption{\: Spectral Efficiency for all cases in nats/sec/Hz}
\center
\resizebox{0.3\textwidth}{!}{
\begin{tabular}{|c c c|}
\hline
\rowcolor{cyan}
\multirow{-1}{*}{\textcolor{white}{\textbf{Scenario}}} & \multirow{-1}{*}{\textcolor{white}{\textbf{Non-IC}} } &\multirow{-1}{*}{ \textcolor{white}{ \textbf{IC}}} \\ \hline \hline
& & \\
\multirow{-2}{*}{Best connected ($\mathcal{R}_{BC}$)} & \multirow{-2}{*}{1.49} & \multirow{-2}{*}{-} \\
\cellcolor{cyan!20!} & \cellcolor{cyan!20!} &\cellcolor{cyan!20!} \\
\multirow{-2}{*}{\cellcolor{cyan!20!} Location Aware ($\mathcal{R}_{LA}$)} & \multirow{-2}{*}{\cellcolor{cyan!20!}1.40} & \multirow{-2}{*}{\cellcolor{cyan!20!}1.45} \\
& & \\
\multirow{-2}{*}{Hybrid ($\mathcal{R}_{HB}$)} & \multirow{-2}{*}{1.36} & \multirow{-2}{*}{1.42} \\
\cellcolor{cyan!20!} & \cellcolor{cyan!20!} & \cellcolor{cyan!20!} \\
\multirow{-2}{*}{ \cellcolor{cyan!20!} Size Aware ($\mathcal{R}_{SA}$) } & \multirow{-2}{*}{\cellcolor{cyan!20!} 1.21 } & \multirow{-2}{*}{\cellcolor{cyan!20!} 1.28} \\
& & \\
\multirow{-2}{*}{ Alternating ($\mathcal{R}_{AL}$)} & \multirow{-2}{*}{1.02 } & \multirow{-2}{*}{1.11} \\ \hline
\end{tabular}
}
\vspace{3mm}
\label{tab2}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7 \linewidth]{Throughput.pdf}
\small \caption{Average throughput vs. user velocity for conventional and HO skipping cases.}
\label{th}
\end{figure}
\section{Handover Skipping in Two Tier Networks}
Current cellular networks are evolving towards a multi-tier architecture in which the macro BSs are overlaid with small BSs to cover traffic hotspots. Since hotspots are usually concentrated around popular/social regions, the small BSs are better modeled via a PCP~\cite{Ghrayeb}. The PCP is generated from a parent PPP in which each point of the parent PPP is replaced by a cluster of points. The distribution of the cluster points around each parent point determines the type of the PCP. In this work, we consider the Mat\'ern cluster process in which the parent points are generated via a homogeneous PPP with intensity $\lambda_p$, while the daughter points are uniformly distributed within a ball of radius $r$, where the number of daughter points in each cluster follows a Poisson distribution with mean $\lambda_c$. The parent points represent the macro BSs of tier-1 and the daughter points represent the small BSs of tier-2, as shown in Fig.~\ref{pcp}. The total intensity of the BSs in the network becomes $\lambda^\prime= \lambda_{p}\lambda_{c}+\lambda_{p}$. It is assumed that the BSs belonging to the $i^{th}$ tier have the same transmit power $P_{i}$, $i\in\{1,2\}$, and a unity bias factor. A power-law path-loss model with path loss exponent $\eta_i>2$ is considered. Channel gains are assumed to have i.i.d. Rayleigh distributions. Due to the different powers used by the macro and small BSs, the coverage regions in Fig.~\ref{pcp} are represented via a weighted Voronoi tessellation~\cite{voronoi}.\\
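The considered Mat\'ern cluster process can be sampled as follows; this is an illustrative sketch in which the window size and seed are arbitrary choices of ours. Each parent point of a PPP with intensity $\lambda_p$ spawns a Poisson($\lambda_c$) number of daughters placed uniformly in a ball of radius $r$:

```python
import numpy as np

def matern_cluster(lam_p=0.04, lam_c=1.0, r=2.0, side=30.0, seed=0):
    """Macro BSs (parents) and small BSs (daughters) on a side x side window."""
    rng = np.random.default_rng(seed)
    n_p = rng.poisson(lam_p * side * side)        # number of macro BSs
    parents = rng.random((n_p, 2)) * side
    daughters = []
    for p in parents:
        n_c = rng.poisson(lam_c)                  # daughters in this cluster
        rad = r * np.sqrt(rng.random(n_c))        # uniform in the ball
        ang = 2 * np.pi * rng.random(n_c)
        daughters.append(p + np.column_stack((rad * np.cos(ang),
                                              rad * np.sin(ang))))
    daughters = np.vstack(daughters) if daughters else np.empty((0, 2))
    return parents, daughters
```

With the Fig.~\ref{pcp} parameters, the window holds on average $\lambda_p \cdot 30^2 = 36$ macro BSs and, since $\lambda_c=1$, the same expected number of small BSs, consistent with the total intensity $\lambda^\prime = \lambda_p\lambda_c + \lambda_p$.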
For the considered two-tier network, we follow the same methodology as in Section~III and study the user throughput to characterize the HO cost and assess the skipping solutions. We conduct our study on a test user moving with velocity $v$ and assume an RSS based association such that the HO is triggered when the user enters the Voronoi cell of the target BS. Motivated by its superior performance compared to all other skipping schemes, this section focuses on the location aware skipping. Particularly, we compare the location aware skipping scheme for different distance thresholds to the always best connected strategy.
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\linewidth]{pcpmodel2.pdf}
\small \caption{Weighted Voronoi tessellation of two tier PCP based downlink cellular network with $\lambda_p= 0.04$ BS/km$^2$, $\lambda_c= 1$ BS/km$^2$, $P_1= 1$ watt, $P_2= 0.5P_1$ watt, $r= 2$ km. Black squares represent macro BSs while red circles denote femto BSs.}
\label{pcp}
\end{figure}
To assess the user throughput, we first evaluate the coverage probabilities, spectral efficiencies, and HO costs. Then the average throughput is calculated as in~\eqref{through}. Table~\ref{tab3} shows the spectral efficiencies for the best connected and location aware HO skipping schemes with distance thresholds $L=0.77/\lambda^\prime$ and $2.56/\lambda^\prime$. Fig.~\ref{thpcp} shows the average throughput plots for the best connected and location aware HO skipping cases. It is observed that the location aware HO skipping scheme in a PCP based cellular network outperforms the best connected association once the user exceeds 40 km/h. The results show up to $47\%$ throughput gains, which can be harvested through the proposed smart handover strategy. From Fig.~\ref{thpcp}, it is also observed that location awareness with the smaller threshold outperforms location awareness with the larger distance threshold once the user exceeds 210 km/h. This is because decreasing the distance threshold $L$ relaxes the skipping constraint and increases the number of skips, which compensates for the excessive HO cost at high mobility. It is worth noting that the considered clustering scheme is used for illustrative purposes only; similar results and insights apply to other clustering schemes.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.05}
\caption{\: Spectral Efficiency for PCP Network in nats/sec/Hz}
\center
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{|c c c|}
\hline
\rowcolor{cyan}
\multirow{-1}{*}{\textcolor{white}{\textbf{Scenario}}} & \multirow{-1}{*}{\textcolor{white}{\textbf{Non-IC}} } &\multirow{-1}{*}{ \textcolor{white}{ \textbf{IC}}} \\ \hline \hline
& & \\
\multirow{-2}{*}{Best connected ($\mathcal{R}_{BC}$)} & \multirow{-2}{*}{1.26} & \multirow{-2}{*}{-} \\
\cellcolor{cyan!20!} & \cellcolor{cyan!20!} &\cellcolor{cyan!20!} \\
\multirow{-2}{*}{\cellcolor{cyan!20!} Location Aware $L=2.56/\lambda^\prime$ ($\mathcal{R}_{LA}$)} & \multirow{-2}{*}{\cellcolor{cyan!20!}1.18} & \multirow{-2}{*}{\cellcolor{cyan!20!}1.22} \\
& & \\
\multirow{-2}{*}{ Location Aware $L=0.77/\lambda^\prime$ ($\mathcal{R}_{LA}$)} & \multirow{-2}{*}{1.01} & \multirow{-2}{*}{1.08} \\ \hline
\end{tabular}
}
\vspace{3mm}
\label{tab3}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7 \linewidth]{ThroughputPCP.pdf}
\small \caption{Average throughput vs. user velocity for PCP based two tier network with $P_1= 1$ watt, $P_2= 0.1$ watt, $\lambda_p= 4$ BS/km$^2$, $\lambda_c= 12$ BS/km$^2$, $d= 1$ s, $r= 0.6$ km, $\eta_1=\eta_2=4$}
\label{thpcp}
\end{figure}
\section{Conclusion}
This paper sheds light on the negative impact of cellular network densification due to the imposed excessive handover rate. Particularly, the paper studies the average throughput decay with user velocity in dense cellular environments. To this end, the paper proposes simple yet effective HO management schemes via topology aware HO skipping. The proposed schemes take the user location and/or cell-size into account to make HO decisions, thus avoiding unnecessary HOs along the user trajectory. The effectiveness of the proposed schemes is validated in two network scenarios, namely, a PPP based single tier cellular network and a PCP based two tier cellular network. When compared to the conventional best RSS based connectivity strategy, the proposed skipping schemes show up to $47\%$ gains in the average throughput over user velocities ranging from 30 km/h to 240 km/h at a BS intensity of 50 BS/km$^2$. Higher gains are expected at higher BS intensities.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Background}
The volumetric water content (VWC) of soil, often represented as
$\Theta$, is defined as the ratio of the volume of the water in the
soil to the volume of the soil plus water:
\begin{equation}
\Theta = \frac{V_{\text{water}}}{V_{\text{wet soil}}}
\end{equation}
It is measured in units of cm$^3$/cm$^3$. The most accurate way to
measure VWC is by taking a soil sample of a known volume, weighing it,
drying it in an oven for 24+ hours, and then re-weighing
it~\cite{Noborio2001}. This process is time-consuming and requires
physical removal of soil at the depth you wish to measure, which makes
it impractical for irrigation purposes. Instead, commercial sensors
approximate VWC by measuring properties that are closely
correlated. One such property is the permittivity, $\varepsilon$,
which increases with the VWC of soil.
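The oven-dry reference method reduces to a one-line computation once the wet mass, dry mass, and core volume are known. The following is a generic illustration with made-up numbers, not data from this work; it assumes the lost mass is water with density of about 1 g/cm$^3$:

```python
def vwc_oven_dry(mass_wet_g, mass_dry_g, volume_cm3, rho_water=1.0):
    """Theta = V_water / V_sample for an oven-dried soil core."""
    v_water = (mass_wet_g - mass_dry_g) / rho_water   # grams -> cm^3
    return v_water / volume_cm3

# A 100 cm^3 core weighing 165 g wet and 140 g dry:
# Theta = (165 - 140) / 100 = 0.25 cm^3/cm^3
```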
Recall that permittivity is the ability of a substance to hold an
electrical charge. It is often treated as a complex number:
\begin{equation}
\varepsilon = \varepsilon' + j\varepsilon ''
\end{equation}
where $\varepsilon'$ is the real component and $\varepsilon''$ is the
imaginary component.
The \emph{dielectric permittivity constant} (also known as relative
permittivity), $\varepsilon_r$, is the ratio of the permittivity of
the substance to the permittivity of free space, $\varepsilon_0$:
\begin{equation}
\varepsilon_r = \varepsilon/\varepsilon_0
\end{equation}
There are two main types of sensors that measure permittivity to
approximate soil moisture: capacitive and time domain reflectometry
(TDR).
\subsection{Capacitive sensors}
Capacitive sensors measure the charge time of a capacitor,
which is a roughly linear function of
$\varepsilon$~\cite{sensorOverview}. The resistance of the soil is
also used to measure moisture, since as moisture increases resistance
decreases. However, every measurement degrades a resistive sensor via
electrolysis. Capacitive sensors are less prone to corrosion than
resistive sensors and are more accurate. For this reason,
commercial-grade soil moisture sensors are usually capacitive instead
of resistive\footnote{The SparkFun soil moisture sensor retailing for
\$6.95 is resistive and known for corroding quickly, so it is not
used on farms.}. In this work, we compare the accuracy of our system
against a Teros 12 capacitive sensor, which retails for \$250.
\subsection{TDR sensors}
Time domain reflectometry (TDR) is another common method for measuring
soil moisture. It measures the propagation time of EM waves by sending
a pulse down a cable and into the soil probe and measures how long it
takes for the signal to return. This time, represented as $\tau$, is
often called \emph{time of flight} (ToF) in the context of wireless
transmissions. $\tau$ is used to approximate the \emph{apparent
dielectric constant} $K_a$, which is a function of
$\varepsilon_r', \varepsilon_r'', \varepsilon_0$ and electrical
conductivity (EC) $\sigma$:
\begin{equation}
K_a = \frac{\varepsilon_r'}{2}\Bigg[\sqrt{1+\bigg(\frac{\varepsilon_r'' + \frac{\sigma}{2\pi f\varepsilon_0}}{\varepsilon_r'}\bigg)^2}+1\Bigg]
\end{equation}
At high frequencies, $\varepsilon_r$ is dominated by the real part
$\varepsilon_r'$, so
\begin{equation}
K_a \approx \varepsilon_r'
\end{equation}
The velocity of a wave in a media is
\begin{equation}
v = c \left(\frac{\mu\varepsilon_r'}{2} \left[1+\sqrt{1+ \left(\tfrac{\sigma}{\omega\varepsilon_0\varepsilon_r'}\right)^2}\right] \right)^{-1/2}
\end{equation}
where $c$ is the speed of light in free space, $\mu$ is the relative
magnetic permeability of the material, $\sigma$ is the conductivity (EC)
and $\omega$ is the angular frequency of the wave.
In soil, the relative magnetic permeability is very close to
1~\cite{patitz1995measurement} and the conductivity is typically less
than 0.3 S/m~\cite{idealEC}. Therefore, when $\omega$ is sufficiently large, the velocity simplifies to
\begin{equation}
v = c/\sqrt{\varepsilon_r'}
\end{equation}
If we know the distance $d$ that the wave travels through soil and the
ToF $\tau$, then $v = d/\tau$ and
\begin{equation}
K_a \approx \left(\frac{c\tau}{d}\right)^2
\end{equation}
Soil moisture $\Theta$ can be related to $K_a$ using formulas that depend on the soil type. One such formula is the Topp equation~\cite{Topp1980}, which is applicable to typical soils\footnote{A typical soil is 50\% solids (45-49\% minerals, 1-6\% organic matter) and 50\% water/air~\cite{typicalSoil}}:
\begin{equation}
\Theta = 4.3\times 10^{-6}K_a^3-5.5\times10^{-4}K_a^2+2.92\times 10^{-2}K_a-5.3\times 10^{-2}
\end{equation}
Substituting,
\begin{equation}
\Theta \approx 4.3\times 10^{-6} (\tfrac{c\tau}{d})^6-5.5\times10^{-4} (\tfrac{c\tau}{d})^4+2.92\times 10^{-2} (\tfrac{c\tau}{d})^2-5.3\times 10^{-2}
\end{equation}
Therefore, if we know the distance $d$ that an RF wave travels, and we
can accurately measure ToF, then we are able to approximate $\Theta$.
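The chain from measured ToF to moisture collapses into a few lines of code. A Python sketch of the substitution above (the 30 cm depth and $K_a = 5$ are illustrative values, not measurements from this work):

```python
C = 299_792_458.0  # speed of light, m/s

def topp_vwc(tof_s, d_m):
    """VWC from the one-way ToF through a soil layer of thickness d_m."""
    ka = (C * tof_s / d_m) ** 2               # apparent dielectric constant
    return (4.3e-6 * ka**3 - 5.5e-4 * ka**2
            + 2.92e-2 * ka - 5.3e-2)          # Topp equation

# Illustrative check: 30 cm of soil with K_a = 5 (a dryish soil).
d = 0.30
tof = d * 5 ** 0.5 / C                        # ToF implied by K_a = 5
print(round(topp_vwc(tof, d), 3))             # ~0.08, i.e. ~8% VWC
```
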
In TDR soil sensors, RF travels along a waveguide of known
length. They generate wideband signals (100 MHz--3 GHz for agricultural
grade) to ensure sufficient time resolution. Like capacitive sensors,
the probes must maintain good contact with the soil for accurate
measurements. The cost of a TDR sensor is about \$1000. Next we
discuss how soil moisture can be measured wirelessly without a
waveguide using radar technology.
\subsection{RADAR and GPR}
RADAR (RAdio Detection And Ranging), commonly written as radar, is a
well-developed technology with a history dating to before World War
II. Radars use the principle of \emph{RF backscatter}, the phenomenon of
RF bouncing off reflectors back toward the originating transmitter. A
radar transmitter sends out a known waveform, and the receiver
correlates the incoming RF with that known waveform. The received
samples are used to determine Time of Flight and angle of arrival
information, which can then be used to calculate the distance of
objects and their speed. This technique is conceptually similar to
echolocation. Radar was originally used primarily in military
contexts, but nowadays it has a diverse set of applications such as
predicting weather patterns, enforcing road speed, assisting
autonomous vehicles with navigation~\cite{Ward2016}, and even
monitoring human breathing~\cite{Li2016}.
There are two primary types of radar waveforms: continuous wave and
pulsed~\cite{Richards2010}. Continuous wave radars are transmitting
and receiving at all times, while pulsed radars periodically transmit a
short-duration pulse and listen for the reflections to come back.
Pulsed radars can easily determine the distance of a target by
measuring the time that elapses between pulse transmission and the
return reflection. They can also measure target speed. When the pulse
width of the radar is very short, it is known as an impulse radar or
Ultra Wideband (UWB) radar~\cite{hussain1998ultra}. Commodity UWB radars were enabled by a key circuits
discovery in 1994: the single-shot transient
digitizer~\cite{Azevedo1997}. This device is capable of high-speed,
high-accuracy digitization of very short pulses of energy
($<$5 ns). This allowed for the construction of a significantly cheaper and
smaller radar transceiver that digitizes incoming RF and correlates it
with the transmitted pulse samples.
Impulse radar needs to be wideband because of Fourier duality: pulses
that are short in the time domain require wide bandwidth in the
frequency domain. The bandwidth of a UWB radar typically ranges
between 2 and 8 GHz. Furthermore, the transmit power of UWB radars is
usually regulated to be very low to avoid causing interference to
other users on the same spectrum. This also makes UWB radar very
difficult to detect, as the transmitted waveform looks like white
noise. UWB radars are also popular because the low transmission power
ensures that the signal is harmless to living organisms.
Ground penetrating radars (GPR) use low frequencies that can penetrate
under the earth to do underground imaging. In addition to imaging,
these radars can also be used to calculate soil moisture using ToF
and/or signal strength. GPRs are usually wideband and cost thousands
of dollars. Generally the equipment is large (the size of a lawnmower
or larger) and needs to be dragged across the surface of the
soil. This is a labor intensive process which may not always be
possible in dense crops. Non-contact GPRs exist that can be attached
to drones, but these have lower resolution and can only reliably
measure to a depth of about 10cm~\cite{wu2019new}.
In the next section we discuss how pairing a radar with an RFID-like
underground backscatter tags allows us to measure soil moisture with
consumer-grade radars that are considerably less expensive and more
portable than traditional GPRs.
\section{Discussion and Conclusion}
We believe our system has the potential to be useful to farmers with both large and small farms. There are a number of steps between this work and
real-world deployment, however.
The environmental impact of the tags needs to be considered. Wireless
tags are more easily lost. Research into biodegradable printed circuit
boards~\cite{Guna2016} and soil batteries~\cite{Lin2015} may allow for
a more eco-friendly version that doesn't leach harmful elements into the soil over time.
We acknowledge that there is a lot of engineering and political work
to go from laboratory prototype to field trials, especially trials in
developing nations. We hope to trial our system at local farms soon, and
someday at farms in developing nations.
We also realize that there is a lot of active research in how to
best utilize agricultural robots and drones. These technologies have
seen some adoption, but in general much research remains for
determining how to scale.
There are also a number of additional research opportunities we would
like to explore:
\begin{enumerate}
\item Creating a tag with two antennas/oscillators would allow us to measure relative ToF in addition to absolute ToF. Our system currently measures absolute ToF, which corresponds to average soil moisture, but sometimes point soil moisture is preferred. Furthermore, relative ToF can be calculated without knowledge of the tag deployment depth if the separation between antennas is known.
\item Encoding additional information into radar backscatter so that the tag itself can store information like deployment depth and location, eliminating the need for the operator to use a lookup system.
\item Sensing opportunities beyond soil moisture such as measuring EC and contaminant mapping.
\end{enumerate}
\section{Summary}
In this paper we presented a two-part system for sensing soil moisture with RF that combines low-cost backscatter tags with a consumer-grade UWB radar acting as a reader. We achieve completely wireless soil moisture sensing with an accuracy comparable to that of state-of-the-art commercial and scientific sensors at an order of magnitude lower cost. We acknowledge
that there is a large gap between small-scale prototype and systems
deployed at scale, but we believe that it has the potential to
become an effective soil sensing solution for farmers in both
developing and developed nations.
\section{Design}
\begin{figure}
\centering \includegraphics[scale=0.25]{graphs/tagModel}\\
\caption{The tag is buried under soil at a known depth
$d_s$. The radar can measure the ToF/distance between itself and
the surface of the ground, $d_a$. Using these values, we can
calculate $\Delta \tau$, the amount ToF increases due to traveling
through soil instead of air.}
\label{figure:tagModel}
\end{figure}
Our key design insight is that we can make soil moisture
sensing orders of magnitude more affordable by using a system model
similar to RFID: cheap tags paired with a more expensive reader. Using
a GPR as the reader is possible, but these devices are often
not small enough to be hand-held or mounted on a drone. Instead, we look to consumer-grade
radars. Consumer-grade radar systems are becoming increasingly
affordable and accessible, which introduces exciting new sensing
possibilities. There are multiple radar devices on the market as cheap as \$50-400, and modern smartphones have even started integrating
them~\cite{soli}. In addition to being affordable, these radars tend to
be lightweight and portable/handheld.
Consumer-grade UWB radars have the ability to accurately measure the
ToF of reflected signals, since their high bandwidth corresponds to a
time resolution of 0.15--0.5 ns. For comparison, agricultural TDR
sensors range from 100 MHz to 3 GHz~\cite{Pelletier2012}, which
corresponds to a time resolution of at best 0.33 ns. Therefore an appropriate selection of
UWB radar will allow us to measure ToF with the same accuracy as TDR
soil sensors.
Accurate measurement of ToF is not alone sufficient for measuring soil
moisture, though. A reference point buried a known distance beneath
the soil is required. One prior work used a large metallic object
buried underground~\cite{Shamir2018}. We realized that, compared to a
plain piece of metal, a grounded wideband directional antenna
significantly increases the SNR of the signal returning to the
radar. This allows us to deploy the reference points deeper, and collect the measurements with inexpensive consumer-grade radars. These
radars also have a higher center frequency than GPRs, which allows
for smaller antennas and increased portability.
At mass production, these radar backscatter tags would cost about the
same as the underground RFID tags that municipalities use for utility
marking, which cost \$5--10 in bulk. The cost to densely outfit a large
farm with tags and a few radars would be less than \$100,000, which is under a
tenth the cost of using traditional soil sensors. Like traditional sensors, the tags would be re-usable across growing seasons. Furthermore, if the tag is buried sufficiently deep it could even remain in the ground if the field is cultivated/tilled, which disturbs the top 15-30cm of soil~\cite{till}.
For a smaller farm, 1-2 acres, taking measurements by hand is
feasible. The reader could even be integrated with a mobile phone,
which would significantly facilitate adoption in developing
nations. For larger farms, taking readings by hand may not be scalable,
so we assume that the reader would be mounted to a tractor, or
possibly an agricultural robot~\cite{Aroca2018} or
drone~\cite{farmbeats}, which are becoming increasingly popular.
In the following sections we expand on the design
details behind our system.
\subsection{Radar considerations}
A modern radar operates using \emph{frames}; a frame is the set of
samples from a single sweep across the radar's sensing area. For
example, a radar could sweep everything in front of it that lies
greater than 1 meter but less than 2 meters away. The sensing area is
typically adjustable. The shape of the sensing area depends on the
number and type of antennas. We use a pair (one TX, one RX) of
directional antennas, as the area of interest lies straight down.
For a pulsed radar, a frame typically has one complex sample per
\emph{range bin}. Each range bin corresponds to a range of possible
distances from the radar. For example, if the radar range resolution
is 5cm, the magnitude of the sample from the 10th range bin would
correspond to an object that is 45-50cm away from the radar. The
number and size of range bins depends on both the bandwidth and
sensing area. The wider the bandwidth, the smaller the range bins can
be. The larger the sensing area, the larger the range bins will
be. The size of the range bin determines the ToF resolution as well.
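The relationships above follow from the standard pulsed-radar rules of thumb: range-bin size $r = c/2B$ and ToF resolution $1/B$. A brief sketch (the 3 GHz bandwidth and 1--2 m sensing area are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Size of one range bin in meters: r = c / (2B)."""
    return C / (2 * bandwidth_hz)

def num_bins(near_m, far_m, bandwidth_hz):
    """Number of range bins covering a sensing area [near, far]."""
    return int(round((far_m - near_m) / range_resolution(bandwidth_hz)))

B = 3e9
print(range_resolution(B))     # ~0.05 m -> 5 cm bins
print(1 / B)                   # ~0.33 ns ToF resolution
print(num_bins(1.0, 2.0, B))   # ~20 bins for a 1-2 m sensing area
```

Doubling the bandwidth halves the bin size; doubling the sensing area doubles the bin count, matching the trade-offs described above.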
Three key specifications driving our choice of radar are the
bandwidth, frame rate, and center frequency. As discussed earlier, any
UWB radar with a sufficiently wide bandwidth will provide suitable ToF
resolution. All of the radars we considered had a bandwidth of 3 GHz
or more.
The frame rate determines how fast the radar can sweep the sensing
area. A faster frame rate means that measurements can be taken more
quickly and/or at a higher SNR. A number of radar settings impact the
achievable frame rate, including the sensing area and ADC/DAC
levels. All three of the radars we tested achieved a frame rate of at
least 200 fps. For moisture levels typically seen on farms, this allows
us to take measurements within 10 s for a tag buried at a depth
of 30 cm.
\begin{figure}
\centering \includegraphics[scale=0.5]{graphs/snrRadars}\\
\caption{SNR vs soil depth for three radar center frequencies}
\label{figure:radarSNR}
\end{figure}
The center frequency determines how deep beneath the ground a radar
can penetrate with a given transmission power. As the wavelength of
the RF decreases, the wave attenuates faster due to obstacles and
water (see Fig.~\ref{figure:radarSNR}). We ultimately selected the radar
centered at 1.5 GHz (a Novelda X1) due to its ability to penetrate
soil.
\subsection{Backscatter tag}
RF backscatter is the principle behind radar, but it has also long
been used as a low-power communication technique. RFID, for example,
uses an antenna as a reflector and changes the impedance to modulate
information on top of the reflected RF. The simplest kind of
modulation is binary on-off keying, which toggles the antenna between
grounded and open.
By using backscatter to communicate, instead of an active radio chain,
these backscatter tags use orders of magnitude less power than
traditional radios. Since we are burying our tag underground, long
battery life is a primary design goal. Backscatter tags can be passive,
semi-passive or active. Passive tags, such as those in anti-theft
stickers or door security badges, both harvest their operating power
via RF and communicate using backscatter. This depends on having an
incoming source of RF with a strong enough signal to enable power
harvesting. Semi-passive tags such as~\cite{Zhang2017} use a battery
instead of harvesting RF power. Active tags are used in long-distance
scenarios, such as toll transponders. These tags both use a battery
\emph{and} amplify the outgoing backscatter reflection. They do not
have a full radio chain so they still rely on incoming RF to
communicate.
There are no off-the-shelf backscatter tags designed for radar, so we
built our own prototype (see Fig~\ref{fig:tag}). UWB radars transmit
at low power to avoid causing interference, so passive tags are not an
option because the incoming RF is far too quiet for power
harvesting. Instead, we implemented semi-passive and active
designs. The semi-passive tag has a very simple design, consisting of
a UWB Vivaldi antenna, an RF switch and an oscillator. A waterproof
case creates an air pocket around the antenna, which acts as a radome
and ensures proper impedance matching, as direct contact with soil could cause a mismatch. One might wonder why we need
anything beyond an antenna---is the strong reflection not enough? In
open air, the answer is yes. Underground, though, the tag is just one
reflector among many, many other reflective particles of dirt and
rock. We also want a way to isolate reflections that are coming from the
tag.
\subsubsection{Identifying the tag among dense reflectors}\label{findTag}
Recall that radars are used to measure speed as well as distance. Our
key insight behind how we isolate the signal from the tag leverages
the fact that the environment the tag lies within is very
stable. Roots grow and water seeps, but at slow speeds. If we make the
tag seem like it is moving quickly, the signal will stand out strongly
against an effectively stationary backdrop.
Let the impulse the radar transmits be represented by $p(t)$, and the
received signal by $r(t)$. Then, the digitized sample for the $n$th
range bin can be written as
\begin{equation}
r[n] = \alpha p(nT - \tau)
\end{equation}
where $T$ is the sampling period, $\alpha$ is the complex attenuation
and $d_n$ is the distance of the $n$th range bin in meters. The time of
flight, $\tau = 2d_n/c$, is the time for a radar impulse to travel to an
object and then reflect back again.
Above, for simplicity we have assumed that there is only one reflector
per range bin, but in reality there are multiple reflections in a
range bin since dirt particles are small and dense. Taking that into
account, the resulting sample becomes the linear combination of all $K$
reflectors in the same bin:
\begin{equation}
r[n] = \sum\limits_{k=1}^K \alpha_k p(nT - \tau_k)
\end{equation}
Because pulse-based radars transmit at a regular interval known as
the pulse repetition interval (PRI), they can be used to obtain the speed of moving objects. If
an object is moving at a constant speed of $v$ m/s, then every frame
the object's ToF changes by $2v\Delta/c$ where $\Delta$ is the PRI.
This change in the time-domain corresponds to a phase change in the
frequency domain: $\phi_k = 2\pi f (2v_k\Delta/c)$. Although the phase
changes for each frequency within the bandwidth of the impulse, we can
simplify the math by using only the radar's center frequency
$f_c$\footnote{This approximation is only valid for signals where the
bandwidth of the signal is small compared to the center
frequency. Some radars use pulse compression to help overcome this
issue}. Then, the value for the $n$th bin in the $m$th frame will be
\begin{equation}
r_m[n] = \sum\limits_{k=1}^K \alpha_k p(nT - \tau_k)e^{-j2\pi \frac{2 v_k \Delta(m-1)}{\lambda}}
\label{eq:timedomain}
\end{equation}
where $\lambda$ is the wavelength of the radar center frequency,
$f_c$.
Note how similar Eq.~\ref{eq:timedomain} is to the discrete Fourier
transform:
\begin{equation}
F_k = \sum\limits_{n=0}^{N-1} f_n e^{\frac{-j2\pi}{N}kn}
\end{equation}
If we apply a 1-D inverse Fourier transform to each range bin across a
collection of $P$ frames, we get a \emph{range-Doppler image} which
tells us the speed of moving objects:
\begin{equation}
R[n,s] = \frac{1}{P}\sum\limits_{m=1}^{P} r_m[n] e^{j2\pi\frac{(m-1)(s-1)}{P}}
\label{eq:rangedopp}
\end{equation}
In Eq.~\ref{eq:rangedopp} above, $s$ is the Doppler bin, $n$ is the
range bin, and $P$ is the total number of frames captured over the
collection time.
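Since the range-Doppler image is just a DFT across frames per range bin, it can be computed with an FFT; the sign convention of the transform does not affect the magnitude image. A simplified simulation (frame rate, bin count, tag bin, and amplitudes are all illustrative) showing how a tag toggling at 80 Hz stands out against stationary clutter:

```python
import numpy as np

np.random.seed(0)

# Simulated capture: P frames of a radar staring at dense, static clutter
# plus one tag that toggles its antenna (on-off keying) at f_tag.
fps, P, n_bins = 200, 2000, 64        # 10 s capture at 200 fps
tag_bin, f_tag = 40, 80.0             # tag's range bin and toggle rate (Hz)
t = np.arange(P) / fps

frames = 10.0 + np.random.randn(P, n_bins)  # stationary reflectors + noise
square = (np.sign(np.cos(2 * np.pi * f_tag * t)) + 1) / 2  # 0/1 toggle
frames[:, tag_bin] += 10.0 * square         # tag's modulated reflection

# Range-Doppler image: DFT across frames for every range bin,
# with the static background removed first.
rd = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0)) / P
freqs = np.fft.rfftfreq(P, d=1 / fps)

doppler_idx, range_bin = np.unravel_index(np.argmax(rd[1:]), rd[1:].shape)
print(range_bin, freqs[doppler_idx + 1])  # the tag's bin and ~80 Hz stand out
```

The strongest non-DC cell lands in the tag's range bin at the toggle frequency; the square-wave pattern also produces weaker (aliased) harmonics, as seen in the range-Doppler plots below.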
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{graphs/rangedoppler.png}\\
\caption{Range-Doppler image generated from a 10s radar capture. The bright spots at 212 and 293 Hz correspond to the oscillation of an antenna that is
grounded and ungrounded at 212 Hz. The additional frequencies are harmonics caused by the square-wave nature of the tag pattern. }
\label{figure:rdplot}
\end{figure}
Figure~\ref{figure:rdplot} shows an example range-Doppler plot. We can
see that there are bright spots at 212 and 293 Hz. This plot is a
10-second capture where the radar is pointed at an antenna that is
grounded and ungrounded at 212 Hz. If there are no other
fast-moving objects in the radar's field of view, and the frame rate
of the radar is sufficiently high, we can use this property to discern
the signal from our backscatter tag from the signals due to other
reflectors.
Ultimately we set our tag to oscillate at 80 Hz, which is below the Nyquist
frequency of our 200 fps frame rate, but between the 60 and 120 Hz
interference caused by AC power\footnote{This is only relevant for our
indoor experiments}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{graphs/bin268}\\
\caption{Plot of the range-Doppler vector corresponding to 80Hz, the
frequency our tag oscillates at. When the tag is on, we see a very
strong peak compared to when the tag is off}
\label{fig:peak}
\end{figure}
Figure~\ref{fig:peak} shows the plot of the vector corresponding to
the 80 Hz frequency bin in our range-Doppler matrix. A strong peak
appears in bin 268 only when the tag is on. Thus we have successfully
identified the signal from the tag. This approach has the additional advantage that the SNR of the tag's signal increases with integration time. That is, the more frames we capture, the better the signal gets. So if a capture yields an ambiguous peak, we can simply capture additional frames.
\subsubsection{Amplification}
A semi-passive tag works well in typical soil conditions, but for tags
in especially deep and/or wet soil, the tag signal is too weak to
reliably detect. Therefore we also designed an active variant of our
tag that adds an amplifier. The incoming RF is amplified before going
into the RF switch, which has another antenna attached to it. The
oscillator causes this second antenna to toggle on and off with the
amplified signal, allowing the tag to operate in more extreme
conditions. This active tag is indistinguishable from the semi-passive
tag at the radar receiver, so we can use the same procedure to measure
soil moisture with both tags.
\subsection{Putting it all together}
This system relies on the user knowing a.) where the tag is located in
the field and b.) how deep the tag is buried. With this information we
know $d_s$, the amount of soil the direct path of RF has to travel
through (Fig.~\ref{figure:tagModel}). A few simple options make tracking this data easy, such as using marking flags,
annotated GPS coordinates, or slightly changing the oscillation
frequency of each tag, allowing for a lookup table that maps between
frequency and tag depth/location. Taking measurements with the radar
not located directly over the tag does add error. Using directional
antennas in both the tag and radar helps minimize the chances of
taking measurements off-center, since the signal strength will be much
weaker when the radar isn't directly overhead. Another option is using
a more sophisticated radar with multiple receive antennas, which would
enable adjusting $d_s$ with angle of arrival data.
Another piece of information is still needed to measure soil moisture: the distance between the radar and the surface of the ground,
$d_a$. The radar itself can easily measure this. In our experiments, $d_a$ ranged between 0.5-2m.
Then, using the technique outlined in~\ref{findTag}, our system finds the
range bin of the backscatter tag, $b_r$. The expected range bin if
there were no dirt on top of the tag is $b_T = d_a/r + d_s/r$, where
$r$ is the range resolution of the radar.
Now we can calculate $\Delta \tau$, the change in ToF caused by the
soil:
\begin{equation}
\Delta \tau = \frac{(b_r - b_T)r}{c}
\end{equation}
The approximate apparent dielectric constant of the soil is then
\begin{equation}
K_a \approx \left(1 + \frac{c\Delta \tau}{d_s}\right)^2
\end{equation}
where the added $1$ accounts for the ToF the wave would have accrued crossing the same depth $d_s$ of air.
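Assuming $\Delta\tau$ is the soil-induced increase in one-way ToF, so that $\sqrt{K_a} = 1 + c\Delta\tau/d_s$ (the extra $1$ accounts for the time the wave would have taken to cross the same depth of air), the bin-to-$K_a$ computation can be sketched as follows; all numeric values are illustrative:

```python
C = 299_792_458.0  # speed of light, m/s

def apparent_ka(b_r, d_a, d_s, r):
    """K_a from the tag's observed range bin.

    b_r: bin of the tag's Doppler peak; d_a: radar-to-ground distance (m);
    d_s: tag burial depth (m); r: range resolution (m per bin).
    """
    b_t = d_a / r + d_s / r              # expected bin with no soil delay
    delta_tau = (b_r - b_t) * r / C      # soil-induced extra one-way ToF
    # In air the wave crosses d_s in d_s/C seconds; in soil it takes
    # d_s*sqrt(Ka)/C, hence sqrt(Ka) = 1 + C*delta_tau/d_s.
    return (1 + C * delta_tau / d_s) ** 2

# Illustrative numbers: 5 cm bins, radar 1 m above ground, tag 30 cm deep.
# Soil with K_a = 9 delays the tag's echo by 12 bins.
r, d_a, d_s = 0.05, 1.0, 0.30
b_r = (d_a + d_s) / r + d_s * (9 ** 0.5 - 1) / r
print(apparent_ka(b_r, d_a, d_s, r))     # recovers K_a = 9
```
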
Finally, this apparent dielectric constant is fed directly into a known
relation like the Topp equation to determine the volumetric water
content (VWC), $\Theta$.
\section{Evaluation}
We evaluate our system in both laboratory and \emph{in situ}
settings. The laboratory evaluations were done using a large bin
containing about a cubic meter of soil to ensure that the backscatter
tag is covered equally by soil on all sides. The \emph{in situ}
evaluations were done at a local organic farm whose fields contain
sandy silt loam soil (see Fig.~\ref{fig:farmSetup}). We used the same
local tap water for all experiments. Below we discuss experimental
considerations in more detail.
\subsection{Soil type selection}
Soil can be broadly classified into three main types: clay, sand and
silt. Soils that are purely one type are rare, however, and most are a
mixture of two or more types (see Fig.~\ref{fig:soilTriangle}). Loam
is a roughly even combination of clay, silt and sand. Loamy soils are
considered to be ideal for agriculture~\cite{loamIdeal}, and most
crops are grown in soil that lies within the loam spectrum. We test
on three sub-types of loam: sandy clay loam, silt loam and
clay loam. Testing a variety of soils is important because the soil
type can strongly impact RF propagation properties. For example, clay
soils have fine particles that allow them to hold water very well, and
they also tend to have higher organic matter content, which raises the
electrical conductivity (EC) and makes it more difficult for RF to
penetrate.
We collected our soils by consulting a recent map of farmland soil
classifications~\cite{soilMap}. All three soils are identified by the USDA as suitable for agriculture.
\begin{figure}
\includegraphics[width=65mm]{graphs/soilTriangle}
\caption{Soil textural triangle~\cite{soilTriangle}}
\label{fig:soilTriangle}
\end{figure}
\subsection{Electrical conductivity}
As mentioned earlier, the depth that RF of a given wavelength can
penetrate into the ground (skin/penetration depth) is heavily impacted
by the soil's moisture content (which dictates the permittivity) and also
the EC.
Increasing water content increases both the permittivity and EC. Soil
with high organic matter content sees a greater increase in EC when
water is added. For example, our measurements found that the EC of
fully-saturated potting soil is 10x that of fully-saturated sandy clay
loam. Our system still works in potting soil, but the maximum reliable
deployment depth is only 10--15 cm vs the up to 75 cm possible with the
loam soils typically seen on farms.
Furthermore, soil amendments such as compost or liquid fertilizer can
also increase the EC of soil. Most farms maintain an EC between
0.75--2 mS/cm~\cite{idealEC}. In our experiments the EC remained
within that range for all soils except potting soil. Fortunately, we
did not see significant RF attenuation until EC levels rose above
2 mS/cm.
\subsection{Root zone depth}
Root zone depth, or maximum root zone depth, is the maximum depth of a
plant's roots. The effective root zone depth is the depth of soil from
which a plant's roots extract the most moisture. About 70\% of the moisture
extracted by a plant's roots is from the top half of the maximum root
zone. For example, celery has a maximum root depth of 60cm, and
an effective root zone depth of 30cm. This means that moisture
should be monitored within the top foot of soil.
Most crops have an effective root depth between 15-60cm, however fruit
crops (especially those that grow on trees) can extend as deep as
75cm~\cite{rootZone}. Our laboratory experiments were done at
a depth of 30cm primarily due to the limitations of container sizes
that could ensure the sensor was covered on all sides, but our \emph{in situ} experiments (see Fig.~\ref{fig:SNRdepth}) suggest that it could be deployed at depths up to 75 cm.
\subsection{Calibration}
All soil moisture sensors require a one-time soil-specific calibration
to achieve high accuracy. One common calibration procedure is
gravimetric, which involves weighing wet samples, oven drying them to
calculate the ground-truth VWC, and then fitting those measurements to
the sensor readings that were taken at the time of sample collections
to produce a custom equation that relates sensor output to VWC. We
performed gravimetric calibrations for both our commercial sensor and
radar sensor.
If lower accuracy is acceptable, soil-specific calibration is not
necessary and a general equation like the Topp equation can be used
instead.
Unlike other RF-based solutions such
as~\cite{Ding2019,Aroca2018}, our system does not require any
additional calibration as compared to commercial soil
sensors. However, accurate records of the depth the sensor was
deployed at are required. This depth can be measured manually, or by
using the radar itself once the sensor is placed in the hole (but
before the soil is replaced). We use the latter approach in our
evaluations.
\section{Implementation}
\hspace*{-6em}
\begin{table*}
\centering
\caption{Always-on power consumption of prototypes \label{table:power_breakdown}}
\begin{tabular}{l|l|l|l|l|l|l|l}
& \textbf{Oscillator} & \textbf{RF switch} & \textbf{Power management} & \textbf{RF detector} & \textbf{Amplifier} & \textbf{MCU} & \textit{\textbf{TOTAL}} \\ \hline
\textbf{Active tag} & 2.7\,$\mu$W & 63\,$\mu$W & 51\,$\mu$W & 87\,mW (3\,$\mu$W shutdown) & 267\,mW & 378\,$\mu$W (2.2\,$\mu$W) & 354.495\,mW \\ \hline
\textbf{Semi-passive tag} & 2.7\,$\mu$W & 63\,$\mu$W & 51\,$\mu$W & --- & --- & --- & 0.116\,mW
\end{tabular}
\end{table*}
The radar chip we used was the X1 (NVA6100) by Novelda, which is
centered at 1.5 GHz and has a bandwidth of 3 GHz. The chip is \$100 per
unit. We interface with the radar via a development kit made by Flat
Earth Inc~\cite{chipotle} that runs on a BeagleBone Black single-board
computer. For these evaluations the radar captures were processed via
MATLAB, but the signal processing required could relatively easily be
ported to run in a low-level language on a BeagleBone or
smartphone. All of our source code will be released to ensure
reproducibility.
The backscatter tags have three primary components: an SiT1534
programmable oscillator, an HMC1118 RF switch and a Vivaldi
ultra-wideband antenna (see Fig.~\ref{fig:tag}). A TPS76933 voltage regulator manages power
when the tag is powered by battery. The active tag has an additional antenna and an HMC374 amplifier.
\begin{figure}
\includegraphics[width=55mm]{graphs/passiveProto}
\caption{Prototype of a semi-passive tag}
\label{fig:tag}
\end{figure}
\subsection{Power consumption}
The primary area of concern with regards to power consumption is the
backscatter tag, since it is underground and the batteries cannot be
easily replaced. The power consumption of the radar is still
important, especially if the readings will be collected via drone, but
we assume that the radar reader system can be charged at least
daily. The NVA6100 radar chip we use consumes 116mW of
power~\cite{X1datasheet}, and the entire reader system (radar chip
plus BeagleBone Black board) consumes 450mW, about a
quarter of what smartphones consume. Power consumption could be further reduced in the future by using a low-power microcontroller platform (e.g. MSP430) instead of a BeagleBone
Black.
The power consumption for both the active and semi-passive tags is
presented in Table~\ref{table:power_breakdown}. The battery lifetime
of the semi-passive sensor is projected to be 15.02 years on
4$\times$AA batteries rated for 2500mAh. The active sensor consumes an
order of magnitude more power, so without duty cycling the battery
life would be about two months. However, using an RF detector such as the LT5538 that is powered up once per second to check for a wake signal, the battery
lifetime could be 3-4 years\footnote{assuming the high-power
components wake for a total of 5-7 minutes per day}. This comes at
the cost of a more complicated system---the transmit power of UWB
radar is required to be very low by federal regulation in most
countries, which makes it insufficient for providing a wake
signal. However, since the tag antenna is wideband, a narrowband
signal such as WiFi or RFID can be used to wake up the tag
instead. Narrowband signals can be transmitted at powers up to 4W when
a directional antenna is used. Using a VNA we measured the attenuation of an omnidirectional wake signal centered at 2.4Ghz, and found that travelling through 30cm of fully-saturated clay loam causes losses of about 80-90dB. A 30-36dB transmission would be high enough power to overcome that and successfully activate many RF detectors.
Unless the tag needs to be deployed in adverse conditions where
the soil is extremely wet and/or has high clay content, the active sensor is
probably not worth the added
complication and decreased battery life.
\section{Introduction}
Agriculture is the single largest pressure on the world's sources of
fresh water--- 69\% of the global fresh water supply is used for
agriculture~\cite{water}. Paired with the fact that the global
population is projected to exceed 9 billion by 2050~\cite{population}
with most of that growth coming from developing nations in Africa and
Asia, conservation of fresh water and sufficient food production are
key concerns that need to be addressed for future generations. Soil
moisture is the most important measurement for maximizing crop yield
without wasting water.
Multiple studies show that soil moisture sensors lead to water
savings of at least 15\%~\cite{watersavings}, and in some cases more
than 50\%, while maintaining crop yields or even increasing them up to
26\%~\cite{Zotarelli2009}. Yet soil sensors are still not widely
deployed on working farms despite decades of research confirming the
benefits. Fewer than 10\% of irrigated crops in the United States use moisture
sensors~\cite{USDAirrigation}, and that number is even lower in
developing nations. The lack of widespread adoption can be attributed
to three key challenges: 1.) high sensor cost 2.) difficulty of
deploying and maintaining the sensors and 3.) difficulty collecting
and processing the sensor data.
The average commercial soil moisture sensor costs more than \$100,
which does not include a power source or data logger to record and/or
transmit the measurement samples. Since soil moisture is not uniform
across a field, multiple sensors are needed to accurately measure
moisture for irrigation purposes. The average farm in the United
States is 444 acres~\cite{farmsize}. For a farm of that size, the
conservative cost of deploying the recommended density of 20
sensors per acre~\cite{sensorDensity} would be more than a million
dollars. Even for sparse deployments, benefits exceed the costs
only a third of the time~\cite{USDAprofits}. This makes it difficult
for farmers in even the wealthiest nations to justify investing in
moisture sensors. Consequently, soil moisture sensing is currently
infeasible for smallholder farmers in developing nations, which is
where most of the food and water insecurity will be concentrated.
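As a rough illustration of this arithmetic (the \$115 per-node figure is our own assumption for a sensor plus a minimal share of logger and power; the text above only states that the sensor alone exceeds \$100):

```python
# Back-of-envelope deployment cost at the recommended sensor density.
ACRES = 444              # average US farm size
SENSORS_PER_ACRE = 20    # recommended density
COST_PER_NODE = 115      # assumed: sensor plus minimal logger/power share

def deployment_cost(acres, density, unit_cost):
    """Total hardware cost of instrumenting a farm at the given density."""
    return acres * density * unit_cost

total = deployment_cost(ACRES, SENSORS_PER_ACRE, COST_PER_NODE)
print(f"~${total:,} for {ACRES * SENSORS_PER_ACRE:,} nodes")
```

Even with this modest per-node estimate, the total comfortably exceeds one million dollars.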
In addition to cost barriers, current sensors are not simple to deploy
and maintain. Very few companies offer a product that includes the
sensor, logger and power source ready for immediate use. Therefore
some amount of setup labor is required for each sensor. Though the
sensor is waterproof, the data logger (e.g., an Arduino) may need to be
waterproofed and powered separately. The sensor probe also needs to be buried. To
supply power, many opt to attach solar panels to a battery pack, which
requires mounting the panels to a wooden or metal post. These
laborious processes need to be repeated for every sensor node on the
farm, and again each time the field is tilled/cultivated. Furthermore, excess cables and bulky battery boxes make the
system prone to entanglement in farm equipment and tools. This all
adds up to a significant amount of manual labor to deploy and maintain the
sensor network.
Finally, data needs to be collected from loggers. For a large farm,
the most practical method is wireless collection. Installing WiFi or
cellular communication modules on every sensor is costly and
exacerbates power issues. Extending wireless coverage to a large farm
is not simple; cellular coverage in rural areas tends to be poor. A
number of recent works have considered the issue of networking sensors
in rural environments. For example,~\cite{farmbeats} uses TV
whitespace technology to provide a wireless gateway from the field to
the Internet. LoRaWAN, Sigfox and NB-IoT are all low-power wide-area networks (LPWANs) that target large scale IoT deployments~\cite{RazaKS16}.
In contrast to the networked sensor model used on farms, geophysicists
and remote sensing experts use a centralized approach. They have been
using ground penetrating radar (GPR) instead of wired sensors to
measure soil moisture for years. GPR has the advantage that it can
measure soil moisture completely wirelessly, eliminating the need for
sensor probes, solar panels and data loggers. The signal strength and
propagation speed of an RF wave is impacted by the media it travels
through. RF travels 2-6 times more slowly in soil than in air~\cite{Jol2008},
and the speed and signal strength decrease as moisture content
increases. Radars allow us to measure these changes in RF waves very accurately.
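A sketch of how such propagation-speed measurements translate into volumetric water content (VWC), assuming the widely used Topp et al. (1980) permittivity-to-VWC calibration (function names are our own, and the permittivity values are illustrative):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def permittivity_from_tof(two_way_time_s, depth_m):
    """Apparent dielectric permittivity from the two-way travel time
    of a radar pulse to a reflector buried at depth_m."""
    v = 2.0 * depth_m / two_way_time_s   # propagation speed in soil
    return (C / v) ** 2                  # from v = c / sqrt(eps)

def vwc_topp(eps):
    """Topp et al. (1980) empirical permittivity-to-VWC calibration."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

# Dry soil (eps ~ 4) -> RF travels ~2x slower than in air;
# wet soil (eps ~ 36) -> ~6x slower, matching the 2-6x range above.
for eps in (4.0, 16.0, 36.0):
    print(f"eps={eps:5.1f}  slowdown={math.sqrt(eps):.1f}x  VWC={vwc_topp(eps):.3f}")
```

The monotone relation between permittivity and VWC is what lets a ToF measurement to a known reflector stand in for a buried probe.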
The drawback is that the radars used in these studies are either
deployed in satellites~\cite{Fares2013}, whose data do not provide the necessary resolution, or are terrestrial radars that require contact
(or very close proximity) with the ground~\cite{Shamir2018}. These
terrestrial GPRs are also bulky, most being at least the size of a
lawnmower. Furthermore, the depth and accuracy achievable using a
terrestrial GPR alone is limited. This makes traditional GPR soil
moisture measurement techniques impractical for agriculture.
To address these concerns, we propose a hybrid approach. Instead of
using radar alone, we pair the radar with completely wireless
underground backscatter tags. Unlike traditional backscatter tags,
these tags do not have any additional sensors attached to them whose
measurements need to be communicated. Instead they merely provide a
known reference point in the ground and increase the strength of the
signal returning to the radar. This allows us to measure soil moisture
with RF using a significantly cheaper and more portable radar than
traditional terrestrial GPRs. In the future this radar reader could
even be integrated with farm equipment, a drone, or a mobile phone.
This two-part system allows us to implement a low-maintenance and
low-cost soil moisture sensing system that does not require cellular or other wireless connectivity. Backscatter tags are simpler to install than wired sensors, and do not require additional power or network infrastructure. The tags enable the radar reader to be mobile, which makes collecting measurements much easier and less destructive than using a traditional rolling GPR. The components are also lower cost than traditional moisture measurement systems. Mass-produced weatherproof backscatter tags such as~\cite{greenlee} are between \$5-10, and consumer-grade UWB radars are \$400-2500.
We designed two
prototype backscatter tags: one active and the other
semi-passive. Both measured soil moisture with an average error of
0.01-0.02$cm^3/cm^3$, a 90th percentile of $0.034cm^3/cm^3$, and a maximum error of at most 0.055$cm^3/cm^3$,
which is comparable to the accuracy of commercial soil
sensors~\cite{Datta2018}. The active tag measures soil moisture
accurately to saturation with a projected battery life of 3-4 years (3 months without duty cycling) on $4\times$AA batteries, where the semi-passive tag is accurate within ranges typically
seen in agriculture and has a projected battery life of more than 15
years. We also show that the system can be deployed at depths of 30cm or
more.
\section{Related Work}
In addition to GPR techniques, other works have also used RF to
measure soil moisture. Strobe~\cite{Ding2019} uses commodity WiFi
transmissions to measure relative ToF between buried antennas. These
antennas require being wired to a power-hungry 802.11 WiFi chip. Furthermore, there are significant additional calibration procedures required. Non-radar UWB transceivers have also used ToF to perform sensing such as localization~\cite{grobetawindhager2019snaploc} and ECG~\cite{Toll2019WirelessEB}.
Researchers in Israel used ToF measured via a \$20,000 GPR paired
with buried metal bars~\cite{Shamir2018} to measure soil moisture. The
radar used requires direct contact with the surface of the soil,
though, and its accuracy is similar to that of our system.
Our system is inspired by RFID, which has itself been used to classify
food and beverages~\cite{Ha2018}~\cite{Wang2017} and measure soil
moisture~\cite{Aroca2018}. The latter work uses commodity RFID tags
paired with neural networks to determine soil moisture via RSSI. This
work has the drawback that the neural network has to be re-trained for
every single soil type \emph{and} deployment depth. However, they have successfully used an agricultural robot to collect the moisture measurements autonomously.
\section{Results}
\begin{figure}
\includegraphics[width=55mm]{graphs/dirt}
\caption{Setup for laboratory experiments}
\label{fig:farmSetup}
\end{figure}
\subsection{Laboratory}
\begin{figure*}
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=85mm]{graphs/farmVWC}\\
\subcaption{Sandy clay loam}
\end{minipage}\\
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=85mm]{graphs/siltVWC}\\
\subcaption{Silt loam}
\end{minipage}\\
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=85mm]{graphs/clayVWC}\\
\subcaption{Clay loam}
\end{minipage}%
\caption{Active tag VWC measurements of three soil types from dry
to saturated with the tag buried at a depth of 30cm. Moisture level 1 is completely dry soil which was then gradually dampened in 7 liter increments until saturation at level 5. Note that saturation depends on the soil type.}
\label{fig:active}
\end{figure*}
Figure~\ref{fig:active} shows the results from our active tag. Each
radar datapoint is the average and standard deviation of 10
measurements. The radar captures used for the measurements lasted
10-30s, with the exception of the saturation moisture levels, which used 100s captures\footnote{Faster measurements are possible with higher radar frame rates. The radar development kit we used runs a Linux distribution, and IO interrupts limited our achievable frame rate to 200fps. Porting the development kit to run on a barebones system may further increase the frame rate without having to upgrade the radar hardware itself.}. Each commercial sensor datapoint is the average and
standard deviation of 3-5 measurements, where each measurement is
taken in a different part of the soil. The size of the container we
conducted experiments in limited the number of commercial sensor
datapoints. To conduct the experiment, we began with about a cubic
meter of air-dried soil and gradually dampened it in 7 liter
increments. In these laboratory experiments we homogenized the soil
moisture by mixing the added water vigorously by hand. This was to
ensure that the Teros 12 sensor we compared against was not biased by
wet or dry pockets of soil.
We see that for all soil types both our system and the commercial sensor
closely track the ground truth, which is the average of two oven-based
volumetric measurements per moisture level. The average error of
our system is $0.015cm^3/cm^3$, compared to $0.007cm^3/cm^3$ for the commercial Teros 12 sensor. Though our system's
average error is higher than the commercial sensor's, the difference is not
significant. Calibrated commercial sensors are advertised as having an
average error between $0.01-0.03cm^3/cm^3$. The greatest error is seen
with the sandy clay loam soil at saturation, where our system
underestimates VWC by $0.05cm^3/cm^3$. This maximum level of error is
also typical among commercial sensors.
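The summary statistics reported here (mean, 90th percentile and maximum absolute error) can be computed as follows (NumPy; the sample values below are illustrative placeholders, not the raw data from these experiments):

```python
import numpy as np

def error_stats(measured, truth):
    """Per-sample absolute VWC error and the summary statistics
    reported above: (mean, 90th percentile, maximum)."""
    err = np.abs(np.asarray(measured) - np.asarray(truth))
    return err.mean(), np.percentile(err, 90), err.max()

# Hypothetical radar vs. oven-dried ground-truth VWC values (cm^3/cm^3).
radar = [0.05, 0.12, 0.21, 0.30, 0.41]
oven  = [0.06, 0.10, 0.20, 0.33, 0.40]
mean_e, p90_e, max_e = error_stats(radar, oven)
```

With default NumPy settings, `np.percentile` linearly interpolates between sorted error samples, which matches how the 90th-percentile figures above are usually quoted.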
\begin{figure*}
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=85mm]{graphs/farmPassVWC}\\
\subcaption{Sandy clay loam}
\end{minipage}\\
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=85mm]{graphs/siltPassVWC}\\
\subcaption{Silt loam}
\end{minipage}\\
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=85mm]{graphs/clayPassVWC}\\
\subcaption{Clay loam}
\end{minipage}
\caption{Passive tag VWC measurements of three soil types from dry
to loss of signal with the tag at a depth of 30cm. Moisture level 1 is completely dry soil which was then gradually dampened in 4-5 liter increments.}
\label{fig:passive}
\end{figure*}
Figure~\ref{fig:passive} shows the results from our passive tag. This
time we added water in $4-5L$ increments and stopped when the signal
from the passive tag was no longer visible. This was typically 5-15\%
before saturation. Again, both our system and the commercial sensor
closely track the ground truth, with our system achieving an average
error of $0.014cm^3/cm^3$ and the commercial sensor
$0.013cm^3/cm^3$. The maximum error for both our system and the
commercial sensor was $0.04cm^3/cm^3$.
\begin{figure}
\subcaptionbox{Active tag}{%
\includegraphics[width=85mm]{graphs/activeSNR}%
}
\subcaptionbox{Passive tag}{%
\includegraphics[width=85mm]{graphs/passiveSNR}%
}
\caption{SNR vs VWC of 100s captures for passive and active tags in
the laboratory}
\label{fig:snrLab}
\end{figure}
Figure~\ref{fig:snrLab} shows the results of SNR vs soil moisture for
both tags across the three different soil types. As expected, the SNR
for the passive tag decreases more quickly than the active tag as
moisture level increases. Also, the signal in both silt and clay loam
soils is weaker than in the sandy clay loam. There does not appear to be
a significant difference between silt and clay loams, though.
\subsection{\emph{In situ}}
\begin{figure}
\includegraphics[width=55mm]{graphs/radarInSitu}
\caption{Setup for \emph{in situ} experiments at a local farm. The
radar is pointed at an underground tag, and the surface of the soil
is watered in 7L increments using a watering can.}
\label{fig:situSetup}
\end{figure}
Figure~\ref{fig:VWCsitu} shows the results of both passive and active
tags for the \emph{in situ} VWC experiments. In these experiments our
tag was buried under 30cm of soil. For comparison, we used two Teros
12 sensors, one at a depth of 30cm and the other near the surface at a
depth of 5cm. Unlike the laboratory experiments, we do not disturb the
soil and instead let the water seep over time. The deeper commercial
sensor showed no change in soil moisture across the experiments, even
after applying more than 20 liters of water and letting it seep
overnight. Water did successfully seep into the soil around the
shallow sensor, but there was still a delay of up to an hour between
watering and seeing the change of moisture level.
As expected, since our system measures the average moisture of the soil
between the tag and the surface, it closely tracks the average of the two Teros
sensors. Furthermore, our system reacts immediately to the addition of
water. This suggests that it provides faster feedback after water application
than traditional sensors, which might prevent
over-watering. Furthermore, it inherently reflects the average
soil moisture across the whole effective root zone, whereas multiple
commercial sensors are required to accomplish the same.
\begin{figure}
\subcaptionbox{Active tag}{%
\includegraphics[width=85mm]{graphs/farmInSituPassive}%
}
\subcaptionbox{Passive tag}{%
\includegraphics[width=85mm]{graphs/farmInSituActive}%
}
\caption{VWC \emph{in situ} on a farm field with passive and active
tags buried at a depth of 30cm. Measurements were taken every
30 minutes, with 7 liters of water poured on the soil at times 2, 4, 6
and 8.}
\label{fig:VWCsitu}
\end{figure}
One of the limitations of our laboratory experiments is that it is
difficult to evaluate performance with the tag buried more than 30cm
deep, since covering the tag with $>$30cm of dirt on all sides would
require bringing a prohibitively large amount of dirt
indoors. Outdoors, we were able to dig a hole more than 75cm deep and
gradually cover the tag with dirt. Fig.~\ref{fig:SNRdepth} shows the
results of these experiments. At a VWC of $0.15cm^3/cm^3$ we were
still able to detect both tags successfully at a depth of 77cm. This
suggests that it can probably be deployed deeper than the 30cm
evaluated in-laboratory to accurately measure the VWC typically seen
on farms.
\begin{figure}
\includegraphics[width=85mm]{graphs/maxDepth}
\caption{SNR vs tag depth of 100s captures for passive and active
tags. Measurements were performed in an actively watered farm field
containing sandy clay loam. The VWC of the soil was about 15\% at
the time of measurement. }
\label{fig:SNRdepth}
\end{figure}
Figure~\ref{fig:SNRInSitu} shows how the SNR for both active and
passive tags changes with VWC. Compared to the laboratory experiments
(Fig~\ref{fig:snrLab}), the passive tag SNR drops off much less
steeply. This further suggests that the passive version of the tag is
well-suited for agriculture and that the added complication and
reduced battery life of the active tag will usually not be necessary.
\begin{figure}
\includegraphics[width=85mm]{graphs/inSituSNR}
\caption{SNR vs VWC \emph{in situ} for 100s captures on a farm
field with passive and active tags buried at a depth of 30cm}
\label{fig:SNRInSitu}
\end{figure}
\section{Introduction}
\label{sec:1}
Hypothesis testing is the task of assigning one of a discrete set of models to describe an observed system.
Measurements on quantum systems have random outcomes and the discrimination of two hypotheses $h_0$ and $h_1$ is a statistical inference problem. Viz., there is a probability $P(m=i|h_j)$ that measurement data processed to yield a binary outcome $m=0,1$ is consistent ($i=j$) or inconsistent ($i\neq j$) with the true hypothesis,
and a corresponding average probability that an erroneous hypothesis will be assigned,
\begin{align}\label{eq:Qe}
Q_{\mathrm{e}} = P(m=1|h_0)P(h_0)+ P(m=0|h_1)P(h_1),
\end{align}
where $P(h_0)$ and $P(h_1)$ are the prior probabilities of each hypothesis.
Distinguishing two different Hamiltonians $\hat{H}_0$ and $\hat{H}_1$, governing the evolution of a closed quantum system, is achieved by discriminating the two quantum states $\rho_0(t) = \ket{\psi_0(t)}\bra{\psi_0(t)}$ (hypothesis $h_0$) and $\rho_1(t) = \ket{\psi_1(t)}\bra{\psi_1(t)}$ (hypothesis $h_1$), resulting from time evolution under each candidate Hamiltonian from a common initial state of the system.
Only orthogonal states can be discriminated unambiguously while, in general, the overlap between the candidate states defines a minimum error probability for any measurement protocol,
$Q_{\mathrm{e}}\geq Q_{\mathrm{e}}^{(\text{min})}$ where \cite{Helstrom1969}
\begin{align}\label{eq:QeMinPure}
Q_{\mathrm{e}}^{(\text{min})} = \frac{1}{2}\left(1-\sqrt{1-4P(h_0)P(h_1)|\bra{\psi_0}\psi_1\rangle|^2}\right).
\end{align}
As derived by Helstrom \cite{Helstrom1969}, this bound can be saturated by performing a projective measurement of the operator
\begin{align}\label{eq:PiPure}
\hat{A} = P(h_0)\rho_0-P(h_1)\rho_1,
\end{align}
and assigning hypothesis $h_0(h_1)$ if the outcome is one of the positive(negative) eigenvalues of $\hat{A}$.
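As a concrete numerical check of \eqref{eq:QeMinPure} and \eqref{eq:PiPure}, the following sketch (NumPy; helper names are our own) constructs the Helstrom observable for two pure qubit states and recovers the minimum error probability from its spectrum:

```python
import numpy as np

def helstrom_observable(psi0, psi1, p0=0.5):
    """A = P(h0) rho0 - P(h1) rho1 for two pure candidate states."""
    p1 = 1.0 - p0
    rho0 = np.outer(psi0, psi0.conj())
    rho1 = np.outer(psi1, psi1.conj())
    return p0 * rho0 - p1 * rho1

def q_e_min_spectral(psi0, psi1, p0=0.5):
    """Minimum error probability (1/2)(1 - ||A||_1) from the spectrum of A."""
    lam = np.linalg.eigvalsh(helstrom_observable(psi0, psi1, p0))
    return 0.5 * (1.0 - np.abs(lam).sum())

def q_e_min_overlap(psi0, psi1, p0=0.5):
    """Closed form in terms of the state overlap."""
    p1 = 1.0 - p0
    ov = abs(np.vdot(psi0, psi1)) ** 2
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p0 * p1 * ov))
```

Assigning $h_0$ ($h_1$) to positive (negative) eigenvalues saturates the bound; for orthogonal states both expressions give zero error, and for identical states with equal priors they give $1/2$.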
\begin{figure*}
\includegraphics[trim=0 0 0 0,width=1.9\columnwidth]{generalMeasurement.pdf}
\caption{
(a) A projective measurement on a system and its environment is performed after they have interacted for a time $T$. Based on the outcome, one of two hypotheses $h_0$ and $h_1$ about their evolution is judged to be more likely than the other.
An experimentally more realistic approach monitors the radiation emitted by the probe system for a time $T$ by (b) photon counting or (c) homodyne detection. The conditional state of the emitter defines the optimal system projection to be performed at the final time $T$ and the most likely hypothesis is inferred from the combined monitoring signal and projection outcome.
}
\label{fig:intro}
\end{figure*}
In this article, we study the use of an open quantum system to distinguish between different Hamiltonian hypotheses. This can, in principle, be accomplished by measuring the optimal observable (\ref{eq:PiPure}) where the $\ket{\psi_i(t)}$ denote the combined states of the system and its environment.
We focus on the common example of a quantum system coupled to a broadband radiation reservoir. A driven system and the quantized radiation field evolve into entangled states in an infinite dimensional Hilbert space. While these states may differ significantly for different Hamiltonians, the potentially highly non-local projective measurement (\ref{eq:PiPure}) of combined system and environment observables, see \fref{fig:intro}(a), is difficult to achieve in practice. Instead, as illustrated in \fref{fig:intro}, one often resorts to performing photon counting (b) or field quadrature measurements (e.g. homodyne detection (c)) on the environment.
If the candidate Hamiltonians cause the system to evolve into different steady states, the mean number of emitted photons or the mean homodyne detection signal may be averaged over a long enough time to suppress statistical uncertainty about their values, such that the Hamiltonian can be inferred with certainty. Faster, and hence more efficient, inference can be made from observation of the correlations in the full noisy measurement record.
To give an example, the steady state yields identical emission rates from atoms excited by one of two strong laser fields, while the time intervals between photon detection events follow distinct oscillatory waiting time distributions.
Optimal inference from any measurement record is obtained by a Bayesian analysis which yields the probabilities $P(h_0|D_t)$ and $P(h_1|D_t)$ ascribed to each hypothesis based on their prior probabilities and on the full data record $D_t$ retrieved until the time $t$ \cite{1355-5111-8-6-002,PhysRevA.64.042105,PhysRevLett.108.170502,PhysRevA.87.032115,PhysRevA.79.022314,PhysRevA.95.022306,PhysRevA.94.032103,PhysRevA.89.052110,PhysRevA.91.012119}.
In this work, we investigate to what extent supplementing continuous monitoring of the emitted radiation from the initial time $t=0$ to a final time $t=T$ by a final projective measurement on the emitter system, allows better distinction between different hypotheses governing the system dynamics.
Due to the measurement backaction associated with continuous monitoring of the environment, the state of the emitter evolves in a conditional manner according to the stochastic measurement signal \cite{QMC}.
In any particular realization of the measurement sequence, the optimal final measurement on the system (\ref{eq:PiPure}) is thus conditioned on the detection record $D_T$ obtained up until the final time $T$,
\begin{align}\label{eq:PiConditional}
\hat{A}^{D_T}_T = P(h_0|D_T)\rho^{(D_T)}_0(T)-P(h_1|D_T)\rho^{(D_T)}_1(T).
\end{align}
Here the information extracted from the environment is incorporated in the conditional candidate states $\rho_i^{(D_T)}(T)$ and their probabilities
updated by Bayes rule, $P(h_i) \rightarrow P(h_i|D_T)$.
The continuous monitoring and conditioned evolution of quantum states have for instance been realized in experiments with superconducting qubits \cite{murch2013observing,PhysRevA.96.022104,PhysRevX.6.011002} and optomechanical systems \cite{PhysRevLett.114.223601}.
After homodyne or heterodyne detection of the radiation signal has been performed until time $T$ on for example a superconducting qubit, a final system projection can be achieved in these experiments by applying a strong, dispersively coupled probe field \cite{murch2013observing,PhysRevLett.114.090403}.
We compare such realistic measurement strategies with the theoretical limit for distinguishing different hypotheses.
See also \cite{Jacobs2007FeedbackCF} for an alternative, adaptive approach to hypothesis testing and state discrimination with continuous measurements.
The article is organized as follows.
In Section~\ref{sec:2} we outline the main ideas of hypothesis testing with monitored quantum systems and we recall a lower (quantum) bound for the error probability.
In Section~\ref{sec:3} we present numerical simulations which illustrate and exemplify different aspects of our theory.
In Section~\ref{sec:4} we provide a conclusion and an outlook.
\section{Bayesian analysis of a measurement record}\label{sec:2}
We consider a system subject to a sequence of measurements or continuous monitoring from time $t=0$ to a final time $t=T$.
During this phase, a signal $dD_t$ is recorded and by $D_t$
we denote the full signal obtained between time $0$ and $t$. Under hypothesis $h_i$ any given realization of $D_t$ has a probability $P(D_t|h_i)$ determined from the conditional candidate quantum state $\rho^{(\D)}_i(t)$.
Bayes rule yields the corresponding update of the likelihood $P(h_i|D_t)$ assigned to each hypothesis,
\begin{align}\label{eq:BayesMonitor}
P(h_i|D_t) = \frac{P(D_t|h_i)P(h_i)}{\sum_j P(D_t|h_j)P(h_j)}.
\end{align}
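A minimal numerical transcription of \eqref{eq:BayesMonitor} (plain Python; the per-bin likelihood interface is our own illustrative choice): each time bin multiplies the unnormalized weights by the corresponding signal likelihoods, and normalization recovers the posterior:

```python
def posterior(priors, step_likelihoods):
    """Bayesian update of two hypothesis probabilities from a record.

    priors: (P(h0), P(h1)); step_likelihoods: iterable of
    (P(dD_t|h0), P(dD_t|h1)) pairs, one per time bin of the record D_t.
    """
    w0, w1 = priors
    for l0, l1 in step_likelihoods:
        w0 *= l0
        w1 *= l1
    norm = w0 + w1
    return w0 / norm, w1 / norm
```

With equal priors, a single bin that is twice as likely under $h_0$ yields the posterior $(2/3,\,1/3)$, and a second bin with the likelihoods reversed restores $(1/2,\,1/2)$.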
The backaction of the measurement associated with the outcome $dD_t$ applies directly to the current state of the system, $\rho^{(\D)}(t) \rightarrow {\hat{M}(dD_t)\rho^{(\D)}(t)\hat{M}^\dagger(dD_t)}/{P(dD_t)}$. Here the sum (integral) of the positive-operator valued measure (POVM) over all possible detection outcomes yields the identity, $\sum_{dD_t} \hat{M}^\dagger(dD_t)\hat{M}(dD_t) = I$.
The POVM formalism includes both projective measurements, in which case the $\hat{M}(dD_t)$ denote projection operators, as well as more
general measurements, involving, e.g., projective measurements on ancilla systems after they have interacted with the system. Between measurements, the system evolves subject to the Hamiltonian that we want to discriminate.
If the state is not renormalized after application of the POVM backaction operators, we retain the evolution of an unnormalized state $\tilde{\rho}^{(\D)}(t)$,
\begin{align}\label{eq:POVMunnormlized}
\tilde{\rho}^{(\D)}(t) \rightarrow \hat{M}(dD_t)\tilde{\rho}^{(\D)}(t)\hat{M}^\dagger(dD_t),
\end{align}
whose reduction in norm is just the probability to obtain the signal $dD_t$. This implies that at the final time $T$, the probability $P(D_T) = P(dD_T)\cdots P(dD_{2dt})P(dD_{dt})P(dD_0)$
for the full signal $D_T$ is given by the trace of $\tilde{\rho}^{(\D)}(T)$. Hence, by evolving the unnormalized state under each of the two candidate hypotheses conditioned on the signal \textit{actually} recorded in a given experiment, one may by \eqref{eq:BayesMonitor} obtain the relative likelihoods of each hypothesis as $P(h_i|D_t)\propto \mathrm{Tr}(\tilde{\rho}^{(\D)}_i(t))$. Since any specific trajectory for $D_t$ is very unlikely, $\mathrm{Tr}(\tilde{\rho}^{(\D)}_i(T))$ becomes very small even for the true hypothesis and for numerical purposes it is favorable to propagate instead the log-likelihood $\log[P(h_i|D_t)]$. See \cite{PhysRevA.87.032115} for a detailed account of Bayesian inference with continuously monitored quantum systems.
In the next subsection we specialize to cases, where the measurements are carried out continuously in time on the radiation field emitted by the quantum system of interest. The two generic setups of counting-type measurements with discrete detection events and diffusion-type measurements with continuous but infinitesimal backaction are discussed, and \eqref{eq:POVMunnormlized} is replaced by stochastic master equations, suitable for numerical propagation of $\tilde{\rho}^{(\D)}(t)$. For simplicity we assume that there is only a single decay channel but the expressions may readily be generalized to multi-channel cases and alternative environmental couplings.
\subsection{Photon counting and homodyne detection}
In \fref{fig:intro}(b), the fluorescence from the probe system is detected by a photon counter with quantum efficiency $0\leq\eta\leq1$ and the photon counting signal $N_t$ until time $t$ constitutes the detection record $D_t$.
During each short time interval $dt$ there are two possible detection outcomes: no photon $dN_t = 0$ or one photon $dN_t = 1$, where $P(dN_t = 1)= \eta\Tr{\Cd\C\rho^{(N_t)}(t)}dt$ is given by the (normalized) state $\rho^{(N_t)}(t)$ of the system. Here $\C=\sqrt{\gamma}|g\rangle \langle e|$ denotes the quantum jump operator from an excited $|e\rangle$ to a lower state $|g\rangle$.
The conditional evolution of the unnormalized state, in turn, obeys a linear stochastic master equation \cite{PhysRevLett.68.580,QMC},
\begin{align}\label{eq:meCount}
\begin{split}
d\tilde{\rho}^{(N_t)} =& \left(\mathcal{K}dt+ \mathcal{B}dN_t\right)\tilde{\rho}^{(N_t)},
\end{split}
\end{align}
where $\mathcal{K}\rho = -i[\H,\rho] +(1-\eta)\C\rho\Cd-\frac{1}{2}\{\Cd\C,\rho\}$ and $\mathcal{B}\rho = \eta\left(\C \rho\Cd-\rho\right)$.
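A sketch of how \eqref{eq:meCount} can be propagated numerically (NumPy, first-order Euler steps; helper names and the log-space bookkeeping are our own). The trace of the unnormalized state gives the likelihood of a click record up to a hypothesis-independent factor, which cancels in Bayes rule:

```python
import numpy as np

GAMMA = 1.0
g, e = np.array([1.0, 0.0]), np.array([0.0, 1.0])
C = np.sqrt(GAMMA) * np.outer(g, e)          # jump operator sqrt(gamma)|g><e|
SX = np.array([[0.0, 1.0], [1.0, 0.0]])

def K_op(rho, H, eta):
    """No-jump generator K of the linear stochastic master equation."""
    CdC = C.conj().T @ C
    return (-1j * (H @ rho - rho @ H)
            + (1.0 - eta) * C @ rho @ C.conj().T
            - 0.5 * (CdC @ rho + rho @ CdC))

def loglik_counting(record, omega, dt, eta=1.0):
    """log Tr(rho_tilde(T)) for a click record dN_t under H = (omega/2) sx,
    starting from the ground state."""
    H = 0.5 * omega * SX
    rho = np.outer(g, g).astype(complex)
    logL = 0.0
    for dN in record:
        rho = rho + K_op(rho, H, eta) * dt
        if dN:  # jump backaction: rho -> eta C rho Cdag + (1-eta) rho
            rho = eta * C @ rho @ C.conj().T + (1.0 - eta) * rho
        tr = np.trace(rho).real
        logL += np.log(tr)       # track the norm in log space ...
        rho /= tr                # ... and renormalize for stability
    return logL
```

A no-click record is certain for an undriven atom in the ground state, while for a driven atom its likelihood decays monotonically with the record length.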
As depicted in \fref{fig:intro}(c), a homodyne detector mixes the fluorescence with a strong local oscillator field on a beam splitter, and the signal $dY_t$ is obtained as the intensity difference between the two output ports.
Homodyne detection is sensitive to the phase of the emitted radiation which may be favorable when probing certain dynamics of the system.
The recorded signal $dY_t$ in each short time interval $dt$ has a mean value determined by the current state $\rho^{(Y_t)}(t)$ of the system,
\begin{align}\label{eq:dY}
dY_t = \Tr{\mathcal{X}_\Phi\rho^{(Y_t)}(t)}dt+dW_t,
\end{align}
with $\mathcal{X}_\Phi\rho =\sqrt{\eta}\left(\C \mathrm{e}^{-i\Phi}\rho+\rho\Cd \mathrm{e}^{i\Phi}\right)$ where $\Phi$ is the phase of the local oscillator.
Random, white-noise fluctuations around the mean are represented by infinitesimal Wiener increments, which are uncorrelated, normally distributed stochastic elements with zero mean and variance $dt$.
Since the signal depends only weakly on the state of the system, the backaction associated with homodyne detection is infinitesimal and \eqref{eq:POVMunnormlized} is equivalent to a diffusion type linear stochastic master equation for the conditional evolution of the unnormalized state \cite{QMC},
\begin{align}\label{eq:meHomo}
d\tilde{\rho}^{(Y_t)}= \left(\mathcal{L} dt+\mathcal{\mathcal{X}}_\Phi dY_t \right) \tilde{\rho}^{(Y_t)},
\end{align}
where $\mathcal{L}\rho = -i[\H,\rho] +\C\rho\Cd -\frac{1}{2}\{\Cd\C,\rho\}$.
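The full inference loop of \fref{fig:intro}(c) can be sketched as follows (NumPy; equal priors, a fixed local oscillator phase and first-order Euler integration are our own simplifying assumptions). A record is generated under the true Rabi frequency and the linear equation \eqref{eq:meHomo} is propagated for each candidate:

```python
import numpy as np

GAMMA = 1.0
g = np.array([1.0, 0.0])
e = np.array([0.0, 1.0])
C = np.sqrt(GAMMA) * np.outer(g, e)        # sqrt(gamma)|g><e|
SX = np.array([[0.0, 1.0], [1.0, 0.0]])

def lindblad(rho, H):
    """Unconditional Lindblad generator L."""
    CdC = C.conj().T @ C
    return (-1j * (H @ rho - rho @ H) + C @ rho @ C.conj().T
            - 0.5 * (CdC @ rho + rho @ CdC))

def x_phi(rho, eta=1.0, phi=-np.pi / 2):
    """Measurement superoperator X_Phi."""
    return np.sqrt(eta) * (np.exp(-1j * phi) * C @ rho
                           + np.exp(1j * phi) * rho @ C.conj().T)

def simulate_and_infer(omega_true, omegas, T=1.0, dt=1e-3, seed=1):
    """Generate a homodyne record under omega_true, propagate the linear
    SME for each candidate Rabi frequency, and return the posteriors."""
    rng = np.random.default_rng(seed)
    rho_true = np.outer(g, g).astype(complex)
    cands = [np.outer(g, g).astype(complex) for _ in omegas]
    hams = [0.5 * om * SX for om in omegas]
    H_true = 0.5 * omega_true * SX
    logL = np.zeros(len(omegas))
    for _ in range(int(T / dt)):
        # signal: mean set by the (normalized) true state plus Wiener noise
        dY = np.trace(x_phi(rho_true)).real * dt + rng.normal(0.0, np.sqrt(dt))
        rho_true += lindblad(rho_true, H_true) * dt + x_phi(rho_true) * dY
        rho_true /= np.trace(rho_true).real
        for i, H in enumerate(hams):
            cands[i] += lindblad(cands[i], H) * dt + x_phi(cands[i]) * dY
            tr = np.trace(cands[i]).real
            logL[i] += np.log(tr)      # likelihood up to a common factor
            cands[i] /= tr
    w = np.exp(logL - logL.max())
    return w / w.sum()
```

Two identical candidates remain at their priors for any record, while distinct candidates yield a proper posterior over the hypotheses.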
Upon acquiring a measurement signal,
the relevant stochastic master equation, (\ref{eq:meCount}) or (\ref{eq:meHomo}), may be solved for each hypothesis.
The corresponding candidate states are all initialized in the (known) initial state of the system, but normalized to the prior probabilities assigned to the particular hypothesis, $\mathrm{Tr}(\rho_i^{D_0}(t=0)) = P(h_i)$. This way the evolving likelihood distribution over the possible hypotheses is directly given by the traces of the corresponding conditioned density matrices, $\tilde{\rho}_0^{D_t}(t),\ \tilde{\rho}_1^{D_t}(t)$.
To illustrate the Bayesian inference protocol, we simulate in \fref{fig:homodyneCountingComparison} perfect monitoring of a two-level system with the purpose of discriminating two hypotheses for the resonant driving with a Rabi frequency of either
$\Omega_0$ or $\Omega_1$. I.e., we test the two Hamiltonian hypotheses:
$
\H_0 = \frac{\hbar\Omega_0}{2}\sx
$
and
$
\H_1 = \frac{\hbar\Omega_1}{2}\sx
$.
The signals, $dN_t$ from photon counting and $dY_t$ from homodyne detection, in the upper panels of (a) and (b) are sampled from the true hypothesis which we assume to be $h_0$. Conditioned on these signals, the (unnormalized) candidate states $\tilde{\rho}^{(\D)}_i(t)$ with $D_t=N_t,Y_t$ evolve according to Eqs.~(\ref{eq:meCount})~and~(\ref{eq:meHomo}), respectively. Their traces and the condition $P(h_0|D_t)+P(h_1|D_t)=1$ yield the time evolution of the inferred probabilities for each hypothesis as shown in the lower panels of (a) and (b). We assume equal priors $P(h_0)= P(h_1) = 1/2$.
\begin{figure}
\includegraphics[trim=0 0 0 0,width=0.95\columnwidth,left]{Pi_conditional.pdf}
\caption{
Simulated monitoring of a driven two-level system by (a) photon counting and (b) homodyne detection with the purpose of discriminating two hypotheses $h_0$ ($\Omega_0 = 2\gamma$) and $h_1$ ($\Omega_1 = 4\gamma$) for the Rabi frequency.
The simulations are made assuming $h_0$ to be the true hypothesis and with a detector efficiency $\eta=1$ and in (b) a local oscillator phase $\Phi = -\pi/2$.
The second and fourth panels show the evolution of the probabilities (\ref{eq:BayesMonitor}) for the two hypotheses conditioned on (a) the photon counting signal and (b) the noisy homodyne current shown in the first and third panels.
The lower panel (c) shows the $z$-component $z_{\hat{A}_t^{D_t}} \propto \tr{\sz\hat{A}_t^{D_t}}$ of the optimal Pauli measurement observable (clarified in the main text) if monitoring is stopped at any given time. We observe that this optimal system measurement differs for the three cases of counting, homodyne detection and unobserved, dissipative emitter dynamics.
}
\label{fig:homodyneCountingComparison}
\end{figure}
For photon counting in (a) the probability updates are dominated by three photodetection events, while periods with no detections lead to a less
The noisy homodyne signal in (b), on the other hand, holds only very little information in each individual time-bin and here the probabilities continuously converge to reveal the true hypothesis.
In both cases, at the final time $t=5\gamma^{-1}$ the accumulated signals are seen to favor the true hypothesis ($h_0$) with almost unit probability.
A figure of merit for a particular measurement strategy is the speed at which we arrive at perfect distinction.
\subsection{Supplementing continuous monitoring by a projective measurement}
If the hypotheses are not sufficiently discriminated at the end of the probing at time $T$, it may be possible to extract further information by a direct measurement on the emitter system. Due to the continuous monitoring, the emitter is assigned the conditional candidate states $\rho^{(D_T)}_i(T)$, while the probabilities that we ascribe to these states, $P(h_i|D_T)$, are given by the traces of the unnormalized density matrices.
The optimal projective measurement we can perform on the system then concerns the system observable $\hat{A}^{D_T}_T$ defined in \eqref{eq:PiConditional}.
For a two-level system, the projective measurement of any observable $\hat{A}$ is equivalent to the measurement of a Pauli spin component along a specific unit vector $(x_A,y_A,z_A)$ with $u_A\propto \tr{\hat{\sigma}_u A}$.
In \fref{fig:homodyneCountingComparison}(c) we visualize the optimal observable $\hat{A}^{D_T}_T$ if the continuous monitoring, yielding the signals in the upper panels of (a) and (b), is terminated at the corresponding point in time. In this example the unit vector, designating the direction of the spin measurement, is confined to the $(y,z)$-plane, and we show its $z$-component.
During each experimental realization, $\hat{A}_t^{D_t}$ assumes a stochastic value, which is different from the one that optimally discriminates the states of an unobserved system governed by the corresponding Lindblad master equation $d\rho/dt = \mathcal{L}\rho$.
With homodyne detection the measurement observable, represented by the blue noisy trace in \fref{fig:homodyneCountingComparison}(c), is seen to fluctuate around the full, yellow curve pertaining to the unmonitored system, while with photon counting, large deviations accompany the quantum jumps of the system state.
The possible eigenvalues $\lambda$ of the measurement observable $\hat{A}^{D_T}_T$ occur under hypothesis $h_i$ with probability $P(\lambda|h_i)=\Tr{\Pi_\lambda \rho_i(t)}$, where $\Pi_\lambda$ is the projector onto the associated eigenstate of the operator $\hat{A}^{D_T}_T$.
According to Bayes rule the combined information from the monitoring and from the system projection hence leads to an update of the probabilities assigned to each hypothesis
\begin{align}\label{eq:assignProb}
P(h_i|D_T,\lambda) = \frac{P(\lambda|h_i)P(h_i|D_T)}{P(\lambda)}.
\end{align}
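As a minimal numerical sketch of this update (the probability values below are hypothetical, chosen only for illustration), one multiplies the monitoring-conditioned probabilities by the outcome likelihoods and renormalizes:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior P(h_i | D_T, lambda) from the monitoring-conditioned
    probabilities P(h_i | D_T) and the outcome likelihoods P(lambda | h_i).
    The denominator P(lambda) = sum_i P(lambda | h_i) P(h_i | D_T)."""
    post = np.asarray(likelihood, dtype=float) * np.asarray(prior, dtype=float)
    return post / post.sum()

# Monitoring favors h_0 and the projective outcome is also more likely
# under h_0, so the posterior sharpens towards h_0.
posterior = bayes_update(prior=[0.7, 0.3], likelihood=[0.9, 0.2])
```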
The hypothesis $h_m$ with the largest likelihood $P(h_m|D_T,\lambda)$ is the preferred one, and averaged over many independent realizations of the final projective measurement, the fraction of erroneous assignments based on that choice will be given by the generalization of \eqref{eq:QeMinPure} to mixed states,
\begin{align}\label{eq:QeMin}
Q_{\mathrm{e}} = \frac{1}{2}\left[1-\left|P(h_0|D_T)\rho^{(D_T)}_0(T) -P(h_1|D_T) \rho^{(D_T)}_1(T)\right|\right],
\end{align}
where $|O| \equiv \Tr{\sqrt{O^\dagger O}}$. To obtain the error probability of a given measurement scheme, we however still need to numerically evaluate the conditional states and probabilities and average \eqref{eq:QeMin} over the random outcomes of the continuous monitoring.
Note that \eqref{eq:QeMin} can also be applied to the distinction of mixed states or of the (unconditioned) candidate density matrices of a system evolving under different Hamiltonian hypotheses and leaking into an un-monitored environment. A recent comparison of probing by measurements on a system alone and on both a system and its environment shows the ability of the latter to better exploit (initial) entanglement among its sub-components \cite{albarelli2018restoring}.
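Numerically, the trace norm in \eqref{eq:QeMin} is conveniently evaluated from the eigenvalues of the Hermitian operator $P(h_0|D_T)\rho_0 - P(h_1|D_T)\rho_1$, since for a Hermitian argument the trace norm is the sum of the absolute eigenvalues. The sketch below (the helper name and the test states are ours, not the paper's) assumes density matrices given as NumPy arrays:

```python
import numpy as np

def helstrom_error(p0, rho0, p1, rho1):
    """Q_e = (1/2)[1 - |p0*rho0 - p1*rho1|], with |O| = Tr sqrt(O^dag O).
    The argument is Hermitian, so its trace norm equals the sum of the
    absolute values of its eigenvalues."""
    delta = p0 * np.asarray(rho0, dtype=complex) - p1 * np.asarray(rho1, dtype=complex)
    return 0.5 * (1.0 - np.sum(np.abs(np.linalg.eigvalsh(delta))))

ground = np.diag([0.0, 1.0])    # |g><g|
excited = np.diag([1.0, 0.0])   # |e><e|
```

Orthogonal candidates with equal priors give $Q_{\mathrm{e}}=0$, while identical candidates give the random-guessing value $Q_{\mathrm{e}}=1/2$.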
\subsection{The quantum bound}\label{sec:Qbound}
The minimum achievable error associated with any hypothetical detection of the radiation emitted by a system and a final detection on that system itself is determined by our ability to discriminate the pure states of the combined system and environment, resulting from the different Hamiltonian hypotheses. These (un-monitored) states are themselves intractable by numerical means, but if the Born-Markov approximation applies for the radiative emission process, their quantum overlap can be evaluated as the trace of an effective density matrix $\rho_{01}(t)$ acting only on the state space of the emitter system:
$\langle \psi_0(t)|\psi_1(t)\rangle = \Tr{\rho_{01}(t)}$.
This matrix evolves from the initial pure state of the system according
to the following master equation \cite{PhysRevLett.114.040401,kiilerich2018multi},
\begin{align}\label{eq:2sided}
\begin{split}
\frac{d\rho_{01}}{dt} = &-i\left(\H_0\rho_{01}-\rho_{01} \H_1\right)
\\ &+
\sum_j\left[\hat{c}_{0j}\rho_{01}\hat{c}_{1j}^\dagger-\frac{1}{2}\left(\hat{c}_{0j}^\dagger\hat{c}_{0j}\rho_{01}+\rho_{01}\hat{c}_{1j}^\dagger\hat{c}_{1j}\right)\right].
\end{split}
\end{align}
Note that the matrix evolves under the action from the left and right with the different candidate Hamiltonians and with different relaxation operators, $\hat{c}_{0j}$ and $\hat{c}_{1j}$, representing cases where the hypotheses concern the damping of the system.
Unlike the conventional Lindblad master equation, \eqref{eq:2sided} does not preserve the trace, and the overlap between candidates for the full system and environment quantum states attains non-trivial values, resulting in a time dependent value of $Q_{\mathrm{e}}^{(\text{min})}(t)$ as given in \eqref{eq:QeMinPure}.
This quantity represents a lower (quantum) bound on the probability of assigning a false hypothesis based on \textit{any} combined quantum measurement performed on the environment in the time interval $[0,t]$ and on the emitter system at the time $t$, corresponding to the situation depicted in \fref{fig:intro}(a). In the next section we compare the achievements of testing using continuous measurements and Bayesian discrimination with this minimum.
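To illustrate, \eqref{eq:2sided} is readily integrated for the driven two-level example. The sketch below uses simple Euler steps and assumes the conventions $\hat{H}_i = (\Omega_i/2)\hat{\sigma}_x$ and $\hat{c}_i = \sqrt{\gamma}\,\hat{\sigma}_-$, which are not restated in this section:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SM = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_-, with |e> = (1,0)

def overlap(omega0, omega1, gamma=1.0, T=5.0, dt=2e-4):
    """Tr rho_01(T) = <psi_0(T)|psi_1(T)>, from Euler integration of the
    two-sided master equation with H_i = (Omega_i/2) sigma_x and
    c = sqrt(gamma) sigma_-, starting from the ground state."""
    H0, H1 = 0.5 * omega0 * SX, 0.5 * omega1 * SX
    c = np.sqrt(gamma) * SM
    cd = c.conj().T
    rho = np.diag([0.0, 1.0]).astype(complex)    # |g><g|
    for _ in range(int(round(T / dt))):
        rho = rho + dt * (-1j * (H0 @ rho - rho @ H1)
                          + c @ rho @ cd
                          - 0.5 * (cd @ c @ rho + rho @ cd @ c))
    return np.trace(rho)

def q_min(ov):
    """Pure-state error bound for equal priors, given the overlap ov."""
    return 0.5 * (1.0 - np.sqrt(1.0 - abs(ov) ** 2))
```

For identical hypotheses the equation reduces to a trace-preserving Lindblad equation and the overlap stays at unity (so $Q_{\mathrm{e}}^{(\text{min})}=1/2$), while for $\Omega_0=0$, $\Omega_1=4\gamma$ the overlap has decayed substantially by $t=5\gamma^{-1}$.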
\section{Numerical investigations}\label{sec:3}
\subsection{Error probabilities under different detection models}
\label{sec:EP}
\begin{figure}
\includegraphics[trim=0 0 0 0,width=0.95\columnwidth,left]{homodyneCountingComparison.pdf}
\caption{Temporal evolution of the error probability in assigning one of two hypotheses $\Omega_0$ and $\Omega_1$ for the Rabi driving frequency of a two-level system.
The three plots correspond to different pairs of Rabi frequency candidates as annotated in the figure windows and the system is prepared in the ground state at $t=0$.
Results are shown for each of the different measurement schemes discussed in this paper. The error probabilities pertaining to monitoring protocols with perfect detection $\eta=1$ are sampled from $M=100{,}000$ simulations (see main text).
}
\label{fig:QeHomodyneCounting}
\end{figure}
To address the performance of the different monitoring schemes, we turn to the associated error probability $Q_{\mathrm{e}}$. We consider both the case where the probability update is based solely on the detection signal, \eqref{eq:BayesMonitor}, and the case where the signal is combined with a final optimized projective measurement on the system, \eqref{eq:assignProb}.
The probabilities pertain to the average over many independent experimental realizations. However, they are non-linear functionals of the conditional states, so there is no deterministic theory that allows their evaluation.
Instead, we resort to performing a large number $M$ of simulations of the full measurement sequence and Bayesian inference.
We repeated the simulations assuming each of the two hypotheses $h_0$ and $h_1$ to be true.
In testing based on the detection signal $D_T$ alone, hypothesis $h_i$ is assigned if
$P(h_i|D_T)>1/2$.
The probability in \eqref{eq:Qe} to discard a true hypothesis $h_j$ is then estimated by
$
P(m=i|h_j) = n_{i}^{(j)}/M,
$
where $n_{i}^{(j)}$ is the number of samples assigning $h_i$ when $h_j$ is true.
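A compact sketch of this estimator, assuming equal prior probabilities for the two hypotheses (the posterior samples below are hypothetical toy values):

```python
import numpy as np

def empirical_error(post_h0_runs, post_h1_runs):
    """Sampled error probability Q_e ~ (1/2)[n_1^(0)/M_0 + n_0^(1)/M_1],
    where n_i^(j) counts runs assigning h_i while h_j was true.
    Inputs: final posteriors P(h_0 | D_T) from runs simulated with h_0
    true and with h_1 true, respectively."""
    p0 = np.asarray(post_h0_runs, dtype=float)
    p1 = np.asarray(post_h1_runs, dtype=float)
    n10 = np.count_nonzero(p0 <= 0.5)   # h_1 assigned although h_0 true
    n01 = np.count_nonzero(p1 > 0.5)    # h_0 assigned although h_1 true
    return 0.5 * (n10 / p0.size + n01 / p1.size)

qe = empirical_error([0.9, 0.8, 0.4, 0.95], [0.1, 0.6, 0.2, 0.05])
```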
When a final system projection with outcome $\lambda$ is included in the procedure, the assignment is dictated by $P(h_i|D_T,\lambda)>1/2$ and the error probability is given directly by \eqref{eq:QeMin}.
The resulting error probabilities for our two-level example are compared to the quantum bound and to that of a projective measurement on the open system alone in \fref{fig:QeHomodyneCounting}.
Curves are shown for three pairs of Rabi frequency candidates. They are all separated by $\Omega_1-\Omega_0 = 4\gamma$, and therefore the error probabilities have the same quantum lower bound \cite{PhysRevLett.114.040401}, but their particular offsets make either counting or homodyne detection more advantageous.
All protocols yield larger error probabilities than the quantum bound. This means that none of the measurement strategies are optimal in the sense that they are able to extract all information from the full state of the system and its environment.
A photon counting signal is sensitive to the intensity of the emitted radiation and hence reflects the excitation of the two-level system. As seen in (a) this makes it near ideal to distinguish $\Omega_0 = 0$, which leads to no photon emissions, from a strong drive $\Omega_1=4\gamma$.
The counting signal alone generates a much smaller error probability than the homodyne signal and approaches zero on a timescale similar to that of the quantum bound. When combining the counting signal with a final system projection, the error probability follows the quantum bound closely at short times and it shows that we may at specific finite probing times distinguish the hypotheses with certainty. These are points in time where the non-zero Rabi frequency $\Omega_1$ assures an atomic or a photonic excitation.
The photon count is, however, insensitive to the phase of the emitted radiation and to the coherences in the two-level system. As a consequence, the two candidates $\Omega_0=-2\gamma$ and $\Omega_1 = 2\gamma$ in (b) cannot be discriminated by the photon counting signal alone; i.e., $Q_{\mathrm{e}}(t)=1/2$ for all times. Homodyne detection is, on the other hand, highly sensitive to the phase of the emitted radiation and, when combined with a final system projection, the associated error matches the quantum bound for $\gamma t\lesssim 1.5$, after which it remains close to the bound.
Since for the case studied in (b) the photon count alone holds no discriminatory power, one might expect
that supplementing a counting signal with a final system projection yields
an error probability identical to that pertaining to a projective measurement on the mixed state of an unmonitored system. Nevertheless, it is seen that for $\gamma t\gtrsim 1.75$, counting the photo emissions reduces the final error probability by around $10\%$.
This illustrates an additional advantage of monitoring the environment. Subject to backaction, the system state remains pure and experiences a transient behavior which generally depends more strongly on the particular hypothesis than the mixed state of the unmonitored system. This allows more information to be extracted from the final system measurement. Previous works identify similar mechanisms at play in parameter estimation with monitored systems \cite{PhysRevA.87.032115,PhysRevA.89.052110,PhysRevA.91.012119,PhysRevA.94.032103,albarelli2018restoring}.
The candidate values $\Omega_0=2\gamma,\ \Omega_1=6\gamma$ in (c) can be distinguished both by the excitation and the coherence of the system. It is evident that while homodyne detection is slightly better than counting for these particular values, they both perform well and reach within $5-10\%$ of the quantum bound.
\subsection{Finite detector efficiency}
While the simulations in Figures~\ref{fig:homodyneCountingComparison}~and~\ref{fig:QeHomodyneCounting} assume perfect monitoring, any real experiment suffers from finite detection efficiency $\eta<1$.
If the environment is monitored with perfect efficiency $\eta=1$, the system state remains pure but if, e.g., some photo emissions are missed by the detector we are unable to perfectly track the state of the system and the conditional state $\rho^{(\D)}(t)$ evolves to a statistical mixture.
Consequently, in addition to the direct decrease in information available from the monitoring signal, the final system measurement is performed on a mixed state with, in general, less discriminatory power.
\begin{figure}
\includegraphics[trim=0 0 0 0,width=1\columnwidth,left]{etaDependenceDone.pdf}
\caption{
Temporal evolution of the error probability in assigning one of two hypotheses $\Omega_0$ and $\Omega_1$ for the Rabi driving frequency of a two-level system.
The candidate values are annotated in the figure windows and the system is prepared in the ground state at $t=0$.
The full, blue curves, concerning monitoring by photo detection (a) and by homodyne detection (b) combined with a final system projection, are sampled from $M=100{,}000$ simulations (see main text) with different values of the detection efficiency $\eta$ as indicated on the right hand side of each plot.
For comparison, we show also the quantum bound (dotted curve) and error probability associated with a projective measurement on an open system (dashed, red curve).
}
\label{fig:etaDependence}
\end{figure}
To probe these effects, we show in \fref{fig:etaDependence} the (sampled) error probability for different values of $\eta$. For (a) photon counting we focus on the candidates $\Omega_0=0,\, \Omega_1 = 4\gamma$ and for (b) homodyne detection $\Omega_0=-2\gamma,\, \Omega_1 = 2\gamma$ where each of the two methods work particularly well.
As $\eta$ decreases, the error probability $Q_{\mathrm{e}}(t)$ undergoes a smooth transition from the perfect detection case studied in \fref{fig:QeHomodyneCounting} to the case of a projective measurement performed on the mixed state of the system alone
in the limit $\eta\rightarrow 0$.
For the parameters used in this example, the photon counting protocol in (a) is surprisingly robust to detector imperfections. This is due to the fact that, as explained in Section~\ref{sec:EP}, even a single photo detection completely rules out the hypothesis $\Omega_0=0$.
While the homodyne example in (b) shows a more linear increase in the error probability as the detector efficiency deteriorates, both plots demonstrate that even with fairly large imperfections, monitoring the environment substantially
improves the hypothesis testing capabilities of an open quantum system.
This is due to the fact that the monitoring induces transient evolution in the system which depends more strongly on the system parameters than the steady state.
\section{Conclusion and outlook}
\label{sec:4}
We have investigated how hypothesis testing with an open quantum system may be improved by monitoring the radiative environment to which it is coupled.
We propose to supplement the information retrieved directly from the monitoring signal with a final system measurement optimized according to the conditional state.
For reasons of clarity, we restricted our attention to just two distinct hypotheses, but the Bayesian analysis is readily generalized to cases with multiple candidates and in Ref.~\cite{kiilerich2018multi} we present an efficient numerical approach to evaluate the quantum bound and define the optimal system projection when multiple hypotheses are in play.
It was found that, while monitoring by a photon counter or a homodyne demodulator allows the extraction of much of the information leaked from the open system into the field, the error probability in these schemes does not reach the fundamental quantum bound.
As explained in the introductory section~\ref{sec:1}, this is not surprising since generally the optimal measurement is highly non-local on the full system and environment.
\begin{figure}
\includegraphics[trim=0 0 0 0,width=0.95\columnwidth]{hybridMeasurement.pdf}
\caption{
(a)
A fraction $\beta$ of the radiation emitted by a probe system is collected by a homodyne demodulator while the remaining fraction $1-\beta$ is directed to a photon counter. The system state, which defines the optimal system projection to perform at the final time $T$, is conditioned on both the photon count and the homodyne signal.
(b) Temporal evolution of the error probability in assigning one of three hypotheses $\Omega=0,\pm 2 \gamma$ for the Rabi frequency of a driven two-level system based on the two monitoring signals, $N_t$ and $Y_t$ of the hybrid monitoring scheme in (a). The cases of pure counting ($\beta=0$) and pure homodyne detection ($\beta=1$) are compared to different hybrid schemes with $0 < \beta < 1$. Pure homodyne detection is only optimal for times $\gamma t\lesssim 2.3$ (shaded area).
The error probabilities are sampled from $M=100{,}000$ simulations.
}
\label{fig:hybrid}
\end{figure}
From the results presented in \fref{fig:QeHomodyneCounting}, it is clear that homodyne detection and photon counting yield different reductions in the error probability at different stages in the evolution. That is, at some points in time either homodyne detection or photon counting is more efficient than the other.
To allow both possibilities in a single experiment, the setup illustrated in \fref{fig:hybrid} splits the
radiation emitted by the system such that a fraction $1-\beta$ is monitored by a photon counter and the remaining $\beta$ fraction is subject to homodyne detection.
The conditional, unnormalized state $\tilde{\rho}^{(N_t,Y_t)}(t)$ then evolves according to both monitoring signals,
\begin{align}\label{eq:meCountHom}
\begin{split}
d\tilde{\rho}^{(N_t,Y_t)} &= \Big(\left[(1-\beta)\mathcal{K}+\beta\mathcal{L}\right]dt
\\
&+(1-\beta)\mathcal{B}dN_t
+\sqrt{\beta}\mathcal{X}_\Phi dY_t
\Big)\tilde{\rho}^{(N_t,Y_t)}.
\end{split}
\end{align}
A similar scheme employs the homodyne setup of \fref{fig:intro}(c), but with a local oscillator of variable strength $\xi$ \cite{zhang2012mapping}.
Conventional homodyne detection is realized in the limit of large $\xi$, while with a weak local oscillator the setup effectively counts photons.
The significance of such \textit{hybrid} schemes is more apparent in scenarios with multiple distinct hypotheses, and in \fref{fig:hybrid}(b) we illustrate this by considering the differentiation of three discrete values $\Omega=0,\pm 2\gamma$ of the Rabi frequency in our two-level model. For the sake of argument, we consider only monitoring without a final system projection.
As discussed in Section~\ref{sec:EP}, pure photo detection $(\beta=0)$ is only sensitive to the absolute value of $\Omega$, and hence the error probability never reaches values lower than $Q_\mathrm{e}=1/3$, signifying perfect discrimination between $\Omega=0$ and the values $\pm 2 \gamma$ which are, on the contrary, indistinguishable.
When even a small fraction $\beta>0$ of the intensity of the emission signal is monitored by a homodyne demodulator, however, the combined signal is able to perfectly distinguish the three hypotheses if sufficient time is allotted.
Interestingly, while pure homodyne detection ($\beta=1$) is optimal for times $\gamma t\leq 2.3$ (shaded area), hybrid schemes with $0<\beta<0.9$ converge faster to perfect discrimination because a photon counting signal very efficiently discriminates $\Omega=0$ from any non-zero values.
Notice, finally, the large reduction in the error probability from the $\beta=0$ to the $ \beta=0.01$ case. This is because just $1\%$ of the intensity amounts to $10\%$ of the amplitude, which is the relevant observable in homodyne detection, and leaves the counting signal virtually unaltered.
By using a beamsplitter with a tunable transmittance $\beta(t)$ or by adjusting the local oscillator strength $\xi(t)$, the effective monitoring scheme can be updated in a time dependent manner in order to further optimize the information extracted at each point in time.
Such a task may be guided by intuition or achieved by numerical optimal control based on the formalism presented in this article.
\section{Acknowledgements}
The authors would like to thank Peng Xu for helpful discussions and acknowledge financial support from the Villum Foundation.
A.\,H.\,K. further acknowledges financial support from the Danish Ministry of Higher Education and Science.
\section{Introduction and Overview}
In this contribution I review the properties of the
hot, X-ray emitting gas in elliptical galaxies. The
investigation of elliptical galaxies using X-ray observations is a
less mature and more volatile field than its radio and optical counterparts;
however, X-ray studies provide unique and complementary insights
into the nature of these systems. I concentrate on
two topics of relevance to the subject of this conference,
star formation in early-type galaxies.
The first topic is the nature of the X-ray emission,
focusing on the hot gas metallicity.
The mass,
distribution, and relative abundances of metals
constrain the enrichment and, therefore, the star formation
history of these galaxies.
The complicated nature of optical abundance
studies emphasizes the value of the complementary
method of X-ray spectroscopy of hot
gas that originates as stellar mass loss.
In this portion of the review,
I explain and evaluate the ``standard''
model for the X-ray emission from elliptical galaxies, and review
the low abundances in elliptical galaxy hot interstellar media
derived using such models.
I also compare X-ray and optical
abundances, summarize measurements of the Si-to-Fe ratio in the
hot gas,
and discuss some of the implications of the observed abundances.
Many of the results I discuss here are based on the work of
Kyoko Matsushita (Tokyo Metropolitan University),
Hironori Matsumoto (Kyoto University) and, especially,
Richard Mushotzky (NASA/GSFC).
The second part of this review is a summary
of a recently
completed project, in collaboration with Ray White
(University of Alabama), on
the existence and
properties of dark matter halos in the population of bright
elliptical galaxies.
Dark matter is a determining factor in the
feedback processes that occur during the star formation epoch
and are responsible for such correlations as the color-magnitude
relation.
I review our modeling methods and
assumptions and summarize the following
results: (a) a demonstration of the
ubiquitousness of dark matter in ellipticals, (b) how the dark halo properties
must scale with optical properties to
match the observed X-ray/optical correlations, and (c)
implications for galaxy formation and cosmology.
\section{X-ray Emission from Elliptical Galaxies -- General Considerations}
Surprisingly -- since it was expected that
galactic winds would drive out most gas (Mathews \& Baker 1971) --
the {\it Einstein} Observatory
discovered that many elliptical galaxies are bright in X-rays, with
luminosities up to $\sim 10^{42}$ erg s$^{-1}$
(Fabbiano 1989). The X-ray luminosity is a
steep function of optical luminosity ($L_X\propto L_{opt}^{\sim 2.5}$), and
X-ray studies are highly biased towards the optically brightest
systems, many of which are in the Virgo cluster.
In the brightest
systems, the
emission is dominated by $\sim 10^7$ K gas, and the gas mass within
the optical galaxy can be as high as a few percent that of the
stars -- much less than the stellar mass loss rate
integrated over a Hubble time.
The gross
properties of the hot gas are well explained by
hydrodynamical models where gas is heated by stellar
motion-induced shocks and (possibly) Type Ia supernovae (SNIa), and
settles into hydrostatic equilibrium in a gravitational potential that
includes dark matter (Loewenstein \& Mathews 1987).
Some elliptical galaxies have
very extended ($>100$ kpc in radius) X-ray coronae.
This is not surprising -- a galaxy of $10^{11}$ L$_{\odot}$,
stellar $M/L=10$, and baryon
fraction 5\% has a virial radius of $\sim 500$ kpc.
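This back-of-envelope number can be checked by defining the virial radius through a mean overdensity $\Delta\simeq 200$ relative to the critical density (an assumption on our part, since no definition is specified here), with $H_0=65$ km s$^{-1}$ Mpc$^{-1}$ as adopted later in this review:

```python
import math

G    = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30         # kg
MPC  = 3.086e22         # m
H0   = 65e3 / MPC       # s^-1, i.e. H_0 = 65 km/s/Mpc

def virial_radius_kpc(L_B=1e11, ml=10.0, f_b=0.05, delta=200.0):
    """Radius enclosing a mean density delta * rho_crit for a halo of
    total mass M = (M/L) * L_B / f_baryon (stars taken to be the only
    baryons, for simplicity)."""
    M = L_B * ml * MSUN / f_b
    rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * G)
    r = (3.0 * M / (4.0 * math.pi * delta * rho_crit)) ** (1.0 / 3.0)
    return r / (MPC / 1000.0)

r_vir = virial_radius_kpc()   # of order the ~500 kpc quoted above
```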
It is not clear
whether the gas in these
extended halos represents a primordial baryon
reservoir or was ejected from
the galaxy during an early star forming epoch, nor is it understood why
other galaxies have compact X-ray halos.
The study of the X-ray emission from elliptical galaxies has greatly
intensified this decade due to the superior spectral and
imaging capabilities of the
{\it ROSAT} and {\it ASCA} observatories. This has led to tremendous
improvements in the accuracy and extent of derived gas density and
temperature profiles, as well as the first significant sample of
accurate gas abundances.
These new observations are the foundation of the insights into
elliptical galaxy structure and evolution that I now discuss.
\section{The ``Standard'' Model}
{\it ASCA} spectra can generally be decomposed into soft and hard
components (Matsumoto {\it et al.} 1997, Matsushita 1997).
The soft component originates in the hot ($0.3$--$1.2\times 10^7$ K) ISM, and
shows a wide range of X-ray-to-optical flux ratios and X-ray extents for
any optical luminosity.
The hard component is roughly co-spatial with the optical galaxy, and
scales linearly
with optical luminosity with a relative
normalization and spectrum
consistent with measurements of
the integrated emission from low mass X-ray binaries in spiral galaxy
bulges (although some galaxies appear to have enhanced
hard emission from a spatially unresolved nucleus).
I will refer to this two-component model as the ``standard'' model, since
it is the simplest model
that describes the data.
Since
abundance uncertainties become large as the hard component begins
to dominate
and emission line equivalent widths
are diluted, accurate abundances are
derivable only in
gas-rich ellipticals, of which there
are about 20 in
the {\it ASCA} archive. This includes galaxies with both extended and
compact X-ray morphologies.
\section{Hot Gas Abundances in the Standard Model}
Figure 1 shows a plot of abundance versus temperature
derived from {\it ASCA} spectra
extracted from the inner $5R_e$
using the standard model.
The soft component is modeled using the Raymond-Smith thermal
plasma emission code
with abundances fixed at their solar photospheric ratios
(abundance of Fe relative to H
$4.68\times 10^{-5}$ by number). The abundances --
essentially the Fe abundance as X-ray spectra
at these temperatures are dominated by Fe L emission lines --
range from about 0.1 to 0.7 solar. Since it is
usually assumed that abundances of the mass-losing stars that
are the origin of the hot gas are supersolar, these may seem surprisingly
low. Moreover, Type Ia supernovae exploding at a
rate $R_{SNIa}$ SNU (1 SNU $=1$ SN per $10^{10}$
L$_{B_\odot}$ per 100 yr)
should further enrich the hot gas by
an additional $\sim 25R_{SNIa}$ solar -- or $\sim 2.5$ times solar
using the rate from Cappellaro et al. (1997) for
$H_0=65$ km s$^{-1}$ Mpc$^{-1}$. Thus,
X-ray abundances of elliptical galaxies are more than a factor of
$5$ below what might naively be expected.
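The $\sim 25R_{SNIa}$ scaling can be recovered with standard back-of-envelope numbers that are not stated in the text and are therefore assumptions here: an Fe yield of $0.7\,M_\odot$ per SNIa, a stellar mass-loss rate of $1.5\times 10^{-11}\,M_\odot$ yr$^{-1}$ $L_{B\odot}^{-1}$, and the solar Fe mass fraction implied by the number abundance quoted above (hydrogen mass fraction $X\simeq 0.71$):

```python
def snia_enrichment_solar(R_snu=1.0):
    """Fe enrichment (in solar units) of gas supplied by stellar mass
    loss and polluted by SNIa at a rate of R_snu SNU, where
    1 SNU = 1e-12 SN / yr / L_B,sun."""
    fe_rate  = 0.7 * 1.0e-12 * R_snu     # Msun of Fe / yr / L_B,sun
    gas_rate = 1.5e-11                   # Msun of gas / yr / L_B,sun
    z_fe     = fe_rate / gas_rate        # added Fe mass fraction
    z_fe_sun = 4.68e-5 * 55.85 * 0.71    # n_Fe/n_H -> mass fraction
    return z_fe / z_fe_sun

boost = snia_enrichment_solar()          # ~25 per SNU, as quoted
```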
\begin{figure}[htbp]
\centerline{
\psfig{file=loewensteinm1.eps,width=4.0in,height=3.2in,angle=-90,clip=}}
\caption{Hot gas metal abundance, assuming solar photospheric
ratios, versus temperature --
mostly adapted from Matsushita (1997).}
\end{figure}
\section{X-ray/Optical Metallicity Comparison Revisited}
\subsection{X-ray Advantages and Disadvantages}
Physical quantities such as hot gas abundances and temperatures are
derived from model fitting of X-ray spectra.
For abundance studies,
a great advantage of X-ray spectroscopy is that --
given sufficient signal-to-noise, spectral resolution, bandpass,
and knowledge of the
important atomic transitions -- emission line strengths provide
{\it direct} measurements of elemental abundances. This
is not the case for optical abundances
(``the intensity of $Mg_2$ does not simply correlate with the abundance
of Mg''; Tantalo, Bressan, \& Chiosi 1998).
{\it ASCA} has those qualities required for
abundance determinations of the X-ray brightest elliptical galaxies.
However, there are limitations. The bandpass
and energy resolution are insufficient
to obtain
clean measurements of many elements;
the atomic parameters for some
of the prominent Fe emission features are uncertain; spectra can be
complicated by multiple components not spatially separable due
to the limited {\it ASCA} angular resolution.
``Contamination''
by SNIa explosions or accretion of intergalactic gas
may complicate the comparison with stellar abundances. However, the
low measured hot gas Fe abundances argue against the former, while the
lack of an anti-correlation between hot gas metallicity and X-ray luminosity
apparently rules out the latter.
One can obtain fairly accurate Fe abundances for 20 or
so galaxies, Si abundances for about half that number, and
occasionally some constraints on O, Mg, and S.
A comparison with the optical results
seems to be meaningful if we focus on the X-ray emission from the
optical region of the galaxy.
\subsection{Is There Evidence of an X-ray/Optical Discrepancy?}
Measurements of the nuclear $Mg_2$ index imply that
elliptical galaxies have supersolar abundances only if one assumes
(a) that metallicities are constant with
radius, (b) that abundances have
solar ratios, and (c) that
all stars were formed
a Hubble time ago.
For a meaningful
comparison with the X-ray abundances, one
needs to estimate a globally averaged stellar Fe abundance. Negative abundance
gradients indicate that the average
metallicity is typically a factor of two below the central value
(Arimoto et al. 1997), and Fe is
underabundant, typically by an additional factor of two,
relative to Mg (Worthey, Faber, \& Gonzalez 1992).
These factors bring the optical and
X-ray Fe abundances into fair agreement. This is illustrated by
the {\it ASCA} Fe abundance profile for NGC 4636
(Mushotzky et al. 1994, Matsushita et al. 1998) shown
in Figure 2, where a comparison is made
with the extrapolated optical $Mg_2$ profile
(Davies, Sadler, \& Peletier 1993) converted to [Fe/H]
assuming three separate values of [Mg/Fe]
(Matteucci, Ponzone, \& Gibson 1998). There is no clear
discrepancy once the effects of gradients and non-solar abundance ratios
are properly accounted for.
\begin{figure}[htb]
\begin{minipage}[t]{65mm}
\centerline{
\psfig{file=loewensteinm2.eps,width=2.5in,height=2.5in,clip=}}
\caption{Hot ISM Fe abundance
compared with the extrapolated optical estimates with
Mg/Fe $=1$ (solid curve), 2 (dotted curve),
and 3 (dashed curve) times solar.}
\end{minipage}
\hspace{\fill}
\begin{minipage}[t]{65mm}
\centerline{
\psfig{file=loewensteinm3.eps,width=2.5in,height=2.5in,clip=}}
\caption{X-ray versus optical global Fe abundance.}
\end{minipage}
\end{figure}
Additional complications have emerged from
recent work in this field.
Balmer emission line measurements indicate that
many elliptical galaxies have undergone star formation relatively
recently, compromising the simple conversion from
a single line index to metallicity.
Scott Trager has kindly provided me with {\it very preliminary}
estimates, based on work underway in collaboration with
J. Gonzalez, S. Faber, and D. Burstein
that accounts for the effects of differences in
stellar population and
non-solar abundance ratios,
of the Fe abundance at the half-light radius ($R_e$)
that I have converted to a global average. There is an overlap of
eight galaxies with the gas-rich {\it ASCA} sample, and
there is generally rough consistency (Figure 3).
The (unweighted) average optical
Fe abundance is $\sim 0.45$ solar compared to
$\sim 0.3$ solar for the hot gas, with
the offset dominated by two extreme gas-underabundant systems.
The ``optical/X-ray abundance discrepancy'' is
primarily an artifact of abundance gradients
and non-solar abundance ratios, greatly diminished
once the proper comparison --
of average measurements of
the same element over the same aperture -- is made.
\subsection{Is the Standard Model Correct?}
The fair
consistency of optical and X-ray abundance determinations mitigates one
of the primary objections to the
adequacy of the simple two-component ``standard''
model.
The effect of inaccuracies in our knowledge of Fe L atomic physics
(Arimoto et al. 1997) has been shown
to be no more than 20--30\%
(Hwang et al. 1997, Buote and Fabian 1998a).
A more formidable alternative to the standard model
has been constructed by Buote and Fabian (1998b).
They found
that the best fit to {\it ASCA} data often consists of a two-temperature
plasma, with the
secondary component having a temperature of 1.5--2 keV,
and that the abundances in such fits are systematically higher by
about a factor of two compared
to the standard model. However, we have found
that He-like to H-like Si
line ratios are in precise agreement with the predictions of
the standard model (Figure 4; Mushotzky \& Loewenstein 1998),
although {\it ASCA} spectra are not of
sufficient quality to generally and unambiguously
rule out the Buote and Fabian two-phase model.
A final argument for the correctness of the standard model comes from
the demonstration that, in NGC 4649 and NGC 4472, the mass profile
obtained from the hot gas temperature in single phase fits is in perfect
accord with the mass determined optically (Brighenti \& Mathews 1997).
\begin{figure}[htbp]
\centerline{
\psfig{file=loewensteinm4.eps,width=4.0in,height=3.2in,angle=-90,clip=}}
\caption{68, 90, and 99\% confidence contours
for H- and He-like Si line strengths (in photons cm$^{-2}$ s$^{-1}$) in
the elliptical galaxy NGC 4636.
The solid lines show the ratios
expected in three single-temperature thermal plasma models.
The temperature derived from
global spectral fitting
is in precise agreement with
the line diagnostic value.}
\end{figure}
\section{Si-to-Fe Ratio in the Hot Gas}
Elemental abundance ratios provide constraints on the primordial
IMF and relative numbers of Type Ia and Type II supernovae.
Renormalized to the meteoritic Fe abundance, the
Si-to-Fe ratio lies between 0.5 and 1.5 times solar (Figure 5).
This is lower than the
Mg-to-Fe ratio derived from nuclear optical spectra, and is more in line with
values measured from the X-ray spectra of intragroup media.
This implies that
either the $\alpha$-to-Fe enhancement
is a phenomenon restricted to the inner ($<R_e$) galaxy, or that
the Si-to-Mg ratio is subsolar. Evidently, intracluster media
(Loewenstein \& Mushotzky 1996) and
elliptical galaxy cores have the enhanced $\alpha$-to-Fe
elemental ratios characteristic
of rapid high mass star formation where enrichment is dominated by Type II
supernovae, while groups and the
outer regions of elliptical galaxies tend toward the solar supernovae
mix where a larger fraction of Fe originates in SNIa.
\begin{figure}[htbp]
\centerline{
\psfig{file=loewensteinm5.eps,width=4.0in,height=3.2in,angle=-90,clip=}}
\caption{Si versus Fe abundance in the hot X-ray emitting gas.
The solid line denotes Si:Fe in the ratio 1:1, while
the broken lines denote the ratios 3:2 and 1:2
with respect to the (meteoritic) solar ratio.}
\end{figure}
The Si abundance provides a robust
upper limit on the effective SNIa
rate that is consistent with that derived using Fe. The
conservative (assuming that all of the Si in the hot gas originates
from SNIa)
limit is typically $\sim 0.03$ SNU --
about four times lower than the
recent estimate of Cappellaro {\it et al.} (1997).
\section{Implications of Low Abundances}
A globally averaged Fe abundance in elliptical galaxies of
about half-solar is in accord with
optical and X-ray spectroscopic measurements, as well as with
predictions of
chemically consistent evolutionary
models (M\"oller, Fritze-v. Alvensleben, \& Fricke 1997).
As this is only slightly
higher than the ICM Fe abundance, and the ICM dominates the
cluster baryon mass,
there is considerably more Fe
in the ICM than is locked
up in cluster galaxy stars. This implies one of the following.
(1) If the stellar and ICM metals come from the same SNII-enriched
proto-elliptical galaxy gas, then 50--90\% of the original galaxy mass was
lost and much of the ICM is not primordial but was ejected
from galaxies.
(2) However, the actual mass of material directly associated
with the SNII ejecta is much less significant.
Selective mass-loss of nearly pure SNII ejecta would enable expulsion
of much of the metals while retaining most of the baryonic mass.
There is both
observational (Kobulnicky \& Skillman 1997) and theoretical
(Mac Low \& Ferrara 1998) evidence
for super-enriched outflows in dwarf galaxies that may serve as
analogues of pre-merger elliptical galaxy sub-units.
(3) It is possible that the ICM enrichment originates in
some other source. The most plausible candidates are dwarf galaxies, but
Gibson \& Matteucci (1997) have shown that this scenario is not consistent
with the color-magnitude relation in these systems. Therefore,
one may have to appeal to a population of
dwarf galaxies that destroy
themselves in the process of enriching the ICM.
Models of the chemical evolution of elliptical
galaxies are often tuned to produce supersolar
stellar abundances, reproduce the
color-magnitude diagram, explain ICM enrichment, etc.
They also tend to predict high ISM metallicities
(e.g., Matteucci \& Gibson 1995).
Re-evaluation of these models in light of the
downward revision of elliptical galaxy metallicities may be in order.
\section{Dark Matter in Elliptical Galaxies: Background and Motivation}
I now turn to the second main topic of this review, dark matter in
elliptical galaxies.
There is a strong consensus that dark matter dominates the mass content
of spiral galaxies,
galaxy groups and
clusters, and the universe as a whole. Although
traditionally less forthcoming and more controversial,
evidence for dark matter in elliptical galaxies has
rapidly accumulated in recent years from improved
stellar dynamical data and modeling,
gravitational lensing
observations, and high-quality X-ray images and spectra from
the
{\it ROSAT} and {\it ASCA} satellites.
For example, the extended flat hot gas temperature profiles
measured using {\it ASCA} (Matsushita 1997) are analogous
to flat HI rotation curves in spiral galaxies as indicators
of the presence of massive dark matter halos.
Although the case for dark matter in some ellipticals
is now overwhelmingly strong, we (Loewenstein \& White 1998) were
motivated by published measurements of X-ray temperatures in a
complete optically selected
sample (Davis \& White 1996), to attempt to
answer the following more general
questions:
(1) Do bright elliptical galaxies have dark matter halos {\it in general}?
(2) How do the dark halo properties scale with optical luminosity?
\section{Modeling and Assumptions}
The primary diagnostic observable in this work is
the ratio of stellar to hot gas temperatures,
$\beta_{\rm spec}\equiv
\mu m_p\sigma^2/k\langle T\rangle$, where
$\mu m_p$ is the mean mass per particle,
$\sigma$ the
projected central optical velocity dispersion, and
$\langle T\rangle$ the globally ({\it i.e.}, over
$6R_e$) averaged
gas temperature. From the fundamental plane relations and
virial theorem for the gas, it follows that
$\beta_{\rm spec}$ is an excellent diagnostic of
the total mass-to-light ratio.
The following
characterize the ``$T$--$\sigma$'' relation, and must
be reproduced by any successful model of the dark matter in
elliptical galaxies:
(1) $\beta_{\rm spec}<1$
(the gas is always hotter than the stars, typically by factors of 1.5--2),
and (2) $\langle T\rangle\propto \sigma^{1.45}$ or
$\beta_{\rm spec}\propto \sigma^{0.55}$ (Davis \& White 1996).
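As a quick numerical check of these quantities (a sketch, not from the paper; $\mu=0.6$ and the example values below are assumptions), $\beta_{\rm spec}$ can be evaluated directly. Note also that since $\beta_{\rm spec}\propto\sigma^2/\langle T\rangle$, the relation $\langle T\rangle\propto\sigma^{1.45}$ immediately implies $\beta_{\rm spec}\propto\sigma^{0.55}$:

```python
# Sketch (illustrative, not from the paper): beta_spec = mu m_p sigma^2 / k<T>.
# mu = 0.6 is an assumed mean mass per particle in proton masses.
M_P = 1.67262192e-27         # proton mass [kg]
J_PER_KEV = 1.602176634e-16  # k<T> in joules when <T> is quoted in keV

def beta_spec(sigma_kms, kT_keV, mu=0.6):
    sigma = sigma_kms * 1.0e3            # km/s -> m/s
    return mu * M_P * sigma**2 / (kT_keV * J_PER_KEV)

# A sigma = 250 km/s galaxy with k<T> = 1 keV gives beta_spec ~ 0.4,
# i.e. gas hotter than the stars, consistent with point (1); and since
# beta_spec ~ sigma^2/T, T ~ sigma^1.45 gives beta_spec ~ sigma^0.55.
```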
Our models rest on the following assumptions.
(1) Stars and gas are assumed to be in hydrostatic
equilibrium in a spherically symmetric gravitational potential.
(2) Stellar orbits are assumed to vary monotonically
from isotropic at the center to radial at infinity.
(3) Stellar density profiles and scaling relations
are determined by {\it HST} observations (Faber et al. 1997)
and the
fundamental plane.
(4) The ``NFW'' dark-matter parameterization
(Navarro, Frenk, \& White 1997) is adopted,
\begin{equation}
\rho_{\rm dm}(r)\propto\left({r\over {R_{\rm dm}}}\right)^{-1}
\left(1+{r\over {R_{\rm dm}}}\right)^{-2},
\end{equation}
where $\rho_{\rm dm}$ and $R_{\rm dm}$ are the dark matter density distribution
and scale length, respectively.
$\beta_{\rm spec}$ is primarily
determined by the dark-to-luminous
mass ratio inside the optical radius ($R_{\rm opt}$,
defined here as $6R_e$,
the radius enclosing $\approx 90$\% of the light), and the
dark halo concentration
(the ratio of dark matter
to stellar scale lengths). A global observable,
$\beta_{\rm spec}$ is
not sensitive to the functional
form of the dark matter density distribution;
the choice of
equation (1) enables us to connect our results with
numerical structure formation simulations.
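For reference, a small sketch (with a hypothetical normalization $\rho_0$) of the NFW profile of equation (1), together with the enclosed mass obtained by integrating it:

```python
import numpy as np

def nfw_density(r, r_dm, rho0=1.0):
    """Equation (1): rho_dm ~ (r/R_dm)^-1 (1 + r/R_dm)^-2, rho0 hypothetical."""
    x = r / r_dm
    return rho0 / (x * (1.0 + x) ** 2)

def nfw_enclosed_mass(r, r_dm, rho0=1.0):
    """Mass within r from integrating eq. (1):
    M(r) = 4 pi rho0 R_dm^3 [ln(1 + x) - x/(1 + x)], x = r/R_dm."""
    x = r / r_dm
    return 4.0 * np.pi * rho0 * r_dm**3 * (np.log1p(x) - x / (1.0 + x))
```

The enclosed mass grows logarithmically without bound, which is one reason only the dark-to-luminous ratio inside a finite radius such as $R_{\rm opt}$ enters the models.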
\section{Dark Matter Universality and Limits}
$\beta_{\rm spec}>1.2$
for models without dark matter --- greater than in any
observed galaxy (Davis \& White 1996).
A typical
value of $\beta_{\rm spec}=0.5$ requires a dark-matter
fraction of $\approx 75$\%
within $R_{\rm opt}$ for $R_{\rm dm}\approx R_e$.
Although
the dark matter distribution
is not constrained in detail,
more than half of the mass within $R_e$ is
baryonic for models with $\beta_{\rm spec}=0.5$ if $R_{\rm dm}>R_e$.
Even for
extreme stellar models,
{\it $\beta_{\rm spec}$ always exceeds $\approx 0.75$ unless ellipticals
have dark matter}. Therefore, dark halos
must be generic to $L>L_*$ elliptical galaxies.
We place lower limits
on the dark-matter scale length, $R_{\rm dm}$ --- if the
dark matter is too concentrated
$\sigma$ increases relative to $\langle T\rangle$, raising $\beta_{\rm spec}$.
The
minimum value of $R_{\rm dm}$ consistent with
$\beta_{\rm spec}\approx 0.5$ is $\approx 0.3R_e$
$\approx 2(L_V/3L_*)^{3/4}h^{-1}_{80}$ kpc, where
$L_*\approx 1.7\times 10^{10}{h_{80}}^{-2}$L$_{V_\odot}$ (Figure 6).
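As a worked example of this scaling (a sketch, not from the paper), a helper evaluating the quoted lower limit, normalized so that $R_{\rm dm,min}=2\,h^{-1}_{80}$ kpc at $L_V=3L_*$:

```python
def r_dm_min_kpc(L_V_over_Lstar, h80=1.0):
    """Minimum NFW scale length consistent with beta_spec ~ 0.5:
    R_dm,min ~ 2 (L_V / 3 L_*)^(3/4) / h80 kpc."""
    return 2.0 * (L_V_over_Lstar / 3.0) ** 0.75 / h80
```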
We also derive upper limits on the baryon fraction, analogous
to maximum disk models for spiral galaxies (Figure 7).
The minimum dark matter mass fraction is
$\approx30$--$57$\% within $R_{\rm opt}$ for
$\beta_{\rm spec}=0.4$--0.7,
and is $<$20\% within $R_e$.
\begin{figure}[htb]
\begin{minipage}[t]{65mm}
\centerline{
\psfig{file=loewensteinm6.eps,width=2.5in,height=2.5in,clip=}}
\caption{Minimum dark-matter scale length in units of the ``break radius,''
$0.03$ $R_e$.}
\end{minipage}
\hspace{\fill}
\begin{minipage}[t]{65mm}
\centerline{
\psfig{file=loewensteinm7.eps,width=2.5in,height=2.5in,clip=}}
\caption{Maximum values of baryon fractions within $R_e$, and $R_{\rm opt}$.}
\end{minipage}
\end{figure}
\section{Explaining the $T$--$\sigma$ Relation}
Elliptical galaxies with the same dark-to-luminous
mass and scale length ratios have the same
$\beta_{\rm spec}$. As
can be inferred from the virial theorem
and fundamental plane relations,
the observed trend wherein
$\beta_{\rm spec}$ increases with increasing $\sigma$
(or, equivalently, with $L_V$) implies that
more luminous
galaxies are {\it less} dark-matter dominated within $R_{\rm opt}$
(and in such a way that the total mass-to-light ratio is
nearly constant).
Extending our models to virial radius and mass scales, we
investigate what dark matter scaling
relations predict such a trend.
Two successful scenarios are the following (Figure 8).
(1) The dark matter scale length
$R_{\rm dm}$ increases weakly
with $M_{\rm virial}$ as predicted in CDM simulations, but
the baryon fraction ($f_{\rm bar}$) is
an increasing function of optical luminosity, as
expected if smaller galaxies undergo more intense
supernova-driven mass loss during their star forming epoch
(dotted curves in Figures 8, 9, and 10).
If all ellipticals formed with the same $f_{\rm bar}$, then
the average $L>L_*$ galaxy has lost more than half its original mass.
(2) All elliptical galaxies have the same $f_{\rm bar}$
but $R_{\rm dm}$ increases {\it much more
steeply} with $M_{\rm virial}$ than in CDM models
(dashed curves in Figures 8, 9, and 10). In this case,
less luminous galaxies have relatively more dark matter within
$R_{\rm opt}$ because of a more concentrated dark-matter distribution
rather than a larger overall dark-matter fraction.
This deviation from CDM predictions of dark halo scaling
on mass scales $<10^{14}$M$_{\odot}$
could result from a relatively flat primordial fluctuation spectrum
or the effects on the
dark matter density profile from
evolution of the baryonic component.
\begin{figure}[htb]
\centerline{
\psfig{file=loewensteinm8.eps,width=4.0in,height=3.2in,angle=-90,clip=}}
\caption{Observed and predicted
correlation of $\beta_{\rm spec}$ with dimensionless
luminosity ($L_o=5.2\times 10^{10}{h_{80}}^{-2}$L$_{V_\odot}$).
Solid curve denotes constant $f_{\rm bar}$ and CDM
dark matter concentration scaling; dotted curve has $f_{\rm bar}$ increasing
with luminosity; dashed curve has a steeper-than-CDM
scaling of concentration with dark halo mass.}
\end{figure}
{\it Models with dark halos that scale as predicted by CDM,
but with constant $f_{\rm bar}$,
badly fail to reproduce the observed $T$--$\sigma$ trend} (solid curve in
Figure 8).
\section{How Dark Matter Scales with Optical Luminosity}
For the two scenarios described above that successfully reproduce the
$T$--$\sigma$ relation, total masses within
$R_e$ and $R_{\rm opt}$ are perfectly consistent
with gravitational
lensing results (Griffiths et al. 1996;
see Figure 9), as well as with those
from studies of
ionized gas disks as discussed by R. Morganti at this meeting.
Integrated properties within $R_{\rm opt}$
are robust (Figure 10):
$M/L_V\approx 25h_{80}$M$_{\odot}$/L$_{V_\odot}$
($f_{\rm bar}\approx 0.35(L_V/3L_*)^{1/4}$).
On scales both larger and smaller than $R_{\rm opt}$, dark-matter scaling in
the two scenarios described above diverges (Figures 9 and 10).
In the constant $f_{\rm bar}$, non-CDM scaling
scenario, dark
matter becomes increasingly important inside $R_e$ as $L_V$
decreases, becoming dominant for $L<0.6L_*$.
\begin{figure}[htb]
\begin{minipage}[t]{65mm}
\centerline{
\psfig{file=loewensteinm9.eps,width=2.5in,height=2.5in,clip=}}
\caption{$M$ vs $L_V$ within
(bottom to top)
$R_e$, $R_{\rm opt}$ (stars), $R_{\rm opt}$ (total), and $R_{\rm virial}$.
Solid curves show
$M(R_e)$ and $M(R_{\rm opt})$ inferred from statistical weak lensing.}
\end{minipage}
\hspace{\fill}
\begin{minipage}[t]{65mm}
\centerline{
\psfig{file=loewensteinm10.eps,width=2.5in,height=2.5in,clip=}}
\caption{Same as Figure 9 for $M/L_V$ at
(bottom to top): $r=0, R_e, R_{\rm opt}, R_{\rm virial}$.}
\end{minipage}
\end{figure}
We have calculated dark matter and
stellar
velocity dispersion distributions, assuming isotropic orbits.
These are compared in Figure 11 for an
$L_V=5.2\times 10^{10}{h_{80}}^{-2}$L$_{V_\odot}$ galaxy for both of
the successful scenarios described above and in Figures 9 and 10.
Both distributions have maxima since the total gravitational
potential is not isothermal. The ratio
(dark-matter-to-stars) of the squares of these maxima
is greater than 1.4
over the luminosity range in Figures 9 and 10,
and is $\approx 2$ over the range
$L_*<L_V<5L_*$.
In fact, the minimum value of this ratio for any model
that produces $\beta_{\rm spec}<0.7$ is greater than one. In this sense
the dark matter is hotter than the stars, as simply reflected by the
observation that the gas temperature exceeds that of the stars.
\begin{figure}[htb]
\centerline{
\psfig{file=loewensteinm11.eps,width=4.0in,height=3.2in,angle=-90,clip=}}
\caption{1-d velocity dispersion distributions, assuming isotropic orbits,
for an $L_V=L_0=5.2\times 10^{10}{h_{80}}^{-2}$L$_{V_\odot}$ galaxy.}
\end{figure}
\section{Summary}
This review has focussed on two major investigations of the
hot, X-ray emitting ISM in elliptical galaxies of relevance to the
issue of star formation in elliptical galaxies -- the nature and
metallicity of the hot gas, and the properties of the dark matter
halos confining the hot gas.
\subsection{Abundances in the Hot ISM: Concluding Remarks}
X-ray spectra of elliptical galaxies are adequately fit by models
consisting of hot gas with subsolar Fe abundance and
roughly solar Si-to-Fe ratio, plus
a hard component from an ensemble of X-ray binaries.
The consistency of the magnitude and spectrum
of the hard component with that expected from X-ray binaries and
its compact morphology, support this model over
ones where the hard component is primarily due to a hotter
gas phase. Complications in the form of an extra soft continuum
or multiple phases can be considered, but the consistency of the Si line
diagnostic and continuum temperatures demonstrates that
the data -- at the present level of sensitivity and spectral resolution --
do not require these.
Optical and X-ray
Fe abundance estimates are converging, although there are some
cases with anomalously low X-ray values.
Occam's razor would seem to demand that we provisionally
accept the reality of
low abundances in elliptical galaxies. As a result, we need to
seriously
reevaluate our notions of elliptical galaxy chemical
evolution, intracluster enrichment, and Type Ia supernova rates.
\subsection{Dark Matter in Elliptical Galaxies: Main Conclusions}
We (Loewenstein \& White 1998)
have constructed mass models of elliptical galaxies
consistent with the fundamental plane
scaling relations and {\it HST}
results on the structure of the centers of elliptical galaxies, with
dark halos as predicted by
large scale structure
formation simulations. These models allow us to
calculate the
diagnostic parameter
$\beta_{\rm spec}$ as a function of the relative (to luminous)
dark matter mass and scale length. Comparison with the
observed mean $T$--$\sigma$ relation -- the main features of which are that
the X-ray emitting gas is always hotter than the stars, and by an amount
that increases for
galaxies of lower velocity dispersion/optical luminosity --
provides constraints on the
properties of dark halos around elliptical galaxies. Our main results are as
follows.
(1) In the absence of dark matter, $\beta_{\rm spec}$ generally
exceeds 1.2, with an absolute lower limit of 0.75. Since
galaxies are observed to have $\beta_{\rm spec}=0.3$--0.8, we conclude
that dark halos are generic to $L>L_*$
elliptical galaxies.
(2) The most natural explanation of the
observed correlation of $\beta_{\rm spec}$
with luminosity is that
less luminous galaxies are more dark-matter dominated inside $R_{\rm opt}$
in such a way that the total mass-to-light ratio
is nearly constant. This ratio,
$\approx 25h_{80}$M$_{\odot}$/L$_{V_\odot}$, is exactly what
is predicted for mass models of elliptical galaxies designed to
explain the gravitational shear of background field galaxies
measured for a disjoint sample of elliptical galaxies.
(3) Our models can be embedded within theories of large scale structure
by specifying how the dark
matter concentration
scales with virial mass, and linking the virial mass
to the observed luminosity by specifying a global baryon fraction.
The standard CDM scaling with constant baryon fraction badly
fails to reproduce the observed $T$--$\sigma$ relation, since it
predicts an increase in dark-to-luminous ratio (inside $R_{\rm opt}$)
with luminosity.
The following two successful variations are obtained by relaxing one
of the two assumptions
of the constant baryon fraction CDM scenario:
(a) standard CDM scaling for the dark halos, but with smaller
galaxies losing an increasingly large fraction of their initial
baryonic content; or,
(b) a constant baryon fraction, but with the
dark-matter concentration
varying much more strongly with virial mass
than CDM models predict so that
more luminous galaxies are less dark-matter dominated due to a
relatively diffuse (rather than less massive) dark halo.
\acknowledgments
I am grateful to Scott Trager and Kyoko Matsushita
for providing unpublished results, and to
Richard Mushotzky and Ray White for their collaboration
on this work.
I would also like to thank Patricia Carral and the organizers for
a meeting that was outstanding in every way, and to
Omar Lopez-Cruz for his guidance.
\begin{question}{Vladimir Avila Reese}
Have you included the gravitational pull of the collapsing baryonic matter
on the dark matter halo?
\end{question}
\begin{answer}{Loewenstein}
I've calculated such distortions using the
adiabatic approach of Blumenthal et al., but have not
incorporated the altered halos into our models -- primarily because such an
orderly collapse now seems to be a poor approximation to the actual
formation of ellipticals. Clearly the effects of baryon evolution
on dark halo structure are important and require further study; however,
much depends on the relative timescales of merging, dissipation,
and star formation.
\end{answer}
\begin{question}{Richard Bower}
Why do you need to assume a model for the dark matter profile? Why not infer
this directly (and uniquely) from the temperature and density profiles?
What uncertainty does this introduce?
\end{question}
\begin{answer}{Loewenstein}
Because our goal in this project was to examine the dark matter properties in
a statistically meaningful sample, we necessarily include galaxies with
only moderately good X-ray data, {\it i.e.}, where only
a single integrated temperature is derivable and the dark matter profile
{\it cannot} be uniquely determined. For the sake of uniformity
we consider only an average temperature for each galaxy in
the sample, where the
average is taken over an identical {\it metric} radius of $6R_e$.
The total amount of dark matter
within this radius is very well determined, but its detailed distribution is
not; the NFW function is chosen as a matter of convenience.
We hope to follow this general study up
with a closer look at individual cases where
moderate constraints may be placed on the
form of the dark matter distribution, although the crude spatial
resolution of X-ray
temperature profiles imposes severe limitations.
\end{answer}
\begin{question}{Paul Eskridge}
How much is the $T$--$\sigma$ relation of Davis and White affected
by the hard (non-gaseous) X-ray emission from the relatively faint,
low-$L_x$ galaxies?
\end{question}
\begin{answer}{Loewenstein}
Although Davis and White do not include the hard component in their
spectral fits to {\it ROSAT} data, the temperatures they derive
are in superb agreement with {\it ASCA} spectral analysis that
{\it does} include the hard component. I also believe that
the trend is primarily driven by gas-rich galaxies that have the smallest
temperature uncertainties by virtue of their higher luminosities.
\end{answer}
\begin{question}{Paul Goudfrooij}
I think that a significant number of ``normal'' ellipticals exhibiting
X-ray emission from hot gas are dominant members of galaxy groups
(small groups of the Huchra/Geller type), so that their low values of
$\beta_{\rm spec}$ may be partly due to the fact that they reside in the
center of the group, as well as their own, potential. That is, the
hot gas temperature should be compared to the equivalent of the ``combined''
velocity dispersion of the galaxy plus that of the group in which it resides.
Could you comment on this?
\end{question}
\begin{answer}{Loewenstein}
The relevant velocity dispersion for our study
that aims to constrain the dark halo relative to
optical galaxy properties
within the (optically) luminous part of the galaxy
is the {\it central} velocity dispersion, since it
is one of the fundamental plane parameters and sets
the stellar mass scale. The group velocity dispersion becomes of interest if
compared with the outer temperatures of the very extended X-ray
halos.
\end{answer}
\begin{question}{Michael Pahre}
There is recent evidence that the scaling of velocity dispersion
from central to large radial values may be varying systematically as a
function of galaxy luminosity (e.g., Busarello et al. 1997 on mergers
of dissipationless systems). How might your results on the
luminosity-dependence of dark matter be affected by this property?
\end{question}
\begin{answer}{Loewenstein}
The relative unavailability of velocity dispersion profiles, as well as
the sort of complications you raise, motivated our exclusive
consideration of
central velocity dispersions in this study. For the more detailed study
we have planned, whatever dark halo structure we consider must confront
the observations you describe.
\end{answer}
\begin{question}{Daniel Thomas}
Bright ellipticals host $\alpha$-enhanced stellar populations. If it is
mainly these galaxies that enrich the ICM, would you expect a
galaxy/ICM asymmetry (Renzini et al. 1993) in the sense that
Mg/Fe is underabundant in the ICM? {\it ASCA} data point towards ratios
that are at least not subsolar. What do you think is the best way out of this
dilemma?
\end{question}
\begin{answer}{Loewenstein}
Although we have little information on Mg/Fe in the ICM,
ratios of other $\alpha$-elements relative to Fe are supersolar.
The {\it observed} lack of a
galaxy/ICM asymmetry implies that at least one of the assumptions of
Renzini et al. (1993) -- that star formation in proto-ellipticals has the
same mix of SNIa and SNII as our own Galaxy and that the enrichment process in
the ICM is prolonged compared to that of the stars -- must be
abandoned. Also, I believe Renzini et al. probably
overestimated the total amount of Fe locked up in stars by a factor of
2--3. More puzzling to me is why the Si/Fe ratio in the hot ISM is
$\sim$solar, in apparent conflict with the stellar Mg/Fe ratio.
\end{answer}
\section{Preamble}
In 2008, I submitted a short semi-technical paper \cite{sayir2008} to
the IT transactions on the
occasion of James L.~Massey's $75^{\mathrm{th}}$ birthday. One aim
of the paper was to please Jim who often expressed his liking
for conceptual papers with simple technical content.
Although the paper was dropped for reasons that will become
apparent, it achieved its aim of pleasing Jim who repeatedly
commented positively on the paper in the years
before he passed away. I would speculate that Jim also liked
the pun on the ``role model'' metaphor in the paper, mirroring
our relationship as past student to PhD advisor.
The present paper re-visits the ideas presented in \cite{sayir2008} and
brings a fresh perspective on the subject.
In the following section, we will introduce and discuss
the role model strategy. Section~\ref{sec:mcint}
shows how the solution of the role model convex program reduces to
Monte Carlo integration in the non-parametric case, a much simpler
technique well known in the Bayesian community. This realization
is the reason why the original paper
project \cite{sayir2008} was dropped.
Section~\ref{sec:parcase} discusses the parametric case, where the
role model strategy may be of use after all, and why
its relevance was not immediately obvious because
we operate in the domain of discrete probability mass functions where
parametric estimation is not normally considered. An example
involving the constraint node operation in a factor graph based
SUDOKU solver is presented where parametric estimation is useful,
and other potential applications are discussed.
\section{Introduction and the Role Model Estimator}
The role model framework introduced in \cite{sayir2008} is illustrated in
Figure~\ref{fig:rolemodel}.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\draw (0,6) rectangle (2,7.5);
\node [align=center] at (1,6.75) {Discrete \\ Memoryless \\ Source};
\draw [->] (1,6) -- (1,5);
\node [right] at (1.1,5.5) {$X_k$};
\draw (0,3.5) rectangle (2,5);
\node [align=center] at (1,4.25) {Discrete \\ Memoryless \\ Channel 1};
\draw [->] (1,3.5) -- (1,2.5);
\node [left] at (.9,3) {$Y_k$};
\draw (0,1) rectangle (2,2.5);
\node [align=center] at (1,1.75) {Discrete \\ Memoryless \\ Channel 2};
\draw [->] (2,1.75) -- (3,1.75);
\node [above] at (2.5,1.85) {$Z_k$};
\draw (3,1) rectangle (5,2.5);
\node [align=center] at (4,1.75) {Estimator in \\ Training};
\draw [->] (1,3) -- (2.5,3) -- (2.5,4.75) -- (3,4.75);
\draw (3,4) rectangle (5,5.5);
\node [align=center] at (4,4.75) {Role Model \\ Estimator};
\draw [->] (5,4.75) -- (6,4.75);
\draw [->] (5,1.75) -- (6,1.75);
\node [align=left,above] at (6,4.75) {$P_{X|Y_k=y}$};
\node [align=left,above] at (6,1.75) {$Q_{X|Z_k=z}$};
\end{tikzpicture}
\caption{The Role Model Framework}
\label{fig:rolemodel}
\end{figure}
The discrete random variables $X_k$, $Y_k$ and $Z_k$ form a Markov chain
for every $k$. In the following, we drop the time index $k$ when not
essential as our source and channels are assumed to be memoryless.
The role model estimator is the optimal estimator for $X$ given the
observation $Y$, which provides for every observation $Y=y$ the full
a-posteriori probability mass function $P_{X|Y=y}$ over the domain of $X$.
Our aim is to train an estimator for $X$ using the random variable $Z=z$,
which is labeled ``estimator in training'' in the figure. The output
of this estimator is labeled $Q_{X|Z=z}$ to reflect the fact that it is
not necessarily the true a-posteriori probability mass function of $X$
given the observation $Z=z$, but an approximation thereof. The estimator
in training is Bayes-optimal if $Q_{X|Z=z}=P_{X|Z=z}$ for every $z$ with $P_Z(z)>0$.
The reason why $P_{X|Z=z}$ is not available directly may be that
the channel $P_{Z|Y}$ is unknown, or that
the channel $P_{Z|Y}$ is known but that the resulting exact computation
of $P_{X|Z=z}$ is too complex for practical use. The particularity
of the role model framework, in contrast to more complicated estimation
frameworks such as those where the EM and similar algorithms operate,
is that we {\em have access} to the role model estimator
and to its output to help design the estimator in training. This
brings up the justified question of why we don't just use the role
model estimator directly instead of training an estimator based on $Z$.
This may have
several reasons:
\begin{itemize}
\item the observations $Y_k$ and the resulting a-posteriori distributions
$P_{X|Y=y}$ may only be available during a training phase but not
when our estimator goes live;
\item the observations $Y_k$ may only be available intermittently and our
estimator in training is required to fill the gaps at times $k$ when
$Y_k$ is not available;
\item the computation of $P_{X|Y=y}$ may be too costly and only feasible
offline during a simulation, or online intermittently for the purpose
of training the estimator $Q_{X|Z=z}$.
\end{itemize}
We will later discuss a few technical examples in the context of iterative decoding
and communication receivers where these conditions are fulfilled.
\cite{sayir2008} gives hypothetical general examples outside
the domain of communications where this scenario could also be of interest.
What we call the {\em role model strategy} consists in aiming to minimize the expected divergence
between the a-posteriori distribution $P_{X|Y=y}$ computed by the role model estimator,
and the distribution-valued heuristic output $Q_{X|Z=z}$ of the estimator in training,
i.e., to seek the $Q_{X|Z=z}$ for every $z$ that minimizes
\[
ED(P_{X|Y}||Q_{X|Z}) \stackrel{\mathrm{def}}{=} \sum_z\sum_y P(yz) D(P_{X|Y=y}||Q_{X|Z=z}),
\]
where we use the notation $ED(.||.)$ as in \cite{coverthomas} to signify the
expected information divergence, where expectation is always taken on the joint
distribution of the conditioning variables.
The averaging required to compute this expression may be impractical, and hence
we use the law of large numbers and the fact that all our processes are ergodic
to state
\begin{equation}
ED(P_{X|Y}||Q_{X|Z}) = \lim_{N\rightarrow\infty}\frac{1}{N} \sum_{k=1}^N D(P_{X|Y_k=y_k}||Q_{X|Z_k=z_k}),
\label{eq:time-average}
\end{equation}
and approximate the quantity to be minimized by a time average of the
divergence between the two distribution-valued outputs of our estimators. Note
that this may look like a frequentist/empirical approach, but we are at no point counting
frequencies here, so the divergences being averaged are true divergences. It is
only the average divergence that becomes an approximation if we perform the
time averaging over a finite time interval of length $N$ rather than taking the
limit as $N$ goes to infinity. We note that
$ED(P_{X|Y}||Q_{X|Z})$ is convex in $Q_{X|Z}$, and hence the set of distributions
$Q_{X|Z=z}$ for every $z$ that we need can be sought using numerical convex
optimization techniques.
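As an illustration of equation~(\ref{eq:time-average}), a minimal sketch (hypothetical estimator outputs, represented as arrays over the domain of $X$; base-2 logarithms, so divergences are in bits):

```python
import numpy as np

def kl_divergence(p, q):
    """D(p||q) in bits; assumes q(x) > 0 wherever p(x) > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def average_divergence(role_model_pmfs, trainee_pmfs):
    """Time average (1/N) sum_k D(P_{X|Y_k=y_k} || Q_{X|Z_k=z_k})."""
    n = len(role_model_pmfs)
    return sum(kl_divergence(p, q)
               for p, q in zip(role_model_pmfs, trainee_pmfs)) / n
```

Minimizing this average over the $Q_{X|Z=z}$, with the role model outputs held fixed, is the convex program described above.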
We devised the role model strategy as a heuristic approach to address this
type of scenario.
We had no expectation that this strategy could be optimal.
The divergence that is minimized cannot in general be reduced to zero,
unless $Z$ is a sufficient statistic for $Y$ with respect to $X$, which is
never the case in the applications of interest. Hence, this is not a system
identification problem, where the estimator in training eventually models
the role model. It therefore came as a surprise when we realized that
the following holds:
\begin{theorem}[The ``role model'' theorem] If $X$, $Y$ and $Z$ form a
Markov chain $X-Y-Z$, then
\[
ED(P_{X|Y}||Q_{X|Z}) = H(X|Z) - H(X|Y) + ED(P_{X|Z}||Q_{X|Z}).
\]
In particular,
\[
ED(P_{X|Y}||Q_{X|Z}) \geq H(X|Z) - H(X|Y)
\]
with equality if and only if $Q_{X|Z=z} = P_{X|Z=z}$
for all $z$ such that $P(z)>0$.
\label{th:rolemodel}
\end{theorem}
The theorem shows that the minimization we suggested converges to the
optimal solution $Q_{X|Z}=P_{X|Z}$. Hence, by imitating the
role model, we converge to the best solution given our degraded observations,
despite the fact that the role model we seek to imitate has better
observations. The proof of the theorem is trivial and given in~\cite{sayir2008}.
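The identity in Theorem~\ref{th:rolemodel} is also easy to confirm numerically. The sketch below (illustrative, not code from this paper) draws a random Markov chain $X-Y-Z$ and an arbitrary estimator $Q_{X|Z}$, then checks that the two sides of the identity agree:

```python
import itertools, math, random

random.seed(0)

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

nX, nY, nZ = 2, 3, 2
pX = normalize([random.random() for _ in range(nX)])
pYgX = [normalize([random.random() for _ in range(nY)]) for _ in range(nX)]
pZgY = [normalize([random.random() for _ in range(nZ)]) for _ in range(nY)]

# joint distribution of the Markov chain X - Y - Z
joint = {(x, y, z): pX[x] * pYgX[x][y] * pZgY[y][z]
         for x, y, z in itertools.product(range(nX), range(nY), range(nZ))}

def marg(keep):
    m = {}
    for xyz, p in joint.items():
        key = tuple(xyz[i] for i in keep)
        m[key] = m.get(key, 0.0) + p
    return m

pY, pZ, pYZ = marg((1,)), marg((2,)), marg((1, 2))
pXY, pXZ = marg((0, 1)), marg((0, 2))
pXgY = {y: [pXY[(x, y)] / pY[(y,)] for x in range(nX)] for y in range(nY)}
pXgZ = {z: [pXZ[(x, z)] / pZ[(z,)] for x in range(nX)] for z in range(nZ)}

# an arbitrary (suboptimal) estimator-in-training Q_{X|Z}
Q = {z: normalize([random.random() for _ in range(nX)]) for z in range(nZ)}

def D(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

lhs = sum(pYZ[(y, z)] * D(pXgY[y], Q[z])
          for y in range(nY) for z in range(nZ))
HXgZ = -sum(pXZ[(x, z)] * math.log(pXgZ[z][x])
            for x in range(nX) for z in range(nZ))
HXgY = -sum(pXY[(x, y)] * math.log(pXgY[y][x])
            for x in range(nX) for y in range(nY))
rhs = HXgZ - HXgY + sum(pZ[(z,)] * D(pXgZ[z], Q[z]) for z in range(nZ))
print(abs(lhs - rhs) < 1e-9)  # True
```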
Note that the theorem requires the Markov property. A similar-looking result
can be shown when the Markov property does not hold by stating the
identity
\begin{eqnarray*}
\sum_{xyz}P(yz) P(x|yz) \log\frac{P(x|y)}{Q(x|z)} &=& ED(P_{X|Z}||Q_{X|Z}) \\ &&+ H(X|Z) - H(X|Y),
\end{eqnarray*}
effectively showing that $Q_{X|Z}=P_{X|Z}$ minimizes the expression on the left, but this
expression is only equal to $ED(P_{X|Y}||Q_{X|Z})$ when the Markov condition
holds.
It should be stressed that the appellation ``theorem'' was chosen for this
result not on the basis of its mathematical intricacy, which it clearly lacks,
but on the basis of its conceptual counter-intuitiveness (from the author's perspective)
and the central role it was thought to have in the applications
under consideration.
In the following section, we will show that the role
model strategy reduces to a much simpler form that is well known in the
Bayesian estimation community, after discussing a class of applications and their
constraints. The role model strategy will regain some meaning in the last section
of the paper, where we show a class of applications where the simpler method does not
apply but where the role model strategy remains a valid approach.
\section{The Non-Parametric Case and Monte Carlo Integration}
\label{sec:mcint}
Initial interest for the scenario described was born out of efforts to design
optimal post-processing procedures for sub-optimal components in iterative
decoders. In the min-sum approximation of the sum product algorithm for decoding
low-density parity-check (LDPC) codes, the optimal Bayesian operation under
independence assumption in the
constraint nodes of the decoder is replaced by a sub-optimal operation as
illustrated in Figure~\ref{fig:min-sum}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\draw (1,1) rectangle (1.5,1.5) ;
\draw [->] (0.2,0.2) -- (1,1);
\draw [->] (0,1.25) -- (1,1.25);
\draw [->] (0.2,2.3) -- (1,1.5);
\draw [->] (1.5,1.25) -- (2.5,1.25);
\node [align=right,below] at (.9,.7) {$Y_1$};
\node [align=left,below] at (.2,1.25) {$Y_2$};
\node [align=right,below] at (.3,2) {$Y_3$};
\node [align=right,above] at (2.5,1.25) {$P_{X|\underline{Y}}$};
\node [align=right,below] at (2.5,1.2) {$Q_{X|Z}$};
\end{tikzpicture}
\caption{The min-sum approximation for LDPC decoders}
\label{fig:min-sum}
\end{figure}
In the figure, the incoming observations $Y_1,Y_2,Y_3$ are aggregated
from channel observations during previous decoder iterations and are
assumed independent, as is common practice in the design of belief
propagation algorithms.
In this case, the role model estimator, expressed as a mapping of log-likelihood ratios,
is given by
\[
L(X|\underline{Y}) = 2\tanh^{-1}\left(\prod_i \tanh \frac{L(X|Y_i)}{2}\right),
\]
which is a fairly complex scalar function of multiple variables often considered
too costly for implementation,
while the estimator in training is a function $Q_{X|Z=z}$ of
\[
Z = \left(\min_i|L(X|Y_i)|,\prod_i\mbox{sign}\,L(X|Y_i)\right).
\]
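For concreteness, the exact rule and its min-sum surrogate can be sketched as follows (a toy illustration in our notation, not decoder source code). As expected of the approximation, the min-sum output preserves the sign of the exact LLR but never has smaller magnitude:

```python
import math

def tanh_rule(llrs):
    """Exact check-node combining rule (the role model)."""
    p = 1.0
    for L in llrs:
        p *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(p)

def min_sum(llrs):
    """Min-sum approximation: magnitude = min |L_i|, sign = product of signs."""
    sign = 1.0
    for L in llrs:
        sign = -sign if L < 0 else sign
    return sign * min(abs(L) for L in llrs)

llrs = [1.2, -0.4, 2.5]
print(tanh_rule(llrs), min_sum(llrs))
# the approximation keeps the sign but overestimates the magnitude
```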
For this simple binary case, the optimal post-processing $P_{X|Z=z}$ can
be computed analytically \cite{lechner2004} under Gaussian assumption and
is fairly simple to compute when the variances of all incoming observations
are identical. However, as soon as we deviate from this case, i.e., when
the incoming observations have different variances as is the case for irregular LDPC
codes, or if we wish to go beyond the Gaussian simplifying assumption,
$P_{X|Z=z}$ becomes very difficult to compute. Hence the role model approach
allows us to use numerical optimization algorithms to train a post-processing
function to converge to the optimal estimator by running the
sum-product rule and the estimator-in-training in parallel offline during a simulation,
and then using the resulting low-complexity estimator online in the device
(e.g., a mobile handset).
Another potential practical scenario consists in running both estimators in parallel
in the device for a limited time while training the low complexity estimator,
then shutting off the more complex estimator to save energy and conserve
battery time. Note that the random variable $Z$ in this example
is continuous but scalar. A fairly accurate estimator can be trained
by quantizing $Z$ finely and computing a lookup table of the a-posteriori
distributions of $X$ for each quantized value of $Z$.
We will now show that, for this non-parametric approach that aims to estimate
the a-posteriori distributions of $X$ for all values of $Z$, the role model
strategy reduces to a much simpler method well known in the Bayesian
community as a case of Monte Carlo integration.
For now, let us approach the optimization problem via the time-averaging
formulation (\ref{eq:time-average}) where we operate on a finite block
length and drop the limit for simplicity. It is easy to see that the
minimization with respect to the matrix $Q_{X|Z}(x|z)$ for all $x$ and $z$ that we
require simplifies to separate maximizations for each individual $z$ of the
type
\[
\begin{cases}
\max_{Q(.|z)} &\sum_{k:z_k=z}\sum_x P(x|y_k)\log Q(x|z) \\
\text{subj.~to} &\sum_xQ(x|z) = 1 \\
&Q(x|z)\geq 0, \forall x
\end{cases}
\]
We now take the liberty of ignoring the inequality constraints and
setting up the Lagrange conditions rather than the KKT conditions,
because the solution will show that there is no danger of any
variables becoming negative.
For any $z$, we obtain by differentiating with respect to $Q(x|z)$
\[
\sum_{k:z_k=z} \frac{P(x|y_k)}{Q(x|z)} = \lambda
\]
and hence
\[
Q(x|z) = \lambda^{-1} \sum_{k:z_k=z} P(x|y_k).
\]
The normalization condition requires that
$\lambda = \vert \{k:z_k=z\}\vert$ and the solutions
clearly satisfy $Q(x|z)\geq 0$ since they are obtained
as an average of probabilities.
We conclude that the solution of the role model strategy for any $z$ in the time
averaging case is simply the time average of the a-posteriori distributions
computed by the role model for all $Y_k$ such that $Z_k=z$. Again, we insist
that this is not simply a frequentist/empirical approach, as it may appear. The quantities
being added here are not numbers of occurrences but true Bayesian a-posteriori
distributions computed by the role model. The correct training for our estimator
of $X$ for the symbol $z$ is to average the distribution-valued estimations
of the role model componentwise over the time instances when $z$ is observed.
Although we showed this for finite $N$, it is easy to see that the same
holds in the limit as $N$ goes to infinity, and hence for the expectation
$ED(P_{X|Y}||Q_{X|Z})$. An alternative view is that the optimal strategy
is to evaluate the sum
\[
P(x|z)=\sum_yP(x|y)P(y|z)=E_{P_{Y|Z=z}}[P(X|Y)]
\]
as a time average, as briefly stated in \cite{sayir2010-isit}.
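Concretely, the training procedure amounts to nothing more than componentwise averaging. A minimal sketch follows (hypothetical data; it assumes $Z$ has already been quantized to a discrete symbol):

```python
from collections import defaultdict

def monte_carlo_train(samples):
    """Estimate Q_{X|Z=z} as the componentwise time average of the role
    model's posterior vectors over the instants where z was observed.
    samples: list of (z, P(x|y_k)) pairs produced by the role model."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for z, p in samples:
        if sums[z] is None:
            sums[z] = [0.0] * len(p)
        sums[z] = [s + pi for s, pi in zip(sums[z], p)]
        counts[z] += 1
    return {z: [s / counts[z] for s in sums[z]] for z in sums}

samples = [(0, [0.9, 0.1]), (0, [0.7, 0.3]), (1, [0.2, 0.8])]
Q = monte_carlo_train(samples)
print([round(q, 6) for q in Q[0]])  # [0.8, 0.2]
```

The averaged vectors are automatically valid probability mass functions, in line with the Lagrangian solution above.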
In the min-sum algorithm discussed above, and in any similar
applications where it is possible to adapt the full $\vert\mathcal{Z}\vert\times\vert\mathcal{X}\vert$
parameter set for $Q_{X|Z}$, the role model strategy is overkill
and Monte Carlo integration gives the same solution by elementary
averaging without resorting to complicated numerical convex
optimization methods. In the next section, we will see that
there is still a niche for the role model strategy when the
full parameter set is too large for practice.
\section{The Parametric Case: an Example}
\label{sec:parcase}
When the full parameter set is not available, Monte Carlo integration
is not an option and the role model strategy becomes a possibly interesting
approach. While this is easy to state, it is not an obvious proposition
because we don't tend to think of parametric estimation for discrete
random variables. Indeed, we are not proposing to constrain the conditional
probability mass functions $Q_{X|Z}$ to be parametric distributions in the
sense that a Gaussian density is a parametric probability density function.
Rather, as we will see in our examples, there are scenarios where the
domain $\mathcal{Z}$ of $Z$ makes it impractical to estimate an a-posteriori
model $Q_{X|Z=z}$ for every possible $z$. In such scenarios, we may be
constrained to using a parametric function of $Z$, i.e., $Q_{X|Z=z}=f_\alpha(z)$.
In such a case, the role model strategy loses its optimality as the
space of possible functions $f_\alpha(.)$ will not in general include the
mapping that makes $Q_{X|Z}$ converge to $P_{X|Z}$. Hence, the role
model strategy in this context is a purely heuristic approach that may
or may not exhibit advantages or weaknesses with respect to other
heuristic optimization
criteria and can be judged solely on the basis of its numerical performance.
An early example applying the role model strategy in a semi-parametric manner
was described in \cite{sayir2010-turbo} for a hypothetical rank-based
message-passing decoder for non-binary LDPC codes. In fact, a more pertinent
question than that studied in \cite{sayir2010-turbo} would be to design post-processing
operations for the suboptimal operations in the Extended Min-Sum (EMS) algorithm \cite{declercq2006},
a reduced complexity version of the sum-product algorithm for non-binary LDPC
codes. However, the EMS algorithm is quite a difficult construct to understand, so
that a full study of parametric post-processing, while practically relevant,
would obscure rather than clarify matters in the context of this paper. Hence,
we have chosen to treat an alternative example of lesser practical relevance
but that is easier to understand.
The example is the use of graph-based decoding for solving soft SUDOKU puzzles.
We omit an introduction to universally known SUDOKU puzzles and refer the
reader to \cite{sudoku} for further details and definitions.
By ``soft'' SUDOKU, we mean puzzles that receive
general noisy observations of the correct entries in the grid rather than
observations that are either correct or erased. Observations for every
entry in the grid are available as a-posteriori probability
mass functions over the 9-ary alphabet using a known and accurate channel model.
SUDOKU puzzles can be represented as a factor graph where every one of the 81
variables is connected to 3 constraints and every one of the 27 constraints (9 rows, 9 columns
and 9 subgrids) involves 9 variables. The factor graph of a $4\times 4$ SUDOKU
defined over the alphabet $\{1,2,3,4\}$ is represented in Figure~\ref{fig:sudoku}.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{fg_sudoku4.pdf}
\caption{The factor graph of a $4\times 4$ SUDOKU solver}
\label{fig:sudoku}
\end{figure}
Our interest in the context of this paper
is for the operation in the constraint nodes. Constraint nodes receive 9
a-posteriori observations of their participating variables, which we assume
to be independent in line with common practice in belief propagation algorithms.
The constraint node's task is to return to each variable its a-posteriori
probability given the observations of the remaining 8 variables in the
constraint. Let us denote by $\mathbf{M} = [m_{ij}]$ the $9\times 9$ matrix
of incoming messages into a constraint node, where
\[
m_{ij} = P(X_i=j|\underline{Y}_i)
\]
where $\underline{Y}_i$ generically denotes the set of channel observations
that led to the incoming message on the $i$-th branch into the constraint
node. It is clear that the probability that the $i$-th variable in the constraint
has value $j$, given observations of the other 8 variables, is the sum of
probabilities of all configurations of the other 8 variables that don't
include the value $j$. If we denote by $\mathbf{M}_{\setminus ij}$ the matrix
$\mathbf{M}$ with its $i$-th row and $j$-th column removed, we can
state that an outgoing message component from the constraint node can
be expressed as
\[
m'_{ij} = \perm(\mathbf{M}_{\setminus ij})
\]
where $\perm(\mathbf{A})$ denotes the Cauchy permanent \cite{permanent}
of the matrix $\mathbf{A}$. The permanent is a notoriously difficult
function to compute: the best known algorithms approximate it in
randomized polynomial time, but with a running time that far exceeds
$n!$ operations for the matrix sizes $n$ of interest to us. We can
hence assume that $8!=40320$ operations are needed to compute the permanent
above. This is a large number of operations for every node at every
iteration of a belief propagation solver, but well within the range of
offline simulation, so a perfect testing ground for our role model strategy.
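For intuition, the exact constraint-node computation can be sketched by brute-force enumeration (a toy $3\times 3$ instance; the actual node works with $9\times 9$ message matrices and $8\times 8$ minors):

```python
from itertools import permutations

def perm(M):
    """Cauchy permanent by enumeration: n! products per call
    (about 8! = 40320 per minor in the SUDOKU node, feasible offline)."""
    n = len(M)
    total = 0.0
    for sigma in permutations(range(n)):
        p = 1.0
        for i in range(n):
            p *= M[i][sigma[i]]
        total += p
    return total

def minor(M, i, j):
    """M with row i and column j removed."""
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

M = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.1, 0.5]]
# unnormalized outgoing message component m'_{00} = perm(M \ 00)
print(round(perm(minor(M, 0, 0)), 12))  # 0.33 = 0.6*0.5 + 0.3*0.1
```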
We can now try to replace the permanent computation by any approximation
and use the role model strategy to design post-processing functions for the
approximation.
For example, we can opt to do the following:
\begin{itemize}
\item take the 3 largest elements in each row of $\mathbf{M}$ and replace
the remaining entries by a uniform distribution adding to the same sum
to obtain the matrix $\mathbf{M'}$,
\item re-write the matrix $\mathbf{M'}$ as $\mathbf{H}+\mathbf{T}$ where
$\mathbf{T}$ contains uniform rows whose values are consistent with the uniform
tails produced in the previous step, and $\mathbf{H}$ contains the
non-uniform values minus the uniform tail value for those head entries not
in the tails, and zero where the tail entries of $\mathbf{M'}$ are;
\item we now approximate the required permanent as
\[
\perm\mathbf{M}_{\setminus ij} \approx \perm\mathbf{H}_{\setminus ij} + \perm\mathbf{T}_{\setminus ij}
\]
\item we compute the elements of the outgoing matrix using this approximation
and normalize the rows so they sum to 1 and look like true probability mass
functions.
\end{itemize}
This is much easier to compute because $\mathbf{H}$ is sparse and $\mathbf{T}$
has uniform rows, but it is a very poor approximation because the permanent of
a sum of matrices is not at all well approximated by the sum of their permanents.
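The head/tail split described above can be sketched as follows (illustrative, with a length-6 row in place of 9). Note that $\mathbf{H}+\mathbf{T}$ reconstructs $\mathbf{M'}$ exactly; the approximation error comes solely from splitting the permanent:

```python
def split_head_tail(M, keep=3):
    """Per row: keep the `keep` largest entries, spread the remaining mass
    uniformly over the other positions (the tail), and return H, T with
    H + T = M' (H: head values minus the tail value, zero elsewhere;
    T: uniform rows)."""
    n = len(M[0])
    H, T = [], []
    for row in M:
        head = sorted(range(n), key=lambda j: row[j], reverse=True)[:keep]
        t = sum(row[j] for j in range(n) if j not in head) / (n - keep)
        T.append([t] * n)
        H.append([row[j] - t if j in head else 0.0 for j in range(n)])
    return H, T

row = [0.4, 0.25, 0.15, 0.1, 0.05, 0.05]
H, T = split_head_tail([row])
print([round(h + t, 6) for h, t in zip(H[0], T[0])])  # reconstructs the M' row
```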
Figure~\ref{fig:EXIT} shows the EXIT chart of a factor-graph
based SUDOKU solver, where the two red curves correspond to the optimal and
sub-optimal constraint node operations, and the blue curves correspond to the
constraint node operations for various channel signal to noise ratios (SNR).
\begin{figure}[!h]
\centering
\includegraphics[clip=true,trim=280 50 260 40,width=\columnwidth]{EXIT.pdf}
\caption{EXIT chart of a SUDOKU solver using the optimal and approximate constraint node operations}
\label{fig:EXIT}
\end{figure}
Surprisingly, the red curves are not too far apart, particularly in the top
half of the EXIT chart, indicating that despite the very rough approximation
we are using, the result is sufficiently informative to achieve acceptable
performance, and the full complexity permanent computation should only be used
in the early iterations.
Now for the application of the role model strategy. The observations for our
role model postprocessor in this case consist of the rows of the outgoing
matrix computed using the permanent approximations. The observation
space is the set of 9-ary probability distributions. This is a continuous space and
is no longer scalar like in the binary min-sum case. Hence, we cannot simply
quantize it finely in order to apply Monte Carlo integration and converge to the
optimal a-posteriori estimation. What we can do, however, is to apply arbitrary
transformations to the probability vectors. For example, we can
replace the sum $\perm\mathbf{H}+\perm\mathbf{T}$ by a weighted sum
$\alpha_i\perm\mathbf{H}+(1-\alpha_i)\perm\mathbf{T}$ and
optimize the weights $\alpha_i$.
Hence the problem becomes
one of finding the best parameters $\alpha_i$ to optimize the solver performance.
The problem is that solver performance itself is difficult to measure and
can only be optimized by exhaustive search. The role model
strategy in this case yields a tractable convex optimization procedure
where $ED(P_{X|Y}||Q_{X|Z})$ is the optimization metric. $P_{X|Y}$ here
is the correct a-posteriori distribution obtained with the true
permanent, and $Q_{X|Z}$ is the $\alpha_i$-corrected result of the
sum of permanents approximation. Note that with this approach, we have lost
any claim of optimality, and anyone who prefers another metric over ours
is entitled to do so. The only valid criterion for comparing metrics is simulated
performance of the resulting optimized solvers.
\section{Conclusion}
We have described the role model strategy as a convex program whose solution
is the Bayesian optimal estimator in training. We
showed that the strategy reduces to Monte Carlo integration in the non-parametric
case, and discussed the parametric case with an example where the strategy can
be used but Monte Carlo integration would not work.
In fact, applications of post-processing optimization for
sub-optimal estimators are burgeoning in the literature and many
metrics have been proposed for optimizing the post-processing stage of,
say, the EMS algorithm for non-binary LDPC codes, demodulators for
Bit-Interleaved Coded Modulation (BICM) and many others. Some, such as
\cite{nguyen2011} claim theoretical motives for
their approaches, while others, such as \cite{szczecinski2012},
are self-declaredly heuristic in their approach. Given our analysis
so far and the fact that these are all parametric models, we tend
to agree with the latter.
\section*{Acknowledgment}
I learned that ``my'' role model strategy reduces to elementary Monte Carlo integration
upon joining the University of Cambridge in a conversation with my then officemate
and now friend Simon Hill, a fact that earns him my warmest gratitude as well as my
sincere apologies for the inappropriate language he may have heard
when I found out. I also wish to thank the then associate editor of the IT
transactions Michael Gastpar who handled my submission and the three gracious anonymous reviewers
who I realize put considerable effort into providing constructive feedback,
with apologies for never writing back to explain that I was not revising the paper as a result
of the conversation mentioned above.
\section{Introduction}
Abelian equivalence of words has long been a subject of great interest (see for instance the Erd\H{o}s problem \cite{CRSZ, CovHed, CurRam2009, Dekking1979, Keranen1992ICALP, PZ, RSZ1, RSZ2, aleksi}). Given a finite non-empty set $A,$ let $A^*$ denote the set of all finite words over $A.$ Two words $u$ and $v$ in $A^*$ are {\it Abelian equivalent,} denoted $u\thicksim_{\mbox{ab}} v,$ if and only if $|u|_a=|v|_a$ for all $a\in A,$ where $|u|_a$ and $|v|_a$ denote the number of occurrences of $a$ in $u$ and $v,$ respectively. It is readily verified that $\thicksim_{\mbox{ab}}$ defines an equivalence relation (in fact a congruence) on $A^*.$
We consider the following natural generalization: Fix $k \in {\mathbb Z} ^+ \cup \{+\infty\}.$ Two words $u$ and $v$ in $A^*$ are said to be $k$-{\it Abelian equivalent}, written $u\thicksim_k v,$ if $|u|_x=|v|_x$ for each non-empty word $x$ with $|x|\leq k$ (where $|x|$ denotes the length of $x,$ and $|u|_x$ and $|v|_x$ denote the number of occurrences of $x$ in $u$ and $v,$ respectively).
We note that $u\thicksim _{+\infty} v$ if and only if $u=v,$ while $\thicksim_1$ corresponds to the usual notion of Abelian equivalence $\thicksim_{\mbox{ab}}.$ Thus one may regard the notion of $k$-Abelian equivalence as gradually bridging the gap between Abelian equivalence ($k=1$) and equality ($k=+\infty).$ It is readily verified that $\thicksim_k$ defines an equivalence relation (in fact a congruence) on $A^*.$ Clearly, if $u\thicksim_k v,$ then $|u|=|v|$ and $u\thicksim_{\ell} v$ for each positive integer $\ell \leq k.$
The notion of $k$-Abelian equivalence was first introduced by the first author in \cite{Ka80}
in connection with formal languages and decidability questions of various fundamental problems.
It was shown that the well-known Parikh Theorem on the equivalence of Parikh images of regular and context-free languages does not hold for $k$-Abelian equivalence.
In contrast various highly nontrivial decidability questions including the D0L sequence equivalence problem \cite{ER} or the Post Correspondence Problem \cite{Post}, turned out to be easily decidable in the context of
$k$-Abelian equivalence. Recently $k$-Abelian equivalence has been studied in the context of avoidance of repetitions in words (see the discussion at the beginning of \S\ref{last} on $k$-Abelian powers).
In this paper we undertake an investigation of the complexity of infinite words in the framework of $k$-Abelian equivalence. As is the case with various other notions of complexity of words, we will see that $k$-Abelian complexity is intimately linked with periodicity and can be used to detect the presence of repetitions.
Let $A$ be a finite non-empty set. For each infinite word
$\omega= a _0a_1 a_2\ldots $ with $a_i\in A,$
we denote by ${\mathcal F}_{\omega}(n)$ the set of all {\it factors} of $\omega$ of length $n,$ that is, the set of all finite words of the form $a_{i}a_{i+1}\cdots a_{i+n-1}$ with $i\geq 0.$
We set \[\rho_{\omega}(n)=\mbox{Card}({\mathcal F}_{\omega}(n)).\] The function $\rho_{\omega}:{\mathbb N} \rightarrow {\mathbb N}$ is called the {\it factor complexity function} of $\omega.$
Analogously, for each $k \in {\mathbb Z} ^+ \cup \{+\infty\}$ we define \[\mathcal {P}^{(k)}_\omega (n)=\mbox{Card}\left({\mathcal F}_{\omega}(n)/\thicksim_{k}\right).\]
The function
$\mathcal {P}^{(k)}_\omega :{\mathbb N} \rightarrow {\mathbb N},$ which counts the number of $k$-Abelian equivalence classes of factors of $\omega$ of length $n,$ is called the $k$-{\it Abelian complexity} of $\omega.$ In case $k=+\infty$ we have that $\mathcal {P}^{(+\infty)}_\omega (n)=\rho_{\omega}(n),$ while if $k=1,$
$\mathcal {P}^{(1)}_\omega (n),$ denoted $\rho^{\mbox{ab}}_{\omega}(n),$ corresponds to the usual Abelian complexity of $\omega.$
Most word complexity functions, including factor complexity \cite{MorHed1940}, maximal pattern complexity \cite{KZ}, permutation complexity \cite{AFKS, FDFF}, Abelian complexity \cite{CovHed}, and Abelian maximal pattern complexity \cite{KWZ}, may be used to detect
(and in some cases characterize) ultimately periodic words. For instance, a celebrated result due to Morse and Hedlund \cite{MorHed1940} states that an infinite word $\omega \in A^{\mathbb N}$ is ultimately periodic if and only if $\rho_{\omega}(n)\leq n$ for some $n\in {\mathbb Z}^+.$ The third author together with T. Kamae proved a similar result in the context of maximal pattern complexity with $n$ replaced by $2n-1$ (see \cite{KZ}).
Furthermore, amongst all aperiodic (meaning non-ultimately periodic) words, Sturmian words generally have the lowest possible complexity\footnote{ With respect to maximal pattern complexity, and Abelian maximal pattern complexity, Sturmian words are not the only words of lowest complexity.}. We show that these same results hold in the framework of $k$-Abelian complexity.
In order to formulate the precise link between aperiodicity and $k$-Abelian complexity, we define, for each $k \in {\mathbb Z} ^+ \cup \{+\infty\},$ an auxiliary function $q^{(k)}:{\mathbb N} \rightarrow {\mathbb N}$ by
\[ q^{(k)}(n) =
\left\{\begin{array}{ll} n+1 \,\,\,&\mbox{for}\,\, n\leq 2k-1\\
2k \,\,\,&\mbox{for}\,\, n\geq 2k
\end{array}
\right.
\]
We prove that for $\omega \in A^{\mathbb N}$, if $\mathcal{P}^{(k)}_\omega (n_0)<q^{(k)}(n_0)$ for some $k \in {\mathbb Z} ^+ \cup \{+\infty\}$ and $n_0 \geq 1,$ then $\omega$ is ultimately periodic.
This result is already well known in the special cases $k=+\infty$ and $k=1$ (see \cite{MorHed1940} and \cite{CovHed} respectively). By the Morse-Hedlund result mentioned earlier, this condition gives a characterization of ultimately periodic words in the special case $k=+\infty.$
In contrast, $k$-Abelian complexity does not yield such a characterization. Indeed, both Sturmian words and the ultimately periodic word $01^\infty = 0111\cdots$ have the same constant $2$ Abelian complexity. More generally, we shall see that the ultimately periodic word
$0^{2k-1}1^\infty$ has the same $k$-Abelian complexity as a Sturmian word.
Nevertheless $k$-Abelian complexity gives a complete characterization of Sturmian words amongst all aperiodic words. More precisely, we prove that for an aperiodic word $\omega \in A^{\mathbb N},$ the following conditions are equivalent:
\begin{itemize}
\item $\omega$ is a balanced binary word, that is, {\it Sturmian}.
\item $\mathcal {P}^{(k)}_\omega (n)=q^{(k)}(n)$ for each $k \in {\mathbb Z} ^+ \cup \{+\infty\}$ and $n\geq 1.$
\end{itemize}
Again, the special cases of $k=+\infty$ and $k=1$ were already known (see
\cite{MorHed1940} and \cite{CovHed} respectively).
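The equality $\mathcal{P}^{(k)}_\omega (n)=q^{(k)}(n)$ for Sturmian words can also be observed experimentally. The sketch below (illustrative only, not part of the mathematical development) computes the $2$-Abelian complexity of a long prefix of the Fibonacci word, a standard Sturmian word, and compares it with $q^{(2)}$:

```python
from collections import Counter

def fib_word(n):
    """Prefix of length n of the Fibonacci word (fixed point of 0 -> 01, 1 -> 0)."""
    s = "0"
    while len(s) < n:
        s = "".join("01" if c == "0" else "0" for c in s)
    return s[:n]

def signature(f, k):
    """k-Abelian class signature: counts of all factors of length <= k."""
    c = Counter()
    for L in range(1, k + 1):
        for i in range(len(f) - L + 1):
            c[f[i:i + L]] += 1
    return frozenset(c.items())

def k_abelian_complexity(w, n, k):
    """Number of k-Abelian classes among the length-n factors of w."""
    return len({signature(w[i:i + n], k) for i in range(len(w) - n + 1)})

def q(n, k):
    return n + 1 if n <= 2 * k - 1 else 2 * k

w, k = fib_word(2000), 2
print([k_abelian_complexity(w, n, k) for n in range(1, 9)])  # [2, 3, 4, 4, 4, 4, 4, 4]
print([q(n, k) for n in range(1, 9)])                        # [2, 3, 4, 4, 4, 4, 4, 4]
```

The prefix length 2000 is chosen generously so that every factor of the infinite word up to the lengths inspected occurs in the window.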
Finally we investigate the question of avoidance of $k$-Abelian $N$ powers: By a $k$-Abelian $N$ power we mean a word $U$ of the form $U=U_1U_2\ldots U_N$ such that $U_i\thicksim_k U_j$ for all $1\leq i,j\leq N.$ Using Szemer\'edi's theorem \cite{Sz}, we show that if $\omega$ has bounded $k$-Abelian complexity, then for every $D\subset {\mathbb N}$ with positive upper density and for every positive integer $N,$ there exists a $k$-Abelian $N$ power occurring in $\omega$ at some position $j\in D.$
The paper is organized as follows: In \S\ref{back} we recall some basic definitions and notation and establish various basic properties of $k$-Abelian equivalence of words. Also in \S\ref{back} we compute the rate of growth of the number of $k$-Abelian equivalence classes of words in $A^n.$
In \S\ref{periodicity} we develop the link between $k$-Abelian complexity and periodicity of words. In \S\ref{Sturmwords} we compute the $k$-Abelian complexity of Sturmian words and show that it completely characterizes Sturmian words amongst all aperiodic words. Finally in \S\ref{last} we study $k$-Abelian complexity in the context of repetitions in words.
\section{$k$-Abelian equivalence}\label{back}
\subsection{Definitions and first properties}
Given a finite non-empty set $A,$ we denote by $A^*$ the set of all finite words over $A$ including the empty word, denoted by $\varepsilon,$ by $A^+$ the set of all finite non-empty words over $A,$ by $A^{\mathbb N}$ the set of (right) infinite words over $A,$ and by
$A^{\mathbb Z}$ the set of bi-infinite words over $A.$
Given a finite word $u =a_1a_2\ldots a_n$ with $n \geq 1$ and $a_i \in A,$ we denote the length $n$ of $u$ by $|u|$ (by convention we set $|\varepsilon|=0.)$ For each $x\in A^+,$ we let $|u|_x$ denote the number of occurrences of $x$ in $u.$ For $u\in A^*,$ we denote by $\bar u$ the reverse of $u.$
A factor $u$ of $\omega=a_0a_1a_2\ldots \in A^{\mathbb N}$ is called {\it right special} (respectively {\it left special}) if there exist distinct symbols $a,b\in A$ such that
both $ua$ and $ub$ (respectively $au$ and $bu$) are factors of $\omega.$ We say $u$ is {\it bispecial} if $u$ is both left and right special.
An infinite word $\omega\in A^{\mathbb N}$ is said to be \emph{periodic} if there exists a positive integer $p$ such that
$a_{i+p} = a_i$ for all indices $i.$ It is said to be \emph{ultimately periodic} if $a_{i+p} = a_i$ for all sufficiently large $i$.
It is said to be \emph{aperiodic} if it is not ultimately periodic.
Sturmian words are the {\it simplest} aperiodic infinite words;
they are infinite words over a binary alphabet having exactly $n+1$ factors of length
$n$ for each $n \geq 0.$ Their origin can be traced back to the astronomer J. Bernoulli III in 1772. A fundamental result due to Morse and Hedlund \cite{MorHed1940} states that each aperiodic (meaning non-ultimately periodic) infinite word must contain at least $n+1$ factors of each length $n\geq 0.$ Thus Sturmian words are those aperiodic words of lowest factor complexity. They arise naturally in many different areas of mathematics including combinatorics, algebra, number theory, ergodic theory, dynamical systems and differential equations. Sturmian words are also of great importance in theoretical physics and in theoretical computer science and are used in
computer graphics as digital approximation of straight lines.
If $\omega \in \{a,b\}^{\mathbb N}$ is Sturmian, then for each positive integer $n$ there exists a unique right special (respectively left special) factor of length $n,$ and one is the reversal of the other. In particular, if $x$ is a bispecial factor, then $x$ is a {\it palindrome}, i.e., $x =\bar x.$ For more on Sturmian words, we refer the reader to \cite{Lothaire1983book}.
\begin{definition}\label{df}\rm {Let $k \in {\mathbb Z} ^+ \cup \{+\infty\}.$ We say two words $u,v\in A^+$ are $k$-{\it Abelian equivalent} and write $u\thicksim_k v,$ if $|u|_x=|v|_x$ for all words $x$ of length $|x|\leq k.$ }
\end{definition}
\noindent We note that if $u,v\in A^+$ and $|u|=|v|\leq k,$ then $u\thicksim_k v$ if and only if $u=v.$
\begin{example}The words $u=010110$ and $v=011010$ are $3$-Abelian equivalent but not $4$-Abelian equivalent since the prefix $0101$ of $u$ does not occur in $v.$ The words $u=0110$ and $v=1101$ are not $2$-Abelian equivalent (since they are not Abelian equivalent) yet for every word $x$ of length $2$ we have $|u|_x=|v|_x.$
\end{example}
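Definition~\ref{df} is mechanical to check. The following short sketch (not part of the mathematical development) verifies the examples above:

```python
from collections import Counter

def factor_counts(w, k):
    """Occurrence counts |w|_x of every non-empty factor x with |x| <= k."""
    c = Counter()
    for L in range(1, k + 1):
        for i in range(len(w) - L + 1):
            c[w[i:i + L]] += 1
    return c

def k_abelian_equivalent(u, v, k):
    return factor_counts(u, k) == factor_counts(v, k)

print(k_abelian_equivalent("010110", "011010", 3))  # True
print(k_abelian_equivalent("010110", "011010", 4))  # False
print(k_abelian_equivalent("0110", "1101", 2))      # False: not Abelian equivalent
```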
The next lemma
gives several equivalent formulations of $k$-Abelian equivalence.
For example,
item \eqref{item:lessk} corresponds to Definition~\ref{df},
and item \eqref{item:prefsuff} corresponds to another common definition:
words $u$ and $v$ of length at least $k - 1$ are $k$-Abelian equivalent
if they share the same prefixes and suffixes of length $k - 1$ and
if $|u|_t = |v|_t$ for every word $t$ of length $k$.
\begin{lemma} \label{prefixsuffix}
Let $u$ and $v$ be words of length at least $k - 1$ and
let $|u|_t = |v|_t$ for every word $t$ of length $k$.
The following are equivalent:
\begin{enumerate}
\item \label{item:lessk}
$|u|_s = |v|_s$ for all $s \in A^{\leq k - 1}$,
\item \label{item:k-1}
$|u|_s = |v|_s$ for all $s \in A^{k - 1}$,
\item \label{item:prefsuff}
$\pref{k - 1}(u) = \pref{k - 1}(v)$ and
$\suff{k - 1}(u) = \suff{k - 1}(v)$,
\item \label{item:pref}
$\pref{k - 1}(u) = \pref{k - 1}(v)$,
\item \label{item:suff}
$\suff{k - 1}(u) = \suff{k - 1}(v)$,
\item \label{item:prefsuffi}
$\pref{i}(u) = \pref{i}(v)$ and
$\suff{k - 1 - i}(u) = \suff{k - 1 - i}(v)$
for some $i \in \{0, \dots, k - 1\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{item:lessk} $\Rightarrow$ \eqref{item:k-1}:
Clear.
\eqref{item:k-1} $\Rightarrow$ \eqref{item:prefsuff}:
Let $\{t_1, \dots, t_n\}$ be the multiset
of factors of $u$ (and of $v$) of length $k$.
The multiset of factors of $u$ of length $k - 1$ is
\begin{equation*}
\{\pref{k - 1}(u)\} \cup \{\suff{k - 1}(t_1), \dots, \suff{k - 1}(t_n)\},
\end{equation*}
and the multiset of factors of $v$ of length $k - 1$ is
\begin{equation*}
\{\pref{k - 1}(v)\} \cup \{\suff{k - 1}(t_1), \dots, \suff{k - 1}(t_n)\}.
\end{equation*}
These multisets must be the same, so $\pref{k - 1}(u) = \pref{k - 1}(v)$.
Similarly, $\suff{k - 1}(u) = \suff{k - 1}(v)$.
\eqref{item:prefsuff} $\Rightarrow$ \eqref{item:pref}, \eqref{item:suff}:
Clear.
\eqref{item:pref} or \eqref{item:suff} $\Rightarrow$ \eqref{item:prefsuffi}:
Clear.
\eqref{item:prefsuffi} $\Rightarrow$ \eqref{item:lessk}:
Let $\{t_1, \dots, t_n\}$ be the multiset
of factors of $u$ (and of $v$) of length $k$.
Every
\begin{equation*}
s \in A^{k - 1} \smallsetminus \{\pref{k - 1}(u), \suff{k - 1}(u)\}
\end{equation*}
appears in the multiset
\begin{equation} \label{eq:prefsuffmultiset}
\{\pref{k - 1}(t_1), \dots, \pref{k - 1}(t_n)\} \cup
\{\suff{k - 1}(t_1), \dots, \suff{k - 1}(t_n)\}
\end{equation}
$2 |u|_s$ times.
A word $s \in \{\pref{k - 1}(u), \suff{k - 1}(u)\}$
appears $2 |u|_s - 1$ times if $\pref{k - 1}(u) \ne \suff{k - 1}(u)$,
and $2 |u|_s - 2$ times if $\pref{k - 1}(u) = \suff{k - 1}(u)$.
Similarly, every
\begin{equation*}
s \in A^{k - 1} \smallsetminus \{\pref{k - 1}(v), \suff{k - 1}(v)\}
\end{equation*}
appears $2 |v|_s$ times, and
a word $s \in \{\pref{k - 1}(v), \suff{k - 1}(v)\}$
appears $2 |v|_s - 1$ times if $\pref{k - 1}(v) \ne \suff{k - 1}(v)$,
and $2 |v|_s - 2$ times if $\pref{k - 1}(v) = \suff{k - 1}(v)$.
If some words appear an odd number of times in \eqref{eq:prefsuffmultiset},
then these must be $\pref{k - 1}(u)$ and $\suff{k - 1}(u)$,
and they must also be $\pref{k - 1}(v)$ and $\suff{k - 1}(v)$.
It follows that $|u|_s = |v|_s$ for every $s \in A^{k - 1}$.
(In this case the assumption \eqref{item:prefsuffi} was not needed.)
If all words appear an even number of times in \eqref{eq:prefsuffmultiset},
then necessarily
$\pref{k - 1}(u) = \suff{k - 1}(u)$ and $\pref{k - 1}(v) = \suff{k - 1}(v)$.
From \eqref{item:prefsuffi} it follows that
$\pref{k - 1}(u) = \pref{k - 1}(v)$ and $\suff{k - 1}(u) = \suff{k - 1}(v)$,
and thus $|u|_s = |v|_s$ for every $s \in A^{k - 1}$.
The fact that $|u|_s = |v|_s$ also for every $s$ of length less than $k - 1$
can be proved in a similar way.
\end{proof}
\noindent The next lemma lists some basic facts on $k$-Abelian equivalence:
\begin{lemma} \label{lem:basic}
Let $u, v \in A^*$ and $k \geq 1$.
\begin{itemize}
\item If $|u| = |v| \leq 2 k - 1$ and $u \kae{k} v$, then $u = v$.
\item If $u \kae{k} v$, then $u \kae{k'} v$ for all $k' \leq k$.
\item If $u_1 \kae{k} v_1$ and $u_2 \kae{k} v_2$,
then $u_1 u_2 \kae{k} v_1 v_2$.
\end{itemize}
\end{lemma}
The bound $2k-1$ in Lemma~\ref{lem:basic} is optimal, as for each positive integer $k$ there exist words $u\neq v$ of length $2k$ such that
$u\thicksim_{k} v.$ For example, the words $u=0^{k-1}010^{k-1}$ and $v=0^{k-1}100^{k-1}$ of length $2k$ are readily verified to be $k$-Abelian equivalent (see Proposition~\ref{simplificationprop}).
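These small examples are easy to check mechanically. The following Python sketch (the function name and setup are ours, not part of the development) tests $k$-Abelian equivalence directly from the definition and verifies the pair above for several values of $k$:

```python
from collections import Counter

def k_abelian_equivalent(u, v, k):
    """u ~_k v iff u and v contain the same number of occurrences
    of every factor of length at most k."""
    if len(u) != len(v):
        return False
    counts = lambda w, m: Counter(w[i:i + m] for i in range(len(w) - m + 1))
    return all(counts(u, m) == counts(v, m) for m in range(1, k + 1))

# u = 0^{k-1} 01 0^{k-1} and v = 0^{k-1} 10 0^{k-1} are distinct words of
# length 2k that are k-Abelian equivalent, so the bound 2k - 1 is optimal.
for k in range(1, 6):
    u = "0" * (k - 1) + "01" + "0" * (k - 1)
    v = "0" * (k - 1) + "10" + "0" * (k - 1)
    assert u != v and k_abelian_equivalent(u, v, k)
    assert not k_abelian_equivalent(u, v, k + 1)
```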
\begin{lemma}\label{centralword} Fix $2\leq k<+\infty.$ Suppose $aub\thicksim_{k} cvd$ with $a,b,c,d\in A$ and $u,v\in A^*.$ Then $u\thicksim_{k-1}v.$
\end{lemma}
\begin{proof} Let $x\in A^*$ with $|x|\leq k-1.$ We can assume that $|x|<|aub|,$ for otherwise $0=|u|_x=|v|_x.$ If $x$ is neither a prefix nor a suffix of $aub,$ then by Lemma~\ref{prefixsuffix} $x$ is neither a prefix nor a suffix of $cvd,$ and hence $|u|_x=|aub|_x=|cvd|_x=|v|_x.$ If $x$ is either a prefix of $aub$ or a suffix of $aub$ but not both, then $|u|_x=|aub|_x-1=|cvd|_x-1=|v|_x.$ Finally, if $x$ is both a prefix and a suffix of $aub,$ then
$|u|_x=|aub|_x-2=|cvd|_x-2=|v|_x.$
\end{proof}
\subsection{A first connection to Sturmian words}
\noindent The next theorem gives a complete classification of pairs of $k$-Abelian equivalent words of length $2k$ and establishes a first link to Sturmian words:
\begin{theorem}\label{2k} Fix a positive integer $k,$ and let $u,v\in A^*$ be distinct words of length $2k.$ Then $u\thicksim_{k}v$ if and only if there exist distinct letters $a,b\in A,$ a Sturmian word $\omega \in \{a,b\}^{\mathbb N}$ and a right special factor $x$ of $\omega$ of length $k-1$ (or empty in case $k=1)$ such that
\[u=xab\bar x\,\,\,\,\,\,\mbox{and} \,\,\,\,\,\, v=xba\bar x.\]
In particular $u$ and $v$ are both factors of the same Sturmian word $\omega.$\end{theorem}
\begin{remark}\label{bispecial}\rm{It follows that if $u$ and $v$ are distinct $k$-Abelian equivalent words of length $2k,$ then $u$ and $v$ are words over a binary alphabet and are in fact factors of the same Sturmian word $\omega.$ Indeed, if $B$ is a bispecial factor of $\omega,$ then both $BabB$ and $BbaB$ are factors of $\omega.$
Also, if $x$ is a right special factor of $\omega,$ then there exists a bispecial factor $B$ of $\omega$ with $x$ a suffix of $B$ and $\bar x$ a prefix of $B.$
Thus both $xab\bar x$ and $xba \bar x$ are factors of $\omega.$}
\end{remark}
\noindent We will need the next result applied to Sturmian words, but we prove it more generally for episturmian words. We refer the reader to \cite{DJP} for the definition and basic properties of episturmian words.
\begin{proposition}\label{simplificationprop} Fix a positive integer $k\geq 2.$ Let $u$ and $v$ be factors of the same episturmian word $\omega$. Then $u$ and $v$ are $k$-Abelian equivalent if and only if $u$ and $v$ are $(k-1)$-Abelian equivalent and share a common prefix and a common suffix of length $\mbox{min}\{|u|,k-1\}.$ Thus, $u$ and $v$ are $k$-Abelian equivalent if and only if $u$ and $v$ are Abelian equivalent and share a common prefix and a common suffix of length $\mbox{min}\{|u|,k-1\}.$
\end{proposition}
\begin{proof} One direction follows immediately from Lemma~\ref{prefixsuffix}. Next suppose that $u$ and $v$ are $(k-1)$-Abelian equivalent factors of the same episturmian word $\omega,$ and that $u$ and $v$ share a common prefix and a common suffix of length $\mbox{min}\{|u|, k-1\}.$ To prove that $u\thicksim _k v$ it suffices to show that whenever $axb\in {\mathcal F}_{\omega}(k)$ (with $a,b \in A$ and $x\in A^*),$ we have $|u|_{axb}=|v|_{axb}.$
First let us suppose that $ax$ is not a right special factor of $\omega,$ so that every occurrence in $\omega$ of $ax$ is an occurrence of $axb.$ Then, if $ax$ is not a suffix of $u$ (and hence not a suffix of $v)$ we obtain
\[|u|_{axb} =|u|_{ax}=|v|_{ax}=|v|_{axb}.\]
On the other hand if $ax$ is a suffix of $u$ (and hence also a suffix of $v)$ we have
\[|u|_{axb}=|u|_{ax}-1=|v|_{ax}-1=|v|_{axb}.\]
Similarly, in case $xb$ is not a left special factor of $\omega$ we obtain $|u|_{axb}=|v|_{axb}.$
Thus it remains to consider the case when $ax$ is right special in $\omega$ and $xb$ is left special in $\omega.$
In this case $x$ is bispecial and $a=b.$ For each $c\in A,$ let $n_c=|u|_{axc}$ and $n'_c=|v|_{axc}.$ We must show that $n_a=n'_a.$ However we know that $n_c=n'_c$ for all $c\neq a$ since $xc$ is not left special in $\omega.$
Now, if $ax$ is not a suffix of $u$ (and hence not a suffix of $v)$ we have
\[ \sum _{c\in A} n_c = |u|_{ax} =|v|_{ax} = \sum _{c\in A} n'_c\]
whence $n_a=n'_a.$ On the other hand if $ax$ is a suffix of $u$ (and hence a suffix of $v)$ then
\[ \sum _{c\in A} n_c = |u|_{ax}-1 =|v|_{ax}-1 = \sum _{c\in A} n'_c\]
whence $n_a=n'_a$ as required.
\end{proof}
\begin{remark} \rm{The following example illustrates that the assumption in Proposition~\ref{simplificationprop} that $u$ and $v$ are factors of the same episturmian word is necessary: Let $u=aabb$ and $v=abab.$ Then $u$ and $v$ are Abelian equivalent and share a common prefix and a common suffix of length $1,$ yet they are not $2$-Abelian equivalent.}
\end{remark}
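The counterexample in the remark can be verified directly; a minimal check (the helper name is ours):

```python
from collections import Counter

def factor_counts(w, m):
    """Multiset of factors of w of length m, with multiplicities."""
    return Counter(w[i:i + m] for i in range(len(w) - m + 1))

u, v = "aabb", "abab"
# Abelian equivalent, with a common prefix and a common suffix of length 1 ...
assert factor_counts(u, 1) == factor_counts(v, 1)
assert u[0] == v[0] and u[-1] == v[-1]
# ... yet not 2-Abelian equivalent: "aa" occurs in u but not in v.
assert factor_counts(u, 2) != factor_counts(v, 2)
assert factor_counts(v, 2)["aa"] == 0
```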
\begin{proof}[Proof of Theorem~\ref{2k}]
We start by showing that if $\omega \in \{a,b\}^{\mathbb N}$ is a Sturmian word, and $x$ a right special factor of $\omega$ of length $k-1,$ then $u=xab\bar x $ and $v= xba \bar x$ are $k$-Abelian equivalent. This follows from Proposition~\ref{simplificationprop} since $u$ and $v$ share a common prefix and a common suffix of length $k-1$ and are Abelian equivalent.
Next we suppose that $u$ and $v$ are distinct $k$-Abelian equivalent words of length $2k$ and show that both $u$ and $v$ have the required form.
We proceed by induction on $k.$ In case $k=1,$ we have that $u$ and $v$ are distinct Abelian equivalent words of length $2$ whence $u$ and $v$ may be written in the form $u=ab$ and $v=ba$ for some $a\neq b$ in $A.$
Next suppose the result of Theorem~\ref{2k} is true for $k-1$ and we shall prove it for $k.$ So let $u$ and $v$ be distinct $k$-Abelian equivalent words of length $2k$ with $k>1.$ Then by Lemma~\ref{prefixsuffix} we can write $u=a'u'b'$ and $v=a'v'b'$ for some
$a',b'\in A$ and $u',v'\in A^*$ where $|u'|=|v'|= 2(k-1)\geq 2.$
Since $u$ and $v$ are distinct, it follows that $u'\neq v'.$
Also, by Lemma~\ref{centralword} it follows that $u'\thicksim _{k-1} v'.$ Thus by induction hypothesis, there exist distinct letters $a,b\in A$ and a Sturmian word $\omega \in \{a,b\}^{\mathbb N}$ such that $u'$ and $v'$ are both factors of $\omega$
of the form $u'=xab\bar x$ and $v'=xba \bar x$ for some right special factor $x$ of $\omega$ of length $k-2.$
Thus we can write $u=a'xab\bar x b'$ and $v=a'xba\bar x b'.$ Since $u\thicksim _{k} v,$ $|a'xa|=k,$ and $a\neq b$ it follows that $a'x$ must occur in $v'$ and hence $a' \in \{a,b\}.$ Similarly we deduce that $b'\in \{a,b\}.$
Let us first suppose that $x\neq \bar x.$ Then $a'xa$ must occur in $v'$ and $a\bar xb'$ must occur in $u'.$ Hence both $a'xa$ and $a\bar x b'$ are factors of $\omega.$ Moreover, since $x\neq \bar x$ it follows that $x$ is not left special in $\omega$ and $\bar x$ is not right special in $\omega.$ Hence every occurrence of $x$ in $\omega$ is preceded by $a'$ and every occurrence of $\bar x$ in $\omega$ is followed by $b'.$
Since the factors of $\omega$ are closed under reversal, we deduce that $a'=b'$ and $a'x$ is a right special factor of $\omega.$ Moreover, since
$u'$ and $v'$ are both factors of $\omega$ beginning in $x$ and ending in $\bar x,$ it follows that $u=a'xab\bar xa'$ and $v=a'xba\bar x a'$ are both factors of $\omega.$
Finally suppose $x=\bar x,$ so that $x$ is a bispecial factor of $\omega.$ We may write the increasing sequence of bispecial factors $\varepsilon =B_0,B_1,\ldots ,x= B_n,B_{n+1},\ldots $ so that $x$ is the $n$th bispecial factor of $\omega.$ We recall that associated to $\omega$ is a sequence $(a_i)_{i\geq 0} \in A^{\mathbb N}$ (called the {\it directive word }of $\omega)$ defined by the condition that $a_iB_i$ is right special in $\omega.$ (See for instance \cite{RiZa}.)
Without loss of generality we can suppose that $a'=a.$ We claim $b'=a.$ Suppose to the contrary that $b'=b.$ Then both $axa$ and $b\bar x b=bxb$ are factors of $v',$ contradicting the fact that $\omega$ is balanced.
Hence we must have $a'=b'=a$ and so
$u=axab\bar x a$ and $v=axba \bar x a.$ Now $x$ is a bispecial factor of the Sturmian word $\omega.$ If $ax$ is a right special factor of $\omega$ then we are done by Remark~\ref{bispecial}. Otherwise, if $bx$ is a right special factor of $\omega,$ then this means that $a_n=b$ where $a_n$ is the $n$th entry of the directive word of $\omega.$ Let $\omega'$ be a Sturmian word whose directive word $(b_i)_{i\geq 0} $ is defined by $b_i=a_i$ for $i\neq n,$ and $b_n=a.$ Then $x$ is a bispecial factor of $\omega'$ and $ax$ is a right special factor of $\omega'.$ It follows from Remark~\ref{bispecial} that both $u$ and $v$ are factors of $\omega'.$
\end{proof}
\noindent As an immediate consequence of Theorem~\ref{2k} we have:
\begin{corollary} Let $u\in A^*$ be of the form $u=vxab\bar x w$ where $x$ is a right special factor of length $k-1$ of a Sturmian word. Set $u'= v x ba \bar x w.$
Then $u\thicksim_{k} u'.$
\end{corollary}
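The corollary is easy to test on examples: for $k=2$ the word $x=0$ is a right special factor of length $1$ of the Fibonacci word, so swapping the central $01$ to $10$ preserves $2$-Abelian equivalence regardless of the context words $v,w$. A quick sketch (the helper is ours; the padding words below are arbitrary):

```python
from collections import Counter

def k_abelian_equivalent(u, v, k):
    counts = lambda w, m: Counter(w[i:i + m] for i in range(len(w) - m + 1))
    return len(u) == len(v) and all(
        counts(u, m) == counts(v, m) for m in range(1, k + 1))

# x = "0" is right special of length 1 in the Fibonacci word (both 00 and 01
# occur in it), so v x 01 x~ w  ~_2  v x 10 x~ w for any context words v, w.
x = "0"
for v, w in [("", ""), ("10", "01"), ("11010", "00111")]:
    u1 = v + x + "01" + x[::-1] + w
    u2 = v + x + "10" + x[::-1] + w
    assert k_abelian_equivalent(u1, u2, 2)
```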
\subsection{The number of $k$-Abelian classes in $A^n$}
Here we shall estimate the number of $k$-Abelian equivalence classes of
words in $A^n.$ Fix
$k \geq 1$ and let $m \geq 2$ be the cardinality of the set $A.$
\begin{lemma} \label{lem:increasing}
The number of $k$-Abelian equivalence classes of $A^{n + 1}$
is at least as large as
the number of $k$-Abelian equivalence classes of $A^{n}$.
\end{lemma}
\begin{proof}
If $k = 1$ or $n < k - 1$, then the claim is clear.
Otherwise, let $B$ be a set of representatives
of the $k$-Abelian equivalence classes of $A^n$.
The set $A B$ has $m$ times as many words as $B$.
To prove the lemma, we will show that
there can be at most $m$ words in $A B$ that are $k$-Abelian equivalent.
Let $a \in A$ and
let $a u_0, \dots, a u_m \in A B$ be $k$-Abelian equivalent.
It needs to be shown that two of these words are equal.
Two of these words must have the same $k$th letter;
let these be $au$ and $av$.
Because also $\pref{k - 1}(au) = \pref{k - 1}(av)$,
it follows that $\pref{k}(au) = \pref{k}(av)$.
If $t \in A^k$,
then either $|u|_t = |au|_t = |av|_t = |v|_t$
(if $t \ne \pref{k}(au)$),
or $|u|_t = |au|_t - 1 = |av|_t - 1 = |v|_t$
(if $t = \pref{k}(au)$).
Thus $u$ and $v$ are $k$-Abelian equivalent and,
by the definition of $B$, $u = v$.
This proves the claim.
\end{proof}
Let $s_1, s_2 \in A^{k - 1}$ and let
\begin{equation*}
S(s_1, s_2, n) = A^n \cap s_1 A^* \cap A^* s_2
\end{equation*}
be the set of words of length $n$ that start with $s_1$ and end with $s_2$.
For every word $w \in S(s_1, s_2, n)$ we can define a function
\begin{equation*}
f_w: A^k \to \{0, \dots, n - k + 1\}, \ f_w(t) = |w|_t.
\end{equation*}
If $u, v \in S(s_1, s_2, n)$, then $u \kae{k} v$ if and only if $f_u = f_v$.
To count the number of $k$-Abelian equivalence classes,
we need to count the number of the functions $f_w$.
Not every function
\begin{math}
f: A^k \to \{0, \dots, n - k + 1\}
\end{math}
is possible.
It must satisfy
\begin{equation} \label{eq:sumf}
\sum_{t \in A^k} f(t) = n - k + 1,
\end{equation}
and there are also other restrictions,
which are determined in Lemma \ref{lem:euler}.
If a function
\begin{math}
f: A^k \to \mathbb N_0
\end{math}
is given, then a directed multigraph $G_f$ can be defined as follows:
the set of vertices is $A^{k-1}$,
and if $t = s_1 a = b s_2$, where $a, b \in A$,
then there are $f(t)$ edges from $s_1$ to $s_2$.
If $f = f_w$, then this multigraph is related to the Rauzy graph of $w$.
In the next lemma,
$\deg^-$ denotes the indegree and $\deg^+$ the outdegree of a vertex in $G_f$.
\begin{lemma} \label{lem:euler}
For a function
\begin{math}
f: A^k \to \mathbb N_0
\end{math}
and words $s_1, s_2 \in A^{k - 1}$, the following are equivalent:
\begin{enumerate}[(i)]
\item \label{euler1}
there is a number $n$ and a word $w \in S(s_1, s_2, n)$
such that $f = f_w$,
\item \label{euler2}
there is an Eulerian path from $s_1$ to $s_2$ in $G_f$,
\item \label{euler3}
the underlying graph of $G_f$ is connected,
except possibly for some isolated vertices,
and $\deg^-(s) = \deg^+(s)$ for every vertex $s$,
except that if $s_1 \ne s_2$,
then $\deg^-(s_1) = \deg^+(s_1) - 1$ and $\deg^-(s_2) = \deg^+(s_2) + 1$,
\item \label{euler4}
the underlying graph of $G_f$ is connected,
except possibly for some isolated vertices, and
\begin{equation} \label{eq:fsystem}
\sum_{a \in A} f(as) = \sum_{a \in A} f(sa) + c_s
\qquad (s \in A^{k - 1}),
\end{equation}
where
\begin{equation*}
c_s = \begin{cases}
-1, &\text{if $s = s_1 \ne s_2$}, \\
1, &\text{if $s = s_2 \ne s_1$}, \\
0, &\text{otherwise}.
\end{cases}
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{euler1} $\Leftrightarrow$ \eqref{euler2}:
$w = a_1 \dots a_n \in S(s_1, s_2, n)$ and $f = f_w$ if and only if
\begin{equation*}
s_1 = a_1 \dots a_{k-1}
\rightarrow a_2 \dots a_{k}
\rightarrow \dots
\rightarrow a_{n - k + 2} \dots a_{n} = s_2
\end{equation*}
is an Eulerian path in $G_f$.
\eqref{euler2} $\Leftrightarrow$ \eqref{euler3}:
This is well known.
\eqref{euler3} $\Leftrightarrow$ \eqref{euler4}:
\eqref{euler4} is just a reformulation of \eqref{euler3}
in terms of the function $f$.
\end{proof}
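Condition \eqref{euler4} of Lemma~\ref{lem:euler} is purely finitary and can be checked mechanically. The sketch below (naming conventions are ours) tests the degree equations and the connectivity of the underlying graph of $G_f$:

```python
from collections import Counter
from itertools import product

def realizable(f, s1, s2, A, k):
    """Condition (iv) of the lemma: degree balance at every vertex of G_f,
    plus connectivity of the underlying graph up to isolated vertices."""
    vertices = ["".join(p) for p in product(A, repeat=k - 1)]
    def c(s):
        if s1 != s2 and s == s1:
            return -1
        if s1 != s2 and s == s2:
            return 1
        return 0
    # sum_a f(as) = sum_a f(sa) + c_s for every s in A^{k-1}
    for s in vertices:
        if sum(f.get(a + s, 0) for a in A) != sum(f.get(s + a, 0) for a in A) + c(s):
            return False
    # connectivity of the underlying undirected graph, ignoring isolated vertices
    adj = {s: set() for s in vertices}
    for t, mult in f.items():
        if mult > 0:
            adj[t[:-1]].add(t[1:])
            adj[t[1:]].add(t[:-1])
    active = [s for s in vertices if adj[s]]
    if not active:
        return True
    seen, stack = {active[0]}, [active[0]]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return all(s in seen for s in active)

# f_w for w = 01101 and k = 2 (factors 01, 11, 10, 01) is realizable ...
assert realizable(Counter(["01", "11", "10", "01"]), "0", "1", "01", 2)
# ... but two disjoint loops are not: no word has exactly the factors 00, 11.
assert not realizable(Counter({"00": 1, "11": 1}), "0", "0", "01", 2)
```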
In the next lemma we consider the independence of homogeneous systems
related to the equations \eqref{eq:fsystem} and \eqref{eq:sumf}.
\begin{lemma} \label{lem:degrsyst}
Let $x_t$, where $t \in A^k$, be $m^k$ unknowns.
The system of equations
\begin{equation} \label{eq:system}
\sum_{a \in A} x_{as} = \sum_{a \in A} x_{sa}
\qquad (s \in A^{k-1})
\end{equation}
is not independent, but all of its proper subsystems are.
If we add the equation
\begin{equation} \label{eq:eq}
\sum_{t \in A^k} x_{t} = 0
\end{equation}
to one of these independent systems, then the system remains independent.
\end{lemma}
\begin{proof}
The sum of the equations \eqref{eq:system} is a trivial identity
\begin{math}
\sum_{t \in A^k} x_{t} = \sum_{t \in A^k} x_{t},
\end{math}
so every one of these equations follows from the other $m^{k-1}-1$
equations. If $s_1, s_2 \in A^{k-1}$ are two different words,
then
\begin{math}
x_t = |s_1 s_2|_t
\end{math}
for all $t$ is a solution of all the equations, except those with $s
= s_1$ or $s = s_2$. This proves that all proper subsystems are
independent. Addition of \eqref{eq:eq} keeps them independent,
because
\begin{math}
x_t = 1
\end{math}
for all $t$ is a solution of the system \eqref{eq:system} but not of
\eqref{eq:eq}.
\end{proof}
\begin{theorem}
Let $k\geq 1$ and $m \geq 2$ be fixed numbers and let $A$ be an
$m$-letter alphabet. The number of $k$-Abelian equivalence classes
of $A^n$ is
\begin{math}
\Theta(n^{m^{k} - m^{k - 1}}).
\end{math}
\end{theorem}
\begin{proof}
Let $n \geq 2k-2$,
\begin{math}
f: A^k \to \{0, \dots, n-k+1\}
\end{math}
and $u,v \in A^{k-1}$. By Lemma \ref{lem:euler}, there is a
word $w \in S(u,v,n)$ such that $f = f_w$ only if $f$ satisfies
\eqref{eq:sumf} and \eqref{eq:fsystem}. Consider the system formed
by these equations. The function $f_w$ satisfies the equations for
every $w \in S(u,v,n)$, so the system has a solution. By Lemma
\ref{lem:degrsyst}, the rank of the coefficient matrix of the system
is $m^{k-1}$, so the general solution of this system is of the form
\begin{equation*}
f(r_i) = \sum_{j=1}^{m^k - m^{k-1}} a_{ij} f(s_j) + b_i
\qquad (i = 1, \dots, m^{k-1}),
\end{equation*}
where the words $r_i$ and $s_j$ form the set $A^k$ and $a_{ij},
b_i$ are rational numbers. Because
\begin{math}
0 \leq f(s_j) \leq n-k+1,
\end{math}
there are
\begin{math}
O(n^{m^k-m^{k-1}})
\end{math}
possible functions $f$.
Let $u=v$ and consider the system of equations \eqref{eq:fsystem}.
By Lemma \ref{lem:degrsyst}, the general solution of this
homogeneous system is of the form
\begin{equation} \label{eq:sol}
f(r_i) = \sum_{j=1}^{m^k - m^{k-1} + 1} a_{ij} f(s_j)
\qquad (i = 1, \dots, m^{k-1}-1),
\end{equation}
where the words $r_i$ and $s_j$ form the set $A^k$ and $a_{ij}$
are rational numbers. The coefficients $a_{ij}$ do not depend on
$n$. Let
\begin{equation*}
c = \max \set{ \textstyle \sum_{j=1}^{m^k - m^{k-1} + 1}
|a_{ij}|}{1 \leq i \leq m^{k-1}-1}
\end{equation*}
and let $d$ be the least common multiple of the denominators of the
numbers $a_{ij}$. Every constant function $f$ satisfies the system
of equations. In particular,
\begin{math}
f(t) = \lfloor {n}/{2m^k} \rfloor
\end{math}
for all $t$ is a solution of the system. If we let
\begin{equation*}
f(s_j) = \left\lfloor \frac{n}{2m^k} \right\rfloor + b_j,
\quad \text{where} \quad
|b_j| < \frac{n}{2cm^k} - 1
\quad \text{and} \quad
d | b_j,
\end{equation*}
then the numbers
\begin{equation*}
f(r_i) = \left\lfloor \frac{n}{2m^k} \right\rfloor
+ \sum_{j=1}^{m^k - m^{k-1} + 1} a_{ij} b_j
\end{equation*}
given by \eqref{eq:sol} are integers and
\begin{math}
1 \leq f(t) \leq n / m^k - 1
\end{math}
for all $t \in A^k$. Because $f(t) \geq 1$ for all $t$, the
underlying graph of $G_f$ is connected, so by Lemma \ref{lem:euler}
there is a word $w \in S(u, v, |w|)$ such that $f = f_w$. Because
$f(t) \leq n / m^k - 1$ for all $t$, we get
\begin{equation*}
|w| = \sum_{t \in A^k} f(t) + k - 1
\leq n - m^k + k - 1 < n .
\end{equation*}
There are
\begin{math}
\Theta(n^{m^k - m^{k-1} + 1})
\end{math}
ways to choose the numbers $b_j$. Every choice gives a different
function $f = f_w$ for some $w \in S(u, v, |w|)$ such that $|w| <
n$. Let these words be $w_1, \dots, w_N$. No two of them are
$k$-Abelian equivalent. Among these words there are at least $N / n$
words of equal length.
By Lemma \ref{lem:increasing},
there are at least $N / n$ words of length $n$
such that no two of them are $k$-Abelian equivalent,
and $N / n = \Omega(n^{m^k - m^{k - 1}})$.
\end{proof}
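The growth rate in the theorem can be observed on small instances by brute force. The sketch below (our own helper, not part of the development) counts classes via the invariant from Lemma~\ref{prefixsuffix}, namely the prefix of length $k-1$ together with the occurrence counts of the factors of length $k$; the enumeration is exponential in $n$, so only small parameters are feasible:

```python
from collections import Counter
from itertools import product

def num_k_abelian_classes(m, k, n):
    """Brute-force count of k-Abelian classes of words of length n over an
    m-letter alphabet, using the invariant (prefix of length k-1,
    occurrence counts of factors of length k)."""
    A = "abcdefgh"[:m]
    signatures = set()
    for w in map("".join, product(A, repeat=n)):
        kcounts = Counter(w[i:i + k] for i in range(len(w) - k + 1))
        signatures.add((w[:k - 1], tuple(sorted(kcounts.items()))))
    return len(signatures)

assert num_k_abelian_classes(2, 1, 3) == 4   # Abelian classes of {0,1}^3
assert num_k_abelian_classes(2, 2, 4) == 14  # only 0010 ~ 0100 and 1011 ~ 1101 coincide
# monotonicity in n, as in Lemma lem:increasing:
assert num_k_abelian_classes(2, 2, 5) >= num_k_abelian_classes(2, 2, 4)
```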
\section{$k$-Abelian complexity \& periodicity}\label{periodicity}
In this section we prove that if $\mathcal {P}^{(k)}_\omega (n_0)<q^{(k)}(n_0)$ for some $k\in {\mathbb Z}^+ \cup \{+\infty\}$ and $n_0 \geq 1,$ then $\omega$ is ultimately periodic (see Corollary~\ref{evperiodic} below). For this purpose we introduce an auxiliary family of equivalence relations $\mathcal {R}_k$ on $A^*$ defined as follows: Let $k\in {\mathbb Z}^+ \cup \{+\infty\}.$ Given $u,v \in A^*,$ we write $u\mathcal {R}_k v$ if and only if $u\thicksim _{1} v$ (i.e., $u\thicksim_{ab}v$) and $u$ and $v$ share a common prefix and a common suffix of length $k-1.$ In case $|u|<k-1,$ then $u\mathcal {R}_k v$ means $u=v.$
It follows immediately from Lemma~\ref{prefixsuffix} that
\begin{equation}\label{imp} u\thicksim_{k} v \Longrightarrow u\mathcal {R}_k v.\end{equation}
In general the converse is not true: For example, taking $u=0011$ and $v=0101$ we see that $u\mathcal {R}_2v$ yet $u$ and $v$ are not $2$-Abelian equivalent.
However, in view of Proposition~\ref{simplificationprop} we have:
\begin{corollary} Let $u$ and $v$ be two factors of a Sturmian word $\omega$, and $k\in {\mathbb Z}^+ \cup \{+\infty\}.$ Then $u\thicksim_{k} v$ if and only if $u\mathcal {R}_k v.$
\end{corollary}
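Both the implication (\ref{imp}) and the failure of its converse are easy to confirm computationally on a small alphabet. A sketch (helper names are ours):

```python
from collections import Counter
from itertools import combinations, product

def counts(w, m):
    return Counter(w[i:i + m] for i in range(len(w) - m + 1))

def kae(u, v, k):
    """k-Abelian equivalence, directly from the definition."""
    return len(u) == len(v) and all(
        counts(u, m) == counts(v, m) for m in range(1, k + 1))

def R(u, v, k):
    """u R_k v: Abelian equivalent with a common prefix and suffix of length k-1."""
    if len(u) != len(v):
        return False
    if len(u) < k - 1:
        return u == v
    if counts(u, 1) != counts(v, 1):
        return False
    p = k - 1
    return p == 0 or (u[:p] == v[:p] and u[-p:] == v[-p:])

# the converse of the implication fails:
assert R("0011", "0101", 2) and not kae("0011", "0101", 2)
# but ~_k does imply R_k, here checked exhaustively on {0,1}^5 for k = 2:
for u, v in combinations(["".join(p) for p in product("01", repeat=5)], 2):
    if kae(u, v, 2):
        assert R(u, v, 2)
```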
Let $\omega \in A^{\mathbb N}.$ Associated to the relation $\mathcal {R}_k$ is a complexity function, denoted $\rho^{(k)}_\omega (n),$ which counts the number of distinct $\mathcal {R}_k$ equivalence classes of factors of $\omega$ of length $n.$ It follows from (\ref{imp}) above that for each $n$ we have
\begin{equation}\label{comps}\rho^{(k)}_\omega (n) \leq \mathcal {P}^{(k)}_\omega (n).\end{equation}
We recall the function $q^{(k)}:{\mathbb N} \rightarrow {\mathbb N}$ ($k\in {\mathbb Z}^+ \cup \{+\infty\})$ defined by
\[ q^{(k)}(n) =
\left\{\begin{array}{ll} n+1 \,\,\,&\mbox{for}\,\, n\leq 2k-1\\
2k \,\,\,&\mbox{for}\,\, n\geq 2k
\end{array}
\right.
\]
\begin{theorem}\label{theoremevperiodic2} Let $\omega=a_0a_1a_2\ldots \in A^{\mathbb N}$ and
$k \in {\mathbb Z} ^+ \cup \{+\infty\}.$ If $\rho_{\omega}^{(k)}(n_0)<q^{(k)}(n_0)$ for some $n_0 \geq 1,$ then $\omega$ is ultimately periodic.
\end{theorem}
\begin{proof} The result is well known in case $k=+\infty$ (see \cite{MorHed1940}). For $k\in {\mathbb Z}^+,$ we proceed by induction on $k.$ In case $k=1,$ then $\mathcal {R}_1$ is simply the usual notion of Abelian equivalence and the result follows from \cite{CovHed}.
Now suppose $k>1$ and that $\rho_{\omega}^{(k)}(n_0)<q^{(k)}(n_0)$ for some $n_0 \geq 1.$ It follows immediately from the definition of $\mathcal{R}_k$ that if $u\mathcal {R}_kv$ and $|u|\leq 2k-1,$ then $u=v.$ Thus, if $\rho_\omega ^{(k)}(n_0)<q^{(k)}(n_0)$ where $n_0\leq 2k-1,$ then $\rho_\omega (n_0)<n_0+1$ and so $\omega$ is ultimately periodic by the well known result of Morse and Hedlund in \cite{MorHed1940}.
Thus we suppose that $\rho_\omega ^{(k)}(n_0)<2k$ for some $n_0\geq 2k.$ We claim that $\omega$ must be ultimately periodic. Suppose to the contrary that $\omega$ is aperiodic. We shall show that this implies that $\rho_\nu ^{(k-1)}(n_0-2)<2(k-1)$ where $\nu=a_0^{-1}\omega$ denotes the first shift of $\omega,$ i.e., the word obtained from $\omega$ by removing the first letter of $\omega.$ Since $n_0-2\geq 2(k-1)$ we deduce that $\rho_\nu ^{(k-1)}(n_0-2)<q^{(k-1)}(n_0-2).$ But then by induction hypothesis on $k,$ it follows that $\nu$ (and hence $\omega)$ is ultimately periodic, a contradiction.
Consider the map
\[\Psi : {\mathcal F}_{\omega}(n_0)/\mathcal {R}_k \longrightarrow {\mathcal F}_{\nu}(n_0-2)/\mathcal {R}_{k-1}\]
defined by
\[\Psi ([aub]_k) =[u]_{k-1}\]
where $a,b\in A,$ and $u\in A^*$ is of length $n_0-2.$ Here $[u ]_k$ denotes the $\mathcal {R}_k$ equivalence class of $u.$
To see that $\Psi$ is well defined, suppose $aub\mathcal {R}_k cvd.$ Then since $k>1,$ it follows that $a=c$ and $b=d,$ and thus that $u\mathcal {R}_1 v.$ Moreover, as $aub$ and $cvd$ share a common prefix and a common suffix of length $k-1,$ it follows that $u$ and $v$ share a common prefix and a common suffix of length $k-2.$ Thus $u\mathcal {R}_{k-1} v$ as required.
Clearly the mapping $\Psi$ is surjective; indeed, for each $u\in {\mathcal F}_{\nu}(n_0-2)$ there exist $a,b\in A$ such that $aub\in {\mathcal F}_{\omega}(n_0).$ This is the reason for replacing $\omega$ by $\nu.$
We now show that either there exist distinct classes $[u]_{k-1},[v]_{k-1}\in {\mathcal F}_{\nu}(n_0-2)/\mathcal {R}_{k-1}$ for which
\begin{equation}\label{(*)}\mbox{min}\{\mbox{Card}\left(\Psi^{-1}([u]_{k-1})\right), \mbox{Card}\left(\Psi^{-1}([v]_{k-1})\right)\}\geq 2,\end{equation} or there exists a class $[u]_{k-1}\in {\mathcal F}_{\nu}(n_0-2)/\mathcal {R}_{k-1}$ for which
\begin{equation}\label{(**)}\mbox{Card}\left(\Psi^{-1}([u]_{k-1})\right)\geq 3.\end{equation}
In either case it follows that
\[\mbox{Card}\left({\mathcal F}_{\nu}(n_0-2)/\mathcal {R}_{k-1}\right)\leq\mbox{Card}\left({\mathcal F}_{\omega}(n_0)/\mathcal {R}_k\right) -2< 2(k-1).\]
Since $\omega$ is assumed to be aperiodic, $\omega$ contains both a left special factor of the form $uc$ and a right special factor of the form $dv$ of length $n_0-1$ for some choice of $c,d \in A$ and $u,v\in A^*.$ Thus there exist distinct letters $a,b \in A$ such that $auc$ and $buc$ are factors of $\omega.$ Moreover, since $a\neq b,$ it follows that $[auc]_k\neq [buc]_k.$
Thus $\mbox{Card}\left(\Psi^{-1}([u]_{k-1})\right)\geq 2.$
Similarly, there exist distinct letters $a',b' \in A$ such that $dva'$ and $dvb'$ are factors of $\omega,$ and since $a'\neq b',$ it follows that $[dva']_k\neq [dvb']_k.$
Thus $\mbox{Card}\left(\Psi^{-1}([v]_{k-1})\right)\geq 2.$ In case $[u]_{k-1}\neq [v]_{k-1},$ we obtain the desired inequality~(\ref{(*)}). In case $[u]_{k-1}= [v]_{k-1},$ since $a\neq b$ and $a'\neq b'$ it follows that
\[\mbox{Card}\{[auc]_k, [buc]_k, [dva']_k,[dvb']_k\}\geq 3\]
which yields the inequality~(\ref{(**)}).
This completes the proof of Theorem~\ref{theoremevperiodic2}.
\end{proof}
\begin{corollary}\label{evperiodic} Let $\omega \in A^{\mathbb N}$ and
$k \in {\mathbb Z} ^+ \cup \{+\infty\}.$ If $\mathcal {P}^{(k)}_\omega (n_0)<q^{(k)}(n_0)$ for some $n_0 \geq 1$ then $\omega$ is ultimately periodic.
\end{corollary}
\begin{proof} As a consequence of the inequality (\ref{comps}), if $\mathcal {P}^{(k)}_\omega (n_0)<q^{(k)}(n_0)$ then
$\rho_{\omega}^{(k)}(n_0)<q^{(k)}(n_0),$ whence by Theorem~\ref{theoremevperiodic2} it follows that $\omega$ is ultimately periodic.
\end{proof}
\noindent The same method of proof of Theorem~\ref{theoremevperiodic2} can be used to prove the following:
\begin{corollary}\label{theoremperiodic} Let $\omega $ be a bi-infinite word over the alphabet $A$ and
$k \in {\mathbb Z} ^+ \cup \{+\infty\}.$ If $\mathcal {P}^{(k)}_\omega (n_0)<q^{(k)}(n_0)$ for some $n_0 \geq 1,$ then $\omega$ is periodic.
\end{corollary}
\noindent We conclude this section with a few remarks:
\begin{remark} \rm{ In the special case $k=+\infty,$ the condition given in Corollary~\ref{evperiodic} gives a characterization of ultimately periodic words by means of factor complexity: $\omega \in A^{\mathbb N}$ is ultimately periodic if and only if $\rho_{\omega}(n_0)<n_0+1$ for some $n_0\geq 1.$
However, $k$-Abelian complexity does not yield such a characterization. Indeed, both Sturmian words and
the ultimately periodic word $01^\infty = 0111\cdots$ have the same Abelian complexity. More generally, the ultimately periodic word
$0^{2k-1}1^\infty$ has the same $k$-Abelian complexity as a Sturmian word (see Theorem~\ref{thm:factcompl} below).}
\end{remark}
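The last claim of the remark can be spot-checked for $k=2$: by Lemma~\ref{prefixsuffix}, the prefix and suffix of length $k-1$ together with the occurrence counts of factors of length $k$ determine the $k$-Abelian class, which gives a quick way to compute $\mathcal {P}^{(k)}$ on a long prefix (a sketch, with our own helper names):

```python
from collections import Counter

def num_classes(factors, k):
    """Number of k-Abelian classes among the given equal-length factors,
    via the invariant (prefix, suffix of length k-1, k-factor counts)."""
    sigs = set()
    for w in factors:
        kcounts = Counter(w[i:i + k] for i in range(len(w) - k + 1))
        sigs.add((w[:k - 1], w[len(w) - (k - 1):], tuple(sorted(kcounts.items()))))
    return len(sigs)

def q(k, n):
    return n + 1 if n <= 2 * k - 1 else 2 * k

# the ultimately periodic word 0^{2k-1} 1^infty with k = 2, on a long prefix:
prefix = "000" + "1" * 100
for n in range(1, 20):
    factors = {prefix[i:i + n] for i in range(len(prefix) - n + 1)}
    assert num_classes(factors, 2) == q(2, n)
```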
\begin{remark} \rm{The result of Corollary~\ref{theoremperiodic} is already known to be true in the special cases $k=+\infty$ (see \cite{MorHed1940}) and $k=1$ (see Remark 4.07 in \cite{CovHed}). In these special cases, the converse is also true. But for general $2\leq k<+\infty $ the converse is false. For instance, let $\mbox{Card}(A)=5,$ and let $u$ be a word containing at least one occurrence of every $x\in A^3.$ Let $\omega$ be the periodic word $\omega =\ldots uuuu\ldots.$ Then $\rho^{(2)}_\omega (n)\geq 5$ for every $n\geq 1.$ }
\end{remark}
\section{$k$-Abelian complexity of Sturmian words}\label{Sturmwords}
In this section we determine the $k$-Abelian complexity of Sturmian words and show that for each $k,$ the complexity function $\mathcal {P}^{(k)}$ completely characterizes Sturmian words amongst all aperiodic words. More precisely:
\begin{theorem}\label{thm:factcompl}
Fix $k\in {\mathbb Z}^+ \cup \{+\infty\}.$ Let $\omega \in A^{\mathbb N}$ be an aperiodic word. The following conditions are equivalent:
\begin{itemize}
\item $\omega$ is a balanced binary word, that is, {\it Sturmian}.
\item \begin{math}
\fc{k}{\omega}(n) = q^{(k)}(n) =
\begin{cases}
n + 1 & \text{for $0 \leq n \leq 2k - 1$} \\
2k & \text{for $n \geq 2k$}
\end{cases}.
\end{math}
\end{itemize}
\end{theorem}
Our proof of Theorem~\ref{thm:factcompl} will make use of the following functions $\swap{i}$, which transform binary words by changing
the letters around a specific point. For words $w \in \{0,1\}^n$ we
define $\swap{1}, \dots, \swap{n}$ as follows:
\begin{equation*}
\swap{i}(w) = \begin{cases}
u10v, &\text{if $i < n$, $w = u01v$ and $|u0| = i$}, \\
u1, &\text{if $i = n$ and $w = u0$}.
\end{cases}
\end{equation*}
\begin{lemma} \label{lem:facttrans}
Let $n \geq 1$ and let $w \in \{0,1\}^{\mathbb N}$ be Sturmian. There
is a word $u_1 \in \{0,1\}^n$ and a permutation $\sigma$ of $\{1,
\dots, n\}$ such that if $u_{i+1} = \swap{\sigma(i)}(u_i)$ for $i =
1, \dots, n$, then $u_1, \dots, u_{n+1}$ are the factors of $w$ of
length $n$.
\end{lemma}
\begin{proof}
Let $u_1, \dots, u_{n + 1}$ be the factors of $w$ of length $n$
in lexicographic order.
It follows from Theorem 1.1 in \cite{BuLuZa12}
that for every $i$ there is an $m$ such that $u_{i + 1} = \swap{m}(u_i)$.
It needs to be proved that the $m$'s are all different.
Let $u_{i + 1} = \swap{m}(u_i)$ and $u_{i' + 1} = \swap{m}(u_{i'})$.
For every $j$
\begin{equation*}
|\pref{m}(u_j)|_1 \leq |\pref{m}(u_{j + 1})|_1
\end{equation*}
and for $j \in \{i, i'\}$
\begin{equation*}
|\pref{m}(u_j)|_1 < |\pref{m}(u_{j + 1})|_1 .
\end{equation*}
If $i \ne i'$, then
\begin{equation*}
|\pref{m}(u_1)|_1 + 2 \leq |\pref{m}(u_{n + 1})|_1
\end{equation*}
which contradicts the balance property of Sturmian words.
\end{proof}
\begin{example}
The factors of the Fibonacci word of length six are
\begin{alignat*}{3}
u_1 = 001001,
\quad u_2 &= 001010 = g_5(u_1),
& \quad u_3 &= 010010 = g_2(u_2),
& \quad u_4 &= 010100 = g_4(u_3), \\
u_5 &= 100100 = g_1(u_4),
& u_6 &= 100101 = g_6(u_5),
& u_7 &= 101001 = g_3(u_6).
\end{alignat*}
We have $u_2 \kae{2} u_3 \kae{2} u_4$ and $u_6 \kae{2} u_7$. There
are no other 2-Abelian equivalences between these factors.
\end{example}
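The example can be reproduced programmatically; the sketch below (function names are ours) generates a prefix of the Fibonacci word (the fixed point of $0\mapsto 01$, $1\mapsto 0$), lists its length-$6$ factors in lexicographic order, and recovers the swap indices as in Lemma~\ref{lem:facttrans}:

```python
def fibonacci_prefix(length):
    """Prefix of the Fibonacci word, the fixed point of 0 -> 01, 1 -> 0."""
    w = "0"
    while len(w) < length:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:length]

def swap(w, i):
    """g_i from the text: exchange a '01' at positions i, i+1 (1-indexed),
    or turn a final '0' into '1' when i = |w|; None if inapplicable."""
    if i == len(w) and w.endswith("0"):
        return w[:-1] + "1"
    if i < len(w) and w[i - 1:i + 1] == "01":
        return w[:i - 1] + "10" + w[i + 1:]
    return None

w = fibonacci_prefix(200)
factors = sorted({w[i:i + 6] for i in range(len(w) - 5)})
assert len(factors) == 7                 # Sturmian factor complexity: n + 1
ms = [next(i for i in range(1, 7) if swap(u, i) == v)
      for u, v in zip(factors, factors[1:])]
assert ms == [5, 2, 4, 1, 6, 3]          # matches g_5, g_2, g_4, g_1, g_6, g_3
assert sorted(ms) == [1, 2, 3, 4, 5, 6]  # each index is used exactly once
```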
\begin{proof}[Proof of Theorem~\ref{thm:factcompl}]
First let us suppose $\omega \in \{0,1\}^{\mathbb N}$ is Sturmian and let $1\leq k \leq+\infty.$
Let $n \leq 2k-1.$
By Lemma~\ref{lem:basic}, two factors $u$ and $v$ of $\omega$ of length $n$ are $k$-Abelian equivalent if and only if $u=v.$ Thus
$\fc{k}{\omega}(n) = n+1$ as required.
Next let $n \geq 2k$ and let $u_1, \dots, u_{n+1}$ and $\sigma$ be as
in Lemma \ref{lem:facttrans}. If $k \leq \sigma(i) \leq n-k$, then
there are words $s, t \in \{0,1\}^*$ and $u, v \in \{0,1\}^{k-1}$
and letters $a, b \in \{0,1\}$ so that $u_i = su01vt$ and $u_{i+1} =
\swap{\sigma(i)}(u_i) = su10vt$. We prove that $u_i \kae{k}
u_{i+1}$. The prefixes and suffixes of $u_i$ and $u_{i+1}$ of length
$k-1$ are the same. The factors of $u_i$ of length $k$ are the
factors of $su$, $u01v$ and $vt$ of length $k$, and the factors of
$u_{i+1}$ of length $k$ are the factors of $su$, $u10v$ and $vt$ of
length $k$.
Because $u01v$ and $u10v$ are factors of $\omega$,
it follows that $u$ is right special and
$v$ is left special and hence equal to the reversal of $u$.
By Theorem~\ref{2k}, $u01v$ and $u10v$ are $k$-Abelian equivalent.
This proves that $u_i \kae{k} u_{i+1}$ if $k \leq \sigma(i) \leq n-k$.
Thus the words $u_1, \dots, u_{n+1}$
are in at most $2k$ different $k$-Abelian equivalence classes
and $\fc{k}{\omega}(n) \leq 2k$.
Since $\omega$ is aperiodic, Corollary \ref{evperiodic} gives $\fc{k}{\omega}(n) \geq q^{(k)}(n) = 2k,$ whence $\fc{k}{\omega}(n) = 2k$.
Next let $1\leq k\leq +\infty$ and let $\omega \in A^{\mathbb N}$ be aperiodic and
\begin{equation*}
\fc{k}{\omega}(n) = q^{(k)}(n) =
\begin{cases}
n + 1 & \text{for $0 \leq n \leq 2k - 1$} \\
2k & \text{for $n \geq 2k$}
\end{cases}.
\end{equation*}
\noindent Taking $n=1$ we see that $\omega$ is binary (say $\omega \in \{0,1\}^{\mathbb N}).$ We must show that $\omega$ is balanced.
We first recall some basic facts concerning factors of Sturmian words (see for instance \cite{RiZa}):
Let $\eta \in \{0,1\}^{\mathbb N}$ be a Sturmian word, and let ${\mathcal F}_{\eta}(n)$ denote the factors of $\eta$ of length $n.$
The set ${\mathcal F}_{\eta}(n+1)$ is completely determined from the set ${\mathcal F}_{\eta}(n)$ unless $\eta$ has a bispecial factor $B$ of length $n-1$ in which case both $0B$ and $1B$ are factors of $\eta$ and exactly one of the two is right special. If $0B$ is right special, then every occurrence of $1B$ in $\eta$ is an occurrence of $1B0.$ If $v$ is a factor of $\eta$ and $u$ a prefix of $v,$ we write $u\vdash v$ if every occurrence of $u$ in $\eta$ is an occurrence of $v.$ Thus if $0B$ is right special, then $1B\vdash 1B0,$ and similarly if $1B$ is right special, then $0B \vdash 0B1.$
Now suppose to the contrary that the aperiodic binary word $\omega$ is not Sturmian. Then there exists a smallest positive integer $n\geq 1$ and a Sturmian word $\eta$ such that ${\mathcal F}_{\omega}(n)= {\mathcal F}_{\eta}(n)$ but
${\mathcal F}_{\omega}(n+1)\neq {\mathcal F}_{\eta'}(n+1)$ for every choice of Sturmian word $\eta'.$
This means that $\omega$ has a bispecial factor $B$ of length $n-1$ and both $0B$ and $1B$ are in
${\mathcal F}_{\omega}(n)$ and one of the following must occur: i) Neither $0B$ nor $1B$ is right special in $\omega;$ ii) There exists a unique $a\in \{0,1\}$ such that $aB$ is right special, and $(1-a)B\vdash (1-a)B(1-a);$ iii) Both $0B$ and $1B$ are right special in $\omega.$ We will show that since $\omega$ is aperiodic, only case iii) is in fact possible. Clearly, if neither $0B$ nor $1B$ were right special, then $\mbox{Card}({\mathcal F}_{\omega}(n))=\mbox{Card}({\mathcal F}_{\omega}(n+1))$ whence $\omega$ is ultimately periodic, a contradiction.
Next suppose case ii) occurs. We may suppose without loss of generality that $0B$ is right special and $1B
\vdash 1B1.$ If $1\vdash 1B$ (and hence $1\vdash 1B1),$ then we would have $1\vdash 1(B1)^n$ for every $n\geq 1,$ from which it follows that the tail of $\omega$ beginning at the first occurrence of $1$ in $\omega$ is periodic, contradicting aperiodicity. Thus $\neg(1\vdash 1B),$ and so there exists a bispecial factor $B'$ of $\omega$ with $0<|B'|<|B|$ such that $1B'$ is right special and $1B'1 \vdash 1B$ and hence $1B'1 \vdash 1B1.$ Writing $1B1=1B'1V$ we have $1B'1 \vdash 1B'1V.$ We next show by induction on $n$ that $1B'1V^n$ is a palindrome for each $n\geq 1.$ Clearly this is true for $n=1$ since $1B'1V=1B1.$
Next suppose $1B'1V^n$ is a palindrome. Then
\[\overline{1B'1V^{n+1}}=\overline{V}^{n+1}\overline{1B'1}=\overline{V}\,\overline{V}^n\overline{1B'1}=\overline{V}1B'1V^n=\overline{V}\,\overline{1B'1}\,V^n=1B'1VV^{n}=1B'1V^{n+1}.\]
Having established that $1B'1V^n$ is a palindrome, it follows that $1B'1$ is a suffix of $1B'1V^n$ and hence $1B'1V^n\vdash 1B'1V^{n+1}$ for each $n\geq 0.$
Whence as before $\omega $ is ultimately periodic.
Thus if $\omega$ is not Sturmian, case iii) must occur. This implies that
\[{\mathcal F}_{\omega}(n+1)= {\mathcal F}_{\eta}(n+1)\cup \{0B0,1B1\}\]
and $\mbox{Card}({\mathcal F}_{\eta}(n+1)\cap \{0B0,1B1\})=1.$ Since $\eta$ is Sturmian,
the number of $k$-Abelian classes of factors of $\eta$ of length $n+1$ is equal to $q^{(k)}(n+1).$
But the additional factor $aBa$ of $\omega$ of length $n+1$ introduces a new $k$-Abelian class since it is not even Abelian equivalent to any other factor of $\eta$ (and hence $\omega)$ of length $n+1.$ Thus $\fc{k}{\omega}(n+1) = q^{(k)}(n+1)+1,$ a contradiction. Thus $\omega$ is Sturmian.
\end{proof}
\begin{remark} \rm{In view of Corollary~\ref{evperiodic}, within the class of aperiodic words, Sturmian words have the lowest possible $k$-Abelian complexity. See \cite{AFKS,KWZ,KZ, MorHed1940} for other instances in which Sturmian words have the lowest complexity amongst all aperiodic words.}
\end{remark}
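The complexity formula in Theorem~\ref{thm:factcompl} is easy to check numerically on a prefix of the Fibonacci word, a standard Sturmian word. The sketch below is illustrative only (the helper names are ours, not from the text); it classifies factors directly by the defining invariant, the occurrence counts of all words of length at most $k$:

```python
def fibonacci_word(n):
    # prefix of length n of the Fibonacci word, a standard Sturmian word
    a, b = "0", "01"
    while len(b) < n:
        a, b = b, b + a
    return b[:n]

def k_abelian_class(u, k):
    # the defining invariant: occurrence counts |u|_x for all x with |x| <= k
    sig = {}
    for m in range(1, k + 1):
        for i in range(len(u) - m + 1):
            x = u[i:i + m]
            sig[x] = sig.get(x, 0) + 1
    return frozenset(sig.items())

def k_abelian_complexity(w, k, n):
    # number of k-Abelian classes among the length-n factors of the finite word w
    factors = {w[i:i + n] for i in range(len(w) - n + 1)}
    return len({k_abelian_class(u, k) for u in factors})

def q(k, n):
    # the complexity function q^{(k)} of the theorem
    return n + 1 if n <= 2 * k - 1 else 2 * k
```

For instance, for $k=2$ and $n=4$ the five Fibonacci factors of length $4$ fall into the four classes $\{0100,0010\}$, $\{1001\}$, $\{0101\}$, $\{1010\}$, in agreement with $q^{(2)}(4)=4$.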
\section{Bounded $k$-Abelian complexity \& $k$-Abelian repetitions}\label{last}
There is great interest in avoidability of repetitions in infinite words. This originated with the classical work of Thue \cite{Th06} and \cite{Th12}, in which he established the existence of an infinite binary (resp. ternary) word avoiding cubes (resp. squares).
It was later shown that to avoid Abelian cubes or Abelian squares, one needs $3$-letter or $4$-letter alphabets respectively (see \cite{Dekking1979} and \cite{Keranen1992ICALP}).
The corresponding problems for $k$-Abelian repetitions turned out to be quite nontrivial.
It follows easily that the smallest alphabet over which $k$-Abelian cubes can be avoided is either $2$ or $3,$ and similarly the smallest alphabet over which $k$-Abelian squares can be avoided is either $3$ or $4.$
In the latter case for $k = 2$ a computer verification revealed that the correct value is $4,$ as in the case of Abelian repetitions:
each ternary $2$-Abelian square-free word is of length at most $536$ \cite{HuKa11}.
In the former case computer verification shows that there exist binary words of length $100000$ which are $2$-Abelian cube-free \cite{HuKa11}.
It is still unknown whether there exists an infinite binary word which is $2$-Abelian cube-free.
For some larger values of $k$ such infinite words exist.
In the case of binary alphabets and cubes it was shown in a sequence of papers that an infinite word avoiding $k$-Abelian cubes can be constructed for $k = 8$, $k = 5$ and for $k = 3$ (see \cite{HuKaSa12ehrenfeucht}, \cite{MeSa12jm} and \cite{MeSa13dlt} respectively).
So only the value $k = 2$ remains open.
It would be extremely surprising if no such infinite words existed.
For avoiding $k$-Abelian squares in a ternary alphabet the situation is equally challenging.
We know that for $k = 3$ there exist words of length $100000$ avoiding $3$-Abelian squares. The avoidability in infinite words of $k$-Abelian squares in a ternary alphabet is only known for large values of $k$ ($k\geq 64$) (see \cite{Huova}).
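The notions involved are straightforward to experiment with computationally. The following brute-force sketch (our own helper, not from the cited works) tests a finite word for $k$-Abelian squares by comparing occurrence counts of all factors of length at most $k$ in the two halves of every candidate factorization:

```python
def counts(u, k):
    # occurrence counts |u|_x for all factors x of u with |x| <= k
    c = {}
    for m in range(1, k + 1):
        for i in range(len(u) - m + 1):
            c[u[i:i + m]] = c.get(u[i:i + m], 0) + 1
    return c

def has_k_abelian_square(w, k):
    # scan every factor of w split into two halves u, v with |u| = |v|
    n = len(w)
    for l in range(1, n // 2 + 1):
        for i in range(n - 2 * l + 1):
            if counts(w[i:i + l], k) == counts(w[i + l:i + 2 * l], k):
                return True
    return False
```

The ternary word $012021$ is square-free, yet $012\cdot021$ is an Abelian ($k=1$) square; for $k=2$ the two halves are distinguished by their length-2 factors, so the word is $2$-Abelian square-free.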
In this section we prove that $k$-Abelian repetitions are unavoidable in words having bounded $k$-Abelian complexity.
For each positive integer $k$ we set
\[A^{\leq k} =\{ x\in A^*\,:\, |x|\leq k\}.\] Given an infinite word $\omega = a_0a_1a_2\ldots \in A^{\mathbb N},$ for each
$0\leq i<j <+\infty$ we denote by $\omega[i,j]$ the factor $a_ia_{i+1}\cdots a_j.$
\begin{definition} Let $k$ and $B$ be positive integers and $\omega \in A^{\mathbb N}.$ We say $\omega$ is $(k,B)$-balanced if and only if for all factors $u$ and $v$ of $\omega$ of equal length, and for all $x \in A^{\leq k}$ we have $\left||u|_x-|v|_x\right| \leq B.$ We say $\omega$ is arbitrarily $k$-imbalanced if $\omega$ is not $(k,B)$-balanced for any positive integer $B.$
\end{definition}
\noindent An elementary, but key observation is that
\begin{lemma}\label{balance} Let $k$ be a positive integer and $\omega \in A^{\mathbb N}.$ Then $\omega$ has bounded $k$-Abelian complexity if and only if $\omega$ is $(k,B)$-balanced for some positive integer $B.$
\end{lemma}
\begin{proof} Clearly if $\mathcal {P}^{(k)}_\omega$ is bounded, say by $B,$ then $\omega$ is $(k,B-1)$-balanced. Conversely, if $\omega$ is $(k,B)$-balanced, then for each positive integer $n$ and for each $x\in A^*$ with $|x|\leq k$ we have
\[\mbox{Card}\{ |u|_x:\, u \in {\mathcal F}_{\omega}(n)\} \leq B+1.\]
It follows that
\[\mathcal {P}^{(k)}_\omega (n) \leq (B+1) ^{K}\]
where $K=\mbox{Card}A^{\leq k}.$
\end{proof}
Fix a positive integer $k.$ It follows from Theorem~\ref{thm:factcompl} and Lemma~\ref{balance} that each Sturmian word is $(k,B)$-balanced for some positive integer $B$ (depending on $k$). Actually, I. Fagnot and L. Vuillon proved in \cite{FV} that every Sturmian word is $(k,k)$-balanced.
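The $(k,B)$-balance condition is likewise easy to test on finite prefixes. A naive checker (our own quadratic sketch, only illustrative) applied to a prefix of the Fibonacci word:

```python
def fibonacci_word(n):
    # prefix of length n of the Fibonacci word, a standard Sturmian word
    a, b = "0", "01"
    while len(b) < n:
        a, b = b, b + a
    return b[:n]

def occurrences(u, x):
    # overlapping occurrence count |u|_x
    return sum(1 for i in range(len(u) - len(x) + 1) if u[i:i + len(x)] == x)

def is_k_B_balanced(w, k, B, n_max):
    # check the (k,B)-balance condition on all factor lengths n <= n_max
    for n in range(1, n_max + 1):
        factors = [w[i:i + n] for i in range(len(w) - n + 1)]
        for m in range(1, k + 1):
            for x in {w[i:i + m] for i in range(len(w) - m + 1)}:
                cs = [occurrences(u, x) for u in factors]
                if max(cs) - min(cs) > B:
                    return False
    return True
```

Consistent with the Fagnot--Vuillon result, the prefix passes the $(1,1)$- and $(2,2)$-balance tests, while $(1,0)$-balance already fails at $n=1$ for any aperiodic word.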
\begin{definition} Fix $k\in {\mathbb Z}^+ \cup \{+\infty\},$ and $N$ a positive integer. By a $k$-Abelian $N$-power we mean a word $U$ of the form $U=U_1U_2\cdots U_N$ such that $U_i\thicksim_k U_j$ for all $1\leq i,j\leq N.$
\end{definition}
\noindent In this section we shall prove the following result:
\begin{theorem}\label{kabpowers}
Fix $k\in {\mathbb Z}^+ \cup \{+\infty\}.$ Let $\omega =a_0a_1a_2\ldots \in A^{\mathbb N}$ be an infinite word on a finite alphabet $A$ having bounded $k$-Abelian complexity. Let $D\subseteq {\mathbb N}$
be a set of positive upper density, that is
\[
\limsup_{n\rightarrow \infty} \frac{\mbox{Card}\left(D \cap \{1,2, \ldots, n\} \right) }{n} >0.
\]
Then, for every positive integer $N$, there exist $i$ and $\ell$ such that $\{i, i+\ell, i+2\ell, \ldots, i+\ell N\}\subset D$ and the $N$ consecutive blocks $(\omega[i+j\ell, i+(j+1)\ell-1])_{0\leq j\leq N-1}$ of length $\ell $ are pairwise $k$-Abelian equivalent. In particular, $\omega$ contains arbitrarily high $k$-Abelian powers.
\end{theorem}
\begin{remark}\rm{The result in Theorem~\ref{kabpowers} is already known in the special case of $D={\mathbb N}$ and $k=+\infty$ and $k=1$ (see \cite{MorHed1940} and \cite{RSZ2} respectively).}
\end{remark}
\noindent Before proving Theorem~\ref{kabpowers} we give some immediate consequences:
\begin{corollary} Let $k$ and $N$ be positive integers, and $\omega $ an infinite word avoiding $k$-Abelian $N$-powers. Then $\omega$ is arbitrarily $k$-imbalanced.
\end{corollary}
\begin{proof} This follows immediately from Lemma~\ref{balance} and Theorem~\ref{kabpowers}.
\end{proof}
\begin{corollary}\label{sturmpowers} Let $\omega$ be a Sturmian word. Then $\omega$ contains $k$-Abelian $N$-powers for all positive integers $k$ and $N.$
\end{corollary}
\begin{proof} This follows immediately from Theorems~\ref{thm:factcompl} and \ref{kabpowers}; in fact the $k$-Abelian complexity $\mathcal {P}^{(k)}_\omega$ is bounded (by $2k$) for each positive integer $k.$
\end{proof}
\begin{remark}\rm{It is known that a Sturmian word $\omega$ contains an $N$-power for each positive integer $N$ if and only if the sequence of partial quotients in the continued fraction expansion of the slope of $\omega$ is unbounded. So, a Sturmian word whose corresponding slope has bounded partial quotients (e.g., the Fibonacci word) will not contain $N$-powers for $N$ sufficiently large (e.g., the Fibonacci word contains no $4$-powers \cite{Kar,MiPi}). However, every Sturmian word will contain arbitrarily high $k$-Abelian powers. }
\end{remark}
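The contrast drawn in this remark can be observed directly. The brute-force search below (our own helpers, illustrative only; taking $k\geq|w|$ makes $k$-Abelian equivalence coincide with equality, i.e.\ the $k=+\infty$ case) finds no ordinary $4$-power in a prefix of the Fibonacci word, while an Abelian $4$-power of period $2$, namely $10\cdot10\cdot01\cdot01$, already occurs at position $12$:

```python
def fibonacci_word(n):
    # prefix of length n of the Fibonacci word
    a, b = "0", "01"
    while len(b) < n:
        a, b = b, b + a
    return b[:n]

def sig(u, k):
    # occurrence counts of all factors of u of length <= k
    c = {}
    for m in range(1, min(k, len(u)) + 1):
        for i in range(len(u) - m + 1):
            c[u[i:i + m]] = c.get(u[i:i + m], 0) + 1
    return c

def find_k_abelian_power(w, k, N):
    # first (shortest period, then leftmost) k-Abelian N-power U_1...U_N in w;
    # returns (start index, period length) or None
    for l in range(1, len(w) // N + 1):
        for i in range(len(w) - N * l + 1):
            blocks = [w[i + j * l:i + (j + 1) * l] for j in range(N)]
            if all(sig(b, k) == sig(blocks[0], k) for b in blocks[1:]):
                return i, l
    return None
```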
Our proof of Theorem~\ref{kabpowers} will make use of the following well-known result,
first conjectured by Erd\H{o}s and Tur\'an and later proved by E. Szemer\'edi:
\begin{theorem}\label{vdw}{\rm[Szemer\'edi's theorem \cite{Sz}]}
Let $D\subseteq {\mathbb N}$ be a set of positive upper density. Then $D$ contains arbitrarily long arithmetic progressions.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{kabpowers}] Let $D\subseteq {\mathbb N}$ be a set of positive upper density. First we consider the case $k=+\infty.$ By assumption
$ \mathcal {P}^{(+\infty)}_\omega (n)$ is bounded. This is equivalent to saying that $\omega$ has bounded factor complexity. It follows from the Morse--Hedlund theorem that $\omega$ is ultimately periodic, i.e., $\omega =UV^\infty$ for some $U,V\in A^*.$ For each $i\geq 0,$ set $D_i= D\cap \{i+j|V|\,:\, j=1,2,3,\ldots \}.$ Pick $i>|U|$ such that the set $D_i$ has positive upper density. Then an arithmetic progression of length $N + 1$ in $D_i$ (guaranteed by Szemer\'edi's theorem) determines the $N$th power of some cyclic conjugate of $V.$
Next let us fix positive integers $k$ and $N$ and assume that $ \mathcal {P}^{(k)}_\omega (n)$ is bounded. It follows by Lemma~\ref{balance} that $\omega$ is $(k,B)$-balanced for some positive integer $B.$ We recall the following lemma proved in \cite{RSZ2}
\begin{lemma}\label{LA}{\rm[Lemma~5.4 in \cite{RSZ2}]} Let $k$ and $B$ be positive integers. There exist positive integers $\alpha _x$ for each $x\in A^{\leq k}$ and a positive integer $M$ such that whenever
\[\sum _{x\in A^{\leq k}}c_x\alpha _x \equiv 0 \pmod {M}\]
for integers $c_x$ with $|c_x|\leq B$ for each $x\in A^{\leq k},$ then
$c_x=0$ for each $x\in A^{\leq k}.$
\end{lemma}
Set
\[\mathcal{D}= (D-1)\cap \{k,k+1,k+2,\ldots\}.\]
Then $\mathcal{D}$ is of positive upper density.
We now define a finite coloring
\[\Phi: \mathcal{D} \longrightarrow \{0,1,2,\ldots , M-1\}\times {\mathcal F}_{\omega}(2k) \]
\noindent as follows
\[\Phi (n) \doteqdot \left(\sum_{x\in A^{\leq k}}|\omega[1,n]|_x\alpha_x \,(\bmod {M})\,; \omega[n-k+1,n+k]\right)\]
\noindent where $\alpha_x$ and $M$ are as in Lemma~\ref{LA}. Note that the second coordinate of $\Phi(n)$ is the suffix of length $2k$ of $\omega[1,n+k].$ We note also that if $\Phi(m)=\Phi(n)$ for some $m<n,$ then by considering the first coordinate of $\Phi$ one has
\begin{eqnarray}
\sum_{x\in A^{\leq k}}|\omega[1,n]|_x\alpha_x - \sum_{x\in A^{\leq k}}|\omega[1,m]|_x\alpha_x \equiv 0 \pmod{M}
\end{eqnarray}
\begin{eqnarray}
\sum_{x\in A^{\leq k}}\left(|\omega[1,n]|_x - |\omega[1,m]|_x\right)\alpha_x \equiv 0 \pmod{M}
\end{eqnarray}
\begin{eqnarray}\label{used}
\sum_{x\in A^{\leq k}} |\omega[m-|x|+2, n]|_x\alpha_x \equiv 0 \pmod{M}.
\end{eqnarray}
$\Phi$ defines a finite partition of $\mathcal{D}$ where two elements $r$ and $s$ in $\mathcal{D}$ belong to the same class of the partition if and only if $\Phi(r)=\Phi(s).$ Clearly at least one class of this partition of $\mathcal{D}$ has positive upper density. Thus by Szemer\'edi's theorem, there exist positive integers $r$ and $t$ with $r\geq k$ such that
\[\{r,r+t,r+2t,\ldots,r+Nt\}\subset \mathcal{D}\] and
\[\Phi(r)=\Phi(r+t)=\Phi(r+2t)=\cdots =\Phi(r+Nt).\]
We now claim that the $N$ consecutive blocks of length $t$
\[\omega[r+1,r+t]\omega[r+t+1,r+2t]\omega[r+2t+1,r+3t]\ldots \omega[r+(N-1)t+1, r+Nt]\]
are pairwise $k$-Abelian equivalent. This would prove that $\omega$ contains a $k$-Abelian $N$-power in position $r+1\in D.$
To prove the claim, let $0\leq i,j \leq N-1.$ We will show that
\[\omega[r+it+1,r+(i+1)t]\thicksim_k \omega[r+jt+1, r+(j+1)t].\]
\noindent By (\ref{used}) first taking $n=r+(i+1)t$ and $m=r+it,$ then $n=r+(j+1)t$ and $m=r+jt$
\[\sum_{x\in A^{\leq k}}|\omega[r+it-|x|+2,r + (i+1)t]|_x\alpha_x \equiv \sum_{x\in A^{\leq k}}|\omega[r+jt-|x|+2,r + (j+1)t]|_x\alpha_x \equiv 0 \pmod{M}\]
\noindent and hence
\[\sum_{x\in A^{\leq k}}\left(|\omega[r+it-|x|+2,r + (i+1)t]|_x-|\omega[r+jt-|x|+2,r + (j+1)t]|_x\right)\alpha_x \equiv 0 \pmod{M}.\]
\noindent But since
\[|\omega[r+it-|x|+2,r + (i+1)t]|= |\omega[r+jt-|x|+2,r + (j+1)t]|=|x|+t-1\]
and $\omega$ is $(k,B)$-balanced, it follows that
\[\lvert|\omega[r+it-|x|+2,r + (i+1)t]|_x-|\omega[r+jt-|x|+2,r + (j+1)t]|_x\rvert\leq B\]
\noindent whence by Lemma~\ref{LA} we deduce that for each $x\in A^{\leq k}$
\begin{eqnarray}\label{phew}|\omega[r+it-|x|+2,r + (i+1)t]|_x=|\omega[r+jt-|x|+2,r + (j+1)t]|_x.\end{eqnarray}
\noindent Since $\Phi(r+it)=\Phi(r+jt),$ the second coordinate of $\Phi$ gives
\[\omega[r+it -k+1,r+it+k]=\omega[r+jt -k +1, r+jt+k].\]
Together with (\ref{phew}) we deduce that for each $x\in A^{\leq k}$
\[ |\omega[r+it+1,r + (i+1)t]|_x=|\omega[r+jt+1,r + (j+1)t]|_x.\]
\noindent In other words
\[\omega[r+it+1,r + (i+1)t]\thicksim_k \omega[r+jt+1,r + (j+1)t]\]
as required. This completes our proof of Theorem~\ref{kabpowers}.
\end{proof}
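One concrete choice of weights satisfying Lemma~\ref{LA} (an assumption of ours for illustration; the proof in \cite{RSZ2} may proceed differently) is $\alpha_{x_j}=(2B+1)^j$ over an enumeration $x_0,\dots,x_{K-1}$ of $A^{\leq k}$, with $M=(2B+1)^K$: since $|\sum_x c_x\alpha_x|\leq B\,\frac{(2B+1)^K-1}{2B}<M/2$, a vanishing sum modulo $M$ with digits bounded by $B$ is a balanced base-$(2B+1)$ representation of $0$, forcing every digit to vanish. A small exhaustive check:

```python
from itertools import product

def lemma_LA_weights(alphabet, k, B):
    # candidate weights alpha_x = (2B+1)^j over an enumeration of A^{<=k},
    # together with the modulus M = (2B+1)^K
    words = [''.join(p) for m in range(1, k + 1)
             for p in product(alphabet, repeat=m)]
    alpha = {x: (2 * B + 1) ** j for j, x in enumerate(words)}
    return alpha, (2 * B + 1) ** len(words)

# exhaustive verification of the lemma's conclusion for a tiny instance
alphabet, k, B = "01", 1, 1
alpha, M = lemma_LA_weights(alphabet, k, B)
words = list(alpha)
for cs in product(range(-B, B + 1), repeat=len(words)):
    if sum(c * alpha[x] for c, x in zip(cs, words)) % M == 0:
        assert all(c == 0 for c in cs)  # only the zero vector survives
```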
\section{Introduction}
Spacecraft immersed within a plasma will become electrically charged due to currents of incident electron and ion species \citep{Whipple81,Garrett81}. When the net current induced by these species are not zero, net charge is accumulated and there is a potential difference between the spacecraft and the surrounding plasma. Since the plasma species have different charges, each current is either decreased or increased by the change of the spacecraft potential, which continues until an equilibrium potential is reached, where the net current of all plasma species sum up to zero.
Understanding how the spacecraft potential is affected by its environment is important for interpreting the surrounding plasma conditions and for on-board plasma measurements, which can be significantly affected by the potential difference.
Near a given spacecraft surface a sheath boundary layer screens the potential of the surface over distances of the order of the Debye length \citep{Robertson13} and, for sufficiently high relative velocities between the spacecraft and surrounding plasma, a wake of depleted plasma is produced behind the obstacle \citep{Alpert66,Ludwig12,Miloch14}.
As the spacecraft absorbs incident plasma, a shock is unable to form upstream as it would for an obstacle able to withstand the incoming flow. If the relative velocity exceeds the Alfv\'enic or acoustic velocities, the associated discontinuities will instead trail downstream and form an approximately conical wake region of disturbed flow commonly referred to as a Mach cone \citep{Willis11}.
Significant asymmetries are introduced into the spacecraft-plasma interaction by the presence of a strong ambient magnetic field which generates a convective electric field in the spacecraft frame \citep{Marklund94,Pecseli12} and modifies the structure of the wake \citep{Darian17,Usui19}. The high velocities of the significantly lighter electrons compared with ions mean that a spacecraft immersed within a typical space plasma will charge to negative potentials \citep{Spitzer41} although processes such as photoelectron and secondary electron emission, can shift the potential to positive values \citep{Roussel04,Engwall06,Miloch09,Yaroshenko11}.
When passing through Saturn's ionosphere, the Cassini spacecraft's floating potential was, surprisingly, observed as positive on all encounters below 3000 km altitude \citep{Morooka19}. The presence of negative ions and dust grains in Saturn's ionosphere is evident from the Langmuir Probe (LP) measurements \citep{Morooka19}, which showed significant concentrations, up to over 95\% of the negative charge density, at altitudes from 3200 km down to the closest measurement at $\approx$1600 km. These appear to be an intrinsic part of the giant planet's ionosphere, distinct from the electron depletions associated with Saturn's main rings \citep[e.g.][]{Farrell18} and from the transient negative ion populations observed near Saturn's icy satellites \citep{Coates10,Desai18,Nordheim20}.
Unfortunately, it was not possible to obtain a mass distribution of these negative ions or dust grains due to Cassini's plasma spectrometers being offline and the Grand Finale plasma datasets therefore lack crucial pieces of information.
A body immersed in a plasma with large quantities of negative ions can attain a positive potential due to the reduced electron currents, as was shown for a dust grain by \citet{Kim06} using orbital motion limited theory, and this provides some indication of what might be occurring with Cassini. Similar electron depletions of up to $\approx$96 \% were, however, observed by Cassini within Titan's ionosphere, where the spacecraft potential was consistently negative \citep{Wahlund05,Crary09,Shebanits16,Desai17a}. Significant unknowns therefore remain regarding how spacecraft interact with these outer solar system plasmas.
Plasmas with a significant negative ion content (ion-ion or dusty plasmas) possess very different characteristics to typical electron-ion plasmas. The reduced mobility of the heavier negative charge carriers alters the electric field screening and increases Debye length scales. Plasma conductivities can reverse \citep{Muralikrishna06,Shebanits20} and the plasma can host a variety of altered instabilities and wave phenomena \citep{Shukla02,Desai17b}. The interaction of Cassini with Saturn's ionosphere therefore represents a class of physics quite different to the classic view of spacecraft charging within space plasmas.
In this article, we describe a self-consistent three dimensional Particle-In-Cell (PIC) study where a model Cassini spacecraft is simulated immersed within plasmas representative of Saturn's ionosphere as observed during the Grand Finale. Section \ref{method} describes the simulation approach, how Cassini is modelled, and the available measurements of Saturn's ionospheric plasma. Section \ref{Results} then applies these simulations to Saturn's ionosphere with regards to specific and general parameterisations and conducts a parametric survey to assess the sensitivity of the results to the measured and inferred plasma properties. Section \ref{summary} concludes with a summary of the key findings.
\section{Simulation Development}
\label{method}
\subsection{EMSES}
\label{EMSES}
The three dimensional Particle-In-Cell simulation code Electro-Magnetic Spacecraft Environment Simulation (EMSES) has been developed for the self-consistent analysis of spacecraft-plasma interactions on either an electromagnetic or electrostatic basis \citep{Miyake09}. The electrostatic version of EMSES is utilised as the typical Alfv\'en velocities in Saturn's ionosphere are significantly greater than the spacecraft velocity and in this regime Alfv\'enic perturbations are assumed to form only a small contribution to the plasma currents \citep{Rehman14}.
Within the PIC approximation \citep{Hockney81,Birdsall85}, individual particles are represented by large quantities of ``super-particles'' which are integrated through the same equations of motion as a real particle \citep{Boris70}.
The plasma is defined to consist of an arbitrary number of plasma species in a drifting Maxwellian velocity distribution to represent in-flowing plasma. Each species has mass and charge normalized to the proton scale with a real ion-to-electron mass ratio. In this study a negatively charged ion component is included, which is required to study the target plasma environments.
The positions of the super-particles are tracked continuously in the simulation box, but the electric and the magnetic fields are assigned and updated only on the grid points based on the \citet{Yee1966} algorithm, and interpolated onto the super-particles' positions during calculation. The charge density profile at each timestep is used to solve Poisson's equation for the electrostatic potential with Dirichlet boundary conditions.
The simulations are run in the spacecraft frame and consist of a three dimensional box with inflow and outflow boundary conditions along a specified direction of plasma flow, and periodic boundary conditions orthogonal to these.
Within the domain the spacecraft is considered as a perfect conductor with separate boundary treatments for both longitudinal and transverse electric fields. The spacecraft surface can accumulate charge caused by impinging super-particles, and redistributes this to ensure the spacecraft is equipotential using the capacity matrix method \citep{Hockney81,Miyake09}.
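The PIC cycle summarised above (deposit charge on the grid, solve Poisson's equation with Dirichlet boundaries, interpolate the field back to the super-particles) can be sketched in one dimension. This is an illustrative toy of ours, not EMSES: normalised units, cloud-in-cell weighting, grounded walls.

```python
import numpy as np

# One electrostatic PIC cycle in 1D with grounded (Dirichlet) walls at x=0, L
ng, L = 64, 1.0
dx = L / ng

rng = np.random.default_rng(0)
xp = rng.uniform(0.25 * L, 0.75 * L, 1000)  # super-particle positions
qp = -1e-3                                  # charge per super-particle

# 1) charge deposition with linear (cloud-in-cell) weighting
rho = np.zeros(ng + 1)
j = (xp / dx).astype(int)
w = xp / dx - j
np.add.at(rho, j, qp * (1 - w) / dx)
np.add.at(rho, j + 1, qp * w / dx)

# 2) Poisson's equation phi'' = -rho/eps0 with phi(0) = phi(L) = 0,
#    discretised as a tridiagonal system on the interior nodes
eps0 = 1.0
A = (np.diag(-2.0 * np.ones(ng - 1)) + np.diag(np.ones(ng - 2), 1)
     + np.diag(np.ones(ng - 2), -1)) / dx**2
phi = np.zeros(ng + 1)
phi[1:-1] = np.linalg.solve(A, -rho[1:-1] / eps0)

# 3) E = -grad(phi) on the grid, interpolated back to the particles
E = -np.gradient(phi, dx)
Ep = E[j] * (1 - w) + E[j + 1] * w  # field used in the particle push
```

The linear weights conserve charge exactly (the deposited density integrates to the total super-particle charge), which is one reason cloud-in-cell weighting is standard in PIC codes.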
\begin{figure*}[ht]
\centering
\hspace{-3em}
\includegraphics[width=0.9\textwidth]{model2.png}
\caption{Simulation configuration for Cassini during Grand Finale Rev 292 ingress at 2500 km Saturn altitude. The X, Y and Z axes correspond to the Z$_{s/c}$, X$_{s/c}$, and Y$_{s/c}$ axes of the Cassini spacecraft attitude coordinate system, respectively, and the precise simulation and plasma parameters are provided in Table \ref{table}.
\label{cassini}}
\end{figure*}
\subsection{Application to Cassini}
\label{model}
\subsubsection{Main Body}
\label{body}
The Cassini spacecraft is a three-axis stabilised spacecraft of approximately six metres in length. Previous spacecraft charging simulations have considered Cassini in two dimensions \citep{Olson10} and as a cylinder in three dimensions during the 2004 Saturn orbit insertion \citep{Yaroshenko11}. In this study we model Cassini's non-uniform shape encompassed within the simulation domain of 12.8$\times$12.8$\times$12.8 m$^3$, across 128$\times$128$\times$128 grid cells. Cassini is approximated using two structures: a large thin cylinder representing the antenna dish with an approximated diameter of 4 metres and width of 0.55 metres, and a longer cylinder representing the main body with an approximated diameter of 2.2 metres and length of 3.8 metres. These are separated by a 0.55 metre gap, as per the real Cassini spacecraft. Figure \ref{cassini} shows a schematic of the simulation geometry where the X, Y and Z axes correspond to the Z$_{s/c}$, X$_{s/c}$, and Y$_{s/c}$ axes of the Cassini spacecraft attitude coordinate system, respectively, where X is the direction of the main thrusters, Y is opposite to the Langmuir probe direction and Z completes the right-hand set.
The dish and the main body of the spacecraft are considered as a single perfect conductor, as Cassini was designed. Although this approximation does not account for the curvature of the antenna dish, or other instruments attached onto the body, the most important factors for spacecraft charging are the surface area of structures larger than the Debye length scales as well as the ram profile for a fast moving object.
As far as the net surface current of the spacecraft and its potential evolution are concerned, this model is judged to reasonably capture the dynamics associated with Cassini's asymmetric shape interacting with Saturn's ionosphere.
\subsubsection{Langmuir Probe}
\label{lprobe}
The Langmuir Probe (LP) is represented by a small sphere at the side of Cassini, the precise extent of which is defined subgrid.
The LP is held at a bias relative to the main spacecraft, with the precise voltage difference specified according to which point in the LP voltage sweep between --4 V and 4 V is simulated. The time required for the currents to equilibrate is small compared to the sweep time scale of 0.5 s \citep{Gurnett04}, and a fixed bias within the sweep is therefore chosen. The whole Cassini model floats within the plasma environment and can therefore become charged relative to it, but the voltage between Cassini and the probe remains fixed.
For a Langmuir Probe at a net negative potential within an electron-ion plasma, the total external plasma current can be considered to derive purely from the positive ion current. This is because the lighter electrons are repelled and their current contribution can therefore be neglected. Similarly, at a net positive potential the external plasma current is saturated by the faster moving electrons and can be considered to consist purely of electrons.
In the presence of large negative ions, however, the situation is different. When the LP is negatively biased, the large negative ions and dust grains are still collected due to their high inertia (mass). At positive biases the current will be a combination of electrons and negative ions, but the significantly lighter electrons still dominate the total current.
The simulations are therefore run with the Langmuir Probe biased at a negative potential of --3 V to ensure no electrons are collected, as the magnitude of the bias potential greatly exceeds the electron thermal energy. The self-consistent addition of the negative ion component therefore allows us to constrain their contribution to the Langmuir Probe currents and to the overall potential accumulated by the spacecraft.
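A back-of-envelope energy comparison using the Table~\ref{table} values (the helper names and this estimate are our own illustration, not the self-consistent simulation result) makes the species selectivity of the --3 V bias explicit: the barrier exceeds the electron thermal energy by a factor of $\sim$30, while the ram kinetic energy of a 3 amu negative ion at the $\approx$39 km s$^{-1}$ flow speed far exceeds it.

```python
import math

# Species selectivity of the -3 V probe bias, using Table 1 values
e, amu = 1.602e-19, 1.661e-27
T_e = 0.093                                             # electron temperature, eV
v_ram = math.sqrt(0.189**2 + 37.3**2 + 12.2**2) * 1e3   # |v_flow|, m/s
bias = 3.0                                              # magnitude of probe bias, V

# ram kinetic energy of a 3 amu negative ion, in eV
E_negion_ram = 0.5 * 3.0 * amu * v_ram**2 / e
# Boltzmann suppression of the thermal electron current by the barrier
electron_suppression = math.exp(-bias / T_e)
```

Here `E_negion_ram` evaluates to roughly 24 eV, an order of magnitude above the 3 eV barrier, whereas the thermal electron current is suppressed by a factor of order $10^{-14}$.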
\subsection{Saturn's Ionospheric Plasma}
\label{Measurements}
Within Saturn's ionosphere the total current onto the spacecraft is a function of the positive ions, electrons and negative ions, charged dust, secondary electron emission and photoelectron emission. The photoelectron current at Saturn's orbit is several orders of magnitude lower than the ion and electron currents \citep{Holmberg17,Shebanits17}, and the secondary electron emission currents are assumed to be of a similar magnitude \citep{Morooka19}. These currents are therefore not included in this analysis.
The simulated currents can therefore be expressed as
\begin{equation}
I_{total} =
I_{electron} + I_{ion^+} +
I_{ion^-},
\label{eqcurrents}
\end{equation}
where I$_{ion^+}$ represents the positive ions and dust and I$_{ion^-}$ represents the negative ions and dust. While multiple positive and negative plasma components can be included, the lack of information on the mass distribution function leads us to use a mean mass to represent each of these components.
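To illustrate how the balance in Equation \ref{eqcurrents} can yield a positive floating potential, the sketch below solves $I_{total}(\phi)=0$ for a stationary sphere using orbital-motion-limited (OML) thermal currents, in the spirit of \citet{Kim06}. This is a deliberately simplified stand-in for the simulations (unmagnetised, Maxwellian, no ram flow; the helper names and bisection scheme are ours):

```python
import math

# Current balance for a stationary sphere with OML thermal currents
e, kB, me, amu = 1.602e-19, 1.381e-23, 9.109e-31, 1.661e-27
T = 0.093 * e / kB  # common temperature of all species (Table 1)

def oml_current(n, m, q_sign, phi):
    # signed current density collected by a sphere at potential phi:
    # attracted species get the OML (1 + x) enhancement, repelled species
    # the Boltzmann factor exp(x)
    vth = math.sqrt(kB * T / (2 * math.pi * m))
    x = q_sign * (-phi) * e / (kB * T)  # x > 0: species attracted
    return q_sign * e * n * vth * ((1 + x) if x > 0 else math.exp(x))

def floating_potential(f_neg, n=854e6, m_pos=1.4 * amu, m_neg=3.0 * amu):
    # bisection for the root of the net current (Equation 1)
    def I_total(phi):
        return (oml_current(n * (1 - f_neg), me, -1, phi)   # electrons
                + oml_current(n, m_pos, +1, phi)            # positive ions
                + oml_current(n * f_neg, m_neg, -1, phi))   # negative ions
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if I_total(lo) * I_total(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With the Table~\ref{table} masses and temperature this static estimate only turns positive for negative-ion fractions approaching unity; reproducing positive potentials at the observed $\sim$94\% electron depletions requires the flow, geometry and magnetisation effects retained in the full simulations.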
\subsubsection{Cassini Observations}
\label{observations}
The measured composition of Saturn's ionosphere provides the inputs to the simulations which produce the currents in Equation \ref{eqcurrents}. Post--2012, the Cassini Plasma Spectrometer (CAPS) was, unfortunately, turned off and the mass distribution of the ambient plasma in Saturn's ionosphere was therefore unknown. Cassini's Ion and Neutral Mass Spectrometer was able to provide some information on positive ion composition but this was limited to low masses of several amu due to the high spacecraft velocity \citep{Waite18}. The Langmuir Probe was, however, able to differentiate between the bulk ion, electron and negative ion currents \citep{Wahlund18,Hadid19,Shebanits20} and therefore provide estimates of their densities \citep{Morooka19}.
The electron density was determined from the positive bias side of the Langmuir Probe sweep \citep{Morooka19}. The positive ion density could, however, only be constrained as a lower limit due to the unknown mass distribution \citep{Shebanits13,Morooka19}. A lower limit on the negative ion density was therefore also available by assuming quasi-neutrality.
\begin{table}[ht]
\caption{Environmental and System Simulations Parameters}
\centering
\begin{tabular}{|
>{\columncolor[HTML]{CBCEFB}}l
>{\columncolor[HTML]{CBCEFB}}c |}
\hline
\multicolumn{2}{|c|}{\cellcolor[HTML]{CBCEFB}\textbf{Environmental Parameters}} \\
Plasma ion density, $n_0$ & 854 cm$^{-3}$ \\
Negative ion concentration, $n_{ni}$ & 40.2 \% \\
Ion mass, $m_i$ & 1.4 amu \\
Negative ion mass, $m^{-}_{i}$ & 3.0 amu \\
Electron temperature, $T_e$ & 0.093 eV \\
Ion temperature, $T_i$ & 0.093 eV \\
Negative ion temperature, $T^-_{i}$ & 0.093 eV \\
Magnetic field, $\vec{B}$ & [1.48$\hat{x}$, --14.8$\hat{y}$, 1.24$\hat{z}$] $\mu$T \\
Flow velocity, $\vec{v}_{flow}$ & [--0.189$\hat{x}$, --37.3$\hat{y}$, --12.2$\hat{z}$] km s$^{-1}$ \\
Alfv\'en speed, $v_{A}$ & 11,149 km s$^{-1}$ \\
Ion acoustic speed, $v_{S}$ & 2.56 km s$^{-1}$ \\
Debye length, $\lambda_D$ & 10.05 cm \\
Electron gyroperiod, $\tau_{ge}$ & 2.38 $\mu$s \\
Electron plasma period, $\tau_{pe}$ & 3.81 $\mu$s \\
Ion gyroperiod, $\tau_{gi}$ & 6.12 ms \\
Ion plasma period, $\tau_{pi}$ & 0.193 ms \\
Negative ion gyroperiod $\tau^-_{gi}$ & 13.1 ms \\
Negative ion plasma period, $\tau^-_{pi}$ & 0.283 ms \\
\multicolumn{2}{|c|}{\cellcolor[HTML]{CBCEFB}\textbf{System Parameters}} \\
Grid width, $\Delta$r & 10 cm \\
Time step, $\Delta$t & 0.033 $\mu$s \\
Simulation time, $t$ & 0.67 ms \\
Particles per cell & 20 \\
Domain size & 12.8$^3$ m$^3$ \\
Probe Bias, $\phi_{LP}$ & --3 V \\ \hline
\end{tabular}
\label{table}
\end{table}
The electron density was observed to increase with decreasing altitude as Cassini sampled denser regions of Saturn's ionosphere \citep{Persoon19,Morooka19}. The estimated negative ion density was also observed to increase, and at a greater rate than the electron density. As electrons are lost to the negative ions/dust grains, the electron depletion also increased with decreasing altitude, and reached an estimated lower bound of $94$ \% \citep{Morooka19}. The electron depletion is traditionally quoted as a ratio of negative to positive ion densities such that a 94 \% depletion corresponds to 6 \% of the total negatively charged species being electrons, with the rest being negative ions. The Langmuir Probe determined a mean electron temperature of $\approx$0.1 eV and, based upon current balance, an estimate of the minimum mean positive ion mass of $\approx$5 amu at the deepest point sampled \citep{Morooka19}. The electron temperature was used as a conservative upper limit on the ion temperatures, and reducing the ion temperatures below this was later found not to have a significant influence on the simulation results. Within Saturn's ionosphere the spacecraft potential varied between --1.25 and +0.75 V with a positive floating potential on every encounter below 3000 km.
\subsubsection{Simulation Inputs}
\label{inputs}
Rev 292 is selected as a representative flyby with sufficient data from which input parameters are derived. In the first instance, Cassini is simulated at an altitude of 2500 km during ingress where the LP observations revealed an estimated plasma density of 854 cm$^{-3}$, temperature of 0.093 $eV$, and the lower bound on the estimated electron depletion was 40 \%. A higher electron density decreases the Debye length which is the primary constraint on the simulation grid size. This altitude therefore provided an optimal balance between computational load and studying regions of interest with large negative ion concentrations. The magnetic field was measured to be 14,925 nT which results in the electrons being highly magnetised with gyroradii of the order of centimetres and therefore significantly less than the spacecraft size. The positive and negative ions are only weakly magnetised, with gyroradii of tens to hundreds of metres, and therefore much larger than the spacecraft and the simulation domain.
Collision frequencies in Saturn's ionosphere are much less than a hertz \citep{Shebanits20} and the associated period is significantly greater than the total simulation time of less than a millisecond. During Rev 292 the spacecraft potential was --0.12 V at this altitude, and varied between --0.75 and +0.65 V. The precise simulation input parameters are provided in Table \ref{table}.
The relative orientation of the spacecraft to the plasma and magnetic field varies very slowly throughout a flyby and is different for different flybys.
The simulations described herein do not attempt to reproduce the Cassini measurements precisely, instead aiming to capture the key features of the plasma interaction and the associated charge accumulation.
\section{Results}
\label{Results}
\subsection{Global Interaction}
\label{global}
The simulated interaction between Cassini and Saturn's ionosphere is shown in Figure~\ref{global1}. The left-hand (a--b), centre (c--d) and right-hand (e--f) panels show electron, ion and negative ion densities respectively, with the upper (a, c, e) and lower panels (b, d, f) respectively showing x--y and y--z slices through the simulation. The results are displayed at the end of the simulation run. The two cylinders representing Cassini's antenna dish and main body are apparent as distinct regions absent of plasma. The plasma velocity is arriving predominantly along the y-direction at an angle of $\approx 168^{\circ}$ to the near-oppositely directed magnetic field. The Langmuir Probe is pointed in the spacecraft ram direction and therefore directly samples the incoming plasma flow. The probe is most visible in Figure~\ref{global1}(a) at x=4 and y=4 as a local depletion in the electron density due to the negative bias of --3 V.
\begin{figure*}[ht]
\hspace{-2em}
\includegraphics[width=1.1\textwidth]{globalr.png}
\caption{Two dimensional slices of the electron (a--b), ion (c--d) and negative ion density (e--f) for simulation parameters outlined in Table \ref{table} and described in Section \ref{global}. The magnetic field is oriented along 1.48$\hat{x}$, --14.8$\hat{y}$, 1.24$\hat{z}$ and a schematic of the simulation geometry is displayed in Figure \ref{cassini}.}
\label{global1}
\end{figure*}
The high relative velocity of the plasma interaction produces an extended wake of depleted plasma.
The length of the wake is greater for positive and negative ions due to their larger inertia resulting in slower refilling. When looking at a slice through Cassini's main body, panels (b, d, f), the wake exhibits the characteristic structure associated with supersonic flow around a cylinder. The slice in the x-y plane in panels (a, c, e), however, reveals the wake as non-uniform and highly structured behind the antenna dish and the gap between the antenna dish and the main body.
The electron density reveals wing-like structures of enhanced and then depleted density attached to the spacecraft in the y-z plane in panel (b). Similar wing-like structures have been identified at moving bodies produced by Alfv\'en and whistler waves \citep{Drell65,Neuebauer70,Stenzel89}, which propagate at characteristic velocities associated with their relative speed to the spacecraft such that they advect downstream. Electron-wing structures, similar to those reported herein, have been identified as consisting of propagating Langmuir waves \citep{Miyake20} produced by electrons reflected from a negatively charged spacecraft which are then guided by the magnetic field lines. Calculation of the Langmuir wave group velocity, c$_L$=$\sqrt{3 k_B T_e / m_e }\cong$ 1,200 km s$^{-1}$, and the angle to the magnetic field, $\theta = \arctan({v_{flow}/c_L}) \approx 2^{\circ}$, confirms this mode propagates at small angles to the magnetic field in the spacecraft frame, as can be seen in Figure \ref{global1}. Figure \ref{global1}a shows these wing structures striking the inflow boundary condition and modifying the electron density. Figure \ref{global1}b however reveals that the effects of this are shifted towards positive z-regions and mostly miss the spacecraft interaction and therefore have a negligible numerical impact. The electron-wings propagate upstream of Cassini and, for flybys where the magnetic field is more closely aligned with the spacecraft velocity, they may therefore have influenced the properties of the assumed pristine plasma ahead of Cassini.
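The quoted angle can be checked directly. In this sketch c$_L$ is taken from the value above, while the flow speed of $\approx$34 km s$^{-1}$ is an assumed Grand Finale ram speed, not a value from the text:

```python
import math

c_L    = 1200e3  # Langmuir-wave group speed quoted in the text [m/s]
v_flow = 34e3    # assumed plasma flow speed in the spacecraft frame [m/s]

theta = math.degrees(math.atan(v_flow / c_L))
print(f"wing angle to B: {theta:.1f} deg")   # ~1.6 deg, consistent with the ~2 deg quoted
```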
Different density distributions are also visible for the different species and the electrons particularly show enhanced spatial variations around the spacecraft. To either side behind the antenna dish in Figure \ref{global1}a, two distinct regions of depleted flow are visible which appear to produce a vortex-type structure in the x-z plane. This is analysed further in Section \ref{Gradient}.
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{Fig3_ver4.png}
\caption{Currents onto the spacecraft within Saturn simulations. (a) shows the spacecraft potential over time, (b) shows the current onto the spacecraft over time, (c) shows the current onto the dish over time, and (d) shows the current decomposed into the positive and negative parts.
\label{currents}}
\end{figure*}
\subsection{Spacecraft Potential}
\label{spacecraftpotential}
In this Section we compare the simulation results to Cassini Langmuir Probe measurements of the spacecraft potential and plasma currents during ingress on Rev 292. Figure \ref{currents} shows the evolution of the spacecraft potential and plasma currents through this simulation run for input parameters derived for when Cassini was at a Saturn altitude of 2500 km in the northern hemisphere.
The net spacecraft potential is shown in panel (a) and incident plasma currents in panel (b), both of which start near zero. As the spacecraft charges, the main spacecraft body and the probe remain at a fixed bias relative to one another of 3 V. The net current becomes strongly negative before returning to zero when an equilibrium is reached after $\approx$1 $\mu$s. The spacecraft has at this point accumulated a negative floating potential of just under --0.49 V with the probe biased at --3.49 V. The simulation is, however, run for significantly longer to ensure steady state. The final simulated potential is --0.42 V, 0.3 V more negative than the observed potential of --0.12 V. The sensitivity of the spacecraft potential to variations of this magnitude is analysed in Section \ref{survey}.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Fig4_ver5.png}
\caption{Currents onto the simulated Langmuir Probe decomposed into the positive and negative parts and compared to the Cassini Langmuir Probe readings during Rev 292 at 2500 km altitude ingress.
\label{probe}}
\end{figure*}
The current decomposition for each of the electrons, positive and negative ions onto each part of the spacecraft is shown in panels (c--d). The spike in the negative current in panel (b) corresponds to the electron current, which also peaks at the beginning of the simulation. The spacecraft surfaces accumulate the associated negative charge and increasingly repel electrons, thereby reducing the electron current. These initial electron dynamics reflect how the much lighter electrons are far more sensitive to changes in the spacecraft potential than the negative ions which, due to their larger momentum, are not as easily deflected.
At the equilibrium potential, all three plasma species contribute comparable levels of current to the spacecraft.
This current balance is significantly different from that in typical electron-ion plasmas
where the electrons constitute the majority of the incident current \citep[][and references therein]{Miyake20}. Comparing the currents received by the dish and by the main body, the current composition and relative magnitudes are also different. This is due to the difference in their surface areas but also the deflection of the particles' trajectories by the ambient magnetic field, resulting in increased quantities of ions and negative ions striking the sides of the dish despite the incident flow being approximately parallel to it.
\subsection{Langmuir Probe Currents}
\label{probecurrents}
Figure \ref{probe} shows the currents onto the simulated Langmuir Probe. The positive ion current stabilises after $\approx$ 10 $\mu$s at $\approx$ 125 nA, slightly lower than the observed total current of $\approx$ 150 nA. When summed over all plasma species, however, the simulated net current is around 30 \% less than that observed. The initial analysis of the LP current assumed that the total current derived from positive ions and that the negative ion current was negligible.
As a result of the simulation it is now clear that the negative ion current is not negligible and, due to the negative ions' inertia, can be approximated as an almost constant current regardless of the potential difference. These results highlight that while the electron density can be accurately determined, the total plasma density cannot, owing to the uncertain ion and negative ion components. Scaling up the plasma concentration would therefore increase the net currents onto the probe. There could also be other effects at play, such as the impact and break-up of large dust particles on the probe and spacecraft, which are not included in these simulations.
\subsection{Parametric Survey}
\label{survey}
To further understand the sensitivity of the results to the unknown or inferred parameters, we carried out a parametric study and systematically varied the following parameters: the electron depletion, the ion and negative ion masses, the electron temperature, and the ion and negative ion temperatures, which were previously assumed to be in thermal equilibrium with the electrons. The total plasma density, however, remains unchanged and the species densities are scaled within this constraint. The positive ion mass is now set at 5 amu to represent the measured lower bound in the deep ionosphere \citep{Morooka19}.
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{Fig5_ver3.png}
\caption{Parametric scan of electron density depletion for a range of negative ion masses (a) and positive ion masses (b).}
\label{scanmass}
\end{figure*}
Figure \ref{scanmass}(a) shows the spacecraft potential as a function of electron depletion. The electron density is varied from 1 \% up to 50 \% of the total ion density, for negative ion masses of 4, 6, 50 and 100 amu. For all negative ion masses simulated, as the electron density tends to zero, the spacecraft potential becomes less negative and eventually reaches positive values. For larger negative ion masses the spacecraft becomes positive at slightly higher electron densities, although this shift becomes increasingly small: the difference between 50 and 100 amu is significantly smaller than that between 4 and 50 amu.
The effect of a varying positive ion mass on the spacecraft potential is shown in Figure \ref{scanmass}b, where the negative ion mass is held constant at 16 amu and the spacecraft similarly tends to positive potentials for increased electron depletions. Smaller positive ion masses produced positive potentials at lower electron depletions than larger masses, and this change also becomes smaller for the larger masses. The spacecraft potential also appears more sensitive to the positive ion mass than to the negative ion mass.
These trends are explained by the variation in the relative mobilities of the positive and negative charge carriers. The production of a positive potential is due to the positive ions becoming the most mobile charge carriers, which results in the spacecraft accumulating a net positive current. \citet{Kim06} indeed predict that a body immersed in an electron-depleted negative ion plasma can gain a positive potential when the electron density reaches just a few percent of the total plasma density. Figure \ref{scanmass}a demonstrates this is possible for the Cassini spacecraft in Saturn's ionosphere.
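The reversed-mobility argument can be illustrated with a minimal, unmagnetised orbit-motion-limited (OML) current-balance sketch. All numbers here are illustrative assumptions (thermal fluxes only, a common temperature of 0.1 eV, positive and negative ion masses of 5 and 50 amu), so the crossover depletion differs from the simulated one, which also includes ram flow, magnetisation and the spacecraft geometry:

```python
import math

ME_AMU = 1.0 / 1836.15   # electron mass in atomic mass units
T = 0.1                  # common species temperature [eV] (assumed)

def flux(n, m_amu, q_sign, phi):
    """OML thermal flux in arbitrary units (~ n / sqrt(m)): Boltzmann-
    suppressed when the species is repelled, linearly enhanced when attracted."""
    g = n / math.sqrt(m_amu)
    x = q_sign * phi / T          # x > 0 means the species is repelled
    return g * math.exp(-x) if x > 0 else g * (1.0 - x)

def net_current(phi, f_e, m_pos=5.0, m_neg=50.0):
    """Net (positive minus negative) flux onto a body at potential phi,
    for electron fraction f_e of the negative charge."""
    return (flux(1.0, m_pos, +1, phi)
            - flux(f_e, ME_AMU, -1, phi)
            - flux(1.0 - f_e, m_neg, -1, phi))

def floating_potential(f_e):
    """Bisect for the potential [V] at which the net current vanishes."""
    lo, hi = -2.0, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_current(mid, f_e) > 0:
            lo = mid              # body still collects net positive current
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(floating_potential(0.40))   # 40 % electrons: clearly negative
print(floating_potential(0.002))  # sub-percent electrons: positive
```

Even in this toy balance the floating potential flips sign only once the electron fraction falls to around the percent level, qualitatively reproducing the trend of Figure \ref{scanmass}.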
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{Fig6_ver3.png}
\caption{Parametric scan of ion (a) and electron (b) temperatures.}
\label{temperaturescan}
\end{figure*}
To study the sensitivity to the electron and ion temperatures, the negative ion mass is held at 16 amu and positive ion mass at 5 amu but the depletion rate is now set at 95 \% to represent the parameter regime close to where Cassini gains a positive potential. The ion temperature variation, shown in Figure \ref{temperaturescan}a, reveals that the spacecraft potential increases as the temperature increases from the measured mean temperature of $\approx$0.1 eV, reaching --0.21 V when T$_{i}$=1 eV. For temperatures below the measured 0.1 eV, the spacecraft potential initially follows this trend but then, surprisingly, increases again when T$_{i} \leq$0.05 eV.
This slight effect is attributed to the ions becoming increasingly magnetised as the mean ion gyroradius decreases until it is of a comparable size to the spacecraft. The electron temperature variation, shown in Figure \ref{temperaturescan}b, has a much larger impact on the spacecraft potential due to the electrons' high relative mobility. The potential consequently varies from --0.05 V to --1.5 V, despite the electrons constituting just a few percent of the total plasma density.
\subsection{Potential gradient}
\label{Gradient}
The relative mobility of the positive and negative ions explains the positive potential. However, in Figure \ref{scanmass}a, the spacecraft still charges to a positive potential when using negative ions lighter than positive ions (4 and 5 amu respectively), so that the negatively charged particles remain the more mobile charge carriers. This therefore appears to contradict that explanation. Beyond the variation in the plasma parameters examined in Section \ref{survey}, the effect of the ambient magnetic field remains to be evaluated.
Within the next simulations the magnetic field direction is oriented perpendicular to the plasma flow direction, along the +z and --z axes. The influence of the magnetic field is felt in the generation of the convective electric field, E$_c$, in the spacecraft frame, namely,
\begin{equation}
\boldsymbol{E_c} = \boldsymbol{-v}_{flow} \times \boldsymbol{B_0},
\end{equation}
which produces a potential gradient along the main axis of the spacecraft \citep{Pecseli12}.
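The magnitude of this field can be estimated with the measured field strength; the $\approx$34 km s$^{-1}$ flow speed and the $\approx$4 m dish scale used below are assumptions for illustration, not values from the text:

```python
import numpy as np

B = np.array([0.0, 0.0, 14925e-9])   # measured field magnitude, here along +z [T]
v = np.array([0.0, 34e3, 0.0])       # assumed ram flow along +y [m/s]

E_c = -np.cross(v, B)                # convective field in the spacecraft frame [V/m]
L_dish = 4.0                         # assumed antenna-dish scale [m]

print(np.linalg.norm(E_c))           # ~0.5 V/m
print(np.linalg.norm(E_c) * L_dish)  # ~2 V potential drop across the dish scale
```

The resulting volt-scale potential difference across the spacecraft is comparable to the floating potentials discussed above, which is why the field orientation matters.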
Figure \ref{potential} shows the electron density and potential within the two simulations with the two different field orientations. The electrons can be seen to accumulate on one side of the spacecraft and the potential gradient across the spacecraft interaction is clearly visible. The potential distribution controls where the electrons impact the spacecraft and alters the effective spacecraft cross section due to Cassini's asymmetric shape. The spacecraft potential is consequently +0.38 V in the initial magnetic field configuration but --0.42 V with the field reversed. Further tests run without the presence of a magnetic field (not shown) also show that the spacecraft is unable to attain a positive potential when the negative ions are less massive than the positive ions.
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\textwidth]{potential_4plot_3.png}
\caption{Electron density (a \& c) and potential (b \& d) of Cassini immersed in a plasma with a negative ion mass of 4 amu and positive ion mass of 5 amu. The left-hand panels (a \& b) show the case where the magnetic field is oriented along B$_z$ which produces a spacecraft potential of --0.42 V whereas the right-hand panels (c \& d) show the case with the magnetic field reversed where the spacecraft potential is +0.38 V.
\label{potential}}
\end{figure*}
The perpendicular magnetic field enhances the magnetisation of the plasma interaction. The differential electron flows around Cassini can be seen to produce highly structured vortices at a scale that is comparable to Cassini itself. It is interesting to note that electrostatic spacecraft-scale modes have been identified in Saturn's deep ionosphere which are thought to be modulated by Cassini's presence \citep{Sulaiman17}.
\section{Conclusions}
\label{summary}
This study simulated the spacecraft-plasma interaction experienced by Cassini within Saturn's ionosphere during the Grand Finale using a self-consistent three-dimensional PIC code. The asymmetric spacecraft shape was taken into account and a Langmuir Probe was also modelled, which enabled preliminary comparisons between the simulated and observed currents.
The global interaction of the Cassini spacecraft was found to be strongly influenced by its asymmetric shape. Differential flows around Cassini's antenna produced a highly structured wake and spacecraft-scale electron vortices behind the spacecraft. Electron-wings associated with propagating Langmuir waves were also identified around the spacecraft which produced sharp enhancements to the in-flowing electrons. Similar electron-wings were previously identified in simulations of spacecraft moving through the polar regions of the Earth's ionosphere \citep{Miyake20} and these simulations indicate this phenomenon occurred at Cassini too.
An electron density of 40 \% of the total negative charge density was initially used in these simulations, as determined to be a lower bound by Cassini's Langmuir Probe at 2500 km Saturn altitude during Rev 292 \citep{Morooka19}. The simulated spacecraft potential was negative and in the same range as that observed: --0.42 V compared with the measured --0.12 V. The simulations indicated that in Saturn's ionosphere, the ion currents onto the spacecraft and Langmuir Probe were comparable to those from the electrons, a situation unique to plasmas depleted of electrons. The simulated currents onto the Langmuir Probe also revealed that the total currents were in the same range as, although slightly lower than, those measured. This is consistent with the 40 \% electron depletion being near the lower bound and indicates that the actual positive and negative ion density may have been higher.
Following these initial simulations a parametric study was carried out to examine how the floating potential is affected by the ambient conditions. To this end, we studied how the spacecraft potential changes with positive and negative ion mass and temperature as well as electron depletion rate.
As the electron depletion rate increased the spacecraft potential was observed to become less negative and even attained positive potentials of up to 0.55 V when just a few percent of electrons were contained within the plasma. Varying the plasma masses and temperatures had a similar effect in terms of charge mobility with the electron temperature found to have the largest influence.
The magnetic field orientation was also varied to examine how the induced electric field along Cassini's main axis affected the charge accumulated.
The direction of the magnetic field, and therefore of the convective electric field, produces a strong gradient in the plasma potential and results in oppositely charged currents preferentially collecting on different parts of the spacecraft. Given the large difference in surface area across Cassini due to its large antenna dish, reversing the magnetic field shifted the spacecraft potential through zero, from an initial +0.38 V to a comparable negative value of --0.42 V.
As well as providing an insight into the spacecraft plasma interactions experienced by Cassini during the Grand Finale, these simulations have provided an explanation for the positive potentials attained through considering classical charging theory and reversed charge mobility. Despite this, further effects such as secondary electron emission and the break-up of large dust or ice particles on the spacecraft, and an even more detailed Cassini model incorporating effects such as the curvature of the antenna dish, remain to be constrained. The study does indicate, however, that this approach presents a valuable method for understanding spacecraft charging in the outer solar system and can assist in interpreting in-situ measurements of these exotic environments.
\section*{Acknowledgements}
ZZ acknowledges an Undergraduate Summer Research Bursary from the Royal Astronomical Society. RTD acknowledges funding from NERC grant NE/P017347/1. YM and HU acknowledge grant 20K04041 from the Japan Society for the Promotion of Science: JSPS,
and support from the innovative High-Performance Computing Infrastructure (HPCI: hp200032) in Japan. OS acknowledges RS grant RP EA180014 and SNSA grant Dnr:195/20. This work has benefited from discussions with International Space Science Institute (ISSI) International Team 437. This work used the Imperial College High Performance Computing Service (doi: 10.14469/hpc/2232).
\newline
\section*{Data Availability}
All simulation data presented in this study can be retrieved from the Zenodo open-access repository at https://doi.org/10.5281/zenodo.4592954. Cassini Langmuir Probe data is available from the NASA Planetary Data System archive.
\section{Introduction}
In nuclear physics, there is a hierarchy of Effective Field Theories (EFTs)
which all describe nuclear phenomena at
a certain resolution scale (for reviews see, e.g.,
Refs.~\cite{Bedaque:2002mn,Epelbaum:2008ga,Hammer:2019poc}).
Pionless EFT describes the interactions of individual nucleons
at momenta small compared to the pion mass \cite{vanKolck:1997ut,Kaplan:1998tg,Kaplan:1998we,vanKolck:1998bw,Chen:1999tn}.
Apart from electroweak interactions, the effective Lagrangian
contains only short-range contact interactions between non-relativistic nucleons.
It can be understood as an expansion around the unitary limit of infinite
scattering length. The breakdown scale of pionless EFT
is set by the pion mass, $M_{high}\sim M_\pi$, while the typical
low-energy scale is $M_{low} \sim 1/a \sim k$.
For momenta $k\sim M_\pi$, pion exchange can no longer be treated as
a short-range interaction and has to be included explicitly. This
leads to chiral EFT whose breakdown scale $M_{high}$ is set by the chiral
symmetry breaking scale $\Lambda_\chi$~\cite{Weinberg:1990rz,Weinberg:1991um}.
The pionless theory exploits the large scattering length but
is independent of the mechanism responsible for it.
Thus it can be applied to a variety of systems ranging from
ultracold atoms to hadrons and nuclei.
At leading order (LO), one needs to resum a momentum-independent
contact interaction in order to describe the large scattering length
physics. This resummation is conveniently implemented using
dibaryon or dimer fields~\cite{Kaplan:1996nv}. At next-to-leading order (NLO)
the two-body ranges have to be included perturbatively.
In the dimer framework
this requires one insertion of the dimer kinetic-energy operator between LO
amplitudes. At higher orders, the procedure of
perturbative range insertions becomes tedious, and a direct calculation of the
corrections requires fully off-shell LO amplitudes. To avoid this, range
corrections can be resummed by including the effective range in the denominator
of the dimer propagator. Early on it was noted that
this resummation introduces a spurious pole in the deuteron
propagator~\cite{Bedaque:1997qi}.
Located at a momentum scale of roughly
200~MeV, it is outside the range of validity of the EFT and thus in
principle is an irrelevant UV artifact.
However, in three- and higher-body systems it can limit the range of
cutoffs that can be
used in the numerical solution of the scattering equations.
In the three-nucleon system, this is especially
true in the doublet S-wave of neutron-deuteron scattering
(triton channel) unless measures are taken to remove the pole.
In the quartet S-wave, due to the Pauli principle, the solution is not
sensitive to this deep pole and the cutoff can be made arbitrarily large.
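The location of this spurious pole can be made concrete with a short sketch. The two-body values used here, $a \simeq 5.42$~fm and $r \simeq 1.75$~fm (the nucleon-nucleon triplet channel), are illustrative assumptions; the poles of the range-resummed propagator $1/(-1/a - r{k^*}^2/2 + k^*)$ then follow from a quadratic equation:

```python
import math

hbarc = 197.327          # MeV fm
a, r = 5.42, 1.75        # illustrative triplet scattering length / effective range [fm]

# Zeros of the denominator -1/a - (r/2) k^2 + k, i.e. (r/2) k^2 - k + 1/a = 0
disc = math.sqrt(1 - 2 * r / a)
k_shallow  = (1 - disc) / r      # physical (deuteron-like) pole
k_spurious = (1 + disc) / r      # spurious deep pole

print(k_shallow * hbarc)         # ~46 MeV (deuteron binding momentum ~45.7 MeV)
print(k_spurious * hbarc)        # ~180 MeV, the "roughly 200 MeV" UV artifact
```

The shallow root reproduces the deuteron binding momentum, while the deep root lands near the momentum scale quoted above, outside the validity range $M_{high}\sim M_\pi$ of the pionless theory.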
In Ref.~\cite{Bedaque:2002yg} it was proposed to partially re-expand
the resummed propagators and to use terms up to order $n$ for a calculation at
N$^n$LO. Using these ``partially resummed'' propagators generates all desired
terms at a given order, but still retains some higher-order corrections, which
have to be small.\footnote{We note that it is important to keep the cutoff
at or below the breakdown scale of the theory to satisfy this requirement.}
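Schematically, with $\tau_{\rm LO}(k^*)=(-1/a+k^*)^{-1}$ denoting the leading-order two-body amplitude, the range-resummed propagator is the geometric series
\begin{eqnarray}
\frac{1}{-1/a-\frac{r}{2}\,{k^*}^2+k^*}=\tau_{\rm LO}(k^*)\,\sum_{n=0}^\infty\biggl(\frac{r}{2}\,{k^*}^2\,\tau_{\rm LO}(k^*)\biggr)^n\, ,
\end{eqnarray}
and truncating the series at $n$ terms yields the ``partially resummed'' propagator used at N$^n$LO, which contains all desired range insertions plus a subset of higher-order terms.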
The first strictly perturbative NLO calculation of $nd$ scattering in the
doublet $S$ channel was carried out in \cite{Hammer:2001gh}, implementing
the procedure suggested in \cite{Bedaque:1998km}. Ji et
al.~\cite{Ji:2011qg,Ji:2012nj} extended these calculations to N$^2$LO
and pointed out that an additional three-body term enters at NLO when
the scattering length is varied. This is particularly relevant
for applications in ultracold atoms and quark mass extrapolations. Finally,
Vanasse \cite{Vanasse:2013sda} developed a scheme that avoids the numerically
expensive determination of full off-shell amplitudes made in previous
perturbative calculations. Overall, he obtains
$nd$ phase shifts at N$^2$LO which are in good agreement with the empirical
behavior up to laboratory energies of $\simeq 24$~MeV.
In this paper, we revisit the problem of range corrections in the
three-body system from the perspective of the three-body quantization
condition in a finite volume, following the formalism developed in
Refs.~\cite{Hammer:2017uqm,Hammer:2017kms}, see
also~\cite{Hansen:2014eka,Hansen:2015zga,Mai:2017bge} for alternative formulations.
For simplicity, we focus on the three-boson
system, which is known to have the same qualitative features as the
neutron-deuteron doublet $S$-wave channel.
We expect the approach of \cite{Vanasse:2013sda} to be problematic
numerically in a finite volume. Indeed, in a finite box of size $L$,
the S-wave dimer propagator gets replaced by~\cite{Hammer:2017uqm,Hammer:2017kms}:
\begin{eqnarray}\label{eq:tauL}
\tau_L({\bf k},{k^*}^2)=\frac{1}{k^*\cot\delta(k^*)+S({\bf k},{k^*}^2)}\, .
\end{eqnarray}
Here, ${\bf k},k^*$ denote the total three-momentum of a dimer and
the magnitude of the relative momentum of two particles, constituting
a dimer, in their center-of-mass frame. Furthermore, $\delta(k^*)$ denotes
the pertinent phase shift and the quantity $S({\bf k},{k^*}^2)$ stands for
the infinite sum
\begin{eqnarray}\label{eq:S}
S({\bf k},{k^*}^2)=-\frac{4\pi}{L^3}\,\sum_{\bf p}\frac{1}{{\bf p}^2+{\bf p}{\bf k}+{\bf k}^2-mE}\, ,\quad\quad {\bf p}=\frac{2\pi}{L}\,{\bf n}\, ,\quad
{\bf n}\in\mathbb{Z}^3\, ,
\end{eqnarray}
where $E$ is the total energy of the particle-dimer system in the rest frame.\footnote{This sum diverges and has to be properly regularized, e.g., by using dimensional regularization. The details can be found in Refs.~\cite{Hammer:2017uqm,Hammer:2017kms}.}
In the infinite volume, the sum turns into the integral that can be easily evaluated, leading to a well-known result.
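As an aside, the sum-minus-integral regularization underlying this definition can be checked numerically. In the rest frame at threshold (${\bf k}=0$, $mE\to 0$), the regularized sum reduces, up to powers of $2\pi/L$, to $\sum_{{\bf n}\neq 0}|{\bf n}|^{-2}-4\pi\Lambda$, whose $\Lambda\to\infty$ limit is the constant $-8.913633\ldots$ familiar from L\"uscher-type finite-volume formulas. A small numerical sketch:

```python
import numpy as np

# Regularize sum_{n != 0} 1/|n|^2 over a cubic lattice by subtracting
# its linear divergence 4*pi*R (the spherical integral up to radius R).
# The limit is the constant -8.913633... of Luescher-type formulas.
N = 80
r = np.arange(-N, N + 1)
n2 = (r[:, None, None]**2 + r[None, :, None]**2 + r[None, None, :]**2).ravel()
n2 = np.sort(n2[n2 > 0]).astype(float)
cum = np.cumsum(1.0 / n2)

# Average over many radii R to damp shell-counting fluctuations
Rs = np.arange(60.0, 80.0, 0.25)
idx = np.searchsorted(n2, Rs**2, side='right') - 1
c = np.mean(cum[idx] - 4 * np.pi * Rs)
print(c)   # close to -8.9136
```

The spherical-shell average damps the number-theoretic fluctuations in the lattice-point counts; without it the partial sums oscillate slowly around the limit.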
The problem with expanding the finite-volume dimer propagator in a manner
proposed in
Refs.~\cite{Hammer:2001gh,Bedaque:1998km,Ji:2011qg,Ji:2012nj,Vanasse:2013sda}
is related to the singularities of the denominator. Namely,
from Eqs.~(\ref{eq:tauL}) and (\ref{eq:S})
it can be immediately seen that, in a finite volume,
the propagator has an infinite tower of poles above the elastic threshold,
corresponding to the finite-volume energy spectrum in the two-particle
subsystems. In the infinite volume, these poles condense and form an elastic
cut. Next, we note that, in a finite volume,
the expansion will not work in the vicinity of these
poles, producing denominators that are more and more singular.
Bearing this fact in mind, we aim at an alternative procedure for removing
the spurious poles, which
is not based on such an expansion and, hence,
high powers of the energy denominator never appear. Below, we shall
demonstrate how this goal can be achieved.
The paper is organized as follows. In Sect.~\ref{sec:notations}
we set up the EFT framework which allows one to study the three-particle
problem in a systematic manner. In Sect.~\ref{sec:formalism} we formulate
a method that allows one to consistently remove a spurious subthreshold
pole from the dimer propagator. In Sect.~\ref{sec:numerics} this method
is numerically tested within a toy model. The convergence of the approach,
as well as the applicability of the power counting is discussed in detail.
Finally, Sect.~\ref{sec:concl} contains our conclusions.
\section{Formalism}
\label{sec:notations}
\subsection{Non-relativistic Lagrangians}
We consider the system of three identical non-relativistic bosons with a mass $m$, described
by the field $\psi$.
In this system a non-derivative three-body interaction is required for
renormalization already at leading order~\cite{Bedaque:1998kg}.
The Lagrangian takes the form (only S-wave contributions are shown explicitly):
\begin{eqnarray}\label{eq:particle}
\mathscr{L}&=&\psi^\dagger\biggl(i\partial_0+\frac{\nabla^2}{2m}\biggr)\psi
-\frac{C_0}{2}\,(\psi^\dagger\psi)^2
+\frac{C_2}{4}\,\biggl((\psi^\dagger{\stackrel{\leftrightarrow}{\nabla}}^2\psi^\dagger)\psi^2+\mbox{h.c.}\biggr)
\nonumber\\[2mm]
&-&\frac{D_0}{6}\,(\psi^\dagger\psi)^3
-\frac{D_2}{9}\,
\biggl((\psi^\dagger{\stackrel{\leftrightarrow}{\nabla}}^2\psi^\dagger)\psi^\dagger\psi^3+\mbox{h.c.}\biggr)+\cdots\, ,
\end{eqnarray}
where $\stackrel{\leftrightarrow}{\nabla}=\frac{1}{2}\,
(\stackrel{\rightarrow}{\nabla}-\stackrel{\leftarrow}{\nabla})$
is a Galilei-invariant derivative. The couplings $C_0,\,C_2$,
describe the interactions in the two-particle sector and can be related to
the S-wave
scattering length $a$ and effective range $r$, respectively.
$D_0$ and $D_2$ correspond to
three-body interactions with zero/two derivatives.
Higher-order terms with more derivatives
are not shown explicitly.
To describe the three-body systems, it is convenient to work in the
particle-dimer formalism. The dimer
can be introduced as an auxiliary integration variable in the path integral.
In this manner, it is obvious that the theory with dimers leads to the same
Green functions.
The particle-dimer Lagrangian takes the form\footnote{See, e.g., Refs.~\cite{Kaplan:1996nv,Bedaque:1998km,Bedaque:2002yg}.}
\begin{eqnarray}\label{eq:dimer}
\mathscr{L}_d&=&\psi^\dagger\biggl(i\partial_0+\frac{\nabla^2}{2m}\biggr)\psi
+\sigma d^\dagger\biggl(i\partial_0+\frac{\nabla^2}{4m}+\Delta\biggr)d
+\frac{f_0}{2}\,(d^\dagger\psi^2+\mbox{h.c.})+\cdots
\nonumber\\[2mm]
&+&h_0d^\dagger d\psi^\dagger\psi +h_2d^\dagger d(\psi^\dagger\nabla^2\psi+(\nabla^2\psi^\dagger)\psi)+\cdots\, .
\end{eqnarray}
Here, the ellipses stand for the terms that contain more space derivatives or higher partial waves, $d$ denotes the dimer field, and the sign $\sigma=\pm 1$ determines the sign of the effective range. In the examples discussed below,
we have $\sigma=-1$. The two
Lagrangians (\ref{eq:particle}) and (\ref{eq:dimer}) describe the same physics, so the couplings can be matched to each other. This matching has been
considered in the literature many times (see, e.g., Refs.~\cite{Bedaque:1999vb,Braaten:2004rn}) and we do not repeat it here.
Note only that the two couplings $C_0,C_2$ (or, equivalently, the scattering length and the effective range) can be traded for
the two parameters $\Delta,f_0$, whereas the other two couplings $D_0,D_2$ can be expressed
through $h_0,h_2$.
In the dimer picture, the three-particle amplitude is expressed through the particle-dimer
amplitude in a closed form. The latter obeys an integral equation (the Faddeev or Skorniakov-Ter-Martirosian equation), which can be readily obtained by considering the diagrammatic expansion of the amplitude. Note that the dimer need not correspond to a physical particle. Within this approach,
it is just a useful mathematical tool that makes the bookkeeping of various diagrams
extremely simple. In the numerical study that follows, however, we shall adjust the parameters so that the dimer is a stable particle, and use parameter values from the two-nucleon system. The on-shell particle-dimer scattering
amplitude then has a direct physical interpretation.
\subsection{Faddeev equation for the particle-dimer scattering}
As already mentioned, the particle-dimer scattering amplitude in the non-relativistic effective theory obeys
the Faddeev equation
\begin{eqnarray}\label{eq:BS}
M({\bf p},{\bf q};E)=Z({\bf p},{\bf q};E)
+8\pi\int^\Lambda\frac{d^3{\bf k}}{(2\pi)^3}\,Z({\bf p},{\bf k};E)\tau({\bf k};E)M({\bf k},{\bf q};E)\, ,
\end{eqnarray}
where $E$ is the total energy of the particle-dimer system in the center-of-mass (CM) frame, and $\tau({\bf k};E)$ denotes the two-body amplitude.
It is always assumed that $E$ has an infinitesimal positive imaginary part
$E\to E+i \varepsilon$. As in the Lagrangian (\ref{eq:dimer}), we have
included only S-wave two-body interactions. Higher partial-wave
interactions contribute only beyond the order considered here.
The S-wave two-body amplitude in Eq.~(\ref{eq:BS}) is given by:
\begin{eqnarray}
\tau({\bf k};E)\doteq \tau(k^*)=\biggl(k^*\cot\delta(k^*)+k^*\biggr)^{-1}\, ,
\end{eqnarray}
where $\delta(k^*)$ denotes the S-wave phase shift, and $k^*$ is the magnitude of the
boosted relative momentum. In the non-relativistic kinematics,
\begin{eqnarray}
k^*=\sqrt{\frac{3}{4}\,{\bf k}^2-mE}\, .
\end{eqnarray}
Here, $m$ stands for the particle mass. Further, {\em for small momenta,} the effective-range
expansion can be carried out:
\begin{eqnarray}
k^*\cot\delta(k^*)=-\frac{1}{a}-\frac{1}{2}\,r{k^*}^2+\cdots\, ,
\label{Propagator}
\end{eqnarray}
where $a$ and $r$ stand for the scattering length and the effective range, respectively.
The kernel in the Faddeev equation consists of the one-particle
exchange contribution and a tower of polynomial terms with increasing powers of momenta, which are obtained from the
particle-dimer interaction Lagrangian:
\begin{eqnarray}\label{eq:Z}
Z({\bf p},{\bf q};E)=\frac{1}{{\bf p}^2+{\bf q}^2+{\bf p}{\bf q}-mE}
+\frac{H_0}{\Lambda^2}+\frac{3H_2}{8\Lambda^4}\,({\bf p}^2+{\bf q}^2)\, ,
\end{eqnarray}
where the parameters $H_0,H_2,\ldots$ can be expressed in terms of the
effective couplings in the Lagrangian $h_0,h_2,\ldots$. Further, $H_0,H_2,\ldots$ depend on the cutoff
$\Lambda$ so that the scattering amplitude $M({\bf p},{\bf q};E)$ is $\Lambda$-independent at a given order in the low-energy expansion.
Carrying out a partial-wave expansion in the Faddeev equation
and projecting onto the S-wave results in:
\begin{eqnarray}
M(p,q;E)=Z(p,q;E)
+\frac{4}{\pi}\,\int^\Lambda k^2dk\,
Z(p,k;E)\tau(k^*)M(k,q;E)\,,
\label{FaddeevSWave}
\end{eqnarray}
where
\begin{eqnarray}
Z(p,q,E)=\frac{1}{2pq}\,\ln\frac{p^2+q^2+pq-mE}{p^2+q^2-pq-mE}
+\frac{H_0}{\Lambda^2}+\frac{3H_2}{8\Lambda^4}\,( p^2+q^2)+\cdots\, ,
\end{eqnarray}
and the subscript $\ell=0$ has been
dropped in all amplitudes, only in order to keep the formulae
simple and transparent. If needed, the formalism can be easily extended
to include higher partial waves (see, e.g., Ref.~\cite{Hammer:2017kms}).
Further,
as shown in Ref.~\cite{Bedaque:2002yg}, introducing a trimer auxiliary
field in the
Lagrangian along with the dimer field,
it is possible to simplify the Faddeev equation. In the kernel of the
transformed equation, the three-momenta are traded
for the total energy $E$:
\begin{eqnarray}\label{eq:ZE}
Z(p,q,E)\to\frac{1}{2pq}\,\ln\frac{p^2+q^2+pq-mE}{p^2+q^2-pq-mE}
+\frac{H_0}{\Lambda^2}+\frac{H_2}{\Lambda^4}\,(mE+\gamma^2)+\cdots\, ,
\end{eqnarray}
where $\gamma=\sqrt{mE_d}$ and $E_d$ denotes the binding energy of the dimer.
The amplitude, which is a solution of the equation with the transformed
kernel, is equal to the original amplitude up to the higher-order terms.
It is slightly easier to use the transformed kernel in numerical calculations
and we shall stick to this option in the following.
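For orientation, the numerical solution of Eq.~(\ref{FaddeevSWave}) with the transformed kernel can be sketched in a few lines. The sketch below is our own illustration, not the code used for the results of Sec.~\ref{sec:numerics}: it takes the leading-order two-body propagator (effective range set to zero), sets $H_0=0$ instead of fitting it, and uses illustrative values for $a$, $m$, $\Lambda$, the energy $E$ and the fixed second argument $q_0$.

```python
import numpy as np

# Minimal discretization sketch of Eq. (FaddeevSWave) with the transformed
# kernel (eq:ZE); assumptions: LO propagator (r = 0), H0 = 0 (unfitted),
# illustrative parameter values.
hbarc = 197.3269788             # MeV fm
a = 5.4194 / hbarc              # scattering length, MeV^-1
m = 938.272                     # particle mass, MeV
E = -5.0                        # MeV, below the particle-dimer threshold
Lam, H0, q0 = 600.0, 0.0, 30.0  # cutoff, contact term, spectator momentum (MeV)

# Gauss-Legendre nodes and weights mapped onto [0, Lambda]
x, wgl = np.polynomial.legendre.leggauss(64)
k = 0.5 * Lam * (x + 1.0)
w = 0.5 * Lam * wgl

def tau(kk):
    """LO two-body propagator 1/(-1/a + k*); free of spurious poles."""
    ks = np.sqrt(0.75 * kk**2 - m * E)
    return 1.0 / (-1.0 / a + ks)

def Z(p, q):
    """One-particle exchange plus contact term, cf. Eq. (eq:ZE)."""
    return (np.log((p**2 + q**2 + p * q - m * E)
                   / (p**2 + q**2 - p * q - m * E)) / (2.0 * p * q)
            + H0 / Lam**2)

# discretized equation (1 - K) M = Z at fixed second argument q0
K = (4.0 / np.pi) * Z(k[:, None], k[None, :]) * (w * k**2 * tau(k))[None, :]
z = Z(k, q0)
Msol = np.linalg.solve(np.eye(len(k)) - K, z)
resid = np.linalg.norm((np.eye(len(k)) - K) @ Msol - z) / (
    np.linalg.norm(z) + np.linalg.norm(Msol))
```

In an actual calculation, $H_0(\Lambda)$ would be fixed by a three-body input, and the amplitude would be evaluated at the on-shell point to extract the phase shift via Eq.~(\ref{eq:EFTphase0}).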
In the presence of a stable dimer, the on-shell amplitude $M$ is related to the
particle-dimer scattering phase, according to:
\begin{eqnarray}\label{eq:EFTphase0}
M(p,p,E_p)=\frac{3}{16\gamma}\,\frac{1}{p\cot\delta(p)-ip}\, .
\end{eqnarray}
The phase shift is real below the dimer breakup threshold, located at $E=0$.
\section{Problem and Solution}
\label{sec:formalism}
\subsection{Spurious states}
The hard scale $M_{high}$ in the two-body interactions is set by the effective range $r$.
To make the further discussion as transparent as possible, let us assume that
the true dynamics of the system, which {\em at small momenta} is described
by the non-relativistic effective Lagrangian, is such that no deeply bound two-body
states with $\sqrt{mE_2}\simeq |r|^{-1}$ emerge. The effective field theory
setting in the present form could not be used to consistently describe such states anyway,
and we merely discard them (in the two-particle sector, the presence of such states at small momenta
will show up only indirectly, through their contributions to the effective couplings).
Only shallow bound states with $\sqrt{mE_2}\ll |r|^{-1}$ will be allowed. In particular,
in the following, we shall tune our parameters so that only one shallow bound state
-- a dimer -- with the binding energy $E_d>0$ exists. Hence, the two-body scattering
length $a$ must be large and positive, $a\gg |r|$.
After this introduction, let us formulate the problem. If in the Faddeev
equation~(\ref{eq:BS}), the integration momentum $|{\bf k}|$ runs from $0$ to
$\Lambda$, the quantity $k^*$ varies from $k^*=\sqrt{-mE}$ to
$k^*\simeq \frac{\sqrt{3}}{2}\,\Lambda$ (if $E<0$, the quantity $k^*$ is always real).
Thus, the subthreshold amplitude at large momenta enters
the equation. In the effective theory, all that can be done is to approximate
$k^*\cot\delta(k^*)$ by means of the effective range expansion, which
does not make sense at large momenta. One might argue that the behavior at large
momenta should not really matter and can be taken care of by an appropriate
renormalization prescription. Hence, it would be harmless to extend the integration
to high momenta. In reality, however, the situation is more subtle.
Let us retain only the first two terms in the effective-range expansion. Then, if $r>0$,
the two-body amplitude $\tau(k^*)$ develops a {\em spurious pole} at large momenta:
\begin{eqnarray}\label{eq:twopole}
\tau(k^*)=\frac{1}{-1/a-r{k^*}^2/2+k^*}=\frac{-2/r}{(k^*-k_1)(k^*-k_2)}\, ,
\end{eqnarray}
where
\begin{eqnarray}
k_1=\frac{2/a}{1+\sqrt{1-2r/a}}\simeq \frac{1}{a}\, ,\quad\quad
k_2=\frac{1+\sqrt{1-2r/a}}{r}\simeq \frac{2}{r}\, .
\end{eqnarray}
It is obvious that $k_1$ and $k_2$ correspond to the physical dimer
and to a spurious deep pole, respectively. Such a spurious pole emerges because the effective-range expansion
is applied in a region where it is no longer valid. Including higher orders in the expansion
will generate even more spurious poles.
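These statements are straightforward to check numerically. The short sketch below is our own, using the $np$-triplet values of $a$ and $r$ quoted in Sec.~\ref{sec:numerics}; it evaluates the two pole positions and the residues of $\tau(k^*)$ at these poles.

```python
import numpy as np

# Pole positions and residues of tau(k*) = 1/(-1/a - r k*^2/2 + k*).
# Our own check; a and r are the np-triplet values used in the numerical
# section below.
hbarc = 197.3269788            # MeV fm
a = 5.4194 / hbarc             # MeV^-1
r = 1.7563 / hbarc             # MeV^-1

sq = np.sqrt(1.0 - 2.0 * r / a)
k1 = (2.0 / a) / (1.0 + sq)    # shallow (dimer) pole, roughly 1/a
k2 = (1.0 + sq) / r            # spurious deep pole, roughly 2/r

# the approximations k1 ~ 1/a and k2 ~ 2/r hold up to corrections of order r/a
assert abs(k1 * a - 1.0) < 2.0 * r / a
assert abs(k2 * r - 2.0) < 2.0 * r / a

# residue at a pole k_i is 1/g'(k_i), where g(k*) = 1/tau(k*)
res1 = 1.0 / (1.0 - r * k1)    # positive: physical dimer
res2 = 1.0 / (1.0 - r * k2)    # negative: the deep pole is a ghost
```

The opposite signs of the two residues quantify the statement, made below, that the deep pole corresponds to a ghost.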
An immediate consequence of the emerging spurious pole is that the integration
contour hits a singularity where, originally, there was no singularity. It should be
understood that the presence of the singularity is not a problem {\it per se:} in fact, in a theory where
physical deeply bound states are present, there are also singularities and one has
to handle them by deforming the integration contour or otherwise.
On the other hand, the fact that such a spurious pole contributes to the
unitarity relation
is a true problem. If in reality there is no such state, there should be no
such contribution at all. Even worse, in the case relevant to the two-nucleon
problem, $a>0$ and $r>0$,
the spurious pole has a residue with the wrong sign,
leading to negative probabilities. Indeed, as can be
seen from Eq.~(\ref{eq:twopole}), the residues at two poles have opposite signs and,
since the dimer corresponds to a true bound state, the second pole has to correspond
to a ghost. In addition, it is not immediately
clear how such contributions can be removed by changing the renormalization prescription
for the effective couplings, which are presumed to be real.
In the literature, one encounters different prescriptions for treating such spurious
singularities. For example, one may keep the cutoff $\Lambda$ low enough, so that the
spurious poles do not appear on the integration path.
The shortcomings of this approach, both conceptual and practical, are obvious.
First of all, one cannot remove the cutoff and ensure the independence of the results
on the regularization. Moreover, the upper bound on the cutoff depends on the order
at which one is working and on the values of the effective-range expansion parameters.
Hence, setting up a universal upper bound is not possible in general.
The power counting of pionless EFT stipulates that the effective range
corrections in the three-body system are perturbative, since $|a|\gg|r|$
\cite{Bedaque:1998km}. This approach is implemented in
Refs.~\cite{Ji:2011qg,Vanasse:2013sda}. It is reminiscent of the threshold expansion
of Beneke and Smirnov~\cite{Beneke:1997zp} (see also Refs.~\cite{Mohr:2003du,Mohr:2005pv}), and the heavy baryon expansion
in Chiral Perturbation Theory~\cite{Jenkins:1990jv,Mannel:1991mc,Bernard:1992qa}. This approach is based on the observation that
the Taylor-expansion of the propagators alters only high-momentum contributions in
the Feynman graphs -- exactly those, which are responsible for the trouble. Namely,
following Refs.~\cite{Ji:2011qg,Vanasse:2013sda}, one may expand the quantity $\tau(k^*)$, given by
Eq.~(\ref{eq:twopole}), in series in the effective range $r$ and include the
contributions in strict perturbation theory. The energy denominators,
$(-1/a+k^*)^{-n}$, obtained as a result of this expansion, do not produce spurious poles.
The resulting Faddeev equation can be readily solved -- the solution is written
down as a series in powers of the effective range parameter $r$. The method is very
appealing, successful, and fully consistent. However, using this method in a finite volume, following
the approach of Refs.~\cite{Hammer:2017uqm,Hammer:2017kms}, is not very convenient numerically, since the
denominator in a finite volume becomes very singular (cf. the discussion
in the introduction). For this reason,
in this paper we propose an alternative approach to this problem, where only the spurious
pole contribution is expanded. In this manner, high powers of the energy denominator never appear. In addition, in our opinion, this method could be even simpler in applications.
\subsection{Method}
Let us first assume that we work below the dimer breakup threshold,
$E<0$, where the argument is most transparent.
We start by splitting off two poles in Eq.~(\ref{eq:twopole}) from each other:
\begin{eqnarray}
\tau(k^*)=\frac{2(k_1+k_2)/r}{(k_2-k_1)(k^*+k_2)(k^*-k_1)}
-\frac{4k_2/r}{(k_2-k_1)({k^*}^2-k_2^2)}\, .
\end{eqnarray}
Here, the first and second terms contain the dimer and the spurious pole, respectively. Note now
that the second term is, in fact, a low-energy polynomial: since $k_2$ is of the order of the heavy scale, $k_2\sim M_{high}$,
it can be expanded in a Taylor series in ${k^*}^2$. Doing this, one gets
rid of the spurious pole. It should, however, be demonstrated that the change in the
amplitude, which results from replacing the deep pole by its Taylor expansion,
can indeed be accounted for by adjusting the effective couplings. Below,
we shall demonstrate this by explicit calculations at one loop and
interpret this adjustment physically.
It is convenient to introduce the following notation:
\begin{eqnarray}
f(k^*)=-\frac{4k_2/r}{(k_2-k_1)({k^*}^2-k_2^2)}
-\frac{4k_2/r}{(k_2-k_1)k_2^2}\biggl\{1+\frac{{k^*}^2}{k_2^2}+\frac{{k^*}^4}{k_2^4}+\cdots\biggr\}\, ,
\label{f}
\end{eqnarray}
as well as
\begin{eqnarray}
f_1(k^*)&=&-\frac{4k_2/r}{(k_2-k_1)({k^*}^2-k_2^2)}
-\frac{4k_2/r}{(k_2-k_1)k_2^2}\, ,
\nonumber\\[2mm]
f_2(k^*)&=&-\frac{4k_2/r}{(k_2-k_1)({k^*}^2-k_2^2)}
-\frac{4k_2/r}{(k_2-k_1)k_2^2}\biggl\{1+\frac{{k^*}^2}{k_2^2}\biggr\}\, ,
\nonumber\\[2mm]
f_3(k^*)&=&-\frac{4k_2/r}{(k_2-k_1)({k^*}^2-k_2^2)}
-\frac{4k_2/r}{(k_2-k_1)k_2^2}\biggl\{1+\frac{{k^*}^2}{k_2^2}+\frac{{k^*}^4}{k_2^4}\biggr\}\, ,
\label{f123}
\end{eqnarray}
and so on.
In other words, from the term corresponding to the spurious pole, we subtract its Taylor
expansion, up to some order. Further, writing down
$\tau(k^*)=[\tau(k^*)-f(k^*)]+f(k^*)$,
the Faddeev equation can be rewritten in the following form:
\begin{eqnarray}
M({\bf p},{\bf q};E)&=&W({\bf p},{\bf q};E)
+8\pi\int^\Lambda\frac{d^3{\bf k}}{(2\pi)^3}\,W({\bf p},{\bf k};E)
[\tau(k^*)-f(k^*)]M({\bf k},{\bf q};E)\, ,
\nonumber\\[2mm]
W({\bf p},{\bf q};E)&=&Z({\bf p},{\bf q};E)
+8\pi\int^\Lambda\frac{d^3{\bf k}}{(2\pi)^3}\,Z({\bf p},{\bf k};E)f(k^*)W({\bf k},{\bf q};E)\, .
\label{ChangedFaddeev}
\end{eqnarray}
Note now that in the first equation of the above system, which determines the
amplitude $M$ one is looking for, the spurious pole is replaced by its Taylor expansion.
Consequently, the culprit has been removed. The question remains, however, whether
the effective potential $W$, which is determined by the second equation, has the same
properties as $Z$, i.e., is given by a sum of the one-particle exchange diagram
and a low-energy polynomial. In this case, one could forget about the second equation
altogether, since the difference between $W$ and $Z$ could be accounted for by a change of
the renormalization prescription.
In the following, we expand the quantity $W$ in the Born series
\begin{eqnarray}\label{eq:PT}
W=Z+ZfZ+ZfZfZ+\cdots =W^{(1)}+W^{(2)}+W^{(3)}+\cdots\, ,
\end{eqnarray}
in order to study the structure of each term separately. In particular, considering a couple
of simple examples at the second order, we verify that $W^{(2)}$ has indeed the
structure which was conjectured from the beginning. The generalization to higher
orders is clear.
Let us now start with the calculation of $W^{(2)}$. The quantity $Z$, displayed in
Eq.~(\ref{eq:Z}), contains an infinite number of terms, and hence $W^{(2)}$ will contain
an infinite number of cross products. To illustrate our statement, we pick out
a single term. The simplest choice is the one proportional to $H_0^2$:
\begin{eqnarray}
W^{(2)}_{00}&=&-\frac{32\pi k_2/r}{k_2-k_1}\,\biggl(\frac{H_0}{\Lambda^2}\biggr)^2
I_{00}\, ,
\nonumber\\[2mm]
I_{00}&=&
\int^\Lambda \frac{d^3{\bf k}}{(2\pi)^3}\,
\biggl\{\frac{1}{{k^*}^2-k_2^2-i\varepsilon}+\frac{1}{k_2^2}\biggl(
1+\frac{{k^*}^2}{k_2^2}+\cdots\biggr)\biggr\}\, .
\end{eqnarray}
Note that the sign of $i\varepsilon$ follows from the prescription $E\to E+i\varepsilon$.
The imaginary part of $I_{00}$ is a constant, which depends on the energy $E$:
\begin{eqnarray}
\mbox{Im}\,I_{00}=\frac{2}{3\sqrt{3}\pi}\,\sqrt{k_2^2+mE}=
\frac{2k_2}{3\sqrt{3}\pi}\,\biggl\{1+\frac{mE}{k_2^2}+\cdots\biggr\}\, .
\end{eqnarray}
We assume here that the cutoff $\Lambda$ is chosen large enough, so that the pole
lies inside the integration region -- otherwise, the imaginary part would vanish.
Further, the real part is also a low-energy polynomial:
\begin{eqnarray}
\mbox{Re}\,I_{00}=\frac{2}{3\pi^2}\,\Lambda+\frac{1}{2\pi^2k_2^2}\,\biggl(
\frac{1}{3}\,\Lambda^3+\frac{1}{k_2^2}\biggl(\frac{3}{20}\,\Lambda^5
-\frac{1}{3}\,\Lambda^3mE\biggr)+\cdots\biggr)\, .
\end{eqnarray}
It can be seen that the real part can be removed by altering the renormalization
prescription. The sole subtle point is that the counterterms depend on (are low-energy
polynomials of) the total three-particle CM energy $E$ which, in the Lagrangian,
translates into time derivatives on both the particle and dimer fields.
The following discussion
demonstrates how one could circumvent this problem. First, if one is interested only
in the on-shell particle-dimer scattering matrix, one could directly use the equations of
motion (EOM) in the particle-dimer Lagrangian, trading the time derivatives for space
derivatives. In the description of generic three-particle processes, however, the dimers
may go off shell. In this case, one should first integrate the dimer field out and then
use the EOM for the particle fields, which leaves the three-body $S$-matrix elements
unchanged.
Applying the same procedure to the imaginary part leads, however, to a conceptual
inconsistency, since the counterterms, which are needed to remove it, should be complex.
The problem with the spurious poles shows up exactly at this place. Note, for example,
that if the cutoff $\Lambda$ is chosen so small that the integration contour does not
hit the pole, then the problem does not arise, since the imaginary part vanishes.
It is also clear that one could circumvent the problem, which originates from the use of
the effective-range expansion beyond the range of its applicability, by merely
dropping the imaginary part by hand (because, in the exact theory, there are no poles
and thus no imaginary part).
As a side remark, this discussion also shows how physical deep bound
states should be treated.
The corresponding poles are physical and cannot be eliminated from the
theory. On the other hand, it would be inconsistent to treat them in the present setting
explicitly, because their binding energy is determined by the
hard scale $M_{high}$. According to the above
discussion, such a deep bound state pole will show up indirectly, through the contribution
to the effective couplings, which become complex. In contrast to the case of spurious
poles, the imaginary part corresponds to the contribution of the physical
deep bound state to the unitarity relation and cannot be discarded. The potential $W$ becomes now a
kind of ``optical potential''~\cite{optical}, in which the shielded states manifest
themselves through the imaginary part. It should also be mentioned
that the contribution from the physical states to the imaginary part always comes with the
correct sign, in accordance with unitarity.
Next, we shall consider another contribution to the quantity $W^{(2)}$ that will allow us
to have a closer look at its structure at small momenta. Namely, we single
out the term where both factors of $Z$ are replaced by the one-particle exchange
contribution:
\begin{eqnarray}
W^{(2)}_{ee}&=&-\frac{32\pi k_2/r}{k_2-k_1}\,(I_{\sf pole}-I_{\sf subtr})\, ,
\nonumber\\[2mm]
I_{\sf pole}&=&\frac{4}{3}\,\int^\Lambda\frac{d^3{\bf k}}{(2\pi)^3}\,
\frac{1}{{\bf p}^2+{\bf p}{\bf k}+{\bf k}^2-mE-i\varepsilon}\,
\frac{1}{{\bf k}^2-\rho^2-i\varepsilon}\,
\nonumber\\[2mm]
&\times& \frac{1}{{\bf k}^2+{\bf k}{\bf q}+{\bf q}^2-mE-i\varepsilon}\, ,
\nonumber\\[2mm]
I_{\sf subtr}&=&-\frac{1}{k_2^2}\,\int^\Lambda\frac{d^3{\bf k}}{(2\pi)^3}\,
\frac{1}{{\bf p}^2+{\bf p}{\bf k}+{\bf k}^2-mE-i\varepsilon}\,
\biggl(1+\frac{{k^*}^2}{k_2^2}+\cdots\biggr)
\nonumber\\[2mm]
&\times&\frac{1}{{\bf k}^2+{\bf k}{\bf q}+{\bf q}^2-mE-i\varepsilon}\, ,
\end{eqnarray}
and $\rho^2=\frac{4}{3}\,(k_2^2+mE)$.
The integral $I_{\sf pole}$ is ultraviolet-finite, and hence the cutoff $\Lambda$ can
be taken to infinity. Using the Feynman trick, it can be written in the following form:
\begin{eqnarray}
I_{\sf pole}=\frac{1}{12\pi}\,\int_0^1dx \int_0^1ydy
\frac{1}{(A+By+Cy^2-i\varepsilon)^{3/2}}\, ,
\end{eqnarray}
where
\begin{eqnarray}
A&=&-\rho^2\, ,
\nonumber\\[2mm]
B&=&-mE+\rho^2+x{\bf p}^2+(1-x){\bf q}^2\, ,
\nonumber\\[2mm]
C&=&-\frac{1}{4}\,(x{\bf p}+(1-x){\bf q})^2\, .
\end{eqnarray}
The integral over the variable $y$ can be performed, yielding:
\begin{eqnarray}\label{eq:pole}
I_{\sf pole}&=&\frac{1}{6\pi}\,\int_0^1dx\frac{1}{4AC-B^2}\,\biggl(
\frac{2A}{(-\rho^2-i\varepsilon)^{1/2}}
\nonumber\\[2mm]
&-&\frac{2A+B}{(-mE+x{\bf p}^2+(1-x){\bf q}^2-\frac{1}{4}\,(x{\bf p}+(1-x){\bf q})^2-i\varepsilon)^{1/2}}\biggr)\, .
\end{eqnarray}
The first term is again a low-energy polynomial (with complex coefficients) and can
therefore be discarded, while the second term is not. Expanding the numerator in the
integrand in a Taylor series, we get:
\begin{eqnarray}\label{eq:ABC}
-\frac{2A+B}{4AC-B^2}&=&-\frac{3}{4k_2^2}
\nonumber\\[2mm]
&-&\frac{3}{16k_2^4}\,\biggl(5mE-9(x{\bf p}^2+(1-x){\bf q}^2)+3(x{\bf p}+(1-x){\bf q})^2\biggr)+\cdots\, .
\end{eqnarray}
Next, consider the subtraction integral
$I_{\sf subtr}=\sum_n I_{\sf subtr}^{(n)}/k_2^{2n}$.
The leading term is given by:
\begin{eqnarray}
I_{\sf subtr}^{(1)}&=&
-\int^\Lambda\frac{d^3{\bf k}}{(2\pi)^3}\,
\frac{1}{{\bf p}^2+{\bf p}{\bf k}+{\bf k}^2-mE-i\varepsilon}\,
\frac{1}{{\bf k}^2+{\bf k}{\bf q}+{\bf q}^2-mE-i\varepsilon}
\nonumber\\[2mm]
&=&-\frac{1}{8\pi}\,\int_0^1dx\frac{1}
{(-mE+x{\bf p}^2+(1-x){\bf q}^2-\frac{1}{4}\,(x{\bf p}+(1-x){\bf q})^2-i\varepsilon)^{1/2}}\, .
\end{eqnarray}
It is immediately seen that the leading-order term $I_{\sf subtr}^{(1)}$ cancels the leading-order non-polynomial piece in $I_{\sf pole}$ that emerges from the first term
in the expansion in Eq.~(\ref{eq:ABC}). The higher-order terms such as
\begin{eqnarray}
I_{\sf subtr}^{(2)}=-\frac{3\Lambda}{8\pi^2}
-\frac{1}{32\pi}\,\int_0^1dx\frac{5mE-9(x{\bf p}^2+(1-x){\bf q}^2)+3(x{\bf p}+(1-x){\bf q})^2}
{(-mE+x{\bf p}^2+(1-x){\bf q}^2-\frac{1}{4}\,(x{\bf p}+(1-x){\bf q})^2-i\varepsilon)^{1/2}}\, ,
\end{eqnarray}
have the same property.
The integral cancels against the next-to-leading order non-polynomial contribution, emerging from the second term in Eq.~(\ref{eq:ABC}), and only the polynomial contribution
is left at this order. The role of the higher-order subtraction terms is similar --
they merely remove the non-polynomial contributions at the pertinent order, leaving
only the polynomial parts (as it should indeed be).
The general pattern becomes crystal clear already from these examples,
and there is no need to
consider higher-order terms. To summarize,
the quantity $W$ is indeed a low-energy polynomial
up to an order fixed by the order of the subtracted polynomial. The coefficients of this
polynomial are energy-dependent and complex. The energy-dependence can be
eliminated through the use of the EOM. The imaginary parts, arising from the spurious
poles, are artifacts of the use of the effective-range expansion for large momenta. Our
prescription consists of dropping these artifacts since, in the full theory,
there are no such poles and hence no complex potential. Thus, one may finally
assume that $W=Z$, modulo the change in the renormalization prescription.
Final remarks about unitarity are in order. The un-expanded two-body
amplitude, which still contains the spurious pole, obeys exact two-body
unitarity by construction, whereas this property is lost after expansion.
However, the violation is small in the physically relevant region of small
momenta, because ${{k^*}^2}\!/k_2^2\sim M_{low}^2/M_{high}^2$ is a small parameter there. Moreover, the violation of unitarity in this region
can be systematically reduced
by including higher-order terms in the Taylor expansion.
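The size of the effect can be quantified directly. In the following sketch (our own, with the $np$-triplet values of $a$ and $r$), the remainders $f_n(k^*)$ of Eq.~(\ref{f123}) are evaluated at $k^*=0.1\,k_2$; each additional subtraction suppresses the remainder by a factor ${k^*}^2/k_2^2=0.01$.

```python
import numpy as np

# Remainders f_n(k*) of Eq. (f123) at a small momentum; our own check with
# the np-triplet values of a and r.
hbarc = 197.3269788
a, r = 5.4194 / hbarc, 1.7563 / hbarc
sq = np.sqrt(1.0 - 2.0 * r / a)
k1, k2 = (2.0 / a) / (1.0 + sq), (1.0 + sq) / r
pref = -4.0 * k2 / r / (k2 - k1)

def f_n(ks, n):
    """Spurious-pole term minus the first n terms of its Taylor expansion."""
    x = (ks / k2) ** 2
    return pref * (1.0 / (ks**2 - k2**2)
                   + sum(x**j for j in range(n)) / k2**2)

ks = 0.1 * k2                          # so that k*^2/k2^2 = 0.01
f1, f2, f3 = (f_n(ks, n) for n in (1, 2, 3))
ratio12, ratio23 = f2 / f1, f3 / f2    # both equal to (k*/k2)^2
```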
Further, our argument can be extended to energies above the breakup
threshold, $E>0$. In this region, it is no longer true that the contributions
to the imaginary part of $W$ come solely from the spurious subthreshold
pole. In fact, they can emerge also from the denominators, corresponding
to the particle exchange between the dimer and spectator particle. This
contribution to the imaginary part is physical and should be retained.
Note, however, that this contribution emerges exclusively from the region
of small integration momenta, where the quantity $k^*$ is small. In this
region, the quantity $f(k^*)$ is also small (it vanishes as a power of
${{k^*}^2}\!/k_2^2$).
Hence, the corresponding contribution
to the imaginary part of $W$ should be small. It can be systematically
reduced by including higher-order terms in the Taylor expansion.
Thus it can be safely neglected.
It should also be mentioned that the relation of the amplitude to the phase shift is
modified along with the unitarity relation, if the subtraction is done. In particular,
instead of Eq.~(\ref{eq:EFTphase0}), one now has:
\begin{eqnarray}\label{eq:EFTphase}
M(p,p,E_p)=\frac{3}{16\gamma}\,\frac{k_2-k_1}{k_2+k_1}\,\frac{1}{p\cot\delta(p)-ip}\, .
\end{eqnarray}
Note that Eq.~(\ref{eq:EFTphase}) reduces to Eq.~(\ref{eq:EFTphase0}) in the limit
$r\to 0$, as it should.
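This limit can be checked numerically. In the following sketch (our own, with the scattering length held fixed), the factor $(k_2-k_1)/(k_2+k_1)$ deviates from unity by an amount of order $r/a$.

```python
import numpy as np

# The factor (k2 - k1)/(k2 + k1) in Eq. (eq:EFTphase) tends to 1 as r -> 0;
# our own check, with the deviation scaling like r/a.
hbarc = 197.3269788
a = 5.4194 / hbarc              # MeV^-1, fixed

def factor(r):
    sq = np.sqrt(1.0 - 2.0 * r / a)
    k1 = (2.0 / a) / (1.0 + sq)
    k2 = (1.0 + sq) / r
    return (k2 - k1) / (k2 + k1)

dev_small = abs(factor(1e-3 / hbarc) - 1.0)   # r = 10^-3 fm
dev_tiny = abs(factor(1e-6 / hbarc) - 1.0)    # r = 10^-6 fm
```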
\subsection{Order of the subtraction polynomial}
It is natural to ask how large the order of the subtracted polynomial
in $f(k^*)$ should be. Does the accuracy of the method increase if one subtracts more terms? The answer is no. Recall that one has to compensate the subtraction by adjusting the effective couplings in the Lagrangian. If one does not have enough couplings $H_0,H_2,\ldots$, a further subtraction does not lead to improved accuracy.
Since the problem is highly non-perturbative, it is difficult to establish
the order of the subtraction polynomial a priori without a non-perturbative
calculation. We stress that the
requirement to promote the three-body interaction to leading order
in Ref.~\cite{Bedaque:1998kg} was also established by explicitly
investigating the cutoff dependence of
numerical solutions of Eq.~(\ref{FaddeevSWave}).
Alternatively, one can analyze the asymptotic behavior of
non-perturbative solutions \cite{Danilov:61,Griesshammer:2005ga}.
In order to get a first idea of the optimal number of subtractions, we
start with a perturbative analysis of Eq.~(\ref{FaddeevSWave}), being well
aware of the shortcomings of this approach.
It is convenient to consider
the effective potential $W$, rather than the amplitude $M$. It is
straightforward to establish counting rules for $W$ in perturbation theory. Indeed, assume that one is using dimensional
regularization to tame ultraviolet divergences in this quantity (the use of
any other regularization, say, the cutoff regularization, will alter only
the polynomial part of $W$, which can be compensated by a choice of the renormalization prescription). Further, the quantity $Z$
(containing the exchange diagram) counts
at $O(p^{-2})$ for small three-momenta (all low-energy constants count at
order $p^0$). The quantities $f_1(k^*),f_2(k^*),\ldots$, introduced in
Eq.~(\ref{f123}), for small values of $k^*$ count as $O(p^2),O(p^4),\ldots$.
Finally, the integration measure $d^3{\bf k}$ counts at $O(p^3)$.
Let us now consider the perturbative expansion of the potential $W$, given
by Eq.~(\ref{eq:PT}). Each consecutive term in this expansion contains one
additional factor $Z$, $f(k^*)$ and $d^3{\bf k}$ -- hence, the power in $p$
increases at least by one, when one goes to higher-order terms.
Hence, the most stringent constraint on the number of subtractions arises
from the term $W^{(2)}$. At lowest order, one has to replace
$f(k^*)$ by $f_1(k^*)$.
Then, $W^{(2)}$ counts at $O(p^{3-2+2-2})=O(p)$ according to our power
counting. Of course, this counting concerns the non-analytic piece
of $W^{(2)}$ only. Furthermore, taking $f_2(k^*)$ instead of $f_1(k^*)$,
we get the non-analytic piece starting at $O(p^3)$, and so on.
Imagine now that we have only one coupling $H_0$ at our disposal that counts
at $O(p^0)$. Adjusting this single coupling, one can achieve
$\mbox{Re}\,W^{(2)}=O(p)$ if $f_1(k^*)$ is used, since the non-analytic
piece starts at $O(p)$. If $f_2(k^*)$ is used, the non-analytic piece starts
only at $O(p^3)$ and the leading contribution comes from
the analytic piece at $O(p^2)$, i.e., $\mbox{Re}\,W^{(2)}=O(p^2)$.
Using $f_3(k^*),\ldots$ in the calculations
does not lead to further improvement, since we do not have the
$H_2$ counterterm
at our disposal to remove the $O(p^2)$ piece. By the same token, using
$f_3(k^*)$ should be optimal in the case of two constants $H_0,H_2$.
In this case, $\mbox{Re}\,W^{(2)}=O(p^4)$ can be achieved.
Finally, we reiterate that the above discussion should be taken with a grain of salt as it is based on perturbation theory.
Hence, the counting rules, given above, can provide only a hint about the optimal number of subtractions in the non-perturbative case. We therefore conclude that it is important to numerically check the expectation, based on the above power counting, in non-perturbative calculations. This goal will be accomplished in the next section.
\section{Numerical test}
\label{sec:numerics}
In this section, we shall test the approach described above using explicit
nonperturbative calculations. In these calculations, a quantum-mechanical system of three identical
bosons, interacting pairwise through some model potential, will play the
role of an
exact underlying theory. The underlying theory, by definition, does not contain spurious poles. These
appear when one replaces the exact two-body amplitude in the Faddeev equations
by the effective-range expansion. Thus, one may check whether the results
obtained in our scheme do indeed converge to the known (exact) result, and estimate the rate of this convergence.
We will consider a Yamaguchi potential first and then
repeat this analysis for a Gauss potential.
\subsection{Yamaguchi model}
As mentioned above, we consider a toy model with three bosons of mass $m$, interacting through
the Yamaguchi potential~\cite{Yamaguchi:1954}, as the exact theory.
This potential is given by:
\begin{align}
V_Y(p,q)=\lambda \chi(p)\chi(q) \, ,\quad\quad \chi(q)=\frac{\beta^2}{\beta^2+q^2}.
\end{align}
Here, $\lambda$ denotes the strength of the potential, and $\beta$ is related to its
range. To connect the parameters of the Yamaguchi potential to the scattering length
$a$ and the effective range $r$, we calculate the two-body scattering amplitude:
\begin{eqnarray}
t_Y(p,q,z) =\chi(p)d_Y(z)\chi(q)\, ,\quad\quad
d_Y(z)=\left[\frac{1}{\lambda}-\int \frac{d^3{\bf q}}{(2\pi)^3}\, \frac{\chi^2(q)}
{z-E_q} \right]^{-1} \, .
\end{eqnarray}
The on-shell amplitude takes the form:
\begin{eqnarray}
t_Y(p,p,E_p)=\chi^2(p)\left[\frac{1}{\lambda}
-\frac{m\beta^3}{8\pi(p+i\beta)^2}\right]^{-1}\, ,
\label{twobodyscattering}
\end{eqnarray}
where $E_p=p^2/m+i\varepsilon$.
Expanding this amplitude and comparing the result to the effective-range expansion,
we obtain:
\begin{align}
-\frac{1}{a} =-\frac{\beta}{2}-\frac{4\pi}{\lambda m}\, ,
\quad\quad
r =\frac{1}{\beta}-\frac{16\pi}{\lambda\beta^2 m}\,.
\end{align}
In the numerical calculations, the values of $a$ and $r$ are chosen to be equal\footnote{Of course, here we study three
bosons, so the parameter choice is not directly linked to any real physical problem. However,
the generalization of our method to the case of particles with spin is straightforward.}
to the $np$-triplet scattering parameters
$a=5.4194\text{ fm}$ and $r=1.7563\text{ fm}$. Given this input, one can fix
the parameters of the original Yamaguchi potential. This results in
$\lambda=-0.00013~\text{MeV}^{-2}$ and
$\beta=278.8~\text{MeV}$. The mass $m$ is chosen equal to the proton mass. We shall use these values in the following. For this choice of
parameters, a stable dimer with the mass $M_d=2m-E_d$ emerges.
The binding energy of the dimer, $E_d\simeq 2.22~\mbox{MeV}$, coincides with
that of the deuteron.
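The matching just described is easy to reproduce. In the sketch below (our own), $\beta$ is obtained from the quadratic equation $r\beta^2-3\beta+4/a=0$ that follows after eliminating $\lambda$ from the two matching relations (we take the larger root, which corresponds to the quoted parameter set), and the dimer binding energy is then recomputed from $\gamma$.

```python
import numpy as np

# Matching the Yamaguchi parameters (lambda, beta) to (a, r); our own sketch.
# Eliminating lambda gives r = 3/beta - 4/(a beta^2), i.e. a quadratic
# equation for beta; the larger root reproduces the quoted values.
hbarc = 197.3269788
m = 938.272                    # proton mass, MeV
a = 5.4194 / hbarc             # MeV^-1
r = 1.7563 / hbarc             # MeV^-1

beta = (3.0 + np.sqrt(9.0 - 16.0 * r / a)) / (2.0 * r)
lam = 4.0 * np.pi / (m * (1.0 / a - beta / 2.0))

# round trip: the input ERE parameters are reproduced
a_back = 1.0 / (beta / 2.0 + 4.0 * np.pi / (lam * m))
r_back = 1.0 / beta - 16.0 * np.pi / (lam * beta**2 * m)
assert np.isclose(a_back, a) and np.isclose(r_back, r)

# dimer binding energy from gamma = sqrt(-m lambda beta^3/(8 pi)) - beta
gamma = np.sqrt(-m * lam * beta**3 / (8.0 * np.pi)) - beta
E_d = gamma**2 / m             # MeV, deuteron-like
```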
In the three-body sector, the model does not contain a three-body force.
This is a valid choice since all integrals are convergent at the upper limit (the parameter
$\beta$ plays the role of an ultraviolet cutoff).
The equation for the particle-dimer scattering amplitude $M_Y(k,p,E)$ takes
the form (see, e.g., the textbook by Schmid and Ziegelmann~\cite{Schmid}):
\begin{align}
\begin{split}
M_Y(k,p,E)&=2Z_Y(k,p,E)+\frac{1}{\pi^2}\int dq\,q^2 Z_Y(k,q,E)\tau_Y(q,E)M_Y(q,p,E)\, ,
\label{FaddeevYamaguchi}
\end{split}
\end{align}
where the dimer propagator $\tau_Y(q,E)$ is given by:
\begin{align}
\label{eq:convention}
\begin{split}
\tau_Y(q,E)&=d_Y(z)\biggr|_{z=3q^2/(4m)-E-i\varepsilon}
\\[2mm]
&=\frac{8\pi}{m\beta^3}\frac{(\beta+\gamma)^2(\beta+\sqrt{3q^2/4-mE})^2}{2\beta+\gamma+\sqrt{3q^2/4-mE}}\frac{1}{\gamma-\sqrt{3q^2/4-mE}},
\end{split}
\end{align}
with $\gamma=\sqrt{-m\lambda\beta^3/(8\pi)}-\beta=\sqrt{mE_d}$, and the convention
$E\to E+i\varepsilon$ is implicit everywhere in Eq.~(\ref{eq:convention}).
The one-particle exchange potential $Z_Y(p,q,E)$ in the Yamaguchi model
can be written down in the following form:
\begin{align}\label{eq:ZY}
\begin{split}
Z_Y(p,q,E)&=\frac{1}{2}\int_{-1}^1 d\cos\theta_{p,q}\frac{\chi({\bf q}+1/2\,{\bf p})
\chi(-{\bf p}-1/2\,{\bf q})}{E-{\bf p}^2/(2m)-{\bf q}^2/(2m)
-({\bf p}+{\bf q})^2/(2m)}\\[2mm]
&=\frac{m}{2}\int_{-1}^1 du\,\frac{\beta^2}{\beta^2+p^2/4+q^2+pqu}\,
\frac{\beta^2}{\beta^2+p^2+q^2/4+pqu}\\[2mm]
&\times \frac{1}{mE-p^2-q^2-pqu}\, .
\end{split}
\end{align}
The calculation of the amplitude $M_Y(k,p,E)$ can be carried out by using standard
numerical procedures. Namely, we use a large momentum cutoff $\Lambda=1500\,\text{MeV}$
to approximate the integral (the presence of the
cutoff is not critical since, as stated above, the integral converges even in its
absence).\footnote{Above the breakup threshold, one could
perform a contour rotation in the integral, or use some other technique, in order
to circumvent the (integrable) logarithmic singularity of the one-particle exchange potential which hits
the contour. For simplicity, we will not treat the integrable singularity in any special way. The emerging numerical irregularities are small and do not affect our conclusions.}
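The numerical solution of such an integral equation proceeds by discretizing the momentum integral on a quadrature grid and solving the resulting linear system (the Nystr\"om method). The sketch below illustrates this on a toy equation with a separable kernel whose exact solution is known; the kernel, the interval and the function names are illustrative stand-ins, not the actual $Z_Y$ and $\tau_Y$ of this work.

```python
import numpy as np

def solve_nystrom(inhom, kernel, n=40, a=0.0, b=1.0):
    """Solve M(x) = inhom(x) + int_a^b K(x,t) M(t) dt on a Gauss-Legendre grid."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)   # nodes mapped to [a, b]
    w = 0.5 * (b - a) * w                   # weights mapped to [a, b]
    K = kernel(t[:, None], t[None, :])      # K[i, j] = K(t_i, t_j)
    A = np.eye(n) - K * w[None, :]          # linear system (1 - K W) M = inhom
    return t, np.linalg.solve(A, inhom(t))

# toy separable kernel K(x,t) = x*t with inhomogeneity 1:
# the exact solution on [0,1] is M(x) = 1 + 0.75*x
t, M = solve_nystrom(np.ones_like, lambda x, tt: x * tt)
```

In the actual calculation, the kernel on the grid would be $Z_Y(k,q,E)\tau_Y(q,E)\,q^2/\pi^2$ with the inhomogeneity $2Z_Y(k,p,E)$ and the grid cut off at $\Lambda$.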
In the model, the particle-dimer scattering phase shift
$\delta_Y(p)$ is defined according to:
\begin{eqnarray}
M_Y(p,p,E_p)=-\frac{3m\beta^3}{8\gamma(\beta+\gamma)^3}
\,\,\frac{1}{p\cot\delta_Y(p)-ip}\, .
\end{eqnarray}
As already mentioned above, below the dimer breakup threshold, $E<0$, the phase shift $\delta_Y(p)$ is real, in accordance with unitarity. Note also that, in order to ease notations, we
did not choose the same normalization for the amplitudes $M$ and $M_Y$. This does not
cause a problem, since it is the particle-dimer phase shifts that are compared, and these
are independent of the chosen normalization.
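Given the on-shell amplitude, the (generally complex) phase shift follows by inverting the defining relation above. A minimal sketch, with a hypothetical constant `norm` standing in for the prefactor $-3m\beta^3/(8\gamma(\beta+\gamma)^3)$:

```python
import cmath

def phase_shift(M_onshell, p, norm):
    """Invert M = -norm / (p*cot(delta) - i*p) for the complex phase shift."""
    pcotd = -norm / M_onshell + 1j * p
    return cmath.atan(p / pcotd)      # principal branch suffices here

# round trip: build an amplitude from a known phase shift and recover it
delta_true = 0.3 + 0.05j              # illustrative complex phase (radians)
p, norm = 50.0, 1.0                   # 'norm' is a hypothetical normalization
M = -norm / (p / cmath.tan(delta_true) - 1j * p)
delta = phase_shift(M, p, norm)
```

Since the normalization constant cancels in $p\cot\delta$, the extracted phase shift is indeed independent of it.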
\subsection{Matching of the EFT framework}
As stated before, our aim is to compare the solution of the Faddeev equation $M_Y(k,p,E)$
with the solution of Eq.~(\ref{ChangedFaddeev}), where
$W({\bf{p}},{\bf{q}},E)=Z({\bf{p}},{\bf{q}},E)$ is assumed.
In the calculations,
again the hard cutoff is imposed, and two values $\Lambda=250\,\mbox{MeV}$
and $\Lambda=600\,\mbox{MeV}$ are used. Note that, in this case, the cutoff plays a
crucial role as a regulator, since the momentum integrals are otherwise divergent.
\begin{sloppypar}
Owing to the initial choice of the parameters, both propagators $\tau_Y$ and
$\tau$ have a pole at the deuteron energy $E_d=k_1^2/m$,
corresponding to $k_1\simeq 46\,\mbox{MeV}$.
For the given choice of parameters,
the quantity $\tau$ exhibits a second, spurious pole at
$k_2\simeq 179\,\mbox{MeV}$ as well, whereas in $M_Y(k,p,E)$, such a pole is absent.
In order to apply our method,
we define the subtracted propagators $\tau_i(k^*)=\tau(k^*)-f_i(k^*)$, where $i$ denotes
the number of subtractions. Thus,
\end{sloppypar}
\begin{align}
\begin{split}
\tau_1(k^*)&=\frac{2(k_2+k_1)/r}{(k_2-k_1)(k^*+k_2)(k^*-k_1)}+\frac{4k_2/r}{(k_2-k_1)k_2^2}\,,\\[2mm]
\tau_2(k^*)&=\frac{2(k_2+k_1)/r}{(k_2-k_1)(k^*+k_2)(k^*-k_1)}+\frac{4k_2/r}{(k_2-k_1)k_2^2}\left\{1+\frac{k^{*2}}{k_2^2}\right\}\,,\\[2mm]
\tau_3(k^*)&=\frac{2(k_2+k_1)/r}{(k_2-k_1)(k^*+k_2)(k^*-k_1)}+\frac{4k_2/r}{(k_2-k_1)k_2^2}\left\{1+\frac{k^{*2}}{k_2^2}+\frac{k^{*4}}{k_2^4}\right\}\, .
\end{split}
\label{eq:Tau}
\end{align}
Note also that, for the remaining (shallow) pole in $\tau_i(k^*)$, the prescription
$k^*\to k^*-i\varepsilon$ is implicit in all above expressions. This corresponds to
$E\to E+i\varepsilon$.
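The curly brackets in Eq.~(\ref{eq:Tau}) are partial sums of a geometric series in $k^{*2}/k_2^2$, i.e., successive orders in the Taylor expansion of the spurious-pole factor; the expansion converges only for $|k^*|<k_2$. A quick numerical check of this structure (purely illustrative, not part of the actual calculation):

```python
# partial sums of the geometric series 1 + x^2 + x^4 + ... with x = k*/k2
def bracket(x, n_terms):
    return sum(x ** (2 * j) for j in range(n_terms))

x = 0.3                              # |k*| well below the spurious pole k2
resummed = 1.0 / (1.0 - x ** 2)      # summing all terms restores a pole at k* = k2
errors = [resummed - bracket(x, n) for n in (1, 2, 3)]   # tau_1, tau_2, tau_3
```

The truncation error of the $n$-term sum is $x^{2n}/(1-x^2)$, so each additional subtraction helps only while $k^*$ stays well below $k_2$.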
Various approximations, which can be constructed within our approach, differ in
a) the order of the effective range expansion and the number of three-body couplings
$H_0,H_2,\ldots$ used, and b) the number of terms retained in the subtracted propagators $\tau_i(k^*)$. The calculations are done at leading order (LO), next-to-leading order (NLO) and next-to-next-to-leading order (N$^2$LO) in pionless EFT.
According to the standard power counting in the two- and three-body sectors, the following parameters appear:
\begin{table}[H]
\begin{center}
\begin{tabular}{c|c|c}
Order & 2-body parameters & 3-body parameters\\
\hline
LO & $a$ & $H_0$ \\
NLO & $a,\,r$ & $H_0$ \\
N$^2$LO & $a,\,r$ & $H_0,\,H_2$
\end{tabular}
\end{center}
\caption{Appearance of 2- and 3-body parameters per order of the EFT power counting.}
\end{table}
Next, we briefly discuss the matching of the low-energy couplings $H_0,H_2$. If there
is only one three-body coupling present, as at LO and NLO, it is most convenient to
determine it from matching at threshold. For technical reasons, we perform
matching of the particle-dimer scattering phases in two theories $p\cot\delta_Y(p)$ and
$p\cot\delta(p)$ at small, but non-zero value of the momentum
$p=0.001 \text{ MeV}$. When the second coupling $H_2$ is present (N$^2$LO), it would be
natural to match in addition the first derivative of the function $p\cot\delta(p)$ at threshold.
Equivalently, one could match the value of the function $p\cot\delta(p)$ at some value
of $p$ above threshold. We have opted for the second option, because it
is easier to implement in our numerical algorithm, and have chosen the value
of the second matching momentum $p=10\,\text{ MeV}$, which is still quite close to
threshold.
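Determining $H_0$ from the threshold matching amounts to a one-parameter root finding: tune the coupling until the EFT value of $p\cot\delta(p)$ at the matching momentum equals the model value. A schematic sketch with a toy, linear stand-in for the EFT phase (the real calculation solves Eq.~(\ref{ChangedFaddeev}) at each trial value of $H_0$, so the function below is a placeholder):

```python
def match_coupling(pcotd_eft, target, h_lo, h_hi, tol=1e-10):
    """Bisect in the three-body coupling H until the EFT reproduces the
    model value of p*cot(delta) at the matching momentum."""
    f = lambda h: pcotd_eft(h) - target
    assert f(h_lo) * f(h_hi) < 0, "bracket must straddle the matching point"
    while h_hi - h_lo > tol:
        mid = 0.5 * (h_lo + h_hi)
        if f(h_lo) * f(mid) <= 0:
            h_hi = mid
        else:
            h_lo = mid
    return 0.5 * (h_lo + h_hi)

# toy stand-in: near threshold, p*cot(delta) responds smoothly to H0;
# a linear dependence is assumed here purely for illustration
H0 = match_coupling(lambda h: -36.4 + 2.0 * h, target=-40.0,
                    h_lo=-10.0, h_hi=10.0)
```

With two couplings present, the same procedure is applied at the two matching momenta in turn.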
Below, we shall discuss the matching condition briefly. First,
note that the values of the couplings $H_0$ and $H_2$, in addition
to the cutoff $\Lambda$, depend on the number of the retained terms in the
Taylor-expansion of the spurious pole (this latter dependence is not present in LO,
because there are no spurious poles at this order). Further,
it is seen that the results of the matching for $H_0$ do not depend on whether
$H_2$ is included or not. This follows from the fact that the contribution from $H_2$
is multiplied by a factor $(mE+\gamma^2)$ (see Eq.~(\ref{eq:ZE})),
which exactly vanishes at the
particle-dimer threshold. This is seen in Table~\ref{ValuesH}, which
summarizes our final results of the matching of $H_0,H_2$.
\begin{table}[t]
\begin{center}
\begin{tabular}{cc|cccc}
&$\tau_i$ &$H_0(\Lambda=250)$&$H_2(\Lambda=250)$&$H_0(\Lambda=600)$&$H_2(\Lambda=600)$\\ \hline
LO& &-5.30&&0.40&\\
\hline
\multirow{3}{*}{NLO \& N$^2$LO}
&$\tau_1$&-0.82&0.25&0.86&-8.20\\
&$\tau_2$&-1.17&0.63&-1.11&2.01\\
&$\tau_3$&-1.31&0.84&7.41&2223.\\
\end{tabular}
\end{center}
\caption{The three-body couplings $H_0$ and $H_2$ for the
different values of the cutoff $\Lambda$, and different number of subtractions
in the propagator $\tau(k)$ (no subtraction is needed at LO). All quantities are given in MeV units. The values of $H_0$ are the same at NLO and N$^2$LO, whereas $H_2=0$ at NLO.}
\label{ValuesH}
\end{table}
\subsection{Numerical results for the phase shift}
\begin{figure}[htb]
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaRE.pdf}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaIM.pdf}}
\caption{Numerical results for real (left) and imaginary (right) part of the particle-dimer phase shift $\delta$ for the Yamaguchi model. Red line: the result obtained in the Yamaguchi model; in purple dotted: the LO result; in black dashed: the NLO result for $\tau_1$; in gray dot-dashed: the N$^2$LO result for $\tau_2$. For the real part the NLO and N$^2$LO results are on top of the Yamaguchi model. For the imaginary part the N$^2$LO results are on top of the model. The cut-off was set to the value $\Lambda=250~\text{MeV}$.}
\label{YamaguchiDelta}
\end{figure}
To begin with, we calculate the particle-dimer scattering phase shift $\delta$ in the toy model with the Yamaguchi potential, and in the effective theory, amended by our prescription for treating the spurious poles. As mentioned above, we in fact have to deal with two different expansions: the EFT expansion (i.e., including more derivative terms in the Lagrangian, accompanied by independent couplings), and the Taylor expansion of the spurious pole. The convergence of these expansions needs to be investigated separately.
Since it turns out to be the most efficient choice, we use the subtracted propagators $\tau_1(k^*)$ and $\tau_2(k^*)$ in the calculations at NLO and N$^2$LO, respectively. Recall that at LO, no subtraction is needed.
Note also that this choice differs from our perturbative estimate in
Sec.~\ref{sec:formalism} by one order.
The other possible choices of $\tau_i(k^*)$ at NLO and N$^2$LO,
including the one based on perturbation theory, are discussed below.
The real part of the results of these calculations is shown in the left panel of Fig.~\ref{YamaguchiDelta}. It is seen that LO is precise only at small momenta, whereas NLO describes the data at much higher values of $p$. The situation further improves at N$^2$LO, albeit this improvement is very small (practically invisible to the naked eye). In the right panel of Fig.~\ref{YamaguchiDelta} the imaginary part of $\delta$ is shown. Again, the NLO and N$^2$LO results describe the model better than LO, and the N$^2$LO results are clearly improved compared to NLO.
The errors of the EFT calculation for $p>1/a$ can be estimated as
$(p/\Lambda)^{n+1}$ at N$^n$LO. A more detailed evaluation of the EFT
errors is presented in the discussion of possible choices for $\tau_i(k^*)$
below.
Up to now, everything follows the standard EFT pattern. However, in order to answer the question, whether a systematic improvement is achieved in higher orders, as well as to address the subtraction of the spurious pole, a more elaborate study of the problem is necessary. To this end, it is convenient to use the so-called Lepage plots, which will be considered below.
\subsection{Lepage plots and consistency assessment}
Lepage \cite{Lepage:1997} has proposed a method that allows one to check
how well the data are described by an EFT. The method makes use of certain
double-logarithmic plots, known as the Lepage plots.
Grie{\ss}hammer~\cite{Griesshammer:2020} has suggested
to verify the internal consistency of an EFT along a similar pattern.
In the following,
we shall adapt these methods for the problem we are working on.
Let us consider an EFT describing the fundamental theory up to order $n$.
The corrections are of the order $[(k_{typ},p)/\Lambda_b]^{n+1}$, where
$k_{typ}\sim 1/a$ is a typical momentum in the reaction and $\Lambda_b$ is
the breakdown scale of an EFT. For an arbitrary observable, and, in particular,
for the three-body phase-shift $p \cot (\delta)$, we have:
\begin{align}
\frac{p \cot (\delta_{Data})-p \cot (\delta_{EFT})}{p \cot (\delta_{Data})}& = c \left(\frac{(k_{typ},p)}{\Lambda_b}\right)^{n+1-\eta}+\cdots\, .
\end{align}
This means that
\begin{align}
\ln\left[\frac{p \cot (\delta_{Data})-p \cot (\delta_{EFT})}{p \cot (\delta_{Data})}\right] & \approx c'+ (n+1-\eta) \, \ln\left[\frac{p}{\Lambda_b}\right]=c''+ (n+1-\eta) \, \ln\left[p\right]\, .
\end{align}
Here, $c$, $c'$ and $c''$ stand for some constants. The quantity $\eta$ describes the corrections due to the denominator. It is also assumed that $k_{typ}\ll p$; this is discussed below. Hence, the slope in a double-logarithmic plot gives the order $n$ of the neglected term.
To determine this slope, a linear function can be fitted to the numerical results.
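The fit itself is an ordinary straight-line fit in the double-logarithmic variables. A sketch with synthetic residuals following an exact power law (all numbers are illustrative):

```python
import numpy as np

# synthetic residuals falling off as a pure power of p
p = np.linspace(42.0, 55.0, 30)          # "window of opportunity" in MeV
slope_true = 4.0                         # illustrative exponent n + 1 - eta
resid = 0.1 * (p / 250.0) ** slope_true  # relative difference of two phase shifts

# the slope of a straight-line fit in log-log variables recovers the exponent
slope_fit, intercept = np.polyfit(np.log(p), np.log(resid), 1)
```

For real data, the residuals deviate from a pure power law, which is one source of the fit uncertainty in the slopes quoted below.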
Further, one may check the internal consistency of
an EFT without comparing to data at all~\cite{Griesshammer:2020}.
Instead, one can compare the results of
calculations within the same EFT, at two different values of the ultraviolet cutoff
$\Lambda_1$ and $\Lambda_2$,
\begin{align}
\begin{split}
\frac{p \cot (\delta_{EFT(\Lambda_2)})-p \cot (\delta_{EFT(\Lambda_1)})}{p \cot (\delta_{EFT(\Lambda_2)})}&= c(\Lambda_1,\Lambda_2,k_{typ},p,\Lambda_b) \left(\frac{(k_{typ},p)}{\Lambda_b}\right)^{n+1-\eta}+\cdots
\end{split}
\label{slope}
\end{align}
Here $c(\Lambda_1,\Lambda_2,k_{typ},p,\Lambda_b)$ is a slowly varying function of $k_{typ}$ and $p$.
Further, the parameter $\eta$ describes the dependence
of $p \cot (\delta_{EFT(\Lambda_2)})$ on $p$ at LO and will be determined from the fit
at LO. The slope in a double-logarithmic plot is, approximately, $n+1-\eta$. Note that $\eta$ in the consistency assessment and in the Lepage plots
does not have to be the same.
Since $k_{typ}$ is not uniquely determined and the double expansion in
${k_{typ}}/{\Lambda_b}$ and ${p}/{\Lambda_b}$ complicates the analysis,
it is very useful to stick to the region,
\begin{eqnarray}
k_{typ}\ll p\ll\Lambda_b\sim\Lambda\,.
\label{Window}
\end{eqnarray}
Moreover, we choose the cutoff $\Lambda$ of the order of the breakdown scale
$\Lambda_b$ to simplify the analysis.
In this region, termed the ``window of opportunity'',
the dependence on $k_{typ}$ should disappear (recall that, in our case,
$k_{typ}=1/a$). On the other hand,
one cannot use too large values of the variable $p$, of the order of the hard scale $M_{high}$ of the theory,
determined by the effective range and/or ultraviolet cutoff.
Hence, ensuring that one can reliably
determine the slopes from the fits in the ``window of opportunity'' is a non-trivial exercise. For example, in Fig.~\ref{YamaguchiDeltaLepage} we see that a spike appears around $80$ MeV. It is caused by $Re[\delta]=0$ (compare with Fig.~\ref{YamaguchiDelta}) in the denominator. This spike would distort the slope in its vicinity, so the ``window of opportunity'' is restricted to lie below it. Accordingly, we choose the window between 42 MeV and 55 MeV for the $\delta$-slopes.
\begin{figure}[htb]
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaLepage.pdf}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaConsistency.pdf}}
\caption{Lepage plot (left) and consistency assessment (right) for the particle-dimer phase shift in the Yamaguchi model. The ``window of opportunity'' is chosen between 42 MeV and 55 MeV for all orders (gray shaded region). The spike around 80 MeV, caused by the zero of $\delta$ (Fig.~\ref{YamaguchiDelta}), limits us to the low-energy region. Note that the LO result does not predict this zero; therefore, the spike is not visible in the consistency assessment at LO. For the Lepage plot, the results are divided by the Yamaguchi results, so the spike is seen at all orders. As expected, the slope increases by approximately one unit per order. The deviant value for N$^2$LO $\tau_2$ is due to the accidental zero around 30 MeV (change of sign), compare with \cite{Griesshammer:2020}.}
\label{YamaguchiDeltaLepage}
\end{figure}
\begin{table}[htb]
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{l|lll}
slope fit& LO & NLO & N$^2$LO\\ \hline
no sub. & 2.7 & &\\
$\tau_1$ && 3.4 & 4.4\\
$\tau_2$& & 3.6 & 4.7\\
$\tau_3$& & 3.6 & 5.0\\
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{l|lll}
slope fit & LO & NLO & N$^2$LO\\ \hline
no sub. &3.0&&\\
$\tau_1$ && 3.8 & 5.3\\
$\tau_2$& & 4.0 & 7.2*\\
$\tau_3$& & 4.0 & 4.9\\
\end{tabular}
\end{minipage}
\caption{Results for the slopes of the particle-dimer phase shift $\delta$, fitted in the ``window of opportunity'' for the Yamaguchi model. The uncertainty in the slopes is about 10\%.
Left: Lepage plot; right: consistency assessment. The value marked with an asterisk (*) is unnaturally large due to an accidental zero, compare with Fig.~\ref{YamaguchiDeltaLepage} (right).}
\label{SlopeYamaguchiLepage}
\end{table}
We start with the slope fits. Using the subtracted propagators $\tau_1(k^*)$ and $\tau_2(k^*)$ at NLO and N$^2$LO, respectively, we analyze the results for the real part of the particle-dimer phase shift $Re[\delta]$. The plots are shown in Fig.~\ref{YamaguchiDeltaLepage}. The slopes increase order by order, as expected, both for the Lepage plots (left) and for the consistency assessment (right). The expected increase is exactly one per order. The left part of Table \ref{SlopeYamaguchiLepage} shows the slopes for the Lepage plot, and the right part those for the consistency assessment.
The slopes for other choices of $\tau_i(k^*)$ are also included.
By varying the ``window of opportunity'' slightly, we estimate the uncertainty in determination of these slopes
from the fit at about 10\%.
Note that the value of $\eta$ cannot be predicted \cite{Griesshammer:2020}; it is determined by the slope of the LO results.\footnote{This is also the reason why the values for $\delta$ differ from the values for $k\cot\delta$ given in Table \ref{Slope}.} It can be seen that all results approximately agree with the predicted increase. Note that the result for N$^2$LO with $\tau_2(k^*)$ in the consistency assessment is an exception, owing to the accidental zero (compare with the discussion of Fig. \ref{YamaguchiDeltaLepage}). The values for N$^2$LO using $\tau_1(k^*)$ or $\tau_3(k^*)$ are close to the expected value of 5; the corresponding graphs do not exhibit the accidental zero.
Taking into account the 10\% uncertainty in the determination of
the slope,
the results in Table \ref{SlopeYamaguchiLepage} show that using
$\tau_2$ and $\tau_3$ at NLO leads to no significant improvement of the slope
compared to $\tau_1$. This provides a justification for our choice of using
$\tau_1$ at NLO. Since we have one more constant, $H_2$, at our disposal
at N$^2$LO, one more subtraction can be accommodated. This motivates our use of
$\tau_2$ instead of $\tau_1$ at N$^2$LO despite the insignificant improvement
in the slope.
\begin{table}[htb]
\begin{center}
\begin{tabular}{l|lll}
slope & LO & NLO & N$^2$LO\\ \hline
fit~\cite{Griesshammer:2020} & 1.9 & 2.9 & 4.8\\
our fit, no sub. & 1.8 & & \\
our fit, $\tau_1$ & & 2.8 & 4.6\\
our fit, $\tau_2$& & 2.9 & 6.1*\\
our fit, $\tau_3$& & 2.8 & 3.6\\
\end{tabular}
\end{center}
\caption{Slope fits for $k \cot \delta$ in the consistency assessment for
the Yamaguchi model. The ``window of opportunity'' was chosen between $42$ MeV and $55$ MeV.
The uncertainty in the slopes is about 10\%.
Shown are the fits to the results for $Re[p\cot \delta]$.
The value marked with an asterisk (*) is unnaturally large due to an accidental zero.}
\label{Slope}
\end{table}
Additionally, we have repeated the same analysis for $k \cot \delta$ instead of
the phase shift $\delta$ (the observable considered in Ref.~\cite{Griesshammer:2020}).
The extracted slopes in Table~\ref{Slope}
are again consistent with our choice
$\tau_1(k^*)$ and $\tau_2(k^*)$ in the calculations at NLO and N$^2$LO,
respectively.
To summarize, solving the scattering equation for the particle-dimer amplitude in EFT,
while treating the spurious pole as proposed above, we have explicitly demonstrated
that the numerical solution systematically converges to the exact result, obtained in the
Yamaguchi model, which does not contain spurious poles. Moreover, the pattern of this
convergence, in general, follows the theoretical predictions. Hence, the theoretical
construction of Sect.~\ref{sec:formalism} has been verified.
\subsection{Order of the subtraction polynomial and numerical results}
\begin{figure}[htb]
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaRENLO.pdf}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaIMNLO.pdf}}
\hfill
\caption{Real (left) and imaginary (right) part of the particle-dimer phase shift $\delta$ calculated for the Yamaguchi model and the EFT at NLO for different numbers of subtractions in the propagator $\tau_i(k^*)$. The uncertainty bands are estimated by a naive power counting of the EFT error, given by $Re[\delta]\,\left(p/\Lambda\right)^2$ and $Im[\delta]\,\left(p/\Lambda\right)^2$.
}
\label{YamaguchiDeltaNLO}
\end{figure}
In the last subsection we have focused on the consistency and model description of the EFT-expansion. We have provided some evidence for our choice
$\tau_1(k^*)$ and $\tau_2(k^*)$ in the calculations at NLO and N$^2$LO,
respectively, based on the behavior of the slopes in Lepage and consistency
plots. In this subsection the optimal order of the subtraction polynomial is investigated further, providing additional justification for the choice done earlier. Namely, the numerical calculations discussed in the last sections are repeated for different orders, which means different choices of $\tau_i(k^*)$ as defined in equation (\ref{eq:Tau}). In the left part of Fig. \ref{YamaguchiDeltaNLO} the EFT results at NLO for different $\tau_i(k^*)$ are compared with the Yamaguchi model for the real part of $\delta$. It becomes clear that $\tau_1(k^*)$ describes the model the best. Further subtractions do not improve the reproduction of the model, they actually make it worse.
This means that one subtraction seems to be optimal. The right part of Fig. \ref{YamaguchiDeltaNLO} shows the corresponding imaginary part; here, a tiny improvement from $\tau_1(k^*)$ to $\tau_2(k^*)$ is visible. However, this happens only at very large values of the momentum $p$, and the improvement is well below the expected EFT accuracy. The results for $\tau_1(k^*)$ agree with the Yamaguchi model everywhere within the EFT uncertainty.
There is no improvement from $\tau_2(k^*)$ to $\tau_3(k^*)$ at all. To conclude, at NLO the phase shift is described most accurately using $\tau_1(k^*)$.
This choice is consistent with the slopes for the Lepage plots and the consistency assessments shown in Tables \ref{SlopeYamaguchiLepage} and \ref{Slope}, where all slopes at NLO agree within $10\%$. Therefore, we choose the minimal number of subtractions in the following, i.e., $\tau_1(k^*)$ at NLO.
\begin{figure}[htb]
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaREN2LO.pdf}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{YamaguchiDeltaIMN2LO.pdf}}
\hfill
\caption{Real (left) and imaginary (right) part of the particle-dimer phase shift $\delta$ calculated for the Yamaguchi model and the EFT at N$^2$LO for different numbers of subtractions in the propagator $\tau_i(k^*)$. The uncertainty bands are estimated by a naive power counting of the EFT error, given by $Re[\delta]\,\left(p/\Lambda\right)^3$ and $Im[\delta]\,\left(p/\Lambda\right)^3$.
}
\label{YamaguchiDeltaN2LO}
\end{figure}
In Fig. \ref{YamaguchiDeltaN2LO} the phase shift is shown at N$^2$LO. The results are similar to the NLO case: the real part is described best by $\tau_1(k^*)$, while the difference between $\tau_{2,3}(k^*)$ and the model is larger. However, the effect is small at N$^2$LO, since all three choices of $\tau_i(k^*)$ agree, within a power-counting estimate of the EFT uncertainty, in a wide interval that includes the window of opportunity.\footnote{We estimate our {\em relative}
error as $(p/\Lambda)^n$, with $n=2,3$ at NLO and N$^2$LO,
respectively. The {\em absolute} error in the real part then vanishes
at the energy where $Re[\delta]=0$, indicating the
natural limitations of such a crude estimate. In fact, one expects
that the absolute error does not change much in the interval considered.}
For the imaginary part, however, the improvement from $\tau_1(k^*)$ to $\tau_2(k^*)$ is large: the imaginary part of the model is reproduced better with $\tau_2(k^*)$ than with $\tau_1(k^*)$. Again, no improvement is seen from $\tau_2(k^*)$ to $\tau_3(k^*)$. Since the differences in the real part are not significant and the imaginary part clearly favors $\tau_2(k^*)$, we choose $\tau_2(k^*)$ for the N$^2$LO calculations.
\subsection{Yamaguchi model with different $r$}
\begin{table}[htb]
\begin{center}
\begin{tabular}{cc|cccc}
&$\tau_i$ &$H_0(\Lambda=250)$&$H_2(\Lambda=250)$&$H_0(\Lambda=600)$&$H_2(\Lambda=600)$\\ \hline
LO& &-14.31&&1.46&\\
\hline
\multirow{3}{*}{NLO \& N$^2$LO}
&$\tau_1$&3.79&11.12&-1.14&-0.42\\
&$\tau_2$&3.03&10.02&-1.55&0.76\\
&$\tau_3$&2.90&9.92&-1.72&1.52\\
\end{tabular}
\end{center}
\caption{The three-body couplings $H_0$ and $H_2$ for the
Yamaguchi model with $r=0.8768$~fm for
different values of the cutoff $\Lambda$, and different number of subtractions
in the propagator $\tau(k)$ (no subtraction is needed at LO). All quantities are given in MeV units. The values of $H_0$ are the same at NLO and N$^2$LO, whereas $H_2=0$ at NLO. The values for $H_0$ are fine-tuned at $p=0.5 \text{ MeV}$ and $H_2$ at $p=20 \text{ MeV}$.}
\label{ValuesHShifted}
\end{table}
\begin{figure}[htb]
\center
\includegraphics[width=0.6\linewidth]{YamaguchiShiftkcotdRE.pdf}
\caption{Numerical results for the real part of the quantity $p\cot\delta(p)$ for the
Yamaguchi model with $r=0.8768$~fm. Red line:
the result obtained in the model with Yamaguchi potential;
in purple dotted: the LO result; in black dashed: the NLO result for $\tau_1$; in gray dot-dashed: the N$^2$LO result for $\tau_2$. The cut-off was set to the value $\Lambda=250~\text{MeV}$.}
\label{YamaguchikcotdShift}
\end{figure}
The results in the previous subsections show a zero of the phase shift, $\delta=0$, around $p=80\text{ MeV}$ for the Yamaguchi model. As discussed above, this complicates the determination of the slope and limits the ``window of opportunity'' to low energies. To test our method at higher values of the window, a different choice of the effective range is investigated. We choose $r'=0.8768 \text{ fm}\,(=0.5\, r)$ and the same $a=5.4194 \text{ fm}$ as before. This moves the zero outside the considered energy region. In addition, the spurious pole is shifted to $k_2=410.149 \text{ MeV}$. The corresponding Yamaguchi parameters are $\lambda=-0.000049 \text{ MeV}^{-2}$ and $\beta=622.5$ MeV. The values of the three-body couplings are summarized in Table \ref{ValuesHShifted}. The results for the quantity $k\cot \delta$ are shown in Fig.~\ref{YamaguchikcotdShift}. The pole around $p=80$ MeV is no longer present. Everything else follows the pattern described for the Yamaguchi model with $r=1.7536$ fm: the description improves with increasing order of the EFT.
\begin{figure}[htb]
\center
\includegraphics[width=0.7\linewidth]{YamaguchiShiftkcotdLepage.pdf}
\caption{Lepage plot compared with the Yamaguchi model
with $r=0.8768$~fm for the quantity $p\cot\delta$. The ``window of opportunity'' (shaded in gray) is chosen to be between 75 MeV and 125 MeV for all orders.}
\label{YamaguchkcotdShiftLepage}
\end{figure}
Since the spike is shifted, we are able to choose higher values for the ``window of opportunity''. We choose it to lie between $75\text{ MeV}$ and $125 \text{ MeV}$. In Fig.~\ref{YamaguchkcotdShiftLepage} the Lepage plot is shown. The different orders of the EFT are clearly distinguished. The slopes increase order by order (see Table \ref{SlopeYamaguchiShift}),
although the increase in the Lepage plot from LO to NLO is slightly larger than expected. With the spike shifted, the slopes are stable under small changes of the ``window of opportunity''.
The general pattern thus behaves as expected and supports our assumptions.
\begin{table}[htb]
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{l|lll}
slope fit& LO & NLO & N$^2$LO\\ \hline
no sub. & 0.8 & &\\
$\tau_1$ && 3.2 & 4.6\\
$\tau_2$& & 3.0 & 4.4\\
$\tau_3$& & 3.1 & 4.4\\
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{l|lll}
slope fit & LO & NLO & N$^2$LO\\ \hline
no sub. &2.2&&\\
$\tau_1$ && 3.0 & 4.4\\
$\tau_2$& & 3.1 & 4.3\\
$\tau_3$& & 3.1 & 4.3\\
\end{tabular}
\end{minipage}
\caption{Results for the slopes of the quantity $p\cot\delta$ for the
Yamaguchi model with $r=0.8768$~fm, fitted in the ``window of opportunity''. Left: Lepage plot; right: consistency assessment.}
\label{SlopeYamaguchiShift}
\end{table}
\subsection{Gauss model}
To further check our results, we perform the same analysis for an additional model potential, namely a Gauss potential. For the Gauss model the regulator is given by
\begin{align}
\chi(p)=e^{-p^2/\lambda_G^2}\,.
\end{align}
Similarly to the Yamaguchi model (compare with Eq.~(\ref{twobodyscattering})), this leads to (for $E<0$)
\begin{align}
\begin{split}
d_G(E)^{-1}&=2\pi^2\biggl[\sqrt{mE_d}\exp\left(\frac{2mE_d}{\lambda_G^2}\right)\text{erfc}\left(\frac{\sqrt{2mE_d}}{\lambda_G}\right)-\sqrt{-mE}\exp\left(\frac{-2mE}{\lambda_G^2}\right)\text{erfc}\left(\frac{\sqrt{-2mE}}{\lambda_G}\right)\biggr]\\
&=2\pi^2\sqrt{mE_d}\exp\left(\frac{2mE_d}{\lambda_G^2}\right)\text{erfc}\left(\frac{\sqrt{2mE_d}}{\lambda_G}\right)+2\pi^2 ip-\frac{4\sqrt{2}\pi^{3/2}}{\lambda_G}p^2+O\left(p^3\right)\,.
\label{Gaussd}
\end{split}
\end{align}
To connect the EFT parameters to the model, we choose $\lambda_G$ to fulfill
\begin{align}
\frac{1}{a} =\sqrt{mE_d}\exp\left(\frac{2mE_d}{\lambda_G^2}\right)\text{erfc}\left(\frac{\sqrt{2mE_d}}{\lambda_G}\right)\, ,
\quad\quad
\frac{r}{2} =\frac{4\sqrt{2}\pi^{3/2}}{2\pi^2\lambda_G}\,.
\end{align}
For the scattering length $a=5.4194$ fm and the effective range $r=1.7536$ fm, this results in $\lambda_G=359.134$ MeV. In Eq.~(\ref{Gaussd}), the parameter $E_d$ is an input value of the Gauss model, so the two parameters $\lambda_G$ and $E_d$ are fixed by the two EFT parameters $a$ and $r$. Moreover, $E_d$ determines the position of the root of $d_G^{-1}(E)$ and is therefore the two-body binding energy. Thus, for the chosen values of $a$ and $r$, it can be identified with the binding energy of the deuteron, $E_d\approx 2.22 \text{ MeV}$.
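The first matching condition is transcendental in $\lambda_G$ and can be solved by bisection, since its left-hand side grows monotonically from $0$ to $\sqrt{mE_d}$ as $\lambda_G$ increases. A sketch (the mass and unit conventions used here are our assumptions, so the number obtained need not reproduce the quoted value digit by digit):

```python
import math

hbarc = 197.327                  # MeV fm (assumed conversion factor)
m = 938.272                      # MeV, proton mass as in the text
a = 5.4194 / hbarc               # scattering length in MeV^-1
Ed = 2.22                        # MeV, dimer binding energy
gamma = math.sqrt(m * Ed)        # binding momentum, ~46 MeV

def lhs(lam):
    """sqrt(m*Ed) * exp(2*m*Ed/lam^2) * erfc(sqrt(2*m*Ed)/lam)."""
    x = math.sqrt(2.0 * m * Ed) / lam
    return gamma * math.exp(x * x) * math.erfc(x)

# lhs grows monotonically with lam, so bisection on lhs(lam) = 1/a is safe
lo, hi = 50.0, 5000.0            # bracket in MeV; lhs(lo) < 1/a < lhs(hi)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) < 1.0 / a else (lo, mid)
lam_G = 0.5 * (lo + hi)          # Gaussian range parameter in MeV
```

A root exists whenever $1/a<\sqrt{mE_d}$, which is the case for the parameters above.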
The dimer-propagator $\tau(q,E)$ is given by
\begin{align}
\tau(q,E)=d_G(z)\bigr|_{z=3q^2/(4m)-E-i\varepsilon}\,.
\end{align}
In the numerical calculations, we use the unexpanded expression for $d_G(z)$ (the first line of Eq.~(\ref{Gaussd})).
The one-particle exchange
in the Gauss model, $Z_G(p,q,E)$, is given by a formula similar
to Eq.~(\ref{eq:ZY}).
To avoid numerical difficulties related to the poles of the
angular integral, the latter is calculated partly analytically and partly numerically; for details, see the appendix.
The three-body couplings are fine-tuned to reproduce the Gauss-model results at $p=0.001$ MeV for $H_0$ and at $p=10$ MeV for $H_2$. The values are listed in Table \ref{ValuesHGauss}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{cc|cccc}
&$\tau_i$ &$H_0(\Lambda=250)$&$H_2(\Lambda=250)$&$H_0(\Lambda=600)$&$H_2(\Lambda=600)$\\ \hline
LO& &4.35&&0.29&\\
\hline
\multirow{3}{*}{NLO \& N$^2$LO}&$\tau_1$&-0.90&1.99&0.57&37.87\\
&$\tau_2$&-1.23&2.07&-1.14&7.24\\
&$\tau_3$&-1.37&2.26&2.15&568.2\\
\end{tabular}
\end{center}
\caption{The three-body couplings $H_0$ and $H_2$ for the Gauss model and different values of the cutoff $\Lambda$. All quantities are given in MeV units. The values of $H_0$ are the same at NLO and N$^2$LO, whereas $H_2=0$ at NLO.}
\label{ValuesHGauss}
\end{table}
\begin{figure}[htb]
\subfigure{\includegraphics[width=0.49\linewidth]{GausskcotdRE.pdf}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{GausskcotdIM.pdf}}
\caption{Numerical results for the real (left) and imaginary (right) part of the quantity $p\cot\delta(p)$ in the Gauss model. Red line:
the result obtained in the model with Gauss potential;
in purple dotted: the LO result; in black dashed: the NLO result for $\tau_1$; in gray dot-dashed: the N$^2$LO result for $\tau_2$. The cut-off was set to the value $\Lambda=250~\text{MeV}$.}
\label{Gausskcotd}
\end{figure}
In Fig. \ref{Gausskcotd}, numerical results for the Gauss model and for the EFT at different orders are shown. It is seen that NLO and N$^2$LO describe the Gauss model clearly better than LO, with N$^2$LO also being better than NLO. The right panel shows the imaginary part; here, too, the EFT results improve order by order. It is useful to note that, since the parameters $\lambda$, $\beta$ of the Yamaguchi model in the previous subsections and $\lambda_G$, $E_d$ of the Gauss model here are fine-tuned to give the same $a$ and $r$, both models exhibit a pole of $p\cot\delta$ (a zero of $\delta$) around $80 \text{ MeV}$. This leads to the same problems for the EFT description of the Gauss model as before. The ``window of opportunity'' is again chosen between $42$ MeV and $55$ MeV.
\begin{figure}[htb]
\subfigure{\includegraphics[width=0.49\linewidth]{GausskcotdLepage.pdf}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{GausskcotdConsistency.pdf}}
\caption{Lepage plot (left) for the Gauss model and consistency assessment (right) for the quantity $p\cot\delta$. The ``window of opportunity'' (shaded in gray) is chosen to be between 42 MeV and 55 MeV for all orders. }
\label{GausskcotdLepage}
\end{figure}
In the Lepage plot in Fig.~\ref{GausskcotdLepage} (left) the different orders of the EFT separate nicely. The obtained slopes, shown in Table~\ref{SlopeGaussLepage}, increase as expected. Note that the values of the slopes are close to those obtained for the Yamaguchi model. However, the N$^2$LO results differ: not only is the increase of the slope from NLO to N$^2$LO larger than expected,\footnote{Expected is an increase of one; here the increase is around three.} the values are also larger than in the Yamaguchi case. This can be explained by the accidental zero at $p=33\text{ MeV}$. Similar to the results for the Yamaguchi model shown in the consistency assessment in Fig.~\ref{YamaguchiDeltaLepage} (right), for N$^2$LO using $\tau_2(k)$ the sign of the difference changes. In the consistency assessment in Fig.~\ref{GausskcotdLepage} (right) the slopes behave as expected and increase by approximately one per order.
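The slope extraction behind Table~\ref{SlopeGaussLepage} amounts to a straight-line fit on a double-log plot of the deviation between the exact and the EFT result inside the ``window of opportunity''. The snippet below is an illustrative sketch with synthetic data (not the actual model results): a residual that scales like $p^2$, as expected for an NLO-type truncation error, yields a fitted slope of~2.

```python
import numpy as np

def lepage_slope(p, exact, eft, window=(42.0, 55.0)):
    """Fit the slope of log10|exact - eft| vs log10(p) inside a momentum
    window.  An EFT truncated at a given order should show a residual
    scaling like a power of p, i.e. a straight line on a double-log plot."""
    mask = (p >= window[0]) & (p <= window[1])
    x = np.log10(p[mask])
    y = np.log10(np.abs(exact[mask] - eft[mask]))
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic check: a residual that scales exactly like p^2
p = np.linspace(10.0, 100.0, 200)      # momenta in MeV
exact = 1.0 + 1e-4 * p**2              # "exact" model result
eft_nlo = np.ones_like(p)              # truncation keeps only the constant
print(round(lepage_slope(p, exact, eft_nlo), 2))  # prints 2.0
```

The same fit applied to the actual curves in the window between 42 MeV and 55 MeV would reproduce the entries of the table.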
\begin{table}[htb]
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{l|lll}
slope fit & LO & NLO & N$^2$LO\\ \hline
no sub.&1.1&&\\
$\tau_1$ & & 2.0 & 5.6*\\
$\tau_2$& & 2.3 & 5.5*\\
$\tau_3$& & 2.5 & 5.3*\\
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{l|lll}
slope fit & LO & NLO & N$^2$LO\\ \hline
no sub. &2.0&&\\
$\tau_1$ & & 3.1 & 4.4\\
$\tau_2$& & 3.3 & 5.3\\
$\tau_3$& & 3.2 & 3.8\\
\end{tabular}
\end{minipage}
\caption{Results for the slopes of the real part of the quantity $k\cot \delta$ for the Gauss model, fitted in the ``window of opportunity''. Left: Lepage plot; right: consistency assessment. All N$^2$LO results for the Lepage plot, marked by an asterisk,
exhibit an accidental zero and are therefore unexpectedly large; compare Fig.~\ref{GausskcotdLepage}.}
\label{SlopeGaussLepage}
\end{table}
To conclude, the presented method for dealing with the unphysical pole $k_2$ can also be used to describe the Gauss model. The description improves order by order. The obtained slopes increase as expected, both in the Lepage plot and in the consistency assessment, and the discussed deviations are not caused by the method.
\section{Summary and Conclusions}
\label{sec:concl}
In this paper, a novel procedure for removing the contribution
from spurious poles in the three-body Faddeev equation
for pionless EFT has been proposed.
These poles emerge
in the two particle scattering amplitudes, which enter the three-body
integral equation. Although the spurious
poles appear below threshold, at energies where
the EFT treatment is no longer applicable, they still influence the
low-energy behavior of the particle-dimer (three-particle) amplitudes.
In the three-body integral equation the two-particle amplitudes are
evaluated at large negative energies because an
integration over all momenta is carried out. Furthermore, the
residue of these poles can have either sign, leading to problems
with three-particle unitarity at low energies.
In the literature, there exist different methods for treating spurious poles.
The most popular one is based on a strictly perturbative expansion
of the two-body amplitude in the range parameter(s)~\cite{Hammer:2001gh,Bedaque:1998km,Ji:2011qg,Ji:2012nj,Vanasse:2013sda}.
It will be, however, difficult to use this approach in a
finite volume for the extraction of the three-body observables
from lattice data. The reason for this is that the expansion diverges
in the vicinity of the two-particle energy levels in a finite box,
leading to more and more singular expressions at higher orders.
\begin{itemize}
\item[i)]
In the present paper, we propose a method which enables one to
circumvent this problem, expanding only the part of the two-body
amplitude that contains spurious poles. Such an expansion can be
systematically carried out. Furthermore,
in perturbation theory, the counting rules in the
underlying EFT are closely linked to the above-mentioned expansion
-- at a given order in the EFT counting, only the first few terms in this
expansion should be retained (the number is determined by the order
in the EFT expansion). Adding more terms in the expansion does not lead to an increased accuracy.
However, due to the non-perturbative character
of the three-body integral equation, the above counting can be regarded
merely as a rule of thumb, and the optimal number of subtractions
should be determined in actual calculations.
\item[ii)]
The proposal has been tested in numerical calculations in a toy
model, using Yamaguchi and Gauss potentials in the two-body sectors.
The results of the exact calculations have been confronted with the
results obtained within the EFT, matched to the model parameters
in the two- and three-body sectors. Moreover, the consistency
assessment has been carried out, comparing the EFT results in different
orders. As a result of these studies, a clear pattern emerges. The
agreement with the exact calculations
systematically improves at
higher orders. Already at N$^2$LO, the
exact results are reproduced very well. Moreover, expanding the
spurious pole part in the two-body amplitude,
it is seen that, after a few steps, the accuracy does
not further increase when more terms are subtracted.
This is fully in line with our expectations. The optimal
number of the subtraction terms is slightly lower than the expectation
from perturbation theory. This is not entirely surprising, bearing in
mind the non-perturbative character of the three-body problem at hand.
\item[iii)]
It would be extremely interesting to reformulate the three-body
quantization condition in a finite volume as given, e.g.,
in~\cite{Hammer:2017uqm,Hammer:2017kms,Hansen:2014eka,Hansen:2015zga,Mai:2017bge} along
similar lines. We leave this application for a future publication.
\end{itemize}
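The subtraction procedure of item i) can be illustrated in isolation. The toy sketch below uses an invented pole position and residue (not the actual two-body amplitude): the spurious pole term is replaced by the first few terms of its Taylor (geometric) expansion around threshold, which removes the pole while reproducing the low-energy behaviour order by order, with one more subtraction per EFT order.

```python
E_s = -400.0   # invented spurious-pole position (MeV), far below threshold
c = 1.0        # invented residue, for illustration only

def pole(E):
    """Spurious-pole part of a toy two-body amplitude."""
    return c / (E - E_s)

def subtracted(E, n_sub):
    """First n_sub terms of the Taylor (geometric) expansion of the pole
    term around E = 0.  The pole is removed, while the low-energy
    behaviour is reproduced order by order."""
    return sum(-c / E_s * (E / E_s) ** n for n in range(n_sub))

E = 20.0  # a typical low-energy scale inside the EFT domain
errors = [abs(pole(E) - subtracted(E, n)) for n in (1, 2, 3, 4)]
print(errors)  # shrinks geometrically, by a factor |E/E_s| = 0.05 per term
```

In the actual calculation the expansion acts only on the spurious-pole part of the amplitude, so the regular part is kept exactly; the geometric convergence above mirrors why only a few subtraction terms are needed at a given EFT order.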
\begin{acknowledgments}
The authors would like to thank Evgeny Epelbaum and Jambul Gegelia for interesting discussions.
M.E. was supported by a PhD fellowship from Helmholtz Forschungsakademie Hessen f\"ur FAIR (HFHF).
M.E. and H.-W.H. were supported by Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) --
Project ID 279384907 -- SFB 1245.
A.R.
was supported in part by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) -- Project-ID 196253076 -- TRR 110,
Volkswagenstiftung
(grant no. 93562) and the Chinese Academy of Sciences (CAS) President's
International Fellowship Initiative (PIFI) (grant no. 2021VMB0007).
\end{acknowledgments}
\section{Time and Causal Sets}
Did time ever begin? It is hard to decide which answer is more unsettling: the idea of an infinite past with no beginning or the concept of such a beginning---the birth of the Universe.
Stephen Hawking proved that General Relativity (GR) breaks down at a Big Bang singularity, but left open the possibility that the Big Bang is \textit{not} the beginning of time but rather that it was preceded by a quantum gravity era which cannot be captured by GR \cite{Hawking:2014xx}. The question of the beginning of time must therefore be addressed within a theory of quantum gravity.
Causal Set Theory is an approach to quantum gravity which postulates that spacetime is fundamentally discrete and takes the form of a \textit{causal set}, a partial order whose elements are the indivisible ``atoms'' of spacetime \cite{Bombelli:1987aa,Surya:2019ndm}. The partial order is interpreted as a temporal order, so that the \textit{past} of an element is formed of all the elements which precede it in the partial order. Thus the causal set furnishes a causal structure---a notion of before and after---in the absence of the continuum, allowing us to contemplate whether there was anything ``before'' the Big Bang (Fig.\ref{fig:continuum}) \cite{Dowker:2017zqj}.
\begin{figure}[h]
\centering
\includegraphics{continuum_approx.jpg}
\caption{
A causal set. Elements are represented as nodes and the order is indicated by the edges: element $x$ precedes element $y$ if and only if there is an upward-going path from $x$ to $y$. The portion of the causal set which lies in the shaded region is well approximated by a continuum spacetime (physics in this region is captured by GR). The remainder of the causal set forms the quantum gravity era preceding the Big Bang singularity.}
\label{fig:continuum}
\end{figure}
Naively, we may consider the continuum spacetime of GR to emerge from an underlying causal set via a large (length) scale approximation \cite{Dowker:2005tz}. But quantum mechanics suggests that reality is better described as a superposition of causal sets. A quantum theory of causal sets will ultimately be formulated as a sum-over-histories---a ``path integral'' of sorts---with the causal set playing the role of ``history'' or ``spacetime configuration'' \cite{Sorkin:1997gi,Sorkin:2006wq,Sorkin:1994dt}. Assigning a weight to each history in the sum is the problem of causal set dynamics.
Much of the effort towards obtaining a dynamics for causal sets has been guided by the paradigm of \textit{growth dynamics} which states that the weight/action emerges from a fundamental physical process in which the causal set comes into being \textit{ex nihilo}. This notion of \textit{becoming}, the idea that a causal set grows element by element, further allows the passage of time to be captured by physics: an instantaneous moment---a \textit{now}---corresponds to the birth (not to the existence) of an element \cite{Sorkin:2007hga,Dowker:2014xga,Dowker:2020qqs}.
Kinematically, causal sets can provide a cosmology in which time has no beginning---namely, a causal set in which every element has an infinite past. But are such past-infinite causal sets compatible with the heuristic of growth and becoming? If not, we may be forced to choose between a passage of time and a beginningless time.
\section{Growth Dynamics: Sequential vs Covariant}
In its fully-fledged form, the growth process will be a quantum phenomenon \cite{Sorkin:2011sp,Criscuolo:1998gd,Dowker:2010qh,Surya:2020cfm} but at this stage of development of Causal Set Theory, growth dynamics are classical stochastic processes which generate infinite causal sets. Thus far, the most fruitful growth dynamics are the Classical Sequential Growth (CSG) models \cite{Rideout:1999ub} in which, starting from the empty set, a single element is born at each stage (Fig.\ref{fig:growth}). The ordering of each new-born element with respect to the already-existing elements is determined probabilistically according to each model but always satisfies the constraint that a new-born element cannot precede an already-existing one, ensuring a consistency between the interpretation of the partial order as a temporal order and of the birth of elements as the passage of time.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{growth.jpg}
\caption{Sequential growth. Elements are born in a total order, one after the other. The total order of births is unphysical (pure gauge).}
\label{fig:growth}
\end{figure}
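The simplest of the classical sequential growth models, transitive percolation, can be simulated in a few lines. The sketch below is illustrative (our own encoding, not from this essay): each new-born element is placed above each already-existing element independently with some probability, the relation is closed under transitivity, and a new-born element can never precede an existing one, so the order of births respects the partial order.

```python
import random

def transitive_percolation(n, p, seed=0):
    """Grow an n-element causal set by transitive percolation: element i is
    linked above each earlier element j with probability p, and the order is
    then closed under transitivity.  Returns past[i] = the set of elements
    strictly preceding element i in the partial order."""
    rng = random.Random(seed)
    past = []
    for i in range(n):
        below = set()
        for j in range(i):
            if rng.random() < p:
                below.add(j)
                below |= past[j]      # transitive closure
        past.append(below)
    return past

past = transitive_percolation(50, 0.3)
# sanity: the relation is transitive and compatible with the birth order
assert all(past[j] <= past[i] for i in range(50) for j in past[i])
assert all(j < i for i in range(50) for j in past[i])
```

The total order of births (the loop index) is the pure-gauge global time of the figure above; only the resulting partial order `past` is physical.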
Our individual experience of the passage of time as a linear, totally ordered sequence of events is reflected in the sequential nature of the CSG models, where elements are born in a sequence, one after the other. But this familiar notion of becoming is too simplistic to capture the intrinsic partial order/causal structure, since the total order acts as a global time, which is pure gauge. The struggle between the gauge formulation of sequential growth and the gauge-independent nature of the physical world (cf. local coordinates and general covariance in GR) is resolved by identifying gauge-independent observables. The role of observables is played by \textit{stems}, finite ``portions'' of a causal set which contain their own past (Fig.\ref{fig:stem}). In other words, in CSG models the growing causal set is fully determined by its stems \cite{Brightwell:2002yu,Brightwell:2002vw,Dowker:2005gj}.
The CSG models are toy models of quantum cosmology but their original formulation shies away from the question at hand---whether time began---since the condition which prohibits new-born elements from preceding already-existing ones means that the growth process can only produce causal sets in which time has a beginning.
Loosening this restriction by allowing new-born elements to precede already-existing ones opens a new avenue for causal set cosmology in which the problem of the beginning of time can be formalised \cite{convexcovtree}. But how should this new form of growth, in which the order of births is incompatible with the partial order, be understood? If element $x$ precedes element $y$ in the temporal partial order, what could it possibly mean for $y$ to be born before $x$? It is hard to see how the growth can be considered a real physical process in this modified framework. Is a time with no beginning inherently incompatible with
the notion of becoming?
\begin{figure}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{stems.png}
\caption{}
\label{fig:causalsetwithsets}
\end{subfigure}\hspace{15mm} %
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{continuum_lightcones.jpg}
\caption{}
\label{fig:onestem}
\end{subfigure}
\caption[]{
$(a)$ Stems and convex sets. The green ``portion'' is a stem because it is finite and it contains its own past (\textit{i.e.} the past of each of its elements). The red ``portion'' is \textit{not} a stem because it does not contain its entire past (\textit{e.g.} it does not contain the green elements), but it is a convex set because it contains all the elements which lie \textit{in between} its elements in the partial order. The black ``portion'' is neither a stem nor a convex set. $(b)$ Continuum analogues of stems and convex sets. A stem corresponds to any union of past lightcones whose total spacetime volume is finite. A causal set with no beginning contains no stems, just like a geodesically complete spacetime contains no past lightcone of finite spacetime volume. A convex set is a generalisation of the intersection of a past lightcone with a future lightcone. }
\label{fig:stem}
\end{figure}
The missing piece that may reconcile a beginningless time with a physical growth process is to replace our intuitive notion of sequential becoming with \textit{asynchronous becoming} where elements are born in a partial (not a total) order \cite{Sorkin:2007hga,Dowker:2014xga,Dowker:2020qqs}. What does it mean for elements to be born in a partial order? Through the lens of our largely sequential experience, asynchronous becoming may sound more like a fantastical riddle than a description of physical reality. It is the role of mathematics to make sense of notions which lie beyond our everyday experience, and it may be that new mathematics is what is needed to better understand asynchronous becoming and its consequences for the nature of time.
\textit{Covariant growth} is an alternative to sequential growth which may contain the seed of asynchronous becoming \cite{Dowker:2019qiz,Zalel:2020oyf}. In its original formulation, covariant growth only produces causal sets in which time has a beginning. Taking its cue from the CSG models, covariant growth assumes from the outset that a causal set spacetime is fully described by its stems (\textit{i.e.} that causal sets which share all the same stems are physically equivalent). Thus, in contrast to sequential growth, covariant growth does not keep track of individual element births but only of the stems contained in the growing causal set. The growth process can be illustrated as a sequence of sets, where the $n^{th}$ set in the sequence contains all the causal sets which have cardinality $n$ and are stems in the growing causal set (Fig.\ref{fig:covtree}). When the process runs to completion (in the $n\rightarrow\infty$ limit) all stems are determined, thus fully determining the causal set spacetime grown in the process.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{covtree_process.jpg}
\caption{Covariant growth. The growth process does not keep track of the birth of individual elements but rather of the stems in the growing causal set. The $n^{th}$ set in the sequence contains all the causal sets which have cardinality $n$ and are stems in the growing causal set, so that after $n$ steps all the stems of cardinality $\leq n$ are determined.}
\label{fig:covtree}
\end{figure}
While the process of becoming is explicit in sequential growth, it is implicit or ``vague'' \cite{Wuthrich:2015vva} in covariant growth (\textit{e.g.} at any finite stage of the growth process, one cannot say which portion of the causal set has already come into being). But if there is a process of becoming which can be associated with covariant growth, then it may be that it is this quality of vagueness which embodies asynchronous becoming and thus allows us to reconcile the passage of time with a beginningless time in Causal Set Theory.
\section{Causal sets with no beginning}
Covariant growth can be modified to accommodate growth of causal sets in which time has no beginning. The key is identifying the observables pertaining to these causal sets. A causal set with no beginning contains no stems, since if a portion of the causal set contains its own past then it must contain infinitely many elements, while stems have finite cardinality by definition. Instead, the role of observables is played by \textit{convex sets}, ``portions'' of a causal set which, whenever they contain a pair of elements $x$ and $y$, contain all elements which lie between $x$ and $y$ in the partial order (Fig.\ref{fig:stem}). If finite convex sets encode all that is physical in a causal set, then we can adapt the covariant growth process for past-infinite causal sets simply by replacing stems with convex sets \cite{convexcovtree}. This new formulation of covariant growth keeps track of convex sets contained in the growing causal set. At stage $n$, all convex sets of cardinality $n$ are fixed so that in the $n\rightarrow \infty$ limit the causal set spacetime is fully determined.
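The definitions of stems and convex sets translate directly into code. A minimal sketch (our own encoding: `past[i]` is the set of elements strictly preceding element $i$):

```python
def is_stem(subset, past):
    """A stem is a finite portion containing its own past: for every element
    of the subset, all of its ancestors are also in the subset."""
    return all(past[x] <= subset for x in subset)

def is_convex(subset, past):
    """A convex set contains every element lying between two of its elements:
    if x precedes z and z precedes y, with x and y in the subset, then z is
    in the subset too."""
    for x in subset:
        for y in subset:
            if x in past[y]:
                between = {z for z in past[y] if x in past[z]}
                if not between <= subset:
                    return False
    return True

# A 4-chain 0 < 1 < 2 < 3, encoded as past[i] = ancestors of i
past = [set(), {0}, {0, 1}, {0, 1, 2}]
assert is_stem({0, 1}, past)        # an initial segment is a stem
assert not is_stem({2, 3}, past)    # misses its past {0, 1}
assert is_convex({1, 2}, past)      # an interval is convex
assert not is_convex({0, 2}, past)  # omits 1, which lies between 0 and 2
```

Note that on a past-infinite chain no finite subset passes `is_stem`, while finite intervals still pass `is_convex`, mirroring why convex sets replace stems as observables.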
The significance of this new covariant formalism is twofold. First, this process is capable of growing all kinds of causal sets: in some time begins, in others it does not. Thus, whether time has a beginning or not is no longer a choice hardwired into our construction but rather a question which we can ask of the dynamics. Second, the implicit nature of the growth means that there is no immediate contradiction between the process of becoming and the past-infinite nature of a growing causal set. It will be up to future work to decide whether covariant growth can really be interpreted as a physical growth of past-infinite causal sets; whether there is a yet unknown formalism which better encompasses asynchronous becoming and in doing so captures the passage of a beginningless time; or whether the physics of passage dictates that time must have a beginning.
\vspace{2mm}
\noindent \textbf{Acknowledgments:} The authors are indebted to Fay Dowker for guidance and collaboration on the work presented in this essay.
\section{Introduction}
Isolated quantum systems are expected to approach thermal equilibrium after sufficiently long times and much of current research focuses on understanding the conditions for this to happen as well as the details of the process of thermalization~\cite{Gogolin2016}. From this point of view, quantum revival -- a wave function periodically returning to its value at time $t=0$~\cite{Bocchieri1957,Percival1961} -- is a well-known counterexample of non-thermalizing dynamics that has played an important role since the early days of quantum physics. Experimentally, such recurrent behaviour has been observed in small or weakly-interacting quantum systems, for example the Jaynes-Cummings model describing a two-level atom interacting with a resonant monochromatic field~\cite{Eberly1980}, a micromaser cavity with rubidium atom~\cite{Rempe1987}, in a Rydberg electron wave packet~\cite{Yeazell1990}, vibrational wave packets in $\mathrm{Na}_2$~\cite{Baumert1992}, infinite square well potentials and various types of billiards~\cite{Robinett2002, Aronstein1997, Dubois2017}, cold atoms~\cite{Brune1996,Greiner2002,Will2010}, and more recently larger systems of one-dimensional superfluids~\cite{Schweigler2017,Rauer2018}. The ability to engineer recurrent behavior in more complex quantum many-body systems is an important task because this allows one to study their long-term coherent evolution beyond the initial relaxation, while on the other hand, it also provides insight into the emergence of statistical ensembles in closed quantum systems that evolve according to the Schr\"odinger unitary evolution.
Intuitively, the conditions for observing \emph{many-body} wave function revivals in a strongly-interacting quantum system are expected to be very stringent due to the exponentially large size of the Hilbert space. It was thus surprising when recent experiments on strongly-interacting one-dimensional chains of Rydberg atoms~\cite{Schauss2012,Labuhn2016} observed revivals of local observables when the chain was quenched~\cite{CalabreseQuench} from an initial N\'eel state of atoms~\cite{Bernien2017}, $|\psi(0)\rangle = |\mathbb{Z}_2\rangle \equiv | 0101\ldots\rangle$, where $0$ denotes an atom in the ground state and $1$ in the excited (Rydberg) state. This observation was surprising as the N\'eel state effectively forms an ``infinite-temperature" ensemble for this system, for which equilibration is expected to occur very fast according to the Eigenstate Thermalization Hypothesis (ETH)~\cite{DeutschETH, SrednickiETH}. The observed revivals were thus in apparent disagreement with the na\"ive expectations based on the ETH. Moreover,
the revivals from the N\'eel initial state have also been seen in numerical simulations of an idealized model believed to describe the Rydberg atom chain~\cite{Sun2008,LesanovskyDynamics,Olmos2012,Turner2017,wenwei18TDVPscar}.
This model is known as the ``PXP" model~\cite{Bernien2017}, and it has the form of a one-dimensional spin-1/2 chain with a kinetically-constrained spin flip term that results from removing all nearest-neighbor pairs of atoms that are simultaneously excited into the Rydberg states (see Sec.~\ref{sec:pxp} for more details on the model). It has been understood that the key to revivals in the Rydberg atom chain are the special eigenstates -- ``quantum many-body scars"~\cite{Turner2017,lin2018exact} -- whose non-thermal properties cause a violation of the strong ETH~\cite{dAlessio2016, Gogolin2016, ShiraishiMori}. Such atypical eigenstates have previously been rigorously constructed in the non-integrable Affleck-Kennedy-Lieb-Tasaki (AKLT) model~\cite{Bernevig2017,BernevigEnt}. While the collection of models that feature scarred-like eigenstates has recently expanded~\cite{Calabrese16, Konik1, Konik2, Vafek, IadecolaZnidaric, NeupertScars, Haldar2019,Moudgalya2019,Pretko2019,Khemani2019,Sala2019,Khemani2019_2}, a smaller subset of such models have been demonstrated to display revivals from easily preparable initial states~\cite{Bull2019,Michailidis2019,Buca2019_2}. Thus, the connection between revivals and the presence of atypical eigenstates remains to be fully understood.
Revivals in the experimentally realized PXP model are relatively fragile. For example, numerical simulations have shown that the revival of a wavefunction, quantified in terms of the return probability, $\vert \langle \psi(0) \vert \psi(t) \rangle \vert^2$~\cite{Gorin2006}, is at best $\sim70$\% of its initial value, and it undergoes a clear decay as a function of time~\cite{TurnerPRB}. While the imperfect PXP revivals are still remarkable given the exponentially large many-body Hilbert space, their decay poses a question of whether the PXP many-body scars could be a transient effect that disappears in the thermodynamic limit. It was realized, however, that revivals can be significantly enhanced by slightly deforming the PXP model~\cite{Khemani2018}, with the fidelity revival reaching the value $\sim (1-10^{-6})$ in the largest systems available in numerics~\cite{Choi2018}, suggesting there could exist fine-tuned models that host ``perfect" many-body scars while their overall behavior, as witnessed by the energy level statistics~\cite{Choi2018}, remains thermalizing.
Indeed, several non-integrable spin chain models have recently been shown to contain ``exact" scars and exhibit perfect wavefunction revivals when quenched from special initial states~\cite{Iadecola2019_2, Iadecola2019_3,Chattopadhyay,OnsagerScars}.
Exact revivals in these models are a consequence of a dynamical symmetry of certain terms in the Hamiltonian (as we explain below in Sec.~\ref{section:exact_embedding}), such that scarred eigenstates are equidistant in energy. On the other hand, PXP is not the only model to exhibit decaying wavefunction revivals due to many-body quantum scars. This phenomenon has also been observed in models of fractional quantum Hall effect in a quasi one-dimensional limit~\cite{Moudgalya2019} and in a model of bosons with correlated hopping~\cite{bosonScars}. In each of these cases it was found that scar states are well approximated by Ritz vectors of a Krylov-like subspace generated by the action of some raising operator. In general, the energy variance of this subspace is non-zero; however, provided this subspace variance is small, the Hamiltonian takes the approximate block-diagonal form $H \approx H_{\mathrm{Krylov}} \bigoplus H_{\bot}$. This is reminiscent of the recently introduced notion of ``Krylov-restricted thermalization"~\cite{MoudgalyaKrylov}, whereby the Hilbert space fractures into closed Krylov subspaces in which exponentially large integrable and ergodic sectors can coexist alongside one another. While ``Krylov restricted thermalization" with exponentially large integrable sectors arises naturally in a model of interacting fermions~\cite{MoudgalyaKrylov}, it has also been demonstrated that one can embed a target integrable subspace of arbitrary size alongside ergodic subspaces in an interacting spin model~\cite{ShiraishiMori,NeupertScars}. We will refer to the latter approach as ``projector embedding".
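The connection between an equidistant tower of eigenstates and wavefunction revivals can be illustrated with a toy numerical sketch (invented spectra and weights, not the PXP model): if the eigenstates overlapping with the initial state are exactly equidistant in energy, the fidelity revives perfectly with period $2\pi/\Omega$, while a slightly broken spacing gives imperfect revivals.

```python
import numpy as np

def fidelity(t, energies, weights):
    """|<psi(0)|psi(t)>|^2 for |psi(0)> = sum_n c_n |E_n>, weights = |c_n|^2."""
    amp = np.sum(weights * np.exp(-1j * np.outer(t, energies)), axis=1)
    return np.abs(amp) ** 2

rng = np.random.default_rng(0)
n = 11
weights = rng.random(n)
weights /= weights.sum()          # normalized overlaps (illustrative)

omega = 1.3                       # invented level spacing
tower = omega * np.arange(n)      # perfectly equidistant "scarred" tower
jitter = tower + 0.02 * rng.standard_normal(n)  # slightly broken spacing

T = 2 * np.pi / omega
t = np.array([T, 2 * T, 3 * T])
print(fidelity(t, tower, weights))   # identically 1: perfect revivals
print(fidelity(t, jitter, weights))  # below 1: imperfect revivals
```

In the language used below, correcting the root structure of the embedded algebra pushes the scarred tower from the `jitter` situation towards the `tower` one.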
In this paper we demonstrate how a ``loosely embedded'' integrable subspace can give rise to many-body quantum scars and strong ETH violation, thus providing a general picture of scarring in the PXP model that relates it to other types of scarred models in the literature. Our embedding scheme is defined by considering Hamiltonians that consist of generators of a Lie algebra representation, but with slightly ``broken'' commutation relations, resulting in the approximate block diagonal form $H \approx H_{\mathrm{int}} \bigoplus H_{\bot}$. Due to the ``broken'' root structure of the Lie algebra, $H_{\mathrm{int}}$ is found to possess an approximate dynamical symmetry such that scar states are embedded throughout the spectrum with nearly equal energy spacing. This, along with the non-zero subspace variance, gives rise to decaying wavefunction revivals when the system is quenched from certain initial states.
Further, we introduce an iterative scheme to identify perturbations which correct the errors in the root structure of the Lie algebra representation. While the perturbations we find are generically long-range and have complicated forms, they serve to elucidate the connection between exact integrable subspaces, seen in either ``projector embedding" or ``Krylov-restricted thermalization", and loose embeddings such as in PXP model. Correcting the algebra causes the energy variance of the loosely embedded subspace to decrease, resulting in the Hamiltonian becoming increasingly block diagonal. In addition, an improving root structure within the embedded subspace results in scar states becoming more equidistant in energy, such that revivals are also enhanced.
Specifically, our scheme allows us to re-derive perturbations to the PXP model which have been shown to enhance revivals from the $\vert \mathbb{Z}_2 \rangle$ state~\cite{Khemani2018,Choi2018}. Nevertheless, in doing so, we also identify a missing set of perturbations which enhance the revivals further by several orders of magnitude compared to previous works~\cite{Khemani2018,Choi2018}. Moreover, by considering different possible $\mathrm{su(2)}$ representations embedded within the PXP model, we also identify a weak perturbation which enhances revivals from the $\vert \mathbb{Z}_3 \rangle = \vert 100100...\rangle$ state, and a strong deformation resulting in a new model which supports revivals from the $\vert \mathbb{Z}_4 \rangle$ initial state. We also identify two deformations of the PXP model which fix an $\mathrm{su(2)}$ algebra \emph{completely}, such that the models feature exact wavefunction revivals from simple product states and an exact integrable Krylov subspace generated by repeated application of the Hamiltonian, while also simultaneously containing thermalizing sectors.
The remainder of this paper is organized as follows. Secs.~\ref{sec:pxp} and \ref{section:exact_embedding} contain an overview of the physics of the PXP model and the recent constructions of scarred models via projector embedding and dynamical symmetry. Sec.~\ref{sec:loose} introduces our notion of ``loose" embedding of broken Lie algebra representations into an eigenspectrum of a many-body system. In Sec.~\ref{sec:pxpz2} we present the simplest application of our construction to revivals from $|\mathbb{Z}_2\rangle$ product state in PXP model.
In Sec.~\ref{sec:pxpz3} we explore a different $\mathrm{su(2)}$ Lie algebra representation which can be loosely embedded in the PXP model in order to give rise to revivals from $|\mathbb{Z}_3\rangle$ product state. Additionally, we find an exactly embedded subspace in a new model which represents a strong deformation of the PXP model. In Sec.~\ref{sec:pxpz4} we demonstrate that our method can be used to stabilise revivals from $|\mathbb{Z}_4\rangle$ product state which are absent in the PXP model. Our conclusions are presented in Sec.~\ref{sec:conc}. Appendices contain a non-trivial perturbation that stabilizes $\mathbb{Z}_2$ revivals in the spin-1 generalization of the PXP model, as well as technical details on the second-order corrections to $\mathrm{su(2)}$ algebras.
\section{A brief overview of PXP model} \label{sec:pxp}
The PXP model~\cite{Lesanovsky2012} describes a chain of atoms in which simultaneous excitation of adjacent atoms into the Rydberg states is forbidden~\cite{Bernien2017}. The model can be expressed as a kinetically constrained spin-$1/2$ chain by denoting the basis of $\vert 0 \rangle = \vert {\downarrow} \rangle$, $\vert 1 \rangle = \vert {\uparrow} \rangle$, where $|0\rangle$ refers to an atom in its ground state and $|1\rangle$ denotes an excited state. The PXP Hamiltonian is given by
\begin{eqnarray}\label{eq:pxp}
H_{\mathrm{PXP}} &=& \sum_{n=1}^N P_{n-1} \sigma^x_n P_{n+1},
\end{eqnarray}
where $\sigma_n^x= \vert 0 \rangle_n \langle 1 \vert_n + \vert 1 \rangle_n \langle 0 \vert_n$ is the standard Pauli $x$-matrix on site $n$, and the projector
\begin{equation}\label{eq:proj}
P_n = \vert 0 \rangle_n \langle 0 \vert_n
\end{equation}
implements correlated spin flips, i.e., $P$ removes any transitions that would create adjacent Rydberg excitations. Examples of allowed and forbidden processes are illustrated in Fig.~\ref{fig:sketch}.
Our numerical study of the model in Eq.~(\ref{eq:pxp}) and related models below will be based on exact diagonalization of finite chains with periodic boundary condition ($n+N \equiv n$).
\begin{figure}
\includegraphics[scale=0.8]{sketch.pdf}
\caption{An example of an allowed (a) and forbidden (b) transition under the Hamiltonian in Eq.~(\ref{eq:pxp}).}\label{fig:sketch}
\end{figure}
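The model of Eq.~(\ref{eq:pxp}) and the quench diagnostics discussed below can be sketched in a few lines of exact diagonalization. The following is an illustrative sketch for a small periodic chain ($N=8$; conventions are ours): it builds the constrained Hilbert space, the PXP Hamiltonian, and the fidelity after a quench from the N\'eel state.

```python
import numpy as np
from itertools import product

def pxp(N):
    """Build H_PXP on a periodic chain of N sites, restricted to the
    constrained Hilbert space of bit strings with no two adjacent 1s."""
    basis = [s for s in product((0, 1), repeat=N)
             if all(not (s[i] and s[(i + 1) % N]) for i in range(N))]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, s in enumerate(basis):
        for i in range(N):
            # P sigma^x P: flip site i only if both neighbours are unexcited
            if s[(i - 1) % N] == 0 and s[(i + 1) % N] == 0:
                t = list(s)
                t[i] ^= 1
                H[index[tuple(t)], k] += 1.0
    return H, index

N = 8
H, index = pxp(N)
dim = H.shape[0]   # 47 constrained states for N = 8 (a Lucas number)

# Quench from the Neel state and evaluate the fidelity via diagonalization
E, V = np.linalg.eigh(H)
z2 = np.zeros(dim)
z2[index[tuple([0, 1] * (N // 2))]] = 1.0
c = V.T @ z2

def fidelity(t):
    return abs(np.sum(np.abs(c) ** 2 * np.exp(-1j * E * t))) ** 2

print(dim, round(fidelity(0.0), 10))  # prints: 47 1.0
```

Scanning `fidelity(t)` over a range of times for this initial state reproduces the decaying revivals discussed in the Introduction.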
The PXP model in Eq.~(\ref{eq:pxp}) is non-integrable and thermalizing~\cite{Turner2017}, but its quench dynamics is strongly sensitive to the choice of the initial state~\cite{Bernien2017}. For simplicity, we focus on initial states that are product states of atoms compatible with the Rydberg constraint (recent work in Ref.~\onlinecite{michailidis2017slow} studied the revivals from more general classes of weakly-entangled initial states). One such initial state is the N\'eel state $|\psi(0)\rangle = \vert \mathbb{Z}_2 \rangle \equiv \vert 0101... \rangle$, which gives rise to revivals in the quantum fidelity,
\begin{eqnarray}
\vert \langle \psi(0) \vert e^{-iHt} \vert \psi(0) \rangle \vert^2.
\end{eqnarray}
Other physical quantities, such as expectation values of local observables and correlation functions, as well as non-local quantities such as entanglement entropy, were all found to revive with the same frequency as the fidelity~\cite{TurnerPRB}. Other initial states such as $|\mathbb{Z}_3\rangle \equiv |100100\ldots\rangle$ also revive, though much more weakly, while states with larger unit cells, such as $|\mathbb{Z}_4\rangle \equiv |10001000\ldots\rangle$, do not revive even in small systems accessible by exact numerics~\cite{TurnerPRB}.
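The fidelity can be evaluated by spectral decomposition, $\vert \langle \psi_0 \vert e^{-iHt} \vert \psi_0 \rangle \vert^2 = \vert \sum_k |c_k|^2 e^{-iE_k t} \vert^2$ with $c_k = \langle E_k \vert \psi_0 \rangle$. A small-$N$ sketch of our own (the system size and time grid are arbitrary choices):

```python
import numpy as np
from itertools import product

# Constrained PXP Hamiltonian for a small periodic chain.
N = 10
basis = [s for s in product((0, 1), repeat=N)
         if all(not (s[i] and s[(i + 1) % N]) for i in range(N))]
index = {s: k for k, s in enumerate(basis)}
H = np.zeros((len(basis), len(basis)))
for k, s in enumerate(basis):
    for n in range(N):
        if s[(n - 1) % N] == 0 and s[(n + 1) % N] == 0:
            t = list(s)
            t[n] = 1 - t[n]
            H[index[tuple(t)], k] += 1.0

# Quench from the Neel state |0101...> and track the return probability
# |<psi0| exp(-iHt) |psi0>|^2 = |sum_k |c_k|^2 exp(-i E_k t)|^2.
psi0 = np.zeros(len(basis))
psi0[index[tuple(n % 2 for n in range(N))]] = 1.0
E, V = np.linalg.eigh(H)
c2 = (V.T @ psi0) ** 2                 # eigenstate weights |c_k|^2
times = np.linspace(0.0, 20.0, 2001)
fidelity = np.abs(np.exp(-1j * np.outer(times, E)) @ c2) ** 2
```

On small chains the return probability shows a pronounced first peak near $t \approx 4.8$, which slowly decays over successive periods.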
As we pointed out in the Introduction, the return probability of the
$\vert \mathbb{Z}_2 \rangle$ state in the PXP model clearly decays with time, suggesting that the revival is fragile and likely to disappear in the thermodynamic limit.
In this context, Ref.~\onlinecite{Choi2018} made an important observation that PXP model could be weakly deformed such that revivals are made nearly perfect. The enhancement of revivals in the PXP model was explained by the fact that appropriate perturbations stabilise an approximate $\mathrm{su(2)}$ algebra formed by the special eigenstates of the PXP model. The special eigenstates can be described, with high accuracy, using a ``forward scattering approximation" (FSA)~\cite{Turner2017}. The FSA is based on a particular decomposition of the PXP Hamiltonian, $H_{\mathrm{PXP}} = H^+ + H^-$, chosen in such a way that $H^-$ annihilates the initial N\'eel state $|\mathbb{Z}_2 \rangle$ (with $H^+=(H^-)^\dagger$). The set of states $(H^+)^n |\mathbb{Z}_2 \rangle$ then form an orthogonal Krylov-like subspace of finite dimension $N+1$, where $N$ is the number of atoms. The scarred eigenstates can be compactly represented as linear superpositions of $N+1$ FSA basis states~\cite{TurnerPRB}. Within the subspace of special eigenstates, the operators $H^+$ and $H^-$ act like raising and lowering operators for a fictitious spin-$N/2$ particle. Intuitively, periodic revivals can then be interpreted as precession of this large spin~\cite{Choi2018}. In the pure PXP model, the emergent $\mathrm{su}(2)$ spin algebra is only approximate but becomes nearly exact at the optimal revival point.
In this paper, we reinterpret the revivals in PXP model from the point of view of broken Lie algebras, by defining a set of broken generators for which the scar states act as an approximate basis. Considering corrections to this algebra allows us to construct perturbations that significantly enhance the revivals for general types of initial states without relying on FSA scheme.
\section{Exact Embedding of Scarred Eigenstates} \label{section:exact_embedding}
Before turning to the PXP model, which features approximately integrable subspaces with small subspace variance (a situation we describe as having loosely embedded scar states), we first review several ways in which an \emph{exact} integrable subspace has been demonstrated to arise in recent works in the literature.
\subsection{Projector embedding} \label{section:ShiraishiScars}
Selected eigenstates can be embedded into the spectrum of an ergodic Hamiltonian via the ``projector embedding" construction due to Shiraishi and Mori~\cite{ShiraishiMori} (further extensions to topologically ordered systems have been developed in Ref.~\onlinecite{NeupertScars}). Consider a Hamiltonian describing some lattice system of the form:
\begin{equation}
H = \sum_{i=1}^N P_i h_i P_i + H^{\prime},
\end{equation}
where $P_i$ are arbitrary local projectors [not necessarily the same as in Eq.~(\ref{eq:proj})], $h_i$ are arbitrary local Hamiltonians acting on lattice sites $i=1,2,\ldots N$, $[H^{\prime},P_i] = 0$ for all $i$, and $\vert \psi_i \rangle$ are target states that are annihilated by the projectors,
\begin{equation}
P_i \vert \psi_j \rangle = 0, \quad \forall \, i,j.
\end{equation}
It follows
\begin{eqnarray}
P_i H \vert \psi_j \rangle = P_i H^{\prime} \vert \psi_j \rangle = H^{\prime} P_i \vert \psi_j \rangle = 0,
\end{eqnarray}
so $H \vert \psi_j \rangle = H^{\prime} \vert \psi_j \rangle$ is again annihilated by every $P_i$, i.e., $H$ maps the common kernel of the projectors into itself. Therefore, $H$ takes the block diagonal form
\begin{eqnarray}
H = H_{\mathrm{target}} \bigoplus H_{\bot},
\end{eqnarray}
where $H_{\mathrm{target}}$ acts within the subspace spanned by the target states $\vert \psi_i \rangle$. Such a decomposition may result in the model possessing both integrable and ergodic sectors. Models of this form generically contain eigenstates embedded near the center of the spectrum~\cite{ShiraishiMori}. There is no guarantee that the embedded states are equidistant in energy (they may even be degenerate), so this scheme can produce models which do not exhibit perfect wavefunction revivals. We note that, for periodic boundary conditions, the PXP model, introduced in Sec.~\ref{sec:pxpz2} below, can be expressed in this ``projector embedded'' form such that a single target state is embedded -- namely the AKLT ground state at zero energy~\cite{Shiraishi_2019}. However, the complete set of $N+1$ scarred eigenstates with enhanced support on the $\vert \mathbb{Z}_2 \rangle$ state (mentioned in Sec.~\ref{sec:pxp}) has not been understood through this embedding procedure.
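A minimal numerical check of this construction (a toy example of our own, not the AKLT embedding of Ref.~\onlinecite{Shiraishi_2019}): take each $P_i$ to be a rank-2 bond projector annihilating $\vert 00 \rangle$, $h_i$ a random Hermitian bond term, and $H^{\prime} = \sum_j \sigma^z_j$, which commutes with every $P_i$; the target state $\vert 00\ldots0 \rangle$ is then an exact eigenstate of the full $H$.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 6

def embed(op, i, width):
    """Place an operator acting on sites i..i+width-1 into the full chain."""
    return np.kron(np.kron(np.eye(2 ** i), op),
                   np.eye(2 ** (N - i - width)))

# Rank-2 bond projector annihilating |00>, spanned by |11> and (|01>+|10>)/sqrt(2).
v11 = np.array([0., 0., 0., 1.])
vs = np.array([0., 1., 1., 0.]) / np.sqrt(2)
P_bond = np.outer(v11, v11) + np.outer(vs, vs)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

sz = np.diag([-1., 1.])                       # sigma^z with |0> = spin-down
P = [embed(P_bond, i, 2) for i in range(N - 1)]
Hprime = sum(embed(sz, i, 1) for i in range(N))    # commutes with every P_i
H = sum(P[i] @ embed(rand_herm(4), i, 2) @ P[i]
        for i in range(N - 1)) + Hprime

# Target state |000...0> is annihilated by every P_i, hence an exact
# eigenstate of the full Hamiltonian, with eigenvalue -N from H'.
psi = np.zeros(2 ** N)
psi[0] = 1.0
```

Because each $P_i$ here has rank 2 on its bond, the embedded terms $P_i h_i P_i$ are genuinely off-diagonal, so the resulting model is not trivially diagonal.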
\subsection{Equidistant embedding: Dynamical Symmetry} \label{section:IadecolaScars}
Next we review a way in which ETH violating eigenstates can be embedded with equidistant energy spacing, yielding exact wavefunction revivals in specially designed quenches. Consider a Hamiltonian of the form:
\begin{equation}
H = H_0 + H^{\prime}.
\end{equation}
We assume the existence of some local operator $Q^+$ which satisfies an extensive dynamical symmetry with respect to $H^{\prime}$:
\begin{equation}
[H^{\prime}, Q^+] = \alpha Q^+,
\end{equation}
such that, for any eigenstate $\vert \Omega \rangle$ of $H^{\prime}$, we can generate an equally spaced tower of eigenstates, $(Q^+)^n \vert \Omega \rangle$.
If the states in the tower of $H^{\prime}$ eigenstates, $\vert n \rangle = 1/\mathcal{N} (Q^+)^n \vert \Omega \rangle$, are also zero energy eigenstates of $H_0$, then $H^{\prime}$ will split the degeneracy such that the $\vert n \rangle$ are equidistant eigenstates of the full Hamiltonian. Further, if $\vert \Omega \rangle$ is a weakly entangled state, then, due to the locality of $Q^{+}$, the states $\vert n \rangle$ are also expected to be weakly entangled. Given an appropriate choice of $H_{0}$ such that the model is non-integrable, the states $\vert n \rangle$ will be weakly entangled scarred eigenstates which violate the ETH. Such a scenario has been realised in a variety of models, such as spin-1 XY models~\cite{Iadecola2019_2,Chattopadhyay}, a spin $1/2$ model with emergent kinetic constraints~\cite{Iadecola2019_3} and a spin chain where the dynamical symmetry emerges due to an underlying Onsager algebra~\cite{OnsagerScars}.
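The algebraic mechanism is easy to verify numerically. The sketch below (a bare-bones illustration of ours, not one of the cited models) takes $H^{\prime} = \sum_n \sigma^z_n$ and the staggered raising operator $Q^+ = \sum_n (-1)^n \sigma^+_n$, for which $[H^{\prime}, Q^+] = 2Q^+$, and checks that the tower built on $\vert \Omega \rangle = \vert {\downarrow}{\downarrow}\ldots \rangle$ is equally spaced; producing scarred eigenstates of a non-integrable model additionally requires an $H_0$ annihilating the tower, as in the cited works.

```python
import numpy as np

N = 6
sz = np.diag([-1., 1.])                 # sigma^z, basis (|down>, |up>)
sp = np.array([[0., 0.], [1., 0.]])     # sigma^+ : |down> -> |up>

def site(op, i):
    return np.kron(np.kron(np.eye(2 ** i), op), np.eye(2 ** (N - i - 1)))

Hprime = sum(site(sz, i) for i in range(N))
Qp = sum((-1) ** i * site(sp, i) for i in range(N))

# Dynamical symmetry: [H', Q^+] = 2 Q^+, so acting with Q^+ on an H'
# eigenstate raises its energy by exactly 2.
omega = np.zeros(2 ** N)
omega[0] = 1.0                          # |down down ... down>, energy -N
tower = []
v = omega
for n in range(1, N + 1):
    v = Qp @ v
    v = v / np.linalg.norm(v)
    tower.append(v @ Hprime @ v)        # expect energies -N + 2n
```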
\begin{figure}
\centering
\includegraphics[width=\linewidth]{./scarMechs_schematic.pdf}
\caption{Summary of various mechanisms for embedding scarred eigenstates in a many-body system.
(a) An exactly embedded Krylov subspace (purple tridiagonal matrix, with red lines symbolizing the non-zero elements).
Such a scenario can emerge in models exhibiting the phenomenology of fractonic systems~\cite{Pretko2019}, where if the Krylov subspace is exponentially large this effect is coined ``Krylov-restricted thermalization''~\cite{MoudgalyaKrylov}. Lifting the restriction that the embedded subspace be tridiagonal, models of type (a) can also be generically realized by the ``projector embedding method'' (Sec.~\ref{section:ShiraishiScars}).
(b) Exact scars featuring perfect revivals due to a dynamical symmetry of certain terms in the Hamiltonian generated by $Q^+$ (see Sec.~\ref{section:IadecolaScars}).
Type (b) scars have been realized in a variety of spin models such as the spin-1 XY model~\cite{Iadecola2019_2,Iadecola2019_3,Chattopadhyay,OnsagerScars}.
(c) PXP-like scarring~\cite{Khemani2018,Choi2018}, where a Krylov subspace which approximately acts as an $\mathrm{su(2)}$ representation is sparsely coupled to the thermal bulk, such that the subspace has a low subspace variance (proportional to the Frobenius norm of the block labelled ``couplings''). By fixing various broken Lie algebra representations in models of type (c) we can also realize scarred subspaces of approximate type (a), where the nearly exactly embedded subspace forms a representation of the Lie algebra (as will be discussed in Secs.~\ref{section:pxpZ3Exact} and \ref{section:pxpZ4}).
}
\label{fig:scarMechs}
\end{figure}
A summary of exact embeddings is presented in Figs.~\ref{fig:scarMechs}(a), (b). In contrast to exact embeddings, the focus of this paper is the PXP model~\cite{Bernien2017} where the scarred subspace is only approximately decoupled from the thermal bulk, Fig.~\ref{fig:scarMechs}(c). Before discussing in detail the PXP model in Sec.~\ref{sec:pxpz2}, in the following section we introduce our general notion of loose embedding that can be applied, in principle, to any model.
\section{Loose embeddings of broken Lie algebras }\label{sec:loose}
Previous examples of exact embeddings of scarred eigenstates in Sec.~\ref{section:exact_embedding} are analytically tractable, but they do not directly apply to the experimentally observed scarred revivals in the PXP model~\cite{Bernien2017}. In the latter case, the revivals clearly decay over time, so we seek to interpret such revivals in terms of an \emph{inexact} embedding of an algebra whose representation is defined by the scarred states. Here we outline how to construct models with loosely embedded scar states, whose Hamiltonian approximately fractures into the form $H \approx H_{\mathrm{int}} \bigoplus H_{\bot}$, where $H_{\mathrm{int}}$ possesses an approximate dynamical symmetry, which we engineer from the root structure of a Lie algebra representation with weakly ``broken'' commutation relations.
\subsection{Embedding scheme}\label{sec:scheme}
We start by recalling some basics of Lie algebras and representation theory.
Infinitesimal generators $g_i$ of a Lie group $\mathcal{G}$ form a Lie algebra $\mathcal{A}$:
\begin{eqnarray}
[g_i,g_j] = f^k_{ij} g_k.
\end{eqnarray}
The algebra is encoded in the structure constants $f^k_{ij}$, which are antisymmetric with respect to lower indices, $f^k_{ij} = - f^k_{ji}$.
A set of $n \times n$ matrices $\{M_i\}$ satisfying $[M_i,M_j] = f^k_{ij} M_k$ forms an $n$-dimensional representation of the Lie algebra. Checking these commutation relations is sufficient to establish that the set $\{M_i\}$ forms a valid representation.
Given a set of infinitesimal generators of a Lie group, define $\{H^i\}$ as the largest set of mutually commuting generators. By taking linear combinations of the remaining generators, one can construct a set of ladder operators, $\{E^\alpha\}$:
\begin{eqnarray}
[H^i, E^\alpha] = \alpha^i E^\alpha.
\label{eq:root_system}
\end{eqnarray}
Together, the sets $\{H^i\}, \{E^{\alpha}\}$ are known as the Cartan-Weyl basis. As the generators $\{H^i\}$ are mutually commuting by definition, there exists a basis which simultaneously diagonalizes every $H^i$, such that we can label the basis states of a representation by their $H^i$ quantum numbers. On application of $E^{\alpha}$, the change in the $H^i$ quantum numbers is given by the roots $\alpha^i$:
\begin{eqnarray}
H^i \vert \psi \rangle &=& \lambda_i \vert \psi \rangle, \\
H^i E^{\alpha} \vert \psi \rangle &=& (E^\alpha H^i + \alpha^i E^\alpha) \vert \psi \rangle = (\lambda_i + \alpha^i) E^\alpha \vert \psi \rangle. \quad
\end{eqnarray}
Given a single basis state which is an eigenstate of every $H^i$, one can systematically construct the remaining basis states via repeated applications of the ladder operators $E^\alpha$. This construction will prove useful for forming approximate basis states of broken Lie algebra representations, which can be used to approximate many-body scar states (e.g., within the FSA scheme~\cite{Turner2017}).
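For the familiar case of $\mathrm{su(2)}$ (a single Cartan generator $H^z = S^z$, ladder operators $E^{\pm} = S^{\pm}$, roots $\pm 1$), this ladder construction can be checked in a few lines; a self-contained sketch of ours:

```python
import numpy as np

def spin_matrices(S):
    """S^z and ladder operators S^+/- for a spin-S irreducible representation."""
    m = np.arange(S, -S - 1, -1)        # weights S, S-1, ..., -S
    Sz = np.diag(m)
    # <m+1| S^+ |m> = sqrt(S(S+1) - m(m+1))
    off = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))
    Sp = np.diag(off, k=1)
    return Sz, Sp, Sp.T

Sz, Sp, Sm = spin_matrices(3 / 2)

# Build the basis from the lowest weight state by repeated raising;
# the S^z quantum number increases by the root alpha = +1 at each step.
v = np.zeros(4)
v[-1] = 1.0                             # |S=3/2, m=-3/2>
weights = []
for _ in range(3):
    v = Sp @ v
    v = v / np.linalg.norm(v)
    weights.append(v @ Sz @ v)
```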
Consider the set of operators $\{E^\alpha\}$ which are raising and lowering operators of some Lie algebra $\mathcal{A}$ in the Cartan-Weyl basis. The set of equations,
\begin{eqnarray}
[E^\alpha,E^\beta] = \sum_\gamma c_{\gamma}E^\gamma + \sum_i d_i H^i,
\label{eq:Hi_def}
\end{eqnarray}
follows from the properties of the Lie algebra, but can be taken as defining the operators $H^i$ when these equations are inverted.
Now we are in a position to introduce our notion of a ``broken'' Lie algebra. Let the set of operators $\{\bar{E}^\alpha\}$ have the same size as the previous set $\{ E^\alpha \}$, but we do not assume they are raising/lowering operators of any Lie algebra. Taking Eqs.~(\ref{eq:Hi_def}) as a definition of $H^i$ as some linear combination of $\{E^\alpha, [E^\alpha,E^\beta]\}$, define $\bar{H}^i$ as the same linear combination of $\{\bar{E}^\alpha,[\bar{E}^\alpha,\bar{E}^\beta]\}$.
If the sets $\{\bar{E}^\alpha\}$, $\{\bar{H}^i\}$ satisfy:
\begin{eqnarray}
[\bar{H}^i, \bar{E}^\alpha] = \alpha^i \bar{E}^\alpha + \delta^\alpha,
\end{eqnarray}
where $\alpha^i$ are the root coefficients of the Lie algebra $\mathcal{A}$ and it is understood that $\delta^{\alpha}$ contains no terms proportional to the generators $\bar{E}^{\alpha}$, we say $\{\bar{E}^\alpha\}, \{\bar{H}^i\}$ form a \emph{broken representation} of the Lie algebra $\mathcal{A}$.
Now consider a Hamiltonian consisting of a linear combination of the diagonal generators $\{\bar{H}^i\}$ rotated to some other basis:
\begin{eqnarray}
H = \sum_n a_n U^{\dagger} \bar{H}^n U,
\label{eq:Hamiltonian_lin_comb}
\end{eqnarray}
where $U$ is an arbitrary unitary rotation. Consider quenching from a
simultaneous eigenstate $\vert \psi_0 \rangle$ of the operators
$\{\bar{H}^i\}$. Construct an approximate basis for the broken
representation by repeated application of the raising operators
$\bar{E}^\alpha$ on $\vert \psi_0 \rangle$. If the algebra were exact, the
Hamiltonian would fracture into the block diagonal form $H = H_{\mathrm{rep \,\,
basis}} \bigoplus H_{\bot}$ and there would exist several dynamical symmetries of $H_{\mathrm{rep \,\, basis}}$, corresponding to the rotated ladder operators, $Q_{\alpha} = U^{\dagger} E^{\alpha} U$.
For a broken Lie algebra, these relations become approximate, thus Hamiltonians of the form of Eq.~(\ref{eq:Hamiltonian_lin_comb}) will contain an approximate dynamical symmetry within a loosely embedded integrable subspace.
The resulting dynamics can then resemble a quench in the related exact system, $H(\bar{H}^i,\bar{E}^{\alpha}) \rightarrow H(H^i,E^{\alpha})$, with additional decoherence. For example, if the embedded algebra were $\mathrm{su(2)}$, the wavefunction may revive with a single frequency provided the following conditions are met:
\begin{enumerate}
\item The variance of the approximate basis with respect to $\bar{H}^i$ is sufficiently small.
\item The spacing of expectation values with respect to $\bar{H}^i$ after applications of $\bar{E^\alpha}$ to $\vert \psi_0 \rangle$ approximately obeys the root structure of the desired Lie algebra, i.e.,
\begin{eqnarray}
\frac{\langle \phi \vert \bar{H}^i \vert \phi \rangle}{\langle \phi \vert \phi \rangle} &\approx& \lambda_i + \alpha^i,
\end{eqnarray}
where $ \bar{H}^i \vert \psi_0 \rangle = \lambda_i \vert \psi_0 \rangle$ and
$ \vert \phi \rangle = \bar{E}^\alpha \vert \psi_0 \rangle$.
\item Repeated application of $\bar{E}^\alpha$ on $\vert \psi_0 \rangle$ will terminate after a finite number of steps, thus generating a subspace of the full Hilbert space. In general, this subspace does not correspond to an exact symmetry sector of the Hamiltonian. To see signatures of the exact Lie algebra, this subspace must be sufficiently disconnected from the orthogonal space under the action of the Hamiltonian.
\end{enumerate}
\subsection{Iterative corrections to broken Lie algebras: Identifying perturbations that stabilize revivals}
By perturbing the operators $\bar{E}^\alpha$ with terms that appear in the error $\delta^\alpha$, it is possible to improve the broken Lie algebra, in the sense that decoherence in the quench described in Sec.~\ref{sec:scheme} is reduced.
Consider some broken representation of a Lie algebra:
\begin{eqnarray}
[\bar{H}^i, \bar{E}^\alpha] = \alpha^i \bar{E}^{\alpha} + \delta^\alpha, \quad
\delta^\alpha = \sum_n a_n V_n^\alpha,
\end{eqnarray}
where the error $\delta^\alpha$ has been decomposed into terms sharing the same coefficient $a_n$. Now perturb the raising/lowering operators as follows:
\begin{eqnarray}
\bar{E}^\alpha_{(1)} = \bar{E}^\alpha + \sum_n c_n V_n^\alpha.
\end{eqnarray}
This in turn defines new $\bar{H}^i_{(1)} = \bar{H}^i + H^i_{\mathrm{perts}}$, following the same definition of $H^i$ in Eq.~(\ref{eq:Hi_def}). It follows:
\begin{align}
[ \bar{H}^i_{(1)}, \bar{E}^\alpha_{(1)}] &= \alpha^i \bar{E}^\alpha + \sum_m f_m(c_0,...,c_N) V_m^\alpha + \delta^\alpha_{(2)}, \\
\delta^\alpha_{(2)} &= \sum_n g_n(c_0,...,c_N) V^\alpha_{(2)n},
\label{eq:polynomial}
\end{align}
where $f_m(c_0,...,c_N)$, $g_n(c_0,...,c_N)$ are polynomials in the perturbation coefficients and ${V^\alpha}_{(2)n}$ are second order error terms. If the coefficients $c_n$ can be optimized to satisfy
\begin{eqnarray}
[\bar{H^i}_{(1)}, \bar{E}^\alpha_{(1)}] \approx \alpha^i \bar{E}^\alpha_{(1)} + \delta^\alpha_{(2)},
\end{eqnarray}
such that decoherence in the previously described quench is reduced, we say that the broken representation has been improved. This can lead to decreased variance of $\{H^i\}$ and/or improved spacing of $\langle H^i \rangle$ with respect to the approximate basis of the broken representation, and may also result in the approximate basis becoming more disconnected from the orthogonal subspace under the action of the perturbed Hamiltonian [Eq.~(\ref{eq:Hamiltonian_lin_comb}), with $H(H^i, E^{\alpha}) \rightarrow H(H^i_{(1)}, E^{\alpha}_{(1)})$]. Further, if the representation improves, we expect the magnitude of the error terms, as measured by the Frobenius norm, to decrease, $\vert \vert \delta^\alpha_{(2)} \vert \vert_F < \vert \vert \delta^\alpha \vert \vert_F$. Fig.~\ref{fig:algebraCorrection} schematically shows this process of identifying corrections to the algebra. We will demonstrate that this procedure results in many-body scarred models with long-lived coherent dynamics in the subsequent sections.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{./brokenErrorFeedback.pdf}
\caption{Schematic illustration of our iterative scheme which identifies corrections to broken Lie algebras, specifically an $\mathrm{su(2)}$ Lie algebra in this case. The optimization of $\lambda_n$ is with respect to the error measures described in the text, such as maximizing the first fidelity peak $|\langle H^z, LW \vert e^{-iHt} \vert H^z, LW \rangle|^2$ or minimizing the subspace variance of $H$ with respect to the $\mathrm{su(2)}$ basis defined in Eq.~(\ref{eq:lw}).
}
\label{fig:algebraCorrection}
\end{figure}
Before illustrating this approach with examples of a broken $\mathrm{su(2)}$ Lie algebra, we briefly discuss ways of quantifying how much the approximate $\mathrm{su(2)}$ Lie algebra representation differs from an exact representation. As a possible error measure, we consider $\mathrm{max} \; \mathrm{var}(\bar{H}^z)_n$ with respect to the approximate basis, where $\mathrm{var}(\bar{H}^z)_n$ is the variance of $\bar{H}^z$ in the basis state $\vert n \rangle$, defined as
\begin{eqnarray}\label{eq:lw}
\vert n \rangle = \frac{1}{\sqrt{\mathcal{N}}} (\bar{H}^+)^n \vert \mathrm{LW} \rangle,
\end{eqnarray}
with $| \mathrm{LW}\rangle$ being the lowest weight state of the $\mathrm{su(2)}$ Lie algebra representation, $\vert S,-S \rangle$. This state obeys $\bar{H}^- \vert \mathrm{LW} \rangle = 0$, or equivalently, it is the ground state of $\bar{H}^z$.
If the revivals are due to an $\mathrm{su}(2)$ algebra, we expect the corresponding basis states should have harmonic (equal) energy spacing. To quantify the deviation from harmonic spacing we introduce the quantity $K$:
\begin{eqnarray}
K = \vert \vert M \vert \vert_F, \quad M_{nm} = \vert \Delta E_n - \Delta E_m \vert,
\end{eqnarray}
which represents the Frobenius norm of the matrix of level spacings. The latter are given by
\begin{eqnarray}
\Delta E_n = \langle \bar{H}^z \rangle_{n+1} - \langle \bar{H}^z \rangle_n, \quad \langle \bar{H}^z \rangle_n = \langle n \vert \bar{H}^z \vert n \rangle.
\end{eqnarray}
To quantify how disconnected the subspace spanned by $\vert n \rangle$ is from its orthogonal subspace under the action of the Hamiltonian, we use the subspace variance $\sigma$:
\begin{eqnarray}
\sigma = \mathrm{tr}\Big( (U_{\mathrm{rep}}^{\dagger} H^2 U_{\mathrm{rep}}) - ( U_{\mathrm{rep}}^{\dagger} H U_{\mathrm{rep}} )^2 \Big),
\end{eqnarray}
where $U_{\mathrm{rep}}$ is the isometry whose columns are the orthonormalized basis states of the broken representation, i.e., it projects onto the representation subspace. This quantity can be interpreted as being proportional to the Frobenius norm of the block labelled ``couplings'' in Fig.~\ref{fig:scarMechs}(c).
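To make these diagnostics concrete, the sketch below (our own small-$N$ illustration, anticipating the PXP example of the next section) builds the Krylov-type basis from a N\'eel-type state and evaluates the $\langle \bar{H}^z \rangle_n$ spacings and the subspace variance $\sigma$; the sublattice assignment is a choice of ours, made so that $\bar{H}^-$ annihilates the initial state.

```python
import numpy as np
from itertools import product

N = 8
# Constrained PXP basis (periodic chain, no adjacent excitations).
basis = [s for s in product((0, 1), repeat=N)
         if all(not (s[i] and s[(i + 1) % N]) for i in range(N))]
index = {s: k for k, s in enumerate(basis)}
D = len(basis)

def matrix(term):
    """sum_n P_{n-1} sigma^{term(n)}_n P_{n+1} in the constrained basis;
    term(n) returns '+' (flip 0 -> 1) or '-' (flip 1 -> 0)."""
    M = np.zeros((D, D))
    for k, s in enumerate(basis):
        for n in range(N):
            if s[(n - 1) % N] or s[(n + 1) % N]:
                continue                       # projectors annihilate
            kind = term(n)
            if (kind == '+' and s[n] == 0) or (kind == '-' and s[n] == 1):
                t = list(s)
                t[n] = 1 - t[n]
                M[index[tuple(t)], k] += 1.0
    return M

# Raising operator: de-excite the even sublattice and excite the odd one,
# so that H^- annihilates the Neel state with excitations on even sites.
neel = tuple((n + 1) % 2 for n in range(N))
Hp = matrix(lambda n: '-' if n % 2 == 0 else '+')
Hm = Hp.T
H = Hp + Hm                                # the PXP Hamiltonian
Hz = 0.5 * (Hp @ Hm - Hm @ Hp)             # broken Cartan generator

# Krylov-type basis |n> ~ (H^+)^n |neel>, of dimension N + 1.
v = np.zeros(D)
v[index[neel]] = 1.0
fsa = [v]
for _ in range(N):
    w = Hp @ fsa[-1]
    fsa.append(w / np.linalg.norm(w))
U = np.array(fsa).T                        # D x (N+1) isometry

# Diagnostics: <H^z>_n spacings and subspace variance of H.
Ez = np.array([u @ Hz @ u for u in fsa])
P_H = U.T @ H @ U
sigma = np.trace(U.T @ H @ H @ U - P_H @ P_H)
```

For $N=8$ one finds $\langle \bar{H}^z \rangle_n = \pm N/4$ at the two ends of the ladder, while the interior spacings deviate from the ideal root value of $1$, and $\sigma > 0$, quantifying how strongly the subspace leaks into the thermal bulk.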
\section{Example: PXP model and embedded $\mathrm{su(2)}$ algebra }\label{sec:pxpz2}
We now exemplify our general embedding scheme outlined in Sec.~\ref{sec:loose} by using the PXP model~\cite{Lesanovsky2012,Bernien2017}. We demonstrate how to identify and improve the broken $\mathrm{su(2)}$ algebra associated with $\mathbb{Z}_2$ revivals.
\subsection{$\mathbb{Z}_2$ revivals and $\mathrm{su}(2)$ algebra}
First we focus on the well-known case of $\mathbb{Z}_2$ revivals in the PXP model~\cite{Choi2018}. Define the $\mathrm{su}(2)$ spin raising operator
\begin{eqnarray}
\bar{H}^+ \equiv \sum_n \left( \tilde{\sigma}^+_{2n} + \tilde{\sigma}^-_{2n-1} \right),
\end{eqnarray}
where we have introduced the shorthand notation
\begin{eqnarray}\label{eq:sigmatilde}
\tilde{\sigma}^{\alpha}_n \equiv P_{n-1} \sigma^\alpha_n P_{n+1}.
\end{eqnarray}
We have $H_{\mathrm{PXP}} = \bar{H}^+ + \bar{H}^-$, such that $H_{\mathrm{PXP}} = \bar{H}^x$ can be interpreted as an element of an $\mathrm{su(2)}$ algebra.
From the commutation rules of $\mathrm{su}(2)$ algebra, the diagonal element is given by (half) the commutator (note the minus sign)
\begin{eqnarray}
\bar{H}^z \equiv \frac{1}{2} [\bar{H}^+,\bar{H}^-] = \frac{1}{2} \sum_n \left( \tilde{\sigma}^z_{2n} - \tilde{\sigma}^z_{2n-1} \right).
\end{eqnarray}
The reason for this choice of $\bar{H}^{+/-}$ is that the lowest weight state of $\bar{H}^z$ is the N\'eel state, $\vert 0101...\rangle$.
We seek a representation for which $|\mathbb{Z}_2\rangle$ is the lowest weight state of $\bar{H}^z$ as, for an exact algebra, the lowest/highest weight states of $\bar{H}^z$ are also simultaneously eigenstates of the Casimir operator, such that repeated application of $\bar{H}^+$ on the lowest weight state would generate an $\mathrm{su(2)}$ subspace. To be explicit, consider the exact algebra $H^+ = \sum_n \sigma_n^+$, $H^- = (H^+)^{\dagger}$, $H^z = \frac{1}{2}\sum_n \sigma_n^z$. Of the eigenstates of $H^z$, only repeated application of $H^+$ on the lowest weight state $\vert 000...\rangle = \vert S=N/2,S_z=-N/2 \rangle$ would generate an $\mathrm{su(2)}$ subspace. Superpositions of states with an equal number of singlets must be taken as the root states from which repeated application of $H^+$ generates further $\mathrm{su(2)}$ sectors.
It further follows:
\begin{eqnarray}
\left[ \bar{H}^z,\bar{H}^{+} \right] &=& \bar{H}^{+} + \delta^{+}_{(1)}, \label{eq:delta1p} \\
\left[ \bar{H}^z, \bar{H}^{-} \right]&=& -\bar{H}^{-} + \delta^{-}_{(1)}, \label{eq:delta1m}
\end{eqnarray}
where the error terms that break the algebra are
\begin{eqnarray}
\nonumber \delta^+_{(1)} &=& -\frac{1}{2} (PP\sigma^+_{2n}P + P\sigma^+_{2n}PP \\
&+& P \sigma^-_{2n+1}PP + PP \sigma^-_{2n+1}P), \label{eq:z2FirstOrder_raising} \\
\nonumber \delta^-_{(1)} &=& \frac{1}{2} (PP\sigma^-_{2n}P + P\sigma^-_{2n}PP \\
&+& P \sigma^+_{2n+1}PP + PP \sigma^+_{2n+1}P).
\end{eqnarray}
For brevity, we have suppressed a summation over the lattice sites in the definition of $\delta_{(1)}^{+/-}$, and terms like $PP\sigma_{2n}^+P$ stand for $\sum_n P_{2n-2}P_{2n-1}\sigma_{2n}^+P_{2n+1}$ (i.e., strings of $P$'s act on consecutive neighboring sites).
From the expressions in Eqs.~(\ref{eq:delta1p})-(\ref{eq:delta1m}), we see that $\{ \bar{H}^z,\bar{H}^+,\bar{H}^- \}$ form a broken representation of $\mathrm{su}(2)$. In this language, the forward scattering approximation (FSA)~\cite{Turner2017} is rephrased as projecting the Hamiltonian $H$ to the broken representation basis in Eq.~(\ref{eq:lw}), with $|\mathrm{LW}\rangle \equiv |\mathbb{Z}_2\rangle$,
and diagonalizing. This procedure gives very accurate approximations to the special eigenstates of the full PXP model -- see red crosses in Fig.~\ref{fig:z2pxpHalfSumm} (a), (b), (c), (e).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./pxp_z2_summary.pdf}
\caption{$\mathbb{Z}_2$ revival in PXP model. (a) Eigenstate overlap with the N\'eel $|\mathbb{Z}_2\rangle$ state. (b) Eigenstate overlap after including the first order $\mathrm{su}(2)$ correction (Eq.~\ref{eq:pxppert1}). (c) Eigenstate overlap after including the second order $\mathrm{su}(2)$ correction (Eq.~\ref{eq:z2_2ndOrder_1_1}-\ref{eq:z2_2ndOrder_1_N}). (d) Quantum fidelity in $\mathbb{Z}_2$ quench, with and without perturbations. Perturbation coefficients are those that maximize the first fidelity revival peak. (e) Bipartite entropy, Eq.~(\ref{eq:entropy}), of the eigenstates of PXP model after including second order $\mathbb{Z}_2$ $\mathrm{su(2)}$ corrections. The states labelled ``Exact Scars" are exact diagonalization results identified from the top band of states in (c). Red crosses in (a), (b), (c), (e) indicate approximate scar states obtained by projecting the Hamiltonian to the broken $\mathrm{su}(2)$ basis and diagonalizing. Color scale in (a), (b), (c), (e) indicates the density of data points, with lighter regions being more dense.}
\label{fig:z2pxpHalfSumm}
\end{figure}
\begin{table}[t]
\begin{tabular}{|c|c|c|c|c|}
\hline
Order & $1-f_0$ & $\sigma/D_{\mathrm{su(2)}}$ & $\mathrm{max}(\mathrm{var}(\bar{H}^z)_n)$ & $K$ \\
\hline
$n=0$ & $2.853 {\times} 10^{-1}$ & $1.116 {\times} 10^{-1}$ & $2.711 {\times} 10^{-1}$ & $9.310 {\times} 10^{0}$ \\
\hline
$n=1$ & $6.760 {\times} 10^{-4}$ & $2.190 {\times} 10^{-4}$ & $9.694 {\times} 10^{-4}$ & $6.008 {\times} 10^{-1}$ \\
\hline
$n=2$ & $3.113 {\times} 10^{-6}$ & $3.303 {\times} 10^{-6}$ & $2.355 {\times} 10^{-5}$ & $8.090 {\times} 10^{-2}$ \\
\hline
\end{tabular}
\caption{Error metrics for the $\mathbb{Z}_2$ $\mathrm{su(2)}$ subspace of the PXP model at various perturbation orders for $N=24$. Subspace variance $\sigma$ is normalized by the dimension of the $\mathrm{su(2)}$ representation, $N+1$. See text for details of the perturbations.}
\label{tab:pxpz2}
\end{table}
Next, we continue our program and identify a perturbation which can potentially improve the $\mathrm{su}(2)$ representation. First, define $\bar{H}^\pm_{(1)} = \bar{H}^\pm \mp \lambda \delta^\pm_{(1)}$; the relative sign ensures Hermiticity of the perturbed Hamiltonian, since $\delta^-_{(1)} = -(\delta^+_{(1)})^{\dagger}$. This gives us
\begin{eqnarray}
\nonumber H_{(1)} &=& H + \lambda(\delta^-_{(1)} - \delta^+_{(1)}) \\
&=& \sum_n \left[ P \sigma_n^x P + \lambda (P \sigma_n^x PP + PP \sigma_n^x P) \right],
\label{eq:pxppert1}
\end{eqnarray}
where a factor of $1/2$ originating from the definition of $\delta^\pm_{(1)}$ has been absorbed into $\lambda$.
In order to find the optimal perturbation strength $\lambda$, we maximize the first fidelity revival as a function of $\lambda$,
\begin{eqnarray}
f_0(\lambda) = f(\lambda, t_0) = \vert \langle \psi(0) \vert e^{-iH(\lambda)t_0} \vert \psi(0) \rangle \vert^2,
\end{eqnarray}
where $t_0$ is the time at which the first revival occurs. Note that $t_0$ is $\lambda$-dependent. Throughout this paper, the optimization was carried out using the Python SciPy routine that employs the ``Sequential Least Squares Programming'' (SLSQP) method. After optimization, we recover the perturbation that was previously found empirically~\cite{Khemani2018} to enhance the revivals following a $\vert \mathbb{Z}_2 \rangle$ quench, with maximal $f_0$ when $\lambda = 0.108$ (at system size $N=18$). It was previously demonstrated that the PXP model remains non-integrable after including this perturbation~\cite{Choi2018}. Note that the first order perturbation improves all error metrics of the broken representation, see Table~\ref{tab:pxpz2}.
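The effect of the first order correction is straightforward to reproduce numerically; a small-$N$ sketch of our own comparing the first revival peak of $H_{\mathrm{PXP}}$ with that of the perturbed model of Eq.~(\ref{eq:pxppert1}) at $\lambda = 0.108$ (a simple grid search over $t$ is used here in place of SLSQP):

```python
import numpy as np
from itertools import product

N = 12
basis = [s for s in product((0, 1), repeat=N)
         if all(not (s[i] and s[(i + 1) % N]) for i in range(N))]
index = {s: k for k, s in enumerate(basis)}
D = len(basis)

def flip_term(neighbours):
    """sum_n (projectors at the given offsets) sigma^x_n, e.g.
    neighbours = [-1, 1] gives the bare PXP term."""
    M = np.zeros((D, D))
    for k, s in enumerate(basis):
        for n in range(N):
            if all(s[(n + d) % N] == 0 for d in neighbours):
                t = list(s)
                t[n] = 1 - t[n]
                M[index[tuple(t)], k] += 1.0
    return M

H_pxp = flip_term([-1, 1])                             # P X P
dH = flip_term([-2, -1, 1]) + flip_term([-1, 1, 2])    # PPXP + PXPP

def first_revival(lam):
    """Maximum Z2 return probability within the first revival window."""
    E, V = np.linalg.eigh(H_pxp + lam * dH)
    psi0 = np.zeros(D)
    psi0[index[tuple(n % 2 for n in range(N))]] = 1.0
    c2 = (V.T @ psi0) ** 2
    ts = np.linspace(3.0, 7.0, 2000)
    return (np.abs(np.exp(-1j * np.outer(ts, E)) @ c2) ** 2).max()
```

The window $t \in (3,7)$ brackets the first revival on small chains; the perturbed model's peak fidelity is dramatically closer to unity than the bare PXP value.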
Second order perturbations can be obtained in a similar fashion, although the algebraic manipulations become very laborious to perform by hand. Our analytical results have been tested against custom-designed software for symbolic computations of the nested commutators involving projectors~\footnote{K. Bull (\href{https://github.com/Cable273/comP}{https://github.com/Cable273/comP}).}. Fig.~\ref{fig:z2pxpHalfSumm} summarizes the differences between models after including first and second order perturbations. We find the scarred eigenstates become increasingly decoupled from the thermal bulk and can also be characterized by their anomalously low bipartite entanglement entropy $S$, defined in the usual way
\begin{eqnarray}
S &=& -\mathrm{Tr}(\rho_A \ln \rho_A), \label{eq:entropy}
\end{eqnarray}
in terms of the reduced density matrix $\rho_A = \mathrm{Tr}_{B} |\psi\rangle \langle \psi|$, obtained via partial trace over the subsystem $B$ for some bipartition of the total system into two halves, $A$ and $B$, in the computational basis.
Restricting to terms with only a single spin flip, we identify the following second order error terms $\delta^+_{(2)}$:
\begin{eqnarray}
\nonumber \delta^+_{(2),1} &=& P\sigma^z P \sigma_{2n}^+P + P \sigma_{2n}^+ P \sigma^z P
\label{eq:z2_2ndOrder_1_1} \\
&+& P \sigma^z P \sigma_{2n+1}^- P + P \sigma_{2n+1}^- P \sigma^z P, \\
\nonumber \delta^+_{(2),2} &=& P \sigma_{2n}^+ PPP + PPP \sigma_{2n}^+ P \\
&+& P \sigma_{2n+1}^- PPP + PPP \sigma_{2n+1}^- P, \\
\delta^+_{(2),3} &=& PP \sigma_{2n}^+ PP + PP \sigma_{2n+1}^- PP, \\
\nonumber \delta^+_{(2),4} &=& PP \sigma_{2n}^+ P \sigma^z P + P \sigma^z P \sigma_{2n}^+ PP \\
&+& PP \sigma_{2n+1}^- P \sigma^z P + P \sigma^z P \sigma_{2n+1}^- PP, \\
\nonumber \delta^+_{(2),5} &=& PPP \sigma_{2n}^+ PP + PP \sigma_{2n}^+ PPP \\
&+& PPP \sigma_{2n+1}^- PP + PP \sigma_{2n+1}^- PPP, \\
\nonumber \delta^+_{(2),6} &=& P \sigma_{2n}^+ P \sigma^z PP + PP \sigma^z P \sigma_{2n}^+ P \\
&+& PP \sigma^z P \sigma_{2n+1}^- P + P \sigma_{2n+1}^- P \sigma^z PP, \\
\nonumber \delta^+_{(2),7} &=& PPPP \sigma_{2n}^+ P + P \sigma_{2n}^+ PPPP \\
&+& PPPP \sigma_{2n+1}^- P + P \sigma_{2n+1}^- PPPP , \\
\nonumber \delta^+_{(2),8} &=& PP \sigma_{2n}^+ P \sigma^z PP + PP \sigma^z P \sigma_{2n}^+ PP \\
&+& PP \sigma_{2n+1}^- P \sigma^z PP + PP \sigma^z P \sigma_{2n+1}^- PP. \label{eq:z2_2ndOrder_1_N}
\label{eq:z2_2ndOrder_2}
\end{eqnarray}
Putting these terms together, we obtain the second order perturbations,
$\bar{H}^+_{(2)} = \bar{H}^+ + \lambda_0 \delta^+_{(1)} + \sum_{i=1}^8 \lambda_i \delta^+_{(2),i}$ and $\bar{H}^-_{(2)} = \bar{H}^- + \lambda_0 \delta^-_{(1)} + \sum_{i=1}^8 \lambda_i \delta^-_{(2),i}$, which in turn define $H_{(2)} = \bar{H}^+_{(2)} + \bar{H}^-_{(2)}$. Coefficients optimizing fidelity were found to be:
\begin{eqnarray}
\nonumber \lambda_i^* &=& [0.11135,\, 0.000217,\, -0.000287,\, -0.00717, \\
&& 0.00827,\, 0.00336,\, 0.00429,\, 0.0103,\, 0.00118],
\end{eqnarray}
where the first value is the optimal coefficient for the first order term Eq.~(\ref{eq:z2FirstOrder_raising}), while the remaining coefficients correspond to the terms in order of appearance in Eqs.~(\ref{eq:z2_2ndOrder_1_1})-(\ref{eq:z2_2ndOrder_1_N}).
These values have been found via numerical optimization at system size $N=16$. Note that previous work in Ref.~\onlinecite{Choi2018} only considered $PXPIP+PIPXP$ as a second order perturbation to $H_{\mathrm{PXP}}$. By including all spin flip terms obtained from the Lie algebra error, the fidelity can be enhanced to $1-f_0 \approx O(10^{-6})$, while if we only retain $PXPIP+PIPXP$ we obtain an infidelity that is a few orders of magnitude higher, $1-f_0 \approx O(10^{-3})$ (data for $N=16$). In Ref.~\onlinecite{Choi2018} a fidelity of order $1-f_0 \approx O(10^{-6})$ was found by including only terms $P_{n-1}X_nP_{n+1}P_{n+d}+P_{n-d}P_{n-1}X_nP_{n+1}$ up to high order $d \leq 10$, which are expected to arise as corrections in higher orders of our method. While these terms alone appear sufficient to reach very high fidelity values, our analysis suggests that, strictly speaking, these terms do not fully fix the $\mathrm{su(2)}$ algebra.
The decomposition of $H_{\mathrm{PXP}}=\bar{H}^++\bar{H}^-$ used to identify the broken $\mathrm{su(2)}$ algebra associated with $\mathbb{Z}_2$ revivals is not unique. In the following Sections, we discuss further decompositions leading to additional $\mathrm{su(2)}$ representations which can be enhanced to fix revivals from $\vert \mathbb{Z}_3 \rangle$ and $\vert \mathbb{Z}_4 \rangle$ initial states.
\section{$\mathbb{Z}_3$ revivals from $\mathrm{su}(2)$ algebra}
\label{sec:pxpz3}
In addition to $\mathbb{Z}_2$ revivals, the PXP model was also shown numerically to exhibit wave function revivals following a quench from the $\vert \mathbb{Z}_3 \rangle = \vert 100100...\rangle$ state~\cite{Turner2017, TurnerPRB}. (Somewhat more robust revivals are in fact seen from a weakly-entangled initial state ``close" to $|\mathbb{Z}_3\rangle$~\cite{Michailidis2019}.) Unlike the $\mathbb{Z}_2$ case, the revivals from $\mathbb{Z}_3$ decay sharply even in numerical simulations on fairly small systems~\cite{TurnerPRB}, suggesting the model is even further away from any exact Lie algebra representation furnished by the $|\mathbb{Z}_3\rangle$ state.
The $\mathbb{Z}_3$ revivals originate from $2N/3+1$ scarred eigenstates with enhanced support on the $\vert \mathbb{Z}_3 \rangle$ state. We stress that out of these $2N/3+1$ scarred eigenstates, only two coincide with the $N+1$ scarred eigenstates with enhanced support on $\mathbb{Z}_2$, namely the ground state and the most excited eigenstate of the model. Thus, we interpret the $\mathbb{Z}_3$ scarred subspace as a loosely embedded $\mathrm{su(2)}$ subspace distinct from the $\mathbb{Z}_2$ scarred subspace. There has been no FSA method to describe the $2N/3+1$ $\mathbb{Z}_3$ scar states and, consequently, the perturbations that improve the $\mathbb{Z}_3$ revival are not known. Here we demonstrate that it is possible to deform the PXP model to stabilise a \emph{different} $\mathrm{su}(2)$ algebra representation compared to the $\mathbb{Z}_2$ case, which results in robust $\mathbb{Z}_3$ revivals.
We follow our general approach and start by introducing raising and lowering operators compatible with $|\mathbb{Z}_3\rangle$ state:
\begin{eqnarray}
\bar{H}^+ &=& \sum_n \left( \tilde{\sigma}^-_{3n} + \tilde{\sigma}^+_{3n+1} + \tilde{\sigma}^+_{3n+2} \right), \label{eq:z3raising} \\
\bar{H}^- &=& \sum_n \left( \tilde{\sigma}^+_{3n} + \tilde{\sigma}^-_{3n+1} + \tilde{\sigma}^-_{3n+2}\right),
\end{eqnarray}
where, as before, we have $H_{\mathrm{PXP}} = \bar{H}^+ + \bar{H}^-$. The $\mathrm{su}(2)$ diagonal generator is then given by $\bar{H}^z = \frac{1}{2} [\bar{H}^+,\bar{H}^-]$,
which can be shown to take the form
\begin{eqnarray}
\nonumber \bar{H}^z &=& \sum_n -\tilde{\sigma}^z_{3n} + \tilde{\sigma}^z_{3n+1} + \tilde{\sigma}^z_{3n+2} \\
\nonumber &+& \frac{1}{2}\sum_n \Big( P_{3n} \sigma_{3n+1}^+ \sigma_{3n+2}^- P_{3n+3} \\
&+& P_{3n} \sigma_{3n+1}^- \sigma_{3n+2}^+ P_{3n+3} \Big).
\end{eqnarray}
The lowest weight state of $\bar{H}^z$ is $\vert \mathbb{Z}_3 \rangle$, as it should be, although it is degenerate. The first order perturbation will lift this degeneracy such that $\vert \mathbb{Z}_3 \rangle$ is the unique ground state of $\bar{H}^z_{(1)}$. We find that $\bar{H}^z, \bar{H}^+, \bar{H}^-$ obey the commutation relations:
\begin{eqnarray}
[\bar{H}^z,\bar{H}^{+}] &=& \bar{H}^+ + \delta_{(1)}^+, \\
\nonumber \delta_{(1)}^+ &=& -\frac{1}{2}\sum_n \Big ( P_{3n-1}P_{3n} \sigma_{3n+1}^+ P_{3n+2} \\
\nonumber &+& P_{3n-2} \sigma_{3n-1}^+ P_{3n} P_{3n+1}+ P_{3n-1} \sigma^-_{3n}P_{3n+1}P_{3n+2} \\
\nonumber &+& P_{3n+1} P_{3n+2} \sigma_{3n+3}^- P_{3n+4} \Big ) \\
\nonumber
&+& \frac{1}{2} \sum_n \Big ( P_{3n-1} \sigma_{3n}^- \sigma_{3n+1}^+ \sigma_{3n+2}^- P_{3n+3} \\
\nonumber &+& P_{3n} \sigma_{3n+1}^- \sigma_{3n+2}^+ \sigma_{3n+3}^- P_{3n+4} \Big )\\
\nonumber &+& \sum_n \Big ( P_{3n} \sigma^+_{3n+1} P_{3n+2} P_{3n+3} \\
&+& P_{3n} P_{3n+1} \sigma^+_{3n+2} P_{3n+3} \Big ).
\end{eqnarray}
Similarly, we find $[\bar{H}^z,\bar{H}^-] = - \bar{H}^- + \delta^-_{(1)}$, such that $\{\bar{H}^z,\bar{H}^+,\bar{H}^-\}$ form a broken representation of $\mathrm{su}(2)$. We identify the following first order perturbations to the PXP model which improve the representation:
\begin{eqnarray}
\nonumber V_1 &=& \sum_n \Big(P_{3n-2} \sigma_{3n-1}^x P_{3n} P_{3n+1} + P_{3n-1} P_{3n} \sigma_{3n+1}^x P_{3n+2} \\
&+& P_{3n-1} \sigma_{3n}^x P_{3n+1} P_{3n+2} + P_{3n-2} P_{3n-1} \sigma_{3n}^x P_{3n+1} \Big), \label{eq:z3_1} \\
\nonumber V_2 &=& \sum_n \Big(P_{3n} P_{3n+1} \sigma_{3n+2}^x P_{3n+3}
+ P_{3n} \sigma_{3n+1}^x P_{3n+2} P_{3n+3} \Big), \\ && \label{eq:z3_2} \\
\nonumber V_3 &=& \sum_n \Big(P_{3n} \sigma_{3n+1}^x \sigma_{3n+2}^x \sigma_{3n+3}^x P_{3n+4} \\
&+& P_{3n-1} \sigma_{3n}^x \sigma_{3n+1}^x \sigma_{3n+2}^x P_{3n+3} \Big).
\label{eq:z3_3}
\end{eqnarray}
We emphasize that perturbations that improve $\mathbb{Z}_3$ revival, even at first order, break the full translation symmetry of the model to a subgroup of translations by a unit cell of size 3. This is different from $\mathbb{Z}_2$ revivals where the first-order corrections respect the full translation symmetry of the chain. We next discuss two interesting limits, corresponding to weak and strong magnitude of these perturbations.
\subsection{Weak limit}\label{sec:z3weak}
By numerical optimization of the revival amplitude under perturbations in Eqs.~(\ref{eq:z3_1}), (\ref{eq:z3_2}) and (\ref{eq:z3_3}), bounding coefficients to satisfy $\vert \lambda_i \vert <0.5$, we find that revivals from $\vert \mathbb{Z}_3 \rangle$ can be enhanced with optimal perturbation coefficients
\begin{eqnarray}
\lambda^* = [0.18244,-0.10390,0.05445].
\end{eqnarray}
Similar to the $\vert \mathbb{Z}_2 \rangle$ revival, we can find second order perturbations which improve revivals further (see Appendix~\ref{appendix:Z3_Perts} for the terms and optimal coefficients). A summary of the effect of successive perturbations on $\vert \mathbb{Z}_3 \rangle$ is given in Fig.~\ref{fig:z3PxpSpinHalf}, while error metrics at various orders are given in Table~\ref{tab:pxpz3}. Despite long-lived coherent oscillations when the system is initialized in the $|\mathbb{Z}_3\rangle$ state, we verify that the model including second order perturbations is still ergodic by computing the mean level spacing ratio~\cite{OganesyanHuse} $\langle r \rangle = 0.5256$ at $N=24$, consistent with the Wigner-Dyson distribution one would expect in an ergodic system.
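The quantity $\langle r \rangle$ quoted here is the mean ratio of consecutive level spacings~\cite{OganesyanHuse}. As a minimal, self-contained illustration (using synthetic spectra rather than the PXP spectra, for which one must first resolve all symmetry sectors), the reference values for ergodic (GOE) and uncorrelated (Poisson) spectra can be reproduced as follows:

```python
import numpy as np

def mean_r(evals):
    """Mean adjacent-gap ratio <r> = <min(s_n, s_{n+1}) / max(s_n, s_{n+1})>."""
    s = np.diff(np.sort(evals))
    s = s[s > 1e-12]                       # discard exact degeneracies
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(0)

# GOE random matrix: <r> ~ 0.5307 (ergodic, Wigner-Dyson statistics)
A = rng.normal(size=(1500, 1500))
r_goe = mean_r(np.linalg.eigvalsh((A + A.T) / np.sqrt(2)))

# Uncorrelated (Poisson) spectrum: <r> ~ 0.3863 (integrable/localized)
r_poisson = mean_r(np.cumsum(rng.exponential(size=4000)))

print(r_goe, r_poisson)
```

Values near $0.53$, as found for the deformed models, are consistent with the GOE reference, while integrable or localized spectra give $\langle r \rangle \approx 0.39$.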
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./pxp_z3_summ.pdf}
\caption{Improving the $\mathbb{Z}_3$ revival in the PXP model. (a) Eigenstate overlap with $\vert \mathbb{Z}_3 \rangle$ state for PXP model. (b) Eigenstate overlap after including first order corrections in Eqs.~(\ref{eq:z3_1})-(\ref{eq:z3_3}). (c) Eigenstate overlap after including second order perturbations listed in Appendix~\ref{appendix:Z3_Perts}. (d) Quantum fidelity when the system is quenched from $\vert \mathbb{Z}_3 \rangle$ state at various perturbation orders. The perturbation coefficients are those which maximize the first fidelity revival peak. (e) Bipartite entropy (Eq.~\ref{eq:entropy}) of eigenstates of the PXP model after including second order $\mathbb{Z}_3$ $\mathrm{su(2)}$ corrections. Points labelled ``Exact Scars" are exact diagonalization results identified from the top band of states in (c). Red crosses in (a), (b), (c), (e) indicate approximations to the scar states obtained by projecting the Hamiltonian to the broken representation basis and diagonalizing. Color scale in (a), (b), (c), (e) indicates the density of data points, with lighter regions being more dense.}
\label{fig:z3PxpSpinHalf}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Order & $1-f_0$ & $\sigma/D_{\mathrm{su(2)}}$ & $\max(\mathrm{var}(H^z)_n)$ & $K$ \\
\hline
$n=0$ & $6.397 {\times} 10^{-1}$ & $3.358 {\times} 10^{-1}$ & $9.300 {\times} 10^{-1}$ & $1.234 {\times} 10^{1}$ \\
\hline
$n=1$ & $1.338 {\times} 10^{-2}$ & $3.349 {\times} 10^{-2}$ & $1.717 {\times} 10^{-1}$ & $4.957 {\times} 10^{0}$ \\
\hline
$n=2$ & $1.852 {\times} 10^{-5}$ & $7.082 {\times} 10^{-3}$ & $2.357 {\times} 10^{-2}$ & $2.124 {\times} 10^{0}$\\
\hline
\end{tabular}
\caption{Error metrics for the $\mathbb{Z}_3$ $\mathrm{su(2)}$ subspace of the PXP model at various perturbation orders for system size $N=24$. Subspace variance $\sigma$ is normalized by the dimension of the $\mathrm{su(2)}$ representation, $2N/3+1$. See text for details of the perturbations.}
\label{tab:pxpz3}
\end{table}
\subsection{Strong limit: exact dynamical symmetry}\label{section:pxpZ3Exact}
A curious feature of $\mathbb{Z}_3$ revivals is that the $\mathrm{su}(2)$ algebra can be made exact for the model
\begin{eqnarray}\label{eq:z3spec}
H = \sum_n \tilde{\sigma}_n^x - V_1,
\end{eqnarray}
which is the PXP model from which we have subtracted the $V_1$ perturbation defined previously in Eq.~(\ref{eq:z3_1}). As the strength of $V_1$ is of order unity, this model should not be called a ``perturbation" of the PXP model. For the model in Eq.~(\ref{eq:z3spec}), the raising operator is
\begin{eqnarray}
\nonumber \bar{H}^+ &=& \sum_n \Big((\mathbb{I} - (P_{3n-2} + P_{3n+2})) \bar{\sigma}_{3n}^- \\
&+& (\mathbb{I} - P_{3n-1}) \bar{\sigma}_{3n+1}^+
+ (\mathbb{I} - P_{3n+4}) \bar{\sigma}_{3n+2}^+ \Big ),
\end{eqnarray}
and, as before, $\bar{H}^- = (\bar{H}^+)^{\dagger}$, $\bar{H}^z = \frac{1}{2} [\bar{H}^+,\bar{H}^-]$, $H = \bar{H}^+ + \bar{H}^-$. By inspection, it is easy to see that the factors $(\mathbb{I}-P_{3n-1}), (\mathbb{I}-P_{3n+4})$ evaluate to zero when $\bar{H}^+$ is applied to $\vert \mathbb{Z}_3 \rangle = \vert 100100...\rangle$. Thus, the terms containing $\bar{\sigma}^+_{3n+1}, \bar{\sigma}^+_{3n+2}$ never generate a spin flip, and the spins pointing down at these sites remain frozen. It follows that the action of $\bar{H}^+$ on $\vert \mathbb{Z}_3\rangle$ is equivalent to:
\begin{equation}
(\bar{H}^+)^k \vert \mathbb{Z}_3 \rangle = \left( - \sum_n \tilde{\sigma}_{3n}^- \right)^k \vert \mathbb{Z}_3 \rangle,
\end{equation}
which implies that, within this subspace, the $\mathrm{su}(2)$ algebra is exact. The dynamics is then free precession of the spins located at positions $3n$ along the chain, $\vert 100100...\rangle \rightarrow \vert 000000...\rangle \rightarrow \vert 100100..\rangle \rightarrow...$. The model now possesses an exact dynamical symmetry within the $\mathrm{su(2)}$ subspace, namely
\begin{eqnarray}
\big[P_{\mathrm{su(2)}}^{\dagger}HP_{\mathrm{su(2)}},P_{\mathrm{su(2)}}^{\dagger}Q^+ P_{\mathrm{su(2)}}\big] &=& P_{\mathrm{su(2)}}^{\dagger}Q^+ P_{\mathrm{su(2)}},\label{eq:z3ExactDynamicalSym} \\
Q^+ = e^{-i \frac{\pi}{2} \bar{H}^y } \bar{H}^+ e^{i \frac{\pi}{2} \bar{H}^y}, \quad \bar{H}^y &=& \frac{1}{2i} (\bar{H}^+ - \bar{H}^-), \quad
\end{eqnarray}
where $P_{\mathrm{su(2)}}$ is the basis transformation which projects to the subspace spanned by the $\mathrm{su(2)}$ basis states $\vert n \rangle = (\bar{H}^+)^n \vert \mathbb{Z}_3 \rangle$.
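The perfect revival implied by the exact algebra can be confirmed directly in exact diagonalization. The following sketch is illustrative (it works in the full $2^N$ space rather than the constrained space): it builds $H$ of Eq.~(\ref{eq:z3spec}) for a periodic chain of $N=6$ sites and checks that the quench fidelity from $\vert \mathbb{Z}_3 \rangle$ equals $\cos^{2N/3}(t)$, reviving perfectly at $t=\pi$:

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
P = np.diag([1., 0.])                  # projector onto |0>
I2 = np.eye(2)

def term(N, sites):
    """Operator acting with the given (site, matrix) pairs (PBC), identity elsewhere."""
    d = {s % N: m for s, m in sites}
    return reduce(np.kron, [d.get(i, I2) for i in range(N)])

N = 6                                  # periodic chain; two Z3 unit cells
H = sum(term(N, [(n - 1, P), (n, sx), (n + 1, P)]) for n in range(N))   # PXP
for c in range(0, N, 3):               # subtract V_1 of Eq. (eq:z3_1), cell 3n = c
    H -= term(N, [(c - 2, P), (c - 1, sx), (c, P), (c + 1, P)])
    H -= term(N, [(c - 1, P), (c, P), (c + 1, sx), (c + 2, P)])
    H -= term(N, [(c - 1, P), (c, sx), (c + 1, P), (c + 2, P)])
    H -= term(N, [(c - 2, P), (c - 1, P), (c, sx), (c + 1, P)])

w, V = np.linalg.eigh(H)
z3 = np.zeros(2 ** N)
z3[int("100100", 2)] = 1.0             # |Z3>; site 0 is the leftmost (most significant) bit
c0 = V.T @ z3

def fidelity(t):
    return abs(c0 @ (np.exp(-1j * w * t) * c0)) ** 2

print(fidelity(np.pi))                 # exact revival: 1 up to numerical precision
```

The agreement with $\cos^{2N/3}(t)$ reflects the free precession of the $N/3$ spins at sites $3n$ described above.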
The Hamiltonian in Eq.~(\ref{eq:z3spec}) fractures the Hilbert space in the computational basis even further than the pure PXP model. We find the number of sectors grows exponentially with system size, in a similar fashion to fractonic systems~\cite{Pretko2019}. While one sector is the desired embedded representation of $\mathrm{su(2)}$, various other sectors emerge due to the projectors blocking access from one configuration to another based on the decomposition of the state into unit cells of three consisting of $\{\vert 000 \rangle, \vert 001 \rangle, \vert 010 \rangle, \vert 100 \rangle, \vert 101 \rangle \}$.
We find it is also possible for a model to feature an exactly embedded $\mathrm{su(2)}$ representation for which the computational basis does not fracture into exponentially many sectors as seen in the $\mathbb{Z}_3$ case. In the following Section we discuss one embedded $\mathrm{su(2)}$ representation which allows us to identify such a model.
\section{$\mathbb{Z}_4$ Revivals from $\mathrm{su(2)}$ Algebra}\label{sec:pxpz4}
Unlike $\vert \mathbb{Z}_2 \rangle$ and $\vert \mathbb{Z}_3 \rangle$, quenches from $\vert \mathbb{Z}_4 \rangle = \vert 10001000...\rangle$ do not result in a reviving wavefunction beyond system sizes $N\gtrsim 20$, and expectation values of local observables equilibrate as expected from the ETH, such that there appear to be no scarred eigenstates with enhanced support on $\vert \mathbb{Z}_4 \rangle$.
Nevertheless, in this Section we show that our Lie algebra approach identifies deformations to the PXP model which fix a new $\mathrm{su}(2)$ algebra, engineered such that $\vert \mathbb{Z}_4 \rangle$ is the lowest weight eigenstate of some $\bar{H}^z$, rather than $\vert \mathbb{Z}_2\rangle, \vert \mathbb{Z}_3 \rangle$ as seen previously. While the subspace variance of this representation is too large to witness observable revivals in the PXP model, by fixing the algebra we realize new models which \emph{do} exhibit $\mathbb{Z}_4$ revivals.
In direct analogy with the previous cases, we define the raising and lowering operators as
\begin{eqnarray}\label{eq:z4raising}
\bar{H}^+ &=& \sum_n \left( \tilde{\sigma}^-_{4n} + \tilde{\sigma}^+_{4n+1} + \tilde{\sigma}^+_{4n+2} + \tilde{\sigma}_{4n+3}^+ \right), \\
\bar{H}^- &=& \sum_n \left( \tilde{\sigma}^+_{4n} + \tilde{\sigma}^-_{4n+1} + \tilde{\sigma}^-_{4n+2} + \tilde{\sigma}_{4n+3}^- \right),
\end{eqnarray}
which, in turn, define $\bar{H}^z = \frac{1}{2} [\bar{H}^+,\bar{H}^-]$ that evaluates to
\begin{eqnarray}
\nonumber \bar{H}^z &=& \sum_n \left( -\tilde{\sigma}_{4n}^z + \tilde{\sigma}_{4n+1}^z + \tilde{\sigma}_{4n+2}^z + \tilde{\sigma}_{4n+3}^z \right) \\
\nonumber &+& \frac{1}{2} \sum_n \Big( P_{4n} \sigma_{4n+1}^+ \sigma_{4n+2}^- P_{4n+3} \\
\nonumber &+& P_{4n} \sigma_{4n+1}^- \sigma_{4n+2}^+ P_{4n+3} + P_{4n+1} \sigma_{4n+2}^+ \sigma_{4n+3}^- P_{4n+4} \\
&+& P_{4n+1} \sigma_{4n+2}^- \sigma_{4n+3}^+ P_{4n+4} \Big). \quad
\end{eqnarray}
Similar to previous cases, $\vert \mathbb{Z}_4 \rangle$ is the lowest weight state of $\bar{H}^z$ and it is found that $\{ \bar{H}^z,\bar{H}^+,\bar{H}^-\}$ form a broken representation of $\mathrm{su}(2)$. Errors in the root structure (Appendix~\ref{appendix:pxpZ4_2ndOrder}) suggest the following perturbations to PXP model are necessary to stabilise $\mathbb{Z}_4$ revival:
\begin{eqnarray}
V_1 &=& \sum_n P_{4n} \sigma^x_{4n+1} \sigma^x_{4n+2} \sigma^x_{4n+3} P_{4n+4}, \label{eq:z4Pert_1}\\
\label{eq:z4Pert_2} \nonumber V_2 &=& \sum_n \big(P_{4n-1} \sigma^x_{4n} \sigma^x_{4n+1} \sigma^x_{4n+2} P_{4n+3} \\
&+& P_{4n+1} \sigma^x_{4n+2} \sigma^x_{4n+3} \sigma^x_{4n+4} P_{4n+5} \big), \\
\label{eq:z4Pert_3} \nonumber V_3 &=& \sum_n \big(P_{4n} P_{4n+1} \sigma^x_{4n+2} P_{4n+3} \\
\nonumber &+& P_{4n} \sigma^x_{4n+1} P_{4n+2} P_{4n+3} \\
\nonumber &+& P_{4n+1} P_{4n+2} \sigma^x_{4n+3} P_{4n+4} \\
&+& P_{4n+1} \sigma^x_{4n+2} P_{4n+3} P_{4n+4} \big), \\
\nonumber V_4 &=& \sum_n \big(P_{4n-2} \sigma^x_{4n-1} P_{4n} P_{4n+1} \label{eq:z4Pert_4} \\
\nonumber &+& P_{4n-1} P_{4n} \sigma^x_{4n+1} P_{4n+2} \\
\nonumber &+& P_{4n-1} \sigma^x_{4n} P_{4n+1} P_{4n+2} \\
&+& P_{4n+2} P_{4n+3} \sigma^x_{4n+4} P_{4n+5} \big).
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./pxp_z4_summary_24.pdf}
\caption{$\mathbb{Z}_4$ revival in the PXP model. (a) Eigenstate overlap with $\vert \mathbb{Z}_4 \rangle$ state for the PXP model. (b) Eigenstate overlap with $\vert \mathbb{Z}_4 \rangle$ state after including first order $\mathrm{su}(2)$ corrections, Eqs.~(\ref{eq:z4Pert_1})-(\ref{eq:z4Pert_4}). (c) Eigenstate overlap after including second order $\mathrm{su}(2)$ corrections (Appendix~\ref{appendix:pxpZ4_2ndOrder}). (d) $\mathbb{Z}_4$ quench fidelity. $\vert \mathbb{Z}_4 \rangle$ state does not revive in the pure PXP model, but it does revive in the new model obtained by correcting the $\mathrm{su}(2)$ algebra. (e) Bipartite entropy (Eq.~\ref{eq:entropy}) of eigenstates of the PXP model after including second order $\mathbb{Z}_4$ $\mathrm{su(2)}$ corrections. Points labelled ``Exact Scars" are exact diagonalization results identified from the top band of states in (c). Red crosses in (a), (b), (c), (e) indicate approximate scar states obtained by projecting the Hamiltonian to the broken representation basis and diagonalizing. Color scale in (a), (b), (c), (e) indicates the density of data points, with lighter regions being more dense.}
\label{fig:pxpZ4summ}
\end{figure}
In contrast to our previous example of the $\mathbb{Z}_3$ revival, explicit optimization finds that the terms in Eqs.~(\ref{eq:z4Pert_1})-(\ref{eq:z4Pert_4}) can stabilise $\mathbb{Z}_4$ revivals, but some of the resulting optimal coefficients turn out to be of order unity. Thus, similar to the special $\mathbb{Z}_3$ case discussed above, we arrive at a model that cannot be viewed as a small deformation of PXP, but rather a new model in its own right. Specifically, optimizing the $V_i$ coefficients $\lambda_i$ for fidelity we find (at $N=16$)
\begin{eqnarray}
\lambda_i^* = [0.0008,-1.43,0.0979,0.0980],
\end{eqnarray}
where we see that the optimal coefficient of $V_2$ is $O(1)$. Once again, second order perturbations can be identified from the Lie algebra and revivals enhanced further (see Appendix~\ref{appendix:pxpZ4_2ndOrder} for details of the $36$ terms and optimal coefficients -- note only 3 terms contribute significantly with $O(1)$ coefficients after optimizing for revivals). The effect of these perturbations is summarized in Fig.~\ref{fig:pxpZ4summ}. Error metrics at various perturbation orders are given in Table~\ref{tab:pxpZ4Errors}. As in the previous examples, the second order deformations leave the model non-integrable, which we verify from the mean level spacing ratio $\langle r \rangle = 0.5271$ at $N=24$, consistent with an ergodic system.
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Order & $1-f_0$ & $\sigma/D_{\mathrm{su(2)}}$ & $\max(\mathrm{var}(H^z)_n)$ & $K$ \\
\hline
$n=0$ & $9.993 {\times} 10^{-1}$ & $3.333 {\times} 10^{0}$ & $2.779 {\times} 10^{0}$ & $4.323 {\times} 10^{0}$ \\
\hline
$n=1$ & $5.814 {\times} 10^{-5}$ & $6.722 {\times} 10^{-4}$ & $7.902 {\times} 10^{-4}$ & $3.258 {\times} 10^{-3}$ \\
\hline
$n=2$ & $3.351 {\times} 10^{-9}$ & $9.746 {\times} 10^{-6}$ & $2.753 {\times} 10^{-4}$ & $1.534 {\times} 10^{-3}$ \\
\hline
\end{tabular}
\caption{Error metrics for the $\mathbb{Z}_4$ $\mathrm{su(2)}$ subspace of the PXP model at various perturbation orders for $N=24$. Subspace variance $\sigma$ is normalized by the dimension of the $\mathrm{su(2)}$ representation, $N/2+1$. See text for details of the perturbations. Errors at $n=0$ are much worse than the corresponding $n=0$ errors for $\mathbb{Z}_2, \mathbb{Z}_3$ (compare with Table~\ref{tab:pxpz2} and Table~\ref{tab:pxpz3}), consistent with there being no revivals or $\mathbb{Z}_4$ scars in the pure PXP model.}
\label{tab:pxpZ4Errors}
\end{table}
\subsection{Exact $\mathbb{Z}_4$ $\mathrm{su(2)}$ Embedding}
\label{section:pxpZ4}
Finally, we mention that, similar to the $\mathbb{Z}_3$ case, there exists a deformation of PXP such that $\vert \mathbb{Z}_4 \rangle$ is the lowest weight state of an \emph{exact} $\mathrm{su}(2)$ representation. That model is obtained by redefining the raising operator in Eq.~(\ref{eq:z4raising}) according to
\begin{eqnarray}
\nonumber \bar{H}^+ &\rightarrow& \bar{H}^+ - V_2\\
\nonumber &=& \bar{H}^+ - \sum_n \Big( P_{4n+3} \sigma_{4n+4}^- \sigma_{4n+5}^+ \sigma_{4n+6}^- P_{4n+7} \\
&+& P_{4n+1} \sigma_{4n+2}^- \sigma_{4n+3}^+ \sigma_{4n+4}^- P_{4n+5} \Big),
\end{eqnarray}
which yields the Hamiltonian:
\begin{eqnarray}
\nonumber H &=& \sum_n P_{n-1}\sigma^x_{n}P_{n+1} \\
\nonumber &-& \sum_n \big( P_{4n+3}\sigma^x_{4n+4}\sigma^x_{4n+5}\sigma^x_{4n+6}P_{4n+7} \\
&+& P_{4n+1} \sigma^x_{4n+2} \sigma^x_{4n+3} \sigma^x_{4n+4} P_{4n+5} \big).\label{eq:z4ExactModel}
\end{eqnarray}
Similar to the $\mathbb{Z}_3$ case, this model features an exact dynamical symmetry within the $\mathrm{su(2)}$ subspace, with the symmetry generator taking the same form as Eq.~(\ref{eq:z3ExactDynamicalSym}).
However, unlike the $\mathbb{Z}_3$ case, the computational basis which satisfies the Rydberg constraint does not fracture into exponentially many sectors. There still exists an exact Krylov subspace, generated by repeated application of the Hamiltonian on $\vert \mathbb{Z}_4 \rangle$, which is block diagonal with respect to the orthogonal thermalizing subspace, such that this model exhibits type (b) scarring described in Fig.~\ref{fig:scarMechs} and the Krylov subspace is an exact $\mathrm{su(2)}$ representation. We verify the model is still thermalizing in the orthogonal subspace by computing the mean level spacing ratio $\langle r \rangle = 0.5365$ at $N=24$, consistent with level spacings obeying the Wigner-Dyson distribution as expected for an ergodic subspace.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{./pxp_z4_exactSu2_autoc_20.pdf}
\caption{ Local autocorrelation function $\langle \sigma_{2i}^z(t) \sigma_{2i}^z(0) \rangle$ of the model given by Eq.~(\ref{eq:z4ExactModel}), for various initial states given in the legend. Results are for $N=20$. We consider sites $2i$ as the translation symmetry of Eq.~(\ref{eq:z4ExactModel}) is broken to a subgroup corresponding to translations by two units. Generic initial states such as the polarized state $\vert 000...\rangle$ equilibrate, whereas the autocorrelation function exhibits non-stationary behaviour for all times when the system is initialized in the $\vert \mathbb{Z}_4 \rangle=\vert 10001000...\rangle$ state.
}
\label{fig:z4_autoc}
\end{figure}
As a consequence of the exact $\mathrm{su(2)}$ embedding the $\vert \mathbb{Z}_4 \rangle$ state revives perfectly, whereas generic initial states from the orthogonal sector still thermalize as expected from the ETH. Thus, local observables and local autocorrelation functions, which generically equilibrate, may exhibit long-lived non-stationary behavior following a quench from $\vert \mathbb{Z}_4 \rangle$, Fig.~\ref{fig:z4_autoc}.
\section{Conclusions and Discussion}\label{sec:conc}
We have argued that, up to a rotation, many-body scars in kinetically constrained spin models can be interpreted as forming an approximate basis of a broken Lie algebra representation. This results in a loosely embedded integrable subspace with approximate dynamical symmetry, which acts as an approximate representation of the Lie algebra. Seeking deformations of the Hamiltonian which improve this broken Lie algebra, we have identified several models related to the PXP model describing a chain of Rydberg atoms, which exhibit many-body scars and feature near perfect revivals from the simple product states $\vert \mathbb{Z}_2 \rangle$, $\vert \mathbb{Z}_3 \rangle$, $\vert \mathbb{Z}_4 \rangle$. Further, we have constructed two models with exactly embedded $\mathrm{su(2)}$ representations, thus obtaining ``exact scars" in a similar spirit to ``Krylov-restricted thermalization"~\cite{MoudgalyaKrylov} and ``projector embedded" scar states~\cite{ShiraishiMori}.
The identification of embedded $\mathrm{su(2)}$ subspaces followed from identifying decompositions of the Hamiltonian $H=\bar{H}^+ + \bar{H}^-$, with $\bar{H}^- = (\bar{H}^+)^{\dagger}$. Thus, the representation is fixed by the choice of $\bar{H}^+$. Obviously, this choice is not unique and many other possible decompositions of $H$ exist, but many of these decompositions would result in embedded representations whose subspace variance is too large to give rise to scarred dynamics.
However, from the examples considered above, it appears that aspects of an $\mathrm{su}(2)$ algebra can generically be improved in certain models like PXP, no matter how broken the representation is to begin with, by considering the errors of a suitably defined broken representation (e.g. the $\mathbb{Z}_4$ case). An obvious question is how ``broken" these representations can be while still showing signatures of $\mathrm{su}(2)$ dynamics (revivals) following quenches from states in the $\mathrm{su}(2)$ subspace. In the examples considered in the main text, the subspace variance of the approximate representation basis seems to be the best indicator of whether one would see scarred dynamics.
While the focus of this paper has been on deformations of the PXP model resulting in embedded $\mathrm{su(2)}$ representations, we note our construction can be readily applied to arbitrary spin chains. An interesting question for future work is whether it is possible to engineer approximate dynamical symmetries in a subspace without making use of a Lie algebra, but perhaps more general algebraic structures such as the quantum group $\mathrm{U_q(sl_2)}$. Indeed, exact dynamical symmetry of the Hamiltonian which does not rely on a Lie algebra root structure has already been observed in the AKLT model~\cite{BernevigEnt}. The model possesses a dynamical symmetry $[H_{\mathrm{AKLT}},K^+] = \omega K^+$ and, while the operators $\{K^+,K^- = (K^+)^{\dagger}, H^z=\frac{1}{2}[K^+,(K^+)^{\dagger}] \}$ form an exact representation of $\mathrm{su(2)}$, the AKLT Hamiltonian itself $H_{\mathrm{AKLT}}$ is not a linear combination of the $\mathrm{su(2)}$ generators. Therefore, the dynamical symmetry does not trivially follow from the root structure, and further the scarred subspace, generated by repeated application of $K^{\pm}$ on the AKLT ground state, does not act as a representation of $\mathrm{su(2)}$~\cite{BernevigEnt}. Moreover, we have not considered embeddings of higher-rank $\mathrm{su(n)}$ Lie algebras throughout this paper, instead restricting only to $\mathrm{su(2)}$. We expect this to be increasingly difficult compared to $\mathrm{su(2)}$, due to the presence of more than one set of raising operators, resulting in multiple error sources, where there is no guarantee that improving the errors of one set of raising operators will not exacerbate the errors in another set.
An important open question relates to the closure of the broken Lie algebra -- will recursively feeding higher order error terms back into the broken generators result in an exact representation? Indeed, we have identified two cases where an $\mathrm{su(2)}$ algebra can be made exact ($\vert \mathbb{Z}_3 \rangle, \vert \mathbb{Z}_4 \rangle$) after considering only first order error terms. Setting aside the question of closure, we have demonstrated that this integrable subspace need not be exactly embedded, but can be loosely embedded with small enough subspace variance such that signatures of the embedded group are still realized in dynamics, as seen in the PXP model. Finally, it would be interesting to investigate generalizations of loosely embedded Lie algebras in the context of open quantum systems, where recent work has shown that dissipation can give rise to the emergence of kinetic constraints~\cite{Everest2016} and robust dynamical symmetry~\cite{Buca2019, Tindall2019}.
{\sl Note added:} During the completion of this manuscript we became aware of Ref.~\onlinecite{MotrunichTowers}, which clarifies further the ``exact scars" seen in models we describe in Section~\ref{section:IadecolaScars}.
\section{Acknowledgments}
We acknowledge support by EPSRC Grants No. EP/R020612/1, No. EP/M50807X/1.
Statement of compliance with EPSRC policy framework on
research data: This publication is theoretical work that
does not require supporting research data.
This research was supported in part by the National Science Foundation under Grants No. NSF PHY-1748958 and No. EP/R513258/1 (J.-Y.D). We thank Berislav Bu\v ca and Gabriel Matos for their insightful comments on the manuscript.
\section{Introduction}
In studies of neutron stars, the fundamental role is played by
the equation of state (EoS) for dense nuclear matter.
The observed masses of the neutron stars J1614$-$2230~\cite{Demorest10},
J0348+0432~\cite{Antoniadis13} and J0740+6620~\cite{Cromartie2020}
are $(1.97\pm0.04)M_{\odot}$, $(2.01\pm0.04)M_{\odot}$ and
$(2.17^{+0.11}_{-0.10})M_{\odot}$, respectively, providing important
constraints on the stiffness of the EoS of neutron-star matter.
In non-relativistic approaches, the stiff EoS giving the maximum
mass of $2M_{\odot}$ can be derived from the existence of strongly
repulsive effects such as three-nucleon repulsions
in the high-density region \cite{APR98}.
Hyperon ($Y$) mixing in neutron-star matter brings about
a remarkable softening of the EoS, reducing the maximum mass
to a value far below $2M_{\odot}$.
The mechanism of this EoS softening is understood as follows:
as the baryon density increases toward the centers of neutron stars,
the chemical potential of neutrons becomes high enough that neutrons at
the Fermi surface are converted to hyperons ($Y$) via strangeness
non-conserving weak interactions, overcoming the hyperon rest masses.
It should be noted that, naively, such a softening mechanism works
also for the mixing of any exotic particles, such as quarks, into neutron matter.
One of the ideas to avoid this ``hyperon puzzle in neutron stars" is to assume
that many-body repulsions work universally for every kind of baryon \cite{NYT}.
In Refs.~\cite{YFYR14,YFYR16,YTTFYR17}, the multi-pomeron exchange
potential (MPP) was introduced as a model of universal repulsions among three
and four baryons on the basis of the extended soft core (ESC) baryon-baryon
interaction model developed by two of the authors (T.R. and Y.Y.)
and M.M. Nagels~\cite{ESC16}.
Another solution for the hyperon puzzle has been suggested by taking
into account quark deconfinement phase transitions from a hadronic-matter
EoS (H-EoS) to a sufficiently stiff quark-matter EoS (Q-EoS)
in the neutron-star interiors, namely by studying hybrid stars
having quark matter in their cores
\cite{Schaffner99,Baldo2006,Lastowiecki2012,Shahrbaf1,Shahrbaf2,Maslov19,Xia19,Kojo2015,Baym2018,Otto2020}.
It is well known that repulsive effects in quark phases are needed
to support massive neutron stars of $2M_{\odot}$.
In the Nambu-Jona-Lasinio (NJL) model, for instance, the repulsion needed
to stiffen the EoS is provided by vector interactions \cite{Kunihiro},
whose strengths are treated as phenomenological parameters.
Note that NJL models, including extended ones, are mainly based on
mean-field approximations, in which two-body quark-quark interactions
are not used explicitly. In spite of the many works on hadron-quark phase
transitions in neutron-star matter, there is not yet a unified theory
of both the hadronic and quark phases.
In this work, our approach to hadron-quark phase transitions is
different from the usual methods, in which the deconfined quark phases
are treated in mean-field approximations. Here we treat quark matter
with two-body quark-quark ($QQ$) potentials derived as follows:
The meson-exchange quark-quark potentials are derived from the ESC
baryon-baryon ($BB$) potentials in the framework of the constituent quark model (CQM).
The quark-quark-meson ($QQM$) vertices are defined such that, upon folding
with the Gaussian ground-state baryonic quark wave functions,
the $BB$ potentials are reproduced \cite{QQint}.
In this process the $QQM$ couplings are related to the $BBM$ couplings,
and the extra interactions at the quark level necessary to achieve
this connection are completely determined.
(Like in the ESC16 $BB$-potentials, relativistic effects are included in the
$QQ$-potentials via the small components of the Dirac-spinors and a $1/M_Q$-expansion.)
The quark-quark instanton-exchange potential is derived by tuning the
baryon masses ($N, \Lambda, \Sigma, \Xi$, and $\Delta_{33}$) in the CQM.
Here, also the one-gluon-exchange (OGE) and the confining potential are included.
With use of these $QQ$ potentials together with the ESC $BB$-potentials,
baryonic matter and quark matter are treated in the common framework of
the Brueckner-Bethe-Goldstone (BBG) theory, where the transitions
between them are described in a reasonable way.
It should be emphasized here that our $QQ$ potentials are determined
on the basis of the terrestrial data and do not include parameters only
for the purpose of stiffening the quark-matter EoS.
Recently, the radius of the most massive known neutron star,
PSR J0740+6620, has been measured:
two independent analyses have been performed for the X-ray data
taken by the {\it Neutron Star Interior Composition Explorer} (NICER)
and the X-ray Multi-Mirror (XMM-Newton) observatory.
The radius and mass are $12.39^{+1.30}_{-0.98}$ km and
$2.072^{+0.067}_{-0.066}$ M$_\odot$ \cite{Riley2021}, or
$13.7^{+2.6}_{-1.5}$ km ($68\%$) and $2.08 \pm 0.07$ M$_\odot$ \cite{Miller2021}.
The radius of a typical 1.4M$_\odot$ neutron star, $R_{1.4M_\odot}$, has been
estimated by combining the NICER measurements with other multimessenger data
\cite{Raaij2021}\cite{Peter2021}.
In Ref.\cite{Raaij2021}, the two values of $R_{1.4M_\odot}=12.33^{+0.76}_{-0.81}$ km
and $R_{1.4M_\odot}=12.18^{+0.56}_{-0.79}$ km are obtained for the two different
high-density EoSs of a piecewise-polytropic (PP) model and
a model based on the speed of sound, respectively. In Ref.\cite{Peter2021},
the estimated value is $R_{1.4M_\odot}=11.94^{+0.76}_{-0.87}$ km
at $90\%$ confidence. These values of radii are rather similar to each other.
On the other hand, when the implications of PREX-II for the neutron skin
thickness of heavy nuclei are taken into account in the neutron-star EoS,
the result is 13.33 km $< R_{1.4M_\odot} < 14.26$ km \cite{Brendan2021}.
Our obtained EoSs in this work are investigated in the light of these new data.
This paper is organized as follows:
In Sect.II, the hadronic-matter EoS (H-EoS) is recapitulated
on the basis of our previous works.
In Sect.III, on the basis of realistic $QQ$ interaction models,
the BBG theory is applied to quark matter:
In III-A, the G-matrix framework is outlined for quark matter.
In III-B, our $QQ$ potentials are explained, which are composed of
the extended meson-exchange potential, the multi-pomeron potential,
the instanton potential and the one-gluon exchange potential.
In III-C, the $QQ$ G-matrix interactions in coordinate space are
parameterized as density-dependent interactions.
In Sect.IV, the quark-matter EoSs (Q-EoS)
and $MR$ diagrams of hybrid stars are obtained: In IV-A, Q-EoSs are derived.
In IV-B, hadron-quark phase transitions in hybrid stars are
investigated on the basis of the obtained EoSs.
In IV-C, the $MR$ relations of hybrid stars are obtained
by solving the TOV equation.
The conclusion of this paper is given in Sect.V.
\section{Hadronic-matter EoS}
Here, the hadronic matter is defined exactly as
$\beta$-stable baryonic matter including leptons.
On the basis of the BBG theory,
the hadronic-matter EoS (H-EoS) is derived with use of the ESC
baryon-baryon interaction model \cite{YFYR14}\cite{YFYR16}\cite{YTTFYR17}.
Then, the EoS is stiff enough to assure the neutron-star masses of $2M_{\odot}$,
if the strong three-nucleon repulsion is taken into account.
However, the hyperon ($Y$) mixing results in remarkable softening
of the EoS canceling this repulsive effect. In order to avoid
this ``hyperon puzzle", it is assumed that the repulsions work
universally for $Y\!N\!N$, $Y\!Y\!N$, $Y\!Y\!Y$ as well as for $N\!N\!N$.
In \cite{YFYR14}\cite{YFYR16}\cite{YTTFYR17}, such universal repulsions
are modeled as the multi-pomeron exchange potential (MPP).
In Ref.\cite{YTTFYR17}, three versions of MPP were proposed: MPa, MPa$^+$ and MPb.
MPa and MPa$^+$ (MPb) include the three- and four-body (only three-body) MPPs,
where mixings of $\Lambda$ and $\Sigma^-$ hyperons are taken into account.
The three-body part of MPa (MPa$^+$) is less repulsive than (equal to) that of MPb,
and the four-body parts of MPa and MPa$^+$ are equal to each other.
The EoSs for MPa and MPa$^+$ are stiffer than
that for MPb, so that the neutron-star radii obtained from
the former are larger than those from the latter.
Our ESC $BB$ interactions including MPb, MPa and MPa$^+$
are named H1, H2 and H3, respectively, for simplicity.
In addition, we introduce two versions H0 and H1' for comparative studies:
H0 is the nucleon-nucleon part of H1, being used in nuclear-matter EoSs with
no hyperons. H1' is the $BB$ interaction H1 in which MPP works only
among nucleons. In the case of H1', the remarkable softening of the EoS is
brought about by hyperon mixing.
As shown later, neutron-star radii $R$ for masses lower than about $1.5M_\odot$ are
determined by H-EoSs even in our $MR$ diagrams including hadron-quark transitions.
For the H-EoSs derived from the above $BB$ interactions,
the obtained values of radii at $1.4M_\odot$ ($R_{1.4M_\odot}$) are
12.4 km (H1), 13.3 km (H2) and 13.6 km (H3).
\section{Quark-Quark interaction and quark matter}
\subsection{G-matrix framework}
The BBG theory is adopted for studies of quark matter on the basis of
two-body $QQ$ potentials given in Ref.\cite{QQint}.
Here, correlations induced by $QQ$ potentials are renormalized into
coordinate-space G-matrix interactions, which are regarded as effective
$QQ$ interactions for deriving the Q-EoS.
At this stage of constructing G-matrix interactions in quark matter,
color quantum numbers are not taken into account.
We start from the G-matrix equation for the quark pair $f_1 f_2$ in
quark matter, where $f_1$ and $f_2$ denote flavor quantum numbers ($u,d,s$):
\begin{eqnarray}
G_{cc_0}=v_{cc_0} + \sum_{c'}
v_{cc'} {Q_{y'} \over \omega -\epsilon_{f'_1}-\epsilon_{f'_2} }
G_{c' c_0}
\label{eq:GM1}
\end{eqnarray}
where $c$ denotes a relative state $(y, T, L, S, J)$ with $y=f_1f_2$,
$S$ and $T$ being spin and isospin quantum numbers, respectively.
Orbital and total angular momenta are denoted by $L$ and $J$,
respectively, with ${\bf J}={\bf L}+{\bf S}$:
A two-quark state is specified by $^{2S+1}L_J$.
In Eq.~(\ref{eq:GM1}), $\omega$ gives the starting energy in
the starting channel $c_0$.
The Pauli operator $Q_{y'}$ acts on intermediate quark states with $y'=f'_1f'_2$.
We adopt for simplicity the gap choice for the intermediate states
in the G-matrix equation, meaning that an intermediate energy
$\epsilon_{f}$ is replaced by a kinetic-energy operator.
The G-matrix equation~(\ref{eq:GM1}) is represented in the
coordinate space, whose solutions give rise to G-matrix elements.
The quark single particle (s.p.) energy $\epsilon_f$
in quark matter is given by
\begin{eqnarray}
\epsilon_f(k_f)={\hbar^2k_f^2 \over 2m_f} + U_f(k_f)
\label{eq:GM2}
\end{eqnarray}
where $k_f$ is an $f$-quark momentum ($f=u,d,s$).
The potential energy $U_f$ is obtained self-consistently
in terms of the G-matrix as
\begin{eqnarray}
&& \hspace{-5mm} U_f(k_f) =
\sum_{f'}\sum_{|{\bf k}_{f'}|<k_F^{f'}} \langle {\bf k}_f {\bf k}_{f'}
\mid G_{ff'}(\omega=\epsilon_f(k_f)+\epsilon_{f'}(k_{f'})) \mid
{\bf k}_f {\bf k}_{f'} \rangle
\nonumber\\
\label{eq:GM3}
\end{eqnarray}
where $(TLSJ)$ quantum numbers are implicit.
Then, the potential energy per particle
$\langle U \rangle = \sum_{f} \langle U_f \rangle$
is obtained by averaging $U_f(k_{f})$ over $f$:
\begin{eqnarray}
\langle U \rangle=
\frac32\ \sum_{f} \omega_f \int_0^{k_F^f} \frac{d^3k_{f}}{(2\pi)^3}
\ U_f(k_{f})
\label{eq:GM4}
\end{eqnarray}
where $\omega_f=\rho_f/(\sum_{f'} \rho_{f'})$ with
a $f$-quark density $\rho_f$.
Making a partial wave reduction of Eq.~(\ref{eq:GM3}) with
explicit use of $TLSJ$ quantum numbers, $U_f(k_{f})$ is
represented as a sum of $U^{TLSJ}_f(k_{f})$ obtained from
G-matrix elements $G_{ff'}^{TLSJ}$.
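As a cross-check of this averaging procedure, the weighted Fermi-sphere average
entering Eq.~(\ref{eq:GM4}) can be sketched numerically. The following is a
minimal illustration, not the actual G-matrix code: the single-particle
potentials $U_f(k)$ are supplied as arbitrary functions, the per-flavor
Fermi-sphere average is normalized per particle, and the weights are
$\omega_f=\rho_f/\rho_Q$; the overall prefactor convention may differ from
Eq.~(\ref{eq:GM4}).

```python
def fermi_sphere_average(U, kF, n=2000):
    """Per-particle Fermi-sphere average (3/kF^3) * int_0^kF k^2 U(k) dk,
    evaluated with a simple trapezoidal rule."""
    if kF == 0.0:
        return 0.0
    ks = [i * kF / n for i in range(n + 1)]
    vals = [k * k * U(k) for k in ks]
    integral = sum((vals[i] + vals[i + 1]) * (ks[i + 1] - ks[i]) / 2.0
                   for i in range(n))
    return 3.0 * integral / kF**3

def averaged_potential(U_by_flavor, kF_by_flavor, rho_by_flavor):
    """<U> as the omega_f-weighted sum of per-flavor averages,
    with omega_f = rho_f / sum_f' rho_f'."""
    rho_Q = sum(rho_by_flavor.values())
    return sum((rho_by_flavor[f] / rho_Q) *
               fermi_sphere_average(U_by_flavor[f], kF_by_flavor[f])
               for f in U_by_flavor)
```

For a quadratic test potential $U(k)=k^2$ the average reduces to $\frac35 k_F^2$,
which the routine reproduces.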
\subsection{Quark-Quark interactions}
Our $QQ$ interaction is given by
\begin{eqnarray}
V_{QQ} &=& V_{EME}+V_{INS}+ V_{OGE}+V_{MPP}
\end{eqnarray}
where $V_{EME}$, $V_{INS}$, $V_{OGE}$ and $V_{MPP}$ are the extended
meson-exchange potential, the instanton exchange potential, the
one-gluon exchange potential and the multi-pomeron potential,
respectively \cite{QQint}. The parameters included in our $QQ$ potential
are chosen to be as consistent with physical observables as possible.
The contributions of the confining potential ($V_{conf}$) to $V_{QQ}$
are minor in quark matter and are omitted in this work.
The $V_{EME}$ $QQ$ potential is derived from the ESC16 $BB$ potential \cite{ESC16}
so that the $QQM$ couplings are related to the $BBM$ couplings
through folding procedures with Gaussian baryonic quark wave functions.
Then, the $V_{EME}$ $QQ$ potential is basically of the same functional expression
as the ESC16 $BB$ potential. The explicit expressions for the $V_{EME}$ $QQ$
potentials are given in Ref.\cite{QQint}.
In the ESC modeling, the strongly repulsive components in $BB$ potentials
are described mainly by vector-meson and pomeron exchanges between baryons.
It should be noted that this feature persists in the $V_{EME}$ $QQ$ potential,
which includes the strongly repulsive components originating from
vector-meson and pomeron exchanges between quarks.
Multi-pomeron exchanges are expected to work not only among baryons
but also among quarks, in which the baryon mass $M_B$ is replaced by
the quark mass $M_Q=M_B/3$ and the pomeron-baryon-baryon coupling constant
$g_{PBB}$ is replaced by the pomeron-quark-quark coupling constant $g_{PQQ}$.
In this work, the $QQ$ multi-pomeron potential $V_{MPP}$ is derived
from the version MPa for the MPP among baryons.
The parameters included in $V_{INS}$ and $V_{OGE}$ are
chosen so as to reproduce basic features of baryon mass spectra.
The form of the one-gluon exchange potential is given as
\begin{equation}
V_{OGE}(r)=\frac14\, ({\bf \lambda^C_1 \cdot \lambda^C_2})\,\alpha_S\,
V_{vector}(m_G;r)
\end{equation}
where $\lambda^C_a$, $a=1,\ldots,8$ are the Gell-Mann matrices in color
SU(3) space and $V_{vector}(m_G;r)$ is the vector-type one boson
exchange potential. Its explicit form is given by Eq.(E9a) in Ref.\cite{QQint}.
The strength of $V_{OGE}$ is determined by the quark-gluon coupling
constant $\alpha_S$, being fixed as $\alpha_S=0.25$ in this work.
The gluon mass $m_G$ is taken as 420 MeV \cite{Hut95}.
In quark matter, $({\bf \lambda^C_1 \cdot \lambda^C_2})$=
$-8/3, +4/3, +4/3, -8/3$ in states of $(S,T)$=$(0,0), (0,1), (1,0), (1,1)$,
respectively.
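The quoted eigenvalues of $({\bf \lambda^C_1 \cdot \lambda^C_2})$ follow from
the SU(3) quadratic Casimir identity
$\langle {\bf \lambda_1\cdot\lambda_2}\rangle = 2\,[C_2({\rm pair})-2C_2({\rm quark})]$,
with $C_2=4/3$ for the color antitriplet and $C_2=10/3$ for the sextet; the
assignment to the $(S,T)$ channels then follows from the overall antisymmetry
of the two-quark state. A small illustrative check:

```python
def lam_dot_lam(C2_pair, C2_quark=4.0 / 3.0):
    """<lambda1.lambda2> = 2*[C2(pair) - 2*C2(quark)] from SU(3) Casimirs."""
    return 2.0 * (C2_pair - 2.0 * C2_quark)

C2_antitriplet = 4.0 / 3.0    # two quarks coupled to color 3bar
C2_sextet = 10.0 / 3.0        # two quarks coupled to color 6
```

The antitriplet gives $-8/3$ (attractive OGE channel) and the sextet $+4/3$,
matching the values quoted in the text.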
The instanton potential $V_{INS}$ is based on the SU(3) generalization
of the 't Hooft interaction for (u,d,s) quarks. In the configuration space,
with the addition of the Gaussian cut-off $\exp(-k^2/\Lambda_I^2)$,
the local instanton potential is given as \cite{QQint}
\begin{eqnarray}
&& V_{INS}(r)= -(4/3-\mbox{\boldmath $\lambda^F_1 \cdot \lambda^F_2$})\, G_I \,
\left(\frac{\Lambda_I}{2\sqrt{\pi}}\right)^3
\nonumber
\\
&&\times \left[1+\frac{\Lambda_I^2}{2m_Q^2} \left(3-\frac12 \Lambda_I^2 r^2 \right)
\left(1-\frac13\mbox{\boldmath $\sigma_1 \cdot \sigma_2$}\right)
\right]
\nonumber
\\
&&\times \exp \left(-\frac14\Lambda_I^2 r^2\right)
\label{eq:VINS}
\end{eqnarray}
where $\lambda^F_a$, $a=1,\ldots,8$ are the Gell-Mann matrices in flavor SU(3) space
and $m_Q$ is the quark mass.
In two-quark states, \mbox{\boldmath $\lambda$}$^F$ operators
are reduced to \mbox{\boldmath $\tau$} operators of isospin.
The strength of $V_{INS}$ is determined by the coupling constant $G_I$ and
the cut-off mass $\Lambda_I$, which are taken as $G_I=2.5$ GeV$^{-2}$ and
$\Lambda_I=0.55$ GeV, as estimated from the $\pi-\rho$ mass splitting.
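For orientation, the local instanton potential above can be evaluated directly
once the operators are replaced by their two-quark expectation values,
$\langle\mbox{\boldmath $\sigma_1\cdot\sigma_2$}\rangle=2S(S+1)-3$ and, using
the reduction of \mbox{\boldmath $\lambda$}$^F$ to isospin stated above,
$2T(T+1)-3$. The sketch below is not the authors' code; the quark mass value
of 300 MeV is an assumption, and natural units are used with $r$ converted
from fm.

```python
import math

HBARC = 0.19733  # GeV*fm

def v_ins(r_fm, S, T, G_I=2.5, Lam=0.55, mQ=0.300):
    """Local instanton-exchange potential in GeV, r in fm.
    Spin/isospin operators replaced by two-quark expectation values:
    sigma1.sigma2 -> 2S(S+1)-3, lambda^F1.lambda^F2 -> 2T(T+1)-3 (assumed)."""
    r = r_fm / HBARC                              # convert r to GeV^-1
    sig = 2 * S * (S + 1) - 3                     # -3 (S=0), +1 (S=1)
    lamF = 2 * T * (T + 1) - 3                    # -3 (T=0), +1 (T=1)
    pref = -(4.0 / 3.0 - lamF) * G_I * (Lam / (2.0 * math.sqrt(math.pi)))**3
    bracket = 1.0 + (Lam**2 / (2.0 * mQ**2)) \
        * (3.0 - 0.5 * Lam**2 * r**2) * (1.0 - sig / 3.0)
    return pref * bracket * math.exp(-0.25 * Lam**2 * r**2)
```

With these parameters the potential is attractive at short distance and more
strongly so in $T=0$ than in $T=1$, consistent with its role in tuning the
baryon mass spectra.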
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm]{Urho1.eps}
\caption{Averaged single particle potentials $\langle U \rangle$
in quark matter as a function of the baryon number density
$\rho_B=\frac13 \rho_Q$ in the case of $\rho_u=\rho_d=\rho_s$.
The solid, short-dashed, long-dashed and dot-dashed curves are
the contributions to $\langle U \rangle$ from $V_{EME}$, $V_{MPP}$,
$ V_{OGE}$ and $V_{INS}$, respectively. The bold-solid curve
is obtained by $V_{EME}+V_{MPP}+V_{INS}+V_{OGE}$.
}
\label{Urho1}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm]{Urho2.eps}
\caption{Averaged single particle potentials $\langle U \rangle$
in quark matter as a function of the baryon number density
$\rho_B=\frac13 \rho_Q$ in the case of $\rho_u=\rho_d=\rho_s$.
The solid curve is obtained from $V_{EME}$.
The even- and odd-state contributions
$\langle U_{even} \rangle$ and $\langle U_{odd} \rangle$
are given by the dashed and short-dashed curves.
The dotted curves are the corresponding values of
$\langle U_{even} \rangle$ and $\langle U_{odd} \rangle$
in neutron matter.}
\label{Urho2}
\end{center}
\end{figure}
In order to demonstrate the features of our $QQ$ interaction, we show
the averaged s.p. potentials $\langle U \rangle$ given by Eq.~(\ref{eq:GM4})
as a function of the baryon number density $\rho_B=\frac13 \rho_Q$
in the case of $\rho_u=\rho_d=\rho_s$.
In Fig.~\ref{Urho1}, the solid, short-dashed, long-dashed and dot-dashed curves
are the contributions to $\langle U \rangle$ from $V_{EME}$, $V_{MPP}$,
$ V_{OGE}$ and $V_{INS}$, respectively. The bold-solid curve
is obtained by the sum of $V_{EME}+V_{MPP}+V_{INS}+V_{OGE}$.
The strongly-repulsive nature of $\langle U(\rho_B) \rangle$ is the
key point in this work, which leads to the quark-matter EoS stiff
enough to reproduce neutron-star masses over $2M_{\odot}$.
In the figure, the repulsive contribution of $V_{EME}$ is found to
be essential for the repulsive nature of $\langle U(\rho_B) \rangle$,
whereas the repulsive contributions of $V_{OGE}$ and $V_{MPP}$ are
considerably canceled by the attractive contribution of $V_{INS}$.
It is worth noting that the repulsive components in $V_{EME}$
come from the vector-meson and pomeron exchanges, a feature inherited
from the ESC $BB$ interaction model.
In neutron matter, $\langle U \rangle$ includes repulsive contributions
from the multi-pomeron potential MPP, which become quite large in the
high-density region. The strengths of the three- and four-body parts of MPP are proportional
to $(g_{PBB})^3$ and $(g_{PBB})^4$, respectively, $g_{PBB}$ being the
pomeron-baryon-baryon coupling constant.
In $QQ$ potentials, $g_{PBB}$ is replaced by the pomeron-quark-quark coupling
constant $g_{PQQ}$. Because of the relation $g_{PQQ}=\frac13 g_{PBB}$,
the strengths of three- and four-body parts of MPP among quarks are far smaller
than those among baryons. Therefore, MPPs among quarks are not so remarkable
in comparison with those among baryons.
The even- and odd-state contributions to $\langle U \rangle$ are denoted
as $\langle U_{even} \rangle$ and $\langle U_{odd} \rangle$, respectively.
In Fig.~\ref{Urho2}, the solid curve shows $\langle U(\rho_B) \rangle$
obtained from $V_{EME}$, and $\langle U_{even}(\rho_B) \rangle$ and
$\langle U_{odd}(\rho_B) \rangle$ are given by the dashed and short-dashed curves.
The dotted curves are the even- and odd-state contributions of
averaged neutron potentials $\langle U_{even} \rangle$
and $\langle U_{odd} \rangle$ in neutron matter.
The remarkable feature of $\langle U_{even} \rangle$
and $\langle U_{odd} \rangle$ given by $V_{EME}$ is that
they are attractive and repulsive, respectively.
This behavior of $\langle U_{even} \rangle$ and $\langle U_{odd} \rangle$
is qualitatively similar to the corresponding one in neutron matter.
When our $QQ$ potentials are used in quark-matter calculations,
it is reasonable to assume constituent quark masses originating
from chiral symmetry breaking as a non-perturbative QCD effect.
Then, it is probable that the constituent quark masses in quark matter
become smaller than those in vacuum and approach the current masses in
the high-density limit.
At the mean-field (MF) level, density-dependent quark masses
in matter are usually derived from an MF Lagrangian such as that of
the NJL model. In the present approach, we introduce phenomenologically
the density-dependent quark mass
\begin{eqnarray}
M_Q^*(\rho_Q) = M_0/[1+\exp \{\gamma (\rho_Q-\rho_c)\}] +m_0 +C
\label{mstar}
\end{eqnarray}
with $C=M_0-M_0/[1+\exp (-\gamma \rho_c)]$ assuring $M_Q^*(0) = M_0+m_0$,
where $\rho_Q$ is the number density of quark matter, and $M_0$ and $m_0$
are taken as 300 (360) MeV and 5 (140) MeV for $u$ and $d$ ($s$) quarks.
Then, we have $M_Q^*(0)=$ 305 (500) MeV for $u$ and $d$ ($s$) quarks.
The adjustable parameters $\rho_c$ and $\gamma$ are used to control
mainly the onset densities of quark phases into hadronic phases.
Furthermore, because the quark mass reduction has to bring about
an increase of the vacuum energy $B$, we assume simply
\begin{eqnarray}
B(\rho_Q)=M_Q^*(0) -M_Q^*(\rho_Q) \ .
\label{bag}
\end{eqnarray}
It is well known that there are three schemes for the
density-dependent quark mass \cite{Blaschke20}:
(i) a constant quark mass, (ii) a linear density dependence
(Brown-Rho scaling \cite{Brown}), (iii) a density-dependence
within a higher-order NJL model \cite{Kashiwa} \cite{Benic}.
Eq.~(\ref{mstar}) includes these schemes, representing (i) for $\gamma=0$,
(ii) for small values of $\gamma$ and (iii) for large values of $\gamma$.
The parameter $\rho_c$ is chosen as $6\rho_0$ by referring
to forms of (iii) derived from the higher-order NJL models.
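As a numerical check of Eq.~(\ref{mstar}), the constant $C$ indeed guarantees
$M_Q^*(0)=M_0+m_0$, and $\gamma=0$ reproduces scheme (i) with a constant mass.
A minimal sketch (masses in GeV; $\rho_c$ is passed as a free parameter, the
values used below being placeholders):

```python
import math

def mass_star(rhoQ, gamma, rho_c, M0=0.300, m0=0.005):
    """Density-dependent constituent quark mass, Eq. (mstar), in GeV.
    Defaults are the u,d values (M0=300 MeV, m0=5 MeV); for s quarks
    use M0=0.360, m0=0.140."""
    C = M0 - M0 / (1.0 + math.exp(-gamma * rho_c))
    return M0 / (1.0 + math.exp(gamma * (rhoQ - rho_c))) + m0 + C

def bag(rhoQ, gamma, rho_c, **kw):
    """Vacuum-energy shift B(rho_Q) = M*(0) - M*(rho_Q), Eq. (bag)."""
    return mass_star(0.0, gamma, rho_c, **kw) - mass_star(rhoQ, gamma, rho_c, **kw)
```

The checks below verify $M_Q^*(0)=305$ MeV ($u,d$) and 500 MeV ($s$), the
constant-mass limit at $\gamma=0$, and the positivity of $B(\rho_Q)$.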
We define the following five sets of $QQ$ interactions with different
values of $\gamma$ for deriving Q-EoSs:
\noindent
Q0 : $V_{EME}$ \ with $\gamma$=1.2
\noindent
Q1 (Q1e) : $V_{EME}+V_{INS}+V_{OGE}$ \ with $\gamma$=1.0 ($\gamma$=2.6)
\noindent
Q2 (Q2e) : $V_{EME}+V_{MPP}+V_{INS}+V_{OGE}$ \ with $\gamma$=1.6 ($\gamma$=2.2)
In the cases of Q0, Q1 and Q2, the values of $\gamma$ are chosen
so that the critical chemical potentials and densities for phase
transitions are as small as possible.
In the cases of Q1e and Q2e, they are chosen so that critical
densities are near crossing points of hadronic and quark
energy densities.
In Fig.~\ref{fig.quarkmass} the quark mass $M_Q^*(\rho_Q)$ ($Q=u,d$) as
a function of the baryon number density $\rho_B=\rho_Q/3$ is plotted in
the cases of (a) Q1, (b) Q1e, (c) Q2 and (d) Q2e.
The density-dependent quark masses in these cases (especially Q1e and Q2e)
are found to be rather close to scheme (ii), the Brown-Rho scaling.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm]{quarkmass.eps}
\caption{Quark mass as a function of the baryon number density $\rho_B$ for the
(a) Q1, (b) Q1e, (c) Q2 and (d) Q2e models. As a reference, also the Brown-Rho
scaling is shown by the dashed line.}
\label{fig.quarkmass}
\end{center}
\end{figure}
It is quite important to use the density-dependent quark masses
together with our $QQ$ potentials. When constant quark masses are used,
hadron-quark transitions derived from our $QQ$ potentials occur
only at densities above $5\rho_0$ in hadronic matter giving
$2M_{\odot}$ masses. In such a case, quark phases have no effect on
the masses and radii of neutron stars, even if they exist in inner cores.
For instance, when the baryon-baryon interaction H2 is used together
with quark-quark interaction Q2, the combined set is denoted as H2+Q2.
Hereafter, combinations of $BB$ and $QQ$ interactions are expressed like this.
\subsection{Effective Quark-Quark interactions}
For applications to quark-matter calculations, we construct
density-dependent effective local interactions ${\cal G}_{QQ}(\rho_Q;r)$
simulating G-matrices in coordinate space,
where $\rho_Q$ is number density of quark matter.
We use here the method given in Ref.\cite{Yama10}.
The effective interactions are written as ${\cal G}_{QQ}={\cal G}_{EME}
+{\cal G}_{MPP}+{\cal G}_{INS}+{\cal G}_{OGE}$ approximately
corresponding to $V_{QQ}=V_{EME}+V_{MPP}+V_{INS}+V_{OGE}$.
Though they can be obtained for each $(ff',T,L,S,J)$ state,
for simplicity, the dependence on $L$ is approximated by
that on parity $P$ and the dependence on $J$ is averaged:
Quantum numbers $TLSJ$ are reduced to $TSP$.
The respective interactions are represented in two- or one-range Gaussian forms,
and coefficients are adjusted so that s.p. potentials $U^{TSP}_f$
obtained from ${\cal G}_{ff'}^{TSP}$ simulate the original G-matrix results.
It is far easier to derive quark-matter EoSs with use of these
density-dependent interactions ${\cal G}_{QQ}$ than derivations by
G-matrix calculations with $V_{QQ}$.
The density-dependent effective interactions ${\cal G}_{EME}$ and ${\cal G}_{OGE}$
derived from $V_{EME}$ and $V_{OGE}$, respectively, are parameterized
in a two-range Gaussian form as
\begin{eqnarray}
&& {\cal G}_{EME,OGE}(\rho,r) = (a\rho^{\alpha}+b\rho^{\beta})\cdot \exp(-(r/0.8)^2)
\label{eq:EME}
\nonumber\\
&& \hspace{2cm} + c\cdot \exp(-(r/1.6)^2) .
\end{eqnarray}
The parameter set $(a,\alpha,b,\beta,c)$ in Eq.(\ref{eq:EME}) is given
for each $(y,T,S,P)$ state with $y=qq,qs,ss$ ($q=u,d$).
In Tables \ref{Geme} and \ref{Goge}, the values of parameters are
tabulated for ${\cal G}_{EME}$ and ${\cal G}_{OGE}$, respectively.
${\cal G}_{INS}$ derived from $V_{INS}$ is parameterized
in a one-range Gaussian form as
\begin{eqnarray}
{\cal G}_{INS}(\rho,r) =
(a\rho^{\alpha}+b\rho^{\beta})\cdot \exp(-(r/0.6)^2) \ .
\label{eq:INS}
\end{eqnarray}
The parameter set $(a,\alpha,b,\beta)$ in Eq.(\ref{eq:INS}) is given
for each $(y,T,S,P)$ state with $y=qq,qs$ ($q=u,d$).
Their values are given in Table \ref{Gins}.
${\cal G}_{MPP}$ derived from $V_{MPP}$ is parameterized
in a one-range Gaussian form as
\begin{eqnarray}
{\cal G}_{MPP}(\rho,r)= (a+b\rho^{\beta})\cdot \exp(-(r/1.3)^2)
\label{eq:MPP}
\end{eqnarray}
being independent of $(y,T,S)$ and given only for $P$.
The values of parameters $(a,b,\beta)$ are given in Table \ref{Gmpp}.
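To make the parameterization concrete, the following snippet evaluates the
two-range Gaussian form of Eq.~(\ref{eq:EME}) for one row of
Table~\ref{Geme}, the $(y,T,S,P)=(qq,1,0,+)$ state. This illustrates the
functional form only; the units of $\rho$ (fm$^{-3}$) and $r$ (fm) are
assumptions here, and $\rho\to 0$ is not meaningful for $\alpha<0$ since the
G-matrix is defined in the medium.

```python
import math

def g_eme(rho, r, a, alpha, b, beta, c):
    """Two-range Gaussian parameterization:
    (a*rho^alpha + b*rho^beta)*exp(-(r/0.8)^2) + c*exp(-(r/1.6)^2)."""
    return (a * rho**alpha + b * rho**beta) * math.exp(-(r / 0.8)**2) \
        + c * math.exp(-(r / 1.6)**2)

# The (y,T,S,P) = (qq, 1, 0, +) row of Table I (G_EME)
qq_10p = dict(a=-3.520, alpha=-1, b=-17.94, beta=0, c=-0.9978)
```

At $\rho=1$ and $r=0$ the value is simply $a+b+c$, and the even-state
interaction stays attractive over the short range, as the tests check.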
\begin{table}
\begin{center}
\caption{${\cal G}_{EME}(\rho,r)=(a\rho^{\alpha}+b\rho^{\beta})\cdot \exp(-(r/0.8)^2)
\\
\qquad \qquad + c\cdot \exp(-(r/1.6)^2)$.
$y=qq,qs,ss$ ($q=u,d$).
}
\label{Geme}
\vskip 0.2cm
\begin{tabular}{|c|c|ccccc|}\hline
$y$ & $T$ $S$ $P$ & $a$ & $\alpha$ & $b$ & $\beta$ & $c$ \\
\hline
$qq$ & 1 0 + & $-$3.520 & $-1$ & $-$17.94 & 0 & $-$0.9978 \\
& 0 1 + & $-$2.871 & $-1$ & $-$30.59 & 0 & $-$0.8389 \\
& 0 0 $-$ & 43.34 & $-1$ & 192.8 & 0 & 3.896 \\
& 1 1 $-$ & 6.621 & $-1$ & 102.5 & 0 & 1.595 \\
\hline
$qs$ & 1/2 0 $+$ & $-$0.5716 & $-1$ & $-$28.27 & 0 & $-$0.4530 \\
& 1/2 1 $+$ & $-$0.6959 & $-1$ & $-$24.58 & 0 & $-$0.1993 \\
& 1/2 0 $-$ & $-$1.597 & $-1$ & 149.0 & 0 & 1.568 \\
& 1/2 1 $-$ & 1.183 & $-1$ & 75.98 & 0 & 1.217 \\
\hline
$ss$ & 0 0 $+$ & $-$2.755 & $-1$ & $-$26.37 & 0 & $-$0.1212 \\
& 0 1 $-$ & $-$1.651 & $-1$ & 51.06 & 0 & 0.3558 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{${\cal G}_{OGE}(\rho,r)= (a\rho^{\alpha}+b\rho^{\beta})\cdot \exp(-(r/0.8)^2)
\\
\qquad \qquad +c\cdot \exp(-(r/1.6)^2)$. $y=qq,qs,ss$ ($q=u,d$).
}
\label{Goge}
\vskip 0.2cm
\begin{tabular}{|c|c|ccccc|}\hline
$y$ & $T$ $S$ $P$ & $a$ & $\alpha$ & $b$ & $\beta$ & $c$ \\
\hline
$qq$ & 1 0 + & 8.565 & 1 & 3.892 & 0.3742 & 0.5185 \\
& 0 1 + & 7.543 & 1 & 1.977 & 0.6431 & 1.142 \\
& 0 0 $-$ & $-$0.8959 & 1 & 9.982 & 0.2741 & 0.5027 \\
& 1 1 $-$ & 8.094 & 1 & 11.64 & 0.2881 & 1.147 \\
\hline
$qs$ & 1/2 0 $+$ & $-$4.733 & 1 & $-$1.359 & 0.4161 & $-$0.2593 \\
& 1/2 1 $+$ & $-$3.658 & 1 & $-$0.7316 & 0.6347 & $-$0.5709 \\
& 1/2 0 $-$ & $-$1.282 & 1 & $-$3.141 & 0.3675 & $-$0.2514 \\
& 1/2 1 $-$ & $-$5.645 & 1 & $-$3.831 & 0.3727 & $-$0.5736 \\
\hline
$ss$ & 0 0 $+$ & 10.48 & 1 & 1.478 & 0.5930 & 0.5185 \\
& 0 1 $-$ & 11.23 & 1 & 7.743 & 0.3250 & 1.147 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{${\cal G}_{INS}(\rho,r)=\ (a\rho^{\alpha}+b\rho^{\beta})\cdot \exp(-(r/0.6)^2)$.
\\ $y=qq,qs$ ($q=u,d$).}
\label{Gins}
\vskip 0.2cm
\begin{tabular}{|c|c|cccc|}\hline
$y$ & $T$ $S$ $P$ & $a$ & $\alpha$ & $b$ & $\beta$ \\
\hline
$qq$ & 0 1 $+$ & 0.2132 & $-1$& $-$130.0 & 0 \\
& 0 0 $-$ & 124.8 & 0 & $-$38.82 & 0.4227 \\
\hline
$qs$ & 1/2 0 $+$ & $-$1.504 & $-1$& $-$61.35 & 0.0705 \\
& 1/2 1 $+$ & $-$0.1638 & $-1$& $-$63.47 & 0 \\
& 1/2 0 $-$ & 59.03 & 0 & $-$17.06 & 0.4966 \\
& 1/2 1 $-$ & $-$36.95 & 0 & $-$5.749 & 0.3192 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{${\cal G}_{MPP}(\rho,r)= (a+b\rho^{\beta})\cdot \exp(-(r/1.3)^2)$.
}
\label{Gmpp}
\vskip 0.2cm
\begin{tabular}{|c|ccc|}\hline
$P$ & $a$ & $b$ & $\beta$ \\
\hline
$+$ & 0.3597 & 1.600 & 1.490 \\
$-$ & 0.4338 & 2.618 & 1.384 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{EoS and $MR$ diagram of hybrid star}
\subsection{Derivation of quark-matter EoS}
Let us derive the EoS of quark matter composed of quarks with flavor $f=u,d,s$.
In this derivation, we use the density-dependent $QQ$ interactions
Eq.(\ref{eq:EME}), Eq.(\ref{eq:INS}), Eq.(\ref{eq:MPP})
based on the non-relativistic formalism.
Relativistic expressions are used only for kinetic energies.
A single $f$ quark potential in quark matter composed of $f'$ quarks
is given by
\begin{eqnarray}
U_f(k)&=&\sum_{f'} U_{f}^{(f')}(k)
= \sum_{f'} \sum_{k'<k_F^{f'}} \langle kk'|{\cal G}_{ff',ff'}|kk'\rangle
\nonumber\\
\end{eqnarray}
with $f,f'=u, d, s$, where spin and isospin quantum numbers are implicit.
The quark energy density is given by
\begin{eqnarray}
\varepsilon_Q&=&
2N_c\sum_{f} \int_0^{k_F^f} \frac{d^3k}{(2\pi)^3}
\left\{\sqrt{\hbar^2 k^2+M_f^2}+\frac 12 U_f(k)\right\}
\nonumber\\ && + B(\rho_Q)
\label{eden}
\end{eqnarray}
where $N_c=3$ is the number of quark colors.
The quark number density is given as $\rho_Q=\sum_f \rho_f$
with $\rho_f=N_c\frac{(k_F^f)^3}{3\pi^2}$.
The chemical potential $\mu_f$ and pressure $P_Q$
are expressed as
\begin{eqnarray}
&&\mu_f = \frac{\partial \varepsilon_Q}{\partial \rho_f} \ ,
\label{chem} \\
&& P_Q = \rho_Q^2 \frac{\partial (\varepsilon_Q/\rho_Q)}{\partial \rho_Q}
=\sum_f \mu_f \rho_f -\varepsilon_Q \ .
\label{press}
\end{eqnarray}
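The thermodynamic relations (\ref{chem}) and (\ref{press}) can be checked
numerically in the free limit of Eq.~(\ref{eden}), i.e.\ a single flavor with
$U_f=0$ and $B=0$. The sketch below (assumed mass of 300 MeV, units of GeV)
verifies that the two expressions for $P_Q$ in Eq.~(\ref{press}) agree and
that $\mu=\sqrt{k_F^2+M^2}$ for a free gas:

```python
import math

GDEG = 6           # spin-color degeneracy 2*N_c
M = 0.3            # quark mass in GeV (assumed value)

def kF(rho):
    """Fermi momentum from rho = GDEG*kF^3/(6 pi^2)."""
    return (6.0 * math.pi**2 * rho / GDEG) ** (1.0 / 3.0)

def eps(rho, n=4000):
    """Kinetic energy density of a free relativistic Fermi gas
    (the U_f = 0, B = 0 limit of the quark energy density)."""
    kf = kF(rho)
    ks = [i * kf / n for i in range(n + 1)]
    f = [k * k * math.sqrt(k * k + M * M) for k in ks]
    integral = sum((f[i] + f[i + 1]) / 2.0 * (kf / n) for i in range(n))
    return GDEG / (2.0 * math.pi**2) * integral

def mu(rho, h=1e-5):
    """mu = d eps / d rho by central difference."""
    return (eps(rho + h) - eps(rho - h)) / (2.0 * h)

def pressure(rho, h=1e-5):
    """P = rho^2 d(eps/rho)/drho by central difference."""
    d = (eps(rho + h) / (rho + h) - eps(rho - h) / (rho - h)) / (2.0 * h)
    return rho * rho * d
```

The identity $P_Q=\sum_f\mu_f\rho_f-\varepsilon_Q$ then holds to the accuracy
of the numerical differentiation.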
Here, we consider the EoS of $\beta$-stable quark matter
composed of $u$, $d$, $s$, $e^-$.
The equilibrium conditions are summarized as follows:
\noindent
(1) chemical equilibrium conditions,
\begin{eqnarray}
&& \mu_d = \mu_s = \mu_u+\mu_e
\label{eq:c1}
\end{eqnarray}
\noindent
(2) charge neutrality,
\begin{eqnarray}
0 = \frac13 (2\rho_u -\rho_d -\rho_s) -\rho_e
\label{eq:c2}
\end{eqnarray}
\noindent
(3) baryon number conservation,
\begin{eqnarray}
\rho_B = \frac13 (\rho_u +\rho_d +\rho_s)= \frac13 \rho_Q
\label{eq:c3}
\end{eqnarray}
In the parabolic approximation, the following relation can be derived:
\begin{eqnarray}
\mu_e=\mu_d-\mu_u=4\beta E_{sym}
\label{eq:c4}
\end{eqnarray}
where $x=\rho_u/(\rho_u +\rho_d)$ and $\beta=1-2x$.
$E_{sym}$ is the symmetry energy of the $ud$ part.
When the chemical potentials (\ref{chem})
are substituted into (\ref{eq:c1}),
the chemical equilibrium conditions
are represented as equations for densities $\rho_u$,
$\rho_d$, $\rho_s$ and $\rho_e$.
Then, equations (\ref{eq:c1}) $-$ (\ref{eq:c3})
are solved iteratively, and densities and
chemical potentials in equilibrium are obtained.
Finally, energy densities (\ref{eden}) and pressures
(\ref{press}) can be calculated.
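The iterative solution of Eqs.~(\ref{eq:c1})$-$(\ref{eq:c3}) can be
illustrated in the free-gas limit, where $\mu_f=\sqrt{(k_F^f)^2+M_f^2}$ and
interactions are dropped. The following sketch is not the authors' solver:
vacuum masses $M_Q^*(0)$ are assumed, electrons are massless, units are GeV
and GeV$^3$, and the solution is found by bisecting on $\mu_e$ with the
baryon-number constraint enforced in an inner loop.

```python
import math

NC = 3
MASS = {'u': 0.305, 'd': 0.305, 's': 0.500}   # vacuum M*(0) values (GeV)

def rho_q(mu, m):
    """Number density of a free quark Fermi gas at chemical potential mu."""
    if mu <= m:
        return 0.0
    kf = math.sqrt(mu * mu - m * m)
    return NC * kf**3 / (3.0 * math.pi**2)

def rho_e(mu_e):
    """Massless electron density."""
    return max(mu_e, 0.0)**3 / (3.0 * math.pi**2)

def solve_beta(rho_B):
    """Free-gas beta equilibrium: returns (mu_u, mu_e, quark densities)."""
    def densities(mu_u, mu_e):
        mu_ds = mu_u + mu_e                    # mu_d = mu_s = mu_u + mu_e
        return {'u': rho_q(mu_u, MASS['u']),
                'd': rho_q(mu_ds, MASS['d']),
                's': rho_q(mu_ds, MASS['s'])}
    def mu_u_for(mu_e):                        # enforce baryon number
        lo, hi = 0.0, 5.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if sum(densities(mid, mu_e).values()) / 3.0 < rho_B:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    lo, hi = 0.0, 1.0                          # bisect on mu_e for neutrality
    for _ in range(200):
        mu_e = 0.5 * (lo + hi)
        rq = densities(mu_u_for(mu_e), mu_e)
        charge = (2 * rq['u'] - rq['d'] - rq['s']) / 3.0 - rho_e(mu_e)
        if charge > 0.0:
            lo = mu_e
        else:
            hi = mu_e
    mu_u = mu_u_for(mu_e)
    return mu_u, mu_e, densities(mu_u, mu_e)
```

The tests confirm that the returned configuration satisfies baryon-number
conservation and charge neutrality.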
An example of a solution is demonstrated in Fig.~\ref{comp}:
The number fractions of quarks and electrons in $\beta$-stable
quark matter are plotted as a function of the baryon density
$\rho_B$ in the case of using Q2, where solid (dashed) curves
are for $u$, $d$ and $s$ quarks (electrons).
In the figure, the electron fractions are not visible below
the $s$-quark onset. The reason for such small values is that
the symmetry energy $E_{sym}$ in Eq.(\ref{eq:c4}) is not so
large in the case of our $QQ$ interactions.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm,height=7.5cm]{comp.eps}
\caption{The number fractions of quarks and electrons
in $\beta$-stable quark matter as a function of the
baryon density $\rho_B$ in the case of using Q2.
The fractions of $u$, $d$ and $s$ quarks are given by
solid curves, and that of electrons $e^-$ by the dashed curve.}
\label{comp}
\end{center}
\end{figure}
\subsection{Phase transition from hadronic matter to quark matter}
The EoSs are shown in Fig.~\ref{Peden}, where pressures of quark matter
are given as a function of the energy density $\epsilon$ and compared
to those of hadronic matter.
Steeper slopes of curves correspond to stiffer EoSs:
The Q-EoSs are stiffer than the H-EoSs, and the EoSs for (b) Q1 and
(c) Q2 are stiffer than that for (a) Q0 owing to the repulsive contributions
of $V_{OGE}$ and $V_{MPP}$. As shown later, these features are clearly
reflected in the $MR$ curves of hybrid stars.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm,height=7.5cm]{Peden.eps}
\caption{Pressures of hadronic matter and quark matter as a function of
the energy density $\epsilon$. Short-dashed, long-dashed and dot-dashed
curves are for hadronic matter for H1, H2 and H3, respectively.
Solid curves are for quark matter obtained from (a) Q0, (b) Q1 and (c) Q2.
}
\label{Peden}
\end{center}
\end{figure}
In order to construct the hybrid EoS including a transition from
the hadronic phase to the quark phase,
we use the replacement interpolation method \cite{RIM} \cite{Shahrbaf2},
which is a simple modification of the Maxwell and the Glendenning (Gibbs)
constructions \cite{Glendenning}.
In our actual calculations, we follow the interpolation formula
given in Ref.\cite{Shahrbaf2}.
Then, interpolated regions can be considered as mixed phases.
Both the H-EoSs and the Q-EoSs are assumed to separately fulfill
the charge-neutrality and $\beta$-equilibrium conditions.
The EoSs of hadronic and quark phases and that of mixed phase are
described with the relations between pressures and chemical potentials
$P_H(\mu)$, $P_Q(\mu)$ and $P_M(\mu)$, respectively.
The critical chemical potential $\mu_c$ for the transition
from the hadronic phase to the quark phase is
obtained from the Maxwell condition
\begin{eqnarray}
P_Q(\mu_c)=P_H(\mu_c)=P_c \ .
\end{eqnarray}
The pressure of the mixed phase is represented by a polynomial ansatz
\begin{eqnarray}
P_M(\mu)=\sum^N_{q=1} \alpha_q (\mu-\mu_c)^q +P_c+\Delta P
\label{eq:PM}
\end{eqnarray}
where the pressure shift $\Delta P$ at $\mu_c$ is treated as
a free parameter.
The pressure of the mixed phase at $\mu_c$ is determined by
$P_M(\mu_c)=P_c+\Delta P= (1+\Delta_P)P_c$ with
$\Delta_P=\Delta P/P_c$.
Then, the matching chemical potential $\mu_H$ ($\mu_Q$) of $P_M(\mu)$
to $P_H(\mu)$ ($P_Q(\mu)$) can be obtained from the continuity condition.
The corresponding matching densities $\rho_H$ and $\rho_Q$ are obtained
with use of $\rho(\mu)=dP(\mu)/d\mu$.
Finite values of $\Delta_P=0.05 - 0.07$ correspond to
the Glendenning construction \cite{Shahrbaf2}.
We choose $\Delta_P=0.07$ in this work.
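The construction can be sketched with toy pressure curves: find $\mu_c$ from
the Maxwell condition by bisection, and shift the mixed-phase pressure at
$\mu_c$ to $(1+\Delta_P)P_c$ as in Eq.~(\ref{eq:PM}). The quadratic forms for
$P_H$ and $P_Q$ below are purely illustrative, not fits to the actual H-EoSs
or Q-EoSs.

```python
def find_mu_c(P_H, P_Q, mu_lo, mu_hi, iters=200):
    """Maxwell condition P_Q(mu_c) = P_H(mu_c): bisection on the
    pressure difference, assuming a single crossing in [mu_lo, mu_hi]."""
    f = lambda mu: P_Q(mu) - P_H(mu)
    assert f(mu_lo) * f(mu_hi) < 0.0
    for _ in range(iters):
        mid = 0.5 * (mu_lo + mu_hi)
        if f(mu_lo) * f(mid) <= 0.0:
            mu_hi = mid
        else:
            mu_lo = mid
    return 0.5 * (mu_lo + mu_hi)

# Toy hadronic and quark pressures (illustrative quadratic forms in mu):
P_H = lambda mu: 0.9 * (mu - 0.95)**2
P_Q = lambda mu: 2.0 * (mu - 1.05)**2

mu_c = find_mu_c(P_H, P_Q, 1.2, 3.0)
P_c = P_H(mu_c)
Delta_P = 0.07
P_M_at_mu_c = (1.0 + Delta_P) * P_c   # mixed-phase pressure shift at mu_c
```

In a full calculation, the polynomial coefficients $\alpha_q$ and the matching
chemical potentials $\mu_H$ and $\mu_Q$ would then be fixed by the continuity
conditions on $P$ and $\rho=dP/d\mu$.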
\begin{figure*}[ht]
\begin{center}
\includegraphics*[width=6in,height=3in]{Pmu.eps}
\caption{Pressures as a function of the chemical potential $\mu_B$.
Short-dashed, long-dashed and dot-dashed curves are
pressures of hadronic matter for H1, H2 and H3, respectively.
In the left panel, solid curves are pressures of quark matter obtained
from (a) Q0, (b) Q1 and (c) Q2. In the right panel, they are obtained
by using constant quark masses without density dependences
Eq.~(\ref{mstar}) and vacuum energies Eq.~(\ref{bag}).
}
\label{Pmu}
\end{center}
\end{figure*}
In Fig.~\ref{Pmu}, pressures are drawn as a function of
the chemical potential $\mu_B$, where short-dashed,
long-dashed and dot-dashed curves are pressures of
hadronic matter for H1, H2 and H3, respectively.
In the left panel, solid curves are pressures of quark matter
obtained from (a) Q0, (b) Q1 and (c) Q2. The crossing of the hadronic
and the quark-matter curves is considered to be a condition for
phase transition to occur.
The values of $P$ at crossing points give the critical pressures
$P_c$ for phase transitions. The hadronic and quark-matter curves
are connected smoothly by Eq.~(\ref{eq:PM}).
Then, the effective-mass parameter $\gamma$ in Eq.~(\ref{mstar})
is adjusted so that the crossing points appear at similar values of
$\mu_B \sim 1200$ MeV.
In the right panel, on the other hand, solid curves are obtained
from Q0, Q1 and Q2 by using constant quark masses $M_Q^*(\rho_Q=0)$
without density dependences Eq.~(\ref{mstar}) and vacuum energies
Eq.~(\ref{bag}). It is found that there is no crossing point in
this region of $\mu_B$. Thus, the density-dependent quark mass
plays a decisive role in the occurrence of phase transition.
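The crossing condition $P_H(\mu_c)=P_Q(\mu_c)$ described above can be located with a standard root finder. A minimal sketch with hypothetical quadratic pressure curves (the coefficients and offsets are invented for illustration and chosen so the crossing lands near $\mu_B\sim 1200$ MeV, as in the figure):

```python
from scipy.optimize import brentq

# Hypothetical pressure curves [MeV/fm^3] vs chemical potential [MeV];
# the functional forms are illustrative, not the model EoSs of the text.
def P_H(mu):                 # "hadronic" branch: softer rise
    return 1.0e-3 * (mu - 940.0)**2

def P_Q(mu):                 # "quark" branch: steeper rise, later onset
    return 4.0e-3 * (mu - 1080.0)**2

# critical chemical potential where the two pressures cross
mu_c = brentq(lambda mu: P_H(mu) - P_Q(mu), 1100.0, 1400.0)
P_c = P_H(mu_c)              # critical pressure at the crossing
```

With tabulated EoSs, the lambdas would interpolate the tables; the bracketing interval must contain exactly one sign change of $P_H-P_Q$.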
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm,height=7.5cm]{Mixed1.eps}
\caption{Pressures as a function of the chemical potential $\mu_B$
in the transition region. The short-dashed curve is obtained
by the H-EoS for H2 and the solid curve is by the Q-EoS for Q2.
The bold-solid curve is the interpolated one.
}
\label{Mixed1}
\end{center}
\end{figure}
In Fig.~\ref{Mixed1}, pressures are given as a function of
the chemical potential $\mu_B$ in the transition region.
The short-dashed curve is obtained by the H-EoS for H2 and
the solid curve is by the Q-EoS for Q2.
The bold-solid curve is obtained by the interpolation method.
\begin{table}
\begin{center}
\caption{Pressures $P_c$ at critical chemical potentials
$\mu_c$ in phase transitions from the hadronic phases for H1, H2
and H3 to the quark-matter phases for Q0, Q1 and Q2. Values of
$\mu_H$ ($\mu_Q$) are chemical potentials at matching points
between mixed phases and hadron (quark) phases.
}
\label{match1}
\vskip 0.2cm
\begin{tabular}{|l|cccc|}
\hline
& $P_c$ & $\mu_c$ & $\mu_H$ & $\mu_Q$ \\
& MeV/fm$^3$ & MeV & MeV & MeV \\
\hline
H1+Q0 & 125.7 & 1241 & 1186 & 1373 \\
H1+Q1 & 92.55 & 1183 & 1141 & 1277 \\
H1+Q2 & 110.4 & 1215 & 1095 & 1368 \\
\hline
H2+Q0 & 139.6 & 1254 & 1199 & 1386 \\
H2+Q1 & 102.3 & 1193 & 1149 & 1282 \\
H2+Q2 & 138.2 & 1252 & 1141 & 1415 \\
H2+Q1e& 132.9 & 1189 & 1243 & 1446 \\
H2+Q2e& 209.6 & 1360 & 1261 & 1600 \\
\hline
H3+Q0 & 136.0 & 1251 & 1198 & 1382 \\
H3+Q1 & 101.2 & 1191 & 1148 & 1279 \\
H3+Q2 & 131.6 & 1243 & 1142 & 1412 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{
Critical densities (fm$^{-3}$) of phase transitions:
$\rho_H$ and $\rho_Q$ are densities at matching points
in phase transitions from the hadronic phases for H1, H2
and H3 to the quark-matter phases for Q0, Q1, Q2, Q1e and Q2e.
$\rho^c_H$ and $\rho^c_Q$ are critical densities for the
Maxwell construction defined by $P_H(\rho^c_H)=P_Q(\rho^c_Q)=P_c$.
Values of $\rho_E$ are densities at crossing points of
energy densities $\epsilon_H(\rho)$ and $\epsilon_Q(\rho)$.
There is no crossing point in the case of H1+Q2.
}
\label{match2}
\vskip 0.2cm
\begin{tabular}{|l|cc|cc|c|}
\hline
& $\rho_H$ & $\rho_Q$ &$\rho^c_H$ & $\rho^c_Q$ & $\rho_E$ \\
\hline
H1+Q0 & 0.566 &0.904 &0.661 &0.673 &0.784 \\
H1+Q1 & 0.490 &0.703 &0.574 &0.544 &0.918 \\
H1+Q2 & 0.407 &0.721 &0.623 &0.584 & --- \\
\hline
H2+Q0 & 0.521 &0.930 &0.664 &0.694 &0.702 \\
H2+Q1 & 0.446 &0.712 &0.573 &0.561 &0.753 \\
H2+Q2 & 0.433 &0.776 &0.661 &0.620 &0.716 \\
H2+Q1e& 0.506 & 1.02 &0.650 &0.707 &0.643 \\
H2+Q2e& 0.608 &0.987 &0.790 &0.722 &0.695 \\
\hline
H3+Q0 & 0.482 &0.922 &0.616 &0.689 &0.659 \\
H3+Q1 & 0.416 &0.706 &0.568 &0.559 &0.692 \\
H3+Q2 & 0.407 &0.772 &0.608 &0.612 &0.660 \\
\hline
\end{tabular}
\end{center}
\end{table}
Our phase transition is specified by the pressures $P_c$ at
critical chemical potentials $\mu_c$ and boundary values of
chemical potentials and densities for mixed phases.
They are shown in the cases of phase transitions from the
H-EoS for H1, H2 and H3 to the Q-EoSs for Q0, Q1, Q2, Q1e and Q2e.
In Table~\ref{match1},
the chemical potentials at matching points are given by
values of $\mu_H$ and $\mu_Q$.
In Table~\ref{match2},
$\rho_H$ and $\rho_Q$ are densities at matching points
in phase transitions,
and $\rho^c_H$ and $\rho^c_Q$ are critical densities
defined by the $P_H(\rho^c_H)=P_Q(\rho^c_Q)=P_c$
in the case of $\Delta_P=0$.
It is reasonable that the values of $\rho^c_H$ and $\rho^c_Q$ lie
between the values of $\rho_H$ and $\rho_Q$.
The Maxwell construction is conditioned by $\rho^c_H < \rho^c_Q$.
As found in Table~\ref{match2}, however, the values of $\rho^c_H$ are larger
than those of $\rho^c_Q$ in some cases, meaning that first-order phase transitions
do not appear in the $\Delta_P=0$ limit.
The values of $\rho_E$ are densities at crossing points of
energy densities $\epsilon_H(\rho)$ and $\epsilon_Q(\rho)$.
In the case of H2+Q1e (H2+Q2e), the value of $\rho_E$ is between
(smaller than) $\rho^c_H$ and $\rho^c_Q$.
In Fig.~\ref{Mixed2}, pressures are given as a function
of energy density $\epsilon$ in the transition region.
The short-dashed curve is obtained by the H-EoS for H2 and
the solid curve is by the Q-EoS for Q2.
The bold-solid curve is pressure in the interpolated region.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm,height=7.5cm]{Mixed2.eps}
\caption{Pressures as a function of the energy density $\epsilon$
in the region of phase transitions. The short-dashed curve is obtained by
the H-EoS for H2 and the solid curve is by the Q-EoS for Q2.
The bold-solid curve is the interpolated one in the mixed phase.
}
\label{Mixed2}
\end{center}
\end{figure}
It is worthwhile to point out that our hybrid-EoSs are consistent with
the picture of hadron-quark continuity \cite{Kojo2015,Baym2018}.
In these references, the interpolated pressures are given in the density region of
$2<\rho_B/\rho_0<(4 - 7)$, where quark degrees of freedom gradually emerge.
Correspondingly, our mixed phases are given in the region of
$(2.4 - 3.3)<\rho_B/\rho_0<(4.1 - 6.0)$,
as found in Table~\ref{match2}.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm,height=7.5cm]{MRdel.eps}
\caption{Pressure $P$ as a function of density $\rho_B$ in the case of H2+Q1e,
where the dashed (short-dashed) curves are for hadronic (quark) matter.
The horizontal solid line shows the range of the hadron-quark mixed phases
in case of $\Delta_P=0$. The dot-dashed curve is in the case of $\Delta_P=0.07$.
}
\label{MRdel}
\end{center}
\end{figure}
Let us demonstrate that the Maxwell construction appears
in the $\Delta_P=0$ limit. In Fig.~\ref{MRdel}, we show pressure $P$
as a function of density $\rho_B$
in the case of using H2+Q1e, where the dashed (short-dashed) curves are
for the hadronic (quark) matter. The horizontal solid lines show the
range of the hadron-quark mixed phase.
It is well known that the Maxwell construction is specified by
the horizontal line in the $P-\rho$ diagram, where
the density values at ends of horizontal and vertical lines
are given by $\rho^c_H$ and $\rho^c_Q$.
The difference between the curve for $\Delta_P=0$ and the dot-dashed curve
for $\Delta_P=0.07$ is found to be small; its effect appears in
the corresponding $MR$ curves shown later.
Similar curves specifying the Maxwell construction appear not only
for H2+Q1e but also in the other cases with
$\rho^c_H < \rho^c_Q$ in Table~\ref{match2}.
Our hybrid-star EoS is composed of H-EoS and Q-EoS, being combined
by the interpolation formula including the parameter $\Delta_P$.
The $MR$ relations of hybrid stars can be obtained by solving the
Tolman-Oppenheimer-Volkoff (TOV) equation, where our hybrid EoSs
are connected smoothly to the crust EoS \cite{Baym1,Baym2}
in the low-density side.
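The TOV integration itself is standard. The sketch below solves the TOV equations in geometrized units ($G=c=M_\odot=1$) for a $\Gamma=2$ polytrope; the constant $K=100$ and central density $\rho_c=1.28\times 10^{-3}$ are common benchmark values, not the hybrid EoS of this work:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gamma = 2 polytrope in geometrized units (G = c = M_sun = 1);
# K and rho_c are benchmark values, not the hybrid EoS of the text.
K, Gamma = 100.0, 2.0

def energy_density(p):
    rho = (max(p, 0.0) / K) ** (1.0 / Gamma)   # rest-mass density
    return rho + p / (Gamma - 1.0)             # total energy density

def tov_rhs(r, y):
    p, m = y
    eps = energy_density(p)
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dpdr, dmdr]

def solve_tov(rho_c):
    p_c = K * rho_c ** Gamma
    r0 = 1.0e-6                                # start just off the center
    y0 = [p_c, (4.0 / 3.0) * np.pi * r0**3 * energy_density(p_c)]
    surface = lambda r, y: y[0] - 1.0e-8 * p_c
    surface.terminal = True                    # stop when p drops to ~0
    sol = solve_ivp(tov_rhs, (r0, 100.0), y0, events=surface,
                    rtol=1e-8, atol=1e-12, max_step=0.05)
    return sol.y[1, -1], sol.t[-1]             # (mass, radius) in code units

M, R = solve_tov(1.28e-3)
R_km = R * 1.4766                              # G M_sun / c^2 = 1.4766 km
```

With these benchmark parameters the solution is the well-known stable star with $M\simeq 1.4M_\odot$ and $R\simeq 14$ km; for the hybrid EoSs of the text, the tabulated $p(\epsilon)$ (hybrid EoS joined to the crust EoS at low density) would replace the polytrope.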
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8.6cm,height=5cm]{MRgam.eps}
\caption{Hybrid-star masses as a function of radius $R$,
where the short-dashed curves are obtained by the H-EoS for H2.
In the left panel, the solid and dashed curves are obtained by
H2+Q1e in cases of $\Delta_P=0.07$ and $\Delta_P=0$, respectively.
In the right panel,
the upper (lower) solid curves are obtained by H2+Q2 (H2+Q1),
and the upper (lower) dashed curves are by H2+Q2e (H2+Q1e).
}
\label{MRgam}
\end{center}
\end{figure}
In Fig.~\ref{MRgam}, hybrid-star masses are shown as a function of
radius $R$ in the cases of using Q1e or Q2e (Q1 or Q2), where
the short-dashed curves are obtained by the H-EoS for H2.
In the left panel, the solid and dashed curves are obtained by H2+Q1e
in cases of $\Delta_P=0.07$ and $\Delta_P=0$ (Maxwell construction),
respectively. The slight reduction of the latter compared to the former
is due to the difference between the $\Delta_P=0.07$ and $\Delta_P=0$
curves in Fig.~\ref{MRdel}.
In the right panel, the upper (lower) solid curves are obtained by
H2+Q2 (H2+Q1) in the case of $\Delta_P=0.07$,
and the upper (lower) dashed curves are by H2+Q2e (H2+Q1e).
The maximum masses are $2.15M_\odot$ ($2.25M_\odot$) for H2+Q2e (H2+Q2),
and $2.07M_\odot$ ($2.16M_\odot$) for H2+Q1e (H2+Q1).
The quark-phase onset values of the central baryon densities
are 0.54 fm$^{-3}$ (0.48 fm$^{-3}$) in the case of H2+Q1e (H2+Q1),
and 0.65 fm$^{-3}$ (0.46 fm$^{-3}$) in the case of H2+Q2e (H2+Q2).
Thus, Q2e (Q1e) leads to a larger onset density
and a smaller maximum mass than Q2 (Q1):
It is a general trend that maximum masses become smaller
as onset densities of quark phases become larger.
In our approach, there is no clear criterion to decide which of Q2 (Q1)
and Q2e (Q1e) is more appropriate. In the following section,
we use Q1 and Q2 because they seem to be more suitable than
Q1e and Q2e in the light of the recent observations for the maximum masses.
\subsection{$MR$ diagrams of hybrid stars}
\begin{figure*}[ht]
\begin{center}
\includegraphics*[width=6in,height=3in]{MR1.eps}
\caption{Hybrid-star masses as a function of radius $R$ (left panel)
and central density $\rho_{Bc}$ (right panel).
The solid curves are obtained by the Q-EoSs for (a) Q0, (b) Q1 and (c) Q2.
The short-dashed curves are by the H-EoS for H2.
The deviations of the former from the latter are due to phase transitions
from hadronic matter to quark matter. The rectangle indicates
the region of mass $2.072^{+0.067}_{-0.066}$M$_\odot$ and radius
$12.39^{+1.30}_{-0.98}$ km \cite{Riley2021} for PSR J0740+6620.}
\label{MR1}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics*[width=6in,height=3in]{MR2.eps}
\caption{Hybrid-star masses as a function of radius $R$ (left panel)
and central density $\rho_{Bc}$ (right panel),
where the Q-EoS for Q2 is used.
Short-dashed, long-dashed and dot-dashed curves are obtained with
H-EoSs for (a) H1, (b) H2 and (c) H3, respectively.
Solid curves show the deviations by transitions from hadronic-matter
to quark-matter phases. In the left panel, the rectangle indicates
the region of mass $2.072^{+0.067}_{-0.066}$M$_\odot$ and radius
$12.39^{+1.30}_{-0.98}$ km \cite{Riley2021}.
Dotted and solid line segments indicate $R_{1.4M_\odot}=12.33^{+0.76}_{-0.81}$ km
(PP model) and $R_{1.4M_\odot}=12.18^{+0.56}_{-0.79}$ km (CS model) \cite{Raaij2021},
and dashed and dot-dashed ones do
$R_{1.4M_\odot}=11.94^{+0.76}_{-0.87}$ km \cite{Peter2021} and
$R_{1.4M_\odot}=13.80 \pm 0.47$ km \cite{Brendan2021}, respectively.}
\label{MR2}
\end{center}
\end{figure*}
In Fig.\ref{MR1}, hybrid-star masses are given as a function of radius $R$
(left panel) and central baryon density $\rho_{Bc}$ (right panel).
The curves obtained by the Q-EoSs for (a) Q0, (b) Q1 and (c) Q2 are given by
solid curves, and those by the H-EoS for H2 are given by the short-dashed curves.
The former curves branch off from the latter ones at the hadron-quark phase
transitions. The maximum masses for (b) Q1 and (c) Q2 substantially exceed $2M_\odot$.
It is noted that the Q-EoS for (a) Q0 derived from $V_{EME}$ is still stiff enough
to reach $2M_\odot$ without the help of the repulsive contributions of
$V_{OGE}$ and $V_{MPP}$. The repulsive components in $V_{EME}$ come from
vector-meson and pomeron exchanges between quarks.
The rectangle in the left panel indicates the region of mass
$2.072^{+0.067}_{-0.066}$M$_\odot$ and radius $12.39^{+1.30}_{-0.98}$ km
\cite{Riley2021} for the most massive neutron star PSR J0740+6620.
The $MR$ curves for Q1 and Q2 are found to pass through this rectangle.
In the left panel of Fig.\ref{MR2}, hybrid-star masses are drawn
as a function of radius $R$, where the Q-EoSs for Q2 and
H-EoSs for (a) H1, (b) H2 and (c) H3 are used.
Short-dashed, long-dashed and dot-dashed curves are obtained with
H-EoSs for H1, H2 and H3, respectively.
Solid curves show deviations by transitions from hadronic-matter
to quark-matter phases.
The maximum masses in the figures are as follows:
In the cases of H-EoSs, they are
$1.82M_\odot$ (H1), $1.94M_\odot$ (H2) and $2.07M_\odot$ (H3).
In the cases of including hadron-quark transitions, they are
$2.25M_\odot$ (H1+Q2), $2.25M_\odot$ (H2+Q2) and $2.28M_\odot$ (H3+Q2).
The maximum masses are noted to be determined by the Q-EoSs,
being larger than those given by the H-EoSs.
The rectangle in the left panel is the same as that in Fig.\ref{MR1},
indicating the mass-radius region obtained from the observation \cite{Riley2021}.
The $MR$ curves for the Q-EoSs pass through the rectangle,
though those for the H-EoSs (dashed curves) are below this rectangle.
The radii $R$ at $1.4M_\odot$ ($R_{1.4M_\odot}$) are given as follows:
The values of $R_{1.4M_\odot}$ are 12.5 km (H1+Q2), 13.3 km (H2+Q2)
and 13.6 km (H3+Q2), obtained in the cases
including hadron-quark transitions. Similar values are obtained
by using the H-EoSs only, which means that the values of
$R_{1.4M_\odot}$ are determined by the H-EoSs.
In the figure,
dotted and solid line segments indicate $R_{1.4M_\odot}=12.33^{+0.76}_{-0.81}$ km
(PP model) and $R_{1.4M_\odot}=12.18^{+0.56}_{-0.79}$ km (CS model) \cite{Raaij2021},
and dashed and dot-dashed ones do
$R_{1.4M_\odot}=11.94^{+0.76}_{-0.87}$ km \cite{Peter2021} and
$R_{1.4M_\odot}=13.80 \pm 0.47$ km \cite{Brendan2021}, respectively.
The former three line segments (dotted, solid and dashed lines)
are similar to each other, and the $MR$ curve for H1 intersects them.
On the other hand, the $MR$ curves for H2 and H3 intersect the dot-dashed line,
but do not intersect the other three lines.
In the present stage of the observations for radii of neutron stars,
it is difficult to determine which of H1, H2 and H3 leads to the most
reasonable EoS.
In the right panel of Fig.\ref{MR2}, hybrid-star masses are drawn as
a function of central baryon density $\rho_{Bc}$, where the Q-EoS for Q2 is used.
Short-dashed, long-dashed and dot-dashed curves are obtained with
H-EoSs for H1, H2 and H3, respectively. Solid curves show the deviations
by transitions from hadronic-matter to quark-matter phases.
In the cases of including hadron-quark transitions,
the onset values of $\rho_{Bc}$ for quark phases are
0.40 fm$^{-3}$ (H1+Q2), 0.46 fm$^{-3}$ (H2+Q2) and 0.43 fm$^{-3}$ (H3+Q2).
It is useful to compare our results for $MR$ diagrams with those
in Ref.\cite{Shahrbaf2}, because we employ the method in this
reference for the hadron-quark phase transitions. Though their
quark-matter EoS is based on the nonlocal Nambu-Jona-Lasinio
(nlNJL) model, different from ours, it is found that the quark-phase
regions of the $MR$ curves in Fig.4 of \cite{Shahrbaf2} are
qualitatively similar to ours. In particular, maximum masses of
$2M_{\odot}$ are reproduced well; namely, the Q-EoSs are similarly stiff
in both our case and that of \cite{Shahrbaf2}.
However, the hadronic-matter regions are different from each other,
since softer H-EoSs are used in \cite{Shahrbaf2} than ours.
As stated before, the hyperon mixing results in remarkable softening
of the EoS. In order to avoid this ``hyperon puzzle'', the universal repulsions
modeled as MPP are included in the derivations of our H-EoSs.
Here, let us try to use the H-EoS for H1' in which the MPP repulsions work only
among nucleons. The $BB$ interaction used in \cite{Shahrbaf2} is of this type.
In Fig.\ref{MR3}, hybrid-star masses are given as a function of radius $R$
(left panel) and of central density $\rho_{Bc}$ (right panel).
The top dashed curve (a) is obtained from the H-EoS for H0 without hyperons.
The middle dashed curve (b) is from the H-EoS for H1.
The bottom dashed curve (c) is from the H-EoS for H1' including hyperons,
in which the MPP repulsions work only among nucleons.
The solid curves show the deviations by transitions from hadronic phase
to quark-matter phase for Q1.
It should be noted that the large difference from the top dashed curve to
the bottom dashed curve demonstrates the softening of the EoS by hyperon mixing.
The lowering of the maximum mass by the EoS
softening turns out to be recovered by the transition to the quark-matter
phase given by the stiff EoS. It is interesting that the curves
for H0+Q1 are similar to those for H0.
The basic feature of the $MR$ curve for H1'+Q1 is similar
to those of the curves in \cite{Shahrbaf2}.
\begin{figure*}[ht]
\begin{center}
\includegraphics*[width=6in,height=3in]{MR3.eps}
\caption{Hybrid-star masses as a function of radius $R$
(left panel) and of central density $\rho_{Bc}$ (right panel).
The top dashed curve (a) is obtained from the H-EoS for H0 without hyperons.
The middle dashed curve (b) is from the H-EoS for H1.
The bottom dashed curve (c) is from the H-EoS for H1' including hyperons,
in which the MPP repulsions work only among nucleons.
The solid curves show the deviations by transitions from hadronic phase
to quark-matter phase for Q1.}
\label{MR3}
\end{center}
\end{figure*}
Our $MR$ diagrams of hybrid stars are derived from H-EoSs for
$BB$ interactions (H1, H2, H3) and Q-EoSs for $QQ$ interactions
(Q0, Q1, Q2): There are nine combinations of H-EoSs and Q-EoSs,
among which some combinations are used in the above results.
In Table~\ref{MR}, features of the obtained $MR$ diagrams
in all combinations of (H1, H2, H3) and (Q0, Q1, Q2)
are demonstrated by showing the calculated values of
maximum masses $M_{max}$ and radii $R_{M_{max}}$,
and radii at $1.4M_{\odot}$ ($R_{1.4M_{\odot}}$).
For comparison, those for H1' and H1'+Q1 are added.
Here, the important features are as follows:
(1) In all cases, the Q-EoSs combined with the H-EoSs are stiff
enough to reproduce maximum masses over $2M_{\odot}$.
(2) The values of $R_{1.4M_{\odot}}$ are specified by the H-EoSs.
\begin{table}
\begin{center}
\caption{Maximum masses $M_{max}$ and the corresponding radii $R_{M_{max}}$,
radii at $1.4M_{\odot}$ ($R_{1.4M_{\odot}}$), and dimensionless
tidal deformabilities at $1.4M_{\odot}$ ($\Lambda_{1.4M_\odot}$).
}
\label{MR}
\vskip 0.2cm
\begin{tabular}{|l|cccc|}
\hline
& $M_{max}/M_{\odot}$ & $R_{M_{max}}$ & $R_{1.4M_{\odot}}$ & $\Lambda_{1.4M_{\odot}}$ \\
& & (km) & (km) & \\
\hline
H1 & 1.82 & 10.4 & 12.4 & 422 \\
H1+Q0 & 1.99 & 10.0 & 12.4 & 422 \\
H1+Q1 & 2.14 & 10.3 & 12.4 & 422 \\
H1+Q2 & 2.25 & 10.7 & 12.5 & 422 \\
\hline
H1' & 1.52 & 10.4 & 12.1 & 334 \\
H1'+Q1& 2.10 & 10.0 & 12.2 & 337 \\
\hline
H2 & 1.94 & 10.3 & 13.3 & 671 \\
H2+Q0 & 2.01 & 10.4 & 13.3 & 671 \\
H2+Q1 & 2.16 & 10.6 & 13.3 & 671 \\
H2+Q2 & 2.25 & 10.9 & 13.3 & 671 \\
\hline
H3 & 2.07 & 10.7 & 13.6 & 771 \\
H3+Q0 & 2.04 & 10.7 & 13.6 & 771 \\
H3+Q1 & 2.18 & 10.8 & 13.6 & 771 \\
H3+Q2 & 2.28 & 11.2 & 13.6 & 771 \\
\hline
\end{tabular}
\end{center}
\end{table}
Another constraint on the EoS is given by the tidal deformability,
which is the induced quadrupole polarizability.
The dimensionless tidal deformability $\Lambda$ is defined as
$\Lambda=(2/3)k_2 (c^2 R/GM)^5$ \cite{Abbott2017},
where $c$ is the speed of light, $R$ and $M$ are radius and mass of
a neutron star and $G$ is the gravitational constant. $k_2$ is the tidal
Love number describing the response of each star to the external disturbance.
The binary neutron star merger GW170817 gives the upper limit on the
tidal deformability of a neutron star with mass $1.4M_\odot$:
$\Lambda_{1.4M_\odot} \leq 800$ \cite{Piekarewitz2019}.
In Table~\ref{MR} are given the calculated values of $\Lambda_{1.4M_\odot}$
for our EoSs, where all values are less than the upper limit of 800.
It should be noted that the values of $\Lambda_{1.4M_\odot}$ are determined
by the H-EoSs, even if the Q-EoSs are combined with them.
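As a unit check, the dimensionless combination in the definition above can be evaluated directly. The sketch below uses $M=1.4M_\odot$, $R=12$ km, and an illustrative Love number $k_2=0.1$ (a typical order of magnitude only; the actual $k_2$ must come from solving the stellar perturbation equations):

```python
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_sun = 1.989e30     # solar mass [kg]

def tidal_lambda(k2, R_km, M_msun):
    """Dimensionless tidal deformability Lambda = (2/3) k2 (c^2 R / (G M))^5."""
    R = R_km * 1.0e3          # radius in meters
    M = M_msun * M_sun        # mass in kilograms
    return (2.0 / 3.0) * k2 * (c**2 * R / (G * M))**5

# illustrative numbers: k2 = 0.1 is assumed, not computed
lam = tidal_lambda(k2=0.1, R_km=12.0, M_msun=1.4)
```

The fifth power of the inverse compactness makes $\Lambda_{1.4M_\odot}$ very sensitive to the radius, which is why it is determined by the H-EoSs in our approach.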
\section{Conclusion}
The EoSs and $MR$ diagrams of hybrid stars are obtained
on the basis of our $QQ$ interaction model composed of
the extended meson exchange potential ($V_{EME}$),
the multi-pomeron exchange potential ($V_{MPP}$),
the instanton exchange potential ($V_{INS}$) and
the one gluon exchange potential ($V_{OGE}$), whose strengths
are determined on the basis of terrestrial data with
no ad hoc parameter to stiffen the EoSs.
The repulsive nature of our $QQ$ interaction in the high-density
region is basically given by $V_{EME}$, which includes strongly
repulsive components owing to vector-meson and pomeron exchanges.
Additional repulsions (attractions) are given by
$V_{MPP}$ and $V_{OGE}$ ($V_{INS}$).
The resultant repulsions included in our $QQ$ interaction are
so strong that the quark-matter EoSs become stiff enough
to give maximum masses of hybrid stars over $2M_\odot$.
Hadronic-matter EoSs (H-EoS) and quark-matter EoSs (Q-EoS)
are derived in the same framework based on the BBG theory.
In quark matter, density-dependent quark masses are introduced
phenomenologically, playing a decisive role in the occurrence
of phase transition.
Parameters of the density dependences are chosen so that
hadron-quark phase transitions can occur in a reasonable
density region owing to the reduction of the quark masses and
chemical potentials in quark matter. Our resulting density
dependence of effective quark mass is similar to the Brown-Rho scaling.
Our H-EoSs are still not stiff enough to give maximum masses of neutron
stars over $2M_\odot$ due to the softening by hyperon mixing, although
the stiffness is recovered substantially by universal many-body repulsions.
In the case of using our $QQ$ interaction model, the Q-EoSs are stiffer
than the H-EoSs and $MR$ curves of hybrid stars shift above those of
stars with hadronic phases only.
The maximum masses of the former, which include quark phases, become larger
than those of the latter, and the $MR$ curves are characterized by the Q-EoSs
in the mass region higher than about $1.5M_\odot$.
Our Q-EoSs for Q1 and Q2 are stiff enough to give a maximum
mass over $2M_\odot$. The derived mass and radius are consistent
with the recent measurement for the most massive neutron star
PSR J0740+6620, obtained by the combined analysis of
the NICER and other multimessenger data.
In our approach, the star radii $R_{1.4M_\odot}$ given by hadronic-matter
EoSs are not changed by the hadron-quark phase transitions; namely,
they are determined by the H-EoSs regardless of the Q-EoSs.
There are three estimates of $R_{1.4M_\odot}$ based on
the NICER measurements and the other multimessenger data.
Two of them give $R_{1.4M_\odot}=11.1 - 13.1$ km,
and the other $R_{1.4M_\odot}=13.1 - 14.4$ km.
Our H-EoS for H1 (H2 or H3) is consistent with the former (latter).
Our H-EoSs and Q-EoSs lead to $MR$ diagrams of hybrid stars
consistent with the recent observations for masses and radii.
\newpage
\section*{Acknowledgments}
{The authors would like to thank D. Blaschke
for valuable comments and fruitful discussions.
This work was supported by JSPS KAKENHI (No.20K03951 and No.20H04742).}
\section{Introduction}
Observations of low surface brightness galaxies and dwarf galaxies
indicate that the cores of galactic halos have shallow density
profiles \citep{dal97,swt00} instead of central cusps
predicted by cold dark matter (CDM) \citep{nfw97}.
In addition, the number density of dwarf
galaxies in the Local Group turns out to be an order of magnitude smaller
than that produced by CDM simulations \citep{kkvp99}. These two features cast doubt on the validity of
standard CDM. There have been at least three different solutions
proposed to resolve these problems: (1) warm dark matter, (2)
collisional dark matter and (3) fuzzy dark matter.
Warm dark matter can suppress small-scale structures by free
streaming. It seems to be able to both solve
the over-abundance problem of dwarf galaxies and the singular core problem.
In this model the flat core is embedded within a radius a couple of percents of the virial radius \citep{jing01,colins08}, and the core smoothly connects to the NFW profile \citep{nfw97} outside.
However, this modification may generally adversely affect structures of
somewhat larger scales \citep{hu00}, although fine tuning of
the thermal velocity of dark matter particles may still keep
the larger-scale structures consistent with observations \citep{abazajian06, viel08}.
For collisional dark matter, the halo core can be flattened and dwarf
galaxies destroyed, and N-body simulations
confirm this conjecture \citep{spst00}. But simulations
also show that very frequent collisions can yield even more
singular cores than the standard collisionless CDM does \citep{yoshida00}. This opposite behavior
indicates that the collisional parameters require fine tuning.
The third solution to the problem is to treat dark matter as
\emph{extremely light bosonic dark matter} (ELBDM) or \emph{fuzzy dark matter} \citep{pr90,sin94,hu00}.
The axion has been thought to be a candidate for light bosonic dark
matter. But for the light dark matter to erase the singular galactic core
and suppress low-mass halos, the particle mass must be far smaller than
that of the axion ($m\sim 10^{-22}$ eV), so low that the uncertainty
principle operates on astronomical length scales. Much like
axions, the ELBDM is in a Bose-Einstein condensed state produced in
the early universe. These extremely light particles share a common
ground state and are described by a single coherent wave function. Their
de Broglie wavelength is comparable to or even somewhat smaller than the
Jeans length \citep{dw97}, where the quantum
fluctuation provides effective pressure against self-gravity.
Several previous works have pondered on such an idea or its variants \citep{sin94,hu00,slopez03}, in which the wave mechanics
is described by the Schr\"{o}dinger-Poisson equation with Newtonian
gravity or by the Klein-Gordon equation with gravity. The Schr\"{o}dinger-Poisson system addresses the scale-free regime of quantum mechanics, where the Jeans length is a dynamical running parameter. On the other hand, the Klein-Gordon system makes use of the Compton wavelength as a natural length scale to create the flat core in a halo. Widrow \& Kaiser conducted simulations for the two-dimensional Schr\"{o}dinger-Poisson system to approximate the standard
collisionless cold dark matter \citep{wk93}. In the 2D case, the $1/r$ gravitational potential is replaced by $\log(r)$, and
the 2D force law in their simulation becomes of longer range than it
actually is in 3D. Due to the lack of 3D numerical simulations,
some authors resort to spherical symmetry \citep{sin94,slopez03} or even 1D \citep{hu00} to study this
problem. These simplifications may not capture what actually results
in a 3D system with realistic initial conditions. In particular, the
existence of a flattened core has been derived or inferred from these
previous works of 1D system or with spherical symmetry. In this paper we report high-resolution fully 3D
simulations for this problem. Surprisingly, our simulations
reveal that the singular cores of bound objects persist even when
the core size is much smaller than the Jeans length.
In Sec.2, we provide an explanation for the possible existence
of the Bose-Einstein state for the extremely low mass bosons under
investigation here. We then discuss two different representations of
ELBDM and the evolution of linear perturbations for the two representations.
In Sec.3, the numerical
scheme and initial condition are described. We present the simulation results in Sec.4.
In Sec.5, we look into the physics of collapsed cores with detailed analyses
from different perspectives. Finally the conclusion is given in Sec.6.
In the Appendix, we present results of 1D and 2D simulations and demonstrate that singular cores do not arise in the 1D and 2D cases.
\section{Theory}
\subsection{Bose-Einstein Condensate}
A Bose-Einstein condensate (BEC) is a state of bosons cooled to a
temperature below the critical temperature. BEC happens after a
phase transition where a large fraction of particles condense into
the ground state, at which point quantum effects, such as
interference, become apparent on a macroscopic scale. The critical
temperature for a gas consisting of non-interacting relativistic
particles is given by \citep{bh96}:
\begin{equation}
T_c\sim (\frac{n_{ch}}{3 m})^{1/2},
\end{equation}
where the Boltzmann's constant and speed of light have been set to
unity. Given the extremely low particle mass assumed here, $T_c$ is
derived from the relativistic Bose-Einstein particle-antiparticle
distribution with the chemical potential set to the particle mass $m$. Here the ``charge''
density $n_{ch}\equiv n_+ - n_-$, where $n_+$ and $n_-$ are the
number densities of particles and antiparticles in excited states.
On the other hand, we have $n_{ch}\sim (m/T) n_+$, and it follows
that $T_c\sim(\frac{n_+}{3T})^{1/2}$. Note that $n_+$ scales as
$a^{-3}$ and $T$ as $a^{-1}$, and it follows that $T_c$ scales as $a^{-1}$.
It means that when $T$ is below $T_c$ at some time after a phase
transition, the temperature will remain sub-critical in any later epoch.
As an estimate, if we assume one percent of ELBDM to be in the excited states after its decoupling, the current critical temperature becomes
\begin{equation}
T_c=3 \times 10^{-14}(\frac{m}{eV})^{-1/2} (\frac{T}{eV})^{-1/2} eV.
\end{equation}
Substituting $m\sim 10^{-22}$ eV and $T\sim 10^{-4}$ eV, the same as the present
photon temperature, we find that the current critical temperature is
$T_c=0.3~{\rm eV} \gg T$. Hence ELBDM, if it exists and accounts for the dark matter,
may very well have been in the BEC state ever since a phase transition in the early universe.
Although ELBDM particles in excited states have a relativistic temperature, almost all particles are in the ground state and are described by a single non-relativistic wave function.
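The numerical estimate above follows directly from Eq.(2). A one-line sketch (the one-percent excitation fraction is the assumption already stated in the text):

```python
def T_critical(m_eV, T_eV):
    """Current critical temperature of Eq.(2) in eV, assuming one
    percent of ELBDM to be in excited states after decoupling."""
    return 3.0e-14 * m_eV**-0.5 * T_eV**-0.5

# m ~ 1e-22 eV and T ~ 1e-4 eV (present photon temperature)
Tc = T_critical(1.0e-22, 1.0e-4)   # evaluates to 0.3 eV, far above T
```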
\subsection{Basic Analysis}
The Lagrangian of non-relativistic scalar field in the comoving
frame is
\begin{equation}
L={a^3\over 2}[i\hbar(\psi^*{\partial\psi\over\partial t}-
\psi{\partial\psi^*\over\partial t}) +{\hbar^2\over
a^2m}(\nabla\psi)^2-2mV\psi^2],
\end{equation}
and the equation of motion for this Lagrangian gives a modified form
of Schr\"{o}dinger's Equation \citep{slopez03} :
\begin{equation}
i\hbar \frac{\partial\psi}{\partial t}=-\frac{\hbar ^{2}}{2
a^2m}\nabla ^{2}\psi + m V\psi,
\end{equation}
where $\psi\equiv \phi ({n_0}/a^3)^{-1/2}$ with $\phi$ being the
ordinary wave function, $n_0$ the present background number density
and $V$ is the self-gravitational potential obeying the Poisson
equation,
\begin{equation}
\nabla ^{2} V = 4 \pi G a^2\delta\rho = {4 \pi G \over a}{
\rho_0}(|\psi|^{2} -1).
\end{equation}
The only modification to the conventional Schr\"{o}dinger-Poisson
equation is the appearance of $a^{-1}$ associated with the comoving
spatial gradient $\nabla$, and the probability density
$|\psi|^2$ is normalized to the background proper density $\rho /m$.
In the above,
\begin{equation}
\rho_{0}\equiv \frac {3 H_{0} ^2}{8 \pi G} \Omega_{m}=m n_0
\end{equation}
is the background mass density of the universe.
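For reference, Eq.(6) evaluates to the familiar matter density. A sketch in SI units ($H_0=70$ km/s/Mpc and $\Omega_m=0.3$ are assumed fiducial values, not parameters fixed by the text):

```python
import math

G = 6.674e-11                  # gravitational constant [m^3 kg^-1 s^-2]
H0 = 70.0e3 / 3.0857e22        # 70 km/s/Mpc converted to [1/s]
Omega_m = 0.3                  # assumed matter fraction

# rho_0 = 3 H0^2 / (8 pi G) * Omega_m, as in Eq.(6)
rho_0 = 3.0 * H0**2 / (8.0 * math.pi * G) * Omega_m   # [kg/m^3]
```

This gives $\rho_0$ of a few $\times 10^{-27}$ kg/m$^3$, i.e. a few hydrogen atoms per cubic meter.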
To explore the nature of the ELBDM, we first adopt the
hydrodynamical description to investigate its linear evolution. This
approach is not only more intuitive than the wave-function
description; its advantage will also become apparent later.
Let the wave function be
\begin{equation}
\psi = \sqrt{\frac{{n}}{n_0}} e ^{i \frac{S}{\hbar}},
\end{equation}
where ${n}=\bar{n} a^3$ is the comoving number density and ${\bar{n}}$ is the proper number density. Schr\"{o}dinger's equation can
be split into real and imaginary parts, which become the equations
of acceleration and density separately,
\begin{equation}
\frac{\partial}{\partial t} {\bf v}+ \frac{1}{a^2}{\bf
v}\cdot\nabla{\bf v} + \frac{\nabla V}{m} - \frac {\hbar ^{2}}{2 m^2
a^2} \nabla (\frac {\nabla ^{2} \sqrt{{n}}}{\sqrt{{n}}})=0
\end{equation}
\begin{equation}
\frac{\partial {n}}{\partial t} +{1\over a^2}\nabla \cdot
({n} {\bf v})=0,
\end{equation}
where ${\bf v}\equiv\nabla S/m$ is the fluid velocity. There is a
new term depending on the third-order spatial derivative of the wave
amplitude $\sqrt{n}$ in the otherwise cold-fluid force equation.
This term results from the ``quantum stress'' that acts against
gravity, and it can be cast into a stress tensor in the energy and
momentum conservation equation \citep{chiu98,chiu00}. The quantum stress
becomes effective only when the spatial gradient of the structure is
sufficiently large.
The fluid equations, Eqs.(5),(8) and (9), are linearized and combined
to yield
\begin{equation}
\frac{\partial}{\partial t}a^2\frac{\partial}{\partial t}\delta
n-\frac{3 {H_0}^2 \Omega_m}{2a}\delta n
+\frac{\hbar^2}{4m^2a^2}\nabla^2\nabla^2\delta n=0.
\end{equation}
Upon spatially Fourier transforming $\delta n$, it follows that
\begin{equation}
\frac{d}{d t}a^2\frac{d n_k}{d t}-({3 {H_0}^2 \Omega_m \over {2a}})
n_k +\frac{\hbar^2k^4}{4m^2 a^2}n_k=0,
\end{equation}
which can be recast into
\begin{equation}
x^2\frac{d^2}{dx^2}n_k+(x^2-6)n_k=0,
\end{equation}
and the solution to this equation is
\begin{equation}
n_k =\frac{(3\cos{x}-x^2\cos{x}+3x\sin{x})}{x^2}
\end{equation}
where $x\equiv \hbar k^2/(m H_0 \sqrt{a})$, and $a=(t/t_0)^{2/3}$
and $\Omega_m = 1$, appropriate for the early universe, have been assumed.
In the small-$k$
limit, $x$ is small and $n_k\sim x^{-2}$, which grows in time as
$a$; for large $x$ the solution oscillates. Fig.(1) depicts the
solution, Eq.(13). From Eq.(12) we can easily identify the oscillating
solutions when $x^2 \geq 6$, thereby defining the
Jeans wave number:
\begin{equation}
k_{J} = (6a)^{1/4}(\frac{mH_{0}}{\hbar})^{1/2}.
\end{equation}
Beyond the Jeans wave number, the perturbation is suppressed by
quantum stress. Moreover, the Jeans wave number scales as $a^{1/4}$
and is proportional to $m^{1/2}$ \citep{hu00}. We shall come back in a later section to examine up to what evolutionary stage the linear solution of $n_k$ can remain valid.
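As a sanity check (not part of the original analysis), the closed-form solution of Eq. (13) can be verified numerically against Eq. (12), together with its small-$x$ growing behavior:

```python
import numpy as np

def n_k(x):
    # growing-mode solution, Eq. (13)
    return (3*np.cos(x) - x**2*np.cos(x) + 3*x*np.sin(x)) / x**2

# residual of Eq. (12):  x^2 n'' + (x^2 - 6) n = 0
x = np.linspace(0.5, 20.0, 200)
h = 1e-3
d2n = (n_k(x + h) - 2*n_k(x) + n_k(x - h)) / h**2   # central difference
residual = x**2 * d2n + (x**2 - 6) * n_k(x)
print(np.max(np.abs(residual)))     # ~0 up to discretization error

# small-x limit: n_k -> 3/x^2, so the mode grows as a (x scales as a^{-1/2})
print(n_k(0.01) * 0.01**2)          # ≈ 3
```

The second print confirms that $x^2 n_k \to 3$ as $x\to 0$, i.e. the low-$k$ modes grow as the scale factor $a$.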
Next, we linearize the Schr\"{o}dinger-Poisson equation to derive
a governing equation for an alternative wave-function representation.
The wave function can be separated into real and imaginary parts,
\begin{equation}
\psi = 1+ R+ iI,
\end{equation}
with $R$, $I \ll 1$. In the linear regime, we have the real part of
linearized Schr\"{o}dinger's equation
\begin{equation}
-\hbar \frac{ \partial}{\partial t}I =-\frac{\hbar^2}{2 a^2
m}\nabla^{2}R + m V,
\end{equation}
and the imaginary part
\begin{equation}
\hbar \frac{\partial R}{\partial t} = -\frac{\hbar^2}{2 a^2
m}\nabla^{2}I.
\end{equation}
The Poisson equation becomes
\begin{equation}
\nabla^2 V= \frac{8\pi G}{a} \rho_0 R.
\end{equation}
The spatial Fourier components of gravitational potential and
$\psi$ satisfy
\begin{equation}
V_k=-\frac{8\pi G \rho_0}{a}\frac{R_k}{k^2},
\end{equation}
\begin{equation}
-\hbar \frac{d}{dt}I_k =\frac{\hbar^2 k^{2}}{2 a^2 m}R_k + m V_k,
\end{equation}
and
\begin{equation}
I_k=\frac{2 a^2 m}{\hbar k^2} \frac{dR_k}{dt}.
\end{equation}
Combining the above, we obtain, as in Eq. (11),
\begin{equation}
\frac{d}{d t} a^2 \frac{d}{d t}R_k
- (\frac{3 {H_0}^2 \Omega_m}{2a})R_k + \frac{\hbar^2 k^4}{4 m^2 a^2} R_k=0,
\end{equation}
and $R_k$ has, up to a constant factor, the same solution as $n_k$.
Note that since $dR_k/dt = \dot{a} R_k/a$ for low-$k$ modes, it follows
that $|I_k|= (2mH_0 a^{1/2}/\hbar
k^2)|R_k|=({k_{J}}^2/\sqrt{3}k^2)|R_k| \gg |R_k|$. This feature will
serve as one of the indicators for the validity of the linear regime
in the wave function representation.
\begin{figure}
\begin{center}
\includegraphics[width=12cm,angle=0]{f1.eps}\\
\caption{The solution of the linear perturbation given by Eq.(13). The vertical
line labels the location of the Jeans wavenumber, where $x^2=6$.}\label{fig:Solution}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=12cm,angle=0]{f2.eps}\\
\caption{The linear evolution of the power spectrum from $z=100$ to $z=10$ in a 1 $h^{-1}$Mpc box. The low-$k$ power obeys the linear scaling $\propto a^2$. }\label{fig:Power_Evolve_Linea}
\end{center}
\end{figure}
\section{Numerical Scheme and Simulations}
\subsection{Numerical Scheme}
We normalize the length to the computational grid size, $\Delta$, and
further define $\eta=(m \Delta^2 H_{0})/{\hbar}$. The value of
$\eta$ determines the size of the Jeans length relative to the computational grid size. Set $\nabla =
\frac{1}{\Delta} \widetilde{\nabla}$. The dimensionless
Schr\"{o}dinger-Poisson equations become
\begin{equation}
i\frac{\partial\psi}{\partial \tau}= -\frac{1}{2 a^2
\eta}\widetilde{\nabla} ^{2}\psi + \frac{3\Omega_m \eta}{2a} U\psi,
\end{equation}
and
\begin{equation}
\widetilde{\nabla}^{2} U = (\vert \psi \vert ^{2}- 1),
\end{equation}
where $U(x)={V(x)}/(\frac{3\Omega_{m} \eta}{2a})$ is the
dimensionless gravitational potential, and $\tau=H_{0} t$.
Given a Hamiltonian $\bf {H}$, one can evolve the wave function
through Eq. (23). It is simply a unitary transformation of the system,
\begin{equation}
\psi^{j+1}= e^{-i{\bf{H}}dt}\psi^{j}.
\end{equation}
We use the \emph{pseudo-spectral method} to solve the Schr\"{o}dinger
equation in the comoving box. Let ${\bf{K}}$ be the kinetic energy
operator (${\bf{K}}=-\frac{1}{2\eta a^2}\widetilde{\nabla}^2
\rightarrow {{k}}^2/2\eta a^2$ in Fourier space)
and $\bf{W}$ the potential operator (${\bf{W}}=\frac{3\Omega_m \eta}{2
a}U$ in real space). The evolution is then split into
\begin{equation}
e^{-i{\bf{H}}dt}=e^{-i({\bf{K}+\bf{W}})dt}=1-i({\bf{K+W}})dt-\frac{1}{2}{\bf(K^2+KW+WK+W^2)}
dt^2+O(dt^3).
\end{equation}
On the other hand, we need to account for the non-commutativity
of ${\bf{K}}$ and ${\bf{W}}$:
\begin{equation}
e^{-i{\bf{K}}{dt}}e^{-i{\bf{W}} dt}=1-i({\bf{K+W}})
dt-\frac{1}{2}{\bf{K}}^2 dt^2-\frac{1}{2}{\bf{W}}^2 dt^2-{\bf{KW}}
dt^2+O(dt^3),
\end{equation}
\begin{equation}
e^{-i{\bf{W}}{dt}}e^{-i{\bf{K}} dt}=1-i({\bf{W+K}})
dt-\frac{1}{2}{\bf{W}}^2 dt^2-\frac{1}{2}{\bf{K}}^2 dt^2-{\bf{WK}}
dt^2+O(dt^3).
\end{equation}
It follows that, to second-order accuracy,
\begin{equation}
e^{-i{\bf{(K+W)}}dt}\approx \frac{1}{2}[e^{-i{\bf{K}}
dt}e^{-i{\bf{W}} dt}+e^{-i{\bf{W}} dt}e^{-i{\bf{K}} dt}],
\end{equation}
which will be adopted to advance the time steps.
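The second-order accuracy of the symmetrized product (local error $O(dt^3)$, since the $\frac{1}{2}[{\bf W},{\bf K}]dt^2$ errors of the two orderings cancel) can be checked numerically with small Hermitian matrices standing in for ${\bf K}$ and ${\bf W}$ (an illustrative sketch with random matrices, not the actual operators):

```python
import numpy as np

def expm_herm(H, t):
    # exp(-i H t) for a Hermitian matrix H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); K = 0.5 * (A + A.T)
B = rng.standard_normal((6, 6)); W = 0.5 * (B + B.T)

errs = []
for dt in (1e-1, 1e-2):
    exact = expm_herm(K + W, dt)
    split = 0.5 * (expm_herm(K, dt) @ expm_herm(W, dt)
                   + expm_herm(W, dt) @ expm_herm(K, dt))
    errs.append(np.linalg.norm(exact - split))

print(errs)  # error shrinks by ~1000x as dt shrinks by 10x, i.e. O(dt^3)
```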
For each time step, the kinetic energy
operator is calculated in the Fourier domain,
\begin{equation}
{\psi_k}^{j+1}=e^{-i\frac{{k}^2}{2\eta
a^2}dt}{\psi_k}^{j}
\end{equation}
and $\psi$ is advanced in real space with the
potential energy operator,
\begin{equation}
{\psi({\bf x})}^{j+1}=e^{-i\frac{3 \Omega_m \eta U}{2a}dt}{\psi({\bf
x})}^{j}.
\end{equation}
To ensure numerical stability, we restrict the magnitude of $dt$
so that the phase angle of the wave function rotates by less than
$\frac{\pi}{4}$ in each time step,
\begin{equation}
dt \leq \frac{\pi}{2} \frac{\eta a^2}{{k_{max}}^2},
\end{equation}
\begin{equation}
dt \leq |\frac{\pi a }{6 \Omega_m \eta U_{max}}|.
\end{equation}
In the early stage ($a \sim 10^{-3}$), the stability condition is
governed by the kinetic energy term, where
${k_{max}}^2=3{\pi}^2$ and $dt \leq {{(6 \pi)}^{-1}} (\eta
a^2)$. At late times, the gravitational potential becomes ever
deeper, and therefore $dt$ is controlled by the potential energy,
where $U_{max}$ is the greatest value of the potential in real
space.
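Putting Eqs. (23)-(32) together, one time step of the scheme can be sketched in a few lines of Python. This is a one-dimensional toy (grid units $\Delta=1$, illustrative parameter values, expansion frozen over the few steps shown), not the production code:

```python
import numpy as np

def poisson_U(psi, k2):
    # Dimensionless Poisson equation, Eq. (24): laplacian(U) = |psi|^2 - 1,
    # solved in Fourier space; the k=0 mode is dropped (zero-mean source).
    src_k = np.fft.fft(np.abs(psi)**2 - 1.0)
    Uk = np.zeros_like(src_k)
    m = k2 > 0
    Uk[m] = -src_k[m] / k2[m]
    return np.fft.ifft(Uk).real

def split_step(psi, dt, a, eta, Omega_m, k2):
    # One symmetrized step, Eq. (29):
    #   psi <- 0.5*(e^{-iK dt} e^{-iW dt} + e^{-iW dt} e^{-iK dt}) psi
    U = poisson_U(psi, k2)
    kin = np.exp(-1j * k2 / (2.0 * eta * a**2) * dt)              # Eq. (30)
    pot = np.exp(-1j * 3.0 * Omega_m * eta * U / (2.0 * a) * dt)  # Eq. (31)
    psi_kw = np.fft.ifft(kin * np.fft.fft(pot * psi))   # W first, then K
    psi_wk = pot * np.fft.ifft(kin * np.fft.fft(psi))   # K first, then W
    return 0.5 * (psi_kw + psi_wk)

# one-dimensional toy setup on N grid points
N = 64
k = 2.0 * np.pi * np.fft.fftfreq(N)       # grid units, Delta = 1
k2 = k**2
rng = np.random.default_rng(0)
psi = 1.0 + 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

a, eta, Omega_m = 1e-2, 1.22e-2, 1.0
dt = 0.5 * (np.pi / 2.0) * eta * a**2 / k2.max()   # kinetic bound, Eq. (32)
for _ in range(10):
    psi = split_step(psi, dt, a, eta, Omega_m, k2)

print(np.mean(np.abs(psi)**2))   # stays ~1: mass conserved up to O(dt^2)
```

The averaged propagator is not exactly unitary, so the mean of $|\psi|^2$ is conserved only up to the $O(dt^2)$ splitting error per step.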
\subsection{Simulation Scale}
We prepare the initial conditions with CMBFAST \citep{cmbfast96} at $z=1000$ with $\Lambda$CDM cosmology. Such initial conditions differ from those of \citet{hu00},
where the Compton length of ELBDM already has imprints on the power spectrum
at $z=1000$. We choose this initial condition because only a few low-$k$ modes can grow for our choice of Jeans length, and the details of the initial power spectrum are irrelevant at late times.
The simulations run up to $1024^{3}$-grid resolution in a 1 $h^{-1}$Mpc comoving box. For simulations in a much larger box, the background density averaged over a 1 $h^{-1}$Mpc box can often change with time, the so-called environment effect, whereby galaxies prefer to form in regions of high background density. Here, we ignore the environment effect by fixing $\Omega_m=0.3$.
We let the dimensionless parameter $\eta=1.22 \times 10^{-2}$ and $4.88\times 10^{-2}$ for the $1024^3$ and $512^3$ simulation boxes, respectively, which give a Jeans wavelength of 50 kpc at $z=0$. This value of $\eta$ corresponds to $m \sim 2.5 \times 10^{-22}$ eV. In the rest of this paper, we shall report only the simulation results of the highest resolution.
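The quoted particle mass follows from inverting the definition $\eta = m\Delta^2 H_0/\hbar$ with $\Delta = (1/1024)\,h^{-1}$Mpc; a quick numerical check (assuming $h=1$, so $H_0=100$ km/s/Mpc, for the $h^{-1}$ units):

```python
import numpy as np

hbar = 1.0546e-34        # J s
c    = 2.9979e8          # m/s
eV   = 1.6022e-19        # J
Mpc  = 3.0857e22         # m

eta   = 1.22e-2                   # value used in the 1024^3 run
Delta = Mpc / 1024                # grid size, assuming h = 1
H0    = 100e3 / Mpc               # 100 km/s/Mpc in s^-1

m = eta * hbar / (Delta**2 * H0)  # invert eta = m Delta^2 H0 / hbar
m_eV = m * c**2 / eV
print(f"m ~ {m_eV:.2e} eV")       # ~2.5e-22 eV
```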
\section{Results}
\subsection{Validity of Linear Perturbation Theory}
We depict the early evolution of the density power spectrum in Fig.(2)
from $z=100$ to $z=10$. The density power spectrum increases as $a^2$
for modes with wave number $\ll k_{J}$, and $k_{J}$ indeed increases
as $a^{1/4}$. These features are in agreement with what the linear
perturbation theory predicts.
On the other hand, we show the early evolution of $4|R_k|^2$ and
$4|I_k|^2$ in Fig.(3) from $z=400$ to $z=10$. As expected the
low-$k$ modes grow initially, while the high-$k$ modes are
suppressed by quantum pressure. It is surprising to find that the
linear evolution of $\psi_k$ is valid only for a short period of
time before $z = 200$. After that, the wave function deviates from
what the linear theory predicts. In particular, the linear theory of
wave function representation predicts that $R_k$ and $I_k$ grow as
$a$ and $a^{3/2}$ respectively, and $|I_k|^2=({k_{J}}^4 /3 k^4)
|R_k|^2 \gg |R_k|^2$ for low-$k$ modes. This prediction agrees with
the simulation result only when $z>200$.
At a somewhat later epoch
than $z=200$, we observe that the difference between $|I_k|^2$ and $|R_k|^2$
diminishes, and at $z=10$ we find $|I_k|^2 \approx |R_k|^2$.
This problem is also manifested in the growth rate of the linear
density power spectrum $|n_k|^2\approx 4|R_k|^2$. It is found that
$4|R_k|^2$ indeed scales as $a^2$ before $z=200$. When $z < 200$,
the quantity $4|R_k|^2$ grows much faster than $a^2$, and $4|R_k|^2$
becomes about one order of magnitude greater than what the linear
theory predicts at $z=100$. That is, the density power spectrum $|n_k|^2$
vastly deviates from, and is much less than, $4|R_k|^2$ even from early on in the
evolution.
To examine this peculiar feature, we construct the imaginary part of the wave function $I(\bf{x})$ from the $I_k$ of a few of the
lowest-$k$ modes and depict $|(I^2)_k|^2$ on the same plot as
$4|R_k|^2$ at $z=100$ in Fig.(3). It is found that
$|{(I^2)}_k|^2$ coincides with $4|R_k|^2$ at low $k$. Since $n_k \approx 2R_k + (I^2)_k$, it follows that $(I^2)_k$ has the opposite
sign but approximately the same magnitude as $2R_k$, so that
the two terms of $n_k$ almost cancel to yield $|n_k|^2 \ll 4|R_k|^2$.
Thus nonlinearity already sets in as early as $z=100$ in
the wave function representation. On the other hand, the fluid representation
does not have such a problem. We find that $|n_k|^2$ of low-$k$ modes in the
fluid representation agrees with what the linear fluid theory predicts even as late as
$z = 1$, even though the high-$k$ modes have already become nonlinear.
The difference arises because $(\nabla S/m)(\equiv \bf{v})$ in the
fluid representation remains small at low $k$, in contrast to $I=S$
in the wave function representation, which has a large amplitude for low-$k$ modes
even at high $z$.
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=12cm,angle=0]{f3.eps}\\
\caption{Evolution of $4|R_k|^{2}(z)$ and $4|I_k|^{2}(z)$ to check against the linear theory. Deviation from the linear theory is evident from $z=200$ on. Black dots are $|(I^2)_k|^2$ at $z=100$, constructed from a few low-$k$ modes to show the cancellation between $2R_k$ and $(I^2)_k$ that makes $n_k \ll 2R_k$. }\label{fig:4R2_4I2}
\end{center}
\end{figure}
\subsection{Weakly Nonlinear Regime}
\begin{figure}
\begin{center}
\includegraphics[width=11cm,angle=0]{f4.eps}
\caption{The weakly nonlinear evolution of the power spectrum from
$z=10$ to $z=1.5$, where the high-$k$ modes are seen to be nonlinearly excited.
}\label{fig:z_10_z_1}
\end{center}
\end{figure}
Shown in Fig.(4) is the evolution of $|n_k|^2(=|2R_k+(R^2+I^2)_k|^2)$ for $1.5<z<10$.
The initial $n_k$ of the high-$k$ modes has been linearly suppressed and is later replaced by high-$k$ modes that are nonlinearly generated beginning around $z=5$. The nonlinear coupling arises from the term $V\psi$ in the Schr\"{o}dinger equation. Since $V$ is dominated by low-$k$ modes, the nonlinear coupling transfers modal energy locally in $k$ space from one mode to the neighboring mode, and from low $k$ to high $k$. The gravitational potential $V$ barely evolves in the weakly nonlinear regime, and hence the dynamics in this regime is for the wave function to settle into almost static potential wells.
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=8cm]{f5a.eps}
\includegraphics[width=7.8cm]{f5b.eps}\\
\caption{Two-dimensional projections of density in real space in a 1$h^{-1}$Mpc comoving box at z=3 (left panel) and z=0 (right panel). Halo A and B are at the top left and bottom left.}
\label{fig:2D_project1}
\end{center}
\end{figure}
We note that if $V$ were exactly static, the Schr\"{o}dinger equation would be linear, and the coupling from low- to high-$k$ modes would simply be the linear evolution of a wave function starting from a non-eigenstate. This argument may explain why the linear theory
of low-$k$ modes works so well even after high-$k$ modes are excited in this regime. Shown in the left panel of Fig.(5) is the real-space configuration of $\delta n$ at $z=3$ in this weakly nonlinear regime, where the settling of the wave function into individual quasi-static potential wells is underway.
\subsection{Strongly Nonlinear Regime}
Plotted in Fig.(6) is the evolution of the density power spectrum after $z=1.5$. This is the strongly nonlinear regime, where the gravitational potential develops deep wells at the collapsed cores.
To illustrate the contribution of the few collapsed objects to the final high-$k$ power spectrum, $P(k,0)$, at $z=0$, we remove all matter outside the virial radii of these collapsed halos and construct the power spectrum of these artificial objects, ${P_h}(k,0)$. The power spectrum of the removed matter, $P_{b}(k,0)$, is also constructed for reference. The two power spectra, along with the original power spectrum, are depicted in Fig.(7) for comparison. Clearly the bound objects contribute almost all the power contained in the original power spectrum, except for the low-$k$ modes, which are contributed dominantly by $P_b(k,0)$. These few lowest-$k$ modes grow out of the initial noise and remain so in the final configuration. That is, even though the initial condition possesses many independent degrees of freedom, the final configuration has only a few degrees of freedom, and almost all the randomly placed, small collapsed objects seen in standard CDM simulations are entirely suppressed.
During the final collapse phase, the core undergoes large-scale oscillations that send out waves to remove excess angular momentum deposited into the core region, allowing the core to settle into an almost stationary configuration in physical space. Fig.(8) shows the wavy structures of this nature around the collapsing halo A at $z=1$. Even in the quasi-stationary state of these halos at $z = 0$, we find that this wave phenomenon is still pronounced around the halos, as will be discussed later.
There are two collapsed halos, A and B, of mass $5.7 \times 10^{9} M_{\odot}$
and $5 \times 10^{9} M_{\odot}$ respectively, at $z=0$ in our simulation, as shown in the right panel of Fig.(5). Halo B undergoes a major merger around $z=0.7$. Shown in Fig.(9) is halo B before and after the major merger. The final density profiles of halos A and B are plotted in Fig.(10). Interestingly, they both develop singular cores, in spite of the presence of quantum pressure. Both power-law singular cores have a power index $-1.4$, reminiscent of that of the standard cold dark matter \citep{ginamor00}. The density profiles of the outskirts also obey power laws, with a power index $-2.5$, slightly shallower than that produced by the cold dark matter \citep{nfw97}. Similar density profiles arising from different formation processes, i.e., accretion versus merger, suggest that the profile may be universal.
\begin{figure}
\begin{center}
\includegraphics[width=12cm,angle=0]{f6.eps}\\
\caption{The nonlinear evolution of the power spectrum from
$z=1.5$ to $z=0.0$. The highest-$k$ modes acquire their full power after $z=0.4$, indicative of the creation of singular halo cores.
}\label{fig:z_1_z_0}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=11cm,angle=0]{f7.eps}
\caption{The comparison of the halo power spectrum $P_{h}$ (circle), the background power spectrum $P_{b}$ (triangle) and the full power spectrum $P$ (square) at $z=0$. Note that $P_{h}$ matches $P$ at high $k$ and $P_b$ matches P at low $k$. }\label{fig:Power_cut_vs_no_cut}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{f8a.eps}
\includegraphics[width=8cm,angle=0]{f8b.eps}
\caption{Waves are sent out from the collapsing halo A at $z=1$ (left panel); it develops an oblate singular halo at $z=0$ (right panel). }\label{fig:wave_structure}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm,angle=0]{f9a.eps}
\includegraphics[width=8cm,angle=0]{f9b.eps}
\caption{Halo B undergoes a major merger at $z=0.7$. The left panel reveals the progenitors at $z=1$ and the right panel shows a singular halo with a high degree of spherical symmetry at $z=0$. }\label{fig:major_merger_before}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=6cm]{f10a.eps}
\hspace{0.5cm}
\includegraphics[width=6cm]{f10b.eps}\hspace*{\fill}\\
\caption{Density profiles of two massive halos at z=0. The left panel plots the profile of halo A and the right panel the profile of halo B. In both panels dot-dash lines and solid lines denote the power law of indices $-1.4$ and $-2.5$, respectively.}
\label{fig:Density_Profile}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=14cm,angle=0]{f11.eps}\\
\caption{A two-dimensional slice of density for halo A through the core.}\label{fig:one_slice_image_A}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=13cm,angle=-90]{f12.eps}\\
\caption{The same two-dimensional slice of the velocity field for halo A in the comoving frame.}\label{fig:Velocity_Field_Halo_A}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=14cm,angle=0]{f13.eps}\\
\caption{A two-dimensional slice of density for halo B through the core}\label{fig:one_slice_image_B}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=13cm,angle=-90]{f14.eps}\\
\caption{The same two-dimensional slice of the velocity field for halo B in the comoving frame.}\label{fig:Velocity_Field_Halo_B}
\end{center}
\end{figure}
Note that the final collapsed core contains angular momentum through the angular dependence of the wave function. The angular dependence manifests itself as large-amplitude, small-scale fluctuations in the wave function. To examine this aspect of the halo, we let the wave function be represented by $\psi= f e^{iS}$. The specific kinetic energy is obtained through the real part of the expression:
\begin{equation}
-{{\psi^*\nabla^2 \psi}\over{2{\eta}^2 {|\psi}|^2}} = {{1}\over {\eta}^2}[(\frac{(\nabla S)^2}{2} - \frac{\nabla^2 f}{2f} ) - i (\frac{\nabla f}{f} \cdot \nabla S + \frac{\nabla^2 S}{2})].
\end{equation}
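For completeness, this expression follows from substituting $\psi = f e^{iS}$, using $\psi^* = f e^{-iS}$ and $|\psi|^2 = f^2$ (a short check of the algebra):

```latex
\begin{align*}
\nabla^2\psi &= \left(\nabla^2 f + 2i\,\nabla f\cdot\nabla S
   + i f\,\nabla^2 S - f\,(\nabla S)^2\right)e^{iS},\\
-\frac{\psi^*\nabla^2\psi}{2\eta^2|\psi|^2}
 &= \frac{1}{\eta^2}\left[\left(\frac{(\nabla S)^2}{2}
   - \frac{\nabla^2 f}{2f}\right)
   - i\left(\frac{\nabla f}{f}\cdot\nabla S
   + \frac{\nabla^2 S}{2}\right)\right].
\end{align*}
```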
On the other hand, the specific flow energy can be evaluated through
the real part of the following:
\begin{equation}
-\frac{\nabla^2 ({\psi^2}/|{\psi}|^2)}{8{\eta}^2 ({\psi^2}/|{\psi}|^2)} = {{1}\over {\eta}^2} \left[ \frac{(\nabla S)^2}{2} - i \frac{\nabla^2 S}{4}\right].
\end{equation}
Combining the two, the specific internal energy is obtained. Plotted in Figs.(11) and (13) are 2D slices of density for halos A and B, showing large-amplitude, small-scale fluctuations in density. We also plot the same slices for the 2D flow velocity $\nabla_\perp S/\eta$ $(=-i\nabla_\perp(\psi^2/|\psi|^{2})/(2\eta\psi^2/|\psi|^2))$ in Figs.(12) and (14). The velocity patterns clearly reveal well-defined boundaries of the turbulent regions in halos A and B against the infall. The flow becomes randomized inside this sharp boundary. Although the boundary outlines an accretion shock-like structure, we find in the density slice that there is no obvious jump at the boundary, so it is not a shock.
Thus, there is no analogy of such a structure in a fluid system. This peculiar feature warrants further investigation in the future.
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=8cm,angle=-90]{f15a.eps}\\
\vspace{1cm}
\includegraphics[width=8.1cm,angle=-90]{f15b.eps}\\
\caption{Virial ratios of the kinetic energy integrated up to a radius $r$ (square) to the potential energy integrated up to $r$, the specific flow energy (plus) and the specific internal energy (star) for halos A (upper panel) and B (lower panel). These virial ratios are about 0.5 at the average radii of the infall boundaries.
The specific internal energy is about twice as large as the specific flow kinetic energy in the interiors of the two halos.}\label{fig:Energy_K_P}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=8cm,angle=-90]{f16a.eps}\\
\vspace{1cm}
\includegraphics[width=8cm,angle=-90]{f16b.eps}\\
\caption{Specific radial flow energy and tangential flow energy of halo A (upper panel) and halo B (lower panel).}\label{fig:KE_r_t}
\end{center}
\end{figure}
We next investigate the virial conditions in the two halos.
Plotted in Figs.(15a) and (15b) are the ratios of the kinetic energy integrated up to a radius $r$ to the potential energy
($\int_0^r 4\pi r'^2 (3\Omega_m\eta/4)[n(U-U_{min})] dr'$)
integrated up to $r$ for halos A and B.
The virial ratios are 0.5 at the average radii of the infall boundaries, within which turbulence occurs. In addition, we also plot the specific flow energy and the specific internal energy in Fig.(15). It is found that the specific internal energy is about twice as large as the specific flow kinetic energy in the interiors of the two halos.
Virialization can be correlated with flow equi-partition. Plotted in Fig.(16) are the random tangential specific flow energy and the random radial specific flow energy (with the mean radial infall subtracted) averaged over spherical shells for the two halos. The tangential flow energy is about twice the radial flow energy only well within the halos, thus providing evidence of equi-partition at the halo cores. In the outskirts of the halos, the random radial flow energy is larger than the equi-partition value. This aspect is reminiscent of the velocity dispersion in a standard CDM halo.
Equi-partition is also related to the sphericity of the mass distribution. Significant large-scale angular dependence in the wave function can yield aspherical halos. We define the quadrupole-to-monopole ratio as
\begin{equation}
Q\equiv (\frac{(\lambda_1-\lambda_2)^2+(\lambda_2-\lambda_3)^2+(\lambda_3-\lambda_1)^2}{2(\lambda_1^2+\lambda_2^2+\lambda_3^2)})^{\frac{1}{2}},
\end{equation}
where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the eigenvalues of $\int ({\bf{r}}{\bf{r}}/r^{\beta}) n d^3{\bf{r}}$, with $\beta=3.5$ to weigh in favor of the core.
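The limiting values of $Q$ can be checked directly from this definition (a minimal sketch with illustrative eigenvalues, not simulation data):

```python
import numpy as np

def q_ratio(l1, l2, l3):
    # quadrupole-to-monopole ratio Q from the eigenvalues of the
    # weighted second-moment tensor
    num = (l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2
    den = 2.0 * (l1**2 + l2**2 + l3**2)
    return np.sqrt(num / den)

print(q_ratio(1.0, 1.0, 1.0))   # 0.0 for a spherically symmetric profile
print(q_ratio(1.0, 0.0, 0.0))   # 1.0 for a one-dimensional shape
print(q_ratio(1.0, 0.8, 0.6))   # intermediate for a triaxial profile
```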
The $Q$ value characterizes the low-order angular dependence of the wave function, and assumes the extreme values zero and unity when the density profile is spherically symmetric and one-dimensional, respectively. It is found that $Q=0.34$ for halo A and $0.11$ for halo B. Perhaps violent relaxation after the major merger drives halo B toward spherical symmetry. We note that the quantum stress is in fact anisotropic: $T_{ij}^{Q}=(\partial_i \sqrt{n})(\partial_j \sqrt{n})/{\eta}^2 - \delta_{ij}(\nabla^2 n)/4{\eta}^2$ \citep{chiu00}.
Unlike the case of fluid dark matter \citep{yoshida00}, the density asphericity here arises from the anisotropic stress of quantum mechanics, similar to that produced by collisionless dark matter. The similarity between quantum dynamics and collisionless particle dynamics in fact motivated \citet{wk93} to propose a model that approximates the latter by the former.
\section{Conclusion}
As far as we know, this work presents the first study of a Bose-Einstein condensate under self-gravity via high-resolution ($1024^{3}$-grid) simulation.
\citet{hu00} conjectured that if the dark matter is ELBDM, it can solve the long-standing problem of far too many low-mass halos present in standard CDM simulations, and also explain the existence of flat cores in some galaxies. In this work, we confirm that low-mass halos are indeed suppressed by quantum stress even when small-scale fluctuations are abundant in the initial power spectrum. This result is a consequence of the long-time linear suppression of the
small-scale modes. We also find, from our simulations at different grid resolutions, that collapsed halos develop singular cores regardless of the halo formation processes. All these runs produce convergent density profiles.
Our $1024^{3}$ highest resolution run gives singular density profiles similar to what standard CDM simulations produce.
In retrospect this singular-core result may not be too surprising, as it arises from an almost scale-free Schr\"{o}dinger-Poisson system.
This system is not exactly scale free because there exists a Jeans length for fluctuations that are small in amplitude. However, when the local density much exceeds the background density, the latter becomes locally ill-defined, the Jeans length no longer has any physical significance, and the Schr\"{o}dinger-Poisson system becomes locally scale free.
Being locally scale-free, the system develops singularities within a finite time. By contrast, the conservation of phase-space density in classical particle dynamics precludes the space density of standard dark matter particles from developing any singularity \citep{chiu97,dal01}, and explains the existence of a flat core in the warm dark matter model. Note that such a phase-space constraint does not exist for nonlinear wave dynamics; one example of this nature is a system described by the nonlinear Schr\"{o}dinger equation with attractive
self-interaction \citep{sulem99}.
Most recent observations of rotation curves in low-surface-brightness galaxies give inconclusive results as far as the existence of singular halo cores is concerned. Some galaxies are claimed not to possess singular halo cores,
and some are claimed to possess them once non-circular motions are taken into consideration. Among those that do, many possess concentration parameters inconsistent with the constraint given by
$\Lambda$CDM cosmology \citep{swt03,zaku06,kuz06,kuz08}. Given the present status of observations, if galaxies indeed contain singular cores, ELBDM will likely be the only viable candidate for the dark matter that, on one hand, permits the galactic-scale, NFW-like halo cores, and on the other hand suppresses the sub-galactic low-mass halos.
\acknowledgments
We thank Prof. Ue-Li Pen, who made the \emph{Sunnyvale cluster} of CITA available to us. We also thank Prof. Yih-Yu Chen and Mr. Shing-Kwang Wong for helpful discussions. This project is supported in part by the National Science Council of Taiwan under the grant NSC 97-2628-M-002-008-MY3, and also by the National Center for High-performance Computing through the availability of the IBM 1350.
\section{introduction}
The polarizability of molecules and nano-structures is an important property determining the assembly and behavior of solids and liquids composed of well-defined building blocks. In addition, molecular polarizabilities play a key role in vibrational spectroscopy, e.g. in the calculation of Raman \cite{putrino2002anharmonic} and sum-frequency generation \cite{wan2015first} spectra, in the determination of
van der Waals interactions in solids and liquids \cite{mahan1982van, klimevs2012perspective}, and in the development of polarizable force fields.
While several methods are available to compute and predict polarizabilities of isolated molecules and nanostructures, their definition and calculation in condensed phases have been challenging and various levels of approximations have been adopted in the literature (e.g., \cite{feynman, kittel2004, heaton2006condensed, salanne2008polarizabilities, wan2013raman}). For example, the Clausius-Mossotti (CM) equation relates the average atomic or molecular polarizability $\alpha$ of a material building block to its electronic dielectric constant $\epsilon_\infty$\cite{feynman,kittel2004}:
\begin{equation}\label{CM}
\frac{4\pi N\alpha}{3}=\frac{\epsilon_\infty-1}{\epsilon_\infty+2},
\end{equation}
where $N$ is the number density of atoms or molecules. If, in Eq. \ref{CM}, we substitute $\epsilon_\infty$ with the refractive index of the material $n$ using $\epsilon_\infty=n^2$, the Lorentz-Lorenz equation is recovered. The validity of the CM relation in condensed phases, such as molecular liquids or assemblies of nanostructured solids, depends on the system, and general rules to establish its regime of applicability are not available. We note that oftentimes the variation of polarizabilities from the gas to the condensed phase is neglected. For example, many molecular dynamics simulations of aqueous solutions using force fields assume a fixed molecular polarizability of water \cite{schropp2008polarizability}, though first-principles electronic structure studies of water at ambient conditions have shown that molecular polarizabilities have rather broad distributions and are not isotropic \cite{heaton2006condensed, salanne2008polarizabilities, wan2013raman}. Most electronic structure methods \cite{heaton2006condensed, salanne2008polarizabilities, wan2013raman} consider only the dipole-dipole interaction when calculating the variation of polarizabilities upon assembly of molecular fluids or solids, and in many cases such approximations have remained untested. Recently, substantial progress has been reported in computing polarizabilities of building blocks using maximally localized Wannier functions (MLWFs)\cite{marzari2012maximally}. However, this method requires separate calculations of the dielectric properties of each constituent self-consistently, and it is not based on global induced fields within the condensed system \cite{PhysRevB.92.241107,PhysRevB.96.075114}.
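As a concrete illustration of Eq. \ref{CM}, inverting it for liquid water at ambient conditions (using the handbook values $n\approx 1.33$ and density $0.997$ g/cm$^3$) recovers a molecular polarizability close to the well-known gas-phase value of about 1.45 \AA$^3$:

```python
import numpy as np

# Clausius-Mossotti / Lorentz-Lorenz inversion for liquid water
# (Gaussian units, so alpha comes out in cm^3)
n_refr  = 1.33                      # refractive index -> eps_inf = n^2
eps_inf = n_refr**2
N_A     = 6.022e23
rho, M  = 0.997, 18.015             # g/cm^3, g/mol
N       = rho / M * N_A             # molecules per cm^3

alpha = 3.0 / (4.0*np.pi*N) * (eps_inf - 1.0) / (eps_inf + 2.0)  # cm^3
alpha_A3 = alpha * 1e24             # 1 cm^3 = 1e24 Angstrom^3
print(f"alpha ~ {alpha_A3:.2f} Angstrom^3")   # ~1.46
```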
In this letter, we propose a first-principles method to compute the polarizabilities of building blocks in condensed phases. The method, based solely on electronic structure calculations for the condensed phase, is applicable to any semiconductor or insulator. We present results for the molecular polarizabilities of water in a wide pressure-temperature (P-T) range, and we validate the CM relation for water at ambient conditions and the dipole-induced-dipole approximation (DID). We find that the DID approximation becomes increasingly less accurate under pressure and breaks down when covalent bonds are present and oxygen ions are formed within the solid.
We start by summarizing our formulation. A building block (BB) composing a condensed system (e.g. a molecule in a molecular crystal) is defined by its ionic coordinates and by electronic wave functions spatially localized at the BB site, for example maximally localized Wannier functions $w_i(\vec{r})$\cite{marzari2012maximally} constructed from the Bloch orbitals of the condensed phase. The linearly induced electron polarization density of the BB, in response to a macroscopic field $\vec{E}$, is:
\begin{equation} \label{mlwf}
\Delta \rho_{BB}(\vec{r}) = 2 \sum_i^{N_{orb}} w_i^*(\vec{r})\Delta w_i(\vec{r}) + c.c.,
\end{equation}
where $N_{orb}$ is the number of localized electronic orbitals (e.g. four doubly occupied orbitals for a water molecule with 8 valence electrons), and $\Delta w_i$ is the variation of the $i$-th Wannier function. The local field ($\vec{E}_{loc}$) acting on the BB is given by two contributions:
\begin{equation}\label{E_loc}
\vec{E}_{loc} = \vec{E} + \vec{E}_{env},
\end{equation}
where $\vec{E}_{env}$ denotes the field produced by the environment surrounding the BB, that is by all the electrons that do not belong to the BB. In most previous studies, $\vec{E}_{env}$ was approximated by dipole-induced-dipole (DID) electrostatic interactions\cite{heaton2006condensed, salanne2008polarizabilities, wan2013raman}.
The polarizability tensor $\alpha_{BB}$ of the BB is defined by the equation
\begin{equation}\label{mol_pol}
\vec{\mu}_{BB} = \alpha_{BB}\vec{E}_{loc},
\end{equation}
where
$\vec{\mu}_{BB}$ is the dipole moment computed as:
\begin{equation} \label{mu}
\vec{\mu}_{BB} = -q_e\int \vec{r} \Delta \rho_{BB}(\vec{r})d\vec{r}.
\end{equation}
In Eq. (\ref{mu}), $q_e$ is the elementary charge and $\Delta \rho_{BB}$ is from Eq. (\ref{mlwf}).
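Since Eq. (\ref{mol_pol}) is linear, the full $3\times 3$ tensor $\alpha_{BB}$ can be extracted from three calculations with linearly independent applied fields. A minimal sketch of this extraction step (the numerical values below are hypothetical, chosen only to exercise the algebra):

```python
import numpy as np

def polarizability_tensor(E_loc_list, mu_list):
    """Solve mu = alpha @ E_loc (Eq. (mol_pol)) for alpha, given three
    linearly independent local fields and the induced dipole moments."""
    E = np.column_stack(E_loc_list)   # 3x3: one local field per column
    M = np.column_stack(mu_list)      # 3x3: corresponding dipoles per column
    return M @ np.linalg.inv(E)

# Hypothetical example: recover a known anisotropic tensor.
alpha_true = np.array([[1.60, 0.05, 0.0],
                       [0.05, 1.50, 0.0],
                       [0.0,  0.0,  1.70]])
fields = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([0.2, 0.1, 1.0])]
dipoles = [alpha_true @ E for E in fields]
alpha_BB = polarizability_tensor(fields, dipoles)
```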
To compute $\alpha_{BB}$, we need to calculate $\vec{E}_{loc}$. Since $\vec{E}$ is fixed, we only need to determine $\vec{E}_{env}$, which is simply:
\begin{equation}\label{projector}
\vec{E}_{env} =
\frac{1}{N_{orb}} \sum_i^{N_{orb}} \int \vec{e'}(\vec{r}) \left| w_i(\vec{r}) \right|^2 d\vec{r} \,,
\end{equation}
where $\vec{e'}(\vec{r})$ is the \emph{microscopic} electric field induced by all the electrons outside the BB.
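In practice, Eq. (\ref{projector}) is simply a Wannier-density-weighted average of the microscopic field over the simulation grid. A schematic numpy version (the array shapes are our own convention, with orbitals normalized so that $\int |w_i|^2 d\vec{r}=1$):

```python
import numpy as np

def env_field(e_prime, wannier, dV):
    """Eq. (projector): average the microscopic field e'(r), shape (3, nx, ny, nz),
    over the Wannier densities |w_i(r)|^2, shape (N_orb, nx, ny, nz)."""
    dens = np.abs(wannier) ** 2 * dV            # |w_i|^2 dV; sums to 1 per orbital
    per_orbital = np.einsum('cxyz,ixyz->ic', e_prime, dens)
    return per_orbital.mean(axis=0)             # (E_x, E_y, E_z) acting on the BB
```

By construction, a uniform field is returned unchanged for normalized orbitals.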
The microscopic electric field $\vec{e'}$ can be evaluated within the
random phase approximation (RPA) or by including the variation of the exchange and correlation potential (we denote the latter with DFT). Within the RPA, $\vec{e'}_{RPA}$ is obtained using Gauss' law \footnote{Historically, it is obtained from the \textit{wing} part of the inverse dielectric matrix \cite{baldereschi1979microscopic,PhysRevB.23.6615}}:
\begin{equation} \label{rpa}
\nabla \cdot \vec{e'}_{RPA}(\vec{r}) = -4\pi q_e \Delta \rho'(\vec{r}),
\end{equation}
where $\Delta \rho' = \Delta \rho - \Delta \rho_{BB}$ and $\Delta \rho$ is the electron polarization density of the whole system.
At the DFT level, the exchange-correlation potential $V_{xc}$ also contributes to the microscopic local field:
\begin{equation} \label{dft}
\vec{e'}_{DFT}(\vec{r}) = \vec{e'}_{RPA}(\vec{r}) + \frac{1}{q_e} \nabla \left(\frac{d V_{xc}}{d\rho} \Delta \rho' (\vec{r})\right).
\end{equation}
Once $\vec{E}_{loc}$ and $\vec{\mu}_{BB}$ are computed from Eqs. (\ref{E_loc}) and (\ref{mu}), respectively, $\alpha_{BB}$ is known.
Therefore the procedure outlined here to obtain polarizabilities of BBs within a condensed system is rather simple. Once the density and single particle wavefunctions are computed, e.g. by solving the Kohn-Sham equations, the electron polarization density is obtained by performing a single self-consistent calculation for the whole system. Eq. (\ref{rpa}) or (\ref{dft}) is then solved non-self-consistently, including multipole interactions at all orders. Solvers to obtain linear variations of the electron density exist in most DFT codes, using either density-functional perturbation theory (DFPT) \cite{baroni2001phonons} or finite fields \cite{PhysRevLett.89.117602}.
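For a periodic cell, the non-self-consistent solve of Eq. (\ref{rpa}) reduces to a single Poisson-like inversion in reciprocal space. The sketch below (Gaussian units with $q_e=1$ on a cubic cell; a schematic illustration of the equation, not the DFPT solver used in this work) makes the step explicit:

```python
import numpy as np

def microscopic_field(drho_prime, L):
    """Solve div e' = -4*pi*drho' (Eq. (rpa), q_e = 1) on a periodic cubic
    grid of side L: in reciprocal space, e'(G) = 4*pi*1j*G*drho'(G)/|G|^2."""
    n = drho_prime.shape[0]
    g = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    gx, gy, gz = np.meshgrid(g, g, g, indexing='ij')
    g2 = gx ** 2 + gy ** 2 + gz ** 2
    g2[0, 0, 0] = 1.0                      # avoid 0/0 at G = 0
    rho_g = np.fft.fftn(drho_prime)
    rho_g[0, 0, 0] = 0.0                   # induced density integrates to zero
    e_g = 4.0 * np.pi * 1j * rho_g / g2 * np.array([gx, gy, gz])
    return np.fft.ifftn(e_g, axes=(1, 2, 3)).real
```

Setting the $\vec{G}=0$ component to zero removes the average (macroscopic) part of the field, which is accounted for separately by $\vec{E}$ in Eq. (\ref{E_loc}).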
We now turn to applying the method outlined above to the study of
the molecular polarizabilities of water in a broad P-T range from ambient to supercritical conditions.
The electron polarization density was obtained by DFPT, as implemented in the plane-wave pseudopotential code Qbox (http://qboxcode.org/) \cite{gygi2008architecture,wan2013raman}
\footnote{We used Hamann-Schluter-Chiang-Vanderbilt norm-conserving pseudopotentials \cite{PhysRevLett.43.1494,PhysRevB.32.8412} with a plane-wave kinetic energy cutoff of 85 Ry. The MD trajectories of water at ambient conditions were taken from the water PBE400 dataset \cite{gygiPBE400}: http://www.quantum-simulation.org, where there are 64 water molecules in the simulation box. The supercritical water trajectories were from our previous simulations \cite{pan2013dielectric,pan2014refractive}, where the simulation box has 128 water molecules. The MD trajectories of the Na$^+$-water solution is from Ref. \cite{gaiduk2017local}. At least 60 snapshots from each MD trajectory were employed in our electronic structure calculations.
For ice VIII and ice X, we used 96- and 128-molecule supercells, respectively; the results were validated using a Monkhorst-Pack k-point mesh of 8$\times$8$\times$8 with the primitive cells \cite{monkhorst1976special}.}.
The Perdew-Burke-Ernzerhof (PBE) exchange-correlation (xc) functional \cite{PhysRevLett.77.3865} was used. Although
PBE overestimates the molecular polarizability of an isolated water molecule by $\sim$10\%, for water under pressure PBE gives both the static and the electronic dielectric constants in better agreement with experimental values than at ambient conditions, as shown in previous studies \cite{pan2013dielectric,pan2014refractive}. Here we used one xc functional to analyze trends of polarizabilities as a function of P and T; however, the method is general and can be used with any functional. In particular we note that, using finite field methods to compute polarizabilities (http://qboxcode.org/) \cite{wan2013raman}, calculations with hybrid functionals are readily carried out.
Fig. \ref{distriDFT} shows that at ambient conditions, the molecular polarizabilities of water given by the DFT method (Eq. (\ref{dft})) are anisotropic. The out-of-plane polarizability is the largest, and the ones in-plane and perpendicular to the water dipole direction are smaller, consistent with the reports of other authors using just DID interactions to compute $\vec{E}_{env}$\cite{heaton2006condensed, salanne2008polarizabilities, wan2013raman}.
We found that at high pressures and high temperatures, the anisotropy substantially decreases as shown in Fig. \ref{distriDFT}.
Note that an isolated water molecule also exhibits a polarizability which is less anisotropic than in the liquid at ambient conditions \cite{wan2013raman,PhysRevB.96.075114},
suggesting that the anisotropy is critically related to the formation of hydrogen bonds. Indeed, also in supercritical water, the polarizability components are less dissimilar than at ambient conditions (see Fig. \ref{distriDFT}).
In Table \ref{table}, four different methods to compute polarizabilities are compared
from ambient to 11 GPa and 0 to 2000 K.
All methods show that with increasing pressure along an isotherm, the average molecular polarizability of water ($\bar{\alpha}_{mol}=\frac{1}{3}\mathrm{Tr} \{\alpha_{mol}\}$) decreases,
while with increasing temperature along an isobar, it increases.
Our previous study showed that the average dipole moment of water molecules increases with pressure,
but decreases with temperature,
so the present results indicate that it becomes more difficult to further polarize water molecules as their dipole moments increase.
The polarizabilities obtained by DFT (Eq. (\ref{dft})) are slightly larger than
those from RPA (Eq. (\ref{rpa})) by $\sim$0.02 \AA$^3$.
When applying the two methods, we used the same electron polarization density $\Delta \rho$, which is obtained when the exchange-correlation functional is included. The local electric field mainly comes from the electrostatic interactions,
so the DFT and RPA values are very similar.
In order to test the validity of the CM relation, we substituted the electronic dielectric constant $\epsilon_\infty$, obtained by DFPT, into the CM relation to calculate the average molecular polarizability.
It is interesting to see that the CM relation yields results nearly identical to those of the DID approximation.
The standard deviations obtained for the CM relation are smaller than those from the DID approximation by one order of magnitude,
as they only arise from the thermal fluctuation of $\epsilon_\infty$, not from molecular distributions as shown in Fig. \ref{distriDFT}.
The CM relation holds when the Lorentz relation holds in an isotropic material, that is to say that the field at the center of a fictitious spherical cavity created by molecules inside the cavity vanishes \cite{kittel2004}.
A well-known example for the Lorentz relation is the lattice with cubic symmetry, where only dipole-dipole interactions are considered \cite{kittel2004}.
The agreement between the results obtained using the CM relation and the DID approximation suggests that the Lorentz relation is accurate when we consider only the dipole-dipole interaction for the water systems studied in Table \ref{table}.
We now turn to comparing molecular polarizabilities of water in various phases.
If we substitute the refractive index of 1.333, the experimental value for water at 293 K and ambient pressure, into the CM relation,
we get a molecular polarizability of 1.47 \AA$^3$, which is the same as the experimental value for water vapor.
At the PBE level of theory, the polarizability of an isolated water molecule is 1.60 \AA$^3$, the same as the values obtained at ambient conditions using the CM relation and DID approximation (see Table \ref{table}), consistent with previous studies \cite{wan2013raman}.
Hence, within the DID approximation, the average molecular polarizability of water does not change from gas to liquid phase.
However, both the RPA and DFT methods give slightly larger values ($\sim$3\% larger than those obtained with the CM and DID methods).
We note that recently, Ge and Lu reported the molecular polarizabilities of water and ice at ambient conditions calculated using the local dielectric response of orbitals \cite{PhysRevB.96.075114}, where the electron polarization density of each molecule $\Delta \rho_{mol}$ is evaluated individually. In general the sum of $\Delta \rho_{mol}$ does not equal the total $\Delta \rho$. In the calculations of Ref. \cite{PhysRevB.96.075114}, the $\bar{\alpha}_{mol}$ of water increases by $\sim$10\% (instead of $\sim$3\%) from gas to the liquid at ambient conditions.
Using the DFT method, we also calculated the molecular polarizabilities of water in the first solvation shell of the Na$^+$ ion at ambient conditions\cite{gaiduk2017local}: $\bar{\alpha}_{mol}$ is 1.62 \AA$^3$, which is again slightly larger than that obtained by the DID approximation by $\sim$3\%. For water molecules with dangling bonds in the basal surface layer of ice Ih \cite{pan2008surface, watkins2011large}, the difference in $\bar{\alpha}_{mol}$ given by the DFT and DID approaches is even smaller: 1.62 \AA$^3$ vs 1.61 \AA$^3$, only $\sim$1\%. Our results suggest that at ambient conditions, the DID approximation works remarkably well.
Table \ref{table} shows that the polarizabilities obtained by our method and those from the CM relation or the DID approximation increasingly differ with pressure at a fixed temperature. For ice VIII, a high pressure ice phase consisting of two interpenetrating cubic ice sublattices \cite{petrenko1999physics}, when increasing pressure from 0 to 30 GPa, the difference between the DFT values and the DID approximation increases from 6\% to 13\%, as shown in Fig. \ref{iceVIII-pol}. This indicates that interactions beyond dipole-dipole play a larger role in denser water.
We close by considering the case of extremely dense water: ice X, the highest pressure phase ever determined experimentally \cite{hemley1987static}. In ice X, the oxygen atoms are in a body-centered cubic lattice, and the hydrogen atoms sit midway between the two nearest O atoms (see Fig. \ref{iceX-pol}). Because the H atom is equidistant from two O atoms, it is no longer possible to define H$_2$O molecules; however, since the four maximally localized Wannier orbitals are still closely localized around the O atoms, a new BB can be defined, and the molecular polarizability discussed below refers to the polarizability of the O$^{2-}$ anion.
Fig. \ref{iceX-pol} shows that the CM relation gives the same results as the DID approximation
whereas the molecular polarizabilities given by the DFT and RPA methods are about 20\% larger. The reason is that in ice X covalent bonds are present, and indeed the BB identified by our calculation is no longer a water molecule, but rather an anion, for which higher-order interactions play an important role.
For ice X, another interesting finding of our calculation is that the electronic dielectric constant $\epsilon_\infty$ has a minimum at around 250 GPa, and
accordingly the band gap increases up to 150 GPa and then decreases slowly, as shown in Fig. \ref{epsi-gap}. The inverse correlation between the electronic dielectric constant and the band gap of ice X is consistent with the Penn model \cite{PhysRev.128.2093, angilella2017correlations}, and differs from what we found in ice VII/VIII and hot water up to 30 GPa in our previous study \cite{pan2014refractive}.
Generally, $\epsilon_\infty$ increases when both molecular polarizability and material density become larger. With increasing pressure, the molecular polarizability of ice X decreases as shown in Fig. \ref{iceX-pol}, whereas the material density increases due to volume shrinking, so the molecular polarizability and the material density of ice X are two competing factors determining $\epsilon_\infty$; this is also the reason why the variation of $\epsilon_\infty$ is weak (see Fig. \ref{epsi-gap}).
From 50 to 250 GPa, the molecular polarizability dominates the change of $\epsilon_\infty$, but above 250 GPa, the rate of its decrease becomes slower and thus the material density becomes a more important factor. As a result, $\epsilon_\infty$ decreases slowly as shown in Fig. \ref{epsi-gap}.
\section{Conclusion}
In order to predict the properties of solids and liquids composed of well-defined building blocks, it is important to determine the variation of the dielectric properties of the isolated molecular or nano-scale constituents upon assembly. Hence the ability to compute dipole moments and polarizabilities of building blocks in condensed phases is critical. In this paper we proposed a first-principles method to compute polarizabilities of sub-entities in condensed phases, which includes multipole interactions at all orders and is applicable to any semiconductor or insulator. The method only requires a single self-consistent calculation for the entire condensed system, as opposed to multiple calculations for each building block, and it is readily applicable within and beyond the RPA. As an example, we presented results for the molecular polarizabilities of liquid water in a wide pressure and temperature range. We found that at ambient conditions, the dipole-induced-dipole approximation is sufficiently accurate and the Clausius-Mossotti relation may be used, e.g. to obtain molecular polarizabilities from experimental refractive indices. However, the DID approximation becomes increasingly less accurate with pressure, and in the case of ice X the Clausius-Mossotti relation is not valid.
For example, in ice VIII the contribution of multipoles beyond the dipole is $\sim$13\% at 30 GPa, and in ice X the difference between the full multipole and the DID contributions is $\sim$20\% at 350 GPa,
indicating that when hydrogen bonds are replaced by covalent bonds, higher-order interactions cannot be ignored.
In the case of ice X the CM relation is not valid, though the Lorentz relation still holds under the DID approximation.
We also found that the band gap of ice X has a maximum, while the electronic dielectric constant of ice X has a minimum, as a function of pressure. Finally we note that
the knowledge of the polarizabilities of sub-entities under pressure may help to design polarizable force fields suitable for extreme P-T conditions.
The method presented here can be used to study the local dielectric response of a wide range of semiconductors and insulators, and brings new insights into chemical bond interactions.
\section{acknowledgements}
We thank Deyu Lu, He Ma, and Ikutaro Hamada for their helpful discussions.
D.P. acknowledges support from Hong Kong Research Grants Council (project number ECS-26305017), the National Natural Science Foundation of China (project number 11774072), the Alfred P. Sloan Foundation through the Deep Carbon Observatory, and the Croucher Foundation through the Croucher Innovation Grant.
M.G. and G.G. were supported by MICCoM, as part of the Computational Materials Sciences Program funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. This research used resources of the Research Computing Center at the University of Chicago, the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under contract DE-AC02-06CH11357.
\section{Introduction}\label{section1}
The study of geometric aspects of value distribution theory of complex analytic mappings has achieved many important
advances. One of the most brilliant results in the study is to give a geometric interpretation of the precise maximum
for the number of exceptional values of a nonconstant holomorphic map from the complex plane $\C$ to a closed Riemann surface
$\overline{\Sigma}_{\gamma}$ of genus $\gamma$. Here we call a value that a function or a map never attains an
{\it exceptional value} of the function or map. In fact, Ahlfors \cite{Ah1935} and Chern \cite{Ch1960} proved that
the least upper bound for the number of exceptional values of a nonconstant holomorphic map from $\C$ to
$\overline{\Sigma}_{\gamma}$ coincides with the Euler characteristic of $\overline{\Sigma}_{\gamma}$ by using
Nevanlinna theory (see also \cite{Ko2003, NO1990, NW2014, Ru2001}). In particular, for a nonconstant meromorphic function on
$\C$, the geometric interpretation of the maximal number $2$ of exceptional values is the Euler characteristic of the
Riemann sphere $\RC :=\C\cup \{\infty \}$. We remark that if the closed Riemann surface is of $\gamma \ge 2$, then
such a map does not exist because the Euler characteristic is negative.
There exist several classes of immersed surfaces in $3$-dimensional space forms whose Gauss maps have
value-distribution-theoretical property. For instance, Fujimoto \cite[Theorem I]{Fu1988} proved that the Gauss map of a nonflat
complete minimal surface in Euclidean $3$-space ${\R}^{3}$ can omit at most $4$ values. The fourth author and Nakajo \cite{KN2012}
obtained that the maximal number of exceptional values of the Lagrangian Gauss map of a weakly complete improper affine
front in the affine $3$-space ${\R}^{3}$ is $3$, unless it is an elliptic paraboloid. We note that an improper affine front
is also called an improper affine map in \cite{MA2005}. We here call it an improper affine front because Nakajo \cite{Na2009}
and Umehara and Yamada \cite{UY2011} showed that an improper affine map is a front in ${\R}^{3}$.
Moreover, we \cite{Ka2014} gave a
similar result for flat fronts in ${\H}^{3}$. In \cite{Ka2013}, we obtained a geometric interpretation for the maximal number of
exceptional values of their Gauss maps. To be precise, we gave a curvature bound for the conformal metric
$ds^{2}=(1+|g|^{2})^{m}|\omega|^{2}$ on an open Riemann surface $\Sigma$, where $m$ is a positive integer, $\omega$ is a
holomorphic $1$-form and $g$ is a meromorphic function on $\Sigma$ (\cite[Theorem 2.1]{Ka2013}) and, as a corollary of
the theorem, proved that the precise maximal number of exceptional values of the nonconstant meromorphic function $g$ on
$\Sigma$ with the complete conformal metric $ds^{2}$ is $m+2$ (\cite[Corollary 2.2 and Proposition 2.4]{Ka2013}).
We note that the geometric meaning of the $2$ in $m+2$ is the Euler characteristic of $\RC$ (\cite[Remark 2.3]{Ka2013}).
Since the induced metric from ${\R}^{3}$ of a minimal surface is $ds^{2}=(1+|g|^{2})^{2}|\omega|^{2}$ (i.e., $m=2$),
the maximal number of exceptional values of the Gauss map $g$ of a nonflat complete minimal surface in ${\R}^{3}$ is
$4\,(=2+2)$. For the Lagrangian Gauss map $\nu$ of a weakly complete improper affine front, because $\nu$ is
meromorphic, $dG$ is holomorphic and the complete metric is $d{\tau}^{2}=(1+|\nu|^{2})|dG|^{2}$ (i.e., $m=1$), the maximal
number of exceptional values of the Lagrangian Gauss map of a weakly complete improper affine front is $3\,(=1+2)$,
unless it is an elliptic paraboloid.
On the other hand, Fujimoto \cite[Theorem I\hspace{-.1em}I]{Fu1988} also obtained an optimal estimate for
the number of exceptional values of the Gauss map of a nonflat complete (orientable) minimal surface in ${\R}^{4}$, and
Hoffman and Osserman \cite{HO1980} gave a similar result for a nonflat algebraic minimal surface in ${\R}^{4}$
(by algebraic minimal surface, we mean a complete minimal surface with finite total curvature). Recently,
we \cite{Ka2009} gave an effective estimate for
the number of exceptional values of the Gauss map for a special class of complete minimal surfaces in ${\R}^{4}$ that includes
algebraic minimal surfaces (this class is called the pseudo-algebraic minimal surfaces. For the corresponding result
in ${\R}^{3}$, see \cite{KKM2008}). This also provided a geometric interpretation of the Fujimoto and Hoffman-Osserman
results for this class, because the estimate is described in terms of geometric invariants.
However, from \cite{Ka2009}, it was still not possible to understand a geometric interpretation for the general class.
Moreover there has been no unified explanation for the study of the image of the Gauss map of complete minimal surfaces
in ${\R}^{4}$, including the nonorientable case.
The purpose of this paper is to perform a systematic study of the image of the Gauss map for complete minimal
surfaces in ${\R}^{4}$. The paper is organized as follows: In Section \ref{section2}, we give an optimal estimate
for the size of the image of the holomorphic map $G=(g_{1}, \ldots, g_{n})\colon \Sigma \to (\RC)^{n}:=
\underbrace{\RC\times \cdots \times \RC}_{n}$ on an open Riemann surface $\Sigma$ with the complete conformal metric
$
ds^{2}= \prod_{i=1}^{n}(1+|g_{i}|^{2})^{m_{i}}|\omega|^{2},
$
where $\omega$ is a holomorphic $1$-form on $\Sigma$ and each $m_{i}$ $(i=1, \cdots, n)$ is a positive integer
(Theorem \ref{thm-main} and Proposition \ref{prop-main}).
The result is a generalization of \cite[Corollary 2.2]{Ka2013}.
In Section \ref{section3.1}, applying the result, we give a geometric interpretation
of the Fujimoto result \cite[Theorem I\hspace{-.1em}I]{Fu1988} for the maximal number of exceptional values of the Gauss map
$G=(g_{1}, g_{2})$ of a complete orientable minimal surface in ${\R}^{4}$, that is, the maximal number deeply depends
on the induced metric from ${\R}^{4}$ and the Euler characteristic of ${\RC}$. In Section \ref{section3.2},
after reviewing basic facts, we give the maximal number of exceptional values of the nonconstant part of the Gauss map of
a complete minimal Lagrangian surface in ${\C}^{2}$ (Corollary \ref{thm-appl-3}).
In Section \ref{section3.3}, we study the value distribution of the generalized Gauss map of a complete nonorientable minimal
surface in ${\R}^{4}$. Recently the study of complete nonorientable minimal surfaces has attracted a lot of attention (for example,
see \cite{AL2015}, \cite{AFL2016}, \cite{LMM2006}, \cite{Ro2006}, \cite{Ro1992} and \cite{Ro1997}, for a good survey see
\cite{Ma2005}). In \cite{FL2010}, the geometry and topology of complete maximal surfaces with lightlike singularities in
the Lorentz-Minkowski $3$-space are studied. In this paper, we give an effective estimate for the maximal number of
exceptional values of the generalized
Gauss map of a complete nonorientable minimal surface in ${\R}^{4}$ (Corollary \ref{thm-appl-nonori-1}).
Moreover, by using the argument of L\'opez-Mart\'in
\cite{LM2000}, we construct examples showing that the estimate is sharp (Proposition \ref{thm-appl-nonori-2} and
Remark \ref{rmk-appl-nonori-2}).
\section{Main theorem}\label{section2}
We first state the main theorem of this paper.
\begin{theorem}\label{thm-main}
Let $\Sigma$ be an open Riemann surface with the conformal metric
\begin{equation}\label{equ-conformal}
ds^{2}=\displaystyle \prod_{i=1}^{n}(1+|g_{i}|^{2})^{m_{i}}|\omega|^{2},
\end{equation}
where $G=(g_{1}, \ldots , g_{n})\colon \Sigma \to (\RC)^{n}:=
\underbrace{\RC\times \cdots \times \RC}_{n}$
is a holomorphic map, $\omega$ is a holomorphic
$1$-form on $\Sigma$ and each $m_{i}$ $(i=1, \cdots, n)$ is a positive integer. Assume that $g_{i_{1}}, \ldots, g_{i_{k}}$
$(1\leq i_{1}< \cdots <i_{k} \leq n)$ are nonconstant and the others are constant. If the metric $ds^{2}$ is complete and
each $g_{i_{l}}$ $(l=1, \cdots , k)$ omits $q_{i_{l}}> 2$ distinct values, then we have
\begin{equation}\label{equ-exc}
\displaystyle \sum_{l=1}^{k} \dfrac{m_{i_{l}}}{q_{i_{l}}-2}\geq 1.
\end{equation}
\end{theorem}
We note that Theorem \ref{thm-main} also holds for the case where at least one of $m_{1}, \ldots, m_{n}$ is positive and
the others are zero. For instance, assume that $g:= g_{i_{1}}$ is nonconstant and the others are constant.
If $m:=m_{i_{1}}$ is a positive integer and the others are zero, then the inequality (\ref{equ-exc}) becomes
$$
\dfrac{m}{q-2}\geq 1 \, \Longleftrightarrow \, q \leq m+2,
$$
where $q:=q_{i_{1}}$. The result corresponds with \cite[Corollary 2.2]{Ka2013}.
Moreover, if all $m_{i}$ are zero, then the metric $ds^{2}=|\omega|^{2}$ is flat and complete on $\Sigma$.
We thus may assume that each $g_{i_{l}}$ is a nonconstant meromorphic function on $\C$
because there exists a holomorphic universal covering map $\pi \colon \C \to \Sigma$ and
each $g_{i_{l}}$ is replaced by $g_{i_{l}}\circ \pi$. By the little Picard theorem,
we have that each $g_{i_{l}}$ can omit at most $2$ distinct values.
We remark that the geometric interpretation of the precise maximum $2$ for the number of exceptional values
of a nonconstant meromorphic function on $\C$ is the Euler characteristic of the Riemann sphere $\RC$
(\cite{Ah1935}, \cite{Ch1960}).
The inequality (\ref{equ-exc}) is optimal because there exist the following examples.
\begin{proposition}\label{prop-main}
Let $\Sigma$ be the complex plane punctured at $p-1$ distinct points ${\alpha}_{1}, \ldots, {\alpha}_{p-1}$ or
the universal cover of that punctured plane. We set
$$
\omega =\dfrac{dz}{\prod_{j=1}^{p-1}(z-{\alpha}_{j})}
$$
and the map $G=(g_{1}, \ldots , g_{n})$ is given by
$$
g_{i_{1}}= \cdots = g_{i_{k}}=z \quad (1\leq i_{1}< \cdots < i_{k}\leq n )
$$
and the others are constant. Then all $g_{i_{l}}$ $(l=1, \cdots, k)$ omit $p$ distinct values
${\alpha}_{1}, \ldots, {\alpha}_{p-1}, \infty$ and the metric (\ref{equ-conformal}) is complete if and only if
$$
p\leq 2+\displaystyle \sum_{l=1}^{k}m_{i_{l}}.
$$
In particular, there exist examples which satisfy the equality of (\ref{equ-exc}).
\end{proposition}
\begin{proof}
A divergent path $\Gamma$ in $\Sigma$ must tend to one of the points ${\alpha}_{1}, \ldots, {\alpha}_{p-1}$ or $\infty$.
Thus we have
$$
\int_{\Gamma} ds= \int_{\Gamma}\, \prod_{i=1}^{n}(1+|g_{i}|^{2})^{m_{i}/2}|\omega|
= C \int_{\Gamma} \dfrac{\prod_{l=1}^{k}(1+|z|^{2})^{m_{i_{l}}/2}}{\prod_{j=1}^{p-1}|z-{\alpha}_{j}|}|dz|= \infty
$$
when $p\leq 2+\sum_{l=1}^{k}m_{i_{l}}$. Here $C$ is some positive constant. Indeed, near each ${\alpha}_{j}$ the integrand behaves like a nonzero constant multiple of $1/|z-{\alpha}_{j}|$, while near $\infty$ it behaves like $|z|^{M-(p-1)}$ with $M:=\sum_{l=1}^{k}m_{i_{l}}$, so the metric is complete precisely when $p\leq M+2$. Then the equality of (\ref{equ-exc}) holds if and only if
$p=2+\sum_{l=1}^{k}m_{i_{l}}$.
\end{proof}
Before proceeding to the proof of Theorem \ref{thm-main}, we recall the notion of chordal distance between
two distinct values in $\RC$ and two function-theoretic lemmas. For two distinct values
$\alpha$, $\beta\in \RC$, we set
$$
|\alpha, \beta|:= \dfrac{|\alpha -\beta|}{\sqrt{1+|\alpha|^{2}}\sqrt{1+|\beta|^{2}}}
$$
if $\alpha \not= \infty$ and $\beta \not= \infty$, and $|\alpha, \infty|=|\infty, \alpha| := 1/\sqrt{1+|\alpha|^{2}}$.
We note that, if we take $v_{1}$, $v_{2}\in {\Si}^{2}$ with $\alpha =\varpi (v_{1})$ and $\beta = \varpi (v_{2})$, we have that
$|\alpha, \beta|$ is a half of the chordal distance between $v_{1}$ and $v_{2}$, where $\varpi$ denotes the stereographic projection of
the $2$-sphere ${\Si}^{2}$ onto $\RC$.
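The normalization can be verified directly: with the inverse stereographic projection onto the unit sphere (projecting from the north pole, one common convention), $|\alpha, \beta|$ equals half the Euclidean distance between the corresponding points of ${\Si}^{2}$. A short numerical check of this identity:

```python
import numpy as np

def to_sphere(a):
    """Inverse stereographic projection of a in C onto the unit sphere
    (the north pole (0, 0, 1) corresponds to infinity)."""
    d = 1.0 + abs(a) ** 2
    return np.array([2.0 * a.real, 2.0 * a.imag, abs(a) ** 2 - 1.0]) / d

def chordal(a, b):
    """|a, b| as defined in the text, for finite a and b."""
    return abs(a - b) / (np.sqrt(1.0 + abs(a) ** 2) * np.sqrt(1.0 + abs(b) ** 2))

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.normal(size=4)
    a, b = complex(x[0], x[1]), complex(x[2], x[3])
    half_chord = 0.5 * np.linalg.norm(to_sphere(a) - to_sphere(b))
    assert abs(chordal(a, b) - half_chord) < 1e-12
```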
\begin{lemma}{\cite[(8.12) in page 136]{Fu1997}}\label{Lem-main1}
Let $g$ be a nonconstant meromorphic function on ${\Delta}_{R}=\{z\in \C ; |z|< R \}$ $(0<R\leq +\infty)$
which omits $q$ values
${\alpha}_{1}, \ldots, {\alpha}_{q}$. If $q>2$, then for each positive $\eta$ with $\eta <(q-2)/q$,
there exists a positive constant $C'$
depending on $q$ and $L:=\min_{i< j}|{\alpha}_{i}, {\alpha}_{j}|$ such that
\begin{equation}\label{equ-lemma1}
\dfrac{|g'_{z}|}{(1+|g|^{2})\prod_{j=1}^{q}|g, {\alpha}_{j}|^{1-\eta}}\leq C'\dfrac{R}{R^{2}-|z|^{2}}.
\end{equation}
\end{lemma}
\begin{lemma}{\cite[Lemma 1.6.7]{Fu1993}}\label{Lem-main2}
Let $d{\sigma}^{2}$ be a flat conformal metric on an open Riemann surface $\Sigma$.
Then, for each point $p\in \Sigma$, there exists a local diffeomorphism $\Phi$ of a
disk ${\Delta}_{R}=\{z\in \C ; |z|< R \}$ $(0<R\leq +\infty)$ onto an open
neighborhood of $p$ with $\Phi (0)=p$ such that $\Phi$ is an isometry, that is,
the pull-back ${\Phi}^{\ast}(d{\sigma}^{2})$ is equal to the standard Euclidean metric $ds^{2}_{E}$ on ${\Delta}_{R}$
and that, for a specific point $a_{0}$ with $|a_{0}|=1$, the ${\Phi}$-image ${\Gamma}_{a_{0}}$ of
the curve $L_{a_{0}}=\{w:= a_{0}s ; 0 < s < R\}$
is divergent in $\Sigma$.
\end{lemma}
\begin{proof}[{\it Proof of Theorem \ref{thm-main}}]
Assume that each $g_{i_{l}}$ ($l=1, \cdots, k$) omits
$q_{i_{l}}$ distinct values, ${\alpha}_{1}^{l}, \ldots, {\alpha}_{q_{i_{l}}}^{l}$.
After a suitable M\"obius transformation for each $g_{i_{l}}$, we may assume that
${\alpha}_{q_{i_{1}}}^{1}=\cdots ={\alpha}_{q_{i_{k}}}^{k}=\infty$.
Suppose that each $q_{i_{l}}> 2$ and
\begin{equation}\label{equ-main-proof-1}
\displaystyle \sum_{l=1}^{k} \dfrac{m_{i_{l}}}{q_{i_{l}}-2}< 1.
\end{equation}
Then, by (\ref{equ-main-proof-1}), each summand satisfies $m_{i_{l}}/(q_{i_{l}}-2)<1$, that is, $q_{i_{l}}> m_{i_{l}}+2$ for each $i_{l}$ $(l=1, \cdots, k)$.
Take some positive number $\eta$ with
\begin{equation}\label{equ-main-proof-2}
0 < \eta < \dfrac{q_{i_{l}}-2-m_{i_{l}}}{q_{i_{l}}}
\end{equation}
for each $i_{l}$ ($l=1, \cdots, k$). We set
$$
\lambda_{i_{l}}:= \dfrac{m_{i_{l}}}{q_{i_{l}}-2-q_{i_{l}}\eta} \quad (l=1, \cdots, k).
$$
For a sufficiently small number $\eta$, we have
\begin{equation}\label{equ-main-proof3}
\displaystyle \Lambda := \sum_{l=1}^{k} {\lambda}_{i_{l}} = \sum_{l=1}^{k} \dfrac{m_{i_{l}}}{q_{i_{l}}-2-q_{i_{l}}\eta} <1
\end{equation}
and
\begin{equation}\label{equ-main-proof4}
\dfrac{{\lambda}_{i_{l}}}{1-\Lambda}> 1 \quad (l=1, \cdots, k).
\end{equation}
Then we define a new metric
\begin{equation}\label{equ-main-proof5}
\displaystyle d{\sigma}^{2}=|\hat{\omega}_{z}|^{\frac{2}{1-\Lambda}} \prod_{l=1}^{k}
\Biggl{(}\dfrac{1}{|g'_{i_{l}}|}\prod_{j=1}^{q_{i_{l}}-1}
\biggl{(}\dfrac{|g_{i_{l}}-{\alpha}_{j}^{l}|}{\sqrt{1+|{\alpha}_{j}^{l}|^{2}}} \biggr{)}^{1-\eta}\Biggr{)}^{\frac{2{\lambda}_{i_{l}}}{1-\Lambda}} |dz|^{2}
\end{equation}
on ${\Sigma}'=\{p\in \Sigma \,;\, g'_{i_{l}}\not= 0 \;\text{for each $l$}\}$, where $\omega =\hat{\omega}_{z}dz$ and $g'_{i_{l}}=dg_{i_{l}}/dz$.
Take a point $p\in {\Sigma}'$. Since $d{\sigma}^{2}$ is flat, by Lemma \ref{Lem-main2}, there exists an isometry $\Phi$ satisfying $\Phi (0)= p$ from
a disk ${\Delta}_{R}=\{z\in \C \,;\, |z|<R \}$ $(0< R\leq +\infty)$ with the standard Euclidean metric $ds^{2}_{E}$
onto an open neighborhood of $p\in {\Sigma}'$
with the metric $d{\sigma}^{2}$, such that, for a specific point $a_{0}$ with $|a_{0}|=1$, the $\Phi$-image ${\Gamma}_{a_{0}}$ of the curve
$L_{a_{0}}=\{w=a_{0}s \,;\, 0<s<R \}$ is divergent in ${\Sigma}'$. For brevity, we denote $g_{i_{l}}\circ \Phi$ on $\triangle_{R}$ by $g_{i_{l}}$ in the following.
By Lemma \ref{Lem-main1}, for each $i_{l}$, we get
\begin{equation}\label{equ-main-proof6}
\displaystyle R\leq C'_{i_{l}}\dfrac{1+|g_{i_{l}}(0)|^{2}}{|g'_{i_{l}}(0)|} \prod_{j=1}^{q_{i_{l}}}|g_{i_{l}}(0), {\alpha}_{j}^{l}|^{1-\eta} < +\infty,
\end{equation}
that is, the radius $R$ is finite. Hence
$$
L_{d\sigma} (\Gamma_{a_{0}}) =\int_{\Gamma_{a_{0}}} d\sigma = R <+\infty ,
$$
where $L_{d\sigma} (\Gamma_{a_{0}})$ denotes the length of $\Gamma_{a_{0}}$ with respect to the metric $d{\sigma}^{2}$.
Now we prove that ${\Gamma}_{a_{0}}$ is divergent in $\Sigma$. Indeed, if not, then ${\Gamma}_{a_{0}}$ must tend to a point
$p_{0}\in \Sigma \backslash {\Sigma}'$, where $g'_{i_{l}}(p_{0})= 0$ for some $i_{l}$. Taking a local complex coordinate
$\zeta :=g'_{i_{l}}$ in a neighborhood of $p_{0}$ with $\zeta (p_{0})=0$, we can write the metric $d{\sigma}^{2}$ as
$$
d{\sigma}^{2} = |\zeta|^{-2{\lambda}_{i_{l}}/(1-\Lambda )}\, w\, |d\zeta|^{2},
$$
with some positive function $w$. Since ${\lambda}_{i_{l}}/(1-\Lambda) > 1$, we have
$$
R= \int_{{\Gamma}_{a_{0}}} d\sigma > \widetilde{C} \int_{{\Gamma}_{a_{0}}} \dfrac{|d\zeta|}{|\zeta|^{{\lambda}_{i_{l}}/(1-\Lambda )}} = +\infty.
$$
Moreover, in the same way, if there exists a subset $\{ l_{1}, \ldots , l_{m}\}$ of $\{ 1, \cdots, k \}$ such that
each $g'_{i_{l_{j}}}$ $(j=1, \cdots, m)$ has a zero at $p_{0}$, we also get that $R= +\infty$ because
$$
\displaystyle \sum_{s=1}^{m} \dfrac{{\lambda}_{i_{l_{s}}}}{1-\Lambda} >1.
$$
This contradicts the finiteness of $R$.
Since ${\Phi}^{\ast}d{\sigma}^{2}=|dz|^{2}$, we have by (\ref{equ-main-proof5}) that
$$
\displaystyle |\hat{\omega}_{z}|= \prod_{l=1}^{k} \Biggl{(}|g'_{i_{l}}| \prod_{j=1}^{q_{i_{l}}-1} \biggl{(}\dfrac{\sqrt{1+|{\alpha}_{j}^{l}|^{2}}}{|g_{i_{l}}-{\alpha}_{j}^{l}|} \biggr{)}^{1-\eta}\Biggr{)}^{{\lambda}_{i_{l}}}.
$$
By Lemma \ref{Lem-main1},
we have
\begin{eqnarray*}
{\Phi}^{\ast} ds &=& |\hat{\omega}_{z}|\prod_{i=1}^{n} (1+|g_{i}|^{2})^{m_{i}/2} |dz| \\
&\leq & C_{1} \Biggl{(}\prod_{l=1}^{k}|g'_{i_{l}}| (1+|g_{i_{l}}|^{2})^{m_{i_{l}}/(2\lambda_{i_{l}})}\prod_{j=1}^{q_{i_{l}}-1} \Biggl{(}\dfrac{\sqrt{1+|{\alpha}_{j}^{l}|^{2}}}{|g_{i_{l}}-{\alpha}_{j}^{l}|} \Biggr{)}^{1-\eta} \Biggr{)}^{\lambda_{i_{l}}} |dz| \\
&=& C_{1}\prod_{l=1}^{k} \Biggl{(}\dfrac{|g'_{i_{l}}|}{(1+|g_{i_{l}}|^{2})\prod_{j=1}^{q_{i_{l}}}|g_{i_{l}}, {\alpha}^{l}_{j}|^{1-\eta}} \Biggr{)}^{\lambda_{i_{l}}} |dz| \leq C_{2}\Biggl{(}\dfrac{R}{R^{2}-|z|^{2}} \Biggr{)}^{\Lambda} |dz|.
\end{eqnarray*}
Now we consider the geodesic distance $d(p)$ with respect to the metric $ds^{2}$ from each point $p\in \Sigma$ to the boundary of $\Sigma$.
Then we have
$$
\displaystyle d(p)\leq \int_{{\Gamma}_{a_{0}}} ds = \int_{{L}_{a_{0}}} {\Phi}^{\ast} ds\leq
C_{2} \int_{{L}_{a_{0}}}\Biggl{(}\dfrac{R}{R^{2}-|z|^{2}} \Biggr{)}^{\Lambda} |dz|\leq C_{2} \dfrac{R^{1-\Lambda}}{1-\Lambda} < +\infty
$$
because $0< \Lambda < 1$. This contradicts the assumption that the metric $ds^{2}$ is complete.
\end{proof}
\section{Applications}\label{section3}
\subsection{Gauss images of complete orientable minimal surfaces in ${\R}^{4}$}\label{section3.1}
We first recall some basic facts of minimal surfaces in ${\R}^{4}$. Details can be found, for example, in
\cite{Ch1965, HO1980, HO1985, Os1964}. Let $X=(x^{1}, x^{2}, x^{3}, x^{4})\colon \Sigma \to {\R}^{4}$ be an oriented minimal
surface in ${\R}^4$. By associating a local complex coordinate $z=u+\sqrt{-1}v$ with each positive isothermal coordinate system
$(u, v)$, $\Sigma$ is considered as a Riemann surface whose conformal metric is the induced metric $ds^{2}$ from ${\R}^{4}$.
Then
\begin{equation}\label{equ-appl-min-1}
\triangle_{ds^{2}} X=0
\end{equation}
holds, that is, each coordinate function $x^{i}$ is harmonic. With respect to the local coordinate $z$ of the surface,
(\ref{equ-appl-min-1}) is given by
$$
\bar{\partial} \partial X =0,
$$
where $\partial =(\partial /\partial u - \sqrt{-1}\partial /\partial v)/2$, $\bar{\partial}
=(\partial /\partial u + \sqrt{-1}\partial /\partial v)/2$. Hence each ${\phi}_{i}:= \partial x^{i} dz$ ($i=1, 2, 3, 4$) is a
holomorphic $1$-form on $\Sigma$. If we set that
$$
\omega = {\phi}_{1} -\sqrt{-1} {\phi}_{2}, \qquad g_{1}=\dfrac{{\phi}_{3}+\sqrt{-1}{\phi}_{4}}{{\phi}_{1} -\sqrt{-1} {\phi}_{2}},
\qquad g_{2}=\dfrac{-{\phi}_{3}+\sqrt{-1}{\phi}_{4}}{{\phi}_{1} -\sqrt{-1} {\phi}_{2}},
$$
then $\omega$ is a holomorphic $1$-form and $g_{1}$ and $g_{2}$ are meromorphic functions on $\Sigma$.
Moreover the holomorphic map $G:=(g_{1}, g_{2})\colon \Sigma \to \RC \times \RC$ coincides with the Gauss map of $X(\Sigma)$.
We remark that the Gauss map of $X(\Sigma)$ in ${\R}^{4}$ is the map which sends each point of $\Sigma$ to its oriented tangent plane. The set of all
oriented (tangent) planes in ${\R}^{4}$ is naturally identified with the quadric
$$
\mathbf{Q}^{2}(\C) =\{[w^{1}: w^{2}: w^{3}: w^{4}] \in \mathbf{P}^{3}(\C) \, ;\, (w^{1})^{2}+\cdots +(w^{4})^{2} = 0\}
$$
in $\mathbf{P}^{3}(\C)$, and the quadric $\mathbf{Q}^{2}(\C)$ is biholomorphic to the product of the Riemann spheres $\RC \times \RC$.
Furthermore the induced metric from ${\R}^{4}$ is given by
\begin{equation}\label{equ-appl-min-2}
ds^{2}= (1+|g_{1}|^{2})(1+|g_{2}|^{2})|\omega|^{2}.
\end{equation}
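The relation (\ref{equ-appl-min-2}) can be spot-checked numerically. The following sketch (our own illustration, using the standard combinations $\phi_{1}=(1+g_{1}g_{2})\omega/2$, etc., which also appear in Section \ref{section3.3}) verifies at random points that $\sum_{i}\phi_{i}^{2}=0$ and that $\sum_{i}|\phi_{i}|^{2}$ reproduces $(1+|g_{1}|^{2})(1+|g_{2}|^{2})|\omega|^{2}$ up to the overall normalization factor $2$:

```python
# Spot check of the Weierstrass-type relations at random sample points.
import random
random.seed(1)

for _ in range(100):
    g1, g2, w = [complex(random.uniform(-2, 2), random.uniform(-2, 2))
                 for _ in range(3)]
    phi = [(1 + g1 * g2) * w / 2, 1j * (1 - g1 * g2) * w / 2,
           (g1 - g2) * w / 2, -1j * (g1 + g2) * w / 2]
    # isotropy condition: sum phi_i^2 = 0
    assert abs(sum(p * p for p in phi)) < 1e-9
    # conformal factor: 2 * sum |phi_i|^2 = (1+|g1|^2)(1+|g2|^2)|w|^2
    lhs = 2 * sum(abs(p) ** 2 for p in phi)
    rhs = (1 + abs(g1) ** 2) * (1 + abs(g2) ** 2) * abs(w) ** 2
    assert abs(lhs - rhs) < 1e-9
print("relations verified at 100 random points")
```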
Applying Theorem \ref{thm-main} to the metric $ds^{2}$, we can get the Fujimoto theorem for the Gauss map of complete orientable
minimal surfaces in ${\R}^{4}$.
\begin{theorem}\cite[Theorem I\hspace{-.1em}I]{Fu1988}\label{thm-appl-1}
Let $X\colon \Sigma \to {\R}^{4}$ be a complete orientable nonflat minimal surface and
$G=(g_{1}, g_{2})\colon \Sigma \to \RC \times \RC $ the Gauss map of $X(\Sigma)$.
\begin{enumerate}
\item[(i)] Assume that $g_{1}$ and $g_{2}$ are both nonconstant and omit $q_{1}$ and $q_{2}$ distinct values respectively.
If $q_{1}> 2$ and $q_{2}> 2$, then we have
\begin{equation}\label{equ-appl-min-3}
\dfrac{1}{q_{1}-2}+\dfrac{1}{q_{2}-2}\geq 1.
\end{equation}
\item[(i\hspace{-.1em}i)] If either $g_{1}$ or $g_{2}$, say $g_{2}$, is constant, then $g_{1}$ can omit at most
$3$ distinct values.
\end{enumerate}
\end{theorem}
\begin{proof}
We first show (i). Since $g_{1}$ and $g_{2}$ are both nonconstant and $m_{1}=m_{2}=1$ from (\ref{equ-appl-min-2}),
we can prove the inequality (\ref{equ-appl-min-3}) by Theorem \ref{thm-main}. Next we show (i\hspace{-.1em}i).
If $g_{1}$ omits $q_{1}$ values, then we obtain
$$
\dfrac{1}{q_{1}-2}\geq 1
$$
from Theorem \ref{thm-main} because $m_{1}=1$. Thus we have $q_{1}\leq 3$.
\end{proof}
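As a small sanity check of inequality (\ref{equ-appl-min-3}), one can enumerate the admissible pairs $(q_{1}, q_{2})$; the sketch below (our own illustration) confirms that both numbers can never exceed $4$ simultaneously, the borderline case being $(4,4)$:

```python
# Enumerate pairs (q1, q2) with q1, q2 > 2 compatible with the inequality
# 1/(q1-2) + 1/(q2-2) >= 1 from the theorem above.
allowed = [(q1, q2) for q1 in range(3, 8) for q2 in range(3, 8)
           if 1 / (q1 - 2) + 1 / (q2 - 2) >= 1]
print(allowed)
# No pair with both entries >= 5 satisfies the inequality, and (4, 4) is
# the only admissible pair with min(q1, q2) = 4.
assert all(min(p) <= 4 for p in allowed)
assert [p for p in allowed if min(p) == 4] == [(4, 4)]
```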
Hence we see that the Fujimoto theorem reflects the orders of the factors $(1+|g_{1}|^{2})$ and $(1+|g_{2}|^{2})$ in
the induced metric from ${\R}^{4}$ and the Euler characteristic of the Riemann sphere $\RC$.
\subsection{Gauss images of complete minimal Lagrangian surfaces in ${\C}^{2}$}\label{section3.2}
There exists a complex representation for a minimal Lagrangian surface $\Sigma\,(\subset {\C}^{2})$ in terms of
holomorphic data. Concerning this representation, Chen--Morvan \cite{CM1987} proved that there exists an explicit
correspondence in ${\C}^{2}$ between minimal Lagrangian surfaces and holomorphic curves satisfying a nondegeneracy condition.
Indeed, this correspondence is given by exchanging the orthogonal complex structure $J$ in ${\C}^{2}$ to another one
on ${\R}^{4}={\C}^{2}$. For the complete case, this result can also be proved from \cite[Theorem I\hspace{-.1em}I]{Mi1984} and
the well-known fact \cite{HL1982} that any minimal Lagrangian submanifold in ${\C}^{n}$ is stable.
More generally, H\'elein-Romon \cite{HR2000, HR2002} and the first author \cite{Ai2001, Ai2004}
proved that every Lagrangian surface $\Sigma$ in ${\C}^{2}$, not necessarily minimal, is represented in terms of
a plus spinor (or a minus spinor) of the $\text{spin}^{\C}$ bundle
$(\underline{\C}_\Sigma\oplus \underline{\C}_\Sigma)\oplus (K^{-1}_{\Sigma}\oplus K_{\Sigma})$ satisfying the Dirac equation with
potential (see \cite[Section\,1]{Ai2004} for details). Here,
$\underline{\C}_{\Sigma}$ and $K_{\Sigma}$ denote respectively the trivial complex line bundle and the canonical complex line bundle
of $\Sigma$. Note that the representation in terms of plus spinors in
$\Gamma (\underline{\C}_{\Sigma}\oplus \underline{\C}_{\Sigma}) = \Gamma (\Sigma\times {\C}^{2})$ given by
the first author is a natural generalization of the one given by Chen-Morvan.
Here we remark that the Lagrangian angle of any minimal Lagrangian surface is constant.
Combining these results, we get the following:
\begin{theorem}{$($\cite{CM1987}, \cite{Ai2001, Ai2004}$)$}\label{thm-appl-2}
Let $\Sigma$ be a Riemann surface with an isothermal coordinate $z=u+\sqrt{-1}v$
around each point. Let $F = (F_{1}, F_{2})\colon \Sigma\to {\C}^{2}$ be a holomorphic map satisfying
$|S_{1}|^{2}+|S_{2}|^{2}\not= 0$ everywhere on $\Sigma$, where
$S_{1}:= (F_{2})'_{z} = dF_2/dz$ and $S_{2}:= - (F_{1})'_{z} = - dF_1/dz$.
Then
\begin{equation}\label{equ-appl-lag-1}
f=\dfrac{1}{\sqrt{2}}e^{\sqrt{-1}\,\beta/2}(F_{1}-\sqrt{-1}\, \overline{F_{2}}, F_{2}+\sqrt{-1}\, \overline{F_{1}})
\end{equation}
is a minimal Lagrangian conformal immersion from $\Sigma$ to ${\C}^{2}$ with constant Lagrangian angle $\beta \in {\R}/2\pi\Z$.
The induced metric $ds^{2}$ on $\Sigma$ by $f$ and its Gaussian curvature $K_{ds^{2}}$ are respectively given by
\begin{equation}\label{eq-appl-lag-2}
ds^{2}=(|S_{1}|^{2}+|S_{2}|^{2})|dz|^{2}, \qquad K_{ds^{2}}=-2\dfrac{|S_{1}(S_{2})_{z}-S_{2}(S_{1})_{z}|}{(|S_{1}|^{2}+|S_{2}|^{2})^{3}}.\end{equation}
Conversely, every minimal Lagrangian immersion $f\colon M\to {\C}^{2}$ with constant Lagrangian angle $\beta$ is congruent
to the one constructed as above.
\end{theorem}
Set a meromorphic function $g:=-S_{2}/S_{1}$. Then
$$
G:=(g, e^{\sqrt{-1}\beta})\colon \Sigma \to \RC \times \RC
$$
can be regarded as the Gauss map of $F(\Sigma )$ in ${\R}^{4}={\C}^{2}$ (cf. \cite{HO1980, HO1985}).
Thus we get the following result.
\begin{corollary}\label{thm-appl-3}
The first component $g$ of the Gauss map of a complete minimal Lagrangian surface in ${\C}^{2}$ which is not a Lagrangian
plane can omit at most $3$ values.
\end{corollary}
\begin{proof}
We assume that $g$ omits $q$ distinct values and define the holomorphic $1$-form $\omega :=S_{1}dz$ on $\Sigma$.
In terms of the data $(\omega, g)$ of $\Sigma$, the induced metric can be rewritten as $ds^{2}=(1+|g|^{2})|\omega|^{2}$,
that is, $m_{1}=1$ and $m_{2}=0$. For this case, the first component $g$ of the Gauss map is nonconstant and the second one
is constant. From Theorem \ref{thm-main}, we obtain that $q\leq 1+2=3$.
\end{proof}
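The algebraic identity behind this rewriting, $(1+|g|^{2})|S_{1}|^{2}=|S_{1}|^{2}+|S_{2}|^{2}$ for $g=-S_{2}/S_{1}$, can be spot-checked as follows (an illustrative sketch with random sample values):

```python
# Spot check: with g = -S2/S1 and omega = S1 dz, the induced metric
# (|S1|^2 + |S2|^2)|dz|^2 equals (1 + |g|^2)|omega|^2.
import random
random.seed(3)

for _ in range(100):
    S1 = complex(random.uniform(0.1, 2), random.uniform(0.1, 2))  # S1 != 0
    S2 = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    g = -S2 / S1
    assert abs((1 + abs(g) ** 2) * abs(S1) ** 2
               - (abs(S1) ** 2 + abs(S2) ** 2)) < 1e-9
print("metric rewriting verified")
```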
\subsection{Generalized Gauss images of complete nonorientable minimal surfaces in ${\R}^{4}$}\label{section3.3}
We first summarize some basic facts of nonorientable minimal surfaces in ${\R}^{4}$.
For more details, we refer the reader to \cite{El1986} and \cite{Ma2005}.
Let $\widehat{X}\colon \widehat{\Sigma}\to {\R}^{4}$ be a conformal minimal immersion of a nonorientable
Riemann surface $\widehat{\Sigma}$ in ${\R}^{4}$. If we consider the orientable conformal double
cover $\pi \colon \Sigma \to \widehat{\Sigma}$, then the composition $X:=\widehat{X}\circ \pi \colon \Sigma \to {\R}^{4}$
is a conformal minimal immersion of the orientable Riemann surface $\Sigma$ in ${\R}^{4}$.
Let $I\colon \Sigma \to \Sigma$ denote the antiholomorphic order-two deck transformation associated with the orientable
double cover $\pi \colon \Sigma \to \widehat{\Sigma}$; then $I^{\ast} ({\phi}_{j})=\bar{\phi}_{j}$ $(j=1, \cdots, 4)$ or, equivalently,
\begin{equation}\label{eq-appl-nonori-1}
g_{1}\circ I = -\dfrac{1}{\bar{g_{1}}}, \qquad g_{2}\circ I = -\dfrac{1}{\bar{g_{2}}}, \qquad I^{\ast}\omega = \overline{g_{1}g_{2}\omega}.
\end{equation}
Conversely, if $(g_{1}, g_{2}, \omega)$ is the Weierstrass data of an orientable minimal surface $X\colon \Sigma \to {\R}^{4}$ and
$I$ is an antiholomorphic involution without fixed points in $\Sigma$ satisfying (\ref{eq-appl-nonori-1}), then the unique map
$\widehat{X}\colon \widehat{\Sigma}=\Sigma /\langle I \rangle \to {\R}^{4}$ satisfying that $X=\widehat{X}\circ \pi$ is
a nonorientable minimal surface in ${\R}^{4}$.
The fact that $g_{k}\circ I= -(\bar{g_{k}})^{-1}$ $(k=1, 2)$ implies the existence of a map $\hat{g_{k}}\colon \widehat{\Sigma}
\to \R\Pi^{2}$ satisfying $\hat{g_{k}}\circ \pi = {\pi}_{0} \circ g_{k}$, where
${\pi}_{0}\colon \RC \to \R\Pi^{2}\equiv \RC /\langle I_{0} \rangle$ is the natural projection and $I_{0}:=-(\bar{z})^{-1}$ is
the antipodal map of $\RC$. We call the map
$\widehat{G}=(\hat{g_{1}}, \hat{g_{2}})\colon \widehat{\Sigma} \to \R\Pi^{2}\times \R\Pi^{2}$
the {\it generalized Gauss map} of $\widehat{X}(\widehat{\Sigma})$. Applying Theorem \ref{thm-appl-1} to the
generalized Gauss map, we get the following:
\begin{corollary}\label{thm-appl-nonori-1}
Let $\widehat{X}\colon \widehat{\Sigma} \to {\R}^{4}$ be a nonflat complete nonorientable minimal surface and
$\widehat{G}=(\hat{g_{1}}, \hat{g_{2}})$ the generalized Gauss map of $\widehat{X}(\widehat{\Sigma})$.
\begin{enumerate}
\item[(i)] Assume that $\hat{g}_{1}$ and $\hat{g}_{2}$ are both nonconstant and omit $q_{1}$ and $q_{2}$ distinct
points in $\R\Pi^{2}$ respectively. If $q_{1}>1$ and $q_{2}> 1$, then
\begin{equation}\label{eq-appl-nonori-2}
\dfrac{1}{q_{1}-1}+\dfrac{1}{q_{2}-1}\geq 2.
\end{equation}
\item[(i\hspace{-.1em}i)] If either $\hat{g}_{1}$ or $\hat{g}_{2}$, say $\hat{g}_{2}$, is constant,
then $\hat{g}_{1}$ can omit at most $1$ point in $\R\Pi^{2}$.
\end{enumerate}
\end{corollary}
The inequality (\ref{eq-appl-nonori-2}) is optimal, as the following examples show.
\begin{proposition}\label{thm-appl-nonori-2}
There exist nonflat complete nonorientable minimal surfaces in ${\R}^{4}$ for which
each component $\hat{g_{i}}$ ($i=1, 2$) of the generalized Gauss map $\widehat{G}=(\hat{g_{1}}, \hat{g_{2}})$
is nonconstant and omits $2$ distinct points in $\R\Pi^{2}$.
\end{proposition}
\begin{proof}
We take $2$ distinct points $\alpha$, $\beta$ in $\C\backslash \{0\}$ and assume that $\alpha \not= -(\bar{\beta})^{-1}$.
Let $\Sigma$ be the complex plane punctured at $4$ distinct points $\alpha$, $\beta$, $-(\bar{\alpha})^{-1}$,
$-(\bar{\beta})^{-1}$. We set
$$
\check{g}_{1}= z, \qquad \check{g}_{2}= z, \qquad \check{\omega} = \dfrac{dz}{(z-\alpha )(z-\beta )(\bar{\alpha}z+1)(\bar{\beta}z+1)}
$$
on $\Sigma$. If we define $\check{I}\colon \Sigma \to \Sigma$, $\check{I}(z)=-(\bar{z})^{-1}$, then $\check{I}$ is an
antiholomorphic involution without fixed points and the following identities hold:
\begin{equation}\label{thm-appl-nonori-3}
{\check{g}}_{1}\circ \check{I}=-\dfrac{1}{\bar{\check{g}}_{1}}, \qquad
{\check{g}}_{2}\circ \check{I}=-\dfrac{1}{\bar{\check{g}}_{2}}, \qquad
{\check{I}}^{\ast}\check{\omega} =\overline{\check{g}_{1}\check{g}_{2}\check{\omega}}.
\end{equation}
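These identities can be verified numerically. The sketch below (our own illustration with sample values of $\alpha$, $\beta$) checks, at random points, that $\check{I}$ is an involution and that the pullback identity ${\check{I}}^{\ast}\check{\omega} =\overline{\check{g}_{1}\check{g}_{2}\check{\omega}}$ holds for the coefficient functions; for the antiholomorphic map $\check{I}(z)=-1/\bar{z}$ one has $\check{I}^{\ast}dz=\bar{z}^{-2}\,d\bar{z}$:

```python
# Numerical check of the symmetry identities for the Weierstrass data
# g1 = g2 = z, omega = dz / ((z-a)(z-b)(conj(a) z + 1)(conj(b) z + 1)).
import random
random.seed(2)

def w_hat(z, a, b):
    return 1 / ((z - a) * (z - b)
                * (a.conjugate() * z + 1) * (b.conjugate() * z + 1))

a, b = 1.3 + 0.4j, -0.7 + 2.1j        # sample points with a != -1/conj(b)
for _ in range(100):
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    I_z = -1 / z.conjugate()
    assert abs(-1 / I_z.conjugate() - z) < 1e-12   # involution: I(I(z)) = z
    # pullback identity: w_hat(I(z)) / conj(z)^2 = conj(z^2 w_hat(z))
    lhs = w_hat(I_z, a, b) / z.conjugate() ** 2
    rhs = (z ** 2 * w_hat(z, a, b)).conjugate()
    assert abs(lhs - rhs) < 1e-9 * (1 + abs(rhs))
print("involution symmetries verified")
```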
Thus if we set
$$
\check{\phi}_{1}=\dfrac{1}{2}(1+\check{g}_{1}\check{g}_{2})\check{\omega}, \;
\check{\phi}_{2}=\dfrac{\sqrt{-1}}{2}(1-\check{g}_{1}\check{g}_{2})\check{\omega}, \;
\check{\phi}_{3}=\dfrac{1}{2}(\check{g}_{1}-\check{g}_{2})\check{\omega}, \;
\check{\phi}_{4}=-\dfrac{\sqrt{-1}}{2}(\check{g}_{1}+\check{g}_{2})\check{\omega},
$$
then we easily show that $\check{I}^{\ast}{\check{\phi}}_{i}= \overline{\check{\phi}}_{i}$ ($i=1, \cdots , 4$).
Moreover these holomorphic $1$-forms satisfy that $\sum_{i=1}^{4}{\check{\phi}}_{i}^{2}\equiv 0$ and
$\sum_{i=1}^{4}|{\check{\phi}}_{i}|^{2}$ is a complete conformal metric on $\Sigma$.
Let $\widetilde{\Sigma}$ be a universal cover
surface of $\Sigma$. By the uniformization theorem, we may assume that $\widetilde{\Sigma}$ is the unit disk $\D$.
Let $\pi \colon \D\to \Sigma$ be the conformal universal covering map and $\widetilde{I}$ a lift of $\check{I}$ to $\D$.
If we set $\tilde{\phi}_{i}:= \pi^{\ast}(\check{\phi}_{i})$, then
$\widetilde{I}^{\ast} (\tilde{\phi}_{i})= \overline{\tilde{\phi}_{i}}$ ($i=1, \cdots , 4$).
Since $\check{I}$ is an antiholomorphic involution on $\Sigma$ without fixed points,
$\widetilde{I}^{2k+1}$ $(k\in \Z)$ is also an antiholomorphic transformation on $\D$ without fixed points.
From the argument of the proof of Lemma 1 in \cite{LM2000}, $\widetilde{I}^{2k}$ $(k\in \Z\backslash \{0\})$
has no fixed points on $\D$, $\langle \widetilde{I}^{2} \rangle \simeq \Z$, and
$\D / \langle \widetilde{I}^{2} \rangle$ is biholomorphic to the annulus $A(R) =\{z\in\C \,;\, R^{-1}< |z|< R\}$
for a suitable $R >1$. Since $(\widetilde{I}^{2})^{\ast}(\tilde{\phi}_{i}) = \tilde{\phi}_{i}$, each
holomorphic 1-form $\tilde{\phi}_{i}$ ($i=1, \cdots, 4$) can be induced on the quotient
$\D / \langle \widetilde{I}^{2} \rangle$. The corresponding holomorphic 1-forms on
$\D / \langle \widetilde{I}^{2} \rangle$ are denoted by ${\phi}_{1}$, ${\phi}_{2}$, ${\phi}_{3}$ and ${\phi}_{4}$,
and obviously satisfy that $\sum_{i=1}^{4} {\phi}_{i}^{2}\equiv 0$, $ds^{2}:= \sum_{i=1}^{4} |{\phi}_{i}|^{2}$ is
a complete conformal metric on $\D / \langle \widetilde{I}^{2} \rangle \simeq A(R)$ and
$I^{\ast}({\phi}_{i})=\bar{{\phi}}_{i}$ ($i=1, \cdots , 4$), where $I\colon A(R)\to A(R)$ is the involution induced by $\widetilde{I}$. Then
it holds that $I(z)=-(\bar{z})^{-1}$ on $A(R)$. Moreover the two meromorphic functions
$$
g_{1}=\dfrac{{\phi}_{3}+\sqrt{-1}{\phi}_{4}}{{\phi}_{1}-\sqrt{-1}{\phi}_{2}} \qquad \text{and} \qquad
g_{2}=\dfrac{-{\phi}_{3}+\sqrt{-1}{\phi}_{4}}{{\phi}_{1}-\sqrt{-1}{\phi}_{2}}
$$
on $A(R)$ omit 4 points $\alpha$, $\beta$, $-(\bar{\alpha})^{-1}$ and $-(\bar{\beta})^{-1}$ in $\RC$.
Let $f\colon \RC \to \RC$ be a rational function given in Lemma 2 in \cite{LM2000}, that is, the function $f$ satisfies the following
three conditions:
\begin{enumerate}
\item[(a)] The only poles of $f$ are $0$ and $\infty$,
\item[(b)] $f\circ I_{0}= \bar{f}$,
\item[(c)] $f$ has no zeros on the circle $\{z \, ;\, |z|=1\}$.
\end{enumerate}
Set ${\phi}_{j}=({\varphi}_{j}/z) dz$ $(j=1, \cdots , 4)$ and write the Laurent series expansion of ${\varphi}_{j}$ as
$$
\displaystyle {\varphi}_{j}(z) = a_{0}^{j}+ \sum_{n>0} (a_{n}^{j}z^{n}+ (-1)^{n+1}\bar{a}_{n}^{j}z^{-n}), \quad
a_{0}^{j} \in \sqrt{-1}\R.
$$
We easily check that the Laurent series expansion of $f$ is written as
$$
\displaystyle f(z)=\sum_{n=1}^{m} (b_{n}z^{n}+(-1)^{n}\bar{b}_{n}z^{-n}),
$$
where $m\in \Z_{+}$. Let $k$ be an odd positive integer with $k>m$. Then it holds that
\begin{equation}\label{equ-appl-nonori-4}
\text{Res}_{z=0}\Biggl{(}\biggl{[}\sum_{n> 0} (a^{j}_{n}z^{kn}+ (-1)^{n+1}\bar{a}^{j}_{n} z^{-kn}) \biggr{]} f(z)
\dfrac{dz}{z} \Biggr{)}=0, \quad j=1, \cdots , 4.
\end{equation}
Furthermore, by virtue of the properties of $f(z)$, we have
\begin{equation}\label{equ-appl-nonori-5}
\text{Res}_{z=0}\Biggl{(}a^{j}_{0} f(z)\dfrac{dz}{z} \Biggr{)} =0, \quad j=1, \cdots , 4.
\end{equation}
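The residue identities (\ref{equ-appl-nonori-4}) and (\ref{equ-appl-nonori-5}) follow from the observation that, for $k>m$, no exponent $\pm kn$ of the series can cancel an exponent of $f$, and that $f$ has no constant term. A finite-truncation sketch (with hypothetical random coefficients):

```python
# Finite-truncation check of the residue identities: the z^0 coefficient of
# the product of f(z) and the k-dilated series vanishes whenever k > m.
import random
random.seed(0)

def laurent_mul(p, q):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return out

m, k, N = 3, 5, 6                    # truncation order N; any odd k > m works
b = [complex(random.random(), random.random()) for _ in range(m)]
a = [complex(random.random(), random.random()) for _ in range(N)]

f = {}
for n in range(1, m + 1):            # f(z) = sum (b_n z^n + (-1)^n conj(b_n) z^-n)
    f[n] = b[n - 1]
    f[-n] = (-1) ** n * b[n - 1].conjugate()
g = {}
for n in range(1, N + 1):            # sum (a_n z^{kn} + (-1)^{n+1} conj(a_n) z^{-kn})
    g[k * n] = a[n - 1]
    g[-k * n] = (-1) ** (n + 1) * a[n - 1].conjugate()

prod = laurent_mul(f, g)
# Res_{z=0}(g f dz/z) is the z^0 coefficient of g*f; since f itself has no
# z^0 term, the second residue identity holds as well.
assert abs(prod.get(0, 0)) < 1e-12 and 0 not in f
print("residues vanish for k > m")
```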
We consider the covering $T_{k}\colon A(R^{1/k})\to A(R)$, $T_{k}(z) =z^{k}$ and
define the holomorphic $1$-forms $\psi_{j}$ $(j=1, \cdots , 4)$ on $A(R^{1/k})$ as follows:
$$
{\psi}_{j}:= f(z)T^{\ast}_{k}({\phi}_{j}) = kf(z){\varphi}_{j}(z^{k}) \dfrac{dz}{z}.
$$
From (\ref{equ-appl-nonori-4}) and (\ref{equ-appl-nonori-5}), we deduce that each
$\displaystyle \int^{z}_{1} {\psi}_{j}$ is well-defined on $A(R^{1/k})$.
Moreover $\sum_{j=1}^{4} {\psi}_{j}^{2}\equiv 0$ holds. Since $k$ is odd, we have
\begin{equation}\label{thm-appl-nonori-6}
I^{\ast}({\psi}_{j}) = \bar{\psi}_{j}, \quad j=1, \cdots , 4,
\end{equation}
where $I\colon A(R^{1/k})\to A(R^{1/k})$ is the lift of the previous involution in $A(R)$.
Indeed, $I$ is represented as $I (z)=-(\bar{z})^{-1}$ here. We note that $\lim_{k\to \infty} R^{1/k}= 1$
and the zeros of $f$ are not on the circle $\{z \,;\, |z|=1\}$. Thus, taking $k$ large enough, we may assume that
$f$ never vanishes on the closure of $A(R^{1/k})$. Furthermore, since the only poles of $f$ are $0$ and
$\infty$, there exists some real number $c >1$ such that
$$
\dfrac{1}{c}< |f(z)|< c,
$$
for any $z\in A(R^{1/k})$. Hence $\sum_{j=1}^{4}|{\psi}_{j}|^{2} \not= 0$, and if we define
$ds^{2}_{0}= \sum_{j=1}^{4}|{\psi}_{j}|^{2}$, then we have
$$
\dfrac{1}{c^{2}}T^{\ast}_{k} (ds^{2})\leq ds^{2}_{0} \leq c^2 T^{\ast}_{k} (ds^{2}).
$$
Since $ds^{2}$ is complete, the metrics $T^{\ast}_{k} (ds^{2})$ and $ds^{2}_{0}$ are also complete.
Therefore we obtain the conformal minimal immersion
$$
X\colon A(R^{1/k})\to {\R}^{4}, \quad
\displaystyle X(z)=\text{Re}\int^{z}_{1} ({\psi}_{1}, {\psi}_{2}, {\psi}_{3}, {\psi}_{4})
$$
whose induced metric $ds^{2}_{0}$ is complete and whose Gauss map components $g_{i}\circ T_{k}$
($i=1, 2$) each omit $4$ points in $\RC$. From (\ref{thm-appl-nonori-6}), the immersion $X$ induces a
minimal immersion from the M\"obius strip $A(R^{1/k}) /\langle I \rangle$ to ${\R}^{4}$, and
each component of the generalized Gauss map omits $2$ points in $\R\Pi^{2}$.
\end{proof}
\begin{remark}\label{rmk-appl-nonori-2}
From a similar argument of the proof, we can show that there exist nonflat complete nonorientable minimal surfaces
in ${\R}^{4}$ for which one component of the generalized Gauss map is nonconstant and omits $1$
point in $\R\Pi^{2}$ while the other component is constant.
\end{remark}
Finally, we deal with value distribution of the generalized Gauss map of complete nonorientable minimal surfaces
in ${\R}^{4}$ with finite total curvature. Applying \cite[Theorem 6.9]{HO1985} (see also \cite[Theorem 3.2]{Ka2009})
to the generalized Gauss map, we get the following:
\begin{proposition}\label{thm-appl-nonori-3}
Let $\widehat{X}\colon \widehat{\Sigma}\to {\R}^{4}$ be a nonflat complete nonorientable minimal surface with finite total curvature
and $\widehat{G}=(\hat{g_{1}}, \hat{g_{2}})$ the generalized Gauss map of $\widehat{X}(\widehat{\Sigma})$.
\begin{enumerate}
\item[(i)] Assume that $\hat{g}_{1}$ and $\hat{g}_{2}$ are both nonconstant. Then at least one of them can
omit at most $1$ point in $\R\Pi^{2}$.
\item[(i\hspace{-.1em}i)] If either $\hat{g}_{1}$ or $\hat{g}_{2}$, say $\hat{g}_{2}$, is constant,
then $\hat{g}_{1}$ can omit at most $1$ point in $\R\Pi^{2}$.
\end{enumerate}
\end{proposition}
However, we do not know whether Proposition \ref{thm-appl-nonori-3} is optimal.
\section{Introduction}
It is by and large recognized that the key properties of high-temperature superconducting materials
can be explained with the help of the two-dimensional $t$--$J$ model.\cite{spalek,pwa,zr} Therefore,
the major interest is
focused on this model in the regime that seems to be relevant to high-temperature superconductivity.
Since the Hubbard on-site repulsion $U$ in cuprates is
usually assumed to be an order of magnitude bigger than the hopping integral $t$,
the relevant AFM coupling between nearest-neighbor sites, $J=4t^2/U$, is of the order of a few
tenths of $t$.
Much less is known about the $t$--$J$ model in the small--$J$ limit, i.e.,
the large--$U$ limit of the Hubbard model. At half filling the infinite--$U$
Hubbard model has AFM ground state. However, in 1965 Nagaoka proved a theorem\cite{nagaoka} which
states that when exactly one hole is introduced the ground state becomes FM. In the infinite--$U$
limit the ground state of the half filled Hubbard model is macroscopically degenerate. When a single hole
is introduced this degeneracy is lifted, since it is energetically favorable for the hole to move in a
background of fully aligned spins.
A simple proof of Nagaoka's theorem was later given by Tasaki,\cite{tasaki}
who also showed that additional density-dependent interactions do not alter this result.
The Nagaoka theorem
is one of very few rigorous results concerning strongly correlated electronic systems. However,
it does not say anything about the case of a finite density of
holes or of a finite AFM interaction. The question of the character of the leading instability
of the Nagaoka state with respect to the AFM exchange term or finite density of holes has
attracted much interest.
For finite $J$ ($J>0$) and/or finite doping the ground state is determined by the competition between
antiferromagnetism favored by the exchange interaction and Nagaoka's ferromagnetism favored by the kinetic
energy. This competition presumably drives the system into two phases, a hole-rich FM region
where the kinetic energy is minimized and a region with localized electrons characterized by AFM
order.
Generally, the $t$--$J$ model may display different types of phase separation depending on the
value of $J/t$ and
the actual state of affairs is far from being clear.\cite{ekl,hellberg,tae,riera,putikka,shih,dagotto,
ph_sep,ivanov,batista} It was demonstrated in Refs. \onlinecite{ekl,hellberg,tae} that phase separation takes
place in the $t$--$J$ model for all values of $J$. Other authors\cite{riera,putikka,shih,dagotto} find
phase separation
only for large $J$. It is very difficult to establish unambiguously the presence or absence of phase separation
in the small--$J$ limit of the $t$--$J$ model. The main reason is that a high-accuracy, unbiased calculation
of the ground state energy as a function of the hole density is required to assess the
competition between the interaction and kinetic energies. The spatial inhomogeneity in the
phase separated state makes analytical approaches rather involved. On the other hand, since the
FM bubbles are relatively large,\cite{eisenberg} it is difficult
to apply numerical approaches like the quantum Monte Carlo method or exact diagonalization. In this situation,
a computational method that is able to tackle a system sufficiently large to describe the
spatially separated state in the small--$J$ limit of the
$t$--$J$ model
is required.
In this paper, we demonstrate that Monte Carlo simulations for the recently proposed Ising
version of the $t$--$J$ model\cite{mmfk} provide sound and reliable results in this limiting case.
We employ this approach to study the formation of a bubble of the FM phase when holes are
introduced into an AFM background.
Namely, we investigate the so--called Nagaoka polaron which sets in for vanishing
doping and $J/t \ll 1$.\cite{white}
The Nagaoka polaron represents an intermediate state between the homogeneous FM Nagaoka
ground state for $J=0$ and the standard spin polaron for $J>0.2$.
Numerical studies of the Nagaoka polaron are difficult
because of its large spatial dimensions: for $J \rightarrow 0$ its radius diverges as $J^{-1/4}$.
In this paper we formulate an effective description of the Nagaoka regime,
which is based on the recently proposed Ising version of the $t$--$J$ model.\cite{mmfk}
In the subsequent sections we recall the basic properties of this model
and explain why it gives the same physical picture of the Nagaoka regime as the standard
isotropic $t$--$J$ model. These qualitative arguments are accompanied by quantitative comparison
with the numerical results for the isotropic model.
For the reader's convenience, we first recall density matrix renormalization group (DMRG) studies\cite{white} on
the Nagaoka {\em polaron}. We then present new results for the Nagaoka {\em bipolaron}, studied in the
$t$--$J$ model by means of exact diagonalization in the limited functional space (EDLFS).\cite{janez1}
Numerical calculations within the Ising version are far less demanding; hence much bigger clusters
and/or much larger doping become accessible. After having successfully tested the single and two--holes cases,
we investigate the ground state properties of the $t$--$J$ model
($J/t \ll 1$) doped with several holes. In particular, we show that all holes are confined in a single FM polaron.
We discuss how its energy and spatial dimensions depend on the number of holes. For low hole densities, our results provide evidence that the leading instability of the
FM Nagaoka state is phase separation rather than a single spin flip.
\section{the Ising $t$--$J$ model}
We start with the $t$--$J$ Hamiltonian on a square lattice \cite{spalek}
\begin{equation}
H_{t-J}=-\sum_{ij\sigma} t_{ij} \tilde{c}_{i\sigma}^{\dagger}
\tilde{c}_{j\sigma}+ J\sum_{\langle ij\rangle} \left(\bm Q_i \bm Q_j -
\frac{1}{4}\tilde{n}_i\tilde{n}_j\right),\label{2.1}
\end{equation}
where
$\tilde{c}_{i\sigma}=Pc_{i\sigma}P=c_{i\sigma}(1-n_{i,-\sigma})$ is
the projected electron operator (to exclude the on-site double
occupancy), $\bm
Q_i=\sum_{\sigma,\sigma'}{c}_{i\sigma}^{\dagger}\bm\tau_{\sigma\sigma'}{c}_{i\sigma'},
\,\bm\tau^2=3/4, $ is the electron spin operator and $\tilde
n_i=Pn_iP=n_{i\uparrow}+n_{i\downarrow}-2n_{i\uparrow}n_{i\downarrow}$
is the projected electron number operator.
Hamiltonian~(\ref{2.1}) contains a kinetic term with the hopping
integrals $t_{ij}$ and a potential $J$ describing the strength of
the nearest neighbor spin exchange interaction. At every lattice
site the Gutzwiller projection operator
$P=\prod_i(1-n_{i\sigma}n_{i-\sigma})$ projects out the doubly
occupied states. Physically this modification of the
original Hilbert space results in strong electron correlation
effects.
At the low-energy scale of order $J\,(\ll t)$, it is reasonable to consider the background
spin configuration to be nearly frozen with respect to the hole dynamics.
In this case the properties of the low-energy quasiparticle excitations in the $t$--$J$ model
are at least qualitatively similar to those in the anisotropic $t$--$J_z$ model,
in which the spin-flip part of the Heisenberg interaction is dropped:
\begin{equation}
H_{t-J_z}=-\sum_{ij\sigma} t_{ij} \tilde{c}_{i\sigma}^{\dagger}
\tilde{c}_{j\sigma}+ J_z\sum_{\langle ij\rangle} \left( Q^z_i Q^z_j -
\frac{1}{4}\tilde{n}_i\tilde{n}_j\right).\label{z1}
\end{equation}
The global continuous spin
SU(2) symmetry of the $t$--$J$ model now reduces to the global
discrete Z$_2$ symmetry of the $t$--$J_z$ model.
Although the $Q^z_i Q^z_j$ interaction possesses only the discrete Z$_2$ symmetry, the
original SU(2) symmetry of all the other terms of the Hamiltonian is preserved.
Therefore, the symmetry of the $t$--$J_z$ model depends
on whether $J_z$ is zero or finite: for $J_z =0$ the full SU(2) symmetry
is restored.
Although this model is more amenable to numerical calculations, again only rather small lattice clusters
are allowed.
One may hope that the full Ising version of the $t$--$J$ model in which the $t$-term also possesses the global
discrete $Z_2$ spin symmetry results in a more tractable though still nontrivial model.
It by definition has the global
discrete $Z_2$ symmetry, regardless of the values of the model parameters. This implies that
the resulting system can be thought of as a doped classical Ising model.
However, it is not clear how such a model can be derived directly from (\ref{2.1}),
since the projected electron operators $\tilde{c}_{i\sigma}$ transform themselves in the
fundamental representation of SU(2). To overcome this problem, the recently proposed spin-dopon
representation of the projected electron operators can be used.\cite{mmfk}
The idea behind that approach is to assign fermion operators $d_{i\sigma}$ to
doped carriers (holes, for example) rather than to the lattice electrons.
The enlarged on-site Hilbert space is spanned by the state vectors $|\sigma a\rangle,$ with
$\sigma=\Uparrow,\Downarrow$ labeling the 2D Hilbert space of the lattice spin,
$\bm{S}_i$, and $a=0,\uparrow,\downarrow,\uparrow\downarrow$ labeling the $4D$ on-site
dopon Hilbert space. The physical space consists of the spin-up $|\Uparrow 0\rangle_i$,
spin-down $|\Downarrow 0\rangle_i$, and spinless vacancy $(|\Uparrow \downarrow\rangle_i
- |\Downarrow \uparrow\rangle_i)/\sqrt{2}$ states.\cite{ribeiro} The remaining unphysical
states are removed by the constraint \cite{fku}
\begin{eqnarray}
\bm S_i\bm{M}_i +\frac{3}{4}n^d_i =0 \label{2.3},
\end{eqnarray}
where $\bm M_i=\sum_{\sigma,\sigma'}{d}_{i\sigma}^{\dagger} \bm\tau_{\sigma\sigma'}{d}_{i\sigma'}$
stands for the dopon spin operator so that
\begin{equation}
\bm{Q}_i=\bm{S}_i+\bm{M}_i.
\label{Q}\end{equation}
The physical electron operator $\tilde{c}_{i\sigma}$ is then expressed in terms of the
spin and dopon operators projected onto the physical subspace singled out by Eq. (\ref{2.3}).
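The content of the constraint (\ref{2.3}) can be checked explicitly in the $8$-dimensional on-site space. The sketch below (our own numerical illustration, not part of the derivation) builds $\bm S_i$, $\bm M_i$ and $n^d_i$ as matrices and verifies that the operator $\bm S_i\bm M_i+\tfrac{3}{4}n^d_i$ annihilates the three physical states while acting nontrivially on an unphysical (triplet) state:

```python
# Check that S.M + (3/4) n_d annihilates the physical on-site states
# |up,0>, |down,0> and the vacancy singlet (|up,dn> - |down,up>)/sqrt(2).
import numpy as np

tau = [np.array([[0, 1], [1, 0]], dtype=complex) / 2,     # tau^x
       np.array([[0, -1j], [1j, 0]]) / 2,                 # tau^y
       np.array([[1, 0], [0, -1]], dtype=complex) / 2]    # tau^z

# Dopon Fock space with basis |0>, |up>, |dn>, |up dn> (= d_up^+ d_dn^+ |0>).
d_up = np.zeros((4, 4), dtype=complex); d_up[0, 1] = 1; d_up[2, 3] = 1
d_dn = np.zeros((4, 4), dtype=complex); d_dn[0, 2] = 1; d_dn[1, 3] = -1
d = [d_up, d_dn]
nd = d_up.conj().T @ d_up + d_dn.conj().T @ d_dn

I2, I4 = np.eye(2), np.eye(4)
S = [np.kron(t, I4) for t in tau]                         # lattice spin
M = [np.kron(I2, sum(tau[a][s, sp] * d[s].conj().T @ d[sp]
                     for s in range(2) for sp in range(2)))
     for a in range(3)]                                   # dopon spin

C = sum(S[a] @ M[a] for a in range(3)) + 0.75 * np.kron(I2, nd)

e = np.eye(8)                 # index = 4*(spin index) + (dopon state)
vac = (e[2] - e[5]) / np.sqrt(2)                          # vacancy singlet
for v in (e[0], e[4], vac):                               # physical states
    assert np.allclose(C @ v, 0)
assert not np.allclose(C @ e[1], 0)                       # unphysical triplet
print("constraint (2.3) singles out the physical states")
```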
In view of the relation
$(S^{\alpha})^2=1/4$, the constraint (\ref{2.3}) can equivalently be written
in the form
\begin{eqnarray}
\sum_{\alpha=x,y,z} S^{\alpha}_i{M}^{\alpha}_i +n^d_i\!\!\!\sum_{\alpha=x,y,z}(S^{\alpha}_i)^2 =0
\label{2.3*}.
\end{eqnarray}
Within the full Ising $t$--$J$ model, one should have $Q_i^{\pm}=Q_i^{x}\pm iQ_i^{y}\equiv 0$.
In view of Eq. (\ref{Q}), this requires $S_i^{\pm}=M_i^{\pm}=0$. To explicitly derive the
Ising $t$--$J$ model, we therefore project the dopon operators onto the Hilbert space
singled out by the local constraint
\begin{equation}
S^z_iM^z_i+ \frac{1}{4}n^d_i=0,
\label{2.4}\end{equation}
which can be thought of as an ``Ising'' form of Eq.~(\ref{2.3}). It represents the Z$_2$ singlet
under the transformations $Q^z_i\to \pm Q^z_i$ of $Z_2\subset SU(2)$.
The physical electron projected operators reduce to the Z$_2$ spinors:
\begin{eqnarray}
\tilde c_{i\downarrow}&=&{\cal P}^{\rm ph}_i d_{i\uparrow}^{\dagger}{\cal P}^{\rm ph}_i
=\left(\frac{1}{2}-S^z_i\right) d_{i\uparrow}^{\dagger},\label{2.5}\end{eqnarray}
\begin{eqnarray}
\tilde
c_{i \uparrow} &=& {\cal P}^{\rm ph}_i d_{i \downarrow}^{\dagger}{\cal P}^{\rm ph}_i
=\left(\frac{1}{2}+S^z_i\right)d_{i\downarrow}^{\dagger},
\label{2.5*}
\end{eqnarray}
with the projection operator
${\cal P}^{\rm ph}_i=1-(2S^z_iM^z_i+n^d_i/2)$.
It can readily be checked that
$$ Q_i^z=\frac{1}{2}(\tilde c_{i\uparrow}^{\dagger}\tilde c_{i\uparrow}
-\tilde c_{i\downarrow}^{\dagger}\tilde c_{i\downarrow})
=S_i^z+M_i^z,$$
$$Q^{+}_i=(Q^{-}_i)^{\dagger} =\tilde c^{\dagger}_{i\uparrow}\tilde c_{i\downarrow}\equiv 0,$$
as desired: the transverse components of the electron spin operators no longer appear
in the theory.
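These operator identities are easy to verify numerically. The following sketch (ours, not part of the paper) builds the 8-dimensional on-site Hilbert space as the tensor product of the lattice spin and the dopon Fock space and checks Eqs. (\ref{2.4})--(\ref{2.5*}):

```python
import numpy as np

# On-site space: lattice spin (dim 2) x dopon Fock space (dim 4).
# Dopon basis order: |0>, |up>, |dn>, |up dn>, with |up dn> = d_up^+ d_dn^+ |0>.
d_up = np.zeros((4, 4)); d_up[0, 1] = 1.0; d_up[2, 3] = 1.0
d_dn = np.zeros((4, 4)); d_dn[0, 2] = 1.0; d_dn[1, 3] = -1.0  # fermionic sign
I2, I4 = np.eye(2), np.eye(4)

n_up, n_dn = d_up.T @ d_up, d_dn.T @ d_dn
Mz = 0.5 * np.kron(I2, n_up - n_dn)           # dopon spin M^z
nd = np.kron(I2, n_up + n_dn)                 # dopon number n^d
Sz = np.kron(np.diag([0.5, -0.5]), I4)        # lattice spin S^z

# Projector P = 1 - (2 S^z M^z + n^d/2) onto the subspace obeying Eq. (2.4)
P = np.eye(8) - (2 * Sz @ Mz + nd / 2)

c_dn = P @ np.kron(I2, d_up.T) @ P            # Eq. (2.5):  P d_up^+ P
c_up = P @ np.kron(I2, d_dn.T) @ P            # Eq. (2.5*): P d_dn^+ P

Qz = 0.5 * (c_up.T @ c_up - c_dn.T @ c_dn)
assert np.allclose(Qz, (Sz + Mz) @ P)         # Q^z = S^z + M^z on physical states
assert np.allclose(c_up.T @ c_dn, 0)          # Q^+ vanishes identically
assert np.allclose((Sz @ Mz + nd / 4) @ P, 0) # constraint holds on the projected space
```

The asserts confirm that each projected operator has a single nonvanishing matrix element, connecting a spin state to the corresponding vacancy state.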
The underlying onsite Hilbert space rearranges itself in the following way.
The operators (\ref{2.5}) act on the Hilbert space ${\cal
H}_{\downarrow}=\left\{|\Downarrow,0\rangle,\,|\Downarrow,\uparrow\rangle\right\}$.
These operators do not mix in any other states. The operator
$\tilde c_{i\downarrow}$ destroys the spin-down electron and creates a
vacancy. This vacancy is described by the state
$|\Downarrow,\uparrow\rangle$. A similar consideration holds for
the $\tilde c_{i\uparrow}$ operators. Now, however, the vacancy is
described by the state $|\Uparrow,\downarrow\rangle$. Those two vacancy
states are related by the $Z_2$ transformation.
The operator
$(Q^z_i)^2=\frac{1}{4}(1-n^d_i)$ produces zero upon acting on both.
The physical Hilbert space is therefore a direct sum ${\cal
H}_{\rm ph}={\cal H}_{\uparrow}\oplus {\cal H}_{\downarrow}.$ Under the
Z$_2$ transformation $(\uparrow\leftrightarrow\downarrow,\, S^z_i\to
-S^z_i)$ we get ${\cal H}_{\uparrow}\leftrightarrow{\cal
H}_{\downarrow}$, which results in ${\cal H}_{\rm ph}\to {\cal H}_{\rm ph}$.
In the isotropic $t$--$J$ model these two 2D spaces
merge into a 3D SU(2) invariant physical space, where the vacancy is
just an antisymmetric linear combination given by the
SU(2) spin singlet, $(|\Uparrow \downarrow\rangle_i
- |\Downarrow \uparrow\rangle_i)/\sqrt{2}$.
The symmetric combination splits off,
since it represents an unphysical spin-triplet state.
As a result, one arrives at the
representation (\ref{z1}) in which, however, the electron projection
operators are given by Eqs. (\ref{2.5}-\ref{2.5*}). All the parts of this Hamiltonian
possess the global discrete Z$_2$ symmetry
whereas the global continuous SU(2) symmetry is completely lost.
Close to half--filling,
the Ising $t$--$J$ Hamiltonian takes on the form,\cite{mmfk}
\begin{eqnarray}
H^{Ising}_{t-J} &=& \sum_{ij\sigma}t_{ij}
d_{i\sigma}^{\dagger} d_{j\sigma}
+ J\sum_{\langle ij\rangle}\left[\left(S^z_i S^z_j-\frac{1}{4}\right) \right.
\nonumber \\
&& \left. + S_i^zM^z_j +S_j^zM^z_i \right],
\label{2.6}
\end{eqnarray}
which is to be accompanied by the constraint (\ref{2.4}).
The magnetic $M_i^zM^z_j$ term has been dropped
as being small of order $\delta^2$
in the limit $\delta:=\langle n_d\rangle \ll 1$.
In practical calculations, we find it convenient
to implement the constraint (\ref{2.4})
with the help of a Lagrange multiplier.
Since $S^z_iM^z_i+ n^d_i/4\ge 0$,
the global Lagrange-multiplier term
\begin{equation}
\lambda\sum_i\left(S^z_iM^z_i+ \frac{1}{4}n^d_i\right)
\label{cons}
\end{equation}
enforces the constraint (\ref{2.4}) locally: any unphysical
occupancy of a site raises the total energy by an amount proportional to $\lambda\to +\infty$.
Therefore, all unphysical on-site states are
automatically eliminated, so that the local constraint is
taken into account rigorously.
The main difference between the $t$--$J_z$ and Ising $t$--$J$ models
originates from the different symmetries of the hopping terms, as discussed in Ref.~\onlinecite{mmfk}.
The one--hole energy
obtained for the isotropic $t$--$J$ model
is shown to be in between the results obtained for the $t$--$J_z$ and Ising models.
In the regime $J\ll t$, the differences between $t$--$J$ and $t$--$J_z$ models are comparable
to those between $t$--$J$ and Ising models.
\subsection{Monte Carlo approach}
The Hamiltonian (\ref{2.6}) together with the constraint (\ref{cons}) describe a system that
contains both classical ($S^z_i$) and quantum ($d_{i\sigma}$) degrees of freedom.
However, there is no direct interaction between the quantum particles. Therefore, the Ising $t$--$J$
model is closely related to a (multicomponent) Falicov--Kimball model and we can apply an efficient
Monte Carlo (MC) approach derived exclusively for the latter model.\cite{mmkc}
This numerical approach can be applied neither to the standard $t$--$J$ model nor to the $t$--$J_z$ one.
In the latter case one should use the SU(2) invariant constraint (\ref{2.3}) which involves $S^{\pm}$ operators.
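To make this Falicov--Kimball structure concrete, the following sketch (illustrative, not the authors' code; lattice size, $\lambda$, and filling are placeholders) evaluates the total energy for a fixed classical configuration $\{S^z_i\}$. The exchange terms $S^z_iM^z_j$ and the Lagrange term (\ref{cons}) act as on-site potentials for otherwise free dopons, so the quantum part reduces to filling the lowest single-particle levels:

```python
import numpy as np

def total_energy(spins, n_holes, J=0.02, lam=300.0, t=1.0):
    """Energy of the Ising t-J model, Eq. (2.6) plus Eq. (cons), for a
    fixed classical spin configuration `spins` (entries +-1/2) on an
    L x L periodic lattice.  Dopons are free fermions in this background
    (Falicov-Kimball structure): diagonalize the single-particle problem
    for each dopon spin species and fill the lowest `n_holes` levels."""
    L = spins.shape[0]
    N = L * L
    idx = lambda x, y: (x % L) * L + (y % L)
    # classical exchange energy J sum_<ij> (S^z_i S^z_j - 1/4)
    e_cl = J * sum(spins[x, y] * spins[(x + 1) % L, y]
                   + spins[x, y] * spins[x, (y + 1) % L] - 0.5
                   for x in range(L) for y in range(L))
    levels = []
    for sigma in (+1, -1):              # dopon spin species, M^z = sigma/2
        H = np.zeros((N, N))
        for x in range(L):
            for y in range(L):
                i = idx(x, y)
                H[i, idx(x + 1, y)] = H[idx(x + 1, y), i] = t   # hopping
                H[i, idx(x, y + 1)] = H[idx(x, y + 1), i] = t
                nn = (spins[(x + 1) % L, y] + spins[(x - 1) % L, y]
                      + spins[x, (y + 1) % L] + spins[x, (y - 1) % L])
                # exchange field from neighboring classical spins
                # + Lagrange term lam * (S^z_i M^z_i + n^d_i / 4)
                H[i, i] = J * nn * sigma / 2 \
                    + lam * (spins[x, y] * sigma / 2 + 0.25)
        levels.extend(np.linalg.eigvalsh(H))
    levels.sort()
    return e_cl + sum(levels[:n_holes])
```

In a fully polarized background with one hole and $J=0$ this reproduces the free-band minimum $-4t$, while the undoped N\'eel configuration gives the classical exchange energy only.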
The classical MC method has already
been applied to the Ising $t$--$J$ model in order to study dynamics of holes and destruction of the
AFM order with increasing hole concentration.\cite{mmfk} Here, however, we are not
interested in the thermodynamics of the system, but rather in its {\em ground state} properties.
Therefore, the Metropolis algorithm is used for simulated annealing.\cite{kirkpatrick} Once again, since
simulations are performed by a random walk through the configuration space of the classical variables,
there is no need for quantum annealing, and relatively large lattices can be studied. The size of
the lattice for a given $J/t$ and a given number of holes is adjusted so that the size of the polaron
is always significantly smaller. Since the holes can propagate only within the FM region,
the finite size effects become negligible. Most of the calculations are carried out on
20$\times$20 and 30$\times$30 lattices, but in some cases larger clusters, up to 50$\times$50 are necessary
as well. From now on, the nearest-neighbor hopping amplitude is used
as the energy unit ($t_{ij}=t=1$ for $i$ and $j$ being nearest neighbors and $t_{ij}=0$ otherwise).
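The annealing loop itself is standard Metropolis with a cooling schedule. A generic sketch (the energy function is passed as a black box; the schedule parameters are illustrative, not the values used in the paper):

```python
import math
import random
import numpy as np

def simulated_annealing(E, spins, beta0=0.5, beta1=50.0, n_steps=20000, seed=0):
    """Metropolis simulated annealing over the classical Ising variables
    S^z_i = +-1/2.  `E` maps a spin array to a total energy; for the Ising
    t-J model it would include the free-fermion dopon contribution."""
    rng = random.Random(seed)
    L = spins.shape[0]
    e = E(spins)
    for step in range(n_steps):
        # geometric cooling schedule from beta0 to beta1
        beta = beta0 * (beta1 / beta0) ** (step / (n_steps - 1))
        x, y = rng.randrange(L), rng.randrange(L)
        spins[x, y] *= -1                      # propose a single spin flip
        e_new = E(spins)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new                          # accept
        else:
            spins[x, y] *= -1                  # reject: undo the flip
    return spins, e
```

Because only one classical variable changes per step, the single-particle spectrum could in practice be updated incrementally rather than recomputed from scratch.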
\section{Test of a single polaron problem}
Neglecting the spin--flip term is generally a crude approximation, hence
the standard and the Ising $t$--$J$ models describe
very different systems. However, we will show that for vanishing doping
and small $J$ both approaches give rise to the same physical picture of the Nagaoka
polaron.
The density matrix renormalization group (DMRG) studies\cite{white} of the
2D $t$--$J$ model have shown that for $J <0.03$ the Nagaoka polaron is indeed stable.
Its size and energy can be determined by balancing
the kinetic energy of a hole freely propagating within a FM bubble
against the magnetic energy of the FM bubble relative to the energy of the N\'eel state.
Minimizing the sum of these two energies one easily finds expressions for the
radius $R$ and energy $E$ of the Nagaoka polaron (see Ref. \onlinecite{white}):
\begin{equation}
R \simeq 1.12 J^{-1/4}, \quad \quad E \simeq -4+9.2\sqrt{J},
\label{white}
\end{equation}
which for $J<0.03$ accurately fit the DMRG data.
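The scaling in Eq.~(\ref{white}) can be recovered from a simple balance argument (a sketch; the constants $a$ and $b$ depend on the bubble geometry and are not taken from Ref.~\onlinecite{white}): confining the hole to a disk of radius $R$ costs kinetic energy $\sim t/R^2$ above the band minimum $-4t$, while polarizing an area $\sim R^2$ costs magnetic energy $\sim JR^2$,
$$
E(R) \simeq -4t + \frac{a\,t}{R^{2}} + b\,J R^{2},\qquad
\frac{\partial E}{\partial R}=0\;\Rightarrow\;
R=\Big(\frac{a\,t}{b\,J}\Big)^{1/4},\quad
E_{\min}=-4t+2\sqrt{ab\,tJ},
$$
reproducing $R\propto J^{-1/4}$ and $E+4t\propto\sqrt{J}$.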
The Ising $t$--$J$ model displays essentially the same physics.
The constraint (\ref{cons}) allows for propagation of holes only within
{\em static FM} bubbles where $S^z_j=- M^z_j$ and formation of these bubbles takes place at the expense
of the exchange interaction, which favors AFM alignment of $S^z_i$. In both models
the magnetic energy of a bubble is qualitatively similar and is proportional to $J$ multiplied by the number
of FM bonds. Quantitative differences should arise from different energies per bond of the N\'eel ground state of the undoped systems.
Figs. \ref{fig1} and \ref{fig2} show a quantitative comparison of both models. Here, we show the ground
state properties of a single Nagaoka polaron in the Ising $t$--$J$ model for a $50 \times 50$ system
with periodic boundary conditions and $\lambda=300$. The radius and energy of the Nagaoka polaron have been compared with expressions (\ref{white}) as well as with the bare DMRG data
for the SU(2) $t$--$J$ model taken from Ref. \onlinecite{white}. The overall agreement is clearly visible.
To conclude this section, we notice that the Ising $t$--$J$ model provides a simple and
reasonably accurate description of a single Nagaoka polaron for $J \ll 1$, simply because
the spin--flip term becomes irrelevant inside a sufficiently large FM bubble which confines the movement of holes.
\begin{figure}
\includegraphics[width=0.48\textwidth]{fig1}
\caption{(Color online) $J$--dependence of the radius of the Nagaoka polaron
formed by a single hole. Points show results from the MC calculations for the Ising $t$--$J$ model,
continuous line is the power--law fit and the dashed line shows the dependence described by
Eq.~(\ref{white}). Inset shows the same results but on the log--log scale.}
\label{fig1}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig2}
\caption{(Color online) The same as in Fig. \ref{fig1} but for the energy of a single Nagaoka
polaron relative to the energy of the homogeneous N\'eel state. The energy is compared with DMRG
results for the $t$--$J$ model taken from Fig. 4 of Ref. \onlinecite{white}. The other lines correspond
to Eq.~(\ref{white}) for the isotropic $t$--$J$ (dot-dashed violet line) and a similar expression for the
$t$--$J_z$ model from Ref. \onlinecite{white} (dashed green line). The continuous line shows a power-law
fit to the present results.}
\label{fig2}
\end{figure}
\section{The Nagaoka bipolaron problem}
In the preceding section we have successfully tested the case of a single carrier doped into a Mott insulator.
Since our aim is to study a system at finite doping, such a test might still be insufficient. Therefore,
we also investigate the system doped with two holes. On the one hand, it is a first nontrivial step
towards understanding the spatial hole distribution in doped systems (uniform versus inhomogeneous).
On the other hand, due to large spatial dimensions of the Nagaoka polaron, it is already
a challenging problem for fully quantum numerical approaches.
To investigate the Nagaoka bipolaron in the isotropic $t$--$J$ model, we employ the EDLFS method, which
describes the properties of one or more carriers doped into a planar ordered antiferromagnet.\cite{janez1}
One starts from a translationally invariant state of two carriers in the N\' eel background
\begin{equation}
\vert \phi_{0}{\rangle}_p = \sum_{\boldsymbol{\gamma}}(-1)^{M(\boldsymbol{\gamma})}
c_0c_{\boldsymbol{\gamma}}\vert {\rm Neel }\rangle,
\end{equation}
where the sum runs over two nearest neighbors to site 0 and $M(\boldsymbol{\gamma})$ sets the appropriate sign to generate $p_{x(y)}$-wave symmetry.
The kinetic part $H_k$ as well as the off--diagonal spin--flip part $\tilde{H}_J$ of the Hamiltonian (\ref{2.1})
are applied
up to $N_h$ times generating the basis vectors:
\begin{equation}
\left\{|\phi_{l}^{n_h} \rangle \right\}=[H_k + \tilde{H}_J]^{n_h}
|\phi_0 \rangle_p, \quad n_h=0,...,N_h.
\label{EDLS}
\end{equation}
Then, the ground state $|\Psi_0\rangle$ is calculated within the limited functional space by means of the
Lanczos method. The advantage of EDLFS over the standard exact diagonalization (ED) approach follows
from systematic generation of selected states which contain spin excitations in the vicinity of the
carriers. It enables investigation of much larger systems, which is particularly important in the
Nagaoka regime. We apply this method and study the average distance $D$ between two holes
in the $t$--$J$ model.
It follows from the construction of EDLFS that $N_h$ determines the maximum distance between
holes and spin excitations, as well as the maximum accessible $D$.
Therefore, all characteristic length--scales of the investigated problem should be smaller
than a certain $\xi(N_h)$. Since the successive application of the nearest neighbor hopping [see Eq. (\ref{EDLS})]
closely resembles a random-walk process, we expect that $\xi (N_h) \propto \sqrt{N_h}$.
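The random-walk heuristic is easy to check numerically. A standalone sketch (ours, purely illustrative): the RMS end-to-end distance of a nearest-neighbor walk on the square lattice grows as the square root of the number of steps:

```python
import math
import random

def rms_distance(n_steps, n_walks=2000, seed=1):
    """RMS end-to-end distance of a nearest-neighbor random walk on the
    square lattice; illustrates the scaling xi(N_h) ~ sqrt(N_h)."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0.0
    for _ in range(n_walks):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            x += dx; y += dy
        total += x * x + y * y          # accumulate r^2
    return math.sqrt(total / n_walks)   # exact expectation: sqrt(n_steps)
```

Doubling the number of steps thus increases the reach by a factor of $\sqrt{2}$ only, which is why large $N_h$ are needed at small $J$.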
The numerical complexity of the two--hole problem allows us to study $N_h$ up to 12.
Therefore, for small $J$ we carry out a finite size scaling with respect to the size of the Hilbert space
and study $D(J) \equiv D(J, N_h \rightarrow \infty)$.
The finite size effects should vanish when
$\xi (N_h)/D(J) \gg 1 $ or equivalently $N_h/D^2(J) \gg 1 $. We have
found that this vanishing can universally be described by the Gaussian:
\begin{equation}
D(J)-D(J,N_h)=\alpha \exp\left\{-\beta\left[\frac{N_h}{D^2(J)}\right]^2\right\},
\label{extr}
\end{equation}
where parameters $\alpha$ and $\beta$ are independent of $J$.
Note that for a given $J$ fitting of $D(J,N_h)$ involves only a single free parameter
$D(J)$ which simultaneously represents the extrapolated distance between two holes.
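Such a one-parameter fit is straightforward to implement. A sketch (ours; the grid bounds and synthetic parameters are illustrative):

```python
import numpy as np

def extrapolate_D(Nh, D_Nh, alpha, beta):
    """Extract the extrapolated hole-hole distance D(J) from Eq. (extr):
    D(J) - D(J, N_h) = alpha * exp(-beta [N_h / D(J)^2]^2).
    For fixed (alpha, beta) the only free parameter is D(J) itself,
    found here by a simple grid scan of the least-squares residual."""
    Nh, D_Nh = np.asarray(Nh, float), np.asarray(D_Nh, float)
    grid = np.linspace(D_Nh.max(), 3.0 * D_Nh.max(), 4001)
    best, best_loss = grid[0], np.inf
    for D in grid:
        model = D - alpha * np.exp(-beta * (Nh / D ** 2) ** 2)
        loss = np.sum((model - D_Nh) ** 2)
        if loss < best_loss:
            best, best_loss = D, loss
    return best
```

On synthetic data generated from Eq. (\ref{extr}) with a known $D(J)$, the scan recovers the input value to within the grid resolution.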
\begin{figure}[h]
\includegraphics[width=0.53\textwidth]{fig3}
\caption{(Color online) Distance between two holes $D$ in the isotropic $t$--$J$ obtained in a functional
Hilbert space generated for finite $N_h$.
Numerical results (points) are fitted and extrapolated according to Eq. (\ref{extr}).}
\label{extrap}
\end{figure}
Fig.~\ref{extrap} shows the extrapolation, while the resulting $D(J)$ is shown in Fig.~\ref{fit_extrap}.
\begin{figure}[h]
\includegraphics[width=0.53\textwidth]{fig4}
\caption{(Color online) Extrapolated distance between two holes D(J) for the isotropic $t$--$J$ model.}
\label{fit_extrap}
\end{figure}
Finally, the quality and the universality of the finite--size scaling is directly shown in Fig.~\ref{scaling}.
\begin{figure}[h]
\includegraphics[width=0.53\textwidth]{fig5}
\caption{(Color online) The same data as in Fig. \ref{extrap}, but rescaled according to Eq. (\ref{extr}).}
\label{scaling}
\end{figure}
One can clearly see that all data for
$J=0.04,\ldots, 0.14$ merge into a single curve, which strongly supports the assumptions behind
Eq. (\ref{extr}). This also demonstrates a self-consistency of the approach.
These results provide a solution to the long--standing open problem of whether two holes in the $t$--$J$ model form a bound state in the small $J$ limit.~\cite{twoholes,leung02}
Since $D(J) > 3.5$ for $J < 0.10$, the problem can hardly be solved by exact diagonalization on a 32--site cluster~\cite{leung02} where the maximal possible distance between two holes is four lattice spacings.
Note that the symmetry of the bound state is $p$--wave for $J \lesssim 0.15$.
Therefore, the results using the EDLFS method indicate that two holes in the single band $t$--$J$ model are always bound, which may not necessarily be the case in more general models describing the ${\rm CuO_2}$ plane.~\cite{lau11}
The most important result shown in Fig. \ref{fit_extrap} concerns the power--law dependence:
\begin{equation}
D(J) \simeq 1.93J^{-0.27},
\label{fitD}
\end{equation}
hence $D(J)$ is roughly proportional to the radius of a single--hole polaron $R(J)$ [see Eq. (\ref{white})].
Already this result suggests that the linear dimensions of polaron and bipolaron are determined
by the same mechanisms, which are properly captured by the Ising $t$--$J$ model. In order to show that
this expectation holds true we have calculated $D$ in the Ising version of the $t$--$J$ model.
Results are shown as (red) stars in Fig.~\ref{compar}.
\begin{figure}[h]
\includegraphics[width=0.47\textwidth]{fig6}
\caption{(Color online) Comparison of the distance between two holes in the $t$--$J$ (blue circles) and
Ising $t$--$J$ (red stars) models. ``Ising $t$--$J$ rescaled'' means that all values are multiplied by
1.33. Note that the rescaled values can be described by Eq. (\ref{fitD}); see also the inset, where the compounded $D(J)$ data are presented on the log--log scale.
}
\label{compar}
\end{figure}
In the same figure, results for the $t$--$J$ model are presented as (blue) dots. One can clearly see that the
distance between holes $D$ in the Ising $t$--$J$ model increases with decreasing $J$ more slowly than in the
full $t$--$J$ model. This result is not a surprise: in the isotropic
$t$--$J$ model (as well as in the $t$--$J_z$ one) a hole is able to enter the AFM
surrounding of the FM bubble. Since the boundary of the FM
bubble is unpenetrable in the Ising $t$--$J$ model, we expect a smaller average distance between holes in the latter case.
However, it turns out that the
difference has only a quantitative character, i.e., only the coefficient in Eq. (\ref{fitD}) is approx. 33\%
smaller. This agreement shows that these two methods, i.e., the EDLFS method for the full $t$--$J$ model
and the MC method for the Ising $t$--$J$ model are complementary in a sense: The applicability of
the EDLFS method is limited by the maximum size of the Hilbert space, and since the distance between holes
increases with decreasing $J$, this method cannot be used when $J$ is too small. On the other hand, the
importance of the spin--flip term in the $t$--$J$ Hamiltonian diminishes with decreasing $J$ and the
approximation that leads to the Ising $t$--$J$ model becomes more reliable in the region where the EDLFS
method cannot be applied any longer. The main advantage of the Ising $t$--$J$ model is that it can be studied
within the framework of the classical MC method on clusters sufficiently large to describe large
polarons that emerge at small $J$.
\section{FINITE HOLE DENSITY}
Up to this point we have analyzed one and two holes in the whole system. Since the size of the (bi)polaron and
its energy do not depend on the size of the lattice [provided the lattice is significantly larger than the
(bi)polaron size], these results effectively describe the case of the vanishing density of holes. Then, the
important question arises as to how the ground state of the Ising $t$--$J$ model evolves when the number of holes
increases. Possible scenarios include phase separation or homogeneous distribution of holes. It is also
possible that a single polaron becomes unstable at some critical value of the hole number
giving way to smaller polarons.
In order to study this problem we calculate the total energy of the system as a function
of the number of holes $E(N)$. Convex $E(N)$ for some $N$ indicates that it is energetically favorable
to split a FM bubble with $N$ holes into smaller bubbles with $M<N$ holes,
provided that $E(M)$ is concave. If $E(N)$ is convex for
arbitrary $N$, holes will not form polarons with more than one hole. On the other hand, if $E(N)$ is
concave for arbitrary $N$, all holes introduced into the system will gather in a single FM
region. In other words, the phase separation into a hole--rich FM region and an AFM region
without holes takes place.
Accurate determination of $E(N)$ is generally a difficult task. For $N=1$ the shape of the polaron is
almost circular, but for higher $N$ the geometry becomes nontrivial.
Fig. \ref{multipolarons} shows the Nagaoka polarons and corresponding hole wave functions for $N=2,\ldots,5$.
The shape of the polaron follows directly from the spatial structure of the occupied orbitals. The diagonal orientation of almost rectangular polarons minimizes the magnetic energy along the line between
FM and AFM regions.
\begin{figure*}[h]
\includegraphics[width=\textwidth]{fig7}
\caption{(Color online) Leftmost column: shapes of the Nagaoka polarons including from 2 to 5 holes. Filled
circles indicate spin-up lattice sites and empty circles spin-down lattice sites. The rest of the panels
show wave functions of the holes. In all cases $J=0.03$ was assumed.}
\label{multipolarons}
\end{figure*}
With increasing $N$ the size of the FM bubble increases, so a large lattice is necessary to avoid the
finite size effects. The advantage of the Ising version of the $t$--$J$ model
becomes evident,
since it can be reliably investigated on lattices much larger than those accessible to the
fully quantum methods like quantum MC, exact diagonalization or even EDLFS.
Using larger lattices we study polarons containing up to 10 holes.
In Fig. \ref{E_vs_N} we show the polaron energy $E(N)$ as a function of the number of holes $N$
for $J=0.01$.
\begin{figure}[h]
\includegraphics[width=0.53\textwidth]{fig8}
\caption{(Color online) Energy $E(N)$ of a polaron containing $N$ holes relative to the energy of the homogeneous
N\'eel state (points) for $J=0.01t.$ The full line represents a fit to the numerical data as given in the legend; the dashed line represents the analytical result of Eq.~(\ref{Etot}).}
\label{E_vs_N}
\end{figure}
Studying other values of $J$ from 0.01 to 0.1 (not shown) we fitted the energy with a function
$E(N)=a N + b\sqrt{N}$. In Sec. V.A. we justify such a form of $E(N)$. We have found that in all cases $b$ is positive,
which means that $E(N)$ is concave.
This in turn implies that the Ising version of the $t$--$J$ model displays phase separation for all
those values of $J$ at which it is still equivalent to the isotropic $t$--$J$ model.
An important question concerns the fraction of the system occupied by each of the magnetic phases. It can be answered by comparing the size of the polaron to the size of the whole system.
\begin{figure}[h]
\includegraphics[width=0.50\textwidth]{fig9}
\caption{(Color online) Fraction of the lattice occupied by the FM polaron as a function
of the concentration of holes for different values of $J$. The straight lines represent
analytical results given in Eq.~(\ref{ratio}).}
\label{R_vs_N}
\end{figure}
Fig. \ref{R_vs_N} shows
the relation between the fraction of the lattice sites with ferromagnetically aligned spins $N_p/N_s$ and the density of holes $\delta=N/N_s$.
$N_p$ is the number of lattice sites in the polaron and $N_s$ is the size of the lattice.
This dependence can be fitted by a linear function, which, extrapolated to $N_p/N_s=1$,
gives the threshold value of the hole density $\delta_t$.
If the concentration of holes is close to but still smaller
than $\delta_t$, most of the system is occupied by the FM phase,
while the rest forms an AFM island (or islands). Finally,
for concentrations larger than $\delta_t$ the whole system would be in a fully polarized state.
However, the latter regime is probably not accessible by the present approach.
In the Nagaoka regime, the Ising and isotropic $t$--$J$ models give the same results because the
physics of the Nagaoka regime is determined by the competition between the magnetic and kinetic energies.
However, as soon as the FM bubble covers the whole system other mechanisms come into play, e.g.,
a direct hole-hole interaction and/or interference of the carriers paths around loops.
\begin{figure}[h]
\includegraphics[width=0.47\textwidth]{fig10}
\caption{(Color online) Critical value of the concentration of holes above which the whole system is
in a fully polarized FM state. The point at $J=0$ is added as a result of the Nagaoka theorem. The dashed line represents the analytical result, Eq.~(\ref{deltacr}).}
\label{delta-crit}
\end{figure}
Fig. \ref{delta-crit} shows $\delta_t$ as a function of $J$ (the point $\delta_t=0,\ J=0$ is a result
of the Nagaoka theorem). The obtained square--root dependence between both quantities follows immediately
from Eq. (\ref{white}) and the proportionality $N_p \propto N$ shown in Fig. \ref{R_vs_N}.
\subsection{Analytical approach for finite doping}
We consider a FM polaron of size $N_p$ with $N$ doped holes that can be treated as spinless noninteracting fermions. The polaron is placed in a hole-depleted N\'eel spin background. We furthermore consider the limit of small hole density, which allows for a quadratic expansion of the single-particle kinetic energy:
\begin{equation}
E_{\rm kin}^{(1)}(k) = -2(\cos{k_x}+\cos{k_y})\sim -4 +k^2.
\label{ekin1}
\end{equation}
We obtain the kinetic energy of $N$ holes by integrating Eq.~(\ref{ekin1}) up to the Fermi momentum $k_F=2\sqrt{\pi N/ N_p}$:
\begin{eqnarray}
E_{\rm kin}&=& -4N + 2\pi N^2/N_p.
\end{eqnarray}
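Explicitly (an intermediate step spelled out for the reader), with spinless holes of areal density $N/N_p$ one has $N=N_p k_F^2/4\pi$ and
$$
E_{\rm kin}=\frac{N_p}{(2\pi)^2}\int_{|{\bm k}|<k_F}\!\left(-4+k^2\right)d^2k
=-4N+\frac{N_p k_F^4}{8\pi}
=-4N+\frac{2\pi N^2}{N_p}.
$$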
We proceed by writing the total energy $E(N)$ as:
\begin{eqnarray}
E(N) &=& E_{\rm kin} + E_{\rm spin},\nonumber \\
E(N) &=& -4N + 2\pi N^2/N_p + N_pJ,
\end{eqnarray}
where the last term represents the magnetic energy of the FM polaron relative to the energy of the N\' eel state. After the minimization ${\partial E/ \partial N_p}=0$ we obtain
\begin{equation}
{N_p\over N_s} = \sqrt{2\pi\over J}\delta,
\label{ratio}
\end{equation}
representing the ratio between the polaron size $N_p$ and the total size of the system $N_s$ as a function of hole doping $\delta$. The comparison of Eq.~(\ref{ratio}) with numerical data is shown in Fig.~\ref{R_vs_N}. From
Eq.~(\ref{ratio}) we obtain as well the critical doping for the transition to the FM state
\begin{equation}
\delta_t = \sqrt{J \over 2\pi},
\label{deltacr}
\end{equation}
shown along the numerical results in Fig.~\ref{delta-crit}.
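Both Eq.~(\ref{ratio}) and Eq.~(\ref{deltacr}) follow from the elementary stationarity condition (spelled out for completeness):
$$
\frac{\partial E}{\partial N_p}=-\frac{2\pi N^2}{N_p^2}+J=0
\;\Rightarrow\;
N_p=\sqrt{\frac{2\pi}{J}}\,N,
$$
which, divided by $N_s$, gives Eq.~(\ref{ratio}); setting $N_p=N_s$ gives Eq.~(\ref{deltacr}).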
Notice that Eq. (\ref{deltacr}) agrees with that derived within the semiclassical calculations of the 2D
isotropic $t$--$J$ model\cite{eisenberg} which
suggest that at small hole concentration and rather weak AFM coupling the
FM Nagaoka state becomes unstable towards a creation of an AFM bubble.
This agreement is quite natural, since the spins are considered to be frozen
in both the classical large-spin limit of the isotropic $t$--$J$ model and
its full Ising version.
Finally, we obtain the total energy of the system
\begin{equation}
E(N) = -4t\left(1-\sqrt{\pi J\over 2}\right)N.
\label{Etot}
\end{equation}
The comparison with numerical results is shown in Fig.~\ref{E_vs_N}.
Here we have neglected the effects along the line separating the FM polaron from the N\'eel
spin background, as well as the dependence of the kinetic energy on the shape of
the bubble. As a result, only the linear term in $E(N)$ is reproduced.
Since the phase separation is determined by the nonlinear part of $E(N)$,
these effects give rise to small, nevertheless important, corrections.
The former is proportional to the length of the borderline ($\propto \sqrt{N}$) and
motivated the choice of the fitting function in Fig. \ref{E_vs_N}.
\section{Summary}
The main difficulty in analyzing the $t$--$J$ model in the small--$J$ limit is that a large
size of the lattice is required to correctly describe the dynamics of holes.
This requirement significantly restricts
the applicability of numerical approaches like the quantum MC or exact
diagonalization method.
In the small--$J$ regime, however, the holes are confined in a FM polaron,
so that the spin--flip processes are strongly reduced.
This justifies the applicability of the Ising version of the $t$--$J$
model to study the small--$J$ limit of the original $t$--$J$ model.
For small but finite values of $J$, our results for one and two holes
are in good agreement with those obtained within the fully quantum approaches
(DMRG, EDLFS). However, we are able to extend our calculations to the regimes
of smaller $J$ and larger number of holes inaccessible by the former methods.
We show that it is energetically favorable
for the system to segregate
into the FM hole--rich phase and hole--depleted AFM phase. The size (surface)
of the FM bubble depends linearly on the number of holes while its dependence on $J$ is given by
the square--root function. With increasing concentration of holes and/or with decreasing $J$ the size
of the FM polaron increases and eventually for
$\delta_t\simeq 0.44 \sqrt{J}$
it occupies the whole lattice.
Our numerical results thus suggest that the Nagaoka state breaks down by forming an AFM bubble.
This observation fully agrees with a conjecture discussed earlier within the isotropic $t$--$J$ model.\cite{eisenberg} We, however, expect that
the results obtained for isotropic and Ising $t$--$J$ models start to deviate from each other
when doping becomes larger than the threshold density $\delta_t$, even for
$J \ll 1$.
A rather simple analytic treatment of the holes doped into a FM polaron, placed in a N\'eel, hole-depleted spin background, leads to good agreement with the numerical data. Among other results, it provides a simple expression for the threshold density
$\delta_t=\sqrt{J/2\pi}$.
The theory reproduces only the linear dependence of the total energy on the number of holes
and does not provide information on whether the system phase separates.
Corrections
(e.g., due to line contributions) are expected to give rise to a positive $\sqrt{N}$ term in the total energy, as obtained from the numerical data.
\acknowledgments
M.M.M. acknowledges support from the Foundation for Polish Science under the ``TEAM'' program for the years
2011-2014. M.M. and M.M.M. acknowledge support under Grant No. N N202 052940 from the Ministry of Science and Higher Education
(Poland). J.B. and L.V. acknowledge support under Grant No. P1-0044 from ARRS (Slovenia). J.B. acknowledges the Gordon Godfrey bequest of UNSW, Sydney (Australia) where part of this work has been performed.
\newcommand{\acknowledgments}{
\section*{Acknowledgments}}
\newcommand{\references}{
\section*{References}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\Reof}[1]{\mathfrak{Re}(#1)}
\newcommand{\Imof}[1]{\mathfrak{Im}(#1)}
\newcommand{\modulo}[1]{\quad (\textrm{mod}\ #1 )}
\newcommand{\atopp}[2]{\genfrac{}{}{0pt}{}{#1}{#2}}
\newcommand{\eval}[1]{\left\langle {#1} \right\rangle}
\newcommand{\leval}[1]{\langle {#1} \rangle}
\newcommand{\reval}[1]{\overline{#1}}
\newcommand{\dx}[1] {\mathrm{d}{#1}}
\newcommand{\deenne}[2]{\frac{\partial^#2}{\partial #1 ^#2}}
\newcommand{\vett}[1]{#1}
\newcommand{\tinyfrac}[2] {\genfrac{}{}{}{1}{#1}{#2} }
\newcommand{\Lfrac}[2] {\genfrac{}{}{}{0}{#1}{#2} }
\newtheorem{ansatz}{Ansatz}[section]
\newtheorem{theor}{Theorem}
\newtheorem{coroll}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{rem}{Remark}
\begin{document}
\begin{center}
\par\leavevmode\hbox {\it PACS: 89.80.+h, 75.10.Nr} (old ones)
\Large
{Disordered Systems, Spanning Trees and SLE}
\normalsize
Davide Fichera\\
{Universit\`a degli Studi di Milano - Dip. di Fisica and INFN,
\\ via Celoria 16, I-20133 Milano
\\ \noindent
\begin{tabular}{cl}
Mail address:& \tt{David{}e.Fiche{}ra@mi.in{}fn.it}
\end{tabular}
}
\date{\today}
\end{center}
\begin{abstract}
We define a minimization problem for paths on planar graphs that, on the
honeycomb lattice, is equivalent to the exploration path of critical
site percolation and thus has the same scaling limit as $\mathrm{SLE}_6$.
We numerically study this model (testing several SLE properties on other
lattices and with different boundary conditions) and state it in terms of
spanning trees. This statement of the problem allows the definition of a
random growth process for trees
on two dimensional graphs such that SLE is recovered as a special choice of
boundary conditions.
\end{abstract}
\noindent
{\it Keywords: Domain Walls, SLE, Combinatorial Optimization, Matching
Prob\-lem, Spin Glasses,
Spanning Trees }
\section{Introduction}
\label{sec.intro}
Recently, some efforts have been made to relate minimal paths in
Evolution (SLE) is a stochastic process that
describes the growth of random curves in simply connected two
dimensional
domains; for a review see \cite{BauerBernard} and references therein.},
see for example
the boundary walls in Ising Spin Glasses \cite{Hartmann-Amoruso}
\cite{BernardLeDoussal}.
In this draft a minimization problem on paths equivalent to $\mathrm{SLE}_6$
is introduced on the two dimensional honeycomb lattice.
This model is presented in a more general framework (the one of
spanning trees) and, in order to understand the origin of conformal
invariance and Markov property, it is studied in some detail also on
other lattices and with different boundary conditions.
Such an approach is interesting not only for investigating the possible
relation of disordered systems with SLE, but also because it could
give us deeper
insight into two dimensional disordered systems.
\section{The model}
\label{sec:themodel}
We are given a planar two dimensional lattice $\mathcal{G}$ and a set of real
weights $\omega(f)$ on its faces (plaquettes).
Any edge $e \equiv (i,j)$, linking two vertices $i$ and $j$, is adjacent to two
plaquettes, $f_{i,j}$ and $f_{j,i}$ (unless $e$ is an edge on the border of
$\mathcal{G}$). For a given set $\{\omega (f)\}$ and a fixed threshold
$\theta$, we associate to each edge the weight
\begin{equation}
\label{eqn:makeweights}
W_{i,j} = [\omega(f_{i,j}) -\theta] \cdot [\omega(f_{j,i}) - \theta].
\end{equation}
Let $(\mathcal{G},W)$ denote the graph with so defined weights on the edges.
Given a path $\gamma$ of length $N(\gamma) = |\gamma|$ with endpoints $i_0$
and $i_N$, we associate to this path its list of edge weights sorted in
decreasing order:
$\vec{W} (\gamma) = \mathrm{sort}(\{W_e\}_{e \in \gamma}) = (W_1 (\gamma), \dots ,
W_{N(\gamma)} (\gamma))$.
We define the order relation "$<$" among paths as follows:
$\gamma_1 < \gamma_2$ if
\begin{itemize}
\item there exists $k$ such that $W_j (\gamma_1) = W_j (\gamma_2)\; \forall j<k$
and $W_k (\gamma_1) < W_k (\gamma_2)$; or
\item $N(\gamma_1) < N(\gamma_2)$ and $W_j (\gamma_1) = W_j (\gamma_2)
\; \forall j \leq N(\gamma_1)$.
\end{itemize}
With this definition, either $\gamma_1 \equiv \gamma_2$ or $\gamma_1
\lessgtr \gamma_2$, i.e.\ we have a total order provided $W_e \neq
W_{e^\prime}$ for $e \neq e^\prime$.
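As an illustration, the order relation just defined can be sketched in a few
lines of Python (a hypothetical helper; the names and the list encoding of the
edge weights are ours, not the paper's):

```python
# Compare two paths by the order relation "<" of the text: sort the edge
# weights of each path in decreasing order and compare lexicographically;
# on a tie over the common prefix, the shorter path is the smaller one.
def weight_vector(path_weights):
    """The vector W(gamma): edge weights sorted in decreasing order."""
    return sorted(path_weights, reverse=True)

def path_less(weights1, weights2):
    """True iff gamma_1 < gamma_2 under the order relation of the text."""
    v1, v2 = weight_vector(weights1), weight_vector(weights2)
    for a, b in zip(v1, v2):
        if a != b:
            return a < b          # first differing entry decides
    return len(v1) < len(v2)      # equal common prefix: shorter path wins
```

With all edge weights distinct, `path_less` realizes the total order used
throughout this section.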
Notice that for each pair $\gamma_1 < \gamma_2$ there exists a $\beta$ such that
$\forall \beta^\prime \geq \beta$, $\sum_{e\in \gamma_1} \exp(\beta^\prime W_e)
< \sum_{e\in \gamma_2} \exp(\beta^\prime W_e)$, so that the function $f_\beta
(\gamma):= \sum_{e\in\gamma} \exp(\beta W_e)$, in the large $\beta$ limit,
is an additive cost function
($f_\beta (\gamma_1 \cup \gamma_2) = f_\beta (\gamma_1) + f_\beta (\gamma_2)$)
which reproduces our order relation.
With an abuse of language, the word cost will also be used for
the first entry $W_1 (\gamma)$ of the vector $\vec{W}(\gamma)$.
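The claim that the exponential cost reproduces the order for large $\beta$ can
be spot-checked numerically; a small sketch under our own choice of weights
($\beta$ must not be so large that floating-point precision swallows the
subleading terms):

```python
import math

def f_beta(weights, beta):
    """Additive cost f_beta(gamma) = sum_e exp(beta * W_e)."""
    return sum(math.exp(beta * w) for w in weights)

# gamma_1 < gamma_2: both have top weight 3.0, second entries 1.0 < 2.0,
# and for beta large enough the additive cost orders them the same way.
w1, w2 = [3.0, 1.0], [3.0, 2.0]
```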
\subsection{Some Remarks}
First of all, notice that the optimal path connecting two vertices is always
a simple path. Call $p_{i,j}$ the optimal path connecting $i$ to $j$. Simple
reasoning shows that $\forall i,j,k,l$ (possibly coincident), $p_{i,j} \cup
p_{k,l}$ cannot contain any cycle (as happens for every additive cost
function without negative-cost loops). Fur\-ther\-more, for our cost function,
the union $T := \cup_{i,j \in V} p_{i,j}$ of all the optimal paths is a tree.
This tree, spanning by definition, is also the one which minimizes the global
cost function $\mathcal{H}(T) = \sum_{e\in T} W_e$ over the ensemble of
spanning trees; it is the Minimum Spanning Tree\footnote{
The Minimum Spanning Tree of an arbitrary weighted graph is the loopless
cover of the graph $\mathcal{G}$ that covers every vertex and minimizes the
sum of the weights on the edges.
}
of $(\mathcal{G}, W)$, as one sees by analysing Prim's algorithm (cf.\
App.~\ref{appendix:MST}).
All of this holds for a generic $(\mathcal{G},W)$, but it crucially relies
on our choice of order relation (and thus of optimality for the $p_{i,j}$).
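Since the construction leans on Prim's algorithm (the appendix is not
reproduced in this chunk), a minimal self-contained sketch may help; the
adjacency-list encoding and all names are ours, not the paper's:

```python
# Prim's algorithm: grow a tree from `start`, always adding the cheapest
# edge leaving the set of visited vertices.
import heapq

def prim_mst(graph, start):
    """graph: {node: [(weight, neighbour), ...]}; returns the MST edges."""
    visited = {start}
    frontier = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(frontier)
    tree = []
    while frontier:
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                      # edge would close a cycle, discard
        visited.add(v)
        tree.append((w, u, v))
        for w2, v2 in graph[v]:
            if v2 not in visited:
                heapq.heappush(frontier, (w2, v, v2))
    return tree
```

The locality alluded to in the text is visible here: each step only inspects
edges incident to the current tree.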
Summarizing: given a planar lattice with arbitrary weights on the plaquettes,
we introduced weights on the edges so as to obtain a graph
with weighted edges $(\mathcal{G},W)$, a cost function associated to each path,
and an order relation on the set of all paths.
We stress the fact that the union of all the optimal paths is the MST
for $(\mathcal{G},W)$. Now we can specialize to a set of planar graphs
(we will consider only rectangular domains) and to a probability measure
for the weights on the plaquettes (we will only consider i.i.d. weights
on the plaquettes).
Consider now a simply connected
two dimensional domain (e.g. a square) covered with a honeycomb lattice.
Extract the weights $\omega(\mathcal{G})$ from the uniform dis\-tri\-bu\-tion
$\chi_{[0,1]}$ and let the threshold be $\theta = 0.5$.
Fix two different edges on the boundary: $s$ (start) and $t$ (end).
Constrain all
the boundary plaquettes on the right of $s$ and $t$ to
have a weight larger than the threshold and all the other boundary plaquettes
to have a weight smaller than the threshold; then the path
starting in $s$ and ending in $t$ has cost less than $0$
and is exactly the boundary wall of the percolation process on the
lattice with weights $\omega(\mathcal{G})$ and threshold $0.5$.
Then the measure of the optimal paths $p_{s,t}$ is the same as that of the
critical site percolation exploration path on the honeycomb lattice,
that is (see \cite{Camia}), in the thermodynamic limit (infinitesimal
lattice spacing), the SLE measure with $\kappa =6$.
\subsection{Motivations}
In the study of domain walls in disordered systems it happens that the
union of
the domain walls is a tree; this is the case, for example, in Ising Spin
Glasses for domain walls constrained to start from a fixed point, and it is
also the case for the boundary given by the symmetric difference of suitable
matchings on planar graphs \cite{FicheraSportiello}.
Such ubiquity suggests searching for a disordered model that not only
reproduces the SLE probability distribution, but is also such
that the union of the optimal paths is a tree; moreover, both for its
mathematical properties and for numerical reasons, we wanted this tree
to be easy to find.
One of the properties of the MST that makes it easy to find (in a
computational sense) is a locality property (see App.~\ref{appendix:MST})
similar to the locality property of SLE for $\kappa = 6$.
\section{Results}
\label{sec:Results}
\subsection{The Samples}
We simulated numerically rectangular samples of sizes ranging
from $32\times 32$ to $1024\times 1024$.
For clarity, suppose that the four
vertices of the rectangle are $(0,0)$, $(a,0)$, $(0,b)$ and $(a,b)$. Let
$s = (a/2 , 0)$ be the midpoint of the bottom edge of the rectangle
and $t = (a/2 , b)$ be the midpoint of the top edge. We will consider
four different boundary conditions and a slightly different model:
\begin{itemize}
\item $Free$: the weights on the boundary plaquettes are extracted as
the bulk ones.
\item $SLE-like$: the boundary plaquettes on the left of
$s$ and $t$ are constrained to have a weight higher than
$\theta$ and the other boundary plaquettes are constrained to have a weight
lower than $\theta$.
\item $SLE/Free$: the boundary plaquettes on the right edge of the rectangle
are constrained to have a weight higher than $\theta$ and the ones on the left
edge are constrained to have a weight lower than $\theta$. The weights on the
top edge and on the bottom edge are unconstrained (free).
\item $Repulsive$: All the boundary plaquettes are constrained to have a
weight higher than the threshold $\theta$.
\item $Random$: the weights on the edges are i.i.d. variables. Remark that
this is not a peculiar choice of boundary conditions for our model: it is
another model, the random MST model. We study this random measure on the
edge weights just for comparison with this well known model.
\end{itemize}
In order to study systems at criticality we mainly concentrate on
$\theta$ equal to the percolation threshold for site percolation ($0.5$ on
the honeycomb lattice, $0.5927463$ on the square one). On the square lattice
some boundary
conditions ($SLE-like$, $SLE/Free$) break the left-right symmetry because
the critical threshold differs from $0.5$, so we also simulated our model
on the square lattice with $\theta = 0.5$. We expected a trivial limit
for these paths; the fact that we did not observe one means that the scaling
limit was not reached by the numerical simulations.
\subsection{Observables}
\label{sec:Observables}
We measured the fractal dimension of the paths. We know that the
fractal dimension of SLE traces is linked to the parameter $\kappa$ by
the relation $d = 1 + \kappa /8 $.
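The relation $d = 1 + \kappa/8$ suggests a quick two-size estimate of the
fractal dimension from path-length data, since the number of steps scales as
$N \sim L^d$ with the linear size $L$; a sketch with made-up numbers, not the
paper's data:

```python
import math

def fractal_dim(L1, N1, L2, N2):
    """Estimate d from N ~ L^d measured at two linear sizes L1, L2."""
    return math.log(N2 / N1) / math.log(L2 / L1)

# An SLE_6 trace has d = 1 + 6/8 = 1.75, so synthetic data N = L**1.75
# must give back d = 1.75 (and hence kappa = 8*(d - 1) = 6):
d = fractal_dim(64, 64 ** 1.75, 128, 128 ** 1.75)
```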
We measured the left-passage probability which, because of dilation
invariance, has to be a function of the angular coordinate
on the half plane alone. Schramm's formula (see~\cite{Schramm}) links the
shape of the
left-passage probability to the parameter $\kappa$. As we observed (see
\cite{FicheraSportiello}), it can happen, for disordered systems,
that the value of $\kappa$ found from the left-passage probability and from
the fractal dimension are not compatible, indicating that SLE and minimizing
paths are not equal in measure.
As is usual in the literature (\cite{Hartmann-Amoruso},
\cite{BernardLeDoussal}) we consider both the path starting in
$s$ and ending in $t$ and the optimal path among
the ones starting on the bottom and ending on the top. Notice that, for
$SLE-like$ boundary conditions, the two paths coincide: the optimal path
connecting the bottom to the top is also the one starting in $s$ and
ending in $t$.
If we want to compare the left-passage probability of paths on rectangles
with Schramm's formula we have to transform the domain into the half
plane. For the path starting in $s$ and ending in $t$ we choose the
conformal transformation that
maps $(a/2 , 0)$ to $(0,0)$, $(a/2,b)$ to $\infty$, $(0,0)$ to $(-1,0)$ and
$(a,0)$ to $(1,0)$. For the path connecting the top to the bottom we consider
the conformal transformation that maps the rectangle to the half annulus
in such a way that the vertices of the rectangle are sent to the corners of
the half annulus and $(0,b/2)$, $(a,b/2)$ are sent respectively to $(-1,0)$
and $(1,0)$. This transformation sends the rectangle to the half plane only in
the limit $b/a \rightarrow \infty$, the limit considered in
\cite{BernardLeDoussal} to study the horizontal displacement. For $b/a
< \infty$ boundary effects are observed at the top and at the bottom.
The horizontal displacement $\Delta x$ is the difference between the
abscissae of the starting and ending points of the optimal path connecting the
top of the rectangle to the bottom. We measured the average value
of ${\Delta x}^2$, with $\Delta x$ expressed in units of $a$, the horizontal
length of the rectangle, so that $\Delta x \in [-1,1]$ for every path.
\subsubsection{Fractal Dimension}
The fractal dimension of the curves is measured by comparing the
number of steps of the paths on lattices of different sizes. Once the
boundary conditions are fixed, the fractal dimension is independent of
the path considered.
\begin{center}
\begin{tabular}{|r||c|c|}
\hline
& Square & Honeycomb \\
\hline
\hline
$Free$ & 1.21 $\pm$ 0.01 & 1.75 $\pm$ 0.01 \\
$SLE-like$ & 1.20 $\pm$ 0.01 & 1.75 $\pm$ 0.01 \\
$SLE/Free$ & 1.22 $\pm$ 0.01 & 1.75 $\pm$ 0.01 \\
$Repulsive$ & 1.22 $\pm$ 0.01 & 1.75 $\pm$ 0.01 \\
$Random$ & 1.21 $\pm$ 0.01 & 1.22 $\pm$ 0.01 \\
\hline
\end{tabular}
\end{center}
\subsubsection{Left-Passage Probability}
Left-passage probabilities (the probability for a point in the domain to
lie to the left or to the right of the path) have been measured in
rectangular domains. For the path with ends in $s$ and $t$ we
transformed the domain to the half plane to compare the measured
probability (over $10^5$ samples) with Schramm's formula:
$$
1/2 + \frac{\Gamma(\frac{4}{\kappa})}{\sqrt{\pi} \Gamma
(\frac{8-\kappa}{2\kappa})} \tan t \cdot \phantom{}_2 F_1
\left[ \frac{1}{2} , \frac{4}{\kappa}, \frac{3}{2}, -\tan^2 (t)\right]
$$
where $t$ is the angle subtended between the ray in $s$ and the real axis.
For the paths with free ends we compared the measured probabilities with the
formula via the identification of $x$ (the coordinate on the rectangle) with
the angle $t$ on the half plane.
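Numerically, the quoted formula is easy to evaluate; a self-contained sketch
(our transcription, using the Gauss series for $_2F_1$, which converges for
the arguments needed here):

```python
import math

def hyp2f1(a, b, c, z, terms=80):
    """Gauss hypergeometric series 2F1(a, b; c; z), valid for |z| < 1."""
    s, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += term
    return s

def schramm_left_passage(t, kappa):
    """Schramm's formula as quoted in the text, t measured from the real axis."""
    c = math.gamma(4.0 / kappa) / (
        math.sqrt(math.pi) * math.gamma((8.0 - kappa) / (2.0 * kappa)))
    x = math.tan(t)
    return 0.5 + c * x * hyp2f1(0.5, 4.0 / kappa, 1.5, -x * x)
```

Two immediate checks: at $t=0$ the probability is $1/2$, and by the oddness of
the tangent $P(t) + P(-t) = 1$.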
\begin{center}
\begin{tabular}{|r||c|c|}
\hline
Path $s \rightarrow t$ & Square & Honeycomb \\
\hline
\hline
$Free$ & 2.8 $\pm$ 0.1 & 2.7 $\pm$ 0.1 \\
$SLE-like$ & XXX & 6.0 $\pm$ 0.1 \\
$SLE/Free$ & XXX & XXX \\
$Repulsive$ & XXX & XXX \\
$Random$ & 2.8 $\pm$ 0.1 & 2.8 $\pm$ 0.1 \\
\hline
\hline
Optimal path & Square & Honeycomb \\
\hline
\hline
$Free$ & 3.2 $\pm$ 0.1 & 2.9 $\pm$ 0.1 \\
$SLE/Free$ & XXX & XXX \\
$Repulsive$ & 5.9 $\pm$ 0.1 & XXX \\
$Random$ & 3.2 $\pm$ 0.1 & 3.2 $\pm$ 0.1 \\
\hline
\end{tabular}
\end{center}
The entries marked with XXX correspond to measured left passage probabilities
that do not fit with Schramm's formula for any value of $\kappa$.
Notice that the critical model (on both lattices) has compatible
values of $\kappa$ for $Free$ and $Random$ boundary
conditions, but a very different value for $Repulsive$ boundary conditions.
\subsubsection{Horizontal Displacement}
We observe that when the height of the rectangle is larger than the
base, the positions of the starting point and of the
ending point are uncorrelated. As a consequence the average value
of ${\Delta x}^2$
is constant for $b \gg a$ and converges to a value $\langle {\Delta x}^2
\rangle$. For
$b \ll a$ we measure the exponent $l$ in $\Delta x (b/a) \sim (b/a)^l$.
\\
\begin{center}
\begin{tabular}{|r||c|c|}
\hline
Square & $\langle {\Delta x}^2 \rangle$ & l \\
\hline
\hline
$Free$ & 0.134 $\pm$ 0.05 & 2.07 $\pm$ 0.03 \\
$SLE/Free$ & 0.126 $\pm$ 0.002 & 2.10 $\pm$ 0.16 \\
$Repulsive$ & 0.24 $\pm$ 0.01 & 2.13 $\pm$ 0.09 \\
$Random$ & 0.128 $\pm$ 0.005 & 2.14 $\pm$ 0.26 \\
\hline
\hline
Honeycomb & $\langle {\Delta x}^2 \rangle$ & l \\
\hline
\hline
$Free$ & 0.104 $\pm$ 0.001 & 2.22 $\pm$ 0.14 \\
$SLE/Free$ & 0.190 $\pm$ 0.01 & 2.05 $\pm$ 0.02 \\
$Repulsive$ & 0.190 $\pm$ 0.01 & 2.05 $\pm$ 0.12 \\
$Random$ & 0.129 $\pm$ 0.005 & 2.24 $\pm$ 0.19 \\
\hline
\end{tabular}
\end{center}
\subsubsection{Conformal Invariance of the Trees}
We have investigated the conformal invariance of the Minimum Spanning Tree.
As we know \cite{Wilson}, conformal
invariance does not hold for the Random Spanning Tree. To test conformal
invariance we measure
the probability distribution of the triple point $T$ on the square.
The triple point is defined as the unique site in the tree connected
to $(0,0)$, $(0,b)$ and $(a,0)$ by
three paths with null pairwise intersection.
We transform the rectangle conformally into a disk so as to map the
points $(0,0)$, $(0,b)$, $(a,0)$ to the vertices of an equilateral triangle
inscribed in the disk. If conformal invariance holds for the tree
(as it does, for instance, for uniform spanning trees), the
transformed probability distribution should be invariant under rotations
of $2\pi /3$ of the disk.
This test has been done for all the models with boundary conditions that do
not break conformal invariance ($Free$ and $Repulsive$) and has shown that
conformal invariance does not hold for the trees we defined.
\section{Conclusions and Perspectives}
\label{sec:conclusions}
Several surprising facts emerged from the numerical simulations. The fractal
dimension of the paths is independent of the boundary conditions, but it
depends dramatically on the kind of lattice. The left-passage probability,
even on the honeycomb lattice, is not compatible with the fractal dimension
for any choice of boundary conditions different from the standard one;
nevertheless, on the square lattice with $Repulsive$ boundary conditions and
at the critical percolation threshold, the left-passage probability
obtained is consistent with $\kappa = 6$. These
facts are not well understood and need more investigation, also on
different lattices.
Given two vertices $i$, $j$, we say that they are connected if the
cost of the minimal path between $i$ and $j$ is less than $0$.
The behaviour of the connection probability could be studied both
numerically and theoretically using CFT's tools, as in \cite{Ziff} for
critical percolation.
The structure of the connected domains is better understood in the
scheme of Kruskal's algorithm (see \ref{appendix:MST}).
The SLE boundary conditions are peculiar because all the boundary sites
but two are disconnected.
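This notion of connectivity has a simple Kruskal-style sketch: since the cost
of a path is its largest edge weight, two vertices are connected exactly when
some path joins them using only negative-weight edges, which a union-find over
those edges detects (a hypothetical illustration, not the paper's code):

```python
# Merge components along all edges of negative weight; two vertices are
# "connected" in the text's sense iff they end up in the same component.
class UnionFind:
    def __init__(self, nodes):
        self.parent = {n: n for n in nodes}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def negative_cost_domains(nodes, edges):
    """edges: iterable of (weight, u, v); merge only negative-weight edges."""
    uf = UnionFind(nodes)
    for w, u, v in sorted(edges):          # Kruskal order: increasing weight
        if w < 0:
            uf.union(u, v)
    return uf
```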
It is possible to define a growth process for trees in the scheme
of Prim's algorithm: one starts growing the tree from a
starting point on the boundary and progressively increases
it with Prim's algorithm. This defines a growth process for spanning
trees. It would be interesting to understand whether, after
reparametrizing time so that the rate of increase of the
capacity is constant, the continuum limit of this evolution process
makes sense.
Notice that $\mathrm{SLE}_6$ is recovered as the growth of the tree with
suitable boundary conditions.
This way of defining growth processes for trees such that, with suitable
boundary conditions, $\mathrm{SLE}_6$ is recovered could easily be generalized
to other spin models.
In fact, given a spin configuration extracted with the Gibbs measure and the
usual boundary conditions forcing a boundary wall to exist starting in $s$
and ending in $t$, we
need only associate a weight larger than $\theta$ to sites with spin up
and a weight smaller than $\theta$ to sites with spin down. Then the minimum
spanning tree on the honeycomb lattice with the weights induced by equation
(\ref{eqn:makeweights}) will contain by construction the boundary
between up spins and down spins starting in $s$ and ending in $t$.
In this draft we studied only the optimal spanning tree; it could also be
interesting to study the low temperature behaviour, i.e.\ the almost
optimal trees.
One could investigate the stability of the walks under perturbations
of the in\-stance. This is not a very hard task when working in the scheme of
Kruskal's algorithm, thanks to the MST properties.
\section{Introduction}
In quantum mechanics, a quantum system is
associated with a separable complex Hilbert space $H$, i.e.,
the state space. A quantum state is described as a density operator
$\rho\in{\mathcal T}(H)\subseteq{\mathcal B}(H)$ which is positive
and has trace 1, where ${\mathcal B}(H)$ and ${\mathcal T}(H)$
denote the von Neumann algebras of all bounded linear operators and
the trace-class of all operators $T$ with $\|T\|_{\rm Tr}={\rm
Tr}((T^\dagger T)^{\frac{1}{2}})<\infty$, respectively. $\rho$ is a
pure state if $\rho^2=\rho$; $\rho$ is a mixed state if
$\rho^2\not=\rho$. Let us denote by ${\mathcal S}(H)$ the set of all
states acting on $H$.
Recall also that the fidelity of states $\rho$ and $\sigma$ in
${\mathcal S}(H)$ is defined to be
$$ F(\rho,\sigma )={\rm Tr} \sqrt{\rho^{1/2}\sigma\rho^{1/2}}.\eqno(1.1)$$
Fidelity is a very useful measure of closeness between two states
and has several nice properties, including Uhlmann's theorem.
Uhlmann and co-workers developed Eq.(1.1) from the transition
probability in the more general context of the representation theory
of C*-algebras \cite{U,A,AU}. The result in \cite{U} (see also
\cite{U2}) implies that, if $\dim H<\infty$, then the equality
$$F(\rho,\sigma )=\max |\langle\psi|\phi\rangle|, \eqno(1.2)$$ holds, where the
maximization is over all purifications $|\psi\rangle$ of $\rho$ and
$|\phi\rangle$ of $\sigma$ into a larger system $H\otimes H$.
This result is referred to as Uhlmann's theorem. Eq.(1.2)
does not provide a calculation tool for evaluating the fidelity, as
does Eq.(1.1). However, in many instances, the properties of the
fidelity are more easily deduced using Eq.(1.2) than Eq.(1.1). For
example, Eq.(1.2) makes it clear that $0\leq
F(\rho,\sigma)=F(\sigma,\rho)\leq 1$; $F(\rho,\sigma)=1$ if and only
if $\rho=\sigma$.
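These properties are easy to verify numerically in finite dimension, which
also serves as a sanity check on Eq.(1.1); a sketch (ours, not the paper's),
with the positive square root taken via the spectral decomposition:

```python
import numpy as np

def psd_sqrt(m):
    """Positive square root of a positive semidefinite matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = Tr sqrt(rho^{1/2} sigma rho^{1/2}), Eq.(1.1)."""
    s = psd_sqrt(rho)
    return np.trace(psd_sqrt(s @ sigma @ s)).real

def random_state(d, rng):
    """A random full-rank density matrix on C^d."""
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real
```

One checks numerically that $F(\rho,\rho)=1$, $F(\rho,\sigma)=F(\sigma,\rho)$
and $0\leq F\leq 1$, exactly as stated above.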
In \cite{J}, Jozsa presented an elementary proof of Uhlmann's
theorem without involving the representation theory of C*-algebras.
In this paper we will consider the fidelity of states in infinite
dimensional systems, give an elementary proof of the infinite
dimensional version of Uhlmann's theorem, and then apply it to
generalize several properties of the fidelity from the finite
dimensional case to the infinite dimensional case. Of course, not all
results for the finite dimensional case can be generalized fully to the
infinite dimensional case. For example, in the finite dimensional
case, it is known that $F(\rho,\sigma)=\min_{\{E_m\}} F(p_m,q_m),$
where the minimum is over all POVMs (positive operator-valued
measures) $\{E_m\}$, and $p_m={\rm Tr}(\rho E_m)$, $q_m={\rm
Tr}(\sigma E_m)$ are the probability distributions for $\rho$ and
$\sigma$ corresponding to the POVM $\{E_m\}$. However, this is not
true in the infinite dimensional case. What we have is that
$F(\rho,\sigma)=\inf_{\{E_m\}} F(p_m,q_m)$. The infimum is attained as a
minimum if and only if $\rho$ and $\sigma$ meet a certain condition.
Let $H$ be a complex Hilbert space, $A\in{\mathcal B}(H)$ and
$T\in{\mathcal T}(H)$. It is well known from the operator theory
that $|{\rm Tr}(AT)|\leq \|AT\|_{\rm Tr}\leq \|A\|\|T\|_{\rm Tr}$.
This fact will be used frequently in this paper.
\section{Infinite dimensional version of the Uhlmann's theorem and an elementary proof}
Recall that an operator $V\in{\mathcal B}(H)$ is called an isometry
if $V^\dag V=I$, and a co-isometry if $VV^\dag =I$. If $\dim
H=\infty$ and $T\in{\mathcal B}(H)$, then, by the polar
decomposition, there exists an isometry or a co-isometry $V$ such
that $T=V|T|$, where $|T|=(T^\dag T)^{1/2}$. In general, $V$
need not be unitary. In fact, there exists a unitary operator $U$
such that $T=U|T|$ if and only if $\dim \ker T=\dim\ker T^\dag$.
However, the following lemma says that this is the case if $T$ is a
product of two positive operators.
{\bf Lemma 2.1.} {\it Let $H$ be a Hilbert space and
$A,B\in{\mathcal B}(H)$. If $A\geq 0$ and $B\geq 0$, then there
exists a unitary operator $V\in{\mathcal B}(H)$ such that
$AB=V|AB|$.}
{\bf Proof.} We need only show that $\dim\ker AB=\dim\ker BA$ when
both $A$ and $B$ are positive operators.
Note that, since $A\geq 0$ and $B\geq 0$, we have
$$\ker AB=\ker B\oplus \ker A\cap (\ker B)^\bot \eqno(2.1)$$
and
$$\ker BA=\ker A\oplus \ker B\cap (\ker A)^\bot .\eqno(2.2)$$
Obviously, if $\dim \ker A=\dim\ker B=\infty$, then $\dim\ker
AB=\dim\ker BA=\infty$; if $A$ (or $B$) is injective, then $\dim\ker
AB=\dim\ker BA=\dim\ker B$ (or $\dim\ker AB=\dim\ker BA=\dim\ker
A$).
Assume that $\dim\ker A<\infty$ and $\dim\ker B=\infty$. By
Eqs.(2.1)-(2.2) we need only to check that $\dim \ker B\cap (\ker
A)^\bot =\infty$. This is equivalent to show the following
assertion.
{\bf Assertion.} If $B\geq 0$ and $\dim\ker B=\infty$, then, for
any subspace $M\subset H$ with $\dim M^\bot< \infty$, $\dim\ker
P_MBP_M|_M=\infty$.
In fact, by the space decomposition $H=M\oplus M^\bot$, we may write
$B=\left(\begin{array}{cc}B_{11}
&B_{12}\\B_{12}^\dag&B_{22}\end{array}\right),$ where
$B_{11}=P_MBP_M|_M$. Since $B\geq 0$, there exists some contractive
operator $D$ such that $B_{12}=B_{11}^{1/2}DB_{22}^{1/2}$ (for
example, see \cite{H1}). Thus
$$\ker B=\ker B_{11}\oplus \ker B_{22}\oplus L,$$
where $$L= \{|x\rangle\oplus |y\rangle : |x\rangle\in (\ker
B_{11})^\bot, \ |y\rangle\in(\ker B_{22})^\bot,\
B_{11}|x\rangle+B_{12}|y\rangle=0\ \mbox {and}\ B_{12}^\dag
|x\rangle+B_{22}|y\rangle=0\}.$$ Since $\dim\ker B_{22}<\infty$
and $\dim L\leq \dim (\ker B_{22})^\bot<\infty$, while $\dim\ker B=\infty$,
we must have
$\dim\ker B_{11}=\infty$.
Finally, assume that both $\ker A$ and $\ker B$ are finite
dimensional. With respect to the space decomposition $H=(\ker
A)^\bot\oplus\ker A$, we have
$$A=\left(\begin{array}{cc} A_1 &0\\0&0\end{array}\right) \quad \mbox{and}\quad
B=\left(\begin{array}{cc}B_{11}
&B_{12}\\B_{12}^\dag&B_{22}\end{array}\right),$$ where $A_1$ is
injective with dense range. From
$$AB=\left(\begin{array}{cc} A_1B_{11} &A_1B_{12}\\0&0\end{array}\right) \quad \mbox{and}\quad
BA=\left(\begin{array}{cc}B_{11}A_1 &0\\B_{12}^\dag
A_1&0\end{array}\right),$$ we see that
$$\begin{array}{rl}\ker AB=&\{|x\rangle\oplus |y\rangle : |x\rangle\in (\ker A)^\bot,\ |y\rangle\in\ker A,\
B_{11}|x\rangle+B_{12}|y\rangle=0\}\\
=&(\ker B_{11}\oplus\ker B_{12})\\ &+\{|x\rangle\oplus |y\rangle:
|x\rangle\in(\ker B_{11})^\bot,\ |y\rangle\in(\ker B_{12})^\bot,\
B_{11}|x\rangle+B_{12}|y\rangle=0\}\end{array}
$$ and
$$\ker BA=\ker A \oplus \{|x\rangle : |x\rangle\in (\ker A)^\bot\cap\ker
B_{11}\}=\ker A\oplus\ker B_{11}.$$ Since $\dim \{|x\rangle\oplus
|y\rangle: |x\rangle\in(\ker B_{11})^\bot,\ |y\rangle\in(\ker
B_{12})^\bot,\ B_{11}|x\rangle+B_{12}|y\rangle=0\}\leq\dim(\ker
B_{12})^\bot$ and $\dim\ker B_{12}+\dim (\ker B_{12})^\bot=\dim\ker
A$, one gets
$$\dim \ker AB\leq \dim \ker BA.$$
Symmetrically, we have $\dim\ker BA\leq \dim\ker AB$, and therefore,
$\dim\ker AB=\dim\ker BA$. This completes the proof of the lemma.
\hfill$\Box$
If $\dim H<\infty$, then, for any $T\in{\mathcal B}(H)$, we have
$\|T\|_{\rm Tr}={\rm Tr}(|T|)=\max\limits_{U}\{{\rm Tr}(TU)\}$,
where the maximum is taken over all unitary operators. This result is not
valid in general, even for trace-class operators, if $\dim H=\infty$. The next
lemma says that the above result remains true if the operator is a
product of two positive operators.
{\bf Lemma 2.2.} {\it Let $H$ be a complex Hilbert space and
$A,B\in{\mathcal B}(H)$. If $A,B$ are positive and $AB\in{\mathcal
T}(H)$, then
$$\|AB\|_{\rm Tr}={\rm Tr}(|AB|)=\max \{{\rm Tr}(ABU) :
U\in{\mathcal U}(H)\}, \eqno(2.3)$$ where ${\mathcal U}(H)$ is the
unitary group of all unitary operators in ${\mathcal B}(H)$.}
{\bf Proof.} For any unitary operator $U\in{\mathcal U}(H)$, we
have
$$ |{\rm Tr}(ABU)|\leq\|U\|\|AB\|_{\rm Tr}=\|AB\|_{\rm Tr}={\rm
Tr}(|AB|).$$ On the other hand, by Lemma 2.1, there exists a unitary
operator $V$ such that $AB=V|AB|$. Thus $|AB|=V^\dag AB$ and
$$\|AB\|_{\rm Tr}={\rm
Tr}(|AB|)={\rm Tr}(V^\dag AB)={\rm Tr}(ABV^\dag).$$ Hence Eq.(2.3)
holds. \hfill$\Box$
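In finite dimension, the content of Lemma 2.2 can be checked numerically: the
unitary factor of the polar decomposition attains the maximum in Eq.(2.3). A
sketch under our own random test data (not part of the paper):

```python
import numpy as np

def random_psd(d, rng):
    """A random positive semidefinite matrix on C^d."""
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return a @ a.conj().T

rng = np.random.default_rng(1)
A, B = random_psd(4, rng), random_psd(4, rng)
T = A @ B
u, s, vh = np.linalg.svd(T)
V = u @ vh                               # unitary factor of T = V|T|
trace_norm = s.sum()                     # ||T||_Tr = sum of singular values
attained = np.trace(T @ V.conj().T).real # Tr(T V^dagger) = Tr|T|
```

Indeed $T V^\dagger = u\,\mathrm{diag}(s)\,u^\dagger$, whose trace is the sum
of the singular values, i.e.\ the trace norm.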
{\bf Lemma 2.3.} {\it Let $H$, $K$ be separable infinite dimensional
complex Hilbert spaces and $A\in {\mathcal B}(H)$, $B\in {\mathcal
B}(K)$. Let $\{|i\rangle\}_{i=1}^\infty$,
$\{|i'\rangle\}_{i=1}^\infty$ be any orthonormal bases of $H$, $K$
respectively, and $U$ be the unitary operator defined by
$U|i\rangle=|i'\rangle$. For each positive integer $N$, let
$|m_N\rangle=\sum_{i=1}^N |i\rangle |i'\rangle$. If $A$ or $B$ is a
trace-class operator, then, $$ \lim_{N\rightarrow\infty}\langle
m_N|A\otimes B|m_N\rangle ={\rm Tr}(UA^\dag U^\dag B).$$}
{\bf Proof.} Clearly, $UA^\dag U^\dag B\in{\mathcal T}(K)$ and
$${\rm Tr}(UA^\dag U^\dag B)=\sum _{i,j} \langle i'|UA^\dag U^\dag |j'\rangle\langle j'|B
|i'\rangle = \sum _{i,j} \langle i|A^\dag |j\rangle\langle j'|B
|i'\rangle, $$ which is absolutely convergent. Hence
$$ \lim _{N\rightarrow\infty} \sum _{i,j=1}^N \langle i|A^\dag |j\rangle\langle j'|B
|i'\rangle={\rm Tr}(UA^\dag U^\dag B). \eqno(2.4)$$
On the other hand,
$$\langle m_N|A\otimes B|m_N\rangle =\sum_{i,j=1}^N \langle j|\langle j'|A\otimes B|i\rangle
|i'\rangle =\sum_{i,j=1}^N \langle j|A|i\rangle\langle
j'|B|i'\rangle = \sum _{i,j=1}^N \langle i|A^\dag |j\rangle\langle
j'|B |i'\rangle.
$$
So, by Eq.(2.4), one obtains that
$$
\lim_{N\rightarrow\infty}\langle m_N|A\otimes B|m_N\rangle ={\rm
Tr}(UA^\dag U^\dag B),$$ as desired. \hfill$\Box$
The following is the infinite dimensional version of the Uhlmann's
theorem. Recall that a unit vector $|\psi\rangle\in H\otimes K$ is
said to be a purification of a state $\rho$ on $H$ if $\rho={\rm
Tr}_K(|\psi\rangle\langle\psi|)$.
{\bf Theorem 2.4.} {\it Let $H$ and $K$ be separable infinite
dimensional complex Hilbert spaces. For any states $\rho$ and
$\sigma$ on $H$, we have
$$ F(\rho,\sigma)=\max\{ |\langle \psi |\phi\rangle | :
|\psi\rangle\in{\mathcal P}_\rho,\ |\phi\rangle\in{\mathcal
P}_\sigma\}, $$ where ${\mathcal P}_\rho= \{|\psi\rangle\in H\otimes
K : |\psi\rangle \ \mbox{is a purification of}\ \rho\}$. }
{\bf Proof.} Assume that $\rho, \sigma\in{\mathcal S}(H)$. Then
there exist orthonormal bases of $H$, $\{|i_H\rangle\}_{i=1}^\infty$
and $\{|i'_H\rangle\}_{i=1}^\infty$ such that
$\rho=\sum_{i=1}^\infty p_i|i_H\rangle\langle i_H|$ and
$\sigma=\sum_{i=1}^\infty q_i|i'_H\rangle\langle i'_H|$ with
$\sum_{i=1}^\infty p_i=\sum_{i=1}^\infty q_i=1$. If $|\psi\rangle,
|\phi\rangle \in H\otimes K$ are purifications of $\rho$, $\sigma$,
respectively, then there exist orthonormal sets
$\{|i_K\rangle\}_{i=1}^\infty$ and $\{|i'_K\rangle\}_{i=1}^\infty$
in $K$ such that $|\psi\rangle =\sum_{i=1}^\infty
\sqrt{p_i}|i_H\rangle |i_K\rangle$ and $|\phi\rangle
=\sum_{i=1}^\infty \sqrt{q_i}|i'_H\rangle |i'_K\rangle$.
Pick any orthonormal bases $\{|i''_H\rangle\}_{i=1}^\infty$ of $H$
and $\{|i''_K\rangle\}_{i=1}^\infty$ of $K$. Let $U_H, U_K, V_H,
V_K$ be partial isometries defined by respectively
$$U_H|i''_H\rangle
=|i_H\rangle,\ U_K|i''_K\rangle =|i_K\rangle, \ V_H|i''_H\rangle
=|i'_H\rangle, \ V_K|i''_K\rangle =|i'_K\rangle \eqno(2.5)$$ for
each $i=1,2,\ldots $. For any integer $N>0$, let
$$|m_N\rangle
=\sum_{i=1}^N|i''_H\rangle|i''_K\rangle .$$ Then
$$|\psi_N\rangle=\sum_{i=1}^N
\sqrt{p_i}|i_H\rangle |i_K\rangle=\sum_{i=1}^N
\sqrt{\rho}(U_H\otimes U_K)|i''_H\rangle
|i''_K\rangle=(\sqrt{\rho}U_H\otimes U_K)|m_N\rangle$$ and
$$|\phi_N\rangle=\sum_{i=1}^N
\sqrt{q_i}|i'_H\rangle |i'_K\rangle=\sum_{i=1}^N
\sqrt{\sigma}(V_H\otimes V_K)|i''_H\rangle
|i''_K\rangle=(\sqrt{\sigma}V_H\otimes V_K)|m_N\rangle.$$ It follows
from Lemma 2.3 that
$$\begin{array}{rl}
|\langle \psi |\phi\rangle|= & \lim_{N\rightarrow\infty} |\langle
\psi_N |\phi_N\rangle|=\lim_{N\rightarrow\infty} |\langle
m_N|U_H^\dag\sqrt{\rho}\sqrt{\sigma}V_H\otimes U_K^\dag
V_K|m_N\rangle| \\= &|{\rm
Tr}(UV_H^\dag\sqrt{\sigma}\sqrt{\rho}U_HU^\dag U_K^\dag V_K)|\leq
\|U_HU^\dag U_K^\dag V_KUV_H^\dag\| {\rm
Tr}(|\sqrt{\sigma}\sqrt{\rho}|)
\\
\leq & {\rm Tr}(|\sqrt{\sigma}\sqrt{\rho}|)={\rm
Tr}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}=F(\rho,\sigma),
\end{array} \eqno(2.6)
$$
where $U$ is the unitary operator defined by $U|i''_H\rangle
=|i''_K\rangle$. Therefore, we have proved that
$$ F(\rho,\sigma)\geq\sup\{ |\langle \psi |\phi\rangle | :
|\psi\rangle\in{\mathcal P}_\rho,\ |\phi\rangle\in{\mathcal
P}_\sigma\}.$$
Now, to complete the proof, it suffices to find
$|\psi\rangle\in{\mathcal P}_\rho$ and $|\phi\rangle\in{\mathcal
P}_\sigma$ such that $|\langle\psi|\phi\rangle|=F(\rho,\sigma)$.
By applying Lemma 2.1, we see that $\sqrt{\sigma}\sqrt{\rho}$ has a
polar decomposition
$\sqrt{\sigma}\sqrt{\rho}=U_0|\sqrt{\sigma}\sqrt{\rho}|$ with $U_0$
a unitary operator.
Let $\{|i_K\rangle\}_{i=1}^\infty$ be an orthonormal basis of $K$
and let $|\psi\rangle =\sum_{i=1}^\infty \sqrt{p_i}|i_H\rangle
|i_K\rangle$ and $|\phi\rangle =\sum_{i=1}^\infty
\sqrt{q_i}|i'_H\rangle |i_K\rangle$. Then $|\psi\rangle\in{\mathcal
P}_\rho$ and $|\phi\rangle\in{\mathcal P}_\sigma$. Let
$|i''_H\rangle=|i_H\rangle$, $|i''_K\rangle=|i_K\rangle$,
$i=1,2\ldots$. Then, by Eq.(2.5), $U_H=I$, $U_K=I$, $V_H$ is a
unitary operator determined by $V_H|i_H\rangle=|i'_H\rangle$. Take
$|i'_K\rangle$ so that $V_K=UU_0^\dag V_HU^\dag$. Then for such
choice of $|\psi\rangle$ and $|\phi\rangle$ we have
$$\begin{array}{rl}
|\langle \psi |\phi\rangle|= & \lim_{N\rightarrow\infty} |\langle
\psi_N |\phi_N\rangle|=\lim_{N\rightarrow\infty} |\langle
m_N|\sqrt{\rho}\sqrt{\sigma}V_H\otimes V_K|m_N\rangle| \\= &|{\rm
Tr}(UV_H^\dag\sqrt{\sigma}\sqrt{\rho}U^\dag V_K)|=| {\rm Tr}(U^\dag
V_KUV_H^\dag U_0|\sqrt{\sigma}\sqrt{\rho}|)|
\\
=&| {\rm Tr}(|\sqrt{\sigma}\sqrt{\rho}|)|=F(\rho ,\sigma ),
\end{array}
$$
completing the proof. \hfill$\Box$
By checking the proof of Theorem 2.4, it is easily seen that the
following holds.
{\bf Corollary 2.5.} {\it Let $H$ and $K$ be separable infinite
dimensional complex Hilbert spaces. For any states $\rho$ and
$\sigma$ on $H$, we have
$$ F(\rho,\sigma)=\max\{ |\langle \psi_0 |\phi\rangle | :
|\phi\rangle\in{\mathcal
P}_\sigma\}=\max\{ |\langle \psi |\phi_0\rangle | :
|\psi\rangle\in{\mathcal
P}_\rho\}, $$ where $|\psi_0\rangle$ is any fixed purification of
$\rho$ of the form $|\psi_0\rangle =\sum_{i=1}^\infty
\sqrt{p_i}|i_H\rangle |i_K\rangle$ with $\{ |i_K\rangle\}$ an
orthonormal basis of $K$ and $|\phi_0\rangle$ is any fixed
purification of $\sigma$ of the form $|\phi_0\rangle
=\sum_{i=1}^\infty \sqrt{q_i}|i_H'\rangle |i_K'\rangle$ with $\{
|i_K'\rangle\}$ an orthonormal basis of $K$.}
The fidelity itself is not a distance because it does not satisfy the
triangle inequality. However, as in the finite dimensional case,
by use of Theorem 2.4 and Corollary 2.5, one can show that the
arc-cosine of the fidelity is a distance.
{\bf Corollary 2.6.} {\it $A(\rho,\sigma):=\arccos F(\rho,\sigma)$
is a distance on ${\mathcal S}(H)$.}
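A quick finite-dimensional sanity check of Corollary 2.6 (an illustration only): sample random state triples and test the triangle inequality for $A(\rho,\sigma)=\arccos F(\rho,\sigma)$.

```python
import numpy as np

def rand_state(d, rng):
    # random density matrix (Ginibre ensemble)
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def angle(rho, sigma):
    # A(rho, sigma) = arccos F(rho, sigma); min() guards rounding above 1
    F = np.linalg.svd(psd_sqrt(sigma) @ psd_sqrt(rho), compute_uv=False).sum()
    return np.arccos(min(F, 1.0))

rng = np.random.default_rng(1)
d = 3
violations = 0
for _ in range(300):
    a, b, c = (rand_state(d, rng) for _ in range(3))
    if angle(a, c) > angle(a, b) + angle(b, c) + 1e-8:
        violations += 1
assert violations == 0
```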
Several remarkable properties of the fidelity in the finite dimensional
case remain valid in the infinite dimensional case. For instance:
{\bf Monotonicity of the fidelity} For any quantum channel
${\mathcal E}$, we have
$$ F({\mathcal E}(\rho), {\mathcal E}(\sigma))\geq F(\rho,\sigma).
\eqno(2.7)$$ Recall that a quantum channel is a
completely positive and trace preserving linear map from ${\mathcal
T}(H)$ into ${\mathcal T}(K)$.
{\bf Strong concavity of the fidelity} Let $p_i$ and $q_i$ be
probability distributions over the same index set, and $\rho_i$ and
$\sigma_i$ states also indexed by the same index set. Then
$$F(\sum_ip_i\rho_i, \sum_iq_i\sigma_i)\geq \sum_i
\sqrt{p_iq_i}F(\rho_i,\sigma_i). \eqno(2.8)$$
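Both properties are easy to probe numerically in finite dimensions. The sketch below (an illustration only; all helper names are ours) builds a random Kraus channel by slicing an isometry, so that $\sum_i E_i^\dag E_i=I$, and checks Eqs.(2.7) and (2.8) on random states.

```python
import numpy as np

def rand_state(d, rng):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    return np.linalg.svd(psd_sqrt(sigma) @ psd_sqrt(rho), compute_uv=False).sum()

rng = np.random.default_rng(2)
d, n_kraus = 3, 4

# random channel: slice an isometry into Kraus blocks (sum E^dag E = I)
iso, _ = np.linalg.qr(rng.normal(size=(d * n_kraus, d))
                      + 1j * rng.normal(size=(d * n_kraus, d)))
kraus = [iso[i * d:(i + 1) * d, :] for i in range(n_kraus)]
channel = lambda r: sum(E @ r @ E.conj().T for E in kraus)

# monotonicity, Eq.(2.7)
rho, sigma = rand_state(d, rng), rand_state(d, rng)
mono_ok = fidelity(channel(rho), channel(sigma)) >= fidelity(rho, sigma) - 1e-9
assert mono_ok

# strong concavity, Eq.(2.8)
p, q = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(3))
rhos = [rand_state(d, rng) for _ in range(3)]
sigs = [rand_state(d, rng) for _ in range(3)]
lhs = fidelity(sum(pi * r for pi, r in zip(p, rhos)),
               sum(qi * s for qi, s in zip(q, sigs)))
rhs = sum(np.sqrt(pi * qi) * fidelity(r, s)
          for pi, qi, r, s in zip(p, q, rhos, sigs))
assert lhs >= rhs - 1e-9
```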
\section{Connection to the classical fidelity and trace distance}
If $\dim H<\infty$, the quantum fidelity is related to the classical
fidelity by considering the probability distributions induced by a
measurement. In fact \cite[pp. 412]{NC}
$$F(\rho,\sigma)=\min\limits_{\{E_m\}} F(p_m,q_m), \eqno(3.1)$$
where the minimum is over all POVMs (positive operator-valued
measures) $\{E_m\}$, and $p_m={\rm Tr}(\rho E_m)$, $q_m={\rm
Tr}(\sigma E_m)$ are the probability distributions for $\rho$ and
$\sigma$ corresponding to the POVM $\{E_m\}$.
It is natural to ask whether Eq.(3.1) remains true if $\dim
H=\infty$. The following result answers this question.
For a positive operator $A\in{\mathcal B}(H)$, with respect to the
space decomposition $H=(\ker A)^\bot \oplus \ker A$,
$A=\left(\begin{array}{cc} A_1 &0\\ 0& 0\end{array}\right)$, where
$A_1:(\ker A)^\bot \rightarrow(\ker A)^\bot $ is injective and hence
$A_1^{-1}$ makes sense. In this paper, we always write $A^{[-1]}$
for the (possibly unbounded) densely defined positive operator defined by
$A^{[-1]}=\left(\begin{array}{cc} A_1^{-1} &0\\ 0&
0\end{array}\right)$ with domain ${\mathcal D}(A^{[-1]})={\rm ran}
(A)\oplus \ker A$. Here ran($A)$ stands for the range of $A$.
{\bf Theorem 3.1.} {\it Let $H$ be a separable infinite dimensional
complex Hilbert space. Then, for any states $\rho,\sigma\in
{\mathcal S}(H)$, we have
$$F(\rho,\sigma)=\inf\limits_{\{E_m\}} F(p_m,q_m), \eqno(3.2)$$
where the infimum is over all POVMs $\{E_m\}$, and $p_m={\rm
Tr}(\rho E_m)$, $q_m={\rm Tr}(\sigma E_m)$ are the probability
distributions for $\rho$ and $\sigma$ corresponding to the POVM
$\{E_m\}$. Moreover, the infimum is attained (i.e., is a minimum) if
and only if the (possibly unbounded) operator
$M=\rho^{[-1/2]}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}\rho^{[-1/2]}$ is
diagonal.}
Firstly, we need a lemma.
{\bf Lemma 3.2.} {\it Let $H$ be an infinite dimensional complex
Hilbert space. Assume that $A\in{\mathcal T}(H)$ and $ \{
T_n\}_{n=0}^\infty\subset{\mathcal B}(H)$. If
SOT-$\lim_{n\rightarrow\infty}T_n=T_0$, then
$\lim_{n\rightarrow\infty}{\rm Tr}(T_nA)={\rm Tr}(T_0A)$. Here SOT
means the strong operator topology. }
{\bf Proof.} As $T_n$ converges to $T_0$ in the strong operator
topology, the uniform boundedness principle yields a constant $d>0$ such that $\sup_n\|T_n\|\leq d$.
For any $\varepsilon
>0$, there exists a finite rank projection $P_\varepsilon$ such that
$\|A-P_\varepsilon AP_\varepsilon\|_{\rm Tr}<\varepsilon/(2d+1)$
because $A\in{\mathcal T}(H)$. On the other hand, $P_\varepsilon$ is
of finite rank, together with SOT-$\lim_{n\rightarrow\infty}
T_n=T_0$, implies that
$$\lim_{n\rightarrow\infty}\|P_\varepsilon(T_n-T_0)P_\varepsilon\|=0.$$
So, for the above $\varepsilon>0$, there exists some $N$ such that
$$\|P_\varepsilon(T_n-T_0)P_\varepsilon\|<\frac{1}{(2d+1)\|A\|_{{\rm Tr}}}\varepsilon$$whenever
$n>N$. Thus we have
$$\begin{array}{rl}|{\rm Tr}((T_n-T_0)A)|
&\leq|{\rm Tr}((T_n-T_0)(A-P_\varepsilon AP_\varepsilon))|+|{\rm Tr}((T_n-T_0)P_\varepsilon AP_\varepsilon)|\\
&\leq\|T_n-T_0\|\|A-P_\varepsilon AP_\varepsilon\|_{{\rm
Tr}}+\|P_\varepsilon(T_n-T_0)P_\varepsilon\|\|A\|_{{\rm
Tr}}\\
&<2d\|A-P_\varepsilon AP_\varepsilon\|_{{\rm
Tr}}+\frac{\varepsilon}{2d+1}\\
&<\frac{2d}{2d+1}\varepsilon
+\frac{\varepsilon}{2d+1}=\varepsilon.\end{array}$$ Therefore,
$\lim_{n\rightarrow\infty}{\rm Tr}(T_nA)={\rm Tr}(T_0A)$.
\hfill$\Box$
{\bf Proof of Theorem 3.1.} Let $\{E_m\}$ be a POVM. Then $E_m\geq
0$ and $\sum_m E_m=I$, where the series converges in the strong
operator topology. By Lemma 2.1, there exists a unitary operator $U$
such that
$\sqrt{\rho^{1/2}\sigma\rho^{1/2}}=U\sqrt{\sigma}\sqrt{\rho}$. Thus,
by the Cauchy-Schwarz inequality and Lemma 3.2,
$$ \begin{array}{rl}
F(\rho,\sigma)= & {\rm Tr}(U\sqrt{\sigma}\sqrt{\rho})=\sum_m{\rm
Tr}(U\sqrt{\sigma}\sqrt{E_m}\sqrt{E_m}\sqrt{\rho}) \\
\leq & \sum_m\sqrt{{\rm Tr}(\rho E_m){\rm Tr}(\sigma E_m)}
=\sum_m\sqrt{p_mq_m}=F(p_m,q_m).\end{array}\eqno(3.3)
$$ Hence we have
$$F(\rho,\sigma)\leq\inf\limits_{\{E_m\}} F(p_m,q_m).$$
Next we show that the equality holds in the above inequality, that
is, Eq.(3.2) holds. By the spectral decomposition, there is an
orthonormal basis $\{|i\rangle\}_{i=1}^\infty$ of $H$ such that
$\rho=\sum_{i} r_i|i\rangle\langle i|$ with $\sum_i r_i=1$. For any
positive integer $n$, let $H_n$ be the $n$-dimensional subspace
spanned by $|1\rangle, |2\rangle,\ldots , |n\rangle$, and $P_n$ be
the projection from $H$ onto $H_n$. Define $\rho _n=\alpha_n^{-1}
P_n\rho P_n$ and $\sigma_n=\beta_n^{-1} P_n\sigma P_n$, where
$\alpha_n={\rm Tr}(P_n\rho P_n)$ and $\beta_n={\rm Tr}(P_n\sigma
P_n)$. Clearly, $\lim_{n\rightarrow\infty} \alpha_n=1$,
$\lim_{n\rightarrow\infty} \beta_n=1$,
SOT-$\lim_{n\rightarrow\infty}
\rho_n=$SOT-$\lim_{n\rightarrow\infty} P_n\rho P_n=\rho$ and
SOT-$\lim_{n\rightarrow\infty}
\sigma_n=$SOT-$\lim_{n\rightarrow\infty} P_n\sigma P_n=\sigma$. By
\cite{ZM}, we see that $\lim_{n\rightarrow\infty} \rho_n=\rho$ and
$\lim_{n\rightarrow\infty} \sigma_n=\sigma$ under the trace norm
topology. It follows that
$\lim_{n\rightarrow\infty}\sqrt{\rho_n^{1/2}\sigma_n\rho_n^{1/2}}=\sqrt{\rho^{1/2}\sigma\rho^{1/2}}$
under the trace norm
topology, which implies that
$\lim_{n\rightarrow\infty}F(\rho_n,\sigma_n)=F(\rho,\sigma)$. So,
for any $\varepsilon>0$, there exists some $N_1$ such that
$$|F(\rho,\sigma)-\alpha_n\beta_nF(\rho_n,\sigma_n)|<\varepsilon/2 \eqno(3.4)$$
whenever $n>N_1$.
On the other hand, note that
$\lim_{n\rightarrow\infty}\sqrt{\alpha_n\beta_n}{\rm Tr}(\rho
P_n)=1$ and $\lim_{n\rightarrow\infty}\sqrt{\alpha_n\beta_n}{\rm
Tr}(\sigma P_n)=1$. Thus, for the above $\varepsilon>0$, there
exists some $N_2$ such that
$$|1-\sqrt{\alpha_n\beta_n}{\rm
Tr}(\rho P_n)|<\varepsilon/2\quad{\rm and}\quad
|1-\sqrt{\alpha_n\beta_n}{\rm Tr}(\sigma P_n)|<\varepsilon/2
\eqno(3.5)$$ whenever $n>N_2$.
Now, consider $\rho_n$ and $\sigma_n$ for $n\geq \max\{N_1,N_2\}$.
With respect to the space decomposition $H=H_n\oplus H_n^\bot$, we have $\rho_n=\left(\begin{array}{cc} \rho_0 &0\\
0& 0\end{array}\right)$ and $\sigma_n=\left(\begin{array}{cc} \sigma_0 &0\\
0& 0\end{array}\right)$, where $\rho_0,\sigma_0\in{\mathcal
S}(H_n)$. Applying Eq.(3.1) to $\rho_0$ and $\sigma_0$, there exists
POVM $\{E_m'\}\subseteq{\mathcal B}(H_n)$ with
$\sum_{m=1}^nE_m'=I_n$ such that
$$F(\rho_0,\sigma_0)=\sum_{m=1}^n\sqrt{{\rm Tr}(\rho_0 E_m'){\rm Tr}(\sigma_0 E_m')}.$$
Let $E_m=E_m'\oplus 0$ and $E_{n+1}=I-P_n$. It is obvious that
$\sum_{m=1}^{n+1}E_m=I$ and
$$F(\rho_n,\sigma_n)=F(\rho_0,\sigma_0)
=\sum_{m=1}^n\sqrt{{\rm Tr}(\rho_0 E_m'){\rm Tr}(\sigma_0
E_m')}=\sum_{m=1}^{n+1}\sqrt{{\rm Tr}(\rho_n E_m){\rm Tr}(\sigma_n
E_m)}.$$
Now define $F_m=\sqrt{\alpha_n\beta_n}P_nE_mP_n$ for
$m=1,2,\cdots,n+1$ and $F_0=I-\sqrt{\alpha_n\beta_n}P_n$. It is
clear that $\{F_m\}$ is a POVM. Furthermore
$$\begin{array}{rl}\sum_{m=1}^{n+1}\sqrt{{\rm Tr}(\rho F_m){\rm Tr}(\sigma F_m)}
=&\sum_{m=1}^{n+1}\sqrt{{\alpha_n\beta_n}{\rm Tr}(P_n\rho P_n
E_m){\rm
Tr}(P_n\sigma P_nE_m)}\\
=&\sum_{m=1}^{n+1}\sqrt{\alpha_n\beta_n}\sqrt{\alpha_n{\rm
Tr}(\rho_n
E_m)\beta_n{\rm Tr}(\sigma_nE_m)}\\
=&\sum_{m=1}^{n+1}\alpha_n\beta_n\sqrt{{\rm Tr}(\rho_n E_m){\rm
Tr}(\sigma_nE_m)}\\
=&\alpha_n\beta_nF(\rho_n,\sigma_n).\end{array}\eqno(3.6)$$ Hence,
by Eqs.(3.4)-(3.6), we get
$$\begin{array}{rl}
&|F(\rho,\sigma)-\sum_{m=0}^{n+1}\sqrt{{\rm Tr}(\rho F_m){\rm
Tr}(\sigma F_m)}|\\
\leq&|F(\rho,\sigma)-\sum_{m=1}^{n+1}\sqrt{{\rm
Tr}(\rho F_m){\rm Tr}(\sigma F_m)}|+\sqrt{{\rm Tr}(\rho F_0){\rm
Tr}(\sigma F_0)}\\
=&|F(\rho,\sigma)-\alpha_n\beta_nF(\rho_n,\sigma_n)|+\sqrt{(1-\sqrt{\alpha_n\beta_n}{\rm
Tr}(\rho P_n))(1-\sqrt{\alpha_n\beta_n}{\rm Tr}(\sigma P_n))}\\
<&\varepsilon/2+\varepsilon/2=\varepsilon.\end{array}$$ Thus we have
proved that, for any $\varepsilon>0$, there exists some POVM
$\{F_m\}$ such that
$$F(p_m,q_m)<F(\rho,\sigma)+\varepsilon,$$ where $p_m={\rm
Tr}(\rho F_m)$, $q_m={\rm Tr}(\sigma F_m)$ are the probability
distributions for $\rho$ and $\sigma$ corresponding to the POVM
$\{F_m\}$. So Eq.(3.2) is true.
It is clear that the infimum of Eq.(3.2) attains the minimum if and
only if there exists a POVM $\{E_m\}$ such that the Cauchy-Schwarz
inequality is satisfied with equality for each term in the sum of
Eq.(3.3), that is,
$$\sqrt{E_m}\sqrt{\rho}=\lambda_m\sqrt{E_m}\sqrt{\sigma} U^\dag \eqno(3.7)$$ for
some set of numbers $\lambda_m \geq 0$. Noting that
$\sqrt{\rho^{1/2}\sigma\rho^{1/2}}=U\sqrt{\sigma}\sqrt{\rho}=\sqrt{\rho}\sqrt{\sigma}U^\dag$,
we see that the range of $\sqrt{\rho^{1/2}\sigma\rho^{1/2}}$ is
contained in the range of $\rho^{1/2}$, and hence
$$\sqrt{\sigma}U^\dag=\rho^{[-1/2]}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}.
\eqno(3.8)$$ Substituting Eq.(3.8) into Eq.(3.7), we find that
$$\sqrt{E_m}\sqrt{\rho}=\lambda_m\sqrt{E_m}\rho^{[-1/2]}\sqrt{\rho^{1/2}\sigma\rho^{1/2}} \eqno(3.9)$$
for each $m$. It follows that $
\sqrt{E_m}\sqrt{\rho}\not=0\Rightarrow \lambda _m\not=0$; on the
other hand, if $\sqrt{E_m}\sqrt{\rho}=0$, one may take $\lambda_m=0$. Let $H_0={\rm
span}\{ {\rm ran}(\sqrt{E_m}): \sqrt{E_m}\sqrt{\rho}=0\}$, and $P_0$
be the projection onto $H_0$. Then, Eq.(3.9) implies that Eq.(3.7)
is equivalent to
$$\sqrt{E_m}(I-P_0-\lambda_m M)=0 \eqno(3.10)$$
holds for all $m$, where
$M=\rho^{[-1/2]}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}\rho^{[-1/2]}$
(possibly unbounded). Now it is easily seen that the closure of
ran($\sqrt{E_m}$) reduces $M$ to the scalar operator
$\lambda_m^{-1}$ if $\sqrt{E_m}\sqrt{\rho}\not=0$, and $\ker M=H_0$.
Thus $0,\lambda_m^{-1}\in\sigma_p(M)$, the point spectrum (i.e.,
eigenvalues) of $M$. Since $\sum_m E_m=I$, we see that $\sum_m {\rm
ran} (\sqrt{E_m})=H$ and the spectrum of $M$,
$\sigma(M)\subseteq{\rm cl} \ \{0,\lambda_m^{-1}\}={\rm cl}\
\sigma_p(M)$. So $M$ must be diagonal. Conversely, suppose $M$ is
diagonal, say $M=\sum_m \gamma_m |m\rangle\langle m|$ with
$\{|m\rangle\}$ an orthonormal basis of $H$. Let
$\lambda_m=\gamma_m^{-1}$ if $\gamma_m\not=0$; $\lambda_m=0$ if
$\gamma_m=0$. Then the POVM $\{E_m=|m\rangle\langle m|\}$ satisfies
Eq.(3.10) and thus Eq.(3.9). Hence $F(\rho,\sigma)=\sum_m\sqrt{{\rm
Tr}(\rho E_m){\rm Tr}(\sigma E_m)} =F(p_m,q_m)$. This completes the
proof.\hfill$\Box$
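The diagonal-$M$ criterion is easy to illustrate numerically in finite dimensions (with $\rho$ full rank, so that $\rho^{[-1/2]}=\rho^{-1/2}$ is an honest inverse and $M$ is bounded): any projective measurement can only increase the classical fidelity, while measuring in the eigenbasis of $M$ attains $F(\rho,\sigma)$. The sketch below is an illustration only; all helper names are ours.

```python
import numpy as np

def rand_state(d, rng):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    return np.linalg.svd(psd_sqrt(sigma) @ psd_sqrt(rho), compute_uv=False).sum()

rng = np.random.default_rng(3)
d = 4
rho = 0.5 * rand_state(d, rng) + 0.5 * np.eye(d) / d   # keep rho well conditioned
sigma = rand_state(d, rng)
F = fidelity(rho, sigma)

def classical_fidelity(basis):
    # projective POVM { |v><v| : v a column of `basis` }
    p = np.einsum('ij,jk,ki->i', basis.conj().T, rho, basis).real
    q = np.einsum('ij,jk,ki->i', basis.conj().T, sigma, basis).real
    return np.sqrt(p * q).sum()

# any measurement can only increase the classical fidelity ...
for _ in range(50):
    basis, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    assert classical_fidelity(basis) >= F - 1e-9

# ... and the eigenbasis of M = rho^{-1/2} sqrt(rho^{1/2} sigma rho^{1/2}) rho^{-1/2}
# attains the minimum (here sigma = M rho M, so sqrt(p_m q_m) = gamma_m p_m)
sr = psd_sqrt(rho)
sr_inv = np.linalg.inv(sr)
M = sr_inv @ psd_sqrt(sr @ sigma @ sr) @ sr_inv
_, eig_basis = np.linalg.eigh((M + M.conj().T) / 2)
gap = classical_fidelity(eig_basis) - F
assert abs(gap) < 1e-6
```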
{\bf Remark 3.3.} There do exist some $\rho$ and $\sigma$ such that
there is no POVM $\{E_m\}$ satisfying
$F(\rho,\sigma)=\sum_m\sqrt{{\rm Tr}(\rho E_m){\rm Tr}(\sigma E_m)}
$. For example, let $H=L_2([0,1])$ and $M_t$ the operator defined by
$(M_tf)(t)=tf(t)$ for any $f\in H$. Then $M_t$ is positive and is
not diagonal because $\sigma(M_t)=[0,1]$ and the point spectrum
$\sigma_p(M_t)=\emptyset$. Let $\rho\in{\mathcal S}(H)$ be injective
as an operator. Then $d={\rm Tr}(M_t^2\rho)\not=0$. Let
$M=d^{-1}M_t$ and $\sigma =M\rho M$. As ${\rm Tr}(M^2\rho)=1$,
$\sigma$ is a state. Now it is clear that
$M=\rho^{-1/2}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}\rho^{-1/2}$, which
is not diagonal. Thus by Theorem 3.1, the infimum in Eq.(3.2) does
not attain the minimum.
For two states $\rho$ and $\sigma$, recall that the trace distance
of them is defined by
$D(\rho,\sigma)=\frac{1}{2}\|\rho-\sigma\|_{\rm Tr}$. By use of
Uhlmann's theorem and Eq.(3.1), it holds for finite dimensional case
that
$$1-F(\rho,\sigma)\leq D(\rho,\sigma)\leq
\sqrt{1-F(\rho,\sigma)^2}. \eqno(3.11)$$ This reveals that the trace
distance and the fidelity are qualitatively equivalent measures of
closeness for quantum states. Now Theorem 3.1 allows us to establish
the same relationship between fidelity measure and trace distance
measure for states of infinite dimensional systems.
{\bf Theorem 3.4.} {\it Let $H$ be an infinite dimensional
separable complex Hilbert space. Then for any states
$\rho,\sigma\in{\mathcal S}(H)$, the inequalities in Eq.(3.11)
hold.}
{\bf Proof.} Firstly, it is obvious that if both
$\rho=|a\rangle\langle a|$ and $\sigma=|b\rangle\langle b|$ are pure
states, then $D(\rho,\sigma)=D(|a\rangle,
|b\rangle)=\sqrt{1-F(|a\rangle,|b\rangle)^2}=\sqrt{1-F(\rho,\sigma)^2}.
$ (See \cite[pp. 415]{NC} for a proof that is valid for both finite
and infinite dimensional cases.)
Let $\rho$ and $\sigma$ be any two states, and let $|\psi\rangle$
and $|\phi\rangle$ be purifications chosen such that
$F(\rho,\sigma)=|\langle\psi|\phi\rangle |$ by Theorem 2.4. Since
the trace distance is non-increasing under the partial trace, we see
that
$$ D(\rho,\sigma)\leq D(|\psi\rangle,
|\phi\rangle)=\sqrt{1-F(|\psi\rangle,|\phi\rangle)^2}=\sqrt{1-F(\rho,\sigma)^2}.$$
This establishes the inequality
$$D(\rho,\sigma)\leq \sqrt{1-F(\rho,\sigma)^2}. \eqno(3.12)$$
To see the other inequality of Eq.(3.11) is true, Theorem 3.1 is
needed.
For any given $\varepsilon>0$, by Theorem 3.1, we may take a POVM
$\{E_m\}$ such that
$$ F(\rho,\sigma)\leq F(p_m,q_m)=\sum_m\sqrt{p_mq_m}< F(\rho,\sigma)+\varepsilon, \eqno(3.13)$$
where $p_m={\rm Tr}(\rho E_m)$ and $q_m={\rm Tr}(\sigma E_m)$ are
the probabilities for obtaining outcome $m$ for the states $\rho$
and $\sigma$, respectively. Observe that, for both finite and
infinite dimensional cases, we have
$$ D(\rho,\sigma)=\max\limits_{\{E_m\}} D(p_m,q_m),\eqno(3.14)$$
where $D(p_m,q_m)=\frac{1}{2}\sum_m |p_m-q_m|$ and the maximum is
over all POVMs $\{E_m\}$. It follows from Eq.(3.14) and
$$ \sum_m(\sqrt{p_m}-\sqrt{q_m})^2=\sum_mp_m+\sum_mq_m-2F(p_m,q_m)
=2(1-F(p_m,q_m)),$$ that
$$\begin{array}{rl} 2(1-F(\rho,\sigma))-2\varepsilon <&
2(1-F(p_m,q_m))=\sum_m(\sqrt{p_m}-\sqrt{q_m})^2 \\ \leq & \sum_m
|\sqrt{p_m}-\sqrt{q_m}|(\sqrt{p_m}+\sqrt{q_m})=\sum_m |p_m-q_m|
\\=&2D(p_m,q_m)\leq 2D(\rho,\sigma). \end{array}
$$
Thus we have proved that
$$(1-F(\rho,\sigma))-\varepsilon <D(\rho,\sigma)$$
holds for any $\varepsilon>0$. This forces
$$1-F(\rho,\sigma)\leq D(\rho,\sigma),$$ which, combined with
inequality (3.12), completes the proof of the theorem.\hfill$\Box$
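The two bounds of Theorem 3.4 are cheap to check numerically in finite dimensions (an illustration only; helper names are ours):

```python
import numpy as np

def rand_state(d, rng):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    return np.linalg.svd(psd_sqrt(sigma) @ psd_sqrt(rho), compute_uv=False).sum()

def trace_distance(rho, sigma):
    # D = (1/2)||rho - sigma||_Tr, via eigenvalues of the Hermitian difference
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

rng = np.random.default_rng(4)
d = 4
failures = 0
for _ in range(200):
    rho, sigma = rand_state(d, rng), rand_state(d, rng)
    F = min(fidelity(rho, sigma), 1.0)
    D = trace_distance(rho, sigma)
    # Eq.(3.11): 1 - F <= D <= sqrt(1 - F^2)
    if not (1 - F <= D + 1e-9 and D <= np.sqrt(1 - F * F) + 1e-9):
        failures += 1
assert failures == 0
```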
\section{Fidelities connected to channels}
In the finite dimensional case, the ensemble average fidelity and
the entanglement fidelity are two important kinds of fidelity
associated with a quantum channel. In this section we give the
definitions of ensemble average fidelity and entanglement fidelity
connected to a quantum channel for an infinite dimensional system,
and discuss their relationship.
Let $H$ be an infinite dimensional separable complex
Hilbert space. Recall that a quantum channel ${\mathcal E}:
{\mathcal T}(H)\rightarrow{\mathcal T}(H)$ is a trace preserving
completely positive linear map. As in the finite dimensional case,
for such a quantum channel ${\mathcal E}$ and a given ensemble
$\{p_j,\rho_j\}_{j=1}^\infty$, one can define the ensemble average
fidelity by
$$\overline{F}=\sum_jp_jF(\rho_j,{\mathcal E}(\rho_j))^2.\eqno(4.1)$$
Similarly, for a state $\rho$, one can define the entanglement
fidelity by
$$\begin{array}{rl}F(\rho,{\mathcal E})=&F(|\psi\rangle,({\mathcal E}\otimes I)(|\psi\rangle\langle\psi|))^2\\
=&\langle\psi|({\mathcal E}\otimes
I)(|\psi\rangle\langle\psi|)|\psi\rangle,\end{array}\eqno(4.2)$$
where $|\psi\rangle\in H\otimes H$ is a purification of $\rho$. Note
that the definition of $F(\rho,{\mathcal E})$ does not depend on the
choice of purification. To see this, let
$|\psi\rangle=\sum_j\sqrt{p_j}|j\rangle|\mu_j\rangle$ be any
purification, where $\{|j\rangle\}$ is an orthonormal basis and
$\{|\mu_j\rangle\}$ is an orthonormal set of $H$. By \cite{H}, there exists a sequence
of operators $\{E_i\}\subseteq {\mathcal B}(H)$ with
$\sum_iE_i^\dag E_i=I$ such that $${\mathcal
E}(\sigma)=\sum_iE_i\sigma E_i^\dag \quad{\rm for\ \ all}\quad
\sigma\in{\mathcal S}(H).$$ Thus
$$\begin{array}{rl}F(\rho,{\mathcal E})=&\sum_i\langle\psi|(E_i\otimes I)(|\psi\rangle\langle\psi|)(E_i^\dag\otimes I)|\psi\rangle\\
=&\sum_i\langle\psi|\sum_{j,k}\sqrt{p_jp_k}(E_i\otimes
I)(|j\rangle|\mu_j\rangle\langle k|\langle\mu_k|)(E_i^\dag\otimes
I) |\psi\rangle\\
=&\sum_i\sum_{j,k}p_jp_k\langle j|E_i|j\rangle\langle
k|E_i^\dag |k\rangle\\
=&\sum_i|{\rm Tr}(E_i\rho)|^2,\end{array}\eqno(4.3)$$ which depends
only on $\rho$ and $\mathcal E$.
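Eq.(4.3) can be verified against the definition Eq.(4.2) directly in finite dimensions, using the purification $|\psi\rangle={\rm vec}(\sqrt{\rho})$ (row-major vectorization, for which $(A\otimes I)\,{\rm vec}(M)={\rm vec}(AM)$). A sketch, with all helper names ours:

```python
import numpy as np

def rand_state(d, rng):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

rng = np.random.default_rng(5)
d, n_kraus = 3, 4

# random Kraus channel from a sliced isometry (sum E^dag E = I)
iso, _ = np.linalg.qr(rng.normal(size=(d * n_kraus, d))
                      + 1j * rng.normal(size=(d * n_kraus, d)))
kraus = [iso[i * d:(i + 1) * d, :] for i in range(n_kraus)]
rho = rand_state(d, rng)

# Eq.(4.3): F(rho, E) = sum_i |Tr(E_i rho)|^2
F_kraus = sum(abs(np.trace(E @ rho)) ** 2 for E in kraus)

# direct Eq.(4.2): <psi| (E (x) I)(|psi><psi|) |psi> with |psi> = vec(sqrt(rho)),
# whose partial trace over the second factor is rho
psi = psd_sqrt(rho).reshape(-1)
F_direct = sum(abs(np.vdot(psi, np.kron(E, np.eye(d)) @ psi)) ** 2 for E in kraus)

assert abs(F_kraus - F_direct) < 1e-10
```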
In the sequel we will give some properties of entanglement fidelity
for infinite dimensional systems.
First note that, by the monotonicity of the fidelity, Eq.(2.7), one
easily checks that
$$F(\rho,{\mathcal E})\leq[F(\rho,{\mathcal E}(\rho))]^2.\eqno(4.4)$$
{\bf Proposition 4.1.} {\it Let $H$ be an infinite dimensional
separable complex Hilbert space. Assume that ${\mathcal E}:
{\mathcal T}(H)\rightarrow{\mathcal T}(H)$ is a quantum channel and
$\rho\in{\mathcal S}(H)$. Then the entanglement fidelity
$F(\rho,{\mathcal E})$ is a convex function of $\rho$.}
{\bf Proof.} Take any states $\rho_1,\rho_2\in{\mathcal S}(H)$.
Define a real function $f:{\mathbb R}\rightarrow{\mathbb R}$ by
$$f(x)\equiv F(x\rho_1+(1-x)\rho_2,{\mathcal
E}),\quad \forall x\in{\mathbb R}.$$ Using Eq.(4.3) and
elementary calculus, one sees that the second derivative of $f$ is
$$f''(x)=2\sum_i|{\rm Tr}((\rho_1-\rho_2)E_i)|^2.$$ Hence $f''(x)\geq
0$, which implies that $F(\rho,{\mathcal E})$ is convex, as desired.
\hfill$\Box$
{\bf Proposition 4.2.} {\it Let $H$ be an infinite dimensional
separable complex Hilbert space. Assume that ${\mathcal E}:
{\mathcal T}(H)\rightarrow{\mathcal T}(H)$ is a quantum channel.
Then for any given ensemble $\{p_j,\rho_j\}$, we have
$F(\sum_jp_j\rho_j,{\mathcal E})\leq \overline{F}$.}
{\bf Proof.} For any $k\in{\mathbb N}$, let
$\lambda_k=\sum_{j=1}^kp_j$. Then, by Proposition 4.1, we have
$$\begin{array}{rl}F(\sum_jp_j\rho_j,{\mathcal E})=&
F(\lambda_k(\sum_{j=1}^k\frac{p_j}{\lambda_k}\rho_j)+(1-\lambda_k)(\sum_{j=k+1}^\infty\frac{p_j}{1-\lambda_k}\rho_j),{\mathcal
E})\\
\leq& \lambda_k F(\sum_{j=1}^k\frac{p_j}{\lambda_k}\rho_j,{\mathcal
E})+(1-\lambda_k)F(\sum_{j=k+1}^\infty\frac{p_j}{1-\lambda_k}\rho_j,{\mathcal
E})\\
\leq& \lambda_k \sum_{j=1}^k\frac{p_j}{\lambda_k}F(\rho_j,{\mathcal
E})+(1-\lambda_k)F(\sum_{j=k+1}^\infty\frac{p_j}{1-\lambda_k}\rho_j,{\mathcal
E})\\
=&\sum_{j=1}^k{p_j}F(\rho_j,{\mathcal
E})+(1-\lambda_k)F(\sum_{j=k+1}^\infty\frac{p_j}{1-\lambda_k}\rho_j,{\mathcal
E}). \end{array}\eqno(4.5)$$ Note that $0\leq F(\rho,{\mathcal
E})\leq 1$ and $\lim_{k\rightarrow\infty}\lambda_k=\sum_{j=1}^\infty
p_j=1$. So
$$\lim_{k\rightarrow\infty}(1-\lambda_k)F(\sum_{j=k+1}^\infty\frac{p_j}{1-\lambda_k}\rho_j,{\mathcal
E})=0.$$ Thus, for any $\varepsilon>0$, there exists some $N$ such
that
$$(1-\lambda_k)F(\sum_{j=k+1}^\infty\frac{p_j}{1-\lambda_k}\rho_j,{\mathcal
E})<\varepsilon\eqno(4.6)$$ whenever $k>N$. It follows from
Eq.(4.5) that
$$F(\sum_jp_j\rho_j,{\mathcal E})<\sum_{j=1}^\infty{p_j}F(\rho_j,{\mathcal
E})+\varepsilon.$$ By the arbitrariness of $\varepsilon$ and
Eq.(4.4), we obtain that
$$\begin{array}{rl}F(\sum_jp_j\rho_j,{\mathcal E})\leq&\sum_{j=1}^\infty{p_j}F(\rho_j,{\mathcal
E})\\
\leq&\sum_{j=1}^\infty{p_j}F(\rho_j,{\mathcal
E}(\rho_j))^2=\overline{F},\end{array}$$ completing the proof.
\hfill$\Box$
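Propositions 4.1 and 4.2 can be probed numerically in finite dimensions (an illustration only; helper names are ours):

```python
import numpy as np

def rand_state(d, rng):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    return np.linalg.svd(psd_sqrt(sigma) @ psd_sqrt(rho), compute_uv=False).sum()

rng = np.random.default_rng(6)
d, n_kraus = 3, 4
iso, _ = np.linalg.qr(rng.normal(size=(d * n_kraus, d))
                      + 1j * rng.normal(size=(d * n_kraus, d)))
kraus = [iso[i * d:(i + 1) * d, :] for i in range(n_kraus)]
channel = lambda r: sum(E @ r @ E.conj().T for E in kraus)
ent_fid = lambda r: sum(abs(np.trace(E @ r)) ** 2 for E in kraus)   # Eq.(4.3)

p = rng.dirichlet(np.ones(5))
rhos = [rand_state(d, rng) for _ in range(5)]
rho_avg = sum(pi * r for pi, r in zip(p, rhos))

# convexity (Proposition 4.1): F(avg, E) <= average of F(rho_j, E)
assert ent_fid(rho_avg) <= sum(pi * ent_fid(r) for pi, r in zip(p, rhos)) + 1e-9

# Proposition 4.2: F(sum_j p_j rho_j, E) <= ensemble average fidelity
F_bar = sum(pi * fidelity(r, channel(r)) ** 2 for pi, r in zip(p, rhos))
assert ent_fid(rho_avg) <= F_bar + 1e-9
```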
\section{Conclusion}
In this paper we prove the infinite dimensional version of the
Uhlmann's theorem by an elementary approach, which states that the
fidelity of states $\rho$ and $\sigma$ is larger than or
equal to the absolute value of the inner product of any purifications $|\psi\rangle$ and $|\phi\rangle$ of $\rho$ and
$\sigma$,
i.e., $F(\rho,\sigma)\geq |\langle\psi|\phi\rangle|$; moreover, there exist some purifications such that the equality
holds. This allows us to generalize a large part of the results concerning
fidelity for finite dimensional systems to infinite
dimensional systems. We also
discuss the relationship between quantum fidelity and classical
fidelity and show that $F(\rho,\sigma)=\inf_{\{E_m\}}F(p_m,q_m)$.
Unlike the finite dimensional case, the infimum need not be
attained in general. We give a necessary and sufficient
condition for the infimum to be attained. Using this result, we
find that the fidelity and the trace distance are equivalent in
describing the closeness of states. The concepts of
ensemble average fidelity and entanglement fidelity for a channel are
generalized to the infinite dimensional case. The relationship
between these two fidelities is discussed.
\section{Introduction}
The aim of the "fastNLO" project is to make the inclusion of jet data
into global fits of parton density functions (PDFs) feasible.
Due to the prohibitive computing time required for the jet cross sections
using standard calculation techniques,
jet data have either been omitted in these fits completely
or they were included using a simple approximation.
The fastNLO project implements a method that offers exact and
very fast pQCD calculations
for a large number of jet data sets,
allowing one to take full advantage of their direct sensitivity
to the gluon density in the proton in future PDF fits.
This includes Tevatron jet data beyond
the inclusive jet cross section and also
HERA jet data which have
been used to determine the proton's gluon
density~\cite{Adloff:2000tq,Chekanov:2001bw,Chekanov:2002be,Chekanov:2005nn},
but which are ignored in current
PDF fits~\cite{Alekhin:2005gq,Martin:2004ir,Pumplin:2002vw}.
\section{Concept}
\subsection{Cross Sections in Perturbative QCD}
Perturbative QCD predictions for observables in
hadron-induced processes depend on the strong coupling
constant $\alpha_s$ and on the PDFs of the hadron(s).
Any cross section in hadron-hadron collisions
can be written as the convolution of
the strong coupling constant $\alpha_s$ in order $n$,
the perturbative coefficient $c_{n,i}$ for the partonic
subprocess $i$,
and the corresponding linear combination of PDFs
from the two hadrons $F_i$
which is a function of the fractional hadron momenta
$x_{a,b}$ carried by the partons
\begin{equation}
\sigma(\mu_r,\mu_f) = \sum_{n,i} \, c_{n,i}(x_a, x_b, \mu_r,\mu_f)
\otimes
\left[ \alpha_s^n(\mu_r) \cdot F_i(x_a,x_b,\mu_f) \right] \,.
\label{eq:fnmain}
\end{equation}
The PDFs and $\alpha_s$ also depend on the factorization and the
renormalization scales $\mu_{f,r}$, respectively,
as does the perturbative prediction for the cross section
in finite order $n$.
An iterative PDF fitting procedure
using exact NLO calculations for jet data,
based on Monte-Carlo integrations of~(\ref{eq:fnmain}),
is too time-consuming.
Only an approximation of~(\ref{eq:fnmain}) is, therefore,
currently being used in global PDF fits.
\subsection{A Simple Approach}
\begin{figure}[t]
\centerline{
\psfig{figure=kfactorapprox1.eps,height=4.9cm}
}
\caption{The $k$-factor for the inclusive $p\bar{p}$ jet cross section
at $\sqrt{s}=1.96$\,TeV as a function of $p_T$ at different rapidities $y$
for the total cross section (solid line) and for different
partonic subprocesses:
gluon-gluon (dashed), gluon-quark (dotted) and the sum of all
quark and/or anti-quark induced subprocesses (dashed-dotted).
\label{fig:kfactor}}
\end{figure}
The ``$k$-factor approximation''
as used in~\cite{Martin:2004ir,Pumplin:2002vw}
parameterizes higher-order corrections
for each bin of the observable by a factor
$\displaystyle k = \frac{\sigma_{\rm NLO}}{\sigma_{\rm LO}}
= \frac{\sigma_{(2)}+\sigma_{(3)}}{\sigma_{(2)}}$
computed from the contributions
with $n=2$ ($\sigma_{(2)}$) and $n=3$ ($\sigma_{(3)}$)
for a fixed PDF, averaged over all subprocesses~$i$.
In the iterative fitting procedure
only the LO cross section is computed
and multiplied with $k$ to obtain an estimate of
the NLO cross section.
This procedure does not take into account that
different partonic subprocesses can have largely
different higher-order corrections.
Fig.~\ref{fig:kfactor} shows that the $k$-factors
for quark-only and gluon-only induced subprocesses
can differ by more than $\pm20\%$ from the average.
The $\chi^2$ is therefore minimized under an incorrect assumption
of the true PDF dependence of the cross section.
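A toy illustration of this bias (the numbers below are invented for the sketch, not read off the figure): if the gluon-gluon and quark-induced subprocesses carry different $k$-factors, any shift of the subprocess mix during the PDF fit changes the true NLO/LO ratio, while the frozen, averaged $k$-factor does not.

```python
# toy k-factors for two subprocess classes (illustrative values only)
k_gg, k_qq = 1.4, 1.1

def nlo_over_lo(f_gg):
    """True NLO/LO ratio when a fraction f_gg of the LO cross
    section comes from the gluon-gluon subprocess."""
    return f_gg * k_gg + (1.0 - f_gg) * k_qq

k_frozen = nlo_over_lo(0.5)      # k-factor computed once, for the reference PDF

# during the fit the gluon density (and hence the gg fraction) moves:
f_new = 0.7
exact = nlo_over_lo(f_new)       # 1.31
approx = k_frozen                # LO (normalized to 1) times the frozen k = 1.25
bias = abs(exact - approx) / exact
assert bias > 0.03               # already a few-percent bias for a modest shift
```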
Further limitations of this approach are:
\begin{itemize}
\item
Even the LO Monte-Carlo integration of~(\ref{eq:fnmain})
is a trade-off between speed
and precision. With finite statistical errors,
however, theory predictions are not ideally smooth
functions of the fit parameters.
This contributes to numerical noise in the $\chi^2$
calculations~\cite{Pumplin:2000vx}
distorting the $\chi^2$ contour during the
PDF error analysis, especially for fit parameters
with small errors.
\item
The procedure can only be used for observables for
which LO calculations are fast.
Currently, this prevents the global PDF analyses from
using Tevatron dijet data and DIS jet data.
\end{itemize}
In a time when phenomenology is aiming towards
NNLO precision~\cite{Alekhin:2005gq,Martin:2004ir},
the $k$-factor approximation is clearly not satisfactory, both in
its limited precision and in its restrictions on usable data sets.
\subsection{The fastNLO Solution}
\begin{figure}[t]
\centerline{
\psfig{figure=subprocfrac-pp-2.eps,width=15cm}
}
\caption{Contributions of different partonic subprocesses to
the inclusive jet cross section at
RHIC (left), the Tevatron (middle) and the LHC (right)
as a function of $p_T$ and $x_T = 2 p_T/\sqrt{s}$.
The subprocess $gq \rightarrow {\rm jets}$ has been
separated into the contributions (2) and (3) where either the
quark- or the gluon momentum fraction is larger.
\label{fig:fnsubprocpp}}
\end{figure}
\begin{figure}[t]
\centerline{
\psfig{figure=pdfunc-pp.eps,width=15cm}
}
\caption{Comparison of PDF uncertainties for
the inclusive jet cross section at
RHIC (left), the Tevatron (middle) and the LHC (right).
The uncertainty band is obtained for the CTEQ6.1M
parton density functions and the results are shown
as a function of $p_T$ and $x_T = 2 p_T/\sqrt{s}$.
\label{fig:fnpdfuncpp}}
\end{figure}
A better solution is implemented in the fastNLO project.
The basic idea is to transform the convolution
in~(\ref{eq:fnmain}) into the factorized expression~(\ref{eq:fnfinal}).
Many proposals for this have been made in the past, originally
related to solving the DGLAP parton evolution equations~\cite{Pascaud:1994vx}
and later to the computation of jet cross
sections~\cite{Lobo:1996,Graudenz:1995sk,Kosower:1997vj,Wobisch:2000dk,Carli:2005ji}.
The fastNLO method is an extension of the
concepts developed for DIS jet production~\cite{Lobo:1996,Wobisch:2000dk}
which have been applied at HERA
to determine the gluon density in the proton from DIS jet data~\cite{Adloff:2000tq}.
Starting from~(\ref{eq:fnmain}), for the following discussion the
renormalization scale is set equal to the factorization scale
($\mu_{r,f}=\mu$).
The extension to $\mu_r \ne \mu_f$ is, however, trivial.
The $x$ dependence of the PDFs and the
scale dependence of $\alpha_s^n$ and the PDFs can be approximated
using an interpolation between sets of fixed values $x^{(k)}$
and $\mu^{(m)}$
($k=1, \cdots, k_{\rm max}\,;\, m =1, \cdots, m_{\rm max}$)
\begin{eqnarray}
& \alpha^n_s(\mu)& \cdot \; F_i(x_a,x_b,\mu) \; \simeq
\hskip28mm
{[{\scriptstyle \mbox{``='' is true for
$k_{\rm max}, l_{\rm max}, m_{\rm max}\rightarrow \infty $} }]}
\nonumber \\
& &
\sum_{k,l,m} \alpha^n_s(\mu^{(m)}) \cdot F_i(x_a^{(k)}, x_b^{(l)}, \mu^{(m)})
\, \cdot \, e^{(k)}(x_a) \cdot e^{(l)}(x_b) \cdot b^{(m)}(\mu)
\end{eqnarray}
where $e^{(k,l)}(x)$ and $b^{(m)}(\mu)$ are interpolation functions
for the $x$ and the $\mu$ dependence, respectively.
All information of the perturbatively calculable piece
(including phase space restrictions, jet definition, etc.\
but excluding $\alpha_s$ and the PDFs)
is fully contained in the quantity
\begin{equation}
\tilde{\sigma}_{n,i,k,l,m}(\mu) =
c_{n,i}(x_a, x_b, \mu) \otimes
\left[ e^{(k)}(x_a) \cdot e^{(l)}(x_b) \cdot b^{(m)}(\mu) \right] \, .
\label{eq:sigmatilde}
\end{equation}
In the final prediction for the cross section
the convolution in~(\ref{eq:fnmain}) is then reduced
to a simple product
\begin{equation}
\sigma(\mu) \, \simeq \sum_{n,i,k,l,m}
\tilde{\sigma}_{n,i,k,l,m}(\mu) \, \cdot \,
\alpha^n_s(\mu^{(m)}) \cdot
F_i(x_a^{(k)}, x_b^{(l)}, \mu^{(m)}) \, .
\label{eq:fnfinal}
\end{equation}
The time-consuming step involving the calculation of the universal
(PDF and $\alpha_s$ independent) $\tilde\sigma$
is therefore factorized and needs to be done only once.
Any further calculation of the pQCD prediction
for arbitrary PDFs and $\alpha_s$ values can later
be done very fast by computing the simple sum of products
in~(\ref{eq:fnfinal}).
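A minimal sketch of how the factorized sum above is evaluated in practice (toy grid sizes and random numbers standing in for the real tabulated coefficients): the expensive, PDF- and $\alpha_s$-independent $\tilde\sigma$ table is filled once, and every later evaluation for a new PDF set or $\alpha_s$ value reduces to a cheap tensor contraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy grid: n_sub subprocesses, kmax x-nodes per hadron, mmax scale nodes
n_sub, kmax, mmax = 3, 10, 4

# sigma_tilde stands in for the tabulated coefficients: computed once in the
# expensive Monte-Carlo step, independent of the PDFs and of alpha_s
sigma_tilde = rng.random((n_sub, kmax, kmax, mmax))

def cross_section(pdf_nodes, alpha_s_n):
    """pdf_nodes[i,k,l,m] = F_i(x_a^(k), x_b^(l), mu^(m));
    alpha_s_n[m] = alpha_s^n(mu^(m)).  One cheap contraction per call."""
    return np.einsum('iklm,iklm,m->', sigma_tilde, pdf_nodes, alpha_s_n)

pdf_a = rng.random((n_sub, kmax, kmax, mmax))
a_s = 0.118 ** 2 * np.ones(mmax)

xs = cross_section(pdf_a, a_s)
# the prediction is linear in the tabulated PDF combination,
# as the factorization demands:
assert np.isclose(cross_section(2.0 * pdf_a, a_s), 2.0 * xs)
```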
While the extension of the method from one
initial-state hadron~\cite{Wobisch:2000dk}
to two hadrons was conceptually trivial, the two-hadron case
requires additional effort to improve the efficiency
and precision of the interpolation.
Both the efficiency and the precision are directly related to the
choice of the points
$x^{(k,l)}$, $\mu^{(m)}$ and the
interpolation functions $e(x)$, $b(\mu)$.
The implementation in
fastNLO achieves a precision of better than $0.1\%$
for $k_{\rm max},l_{\rm max} =10$ and $m_{\rm max}\le 4$.
Computation times for cross sections in fastNLO are roughly
40-200\,$\mu$s per order $\alpha_s$ (depending on
$m_{\rm max}$).
Further details are given in Ref.~\cite{fastnlo}.
The $\tilde{\sigma}$ in~(\ref{eq:sigmatilde}) are computed using
{\tt NLOJET++}~\cite{Nagy:2003tz,Nagy:2001fj}.
A unique feature in fastNLO is the inclusion of the $O(\alpha_s^4)$
threshold correction terms to the
inclusive jet cross section~\cite{Kidonakis:2000gi},
a first step towards a full NNLO calculation.
\section{Results}
\begin{figure}[!h]
\centerline{
\psfig{figure=alljets.eps,width=14.3cm}
}
\caption{An overview of data over theory ratios for
inclusive jet cross sections, measured
in different processes at different center-of-mass energies.
The data are compared to calculations obtained by fastNLO
in NLO precision (for DIS data) and including
${\cal O}(\alpha_s^4)$ threshold corrections (for $p\bar{p}$ data).
The inner error bars represent the statistical errors and the
outer error bars correspond to the quadratic sum of all
experimental uncertainties.
In all cases the perturbative predictions have been
corrected for non-perturbative effects.
\label{fig:fnresults}}
\end{figure}
\begin{figure}[!h]
\centerline{
\psfig{figure=ps-incl.eps,width=11cm}
}
\caption{The phase space in $x$ and $p_T$
covered by the data sets shown in the previous figure.
\label{fig:fnresults2}}
\end{figure}
Calculations by fastNLO
are available at {\tt http://hepforge.cedar.ac.uk/fastnlo}
for a large set of (published, ongoing, or planned)
jet cross section measurements at
HERA, RHIC, the Tevatron, and the LHC
(either online or as computer code for inclusion in PDF fits).
Some fastNLO results for the inclusive jet cross section
in different reactions are shown in this section.
The contributions from different partonic subprocesses
to the central inclusive jet cross section
are compared in Fig.~\ref{fig:fnsubprocpp} for different
colliders:
For $pp$ collisions at RHIC and the LHC,
and for $p\bar{p}$ scattering at Tevatron Run II energies.
It is seen that the quark-induced subprocesses are dominated
by the valence quarks:
in proton-proton collisions (RHIC, LHC)
the quark-quark subprocesses (4,5) give much larger
contributions than the quark-antiquark subprocesses (6,7),
while exactly the opposite is true for proton-antiproton collisions
at the Tevatron.
The contribution from gluon-induced subprocesses is
significant at all colliders over the whole $p_T$ ranges.
It is interesting to note that at fixed $x_T = 2 p_T/\sqrt{s}$
the gluon contributions are largest at RHIC.
Here, the jet cross section at
$x_T = 0.5$ still receives a $55\%$
contribution from gluon-induced subprocesses,
compared to only $35\%$ at the Tevatron and $38\%$ at the LHC.
As shown in Fig.~\ref{fig:fnpdfuncpp}, this results in much larger
PDF uncertainties for the high $x_T$ inclusive jet cross section
at RHIC, as compared to the Tevatron and the LHC for which
PDF uncertainties are roughly
of the same size (at the same $x_T$).
The PDF sensitivity at fixed $x_T$ is therefore
about the same at the Tevatron and at the LHC,
while it is much higher at RHIC.
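Since these comparisons are made at fixed $x_T = 2p_T/\sqrt{s}$, it may help to see which jet $p_T$ a given $x_T$ corresponds to at each machine. The following is a minimal sketch (the center-of-mass energies are assumed here: 200\,GeV for RHIC, 1.96\,TeV for Tevatron Run II, and the 14\,TeV LHC design energy; they are not all stated in the text):

```python
# Scaling variable x_T = 2*pT/sqrt(s): the same x_T probes very different
# jet pT at different colliders.  Collider energies are assumptions of
# this sketch (GeV): RHIC 200, Tevatron Run II 1960, LHC design 14000.
SQRT_S = {"RHIC": 200.0, "Tevatron": 1960.0, "LHC": 14000.0}

def x_T(pT, sqrt_s):
    """Jet transverse-momentum scaling variable x_T = 2 pT / sqrt(s)."""
    return 2.0 * pT / sqrt_s

def pT_at_xT(xT, sqrt_s):
    """Jet pT corresponding to a given x_T."""
    return 0.5 * xT * sqrt_s

for name, s in SQRT_S.items():
    print(f"x_T = 0.5 at {name}: pT = {pT_at_xT(0.5, s):.0f} GeV")
```

For example, $x_T=0.5$ corresponds to $p_T = 50$\,GeV jets at RHIC but multi-hundred-GeV jets at the hadron colliders, which is why the same $x_T$ reach implies very different experimental conditions.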
An overview of published measurements of the inclusive
jet cross section in different reactions and at different
center-of-mass energies is given in Fig.~\ref{fig:fnresults}.
The results are shown as ratios of data over theory.
The theory calculations include the best available
perturbative predictions (NLO for DIS data and NLO +
${\cal O}(\alpha_s^4)$ threshold corrections for $p\bar{p}$ data)
which have been corrected for non-perturbative effects.
Over the whole covered range of $8 < p_T < 700$\,GeV,
jet data in DIS and $p\bar{p}$ collisions are well described
by the theory predictions using
CTEQ6.1M PDFs~\cite{Pumplin:2002vw}.
The phase space in $x$ and $p_T$ covered
by these measurements is shown in Fig.~\ref{fig:fnresults2},
demonstrating what can be gained by using fastNLO
to include these data sets in future PDF fits.
A first study using fastNLO on the future potential
of LHC jet data has been published in Ref.~\cite{cmsptdrv2}.
\section{Introduction}
It has been known for a long time that two-nucleon ($N\!N$) scattering
at very low energies can be described by the effective range
expansion~\cite{Bethe:1949yr}, from which deuteron and $^1S_0$ virtual state
properties emerge without information about details of the strong
interaction~\cite{BethePeierls:1935}. At these energies effects from
pion-exchange physics cannot be resolved. It is therefore possible to describe
such a system using only nonrelativistic nucleons as degrees of freedom that
interact via short-range (contact)
forces~\cite{Bedaque:1997qi,vanKolck:1997ut,Kaplan:1998tg,Bedaque:1998mb,
Kaplan:1998we,Birse:1998dk,vanKolck:1998bw,Chen:1999tn,Bedaque:1999vb,
Gabbiani:1999yv}. The systematic approach to implement this procedure, known as
\emph{pionless effective field theory} (pionless EFT), is based on the
experimental fact that the $N\!N$ $S$-wave scattering lengths are much larger
than the corresponding effective ranges, so that a nonperturbative resummation
of non-derivative two-body contact interactions is required at leading order
(LO) to reproduce the shallow $N\!N$ bound and virtual states.
The extension of these ideas to the triton ($^3$H) and helion ($^3$He) was not
immediate because a system of three nucleons collapses under the sole effect of
attractive, non-derivative contact forces~\cite{Thomas:1935zz}. The solution
within pionless EFT is the existence of a non-derivative three-body
interaction~\cite{Bedaque:1999ve,Hammer:2000nf,Hammer:2001gh,Bedaque:2002yg,
Afnan:2003bs,Griesshammer:2005ga} at LO. This force provides not only
saturation, but also a three-body parameter which correlates certain observables
such as the triton binding energy and the doublet neutron-deuteron ($nd$)
scattering length (``Phillips line''~\cite{Phillips:1968zze}). This framework
recovers Efimov's universal approach to the three-nucleon
problem~\cite{Efimov:1970zz,Efimov:1981aa,Hammer:2010kp}. With recent progress,
pionless EFT also allows for an elegant and efficient fully perturbative
treatment of contributions beyond
LO~\cite{Hammer:2001gh,Vanasse:2013sda,Konig:2013cia,Vanasse:2014kxa}.
An important question is how far up the nuclear chart this EFT applies. Nuclear
density tends to increase with nucleon number $A$, implying larger typical
nucleon momenta within the nucleus. Calculations suggest that pionless EFT
holds for
$A=4$~\cite{Platter:2004zs,Kirscher:2009aj,Kirscher:2011uc,Kirscher:2015ana}
and perhaps up to $A=6$ systems~\cite{Kirscher:2015ana,Stetcu:2006ey} without a
four-body force at LO. As the pion mass increases, its range of applicability
increases, and this framework has been established as a powerful tool that can
be used to analyze and supplement calculations of light nuclei directly from
lattice QCD~\cite{Barnea:2013uqa,Beane:2015yha,Kirscher:2015yda}. (See
Refs.~\cite{Braaten:2003eu,Epelbaum:2006jc,Hammer:2007kq} for earlier work on
pionless EFT for unphysical quark masses using input from chiral potentials.)
The application of EFT to nuclei requires an understanding of the importance of
Coulomb and other electromagnetic effects. The long-range nature of the
Coulomb force implies that it becomes dominant at very low energies, \textit{i.e.},
precisely where the EFT is supposed to work best. To describe scattering in
this regime, one thus has to resum Coulomb effects to all orders to recover the
Coulomb-modified effective-range expansion. In the proton-proton ($pp$) sector
of the pionless theory this was first carried out by Kong and
Ravndal~\cite{Kong:1998sx,Kong:1999sf}, with subsequent discussions of the
charged two-body sector given in
Refs.~\cite{Kong:2000px,Butler:2001jj,Barford:2002je,Ando:2007fh,Ando:2008va,
Ando:2008jb}. An early attempt at $pd$ scattering was made in
Ref.~\cite{Rupak:2001ci}, and extended to lower center-of-mass energies in
Ref.~\cite{Konig:2011yq}. A calculation of the Coulomb-modified $pd$
scattering length, with the consistent use of a Yukawa-screened Coulomb
potential in momentum space, has been presented in Ref.~\cite{Konig:2013cia}.
The \ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace binding energy difference has been studied using pionless EFT
in a number of papers. Systems of three and four nucleons including the
Coulomb interaction as part of an effective potential were first analyzed by
Kirscher~\textit{et al.}\xspace using the resonating group
method~\cite{Kirscher:2009aj,Kirscher:2011uc,Kirscher:2011zn,Kirscher:2015ana}.
Ando and Birse~\cite{Ando:2010wq} carried out a momentum-space LO calculation
which was nonperturbative both in the sense that it looked for the bound state
as a pole in the $pd$ doublet-channel amplitude, as well as in the
electromagnetic sector, where Coulomb effects were included through a fully
off-shell Coulomb T-matrix, using methods developed by Kok~\textit{et al.}\xspace\ in
Refs.~\cite{Kok:1979aa,Kok:1981aa}. A subset of the present authors presented
a calculation of $pd$ scattering and the bound-state regime in
Refs.~\cite{Konig:2011yq,Koenig:2013,Hammer:2014rba}.
Of these, Refs.~\cite{Konig:2011yq,Koenig:2013} included a perturbative
framework using trinucleon wave functions. An updated version of this
calculation that corrects some issues of the previous approach has recently been
given in Ref.~\cite{Konig:2014ufa}. That work also includes a nonperturbative
treatment, in which all $\mathcal{O}(\alpha)$ Coulomb diagrams are resummed to all
orders. This calculation was similar to that of Ref.~\cite{Ando:2010wq} but
found that the full Coulomb T-matrix is not necessary for an accurate
description of the \ensuremath{{}^3\mathrm{He}}\xspace bound state.
At LO in the strong interactions, the perturbative and nonperturbative results
of Ref.~\cite{Konig:2014ufa} were found to agree with each other (as well
as with the experimental value for the \ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace binding energy difference)
to within roughly 30\%. While at first sight this seems fine if one keeps in
mind the expected uncertainty based on the EFT expansion, the relevant
parameter in this case is actually not the $Q/\Lambda_{\slashed\pi}$ of the strong sector
(with the typical low-momentum scale set by the deuteron binding momentum,
$Q\sim\gamma_d \simeq 45~\ensuremath{\mathrm{MeV}}$ and the breakdown scale $\Lambda_{\slashed\pi}\sim M_\pi\simeq
140~\ensuremath{\mathrm{MeV}}$), but rather the $\alpha M_N/Q$ scale (with $M_N\simeq940~\ensuremath{\mathrm{MeV}}$)
set by the Coulomb contributions. Since in the bound-state regime the momentum
scale is set by the trinucleon binding momentum, $\gamma_T \sim 80~\ensuremath{\mathrm{MeV}}$,
Coulomb effects are expected to be a small perturbation. Based on this, one
should expect better agreement between perturbative and nonperturbative
calculations.
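The relative sizes of these two expansion parameters can be checked with a rough numerical sketch (the values of $\alpha$ and $M_N$ are assumed standard inputs; the momentum scales are those quoted in the text):

```python
# Order-of-magnitude comparison of the two expansion parameters discussed
# in the text (all values in MeV; alpha and M_N are assumed inputs).
ALPHA = 1.0 / 137.036    # fine-structure constant
M_N = 939.0              # nucleon mass
GAMMA_D = 45.0           # deuteron binding momentum (scattering regime)
GAMMA_T = 80.0           # trinucleon binding momentum (bound-state regime)
LAMBDA_PI = 140.0        # breakdown scale ~ M_pi

strong_ratio = GAMMA_D / LAMBDA_PI     # Q/Lambda of the strong sector, ~0.32
coulomb_at_d = ALPHA * M_N / GAMMA_D   # Coulomb parameter at Q ~ gamma_d, ~0.15
coulomb_at_T = ALPHA * M_N / GAMMA_T   # Coulomb parameter at Q ~ gamma_T, ~0.09
```

At the trinucleon binding momentum the Coulomb parameter is below $10\%$, which quantifies why Coulomb effects should be perturbative in the bound-state regime.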
The purpose of this work is to investigate a rearrangement of pionless EFT that
explores the role of the increasing nuclear binding momentum in the simplest
context, the trinucleon systems. We develop a new formulation to include
Coulomb and other electromagnetic effects in nuclear ground states using
perturbation theory. There are two important ingredients to this procedure.
First, we introduce a new counting scheme that takes as LO the spin-singlet
channel in the unitarity limit (infinite scattering length) and only includes
the finite ${}^1S_0$ scattering length as a perturbative correction. We will
show with the \ensuremath{{}^3\mathrm{H}}\xspace binding energy and with doublet $nd$ scattering phase
shifts that deviations from ${}^1S_0$ unitarity are indeed small.
In the case of $pp$ scattering at very low energies, the scattering-length term
is iterated along with Coulomb effects so that renormalization can be carried
out by matching to the Coulomb-modified effective-range expansion.
This is essentially what was introduced in Ref.~\cite{Kong:1999sf}, but here
we isolate the contribution from the divergent single-photon bubble, which was
missed in the perturbative calculation of Ref.~\cite{Konig:2014ufa}.
This new scheme is the second ingredient which then allows us to use the
counterterm fixed by $pp$ scattering also in the perturbative calculation of the
\ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace binding energy difference at next-to-leading order (NLO) in the new
counting. The result differs from that of the nonperturbative calculation by
much less than the 30\% previously obtained, and is slightly closer to the
experimental value of the binding energy difference. At NLO we end up about
1.5\% off the \ensuremath{{}^3\mathrm{He}}\xspace binding energy.
Besides finally obtaining a complete calculation of the energy splitting that
includes Coulomb effects only as perturbative corrections, we also employ a
fully perturbative treatment of corrections in the strong sector. This was
already done in Ref.~\cite{Vanasse:2014kxa}, which showed that a new,
isospin-breaking three-body counterterm is needed for renormalization in the
presence of range corrections when two-body Coulomb effects are resummed to all
orders. A further motivation for the current paper is to revisit this issue
with a completely perturbative inclusion of those contributions as
well.\footnote{The viability of perturbative Coulomb exchange for the
trinucleon system was also investigated in independent
work~\cite{Kirscher:2015zoa}, which appeared shortly after submission of our
manuscript.}
This paper is structured as follows. In Sec.~\ref{sec:Formalism} we discuss
the basic formalism of pionless EFT with explicit isospin breaking. The
two-body sector and in particular our new method to treat the $pp$ channel is
described in detail in Sec.~\ref{sec:TwoBody}. In Sec.~\ref{sec:ThreeBody} we
discuss the implications for the three-body sector. The new results for the
\ensuremath{{}^3\mathrm{H}}\xspace and \ensuremath{{}^3\mathrm{He}}\xspace binding energies are presented in Sec.~\ref{sec:Results}.
We conclude in Sec.~\ref{sec:Conclusion} and provide technical details about
the divergent Coulomb-bubble diagram in the Appendix.
\section{Effective Lagrangian}
\label{sec:Formalism}
We are interested in describing nuclear bound states in terms of the most
general dynamics consistent with QCD symmetries built from a nucleon field $N$
of mass $M_N$. We use a modified and extended version of the notation and
conventions in Refs.~\cite{Koenig:2013,Konig:2014ufa}. To NLO we write the
Lagrangian as
\begin{equation}
\mathcal{L} = N^\dagger\left(\mathrm{i} D_0+\frac{\boldsymbol{D}^2}{2M_N}\right)N
+\mathcal{L}_\mathrm{2d}+\mathcal{L}_\mathrm{2t}+\mathcal{L}_3
\,,
\label{eq:L-Nd}
\end{equation}
where $D_\mu = \partial_\mu + \mathrm{i} eA_\mu \hat{Q}_N$ includes the direct nucleon
coupling to the electromagnetic field via the charge operator
$\hat{Q}_N=(1+\tau_3)/2$.
It is convenient to express the interaction terms via auxiliary dibaryon
fields $d^i$ and $t^A$ in the $N\!N$ channels projected with
\begin{equation}
P^i_d = \sigma^2\sigma^i\tau^2 / \sqrt8 \mathtext{,}
P^A_t = \sigma^2\tau^2\tau^A/\sqrt8 \,,
\end{equation}
respectively, where lowercase (uppercase) letters are spin-$1$ (isospin-$1$)
indices. While $\hat{Q}_N$ ensures that Coulomb photons couple only to
protons, the isospin-$1$ basis used for the spin-singlet field $t^A$ otherwise
mixes contributions from protons and neutrons. If we take $A=1,2,3$ to be an
index in the Cartesian basis, then the $np$ configurations are completely
contained in the $A=3$ component, whereas $nn$ and $pp$ are obtained from
the linear combinations $1\pm\mathrm{i}\,2$. This corresponds to using a spherical basis
$\tilde{A} = -1,0,1$, where one would immediately have the desired separation.
We thus define new projectors
\begin{equation}
\tilde{P}^{-1}_t = \frac{1}{\sqrt{2}}\left(P_t^1-\mathrm{i} P_t^2\right)
\mathtext{,}
\tilde{P}^{0}_t = P_t^3
\mathtext{,}
\tilde{P}^{+1}_t = {-}\frac{1}{\sqrt{2}}\left(P_t^1+\mathrm{i} P_t^2\right)
\end{equation}
that correspond to $pp$, $np$ and $nn$ configurations, respectively. Since the
transformation from $P^A_t$ to $\tilde{P}^{\tilde{A}}_t$ is a unitary rotation,
the normalization is automatically correct:
\begin{equation}
\mathrm{Tr}\left((\tilde{P}^{\tilde{A}}_t)^\dagger
\tilde{P}^{\tilde{B}}_t\right)
= \frac12 \delta_{\tilde{A}\tilde{B}} \,.
\end{equation}
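This normalization can be verified numerically by building the projectors as explicit $4\times4$ matrices. The following self-contained sketch represents two-nucleon spin--isospin operators as Kronecker products (the matrix representation spin${}\otimes{}$isospin is a choice of the sketch, not of the text):

```python
# Pure-Python check that the Cartesian projectors P_t^A = s2 t2 t^A / sqrt(8)
# and their spherical combinations all satisfy Tr(P^dag P) = delta/2,
# since the change of basis is a unitary rotation.
import math

PAULI = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(a, b):  # a, b are 2x2 -> 4x4 Kronecker product
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def dagger(m):
    n = len(m)
    return [[complex(m[j][i]).conjugate() for j in range(n)] for i in range(n)]

def scale(c, m):
    return [[c * x for x in row] for row in m]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def overlap(x, y):
    return trace(matmul(dagger(x), y))

# Cartesian projectors: sigma^2 in spin space, tau^2 tau^A in isospin space
sigma2 = PAULI[1]
P_cart = [scale(1.0 / math.sqrt(8.0), kron(sigma2, matmul(PAULI[1], tau)))
          for tau in PAULI]

# Spherical combinations (pp, np, nn)
P_sph = {
    -1: scale(1.0 / math.sqrt(2.0), add(P_cart[0], scale(-1j, P_cart[1]))),
     0: P_cart[2],
    +1: scale(-1.0 / math.sqrt(2.0), add(P_cart[0], scale(1j, P_cart[1]))),
}
```

Both bases come out orthonormal in the sense of the trace condition above.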
In terms of these fields and projectors, the two-body interactions are
\begin{equation}
\mathcal{L}_\mathrm{2d} = -d^{i\dagger}\left[\sigma_d
+ c_d\left(\mathrm{i} D_0+\frac{\boldsymbol{D}^2}{4M_N}\right)\right]d^i
+ y_d\left[d^{i\dagger}\left(N^T P^i_d N\right)+\mathrm{h.c.}\right]
\label{eq:L-3S1}
\end{equation}
and
\begin{multline}
\mathcal{L}_\mathrm{2t} = -t^{0\dagger}\left[\sigma_t
+ c_t\left(\mathrm{i} D_0+\frac{\boldsymbol{D}^2}{4M_N}\right)\right]t^0
- t^{{-1}\dagger}\left[\sigma_{t,pp}
+ c_{t,pp}\left(\mathrm{i} D_0+\frac{\boldsymbol{D}^2}{4M_N}\right)\right]t^{-1}\\
- t^{{+1}\dagger}\left[\sigma_{t,nn}
+ c_{t,nn}\left(\mathrm{i} D_0+\frac{\boldsymbol{D}^2}{4M_N}\right)\right]t^{+1}
+y_t\left[t^{{\tilde A}\dagger}
\left(N^T \tilde{P}^{\tilde A}_t N\right)+\mathrm{h.c.}\right] \, .
\label{eq:L-1S0}
\end{multline}
The covariant derivatives include the appropriate charge operators $\hat{Q}$.
To keep the notation as simple as possible, we use the plain subscript ``$t$''
to refer to the $np$ channel from here on and use further qualifications only
to denote $pp$ and $nn$ (the latter is not explicitly considered in the rest of
this paper, except at the end of Sec.~\ref{sec:Results}). The parameters
$\sigma_{d}$ and $\sigma_{t(,\cdot\cdot)}$ are related to the respective
scattering lengths; each is actually a sum of contributions from various orders:
\begin{align}
\sigma_d^{\phantom{()}}
&= \sigma_d^{(0)} + \sigma_d^{(1)} + \cdots \,, \\
\sigma_{t(,\cdot\cdot)}^{\phantom{(01)}}
&= \sigma_{t}^{(0)} + \sigma_{t(,\cdot\cdot)}^{(1)} + \cdots \,.
\label{eq:sigmas}
\end{align}
A more detailed discussion is given in Ref.~\cite{Konig:2014ufa}. Departing
from what is used there we follow here Ref.~\cite{Griesshammer:2004pe} and
simply set
\begin{equation}
y_d^2 = y_t^2 = \frac{4\pi}{M_N} \,.
\label{eq:simple-y}
\end{equation}
At the same time, the new parameters $c_d$ and $c_{t(,\cdot\cdot)}$
have been introduced to incorporate effective-range corrections,
\textit{cf.}\xspace~Fig.~\ref{fig:Corr-rd-rt}, starting at NLO:
\begin{align}
c_d^{\phantom{(1)}} &= c_d^{(1)} + \cdots \,, \\
c_{t(,\cdot\cdot)} &= c_{t}^{(1)} + \cdots \,.
\label{eq:cs}
\end{align}
In writing Eqs.~\eqref{eq:sigmas} and \eqref{eq:cs} we imposed isospin symmetry
at the lowest order of each parameter. The reason is that isospin violation
comes from either electromagnetism or the up-down quark mass splitting. These
are associated with mass scales $\alpha M_N \sim 7$ MeV and $m_u-m_d\sim 3$ MeV
that are small compared to the breakdown scale $\Lambda_{\slashed\pi}$.
The three-nucleon interaction that is needed already at LO to
renormalize the doublet-channel amplitude~\cite{Bedaque:1999ve} can be written
as~\cite{Ando:2010wq,Griesshammer:2011md}
\begin{equation}
\mathcal{L}_3=\frac{h}{3}N^\dagger\left[y_d^2\,
d^{i\dagger} d^j \sigma^i \sigma^j+y_t^2\,t^{A\dagger} t^B \tau^A\tau^B
- y_dy_t\left(d^{i\dagger} t^A \sigma^i \tau^A + \mathrm{h.c.}\right) \right]N \,,
\label{eq:L-3}
\end{equation}
where the coupling $h$ is also split up in various orders,
\begin{equation}
h = h^{(0)} + h^{(1)} + \cdots \,.
\end{equation}
As we are going to show below, there is no need for an isospin-breaking
three-body force to NLO in our expansion. Higher-order terms, including further
isospin violation, are briefly discussed in Sec.~\ref{sec:Results}.
\section{Two-body sector}
\label{sec:TwoBody}
In this section we use the Lagrangian from the previous section to derive the
two-body propagators and amplitudes which are basic ingredients of the
three-body calculation described in the next section.
\subsection{Spin-triplet propagator}
\label{sec:SpinTriplet}
The dibaryon residual mass $\sigma_d$ represents the physics of the triplet
$N\!N$ scattering length or, alternatively, the deuteron binding momentum
$\gamma_d$. For momenta $Q\sim \gamma_d\ll \Lambda_{\slashed\pi}$, the standard power
counting of pionless EFT
applies~\cite{vanKolck:1997ut,Kaplan:1998tg,Kaplan:1998we,vanKolck:1998bw}.
The bare LO dibaryon propagator is simply $-\mathrm{i}/\sigma_d^{(0)}$, and it has
to be dressed by nucleon bubbles to all orders in order to get the full LO
expression. Summing up the geometric series shown diagrammatically in
Fig.~\ref{fig:DibaryonProp}(a) gives
\begin{equation}
\mathrm{i}\Delta_{d}^{(0)}(p_0,\mathbf{p})
= \frac{-\mathrm{i}}{\sigma_{d}^{(0)} + y_{d}^2 I_0(p_0,\mathbf{p})} \,,
\end{equation}
with the bubble integral
\begin{multline}
I_0(p_0,\mathbf{p}) = M_N\int^\Lambda\dq{q}
\frac{1}{M_N p_0 - \mathbf{p}^2/4 - \mathbf{q}^2 + \mathrm{i}\varepsilon} \\
= {-}\frac{M_N}{4\pi}
\left(\frac{2\Lambda}{\pi} - \sqrt{\frac{\mathbf{p}^2}{4}-M_N p_0-\mathrm{i}\varepsilon}\right)
+\mathcal{O}(1/\Lambda) \,.
\label{eq:I0-cutoff}
\end{multline}
Here, we have used a simple momentum cutoff $\Lambda$ for regularization (not
to be confused with the physical scale $\Lambda_{\slashed\pi}$ at which pionless EFT breaks
down because new dynamical degrees of freedom are resolved). To get
expressions in the more commonly used power divergence subtraction (PDS)
scheme~\cite{Kaplan:1998tg}, one has to replace $2\Lambda/\pi \rightarrow \mu$,
where $\mu$ is a scale introduced through dimensional regularization. Of
course, in a properly renormalized theory the choice of regulator is arbitrary.
While power counting is clean and transparent in the PDS scheme, we find it
convenient here to work with the simple cutoff instead, because this regulator
is more straightforward when it comes to the consistent treatment of Coulomb
contributions. For completeness we note that Eq.~\eqref{eq:I0-cutoff} is valid
up to corrections proportional to inverse powers of the cutoff, which we
neglect here.
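The cutoff behavior of Eq.~\eqref{eq:I0-cutoff} is easy to verify numerically. The sketch below evaluates the radial form of the bubble integral for $\mathbf{p}=0$ and $M_N p_0 = -\kappa^2$ with a Simpson rule and compares it to the analytic large-cutoff expression (the nucleon mass value is an assumed input; units are MeV):

```python
# Numerical check of Eq. (I0-cutoff): the regulated bubble integral
# approaches -(M_N/4pi)(2 Lambda/pi - kappa) up to O(1/Lambda) terms.
import math

M_N = 938.92  # MeV (assumed nucleon mass)

def I0_numeric(kappa, lam, n=20000):
    """Radial integral -M_N/(2 pi^2) int_0^Lam q^2/(q^2 + kappa^2) dq,
    composite Simpson rule (p = 0, M_N p0 = -kappa^2)."""
    h = lam / n
    f = lambda q: q * q / (q * q + kappa * kappa)
    s = f(0.0) + f(lam)
    s += 4.0 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * i * h) for i in range(1, n // 2))
    return -M_N / (2.0 * math.pi ** 2) * s * h / 3.0

def I0_formula(kappa, lam):
    """Leading large-cutoff behavior from Eq. (I0-cutoff)."""
    return -M_N / (4.0 * math.pi) * (2.0 * lam / math.pi - kappa)
```

Doubling the cutoff halves the residual difference, consistent with the neglected $\mathcal{O}(1/\Lambda)$ terms.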
\begin{figure}[tb]
\centering
\includegraphics[clip]{DibaryonProp}
\caption{Full dibaryon propagators in (a) the $^3S_1$ state (\textit{i.e.}, the deuteron)
and (b) the $^1S_0$ state. A single solid line represents the propagation
of a nucleon, a double dashed line a bare $d$ propagator, and a line of
circles a bare $t$ propagator.}
\label{fig:DibaryonProp}
\end{figure}
Range corrections are accounted for by the dibaryon kinetic parameter $c_d$,
which is included fully perturbatively here. At NLO, we consider one insertion
of the operator shown in Fig.~\ref{fig:Corr-rd-rt}(a) between $\Delta_d^{(0)}$s.
Depending on the renormalization procedure, the insertion of $c_d^{(1)}$ might
require a concomitant insertion of $\sigma_d^{(1)}$,
\begin{equation}
\mathrm{i}\Delta_d^{(1)}(p_0,\mathbf{p})
= \mathrm{i}\Delta_d^{(0)}(p_0,\mathbf{p})
\times
\left[{-\mathrm{i}}{\sigma_d^{(1)}}{-\mathrm{i}}c_d^{(1)}
\left(p_0-\frac{\mathbf{p}^2}{4M_N}\right) \right]
\times \mathrm{i}\Delta_d^{(0)}(p_0,\mathbf{p}) \,.
\label{eq:Delta-d-1}
\end{equation}
\begin{figure}[tb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\vcenteredhbox{\includegraphics[width=5.5em,clip]{Corr-rd}}
\vcenteredhbox{\scalebox{1}{$\;\sim\;
(-\mathrm{i}c_d)\left(p_0-\frac{\mathbf{p}^2}{4M_N}\right)$}}\\[0.77em]
(a)
\end{minipage}\begin{minipage}{0.49\textwidth}
\centering
\vcenteredhbox{\includegraphics[width=5.5em,clip]{Corr-rt}}
\vcenteredhbox{\scalebox{1}{$\;\sim\;
(-\mathrm{i}c_t)\left(p_0-\frac{\mathbf{p}^2}{4M_N}\right)$}}\\[0.77em]
(b)
\end{minipage}
\caption{Dibaryon kinetic-energy corrections in (a) the $^3S_1$ state
and (b) the $^1S_0$ state.}
\label{fig:Corr-rd-rt}
\end{figure}
In the spin-triplet channel, we use the effective range expansion around the
deuteron pole for renormalization and thus require that
\begin{equation}
-\mathrm{i} T(k)
= (\mathrm{i} y_d)^2\, \mathrm{i}\Delta_{d}\!\left(p_0=k^2/M_N,\mathbf{p}=\mathbf{0}\right)
= \mathrm{i} \frac{4\pi}{M_N}\frac{1}{k\cot\delta_{d}(k) - \mathrm{i} k}\,,
\label{eq:Tnd}
\end{equation}
with the perturbative expansion
\begin{multline}
\frac{1}{k\cot\delta_{d}(k) - \mathrm{i} k}
= \frac{1}{\rule{0pt}{1.33em}{-}\gamma_d + \dfrac{\rho_d}2\big(k^2+\gamma_d^2\big)
+ \cdots - \mathrm{i} k}
= \frac{1}{{-}\gamma_d - \mathrm{i} k}
\left[1 + \frac{\rho_d}{2}\frac{k^2+\gamma_d^2}{\gamma_d+\mathrm{i} k}
+ \cdots\right] \, ,
\label{eq:Delta-d-renorm}
\end{multline}
where $\gamma_d = \sqrt{\mathstrut M_N E_d}\simeq 45.7$ MeV~\cite{vanderLeun:1982aa}
is the deuteron binding momentum and $\rho_d\simeq 1.765$
fm~\cite{deSwart:1995ui} the deuteron effective range.
At LO, the effective range $\rho_d\sim 1/\Lambda_{\slashed\pi}$ does not contribute and
one simply has
\begin{equation}
\sigma_d^{(0)} = \frac{2\Lambda}{\pi}-\gamma_d \,.
\label{eq:sigmad0}
\end{equation}
The corresponding propagator $\Delta_d^{(0)}$ is given by the term outside the
parentheses in Eq.~\eqref{eq:Delta-d-renorm}, up to an additional minus sign.
The corrections in the parentheses come from subleading orders. The first
of these are $\mathcal{O}(Q/\Lambda_{\slashed\pi}, \gamma_d/\Lambda_{\slashed\pi})$, or NLO, and
\begin{equation}
\sigma_d^{(1)} = \frac{\rho_d\gamma_d^2}{2} \mathtext{,}
c_d^{(1)} = \frac{M_N\rho_d}{2} \,.
\label{eq:sigmad1}
\end{equation}
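Numerically, these parameters are straightforward to reproduce. A short sketch (the deuteron binding energy, nucleon mass, and $\hbar c$ used for unit conversion are assumed standard values, not quantities fixed by the text):

```python
# Numerical check of the deuteron-channel parameters: gamma_d = sqrt(M_N E_d)
# and the NLO couplings sigma_d^(1) = rho_d gamma_d^2 / 2, c_d^(1) = M_N rho_d / 2.
# E_d, M_N and hbar*c are assumed inputs of this sketch.
import math

HBARC = 197.327          # MeV fm
M_N = 938.92             # MeV
E_D = 2.2246             # MeV, deuteron binding energy
RHO_D = 1.765 / HBARC    # deuteron effective range in MeV^-1

gamma_d = math.sqrt(M_N * E_D)           # ~45.7 MeV, as quoted in the text
sigma_d_1 = RHO_D * gamma_d ** 2 / 2.0   # ~9.3 MeV
c_d_1 = M_N * RHO_D / 2.0                # ~4.2 (dimensionless)
```

The size of $\sigma_d^{(1)}$ relative to $\gamma_d$ reflects the expected NLO suppression $\sim\gamma_d\rho_d/2$.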
\subsection{Spin-singlet propagators}
\label{sec:SpinSinglet}
In the spin-singlet channel, in the absence of Coulomb effects, we can follow
the same procedure as in the spin-triplet channel (see
Figs.~\ref{fig:DibaryonProp}(b) and~\ref{fig:Corr-rd-rt}(b)), and arrive at the
LO propagator
\begin{equation}
\mathrm{i}\Delta_{t}^{(0)}(p_0,\mathbf{p})
= \frac{-\mathrm{i}}{\sigma_{t}^{(0)} + y_{t}^2 I_0(p_0,\mathbf{p})} \,.
\end{equation}
At NLO, we have
\begin{equation}
\mathrm{i}\Delta_{t}^{(1)}(p_0,\mathbf{p})
= \mathrm{i}\Delta_{t}^{(0)}(p_0,\mathbf{p})\times
\bigg[{-}\mathrm{i}\sigma_{t}^{(1)}
- \mathrm{i} c_{t}^{(1)} \left( p_0-\frac{\mathbf{p}^2}{4M_N}\right) \bigg]
\times \mathrm{i}\Delta_{t}^{(0)}(p_0,\mathbf{p}) \,.
\label{eq:Delta-t-NLO-1}
\end{equation}
Renormalization is then performed by matching $\Delta_t$ to the ${}^1S_0$
effective range expansion around zero momentum,
\begin{equation}
k\,\cot\delta_t(k) = {-}\frac{1}{a_t} + \frac{r_t}{2} k^2 + \mathcal{O}(k^4) \,,
\label{eq:ERE-1S0}
\end{equation}
where $a_t\simeq -23.7~\ensuremath{\mathrm{fm}}$ and $r_t\approx2.73~\ensuremath{\mathrm{fm}}$~\cite{Preston:1975} are
the $np$ scattering length and effective range, respectively. As for the
triplet, $r_t\sim 1/\Lambda_{\slashed\pi}$, but $|a_t| \gg 1/\gamma_d$.
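The large negative $a_t$ corresponds to the well-known shallow ${}^1S_0$ virtual state, which can be located from Eq.~\eqref{eq:ERE-1S0} by looking for the pole of the amplitude on the imaginary axis, $k=\mathrm{i}\kappa$, where $k\cot\delta_t(k)-\mathrm{i} k=0$. A sketch (truncating the expansion at the effective-range term and assuming standard values for $M_N$ and $\hbar c$):

```python
# Locate the 1S0 np virtual-state pole from the effective range expansion:
# on the imaginary axis k = i*kappa the pole condition becomes
#   r_t kappa^2 / 2 - kappa + 1/a_t = 0.
# M_N and hbar*c are assumed inputs; the quadratic truncation of the ERE
# is an approximation of this sketch.
import math

HBARC = 197.327   # MeV fm
M_N = 938.92      # MeV
a_t = -23.7       # fm
r_t = 2.73        # fm

# Root closest to 1/a_t (the second root lies outside the EFT's domain)
kappa = (1.0 - math.sqrt(1.0 - 2.0 * r_t / a_t)) / r_t   # fm^-1, < 0
E_virtual = (kappa * HBARC) ** 2 / M_N                   # MeV

# kappa < 0: the pole sits on the second energy sheet, i.e. a shallow
# virtual state (~70 keV) rather than a bound state.
```

The resulting pole momentum of roughly $8~\ensuremath{\mathrm{MeV}}$ makes explicit how shallow this state is compared to $\gamma_d$, and hence why $|a_t|\gg 1/\gamma_d$.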
\subsubsection{Standard approach}
\label{sec:SpinSinglet-Conventional}
Typically, one is interested in low momenta $Q\sim 1/|a_t|\ll \Lambda_{\slashed\pi}$ and
demands that at leading order the scattering length is reproduced.
Effective-range and higher corrections are included perturbatively, since they
are $\mathcal{O}(Q/\Lambda_{\slashed\pi}, 1/(|a_t|\Lambda_{\slashed\pi}))$. Up to \text{NLO}\xspace in this
{\it standard} scheme one sets
\begin{equation}
\sigma_t^{(0,\mathrm{st})} = \frac{2\Lambda}{\pi} - \frac{1}{a_t}
\mathtext{,}
\sigma_t^{(1,\mathrm{st})} = 0
\mathtext{,}
c_t^{(1,\mathrm{st})} = \frac{M_Nr_t}{2} \,.
\label{eq:tparconv}
\end{equation}
There is no adjustment of $\sigma_t$ when the effective-range expansion is
performed around the zero-energy threshold.
\subsubsection{Unitarity limit}
\label{sec:SpinSinglet-Unitarity}
The standard approach makes sense for momenta $Q\sim 1/|a_t|$, so at LO we
have contributions from both the unitarity cut and the scattering length.
However, if we are interested in momenta $1/|a_t|\ll Q\ll \Lambda_{\slashed\pi}$, we are
close to the unitarity limit and take instead
\begin{equation}
\sigma_t^{(0)} = \frac{2\Lambda}{\pi}
\mathtext{,}
\sigma_t^{(1)} = {-}\frac{1}{a_t}
\mathtext{,}
c_t^{(1)} = \frac{M_Nr_t}{2} \,,
\label{eq:sigma-t-unitarity}
\end{equation}
so the actual finiteness of the scattering length only enters as a perturbative
correction. The difference with respect to Eq.~\eqref{eq:tparconv} is a
result of $|a_t| \gg 1/\gamma_d$. For example, for $Q\sim \gamma_d$, we are
performing an extra expansion in the ratio $1/(|a_t|\gamma_d)\ll 1$. Meanwhile,
other effective range parameters require no special treatment, being similar in
both channels---for example, $r_t \sim \rho_d$.
Of course, to the extent that this expansion works one might as well resum
$\sigma_t^{(1)}$ and use Eq.~\eqref{eq:tparconv} with
$\sigma_t^{(0,\mathrm{st})} = \sigma_t^{(0)} + \sigma_t^{(1)}$
over the whole range $Q\ll \Lambda_{\slashed\pi}$. However, there are also advantages in
keeping a strict ordering. It makes clear, for example, that the singlet LO is
parameter free. Also, as we show shortly, it matches well with the
perturbative expansion of electromagnetic effects.
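The quality of the extra expansion can be estimated directly (a one-line numerical sketch; $\hbar c$ is an assumed input for the unit conversion):

```python
# Size of the extra expansion parameter 1/(|a_t| Q) introduced by the
# unitarity-limit counting, evaluated at Q ~ gamma_d with the values
# quoted in the text.
HBARC = 197.327   # MeV fm (assumed conversion constant)
a_t_mag = 23.7    # fm, magnitude of the 1S0 np scattering length
gamma_d = 45.7    # MeV

ratio = HBARC / (a_t_mag * gamma_d)   # ~0.18
```

A ratio of about $0.18$ is comparable to the $Q/\Lambda_{\slashed\pi}$ corrections already present at NLO, supporting the strict ordering adopted here.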
\subsection{Coulomb insertions}
\label{sec:Coulomb}
Simple dimensional analysis shows that the Coulomb expansion is in powers of
$\alpha M_N/Q$, while other electromagnetic corrections are suppressed by
at least $(Q/\Lambda_{\slashed\pi})^2$. To NLO we need only keep the contribution from
static Coulomb photons. In this case, the matching discussed above is correct
for the $np$ part of the ${}^1S_0$ dibaryon. If one neglects strong isospin
breaking, it can also be used to describe the $nn$ component. For $pp$
configurations, however, one has to use the Coulomb-modified effective
range expansion~\cite{Bethe:1949yr,Kong:1999sf},
\begin{equation}
C_\eta^2 \left(k\cot\delta_{t,pp}(k) - \mathrm{i} k\right) + \alpha M_N H(\eta)
= {-}\frac{1}{a_C} + \frac{r_C}{2} k^2 + \cdots
\label{eq:ERE-pp}
\end{equation}
because electromagnetic effects dominate the very-low-energy scattering
regime, $Q\lesssim\alpha M_N$. These are encoded in the Coulomb parameter
\begin{equation}
\eta(k) = \frac{\alpha M_N}{2k} \,,
\label{eq:eta-k}
\end{equation}
the Gamow factor
\begin{equation}
C_\eta^2 = \frac{2\pi\eta}{\mathrm{e}^{2\pi\eta} - 1} \,,
\end{equation}
and the function
\begin{equation}
H(\eta)= \psi({\mathrm{i}}\eta) +\frac{1}{2{\mathrm{i}}\eta} - \log({\mathrm{i}}\eta) \,,
\label{eq:H-eta}
\end{equation}
with $\psi$ the logarithmic derivative of the Euler Gamma function. Here $a_C \simeq
-7.81~\ensuremath{\mathrm{fm}}$ and $r_C\simeq 2.79~\ensuremath{\mathrm{fm}}$~\cite{Bergervoet:1988zz} are the
Coulomb-modified scattering length and effective range, respectively. One
arrives at Eq.~\eqref{eq:ERE-pp} after subtracting the pure Coulomb amplitude
with Coulomb phase shifts
\begin{equation}
\exp(2\mathrm{i}\sigma_0) = \Gamma(1+\mathrm{i}\eta)/\Gamma(1-\mathrm{i}\eta)
\end{equation}
from the full amplitude.
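These Coulomb functions are simple to evaluate. The sketch below implements $C_\eta^2$ and $H(\eta)$ with a hand-rolled complex digamma (recurrence plus the standard asymptotic series; this implementation route is a choice of the sketch, not taken from the text) and checks the known identity $\operatorname{Im}H(\eta)=C_\eta^2/(2\eta)$ for real $\eta$, which ties the two definitions together:

```python
# Gamow factor C_eta^2 and the function H(eta) of Eqs. in the text,
# built from a complex digamma computed via the recurrence
# psi(z) = psi(z + shift) - sum_k 1/(z + k) and the asymptotic series.
import cmath
import math

def digamma(z, shift=20):
    """Complex digamma psi(z); accurate once |z + shift| is large."""
    s = 0j
    for k in range(shift):
        s -= 1.0 / (z + k)
    w = z + shift
    return (s + cmath.log(w) - 1.0 / (2.0 * w) - 1.0 / (12.0 * w ** 2)
            + 1.0 / (120.0 * w ** 4) - 1.0 / (252.0 * w ** 6))

def H(eta):
    """H(eta) = psi(i eta) + 1/(2 i eta) - log(i eta), Eq. (H-eta)."""
    ie = 1j * eta
    return digamma(ie) + 1.0 / (2.0 * ie) - cmath.log(ie)

def gamow_sq(eta):
    """Gamow factor C_eta^2 = 2 pi eta / (exp(2 pi eta) - 1)."""
    x = 2.0 * math.pi * eta
    return x / math.expm1(x)
```

For $\eta\to 0$ the Gamow factor goes to one (Coulomb repulsion switched off), while for large $\eta$ both $C_\eta^2$ and $\operatorname{Re}H(\eta)$ are strongly suppressed.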
\subsubsection{Kong+Ravndal approach}
\label{sec:Coulomb-KR}
Since $\alpha M_N\sim 1/|a_t|$, describing physics at the scale of the $^1S_0$
virtual state requires also a resummation of Coulomb exchange. This has been
studied in detail by Kong and Ravndal in Ref.~\cite{Kong:1999sf} in a setup
without dibaryon fields. We refer to that reference for details and note here
that the relation of our parameter $\sigma_{t,pp}^{(0,\mathrm{st})}$ to the
$C_0$ of Kong and Ravndal is $\sigma_{t,pp}^{(0,\mathrm{st})} = {-}{4\pi}/{(M_N
C_0)}$.
\medskip
The key ingredient is the fully dressed Coulomb bubble shown in
Fig.~\ref{fig:DressedBubble}. From an evaluation of the Coulomb Green's
function, Kong and Ravndal find that it is given by the divergent bubble
integral
\begin{equation}
J_0(k) = M_N \int \dq{q}\,\frac{2\pi\eta(q)}{\mathrm{e}^{2\pi\eta(q)}-1}
\frac{1}{k^2-q^2+\mathrm{i}\varepsilon} \,,
\label{eq:J0-int}
\end{equation}
and they separate the divergent part with a subtraction at $k=0$:
\begin{spliteq}
J_0(k) &= M_N \int \dq{q}\,\frac{2\pi\eta(q)}{\mathrm{e}^{2\pi\eta(q)}-1}
\frac{k^2}{q^2(k^2-q^2+\mathrm{i}\varepsilon)}
- M_N \int \dq{q}\,\frac{2\pi\eta(q)}{\mathrm{e}^{2\pi\eta(q)}-1}\frac{1}{q^2} \\
&\equiv J_0^{\text{fin}}(k) + J_0^{\text{div}} \,,
\label{eq:J0-fin-div}
\end{spliteq}
where the finite piece is
\begin{equation}
J_0^{\text{fin}}(k) = {-}\frac{\alpha M_N^2}{4\pi} H(\eta) \,.
\label{eq:J0-fin}
\end{equation}
With a simple momentum cutoff, the divergent part is
\begin{equation}
J_0^{\text{div}} = {-}\frac{M_N\Lambda}{2\pi^2}
+ \frac{\alpha M_N^2}{4\pi}
\left(\log\frac{2\Lambda}{\alpha M_N}
- C_E\right) + \mathcal{O}(1/\Lambda) \,,
\label{eq:J0-div-Cutoff}
\end{equation}
where $C_E\approx0.5772$ is the Euler--Mascheroni constant, and
we neglect higher-order corrections in $1/\Lambda$.
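The linear-plus-logarithmic cutoff dependence of $J_0^{\text{div}}$ can be verified numerically. Since the constant term cancels in differences, the sketch below compares the integral over a cutoff window $[\Lambda,2\Lambda]$ with the corresponding difference of Eq.~\eqref{eq:J0-div-Cutoff} (the values of $\alpha$ and $M_N$ are assumed inputs):

```python
# Check the cutoff dependence of J0^div = -M_N/(2 pi^2) int_0^Lam C_eta^2 dq
# against the linear + logarithmic form of Eq. (J0-div-Cutoff).
import math

ALPHA = 1.0 / 137.036
M_N = 938.92  # MeV (assumed)

def gamow_sq(q):
    """C_eta^2 with eta = alpha M_N / (2q); guarded against overflow at tiny q."""
    x = math.pi * ALPHA * M_N / q   # = 2 pi eta
    if x > 50.0:
        return 0.0                  # exponentially suppressed region
    return x / math.expm1(x)

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

def J0_div_numeric(lam_lo, lam_hi):
    """Cutoff-window piece of the divergent Coulomb-bubble integral."""
    return -M_N / (2.0 * math.pi ** 2) * simpson(gamow_sq, lam_lo, lam_hi)

def J0_div_formula(lam):
    """Eq. (J0-div-Cutoff), dropping the O(1/Lambda) terms."""
    return (-M_N * lam / (2.0 * math.pi ** 2)
            + ALPHA * M_N ** 2 / (4.0 * math.pi)
            * (math.log(2.0 * lam / (ALPHA * M_N)) - 0.5772156649))
```

The window difference reproduces $-M_N\Lambda/(2\pi^2)+\alpha M_N^2\log 2/(4\pi)$ to high accuracy, confirming the coefficient of the logarithmic divergence.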
\begin{figure}[tb]
\centering
\includegraphics[clip]{DressedBubble}
\caption{Fully dressed proton--proton bubble. A wavy line denotes
a Coulomb photon exchange.}
\label{fig:DressedBubble}
\end{figure}
Resumming now this dressed bubble along with $\sigma_{t,pp}^{(0,\mathrm{st})}$,
the singlet dibaryon propagator in the $pp$ channel becomes
\begin{equation}
\mathrm{i}\Delta_{t,pp}^{(0,\mathrm{st})}(p_0,\mathbf{p})
= \dfrac{{-}\mathrm{i}}
{\rule{0pt}{1.66em}\sigma_{t,pp}^{(0,\mathrm{st})} - \dfrac{2\Lambda}{\pi}
+ \alpha M_N\left(\log\dfrac{2\Lambda}{\alpha M_N}- C_E\right)
- \alpha M_N H(\eta)} \,,
\label{eq:Delta-t-pp-KR}
\end{equation}
with
\begin{equation}
\eta = \eta\!\left(\mathrm{i}\sqrt{\mathbf{p}^2/4 - M_N p_0 - \mathrm{i}\varepsilon}\right) \,.
\label{eq:eta-gen}
\end{equation}
Matching to the Coulomb-modified effective range expansion is performed via
the part of the T-matrix where Coulomb interferes with the short-range
interactions,
\begin{equation}
-\mathrm{i} T_{SC}(k) = C_\eta^2 \mathrm{e}^{2\mathrm{i}\sigma_0}
(\mathrm{i} y_t)^2 \, \mathrm{i} \Delta_{t,pp}\!\left(p_0=k^2/M_N,\mathbf{p}=\mathbf{0}\right)
= \mathrm{i} \frac{4\pi}{M_N}
\frac{\mathrm{e}^{2\mathrm{i}\sigma_0}}{k\cot\delta_{t,pp}(k) - \mathrm{i} k} \,.
\label{eq:coulmatch}
\end{equation}
The additional factors here compared to the $np$ component, Eq.~\eqref{eq:Tnd},
arise from the inclusion of initial and final-state Coulomb interactions in
order to get the amplitude from the propagator. Combining this relation with
Eq.~\eqref{eq:ERE-pp} one finds that both the Gamow factor and the pure Coulomb
phase shift drop out and one arrives at
\begin{equation}
\sigma_{t,pp}^{(0,\mathrm{st})} = -\frac{1}{a_C} + \dfrac{2\Lambda}{\pi}
- \alpha M_N\left(\log\dfrac{2\Lambda}{\alpha M_N}- C_E\right) \,.
\label{eq:renorm-sigma-t-pp}
\end{equation}
Range corrections were also considered in Ref.~\cite{Kong:1999sf}.
\subsubsection{Separate leading-order resummation}
\label{sec:Coulomb-Unitarity}
Here we develop a new approach that allows us to consider a fully
isospin-symmetric leading order in the spin-singlet channel, including the
$pp$ part. Since $|a_C|$ is still large compared to the typical nuclear length
scale set by the inverse pion mass ($1/m_\pi \sim 1.4~\ensuremath{\mathrm{fm}}$), we remain
close to unitarity in the singlet channels throughout the momentum window
$\alpha M_N \lesssim 1/|a_C| \ll Q \ll \Lambda_{\slashed\pi}$. In this window we should
be able to treat Coulomb perturbatively, along with finite scattering-length
corrections. The key idea
is that with the new counting, the LO singlet propagator now behaves exactly
like $1/k$, \textit{i.e.}, it has the same infrared behavior as Coulomb contributions,
for which the relevant parameter (in the $pp$ system) is
$\eta = \alpha M_N/(2k)$.
At LO we thus have, with an isospin-symmetric $\sigma_t^{(0)}$ satisfying
Eq.~\eqref{eq:sigma-t-unitarity},
\begin{equation}
\mathrm{i}\Delta_{t,pp}^{(0)}(p_0,\mathbf{p}) = \mathrm{i}\Delta_t^{(0)}(p_0,\mathbf{p})
= \frac{{-}\mathrm{i}}
{\rule{0pt}{1.66em}\sqrt{\frac{\mathbf{p}^2}{4}-M_N p_0-\mathrm{i}\varepsilon}} \,.
\label{eq:Delta-t-pp-LO}
\end{equation}
At NLO, we need perturbative insertions not only of $\sigma_{t,pp}^{(1)}$
(Fig.~\ref{fig:Corr-at-bub}(a)) and $c_{t}^{(1)}$
(Fig.~\ref{fig:Corr-rd-rt}(b)), but also of a single-photon exchange, see
Fig.~\ref{fig:Corr-at-bub}(b). Using the notation of Ref.~\cite{Kong:1999sf},
we call the single-photon piece $\delta I_0(k)$ and find for the correction to
the $pp$ propagator:
\begin{multline}
\mathrm{i}\Delta_{t,pp}^{(1)}(p_0,\mathbf{p})
= \mathrm{i}\Delta_{t,pp}^{(0)}(p_0,\mathbf{p})\times
\bigg[{-}\mathrm{i}\sigma_{t,pp}^{(1)}
- \mathrm{i} c_{t}^{(1)} \left( p_0-\frac{\mathbf{p}^2}{4M_N}\right) \\
- \mathrm{i} y_t^2\delta I_0\!\left(\mathrm{i}\sqrt{\mathbf{p}^2/4-M_N p_0-\mathrm{i}\varepsilon}\right)\bigg]
\times \mathrm{i}\Delta_{t,pp}^{(0)}(p_0,\mathbf{p}) \,.
\label{eq:Delta-pp-NLO-1}
\end{multline}
In order to match to the Coulomb-modified effective range expansion, we work in
the $pp$ center-of-mass frame in the remainder of this section. From the
expression for $\delta I_0(k)$ in Appendix~\ref{sec:SinglePhotonBubble}, we find
for $p_0=k^2/M_N$, $\mathbf{p}=\mathbf{0}$, and $\eta$ as defined in Eq.~\eqref{eq:eta-k}
that
\begin{equation}
\delta I_0(k) = \frac{\alpha M_N^2}{4\pi}
\left[\log\mathrm{i}\eta
+ \log\frac{2\Lambda}{\alpha M_N} - C_\zeta\right] + \mathcal{O}(1/\Lambda) \,,
\label{eq:delta-I0}
\end{equation}
where $C_\zeta \simeq 1.119$ and we again drop terms proportional to
inverse powers of $\Lambda$. The log divergence is absorbed into
$\sigma_{t,pp}^{(1)}$, whereas the $\log(\mathrm{i}\eta)$ constitutes the one-photon
contribution to $H(\eta)$ as defined in Eq.~\eqref{eq:H-eta}.
\begin{figure}[tb]
\centering
\begin{minipage}{0.333\textwidth}
\centering
\vcenteredhbox{\includegraphics[width=6em,clip]{Corr-at}}
\vcenteredhbox{\scalebox{1.2}{$\;\sim\;{-}\mathrm{i}\sigma_{t,pp}^{(1)}$}}\\[0.77em]
(a)
\end{minipage}
\begin{minipage}{0.333\textwidth}
\centering
\includegraphics[width=6.35em,clip]{Corr-bub-single}\\[0.52em]
(b)
\end{minipage}
\caption{Corrections in the $pp$ channel: (a) Coulomb-corrected
scattering length; (b) one-photon exchange.}
\label{fig:Corr-at-bub}
\end{figure}
In order to find the renormalization condition for $\sigma_{t,pp}^{(1)}$,
we now consider smaller $Q$, which requires the resummation of both
$\sigma_{t,pp}^{(1)}$ and Coulomb exchange. We determine $\sigma_{t,pp}^{(1)}$
from this calculation, and then use it in the regime where Coulomb is
perturbative, to order $\alpha$. For this, it is important to be consistent
about which finite terms get absorbed into $\sigma_{t,pp}^{(1)}$ along with the
logarithmic divergence.
The resummation of Coulomb produces a new dressed bubble, shown in
Fig.~\ref{fig:DressedBubble-new}. The important point is that this new dressed
bubble excludes the empty piece (no photon exchange inside the bubble) because
that has already been resummed at LO. We denote by $\delta J_0(k)$ the part
remaining after subtracting the single-photon piece $\delta I_0(k)$.
In Appendix~\ref{sec:FullyDressedBubble-T} it is demonstrated that
\begin{equation}
\delta J_0(k) = {-}\frac{\alpha M_N^2}{4\pi}
\left[\psi(\mathrm{i}\eta) + \frac{1}{2\mathrm{i}\eta} + C_\Delta\right]
+ \frac{M_N}{4\pi} \mathrm{i} k \,,
\label{eq:delta-J0}
\end{equation}
where $C_\Delta \approx 0.579$. Hence, we find for the new dressed bubble
\begin{equation}
\delta I_0(k) + \delta J_0(k)
= \frac{\alpha M_N^2}{4\pi}
\bigg[\log\frac{2\Lambda}{\alpha M_N} - C_\zeta - C_\Delta - H(\eta)\bigg]
+ \frac{M_N}{4\pi} \mathrm{i} k \,.
\label{eq:delta-I0-J0}
\end{equation}
\begin{figure}[tb]
\centering
\includegraphics[clip]{DressedBubble-new}
\caption{Dressed proton--proton bubble excluding the empty piece.}
\label{fig:DressedBubble-new}
\end{figure}
The resummation of $\sigma_{t,pp}^{(1)}$ and the new dressed bubble is
now straightforward. We consider here explicitly the on-shell
case in the center-of-mass frame---$p_0=k^2/M_N$, $\mathbf{p}=\mathbf{0}$---and find
\begin{spliteq}
\mathrm{i}\Delta_{t,pp}^{\text{res}}(k)
&= \mathrm{i}\Delta_{t,pp}^{(0)}(k) + \mathrm{i}\Delta_{t,pp}^{(0)}(k)
\left({-}\mathrm{i}\sigma_{t,pp}^{(1)}
- \mathrm{i} y_t^2\delta I_0(k) - \mathrm{i} y_t^2\delta J_0(k)\right)
\mathrm{i}\Delta_{t,pp}^{(0)}(k)
+ \cdots \\
&= \frac{{-}\mathrm{i}}{\rule{0pt}{1.66em}
\underbrace{\sigma_t^{(0)} - \dfrac{2\Lambda}{\pi}}_{\null=0}
\null + \sigma_{t,pp}^{(1)}
+ \alpha M_N\left(\log\dfrac{2\Lambda}{\alpha M_N} - C_\zeta - C_\Delta\right)
- \alpha M_N H(\eta)} \,,
\label{eq:Delta-pp-res}
\end{spliteq}
where the explicit imaginary part $\mathrm{i} k$ cancels. Now, from
Eqs.~\eqref{eq:coulmatch} and~\eqref{eq:ERE-pp},
\begin{equation}
\sigma_{t,pp}^{(1)} = {-}\frac{1}{a_C}
- \alpha M_N\left(\log\dfrac{2\Lambda}{\alpha M_N}
- C_\zeta - C_\Delta\right) \,.
\label{eq:renorm-sigma-t-pp-1}
\end{equation}
Our renormalization of $\sigma_{t,pp}^{(0)}+\sigma_{t,pp}^{(1)}$ is consistent
with Eq.~\eqref{eq:renorm-sigma-t-pp}, but with a different constant finite
piece. As discussed in more detail in the Appendix, the reason for this
change is that we have isolated here the divergent piece---the bubble with a
single photon exchange---and only regularized that, whereas
Ref.~\cite{Kong:1999sf} regularizes the fully resummed bubble---including the
empty part---at once. Effectively, this amounts to using different
regularization schemes. Our approach has the advantage that it allows for a
consistent matching between the perturbative and nonperturbative regimes.
The expression for the $pp$ phase shift in the regime $\alpha M_N \lesssim 1/|a_C|
\ll Q \ll \Lambda_{\slashed\pi}$ can now be found by inserting Eqs.~\eqref{eq:Delta-t-pp-LO}
and~\eqref{eq:Delta-pp-NLO-1} into Eq.~\eqref{eq:coulmatch}:
\begin{equation}
k\cot\delta_{t,pp}(k)
= {-}\frac{1}{a_C} + \alpha M_N C_\Delta+\frac{r_t}{2} k^2
+ \alpha M_N \log\left(\frac{\alpha M_N}{2k}\right) + \cdots \,.
\label{eq:ERE-pppert}
\end{equation}
This is consistent with a direct expansion of Eq.~\eqref{eq:ERE-pp} in powers of
$\alpha M_N/Q$ and $1/(|a_C|Q)$. Using
\begin{equation}
\psi(\mathrm{i}\eta) = \frac{\mathrm{i}}{\eta} - C_E + \mathcal{O}(\eta) \,,
\label{eq:psiexpanded}
\end{equation}
one finds Eq.~\eqref{eq:ERE-pppert} with $C_\Delta=C_E$ and
\begin{equation}
r_C^{\text{NLO}} = r_t \,.
\label{eq:rCprediction}
\end{equation}
Moreover, the same result as in Eq.~\eqref{eq:ERE-pppert} can be obtained in a
direct calculation that includes all $\mathcal{O}(\alpha)$ diagrams contributing to $pp$
scattering, treated in perturbation theory.
As discussed in Appendix~\ref{sec:FullyDressedBubble-T}, a numerical calculation
indeed yields $C_\Delta\approx0.579$, very close to $C_E\approx0.5772$.
Equation~\eqref{eq:ERE-pppert} thus provides circumstantial evidence that in fact
$C_\Delta=C_E$, and we expect that increased numerical accuracy would
confirm this. Lacking a formal proof, however, we keep $C_\Delta$ in
our expressions and merely note that its deviation from $C_E$ is
negligible for the results presented below.
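The small-argument expansion in Eq.~\eqref{eq:psiexpanded} is easy to verify
numerically. The following Python sketch (ours, purely illustrative and not part
of any of the calculations reported here) implements the digamma function for
complex arguments via the standard recurrence and asymptotic series and checks
that $\psi(\mathrm{i}\eta)+C_E-\mathrm{i}/\eta$ is indeed of order $\eta$:

```python
import cmath

EULER_GAMMA = 0.5772156649015329  # C_E

def digamma(z):
    """Digamma function psi(z) for complex z off the negative real axis."""
    z = complex(z)
    shift = 0j
    # Recurrence psi(z) = psi(z + 1) - 1/z until the asymptotic series applies.
    while abs(z) < 10.0:
        shift += 1.0 / z
        z += 1.0
    inv2 = 1.0 / (z * z)
    # psi(z) ~ ln z - 1/(2z) - 1/(12 z^2) + 1/(120 z^4) - 1/(252 z^6) + ...
    series = inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 * (1.0 / 252 - inv2 / 240)))
    return cmath.log(z) - 0.5 / z - series - shift

eta = 1.0e-4
residual = digamma(1j * eta) - (1j / eta - EULER_GAMMA)
print(abs(residual))  # of order eta, as claimed
```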
Equation~\eqref{eq:ERE-pppert} is in agreement with the Nijmegen phase-shift
analysis~\cite{Stoks:1993tb} up to about 15\% for $k \gtrsim 60~\ensuremath{\mathrm{MeV}}$. In
particular, it captures the correct slope at intermediate momenta, which is a
consequence of the fact that Eq.~\eqref{eq:rCprediction} works at the 3\%
level. The assumption of isospin symmetry in $c_t^{(1)}$, which means that we
can neglect the splitting in the spin-singlet ranges, is a good one.
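For orientation, Eq.~\eqref{eq:ERE-pppert} can be evaluated directly. The Python
sketch below (our illustration; it uses $\hbar c = 197.327~\ensuremath{\mathrm{MeV}}\,\ensuremath{\mathrm{fm}}$,
$\alpha \approx 1/137.036$, sets $C_\Delta \to C_E$, and takes $a_C$ and $r_t$
from Table~\ref{tab:Params}) computes $k\cot\delta_{t,pp}$ and the corresponding
phase shift at an intermediate momentum:

```python
import math

HBARC = 197.327         # MeV fm
ALPHA = 1.0 / 137.036   # fine-structure constant
MN = 938.918            # nucleon mass in MeV
C_DELTA = 0.5772156649  # using C_Delta -> C_E
A_C = -7.8063 / HBARC   # pp Coulomb-modified scattering length, MeV^-1
R_T = 2.73 / HBARC      # spin-singlet effective range, MeV^-1

def kcotd_pp(k):
    """k cot(delta) for pp 1S0 from the perturbative expansion, Eq. (ERE-pppert)."""
    aM = ALPHA * MN
    return -1.0 / A_C + aM * C_DELTA + 0.5 * R_T * k * k + aM * math.log(aM / (2.0 * k))

k = 60.0  # MeV
delta_deg = math.degrees(math.atan2(k, kcotd_pp(k)))
print(round(delta_deg, 1))  # roughly 60 degrees at this momentum
```

The size of the phase shift obtained at $k=60~\ensuremath{\mathrm{MeV}}$ is consistent with the level
of agreement with the Nijmegen analysis quoted above.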
\section{Three-body sector}
\label{sec:ThreeBody}
We now consider the three-body sector in order to calculate the \ensuremath{{}^3\mathrm{H}}\xspace and
\ensuremath{{}^3\mathrm{He}}\xspace binding energies. These bound states arise in the $nd$ and $pd$
spin-doublet channels, respectively. Since we do not reorganize the pionless
EFT expansion in the $N\!N$ spin-triplet channel, no changes are needed in the
$nd$ and $pd$ spin-quartet channels with respect to
Refs.~\cite{Bedaque:1997qi,Bedaque:1998mb,Bedaque:1999vb,Gabbiani:1999yv,
Vanasse:2013sda,Konig:2013cia}.
We take the point of view that the scale that characterizes these bound states,
the triton binding momentum $\gamma_T$, is comparable to the deuteron binding
momentum $\gamma_d$, but both are much larger than $\alpha M_N$, $1/|a_t|$, and
$1/|a_C|$. Thus the bound-state energies can be expanded not only in powers
of $Q/\Lambda_{\slashed\pi}$ but also of $\aleph_0/Q$, where $\aleph_0 \sim \alpha M_N \sim
1/|a_t| \sim 1/|a_C|$.\footnote{Note that $\aleph_0$ is an extension of
the original definition~\cite{vanKolck:1997ut,vanKolck:1998bw} to include
additional scales, in particular $\alpha M_N$.} For simplicity we pair the two
expansions by taking $Q\sim (\aleph_0 \Lambda_{\slashed\pi})^{1/2}$.
\subsection{More formalism}
\label{sec:ThreeBody-Formalism}
According to the standard power
counting~\cite{Bedaque:1997qi,Bedaque:1998mb,Bedaque:1999vb,Gabbiani:1999yv}
one-nucleon exchange between the nucleon and the dibaryon has to be treated exactly.
To NLO it is sufficient to consider the $S$-wave projected one-nucleon-exchange
diagram at energy $E$,
\begin{equation}
K_\sss(E;k,p) \equiv \frac{1}{kp}\;
Q_0\left(\frac{k^2+p^2-M_N E-\mathrm{i}\varepsilon}{kp}\right)
= \frac{1}{2kp}\log\!\left(
\frac{k^2+p^2+kp-M_N E-\mathrm{i}\varepsilon}{k^2+p^2-kp-M_N E-\mathrm{i}\varepsilon}\right) \,,
\label{eq:KS}
\end{equation}
where $k$ ($p$) is the incoming (outgoing) center-of-mass momentum. It is well
known~\cite{Bedaque:1999ve,Hammer:2000nf,Hammer:2001gh,Bedaque:2002yg,
Afnan:2003bs,Griesshammer:2005ga} that at LO the resummation of one-nucleon
exchange is renormalized by the three-body force given in Eq.~\eqref{eq:L-3},
with
\begin{equation}
h^{(0)}= \frac{M_N H(\Lambda)}{\Lambda^2} \,,
\end{equation}
where $\Lambda$ is now the momentum cutoff applied in the three-body equations
discussed below, and $H(\Lambda)$ a known log-periodic function of the cutoff
that depends on a three-body parameter $\Lambda_*$. Here we follow the
procedure employed in Ref.~\cite{Bedaque:1997qi} and much of the subsequent
literature, in which the two-body cutoff is taken to be very large and
$\mathcal{O}(1/\Lambda)$ terms in Eqs.~\eqref{eq:I0-cutoff} and~\eqref{eq:delta-I0}
are neglected.\footnote{This is effectively equivalent to using dimensional
regularization in order to renormalize the two-body sector first.}
In order to calculate bound-state energies and perturbative corrections to
them, we use the formalism and notation of
Refs.~\cite{Konig:2014ufa,Koenig:2013} to study the $Nd$ doublet channel. We
introduce a three-component vector of vertex functions in channel space,
\begin{equation}
\vec{\Bgen_\sss}\equiv\left(\BS^\mathrm{d,a},\BS^\mathrm{d,b1},\BS^\mathrm{d,b2}\right)^T \,,
\end{equation}
where $\BS^\mathrm{d,a}$ corresponds to the deuteron channel and $\BS^\mathrm{d,b1}$ and $\BS^\mathrm{d,b2}$ are
the $np$ and $pp$/$nn$ components of the ${}^1S_0$ multiplet, respectively.
These are given by (properly normalized~\cite{Konig:2011yq}) solutions of the
homogeneous equation
\begin{equation}
\vec{\Bgen_\sss} = (\hat{K}\hat{D})\otimes\vec{\Bgen_\sss} \mathtext{,} E = -E_B \,,
\label{eq:BS-IntEq}
\end{equation}
where $\otimes$ represents an integral over the intermediate momentum,
$\hat{D}$ is a diagonal matrix of dibaryon propagators written in the form
\begin{equation}
D_{d,t}(E;q) = \Delta_{d,t}\!\left(E-\frac{q^2}{2M_N};q\right) \,,
\end{equation}
and
\begin{equation}
\hat{K}\equiv\begin{pmatrix}
-g_{dd}\left(K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right) &
g_{dt}\left(3K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right) &
g_{dt}\left(3K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right)\\[0.5em]
g_{dt}\left(K_\sss+\frac{2H(\Lambda)}{3\Lambda^2}\right) &
g_{tt}\left(K_\sss-\frac{2H(\Lambda)}{3\Lambda^2}\right) &
-g_{tt}\left(K_\sss+\frac{2H(\Lambda)}{3\Lambda^2}\right)\\[0.5em]
g_{dt}\left(2K_\sss+\frac{4H(\Lambda)}{3\Lambda^2}\right) &
-g_{tt}\left(2K_\sss+\frac{4H(\Lambda)}{3\Lambda^2}\right) &
-g_{tt}
\frac{4H(\Lambda)}{3\Lambda^2}
\end{pmatrix} \,.
\label{eq:K-B}
\end{equation}
The factors $g_{dd}$ {\it etc.}~contain the coupling constants $y_d$ and $y_t$.
In our present conventions, we simply have
\begin{equation}
g_{dd} = g_{tt} = g_{dt} = 2\pi \,.
\end{equation}
For illustration, we note that with a single channel and no three-body force,
the homogeneous integral equation written out explicitly would be
\begin{equation}
\Bgen_\sss(p) = \frac{1}{\pi}\int_0^\Lambda\mathrm{d} q\,q^2\,K_\sss(E;p,q)\,
\Delta\!\left(E-\frac{q^2}{2M_N};q\right) \Bgen_\sss(q) \,,
\end{equation}
where $\Delta$ denotes a generic dibaryon propagator, and $\Lambda$ is the
momentum cutoff employed in the three-body sector.
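To illustrate how such a homogeneous equation is handled in practice, the
following Python sketch (ours, purely schematic: it assumes the three-boson
normalization $8/(\sqrt{3}\pi)$ for the coupling, absorbs the sign of the
unitarity-limit propagator into it, and uses a simple midpoint rule) discretizes
the single-channel equation and extracts the leading kernel eigenvalue by power
iteration. A bound state corresponds to eigenvalue one; the growth of the
eigenvalue with $\Lambda$ at fixed energy illustrates the cutoff dependence that
the three-body force $H(\Lambda)$ has to absorb:

```python
import math

def ks(mE, k, p):
    """S-wave projected one-nucleon exchange, Eq. (KS), for M_N E = mE < 0."""
    return math.log((k * k + p * p + k * p - mE)
                    / (k * k + p * p - k * p - mE)) / (2.0 * k * p)

def largest_eigenvalue(Lam, EB, mN=938.918, n=150, iters=80):
    """Leading eigenvalue of the discretized single-channel kernel at E = -EB.

    Midpoint rule on [0, Lam]; the three-boson normalization 8/(sqrt(3) pi)
    is assumed for the coupling, with the propagator sign absorbed into it.
    """
    g = 8.0 / (math.sqrt(3.0) * math.pi)
    dq = Lam / n
    q = [(i + 0.5) * dq for i in range(n)]
    mE = -mN * EB  # bound-state kinematics, all in MeV
    weight = [g * dq * qi * qi / math.sqrt(0.75 * qi * qi - mE) for qi in q]
    K = [[ks(mE, qi, qj) * wj for qj, wj in zip(q, weight)] for qi in q]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):  # power iteration for the Perron eigenvalue
        w = [sum(Kij * vj for Kij, vj in zip(row, v)) for row in K]
        lam = max(w)        # kernel is positive, so the iterate stays positive
        v = [x / lam for x in w]
    return lam

l300 = largest_eigenvalue(300.0, 8.48)
l600 = largest_eigenvalue(600.0, 8.48)
print(l300, l600)  # the eigenvalue grows with the cutoff
```

Since the kernel is strictly positive, enlarging the integration domain
increases the leading eigenvalue, which is the cutoff dependence that must be
compensated by the three-body force.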
To study doublet-channel $nd$ scattering, we have to solve the
Lippmann--Schwinger equation. This can be done in a simpler two-channel
formalism, where it becomes
\begin{multline}
\begin{pmatrix}\TS^\mathrm{d,a} \\[0.5em] \TS^\mathrm{d,b}\end{pmatrix}
= \begin{pmatrix}
g_{dd}\left(K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right)\\[0.5em]
-g_{dt}\left(3K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right)
\end{pmatrix} \\
+\begin{pmatrix}
-g_{dd}D_d\left(K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right) &
g_{dt}D_t\left(3K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right) \\[0.5em]
g_{dt}D_d\left(3K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right) &
-g_{tt}D_t\left(K_\sss+\frac{2H(\Lambda)}{\Lambda^2}\right)
\end{pmatrix}
\otimes\begin{pmatrix}\TS^\mathrm{d,a}\\[0.5em]\TS^\mathrm{d,b}\end{pmatrix} \,.
\label{eq:IntEq-TSdb}
\end{multline}
Again, the notation is the same as in Refs.~\cite{Konig:2014ufa,Koenig:2013}.
The $nd$ phase shift is then obtained from the on-shell amplitude,
\begin{equation}
\delta_{\text{$n$--$d$}}(k) = \frac{1}{2\mathrm{i}}
\log\!\left(1+\frac{2\mathrm{i} kM_N}{3\pi} Z_0\TS^\mathrm{d,a}(E_k;k,k)\right)
\mathtext{,} E_k = \frac{3k^2}{4M_N}-\frac{\gamma_d^2}{M_N} \,,
\label{eq:delta-nd}
\end{equation}
where $Z_0$ is the deuteron wavefunction renormalization determined by
\begin{equation}
Z_0^{-1} = \mathrm{i}\frac{\partial}{\partial p_0}
\left.\frac{1}{\Delta_d(p)}\right|_{p_0 =-\frac{\gamma_d^2}{M_N},\,\mathbf{p}=0} \,.
\label{eq:Z0}
\end{equation}
The scattering length is determined by the on-shell amplitude in the
limit $k\to0$,
\begin{equation}
\ensuremath{{}^2a_{\text{$n$--$d$}}} = {-}\frac{M_N}{3\pi}\lim\nolimits_{k\to0} Z_0\TS^\mathrm{d,a}(E_k;k,k) \,.
\label{eq:and-2}
\end{equation}
\subsection{Perturbative range corrections}
\label{sec:ThreeBody-RangeCorrections}
Before we discuss Coulomb matrix elements between vertex functions, it is
instructive to consider effective-range corrections in this framework. As we
emphasized in Sec.~\ref{sec:TwoBody}, range corrections are $\mathcal{O}(Q/\Lambda_{\slashed\pi})$.
In analogy to Eq.~\eqref{eq:Delta-d-renorm}, we write
\begin{multline}
D_d(E;q) = D_d^{(0)}(E;q) + D_d^{(1)}(E;q) + \cdots \\
= \frac{{-}1}{-\gamma_d+\sqrt{3q^2/4-M_N E-\mathrm{i}\varepsilon}}
\times\left[1 + \frac{\rho_d}{2}\frac{\left(3q^2/4-M_N E-\gamma_d^2\right)}
{-\gamma_d+\sqrt{\strut3q^2/4-M_N E-\mathrm{i}\varepsilon}}
+ \cdots \right] \,,
\label{eq:Prop-d-expansion}
\end{multline}
with an analogous expression for the spin-singlet part. Suppose now we have
solved the homogeneous equation~\eqref{eq:BS-IntEq} at leading order. The
binding-energy shift due to the deuteron effective range, shown in
Fig.~\ref{fig:DeltaE-rdt}(a), is then given by
\begin{equation}
\Delta E^{(1)}_{\rho_d} = \frac{\rho_d}{4\pi^2}\int_0^\Lambda \mathrm{d} q \,q^2\,
\abs{\BS^\mathrm{d,a}(q)}^2 \frac{\left(3q^2/4-M_N E-\gamma_d^2\right)}{\rule{0pt}{1.8em}
\bigg({-}\gamma_d+\sqrt{\strut3q^2/4-M_N E-\mathrm{i}\varepsilon}\bigg)^{\!2}} \,.
\label{eq:DeltaE-rd}
\end{equation}
More details about this can be found in
Refs.~\cite{Vanasse:2013sda,Vanasse:2014kxa}. In particular, we note that the
correction $h^{(1)}$ to the three-body force is fitted to keep whatever
physical parameter has been used at LO (triton binding energy or
doublet-channel scattering length) unchanged.
\begin{figure}[tb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=0.4275\textwidth,clip]{DeltaE-rd}\\[0.77em]
(a)
\end{minipage}\begin{minipage}{0.49\textwidth}
\centering
\vspace*{0.5em}
\includegraphics[width=0.441\textwidth,clip]{DeltaE-rt}\\[0.77em]
(b)
\end{minipage}
\caption{Range corrections contributing to the trinucleon binding energy
in perturbation theory. A shaded oval represents a vertex function.}
\label{fig:DeltaE-rdt}
\end{figure}
In the {\it standard} counting used previously (finite scattering lengths at
LO), there is a contribution analogous to Eq.~\eqref{eq:DeltaE-rd} with
$\gamma_d\to1/a_t$ in the denominator (and no $\gamma_d$ in the numerator). In our
new scheme with the spin-singlet LO in the unitarity limit, the range
corrections in the spin-singlet channels are particularly simple. For example,
from Fig.~\ref{fig:DeltaE-rdt}(b) we have
\begin{equation}
\Delta E^{(1)}_{r_t,\text{b1}} = \frac{3r_t}{4\pi^2}
\int_0^\Lambda \mathrm{d} q \,q^2\,\abs{\BS^\mathrm{d,b1}(q)}^2 \,,
\end{equation}
and an analogous expression that involves $\BS^\mathrm{d,b2}$. NLO corrections that are
linear in the range have been known for a long time to generate cutoff
dependence that can be compensated by
$h^{(1)}$~\cite{Bedaque:1999ve,Hammer:2001gh,Griesshammer:2005ga}. For three
bosons at unitarity, this divergence was discussed in
Refs.~\cite{Platter:2008cx,Ji:2010su}. At the same time, we have of course now
perturbative insertions of the scattering length, \textit{e.g.},
\begin{equation}
\Delta E^{(1)}_{a_t,\text{b1}} = \frac{3}{2\pi^2a_t}\int_0^\Lambda \mathrm{d} q\,q^2\,
\frac{\abs{\BS^\mathrm{d,b1}(q)}^2}{3q^2/4+M_N E_B} \,,
\label{eq:3bscattlencor}
\end{equation}
which are corrections of $\mathcal{O}(\aleph_0/Q)$ relative to LO.
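Each such insertion is numerically just a single quadrature over the LO vertex
function. The Python sketch below (ours; it substitutes a hypothetical Gaussian
model $\mathrm{e}^{-q^2/\beta^2}$ for an actual normalized solution of
Eq.~\eqref{eq:BS-IntEq}, so only the structure and the linearity in $1/a_t$ are
meaningful, not the number) evaluates Eq.~\eqref{eq:3bscattlencor}:

```python
import math

HBARC = 197.327  # MeV fm
MN = 938.918     # MeV
EB = 8.48        # MeV, trinucleon binding energy used for illustration

def delta_E_at(a_t_fm, beta=150.0, Lam=2000.0, n=4000):
    """Structure of Eq. (3bscattlencor) with a hypothetical Gaussian vertex
    function B(q) = exp(-q^2/beta^2); the absolute size is not meaningful."""
    a_t = a_t_fm / HBARC  # convert fm to MeV^-1
    dq = Lam / n
    total = 0.0
    for i in range(1, n + 1):
        q = i * dq
        B = math.exp(-((q / beta) ** 2))  # model vertex function (assumption)
        total += dq * q * q * B * B / (0.75 * q * q + MN * EB)
    return 3.0 / (2.0 * math.pi ** 2 * a_t) * total

print(delta_E_at(-23.714) < 0.0)  # True: the shift is linear in 1/a_t
```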
\subsection{Coulomb matrix elements}
\label{sec:ThreeBody-Coulomb}
Range corrections as discussed in the previous section apply in general
to both $nd$ and $pd$ systems. In the bound-state regime, they simply
correspond to matrix elements between trinucleon wavefunctions that are
diagonal in momentum space (only one loop integral is required to calculate
them) as well as in cluster-configuration (channel)
space~\cite{Griesshammer:2004pe}.
Now we want to include Coulomb corrections, which are $\mathcal{O}(\aleph_0/Q)$.
In general, contributions to the \ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace energy splitting $\Delta E$ can
be non-diagonal in both spaces. Such contributions arise when we calculate
$\Delta E$ in perturbation theory from the diagram topologies shown
in Figs.~\ref{fig:DeltaE-conv} and~\ref{fig:DeltaE-bub-C}. This approach
starts by taking the trinucleon state to be the triton in a three-channel
formalism. This way, one can easily isolate the channel that corresponds to the
$pp$ configuration in \ensuremath{{}^3\mathrm{He}}\xspace. Such a calculation was carried out in
Ref.~\cite{Konig:2014ufa} for the diagrams in Fig.~\ref{fig:DeltaE-conv}, which
give convergent results. We briefly summarize this calculation here before
getting to the new diagrams in Fig.~\ref{fig:DeltaE-bub-C}.
\begin{figure}[tb]
\centering
\begin{minipage}{0.333\textwidth}
\centering
\includegraphics[width=0.63\textwidth,clip]{DeltaE-bub}\\[0.77em]
(a)
\end{minipage}\begin{minipage}{0.333\textwidth}
\centering
\vspace*{0.47em}
\includegraphics[width=0.71\textwidth,clip]{DeltaE-box}\\[0.77em]
(b)
\end{minipage}\begin{minipage}{0.333\textwidth}
\centering
\vspace*{0.47em}
\includegraphics[width=0.71\textwidth,clip]{DeltaE-tri}\\[0.77em]
(c)
\end{minipage}
\caption{Convergent diagrams contributing to the $^3\mathrm{H}$--$^3\mathrm{He}$
binding energy difference in perturbation theory.}
\label{fig:DeltaE-conv}
\end{figure}
\begin{figure}[tb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\vspace*{0.25em}
\includegraphics[width=0.4275\textwidth]{DeltaE-bub-C}\\[0.77em]
(a)
\end{minipage}\begin{minipage}{0.49\textwidth}
\centering
\vspace*{0.5em}
\includegraphics[width=0.441\textwidth]{DeltaE-aC}\\[0.77em]
(b)
\end{minipage}
\caption{Divergent Coulomb bubble diagram (a) contributing to the \ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace
binding energy difference along with the associated counterterm (b).}
\label{fig:DeltaE-bub-C}
\end{figure}
The diagram shown in Fig.~\ref{fig:DeltaE-conv}(a) is still diagonal in
cluster-configuration space. In order to calculate it, we need the kernel
function corresponding to the photon being exchanged between the deuteron
bubble and the individual proton. It is given by~\cite{Konig:2011yq}
\begin{equation}
K_\mathrm{bubble}(E;k,p)
= {-}\alpha M_N \int_{-1}^1\mathrm{d}\!\cos\theta\,
\frac{\mathcal{I}_\mathrm{bubble}(E;\mathbf{k},\mathbf{p})}{(\mathbf{k}-\mathbf{p})^{2}+\lambda^2}
\mathtext{,} \mathbf{k}\cdot\mathbf{p} = kp\cos\theta \,,
\label{eq:K-bubble}
\end{equation}
with
\begin{equation}
\mathcal{I}_\mathrm{bubble}(E;\mathbf{k},\mathbf{p})
= \frac{\arctan\left(\frac{2\mathbf{p}^2-\mathbf{k}^2-\mathbf{k}\cdot\mathbf{p}}
{\sqrt{3\mathbf{k}^2-4M_N E-\mathrm{i}\varepsilon}\sqrt{(\mathbf{k}-\mathbf{p})^2}}\right)
+\arctan\left(\frac{2\mathbf{k}^2-\mathbf{p}^2-\mathbf{k}\cdot\mathbf{p} }
{\sqrt{3\mathbf{p}^2-4M_N E-\mathrm{i}\varepsilon}\sqrt{(\mathbf{k}-\mathbf{p})^2}}\right)}
{\sqrt{(\mathbf{k}-\mathbf{p})^2}}
\label{eq:I-bubble-pd} \,,
\end{equation}
and where $\lambda$ is a photon mass introduced for regularization in the
infrared. In practice, we do not perform the angular integral numerically but
rather use the explicit $S$-wave projection given in
Ref.~\cite{Vanasse:2014kxa}. The contribution to the energy shift is then
given by
\begin{multline}
\Delta E^{(1)}_{\ref{fig:DeltaE-conv}(a)} = \frac{1}{2\pi^3}
\int_0^\Lambda \mathrm{d} q_1 \,q_1^2 \int_0^\Lambda \mathrm{d} q_2 \,q_2^2\,
\BS^\mathrm{d,a}(q_1)\,D_d^{(0)}(-E_B,q_1) \\
\times K_\mathrm{bubble}(E;q_1,q_2)
\,D_d^{(0)}(-E_B,q_2)\,\BS^\mathrm{d,a}(q_2) \,.
\label{eq:DeltaE-bub}
\end{multline}
The analogous diagram with $np$ singlet propagators (not shown explicitly
in Fig.~\ref{fig:DeltaE-conv}) is given by essentially the same expression with
the replacements $\BS^\mathrm{d,a} \rightarrow \BS^\mathrm{d,b1}$ and $D_d \rightarrow D_t$. For the
``box'' and ``triangle'' contributions, Figs.~\ref{fig:DeltaE-conv}(b) and~(c),
the corresponding kernel functions are~\cite{Konig:2014ufa}
\begin{multline}
K_{\text{box}}(E;k,p) = -\alpha M_N \\
\times\frac12\int_{-1}^1\mathrm{d}\!\cos\theta\,
\Bigg\{\frac{\arctan\Big(\frac{2\mathbf{p}^2-\mathbf{k}^2-\mathbf{k}\cdot\mathbf{p}}
{\sqrt{3\mathbf{k}^2-4M_N E-\mathrm{i}\varepsilon}\sqrt{(\mathbf{k}-\mathbf{p})^2}}\Big)
+\arctan\Big(\frac{2\mathbf{k}^2-\mathbf{p}^2-\mathbf{k}\cdot\mathbf{p} }
{\sqrt{3\mathbf{p}^2-4M_N E-\mathrm{i}\varepsilon}\sqrt{(\mathbf{k}-\mathbf{p})^2}}\Big)}
{(\mathbf{k}^2+\mathbf{p}^2+\mathbf{k}\cdot\mathbf{p}-M_N E-\mathrm{i}\varepsilon)\sqrt{(\mathbf{k}-\mathbf{p})^2}} \\
- \frac{\lambda}{(\mathbf{k}^2+\mathbf{p}^2+\mathbf{k}\cdot\mathbf{p}-M_N E-\mathrm{i}\varepsilon)^2}
+ \mathcal{O}(\lambda^2) \Bigg\} \,,
\label{eq:K-box}
\end{multline}
and
\begin{subequations}%
\begin{equation}
K_\text{tri}^{(\text{out})}(E;k,p) = -\alpha M_N
\times\frac12\int_{-1}^1\mathrm{d}\!\cos\theta
\frac{\mathcal{I}_{\text{tri}}(E;\mathbf{k},\mathbf{p})}
{\mathbf{k}^2+\mathbf{p}^2+\mathbf{k}\cdot\mathbf{p}-M_N E-\mathrm{i}\varepsilon} \,,
\label{eq:K-tri-out}
\end{equation}
\begin{equation}
K_\text{tri}^{(\text{in})}(E;k,p) = K_\text{tri}^{(\text{out})}(E;p,k) \,,
\label{eq:K-tri-in}
\end{equation}
\label{eq:K-tri}%
\end{subequations}%
where the superscripts ``out'' and ``in'' indicate whether the Coulomb-photon
exchange is on the left or right side of the diagram. This notation is
taken over from Ref.~\cite{Konig:2014ufa}, which allowed for incoming and
outgoing $pd$ states. The loop function appearing in Eq.~\eqref{eq:K-tri-out}
is given by
\begin{multline}
\mathcal{I}_{\text{tri}}(E;\mathbf{k},\mathbf{p})
= \frac{\mathrm{i}}{2\sqrt{\mathbf{k}^2/4+\mathbf{k}\cdot\mathbf{p}+\mathbf{p}^2}} \\
\times\Bigg\{
\log\left(\frac{\mathrm{i}(\mathbf{k}^2/2-\mathbf{k}\cdot\mathbf{p}-\mathbf{p}^2-\lambda^2-M_N E-\mathrm{i}\varepsilon)}
{\sqrt{\mathbf{k}^2/4+\mathbf{k}\cdot\mathbf{p}+\mathbf{p}^2}}
+2\sqrt{\lambda^2+3\mathbf{k}^2/4-M_N E-\mathrm{i}\varepsilon}\right) \\
-\log\left(\frac{\mathrm{i}(\mathbf{k}^2+\mathbf{p}^2+\mathbf{k}\cdot\mathbf{p}-\lambda^2-M_N E-\mathrm{i}\varepsilon)}
{\sqrt{\mathbf{k}^2/4+\mathbf{k}\cdot\mathbf{p}+\mathbf{p}^2}}+2\lambda\right)\Bigg\} \,.
\end{multline}
As for $K_\mathrm{bubble}(E;k,p)$, explicit $S$-wave projections where the
integral over $\cos\theta$ has been carried out analytically can be found in
Ref.~\cite{Vanasse:2014kxa}. Resulting contributions to the energy shift are
of the form
\begin{multline}
\Delta E^{(1)}_{\ref{fig:DeltaE-conv}(b)} = \frac{1}{2\pi^3}
\int_0^\Lambda \mathrm{d} q_1 \,q_1^2 \int_0^\Lambda \mathrm{d} q_2 \,q_2^2\,
\BS^\mathrm{d,a}(q_1)\,D_d^{(0)}(-E_B,q_1) \\
\times K_\mathrm{box}(E;q_1,q_2)
\,D_d^{(0)}(-E_B,q_2)\,\BS^\mathrm{d,a}(q_2)
\label{eq:DeltaE-box}
\end{multline}
and
\begin{multline}
\Delta E^{(1)}_{\ref{fig:DeltaE-conv}(c)} = {-}\frac{3}{2\pi^3}
\int_0^\Lambda \mathrm{d} q_1 \,q_1^2 \int_0^\Lambda \mathrm{d} q_2 \,q_2^2\,
\BS^\mathrm{d,b1}(q_1)\,D_{t}^{(0)}(-E_B,q_1) \\
\times K_\mathrm{tri}^{(\mathrm{out})}(E;q_1,q_2)
\,D_d^{(0)}(-E_B,q_2)\,\BS^\mathrm{d,a}(q_2) \,,
\label{eq:DeltaE-tri}
\end{multline}
with analogous expressions for equivalent topologies but different combinations
of dibaryon propagators and vertex functions (see Ref.~\cite{Konig:2014ufa} for
details).
The above summarizes the diagrams included in the perturbative calculation of
Ref.~\cite{Konig:2014ufa}; all these contributions are convergent as the cutoff
$\Lambda$ is increased. As mentioned in the introduction, the contribution
from the diagram shown in Fig.~\ref{fig:DeltaE-bub-C}(a) has not been included
so far. This diagram is logarithmically divergent, but this is precisely the
same divergence of the one-photon bubble that we isolated in the new treatment
of the two-body sector discussed in Sec.~\ref{sec:Coulomb-Unitarity}. Hence,
it can be renormalized by including it together with the counterterm diagram
shown in Fig.~\ref{fig:DeltaE-bub-C}(b), which is proportional to
$\sigma_{t,pp}^{(1)}$ as given in Eq.~\eqref{eq:renorm-sigma-t-pp-1}. The
resulting contribution to the energy shift, written out explicitly, is
\begin{multline}
\Delta E^{(1)}_{\text{\ref{fig:DeltaE-bub-C}(a+b)}}
= \frac{3}{4\pi^2} \int_0^\Lambda \mathrm{d} q \,q^2\,
\frac{\abs{\BS^\mathrm{d,b2}(q)}^2}{\strut3q^2/4 + M_N E_B} \\
\times \left\{\dfrac{1}{a_C} - \alpha M_N\left[C_\Delta
+ \log\!\left(\dfrac{\alpha M_N}{2\sqrt{\mathstrut M_N E_B+3q^2/4}}\right)
\right]\right\} \,.
\label{eq:DeltaE-aC}
\end{multline}
Note that the constant $C_\zeta$ drops out here against the same contribution
from the photon bubble, \textit{cf.}\xspace~Eq.~\eqref{eq:delta-I0}. Expanding the
\emph{renormalized} $pp$ propagator of Refs.~\cite{Ando:2010wq,Konig:2014ufa} in
$\alpha$ gives Eq.~\eqref{eq:DeltaE-aC} with $C_\Delta \to C_E$,
as expected from their similar values (\textit{cf.}\xspace the discussion in
Sec.~\ref{sec:Coulomb-Unitarity}).
We stress here that the new approach takes all spin-singlet propagators in the
unitarity limit at LO. In the $pp$ channel, the finite scattering length
$a_C$ is included together with the single-photon bubble contribution,
resulting in Eq.~\eqref{eq:DeltaE-aC}. At the same time, we also include linear
insertions of $1/a_t$ in the $np$ spin-singlet channel, as given in
Eq.~\eqref{eq:3bscattlencor}.
\section{Results and discussion}
\label{sec:Results}
We summarize our new expansion as follows: at leading order, we include
\begin{itemize}
\item the standard $N\!N$ spin-triplet (pionless) amplitude (parameter $\gamma_d$),
\item the unitary $N\!N$ spin-singlet amplitude (parameter-free),
\item a contact three-body force (parameter $\Lambda_*$).
\end{itemize}
Our new NLO includes\footnote{These NLO contributions induce corrections to the
spin-triplet two- and three-body force parameters that already appeared at LO,
but these corrections introduce no new parameters.}
\begin{itemize}
\item the effective range in the $N\!N$ spin-triplet channel (parameter
$\rho_d$),
\item the isospin-symmetric range in the $N\!N$ spin-singlet channel (parameter
$r_t$),
\item a scattering-length correction to unitarity in the $N\!N$ spin-singlet
$np$ and $nn$ channels (parameter $a_t$),
\item a scattering-length correction to unitarity in the $N\!N$ $pp$ channel
(parameter $a_C$),
\item one-photon exchange (parameter $\alpha \approx 1/137$).
\end{itemize}
The two-body parameters we use in our numerical calculation are summarized
in Table~\ref{tab:Params}. For the nucleon mass we take $M_N = 938.918~\ensuremath{\mathrm{MeV}}$.
\begin{table}[tb]
\centering
\begin{tabular}{ccc}
Parameter & Value & Ref.\\
\hline\hline
\rule{0pt}{1.1em}$\gamma_d$ & $45.7~\ensuremath{\mathrm{MeV}}$ & \cite{vanderLeun:1982aa} \\
$\rho_d$ & $1.765~\ensuremath{\mathrm{fm}}$ & \cite{deSwart:1995ui} \\
$a_t$ & $-23.714~\ensuremath{\mathrm{fm}}$ & \cite{Preston:1975} \\
$r_t$ & $2.73~\ensuremath{\mathrm{fm}}$ & \cite{Preston:1975} \\
$a_C$ & $-7.8063~\ensuremath{\mathrm{fm}}$ & \cite{Bergervoet:1988zz}
\end{tabular}
\caption{Parameters used for the numerical calculation.}
\label{tab:Params}
\end{table}
Unitarity in the $N\!N$ spin singlet at LO means that our results for the
binding energies and scattering of nuclei differ from previous calculations,
for example the \ensuremath{{}^3\mathrm{H}}\xspace binding energy and $nd$ scattering in the doublet
channel~\cite{Bedaque:1999ve,Hammer:2000nf,Hammer:2001gh,Bedaque:2002yg,
Afnan:2003bs}. In order to facilitate the comparison with existing,
standard-LO results, we also consider below an ``incomplete'' new NLO, where
we set $\rho_d = r_t = 0$.
In the spin singlet we perform an extra $\aleph_0/Q$ expansion on top of the
standard $Q/\Lambda_{\slashed\pi}$ expansion. We first show that the $\aleph_0/Q$ expansion
gives results that are in good agreement with the standard leading-order $N\!N$
spin-singlet amplitude in the absence of Coulomb effects. As $\Lambda_*$ is
varied with fixed $N\!N$ input, doublet-channel observables change in a
correlated way. The simplest example is the Phillips
line~\cite{Phillips:1968zze} in the plane of \ensuremath{{}^3\mathrm{H}}\xspace binding energy and $nd$
scattering length, see Fig.~\ref{fig:Phillips-U}. Five curves are shown for a
three-body cutoff $\Lambda=2.4$~GeV; effects from further increasing the cutoff
are negligible. In three of the curves the ranges are set to zero. We see that
the new LO curve (with $a_t\to \infty$) is within 1\% of the standard LO curve
(with $a_t$ at its physical value), and the new LO+(incomplete)NLO (with
$a_t\to \infty$ at LO and $1/a_t$ at its physical value treated in first-order
perturbation theory, but zero range) is closer still. The inset magnifies a
region of the plot to show the small differences among these curves. This
agreement is not fortuitous and survives the inclusion of range corrections,
displayed in the other two curves. Again, the new LO+NLO curve ($a_t\to \infty$
at LO, and both physical $1/a_t$ and ranges in first-order perturbation theory)
is very close to the standard LO+NLO ($a_t$ at its physical value at LO, ranges
in first-order perturbation theory).
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth,clip]{Phillips-U}
\caption{Correlation (Phillips line) between \ensuremath{{}^3\mathrm{H}}\xspace binding energy (in \ensuremath{\mathrm{MeV}})
and doublet $nd$ scattering length (in \ensuremath{\mathrm{fm}}) at LO and NLO. The light (green)
and dark (blue) solid lines are the results of the standard expansion at LO and
LO+NLO. The light (green) dotted line is the new LO, the light (green) dashed
line is the new LO+NLO with ranges set to zero, and the dark (blue) dashed line
is the full new LO+NLO. All curves are for a three-body cutoff
$\Lambda=2.4$~GeV. Horizontal and vertical (black) dotted lines indicate
experimental values for binding energy and scattering length, respectively.}
\label{fig:Phillips-U}
\end{figure}
In Fig.~\ref{fig:Phillips-U} we also indicate the experimental values of the
\ensuremath{{}^3\mathrm{H}}\xspace binding energy and doublet $nd$ scattering length by, respectively,
horizontal and vertical lines. Leading-order curves (new as well as standard)
lie close to the experimental point. Next-order curves (new as well as
standard) are shifted slightly in the direction of the data, overshooting a bit.
We can use either the binding energy or the doublet-channel scattering length to
determine $\Lambda_*$, and then the other is a prediction that nearly agrees
with data. Here we use the \ensuremath{{}^3\mathrm{H}}\xspace binding energy, $E_B(\ensuremath{{}^3\mathrm{H}}\xspace)=8.48~\ensuremath{\mathrm{MeV}}$,
as input. At LO this is done by adjusting $h^{(0)}$; at NLO, $h^{(1)}$ ensures
that the \ensuremath{{}^3\mathrm{H}}\xspace binding energy remains at its experimental value. The same
procedure is used in the standard expansion with $a_t$ in the LO propagators,
and our values for $h^{(0,1)}$ come out very close to results in that
approach~\cite{Vanasse:2014kxa}. Then the $nd$ scattering length converges as
the cutoff $\Lambda$ increases, as shown in Fig.~\ref{fig:2and-U}, where
five curves analogous to those in Fig.~\ref{fig:Phillips-U} are displayed.
More generally, Fig.~\ref{fig:Phase-D-U} shows the predictions for the
doublet-channel $nd$ phase shifts at low momenta. Again, the effects of
treating the finite value of the singlet scattering length in perturbation
theory are small.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth,clip]{2and-U}
\caption{$nd$ doublet-channel scattering length (in $\ensuremath{\mathrm{fm}}$) as a function of the
cutoff $\Lambda$ (in $\ensuremath{\mathrm{MeV}}$). Numerically, the limit in Eq.~\eqref{eq:and-2}
has been taken by setting $k=0.01~\ensuremath{\mathrm{MeV}}$. Notation as in
Fig.~\ref{fig:Phillips-U}.}
\label{fig:2and-U}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth,clip]{Phase-D-U}
\caption{$nd$ spin-doublet phase shift (in degrees) at LO and NLO
as function of the center-of-mass momentum (in $\ensuremath{\mathrm{MeV}}$). Notation as in
Fig.~\ref{fig:Phillips-U}.}
\label{fig:Phase-D-U}
\end{figure}
Thus, the $\aleph_0/Q$ expansion works quite well in the absence of Coulomb
interactions. In fact, range corrections seem larger than those from the finite
singlet scattering length, which suggests that the $\aleph_0/Q$ expansion works
better than the $Q/\Lambda_{\slashed\pi}$ expansion. With the \ensuremath{{}^3\mathrm{H}}\xspace channel properly
renormalized and the three-body force fixed, we now consider Coulomb corrections
to the \ensuremath{{}^3\mathrm{He}}\xspace binding energy. Because our LO Lagrangian is isospin-symmetric, the
\ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace binding energy difference vanishes in our new LO, but it is a
prediction---a low-energy theorem---at NLO.
To gauge the effects of perturbative Coulomb corrections, we show in
Fig.~\ref{fig:He3-LO} the result of the new calculation presented here at NLO,
as a function of the cutoff. The photon mass $\lambda$ has been extrapolated to
zero in the same way as in Ref.~\cite{Konig:2014ufa}, that is,
a linear extrapolation based on the range $\lambda = 0.4\ldots0.6~\ensuremath{\mathrm{MeV}}$. All
Coulomb effects, including those in the $pp$ sector, are included fully
perturbatively here, meaning that we only consider matrix elements between
trinucleon wavefunctions that involve a single Coulomb-photon exchange. We find
that the inclusion of the renormalized Coulomb-bubble diagram,
Fig.~\ref{fig:DeltaE-bub-C}, ensures proper renormalization of the three-body
energy. This is in contrast with Ref.~\cite{Vanasse:2014kxa}, which resums
some Coulomb contributions already at LO and finds a logarithmic divergence at
NLO when $r_C = r_t$.
The new contribution also provides a sizable (as compared to the total energy
splitting) modification of the incomplete perturbative results of
Ref.~\cite{Konig:2014ufa}. It brings the full perturbative result very close to
the non-perturbative leading-order calculation of Ref.~\cite{Konig:2014ufa},
which extended the results of Ref.~\cite{Ando:2010wq} to much larger cutoff
values. This establishes that Coulomb effects really are a completely
perturbative correction in the \ensuremath{{}^3\mathrm{He}}\xspace bound state compared to the \ensuremath{{}^3\mathrm{H}}\xspace. We
stress that ``leading-order'' means something different in that paper than in
the new approach presented here: Ref.~\cite{Konig:2014ufa}, like pionless
calculations preceding it, resums certain Coulomb effects and includes the
scattering length in the leading-order spin-singlet propagators, whereas we take
those in the unitarity limit and only include the finite scattering lengths as
perturbative corrections. At the same time, our calculation thus shows that
this is a good approximation, as expected from the $\aleph_0/Q$ expansion, which
embodies the fact that these scattering lengths are very large.
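The linear photon-mass extrapolation described above can be sketched as follows; the $E_B(\lambda)$ values here are made-up placeholders, since the cutoff-by-cutoff numbers are not quoted in the text:

```python
# Sketch of the lambda -> 0 extrapolation (illustrative only): fit
# E_B(lambda) linearly on the window lambda = 0.4...0.6 MeV and read off
# the intercept. The energies below are hypothetical placeholders.
import numpy as np

lam = np.array([0.40, 0.45, 0.50, 0.55, 0.60])  # photon mass in MeV
e_b = np.array([7.70, 7.69, 7.68, 7.67, 7.66])  # placeholder E_B(lambda) in MeV

slope, intercept = np.polyfit(lam, e_b, 1)
print(f"extrapolated E_B(lambda -> 0) = {intercept:.3f} MeV")
```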
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth,clip]{He3-LO}
\caption{Results for the \ensuremath{{}^3\mathrm{He}}\xspace binding energy (in $\ensuremath{\mathrm{MeV}}$) as a function of the
cutoff (in $\ensuremath{\mathrm{MeV}}$). The upper solid curve shows the result of the
nonperturbative calculation presented in Ref.~\cite{Konig:2014ufa}; the result
of Ref.~\cite{Ando:2010wq} is shown as a red triangle. The incomplete
perturbative result of Ref.~\cite{Konig:2014ufa} is given by the lower solid
line. Crosses represent the new complete perturbative calculation up to linear
order in the spin-singlet scattering lengths. For comparison, dotted horizontal
lines indicate the experimental values for the \ensuremath{{}^3\mathrm{H}}\xspace and \ensuremath{{}^3\mathrm{He}}\xspace binding energies.
The photon mass $\lambda$ has been linearly extrapolated to zero.}
\label{fig:He3-LO}
\end{figure}
At \text{N$^2$LO}\xspace and higher, isospin-breaking effects from the quark masses and
higher-order electromagnetic effects will contribute. For example,
the $\mathcal{O}(\alpha^2)$ Coulomb contribution is attractive and tends to reduce
the splitting found at NLO. Unfortunately, further shorter-range interactions
will appear. $N\!N$ input can, to some extent, be determined from $N\!N$ data.
Even in this case, however, there may be an isospin-breaking three-body force
needed for proper renormalization. At that point, one can no longer predict the
binding energy difference, unless one can determine this force's parameter from
another isospin-violating observable.
One of the higher-order effects comes from isospin violation in the effective
ranges. At our NLO there is no isospin breaking in these, so $r_t=r_C$ and
there are no effective-range effects in the binding energy difference. This is
a feature by construction in our new approach: our LO state is isospin-symmetric
in the spin-singlet channels, so perturbative corrections from isospin-symmetric
ranges exactly cancel via the NLO adjustment of the existing three-nucleon force
that keeps the triton in the right place. Once one includes isospin breaking
in the spin-singlet ranges, $r_t \neq r_C$, at some higher order, one recovers
again the linear divergence (as a function of the ultraviolet cutoff $\Lambda$)
that has been identified in Ref.~\cite{Vanasse:2014kxa}.
Further contributions that are proportional to the effective ranges come from
the direct coupling of photons to the dibaryon fields, which are generated by
the covariant derivatives in Eqs.~\eqref{eq:L-3S1} and~\eqref{eq:L-1S0}. The
corresponding diagrams are shown in Fig.~\ref{fig:DeltaE-sim}. The expressions
for these diverge logarithmically as a function of the momentum cutoff
$\Lambda$, as identified in
Refs.~\cite{Koenig:2013,Vanasse:2014kxa,Konig:2014ufa}. Essentially, the
scaling of the diagrams in Fig.~\ref{fig:DeltaE-sim} is the same as for the
proton bubble with a single photon exchange that we discuss in
Sec.~\ref{sec:Coulomb}. Whereas in that case we had momentum-independent
vertices and a single nucleon propagator $\sim q^{-2}$ left in each loop (after
carrying out the energy integrals), we now get a factor $1/q$ each from the
ultraviolet behavior of the dibaryon propagators and trinucleon vertices,
respectively. Compared to the Coulomb correction~\eqref{eq:DeltaE-bub} the
diagrams shown in Fig.~\ref{fig:DeltaE-sim} are suppressed by $Q/\Lambda_{\slashed\pi}$.
This relative ordering is in fact exactly the same as in previous
calculations~\cite{Vanasse:2014kxa,Konig:2014ufa,Konig:2013cia}, but in our new
counting scheme it means that these diagrams are \text{N$^2$LO}\xspace.
Thus, these two divergences associated with the effective ranges appear at
higher orders in our approach. Note that Ref.~\cite{Vanasse:2014kxa} contains
a third source of divergence which is linear in the effective ranges: the
interference between non-perturbative Coulomb and perturbative range effects.
This additional logarithmic divergence occurs because
Ref.~\cite{Vanasse:2014kxa} employs the full Coulomb-dressed dibaryon
propagator at leading order. We emphasize that this divergence is absent in
our new approach where all Coulomb contributions are treated perturbatively.
We see here an example of the more general fact that, when singular interactions
are involved, the cutoff dependence of perturbative diagrams is not necessarily
the same as that of the resummed series. As a consequence, as noted above, no
new three-body interaction is needed for renormalization at NLO in our
new counting.
\begin{figure}[tb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=0.4275\textwidth,clip]{DeltaE-sim-d}\\[0.77em]
(a)
\end{minipage}\begin{minipage}{0.49\textwidth}
\centering
\vspace*{0.5em}
\includegraphics[width=0.441\textwidth,clip]{DeltaE-sim-t}\\[0.77em]
(b)
\end{minipage}
\caption{Combined ``Coulomb + range'' corrections contributing to the
trinucleon binding energy in perturbation theory,
a higher-order effect in our expansion.}
\label{fig:DeltaE-sim}
\end{figure}
The discussion here can be generalized to other isospin-violating effects. Our
expansion for explicit electromagnetic effects is in powers of $\alpha M_N/Q$,
which we are counting as $\aleph_0/Q$ and pairing, for simplicity, with the
standard pionless EFT expansion $Q/\Lambda_{\slashed\pi}$. In addition to photon exchange,
there are ``indirect'' electromagnetic effects that take place at short
distances and appear in the Lagrangian as interactions among nucleons.
Moreover, the up-down quark mass difference also generates isospin-breaking
interactions. The form of these interactions depends on the way isospin is
broken~\cite{VanKolck:1993ee,vanKolck:1995cb}. Since the quark masses break
charge symmetry (a rotation of $\pi$ around the second axis in isospin space),
their low-energy footprints will break charge symmetry as well, at least in
first-order perturbation theory. In contrast, electromagnetic interactions
break isospin more generally.
The most obvious consequence of isospin-breaking interactions is the
neutron-proton mass splitting $\delta M_N$. We can estimate this as
$\delta M_N=\mathcal{O}(\alpha M_N/(4\pi), m_u-m_d)$, where we included a $4\pi$
expected from a photon loop. It is well known that these effects have opposite
signs and are comparable in magnitude, but, as indicated by this estimate, the
quark-mass contribution is somewhat larger and makes the neutron heavier. One
might worry that this splitting, appearing as a mass term in the Lagrangian,
should be compared to the kinetic terms shown in Eq.~\eqref{eq:L-Nd}. However,
the nucleon mass splitting term can be removed by a redefinition of the nucleon
field~\cite{Friar:2004ca}, and once this is done the splitting appears only in
the kinetic term itself. It is thus $\mathcal{O}(\delta M_N/M_N)$ relative to leading
order, a very small effect.
The dominant isospin-breaking effects are expected to appear in the short-range
two-nucleon interactions, represented in Eq.~\eqref{eq:L-Nd} via dibaryon
fields. With the choice in Eq.~\eqref{eq:simple-y}, one counts the dibaryon
residual masses as small scales, \textit{e.g.}, $\sigma_t=\mathcal{O}(\aleph_0)$. To first order
in the isospin-breaking parameters, $\sigma_{t,nn}-\sigma_t$ is proportional
to $m_u-m_d$, while $\sigma_{t,pp}-\sigma_t$ is proportional to both $(m_u-m_d)$
and $\alpha M_N$. The largest sizes they are expected to have are
$(\sigma_{t,nn}-\sigma_t)/\sigma_t = \mathcal{O}((m_u-m_d)/\aleph_0)$ and
$(\sigma_{t,pp}-\sigma_t)/\sigma_t = \mathcal{O}(\alpha M_N/\aleph_0,
(m_u-m_d)/\aleph_0)$. The electromagnetic contribution in the $pp$ channel of
$\mathcal{O}(\alpha M_N/\aleph_0) = \mathcal{O}(1)$ is just the one required to renormalize
Coulomb treated as NLO; the $\alpha M_N$ appears explicitly in
Eq.~\eqref{eq:renorm-sigma-t-pp-1}. This counting is consistent since it yields
$a_t/a_{C}-1 = \mathcal{O}(1)$, while empirical values give $a_t/a_{C}-1 \approx 2$.
How we count the quark-mass effects is a matter of choice, since $m_u-m_d$ is at
the QCD level an independent parameter. The estimate above suggests
$a_t/a_{t,nn}-1 = \mathcal{O}((m_u-m_d)/\aleph_0) \sim 0.3$, again consistent with the
standard value $a_{t,nn}\simeq -18.7$~fm~\cite{GonzalezTrotter:2006wz}, which
gives $a_t/a_{t,nn}-1 \approx 0.25$. However, there are significant
uncertainties in the value of $a_{t,nn}$. For example, a value $a_{t,nn}
\simeq -16.1$~fm has also been obtained~\cite{Huhn:2001yk}, which would mean
more significant quark mass effects. Conversely, a value closer to $a_t$ would
more clearly indicate $m_u-m_d$ as a separate scale, much smaller than
$\aleph_0$. In Ref.~\cite{Kirscher:2011zn} a pionless EFT analysis of
the trinucleon energy splitting $\Delta E$
was carried out at LO in the standard power counting with an additional
quark-mass, isospin-breaking $N\!N$ interaction. In this case, $\Delta E$ is
correlated with $a_{t,nn}$, and one can use the experimental value of the former
to determine the latter. It was found that $a_{t,nn}\simeq -(22.9\pm 4.1)$~fm,
which gives $a_t/a_{t,nn}-1$ ranging from $-0.1$ to $0.25$. This suggests that
the pairing $m_u-m_d \sim \alpha M_N/4\pi$ that one could infer from the
nucleon mass splitting can be applied to $N\!N$ interactions as well, which
results in quark-mass effects about an order of magnitude below electromagnetic
ones. We therefore took in this paper the standpoint that these are \text{N$^2$LO}\xspace
effects and do not contribute to the order we were working. Once the most
important quark mass contribution is relegated to an order where new,
undetermined counterterms appear, it is no longer possible to constrain
$a_{t,nn}$ from $\Delta E$~\cite{Hammer:2014rba}.
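The ratio estimates quoted in this discussion can be checked directly from the scattering lengths used here (Table~\ref{tab:Params}) and the cited $nn$ values; the following snippet simply redoes that arithmetic:

```python
# Check of the isospin-breaking ratios quoted in the text (lengths in fm).
a_t = -23.714     # np spin-singlet scattering length
a_C = -7.8063     # pp (Coulomb-modified) scattering length
a_nn_1 = -18.7    # nn value of GonzalezTrotter et al.
a_nn_2 = -22.9    # central nn value inferred in Kirscher et al.

print(a_t / a_C - 1)     # about 2, consistent with O(alpha*M_N/aleph_0) = O(1)
print(a_t / a_nn_1 - 1)  # about 0.25
print(a_t / a_nn_2 - 1)  # about 0.04, within the quoted range -0.1 ... 0.25
```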
For higher terms in the $Q/\Lambda_{\slashed\pi}$ expansion, similar arguments can be used,
but now taking into account that their parameters are determined by the
high scales as given by the pionless EFT power counting. For
example~\cite{vanKolck:1997ut,Kaplan:1998tg,Kaplan:1998we,vanKolck:1998bw},
$c_t = \mathcal{O}(M_N/\Lambda_{\slashed\pi})$, so we expect ${(c_{t,nn}-c_t)/c_t} =
\mathcal{O}((m_u-m_d)/\Lambda_{\slashed\pi})$ and $(c_{t,pp}-c_t)/c_t = \mathcal{O}(\alpha M_N/\Lambda_{\slashed\pi},
(m_u-m_d)/\Lambda_{\slashed\pi})$. These relations indicate a suppression of
$\mathcal{O}(\aleph_0/\Lambda_{\slashed\pi})$ relative to breaking in $\sigma_{t(,\cdot\cdot)}$.
They imply $r_{C}/r_t-1 = \mathcal{O}(\alpha M_N/\Lambda_{\slashed\pi})\sim 0.05$, in agreement
with $r_{C}/r_t-1\approx 0.02$ from empirical values. Thus, it is consistent to
take the electromagnetic isospin-breaking range as an \text{N$^2$LO}\xspace effect, with the
prediction~\eqref{eq:rCprediction} valid up to about 5\%. As we pointed out
above, a linear divergence appears in the three-nucleon system which then
requires a new, isospin-breaking three-body force at the same order.
Our result for the \ensuremath{{}^3\mathrm{He}}\xspace binding energy is
\begin{multline}
E_B(\ensuremath{{}^3\mathrm{He}}\xspace)^{\text{LO+NLO}} = E_B(\ensuremath{{}^3\mathrm{H}}\xspace) + \Delta E^{\text{NLO}} \\
= 8.48~\ensuremath{\mathrm{MeV}} - (0.86\pm 0.17)~\ensuremath{\mathrm{MeV}}
= (7.62\pm 0.17)~\ensuremath{\mathrm{MeV}} \,,
\label{eq:He3-result}
\end{multline}
where we estimated the error in the energy difference as $\mathcal{O}(\alpha M_N/Q,
(m_d - m_u)/\aleph_0) \sim 20\%$---slightly larger than the ratio NLO/LO.
Eq.~\eqref{eq:He3-result} represents 98.7\% of the observed value,
$E_B(\ensuremath{{}^3\mathrm{He}}\xspace)^{\text{exp}} \approx 7.72~\ensuremath{\mathrm{MeV}}$. There is room for higher-order
contributions, but most of the splitting is accounted for at \text{NLO}\xspace.
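The arithmetic behind Eq.~\eqref{eq:He3-result} and the quoted 98.7\% can be reproduced directly:

```python
# Numbers entering Eq. (eq:He3-result), all in MeV.
E_B_H3 = 8.48        # 3H binding energy (input)
dE_NLO = -0.86       # predicted 3H-3He splitting at NLO
dE_err = 0.17        # ~20% uncertainty estimate
E_B_He3_exp = 7.72   # experimental 3He binding energy

E_B_He3 = E_B_H3 + dE_NLO
print(f"E_B(3He) = {E_B_He3:.2f} +/- {dE_err:.2f} MeV")
print(f"fraction of experiment: {E_B_He3 / E_B_He3_exp:.1%}")  # 98.7%
```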
We emphasize that the error in Eq.~\eqref{eq:He3-result} should be understood
as a rough estimate of higher-order contributions. The above value comes
from taking the mean of $a_C$ and $a_t$ for $1/\aleph_0$. By using the
physical value for $a_C$ as \text{NLO}\xspace input, we are overestimating the magnitude of
electromagnetic effects, because \text{N$^2$LO}\xspace corrections of both electromagnetic
and quark-mass origin contribute to the $pp$ scattering length. Potential-model
calculations of photon exchange (for example, Ref.~\cite{Friar:1987zzc}) tend
to give a $\Delta E$ consistent with an almost model-independent estimate of
about 680 keV~\cite{Brandenburg:1978aa}, but not the full isospin violation in
the $N\!N$ scattering lengths. Our result cannot be directly compared with
these older perturbative-photon calculations because we include also
shorter-range interactions---as required to ensure renormalization---that give
the measured $pp$ scattering length. However, the EFT allows us to directly
study effects when $a_C$ is closer to $a_t$, effectively simulating a
potential-model calculation with an interaction that does not give the physical
splitting in the scattering lengths. For example, we find that $a_C = -9.0\;
(-10.0)~\ensuremath{\mathrm{fm}}$ leads to a trinucleon splitting of about $700$ $(600)~\ensuremath{\mathrm{keV}}$, and we
can thus confirm that the older calculations are consistent with an uncertainty
of about 20\% in $a_C$, in line with our estimate of higher-order contributions
above.
\section{Summary and outlook}
\label{sec:Conclusion}
In this work, we have established a rearrangement of the perturbative expansion
in pionless effective field theory that takes the spin-singlet nucleon--nucleon
channels in the unitarity limit. Not only does this allow us to demonstrate
quantitatively that nature is very close to this scenario, it also enables
us to include Coulomb corrections perturbatively on the same footing. An
important ingredient to this is the consistent isolation of the divergent
one-photon piece in the $pp$ sector, which then guarantees the renormalization
of the corresponding contribution in the three-nucleon sector.
By studying the trinucleon bound-state sector, we confirm that the new
perturbative expansion works very well: both the Phillips line and
doublet-channel $nd$ phase shifts are barely changed by perturbative
corrections that include the actual finiteness of these quantities, proving
that the new leading order is a good starting point for a perturbative
expansion. By comparison with previous nonperturbative calculations of the
energy splitting in the \ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace iso-doublet, we furthermore show that the
Coulomb interaction is indeed a completely perturbative effect in these bound
states. Our new NLO with effective-range corrections set to zero is almost on
top of previous leading-order calculations that resum certain (but not all)
Coulomb contributions. The same holds when isospin-symmetric ranges are
included. There is no new divergence at this order, which allows us to predict
the energy splitting as $\Delta E^{\text{NLO}} = {-}(0.86\pm 0.17)$ MeV,
to be compared with the experimental value $\Delta E^{\text{exp}} \simeq
-0.764$~MeV.
The main point of our reorganization is that, by treating small quantities in
perturbation theory, we remove inessential parameters from the lowest orders.
For example, a 30\% accuracy for nuclear observables susceptible to pionless
EFT should be obtained from only two parameters at LO, with four other
parameters at NLO ensuring a 10\% accuracy or better. An investigation of this
reorganization in heavier systems is our next goal.
Further improvement comes from higher orders. Isospin-symmetric effective
ranges do not contribute to the \ensuremath{{}^3\mathrm{H}}\xspace--\ensuremath{{}^3\mathrm{He}}\xspace energy difference. The inclusion
of isospin breaking in these effective ranges will recover the previously
identified divergence generated by this effect; the same is true for
contributions that are of the same order in $\alpha$ as other Coulomb effects
included here, but suppressed further by $\rho_d,r_t \sim 1/\Lambda_{\slashed\pi}$. Confirming
that these contributions are indeed best accounted for at \text{N$^2$LO}\xspace, and carrying
out a fully consistent renormalization at this order, is beyond the scope of
this work and relegated to future investigations.
\begin{acknowledgments}
We thank Giovanni Chirilli, Sid Coon, Dick Furnstahl, Doron Gazit, and
Johannes Kirscher for useful discussions, and the latter also for comments on
the manuscript. Three of us (SK, HWG, UvK) are grateful to the organizers and
participants of the workshop \textsc{Lattice Nuclei, Nuclear Physics and QCD --
Bridging the Gap} at the ECT*, Trento (Italy), for stimulating presentations and
an atmosphere which inspired the completion of this article. This material is
based upon work supported in part by the NSF under Grant No. PHY--1306250 (SK),
by the NUCLEI SciDAC Collaboration under DOE Grant DE-SC0008533 (SK), by the
Dean's Research Chair programme of the Columbian College of Arts and Sciences
of The George Washington University (HWG), by the U.S. Department of Energy,
Office of Science, Office of Nuclear Physics, under Award Numbers
DE-FG02-95ER-40907 and DE-SC0015393 (HWG) and DE-FG02-04ER41338 (UvK), by the
BMBF under contracts 05P12PDFTE and 05P15RDFN1 (HWH), and by the Helmholtz
Association under contract HA216/EMMI (HWH).
\end{acknowledgments}
\section{Introduction and main results}
Throughout this paper, we assume that the reader is familiar with the fundamental results and the standard notation of Nevanlinna's value distribution theory; see \cite{h3,y1,y2}. In what follows, a meromorphic function means a function meromorphic in the whole complex plane. Define
$$\rho(f)=\varlimsup_{r\rightarrow\infty}\frac{\log^{+}T(r,f)}{\log r},\qquad
\rho_{2}(f)=\varlimsup_{r\rightarrow\infty}\frac{\log^{+}\log^{+}T(r,f)}{\log r},$$
the order and the hyper-order of $f$, respectively. When $\rho(f)<\infty$, we say that $f$ is of finite order.
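As a numerical illustration of this definition (an aside, not part of the original text): for $f(z)=e^{z}$ one has $T(r,f)=r/\pi$, so the quotient $\log^{+}T(r,f)/\log r$ tends to $1$, i.e.\ $\rho(e^{z})=1$.

```python
# Illustration of the order: for f(z) = e^z, T(r, f) = r/pi, so
# log^+ T(r, f) / log r -> 1 as r -> infinity, giving rho(e^z) = 1.
import math

def order_quotient(r):
    T = r / math.pi                        # T(r, e^z) = r/pi
    return math.log(max(T, 1.0)) / math.log(r)

for r in (1e2, 1e4, 1e8):
    print(order_quotient(r))               # increases toward 1
```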
By $S(r,f)$ we denote any quantity satisfying $S(r,f)=o(T(r,f))$ as $r\to\infty$, outside of a possible exceptional set of finite logarithmic measure. A meromorphic function $a(z)$ satisfying $T(r,a)=S(r,f)$ is called a small function of $f$. We denote by $S(f)$ the family of all small meromorphic functions of $f$, which includes the constants in $\mathbb{C}$, and we set $\hat{S}(f)=S(f)\cup\{\infty\}$. We say that two non-constant meromorphic functions $f$ and $g$ share a small function $a$ CM (IM) if $f-a$ and $g-a$ have the same zeros counting multiplicities (ignoring multiplicities). Moreover, we introduce the following notation: $S_{(m,n)}(a)=\{z \mid z$ is a common zero of $f(z+c)-a(z)$ and $f(z)-a(z)$ with multiplicities $m$ and $n$, respectively$\}$; $\overline{N}_{(m,n)}(r,\frac{1}{f-a})$ denotes the counting function of $f$ with respect to the set $S_{(m,n)}(a)$; $\overline{N}_{n)}(r,\frac{1}{f-a})$ denotes the counting function of all distinct zeros of $f-a$ with multiplicities at most $n$; and $\overline{N}_{(n}(r,\frac{1}{f-a})$ denotes the counting function of all zeros of $f-a$ with multiplicities at least $n$.
We say that two non-constant meromorphic functions $f$ and $g$ share a small function $a$ CM (IM) almost if
$$N(r,\frac{1}{f-a})+N(r,\frac{1}{g-a})-2N(r,f=a=g)=S(r,f)+S(r,g),$$
or
$$\overline{N}(r,\frac{1}{f-a})+\overline{N}(r,\frac{1}{g-a})-2\overline{N}(r,f=a=g)=S(r,f)+S(r,g),$$
respectively.
For a meromorphic function $f(z)$, we denote its shift by $f_{c}(z)=f(z + c)$.
Rubel and Yang \cite{ruy} studied the uniqueness of an entire function sharing values with its first derivative, and proved the following result.
\
{\bf Theorem A} \ Let $f(z)$ be a non-constant entire function, and let $a, b$ be two finite distinct complex values. If $f(z)$ and $f'(z)$
share $a, b$ CM, then $f(z)\equiv f'(z)$.
Zheng and Wang \cite{zw} improved Theorem A and proved
\
{\bf Theorem B} \ Let $f(z)$ be a non-constant entire function, and let $a(z)\not\equiv\infty, b(z)\not\equiv\infty$ be two distinct small functions of $f(z)$. If $f(z)$ and $f^{(k)}(z)$ share $a(z), b(z)$ CM, then $f(z)\equiv f^{(k)}(z)$.
Li and Yang \cite {ly3} improved Theorem B and proved
\
{\bf Theorem C} \ Let $f(z)$ be a non-constant entire function, and let $a(z)\not\equiv\infty, b(z)\not\equiv\infty$ be two distinct small functions of $f(z)$. If $f(z)$ and $f^{(k)}(z)$
share $a(z)$ CM and share $b(z)$ IM, then $f(z)\equiv f^{(k)}(z)$.
Recently, the value distribution of meromorphic functions concerning difference analogues has become a popular research topic; see [1-9, 12-14, 16-18].
Heittokangas et al.\ \cite{hkl} obtained the following analogue of Theorem A concerning shifts.
\
{\bf Theorem D}
Let $f(z)$ be a non-constant entire function of finite order, let $c$ be a nonzero finite complex value, and let $a, b$ be two finite distinct complex values.
If $f(z)$ and $f(z+c)$ share $a, b$ CM, then $f(z)\equiv f(z+c).$
In 2022, Huang \cite{h} obtained
\
{\bf Theorem E}
Let $f(z)$ be a transcendental entire function of finite order, let $\eta\neq0$ be a finite complex number, let $n\geq1, k\geq0$ be two integers, and let $a_{1}, a_{2}$ be two distinct finite complex values. If $f(z)$ and $(\Delta_{\eta}^{n}f(z))^{(k)}$ share $a_{1}$ CM and share $a_{2}$ IM, then either $f(z)\equiv(\Delta_{\eta}^{n}f(z))^{(k)}$ or $a_{1}=2a_{2}$,
$$f(z)\equiv a_{2}e^{2(cz+d)}-2a_{2}e^{cz+d}+2a_{2},$$
and
$$(\Delta_{\eta}^{n}f(z))^{(k)}\equiv a_{2}e^{cz+d},$$
where $c=(-2)^{-\frac{n+1}{k}}$ for $k\geq1$ and $d$ are two finite constants.
In the following, we define the differential polynomial of a meromorphic function. Let $f(z)$ be a non-constant entire function and
\begin{align}
g(z)=b_{-1}+\sum_{i=0}^{n}b_{i}f^{(k_{i})}(z+c_{i}),
\end{align}
where $b_{-1}$ and $b_{i}\ (i=0,\ldots,n)$ are small meromorphic functions of $f$, $k_{i}\geq0\ (i=0,\ldots,n)$ are integers, and $c_{i}\ (i=0,\ldots,n)$ are finite complex numbers.
In view of the above theorems, it is natural to ask whether the two finite complex values can be replaced by two distinct small functions, and whether $f'$ can be replaced by $g$.
In this article, we give a positive answer. In fact, we prove the following more general result.
\
{\bf Theorem 1}
Let $f(z)$ be a transcendental meromorphic function of $\rho_{2}(f)<1$ such that $\overline{N}(r,f)=S(r,f)$, let $g(z)$ be a differential polynomial of $f$ as defined in (1.1), and let $a_{1}, a_{2}$ be two distinct finite complex numbers. If $f(z)$ and $g(z)$ share $a_{1}, \infty$ CM and $a_{2}$ IM, then either $f(z)\equiv g(z)$ or
$$f(z)\equiv a_{2}+(a_{1}-a_{2})(h-1)^{2},$$
and
$$g(z)\equiv a_{1}+(a_{1}-a_{2})(h-2),$$
where $h(z)$ is a non-constant meromorphic function of $\rho_{2}(h)<1$.
\
{\bf Remark 1} When $a_{1}$ and $a_{2}$ are two distinct small functions of $f$, it is easy to see in the following proofs that Lemma 2.4 and Lemma 2.5 are still true under the assumptions that $N(r,a_{1})+N(r,a_{2})+\overline{N}(r,f)=S(r,f)$ and $f$ and $g$ share $a_{1}$ CM almost and $a_{2}$ IM almost. So we can know that Theorem 1 is still true when $f$ and $g$ share $a_{1}$ CM almost and $a_{2}$ IM almost.
\
{\bf Corollary 1} Let $f(z)$ be a transcendental meromorphic function of $\rho_{2}(f)<1$, let $c$ be a nonzero finite value, let $k$ be a positive integer, and let $a(z)\not\equiv\infty, b(z)\not\equiv\infty\in \hat{S}(f)$ be two distinct small functions. If $f^{(k)}(z+c)$ and $f(z)$ share $a(z),\infty$ CM and share $b(z)$ IM, then $f(z)\equiv f^{(k)}(z+c)$.
\
{\bf Example 1} \cite{hf}
Let $f(z)=\frac{2}{1-e^{-2z}}$, and let $c=\pi i$. Then $f'(z)$ and $f(z+c)$ share $0$ CM and share $1,\infty$ IM, but $f'(z)\not\equiv f(z+c)$.
This example shows that, for meromorphic functions, the conclusion of Theorem 1 may fail when sharing $\infty$ CM is weakened to sharing $\infty$ IM, at least for $k=1$. We believe there are such examples for any $k$, but we cannot construct them.
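A numerical spot-check of Example 1 (assuming the closed form $f'(z)=-4e^{-2z}/(1-e^{-2z})^{2}$, obtained by direct differentiation):

```python
# Spot-check of Example 1: f(z) = 2/(1 - e^{-2z}), c = pi*i. Then f is
# pi*i-periodic, so f(z+c) = f(z), and at a point where f(z+c) = 1 one
# also has f'(z) = 1, consistent with sharing the value 1.
import cmath

def f(z):
    return 2 / (1 - cmath.exp(-2 * z))

def fprime(z):
    return -4 * cmath.exp(-2 * z) / (1 - cmath.exp(-2 * z)) ** 2

c = cmath.pi * 1j
z0 = -cmath.pi * 1j / 2                 # here e^{-2 z0} = -1, so f(z0 + c) = 1
print(abs(f(z0 + c) - 1))               # ~ 0
print(abs(fprime(z0) - 1))              # ~ 0
print(abs(f(1 + 2j + c) - f(1 + 2j)))   # periodicity: ~ 0
```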
As for $k=0$, Li and Yi \cite{ly} obtained
\
{\bf Theorem F} Let $f(z)$ be a transcendental entire function of $\rho_{2}(f)<1$, let $c$ be a nonzero finite value, and let $a(z)\not\equiv\infty, b(z)\not\equiv\infty\in \hat{S}(f)$ be two distinct small functions. If $f(z)$ and $f(z+c)$ share $a(z)$ and $b(z)$ IM, then $f(z)\equiv f(z+c)$.
\
{\bf Remark 2} Theorem F holds when $f(z)$ is a non-constant meromorphic function of $\rho_{2}(f)<1$ such that $\overline{N}(r,f)=S(r,f)$.
Heittokangas et al.\ \cite{hkl1} proved the following.
\
{\bf Theorem G} Let $f(z)$ be a non-constant meromorphic function of $\rho_{2}(f)<1$, let $c$ be a nonzero finite value, and let $a_{1}(z)\not\equiv\infty$, $a_{2}(z)\not\equiv\infty$ and $a_{3}(z)\not\equiv\infty\in \hat{S}(f)$ be three distinct small functions such that $a_{1}(z)$, $a_{2}(z)$ and $a_{3}(z)$ are periodic functions with period $c$. If $f(z)$ and $f(z+c)$ share $a_{1}(z),a_{2}(z)$ CM, and $a_{3}(z)$ IM, then $f(z)\equiv f(z+c)$.
It is natural to ask whether the small periodic function $a_{3}(z)$ of $f(z)$ can be replaced by an arbitrary small function of $f(z)$.\\
In this paper, we obtain our second result.
\
{\bf Theorem 2} Let $f(z)$ be a transcendental meromorphic function of $\rho_{2}(f)<1$, let $c$ be a nonzero finite value, and let $a_{1}(z)\not\equiv\infty, a_{2}(z)\not\equiv\infty\in \hat{S}(f)$ be two distinct small functions of $f(z)$ such that $a_{1}(z)$ is a periodic function with period $c$ and $a_{2}(z)$ is an arbitrary small function of $f(z)$. If $f(z)$ and $f(z+c)$ share $a_{1}(z),\infty$ CM and share $a_{2}(z)$ IM, then either $f(z)\equiv f(z+c)$ or $$e^{p(z)}\equiv \frac{f(z+c)-a_{1}(z+c)}{f(z)-a_{1}(z)}\equiv \frac{a_{2}(z+c)-a_{1}(z+c)}{a_{2}(z)-a_{1}(z)},$$
where $p(z)$ is a non-constant entire function of $\rho(p)<1$ such that $e^{p(z+c)}\equiv e^{p(z)}$.
We can obtain the following corollary from the proof of Theorem 2.
\
{\bf Corollary 1} Under the same conditions as in Theorem 2, $f(z)\equiv f(z+c)$ holds if one of the following conditions is satisfied:\\
(i) $a_{2}(z)$ is a periodic function with period $c$ or $2c$;\\
(ii) $\rho(a_{2}(z))<\rho(e^{p(z)})$;\\
(iii) $\rho(a_{2}(z))<1$.
\
{\bf Example 2} \ Let $f(z)=\frac{e^{z}}{1-e^{-2z}}$, and let $c=\pi i$. Then $f(z+c)=\frac{-e^{z}}{1-e^{-2z}}$, and $f(z)$ and $f(z+c)$ share $0,\infty$ CM, but $f(z)\not\equiv f(z+c)$.
\
{\bf Example 3} \ Let $f(z)=e^{z}$, and let $c=\pi i$. Then $f(z+c)=-e^{z}$, so $f(z)$ and $f(z+c)$ share $0,\infty$ CM and attain different values everywhere in the complex plane, but $f(z)\not\equiv f(z+c)$.
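Both examples can be checked by direct substitution, using $e^{z+\pi i}=-e^{z}$ and $e^{-2(z+\pi i)}=e^{-2z}$:
\begin{align*}
\text{in Example 2,}\quad f(z+\pi i)=\frac{-e^{z}}{1-e^{-2z}}=-f(z);\qquad
\text{in Example 3,}\quad f(z)-f(z+\pi i)=2e^{z}\neq0.
\end{align*}
In Example 2, $f=\frac{e^{3z}}{e^{2z}-1}$ omits $0$ and has simple poles exactly on $\{e^{2z}=1\}$, and the same holds for $-f$, so $0$ and $\infty$ are shared CM although $f(z+c)\equiv-f(z)\not\equiv f(z)$. In Example 3, $e^{z}$ and $-e^{z}$ omit both $0$ and $\infty$, so these values are shared CM vacuously, yet the two functions agree nowhere.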
The above two examples show that the condition ``2CM+1IM'' is necessary.
\
{\bf Example 4} Let $f(z)=e^{e^{z}}$, then $f(z+\pi i)=\frac{1}{e^{e^{z}}}$. It is easy to verify that $f(z)$ and $f(z+\pi i)$ share $0, 1, \infty$ CM, but $f(z)=\frac{1}{f(z+\pi i)}$. On the other hand, we obtain $f(z)=f(z+2\pi i)$.
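The relations in Example 4 follow at once from $e^{z+\pi i}=-e^{z}$ and $e^{z+2\pi i}=e^{z}$:
\begin{align*}
f(z+\pi i)=e^{-e^{z}}=\frac{1}{f(z)},\qquad f(z+2\pi i)=e^{e^{z}}=f(z).
\end{align*}
Both $f$ and $1/f$ are entire and zero-free, so $0$ and $\infty$ are shared CM vacuously, and $f=1$ exactly when $e^{z}\in2\pi i\mathbb{Z}\setminus\{0\}$, where both $f-1$ and $1/f-1$ have simple zeros; hence $0,1,\infty$ are shared CM.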
Example 4 tells us that if we drop the assumption $\rho_{2}(f)<1$, a different relation may occur.
By Theorem 1 and Theorem 2, we still believe that the second alternative in Theorem 2 can be removed, that is, only the case $f(z)\equiv f(z+c)$ occurs. So we raise the following conjecture.
\
{\bf Conjecture} Under the same conditions as in Theorem 2, must $f(z)\equiv f(z+c)$?
\section{Some Lemmas}
\begin{lemma}\label{1}\cite{h3} Let $f$ be a non-constant meromorphic function of $\rho_{2}(f)<1$, and let $c$ be a non-zero complex number. Then
$$m(r,\frac{f(z+c)}{f(z)})=S(r, f),$$
for all $r$ outside a possible exceptional set $E$ of finite logarithmic measure.
\end{lemma}
\begin{lemma}\label{2}\cite{h3} Let $f$ be a non-constant meromorphic function of $\rho_{2}(f)<1$, and let $c$ be a non-zero complex number. Then
$$T(r,f(z))=T(r,f(z+c))+S(r,f),$$
for all $r$ outside a possible exceptional set $E$ of finite logarithmic measure.
\end{lemma}
\begin{lemma}\label{3}\cite{hk3,y1,y2} Let $f_{1}$ and $f_{2}$ be two non-constant meromorphic functions in $|z|<\infty$, then
$$N(r,f_{1}f_{2})-N(r,\frac{1}{f_{1}f_{2}})=N(r,f_{1})+N(r,f_{2})-N(r,\frac{1}{f_{1}})-N(r,\frac{1}{f_{2}}),$$
where $0<r<\infty$.
\end{lemma}
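As a quick sanity check of Lemma 2.3, take $f_{1}(z)=z$ and $f_{2}(z)=\frac{1}{z-1}$, so that $f_{1}f_{2}=\frac{z}{z-1}$ has its only pole at $1$ and its only zero at $0$. For $r>1$,
\begin{align*}
N(r,f_{1}f_{2})-N(r,\frac{1}{f_{1}f_{2}})&=\log r-\log r=0,\\
N(r,f_{1})+N(r,f_{2})-N(r,\frac{1}{f_{1}})-N(r,\frac{1}{f_{2}})&=0+\log r-\log r-0=0,
\end{align*}
in agreement with the lemma; the identity holds because the multiplicities of zeros and poles add under multiplication.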
\begin{lemma}\label{4} Let $f(z)$ be a transcendental meromorphic function of $\rho_{2}(f)<1$ such that $\overline{N}(r,f)=S(r,f)$, let $g(z)$ be a differential polynomial of $f$ as defined in (1.1), and let $a_{1}, a_{2}$ be two distinct finite complex numbers. If
$$H=\frac{f'}{(f-a_{1})(f-a_{2})}-\frac{g'}{(g-a_{1})(g-a_{2})}\equiv0,$$
and $f$ and $g$ share $a_{1}$ CM and $a_{2}$ IM, then either $2T(r,f)\leq\overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f)$, or $f=g$.
\end{lemma}
\begin{proof}
Integrating $H\equiv0$ leads to
$$\frac{g-a_{2}}{g-a_{1}}=C\frac{f-a_{2}}{f-a_{1}},$$
where $C$ is a nonzero constant.\\
If $C=1$, then $f=g$. If $C\neq1$, then from above, we have
$$\frac{a_{1}-a_{2}}{g-a_{1}}\equiv \frac{(C-1)f-Ca_{2}+a_{1}}{f-a_{1}},$$
and
$$T(r,f)=T(r,g)+S(r,f)+S(r,g).$$
It follows that $\overline{N}(r,\frac{1}{f-\frac{Ca_{2}-a_{1}}{C-1}})\leq\overline{N}(r,g)+S(r,f)=S(r,f)$. Then by the Second Fundamental Theorem,
\begin{eqnarray*}
\begin{aligned}
2T(r,f)&\leq \overline{N}(r,f)+\overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+\overline{N}(r,\frac{1}{f-\frac{Ca_{2}-a_{1}}{C-1}})+S(r,f)\\
&\leq \overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f),
\end{aligned}
\end{eqnarray*}
that is $2T(r,f)\leq \overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f)$.
\end{proof}
\begin{lemma}\label{5} Let $f(z)$ be a transcendental meromorphic function of $\rho_{2}(f)<1$ such that $\overline{N}(r,f)=S(r,f)$, let $g(z)$ be a differential polynomial of $f$ as defined in (1.1), and let $a_{1}, a_{2}$ be two distinct finite complex numbers. If $f(z)$ and $g(z)$ share $a_{1},\infty$ CM, and $N(r,\frac{1}{g(z)-(b_{-1}+b_{0}a_{1})})=S(r,f)$, then there exist meromorphic functions $h$ and $H$ such that either $g=Hh+G$, where $H\not\equiv0$ and $G=b_{-1}+b_{0}a_{1}$ are two small functions of $h$, or $T(r,h)=S(r,f)$.
\end{lemma}
\begin{proof}
Since $f(z)$ is a non-constant meromorphic function of $\rho_{2}(f)<1$ and $f(z)$ and $g(z)$ share $a_{1},\infty$ CM, there is a meromorphic function $h$ such that
\begin{align}
f-a_{1}=h(g-G)+h(G-a_{1}),
\end{align}
where $G(z)=b_{-1}+b_{0}a_{1}$, and the zeros and poles of $h$ come from the zeros and poles of $b_{-1}$ and $b_{i}$ $(i=0,1,\ldots,n)$.
Suppose on the contrary that $T(r,h)\neq S(r,f)$. Set $Q=g-G$. Combining (2.2) with (1.1) yields
\begin{align}
Q=\sum_{i=0}^{n}b_{i}(hQ)^{(k_{i})}+\sum_{i=0}^{n}b_{i}(h(G-a_{1}))^{(k_{i})}.
\end{align}
It is easy to see that $Q\not\equiv0$. Then we rewrite (2.3) as
\begin{eqnarray}
1-\frac{\sum_{i=0}^{n}b_{i}(h(G-a_{1}))^{(k_{i})}}{Q}=Fh,
\end{eqnarray}
where
\begin{align}
F&=\frac{\sum_{i=0}^{n}b_{i}(hQ)^{(k_{i})}}{hQ}
\end{align}
Note that $N(r,\frac{1}{g-G})=N(r,\frac{1}{Q})=S(r,f)$, and hence
\begin{align}
T(r,F)&\leq\sum_{i=0}^{n}T(r,\frac{(hQ)^{(k_{i})}}{hQ})+S(r,f)\notag\\
&\leq\sum_{i=0}^{n}\Big(m(r,\frac{(hQ)^{(k_{i})}}{hQ})+N(r,\frac{(hQ)^{(k_{i})}}{hQ})\Big)+\overline{N}(r,f)\notag\\
&+S(r,h)+S(r,f)=S(r,h)+S(r,f).
\end{align}
By (2.1) and Lemma 2.1, we get
\begin{align}
T(r,h)&\leq T(r,f)+T(r,g)+S(r,f)\notag\\
&\leq 2T(r,f)+S(r,f).
\end{align}
Then it follows from (2.5) that $T(r,F)=S(r,f)$. Next we discuss two cases.
{\bf Case 1.} \quad $h^{-1}-F\not\equiv0$. Rewrite (2.4) as
\begin{align}
h(h^{-1}-F)=\sum_{i=0}^{n}b_{i}(h(G-a_{1}))^{(k_{i})}.
\end{align}
We claim that $F\equiv0$. Otherwise, it follows from (2.9) that $N(r,\frac{1}{h^{-1}-F})=S(r,f)$. Then applying Lemma 2.12 to $h^{-1}$ we obtain
\begin{align}
T(r,h)&=T(r,h^{-1})+O(1)\notag\\
&\leq \overline{N}(r,h^{-1})+\overline{N}(r,\frac{1}{h^{-1}})+\overline{N}(r,\frac{1}{h^{-1}-F})\notag\\
&+O(1)=S(r,f),
\end{align}
which contradicts our assumption. Thus $F\equiv0$. Then by (2.9) we get
\begin{align}
g=\sum_{i=0}^{n}b_{i}(h(G-a_{1}))^{(k_{i})}+G=Hh+G,
\end{align}
where $H\not\equiv0$ is a small function of $h$.
{\bf Case 2.} \quad $h^{-1}-F\equiv0$. Immediately, we get $T(r,h)=S(r,f)$.
\end{proof}
\begin{lemma}\label{6}\cite{y1}
Let $f$ be a non-constant meromorphic function, and $R(f)=\frac{P(f)}{Q(f)}$, where
$$P(f)=\sum_{k=0}^{p}a_{k}f^{k} \quad \text{and} \quad Q(f)=\sum_{j=0}^{q}b_{j}f^{j}$$
are two mutually prime polynomials in $f$. If the coefficients $a_{k}$ and $b_{j}$ are small functions of $f$ and $a_{p}\not\equiv0$, $b_{q}\not\equiv0$, then
$$T(r,R(f))=\max\{p,q\}T(r,f)+S(r,f).$$
\end{lemma}
\begin{lemma}\label{7}\cite{y1} Let $f$ be a non-constant meromorphic function, and let $P(f)=a_{0}f^{p}+a_{1}f^{p-1}+\cdots+a_{p}$ $(a_{0}\neq0)$ be a polynomial of degree $p$ with constant coefficients $a_{j}$ $(j=0,1,\ldots,p)$. Suppose that $b_{1},b_{2},\ldots,b_{q}$ $(q>p)$ are distinct finite complex numbers. Then
$$m(r,\frac{P(f)f'}{(f-b_{1})(f-b_{2})\cdots(f-b_{q})})=S(r,f).$$
\end{lemma}
\begin{lemma}\label{8}\cite{y1}
Let $f$ be a non-constant meromorphic function, and let $P(f)=a_{0}+a_{1}f+a_{2}f^{2}+\cdots+a_{n}f^{n}$, where $a_{i}$ are small functions of $f$ for $i=0,1,\ldots,n$. Then
$$T(r,P(f))=nT(r,f)+S(r,f).$$
\end{lemma}
\begin{lemma}\label{9} \cite{lh} Let $f$ and $g$ be two non-constant meromorphic functions. If $f$ and $g$ share $0,1,\infty$ IM, and $f$ is a bilinear transformation of $g$, then $f$ and $g$ assume one of the following six relations: (i) $fg=1$; (ii) $(f-1)(g-1)=1$; (iii) $f+g=1$; (iv) $f=cg$; (v) $f-1=c(g-1)$; (vi) $[(c-1)f+1][(c-1)g-c]=-c$, where $c\neq0,1$ is a complex number.
\end{lemma}
In the proof of Theorem 2, we will use the following lemma proved by G. G. Gundersen \cite{g}.
\begin{lemma}\label{10}\cite{g}
Let $f$, $F$ and $g$ be three non-constant meromorphic functions, where $g=F(f)$. Then $f$ and $g$ share three values IM if and only if there exists an entire function $h$ such that,
by a suitable linear fractional transformation, one of the following cases holds: \\
(i) $f\equiv g$;\\
(ii) $f=e^{h}$ and $g=a(1+4ae^{-h}-4a^{2}e^{-2h})$ have three IM shared values $a\neq0$, $b=2a$ and $\infty$;\\
(iii) $f=e^{h}$ and $g=\frac{1}{2}(e^{h}+a^{2}e^{-h})$ have three IM shared values $a\neq0$, $b=-a$ and $\infty$;\\
(iv) $f=e^{h}$ and $g=a+b-abe^{-h}$ have three IM shared values $ab\neq0$ and $\infty$;\\
(v) $f=e^{h}$ and $g=\frac{1}{b}e^{2h}-2e^{h}+2b$ have three IM shared values $b\neq0$, $a=2b$ and $\infty$;\\
(vi) $f=e^{h}$ and $g=b^{2}e^{-h}$ have three IM shared values $a\neq0$, $0$ and $\infty$.
\end{lemma}
\begin{lemma}\label{11} \cite{hk3,y1,y2} Let $f$ and $g$ be two non-constant meromorphic functions, and let $\rho(f)$ and $\rho(g)$ be the order of $f$ and $g$, respectively. Then
$\rho(fg)\leq \max\{\rho(f), \rho(g)\}$.
\end{lemma}
\begin{lemma}\label{12}\cite{y3} Let $f(z)$ be a non-constant meromorphic function, and let $a_{1}, a_{2}, a_{3}\in \hat{S}(f)$ be three distinct small functions of $f$. Then
$$T(r,f)\leq \sum_{j=1}^{3}\overline{N}(r,\frac{1}{f-a_{j}})+S(r,f).$$
\end{lemma}
\
{\bf Remark 3} We can see from the proofs that Lemma 2.9 \cite{lh} and Lemma 2.10 \cite{g} remain true when $f$ and $g$ share three values IM almost.
\begin{lemma}\label{13} Let $f$ be a transcendental meromorphic function, let $k_{j}$ $(j=1,2,\ldots,q)$ be distinct constants, and let $a_{1}\not\equiv\infty$ and $a_{2}\not\equiv\infty$ be two distinct small functions of $f$. Again let $d_{j}=a_{1}-k_{j}(a_{1}-a_{2})$ $(j=1,2,\ldots,q)$. Then
$$m(r,\frac{L(f)}{f-a_{1}})=S(r,f), \quad m(r,\frac{L(f)}{f-d_{j}})=S(r,f)$$
for $1\leq j\leq q$, and
$$m(r,\frac{L(f)f}{(f-d_{1})(f-d_{2})\cdots(f-d_{m})})=S(r,f),$$
where $L(f)=(a'_{1}-a'_{2})(f-a_{1})-(a_{1}-a_{2})(f'-a'_{1})$, and $2\leq m\leq q$.
\end{lemma}
\begin{proof}
We first claim that $L(f)\not\equiv0$. Otherwise, if $L(f)\equiv0$, then $\frac{f'-a'_{1}}{f-a_{1}}\equiv\frac{a'_{1}-a'_{2}}{a_{1}-a_{2}}$. Integrating both sides of the above we obtain $f-a_{1}=C_{1}(a_{1}-a_{2})$, where $C_{1}$ is a nonzero constant. Hence $T(r,f)=T(r,C_{1}(a_{1}-a_{2})+a_{1})+O(1)=S(r,f)$, a contradiction. Therefore $L(f)\not\equiv0$. Obviously, we have
$$m(r,\frac{L(f)}{f-a_{1}})\leq m(r,\frac{(a'_{1}-a'_{2})(f-a_{1})}{f-a_{1}})+m(r,\frac{(a_{1}-a_{2})(f'-a'_{1})}{f-a_{1}})=S(r,f),$$
and
$$\frac{L(f)f}{(f-d_{1})(f-d_{2})\cdots(f-d_{q})}=\sum_{i=1}^{q}\frac{C_{i}L(f)}{f-d_{i}},$$
where $C_{i}=\frac{d_{i}}{\prod\limits_{j\neq i}(d_{i}-d_{j})}$ are small functions of $f$. By the above, we have
\begin{align}
&m(r,\frac{L(f)f}{(f-d_{1})(f-d_{2})\cdots(f-d_{q})})=m(r,\sum_{i=1}^{q}\frac{C_{i}L(f)}{f-d_{i}})\notag\\
&\leq\sum_{i=1}^{q}m(r,\frac{L(f)}{f-d_{i}})+S(r,f)=S(r,f).
\end{align}
\end{proof}
\section{The proof of Theorem 1}
If $f\equiv g$, there is nothing to prove. Suppose $f\not\equiv g$. Since $f$ is a non-constant meromorphic function and $f$ and $g$ share $a_{1},\infty$ CM, we get
\begin{align}
\frac{g-a_{1}}{f-a_{1}}=q,
\end{align}
where $q$ is a meromorphic function, and (2.2) implies $h=\frac{1}{q}$.\\
Since $f$ and $g$ share $a_{1},\infty$ CM and share $a_{2}$ IM, by Lemma 2.1, Lemma 2.2 and Lemma 2.12 we have
\begin{eqnarray*}
\begin{aligned}
T(r,f)&\leq\overline{N}(r,f)+ \overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f)= \overline{N}(r,\frac{1}{g-a_{1}})\\
&+\overline{N}(r,\frac{1}{g-a_{2}})+S(r,f)\leq N(r,\frac{1}{f-g})+S(r,f)\\
&\leq T(r,f-g)+S(r,f)\leq m(r,f-g)+S(r,f)\\
&\leq m(r,f-\sum_{i=0}^{n}b_{i}f^{(k_{i})}_{c_{i}})+S(r,f)\\
&\leq m(r,f)+m(r,1-\frac{\sum_{i=0}^{n}b_{i}f^{(k_{i})}_{c_{i}}}{f})+S(r,f)\\
&\leq T(r,f)+S(r,f).
\end{aligned}
\end{eqnarray*}
That is
\begin{eqnarray}
T(r,f)=\overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f).
\end{eqnarray}
According to (3.1) and (3.2) we have
\begin{eqnarray}
T(r,f)=T(r,f-g)+S(r,f)=N(r,\frac{1}{f-g})+S(r,f).
\end{eqnarray}
and
\begin{align}
T(r,q)&=m(r,q)+S(r,f)\notag\\
&=m(r,\frac{g-a_{1}}{f-a_{1}})+S(r,f)\notag\\
&\leq m(r,\frac{1}{f-a_{1}})+S(r,f).
\end{align}
Then it follows from (3.1) and (3.3) that
\begin{align}
m(r,\frac{1}{f-a_{1}})&=m(r,\frac{q-1}{f-g})\notag\\
&\leq m(r,\frac{1}{f-g})+m(r,q-1)\notag\\
&\leq T(r,q)+S(r,f).
\end{align}
Then by (3.4) and (3.5)
\begin{align}
T(r,q)= m(r,\frac{1}{f-a_{1}})+S(r,f).
\end{align}
On the other hand, (3.1) can be rewritten as
\begin{align}
\frac{g-f}{f-a_{1}}=q-1,
\end{align}
which implies
\begin{align}
\overline{N}(r,\frac{1}{f-a_{2}})\leq \overline{N}(r,\frac{1}{q-1})\leq T(r,q)+S(r,f).
\end{align}
Thus, by (3.2), (3.6) and (3.8)
\begin{eqnarray*}
\begin{aligned}
m(r,\frac{1}{f-a_{1}})+N(r,\frac{1}{f-a_{1}})&= \overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f)\\
&\leq \overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{q-1})+S(r,f)\\
&\leq\overline{N}(r,\frac{1}{f-a_{1}})+m(r,\frac{1}{f-a_{1}})+S(r,f),
\end{aligned}
\end{eqnarray*}
that is
\begin{align}
N(r,\frac{1}{f-a_{1}})=\overline{N}(r,\frac{1}{f-a_{1}})+S(r,f).
\end{align}
And then
\begin{align}
\overline{N}(r,\frac{1}{f-a_{2}})=T(r,q)+S(r,f).
\end{align}
Set
\begin{eqnarray}
\varphi=\frac{f'(f-g)}{(f-a_{1})(f-a_{2})},
\end{eqnarray}
and
\begin{eqnarray}
\psi=\frac{g'(f-g)}{(g-a_{1})(g-a_{2})}.
\end{eqnarray}
It is easy to see that $\varphi\not\equiv0$ since $f\not\equiv g$, and $N(r,\varphi)=S(r,f)$. By Lemma 2.1 and Lemma 2.7 we have
\begin{eqnarray*}
\begin{aligned}
&T(r,\varphi)=m(r,\varphi)=m(r,\frac{f'(f-g)}{(f-a_{1})(f-a_{2})})+S(r,f)\notag\\
&=m(r,\frac{ff'}{(f-a_{1})(f-a_{2})}\frac{f-\sum_{i=0}^{n}b_{i}f^{(k_{i})}_{c_{i}}}{f})+m(r,\frac{b_{-1}ff'}{(f-a_{1})(f-a_{2})})+S(r,f)\\
&\leq m(r,\frac{ff'}{(f-a_{1})(f-a_{2})})+m(r,1-\frac{\sum_{i=0}^{n}b_{i}f^{(k_{i})}_{c_{i}}}{f})+S(r,f)=S(r,f),
\end{aligned}
\end{eqnarray*}
that is
\begin{align}
T(r,\varphi)=S(r,f).
\end{align}
Let $d=a_{1}-j(a_{1}-a_{2})$ $(j\neq0,1)$. Obviously, by Lemma 2.1, Lemma 2.12 and the two fundamental theorems of Nevanlinna, we obtain
\begin{eqnarray*}
\begin{aligned}
2T(r,f)&\leq \overline{N}(r,f)+\overline{N}(r,\frac{1}{f-a_{1}})+ \overline{N}(r,\frac{1}{f-a_{2}})+\overline{N}(r,\frac{1}{f}) +S(r,f)\\
&\leq T(r,f)+T(r,f)-m(r,\frac{1}{f})+S(r,f),
\end{aligned}
\end{eqnarray*}
which implies
\begin{align}
m(r,\frac{1}{f})=S(r,f).
\end{align}
And by (3.14)
\begin{align}
m(r,\frac{1}{f-d})&=m(r,\frac{f'(f-g)}{\varphi (f-a_{1})(f-a_{2})(f-d)})\leq m(r,1-\frac{g}{f})\notag\\
&+m(r,\frac{ff'}{(f-a_{1})(f-a_{2})(f-d)})+S(r,f)=S(r,f).
\end{align}
Set
\begin{align}
\phi=\frac{g'}{(g-a_{1})(g-a_{2})}-\frac{f'}{(f-a_{1})(f-a_{2})}.
\end{align}
We discuss two cases.\\
{\bf Case 1}\quad $\phi\equiv0$. Integrating both sides of (3.16) leads to
\begin{align}
\frac{f-a_{2}}{f-a_{1}}=C\frac{g-a_{2}}{g-a_{1}},
\end{align}
where $C$ is a nonzero constant.
Then by Lemma 2.4 we get
\begin{eqnarray}
2T(r,f)\leq\overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f),
\end{eqnarray}
which contradicts (3.2).\\
{\bf Case 2} \quad $\phi \not\equiv0$. By (3.3), (3.13) and (3.16), we can obtain
\begin{align}
m(r,f)&=m(r,f-g)+S(r,f)\notag\\
&=m(r,\frac{\phi(f-g)}{\phi})+S(r,f)=m(r,\frac{\psi-\varphi}{\phi})+S(r,f)\notag\\
&\leq T(r,\frac{\phi}{\psi-\varphi})+S(r,f)\leq T(r,\psi-\varphi)+T(r,\phi)+S(r,f)\notag\\
&\leq T(r,\psi)+T(r,\phi)+S(r,f)\notag\\
&\leq T(r,\psi)+\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f),
\end{align}
on the other hand,
\begin{align}
T(r,\psi)&=T(r,\frac{g'(f-g)}{(g-a_{1})(g-a_{2})})\notag\\
&=m(r,\frac{g'(f-g)}{(g-a_{1})(g-a_{2})})+S(r,f)\notag\\
&\leq m(r,\frac{g'}{g-a_{2}})+m(r,\frac{f-g}{g-a_{1}})\notag\\
&\leq m(r,\frac{1}{f-a_{1}})+S(r,f)=\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f).
\end{align}
Hence combining (3.19) and (3.20), we obtain
\begin{align}
T(r,f)\leq 2\overline{N}(r,\frac{1}{f-a_{2}})+S(r,f).
\end{align}
Next, Case 2 is divided into three subcases.
{\bf Subcase 2.1}\quad $a_{1}=G$, where $G$ is defined as in Lemma 2.5. Then by (3.1) and Lemma 2.1 we get
\begin{align}
m(r,q)=m(r,\frac{g-G}{f-a_{1}})=S(r,f).
\end{align}
Then by (3.10), (3.21) and (3.22) we have $T(r,f)=S(r,f)$, a contradiction.\\
{\bf Subcase 2.2} \quad $a_{2}=G$. Then by (3.6), (3.10) and (3.21), we get
\begin{align}
T(r,f)&\leq m(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{g-G})+S(r,f)\notag\\
&\leq m(r,\frac{1}{g-G})+\overline{N}(r,\frac{1}{g-G})+S(r,f)\notag\\
&\leq T(r,g)+S(r,f).
\end{align}
On the other hand, we have
\begin{align}
T(r,g)\leq T(r,f)+S(r,f),
\end{align}
which together with (3.23) gives
\begin{align}
T(r,f)=T(r,g)+S(r,f).
\end{align}
By Lemma 2.2, (3.2) and (3.25), we have
\begin{eqnarray*}
\begin{aligned}
2T(r,f)&\leq 2T(r,g)+S(r,f)\\
&\leq\overline{N}(r,g)+\overline{N}(r,\frac{1}{g-a_{1}})+\overline{N}(r,\frac{1}{g-G})+\overline{N}(r,\frac{1}{g-d})+S(r,f)\\
&\leq \overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})+T(r,\frac{1}{g-d})-m(r,\frac{1}{g-d})+S(r,f)\\
&\leq T(r,f)+T(r,g)-m(r,\frac{1}{g-d})+S(r,f)\\
&\leq 2T(r,f)-m(r,\frac{1}{g-d})+S(r,f).
\end{aligned}
\end{eqnarray*}
Thus
\begin{eqnarray}
m(r,\frac{1}{g-d})=S(r,f).
\end{eqnarray}
From the First Fundamental Theorem, Lemma 2.1, (3.14)-(3.15), (3.25)-(3.26) and the fact that $f$ is a non-constant meromorphic function, we obtain
\begin{eqnarray*}
\begin{aligned}
m(r,\frac{f-d}{g-d})&\leq m(r,\frac{f}{g-d})+m(r,\frac{d}{g-d})+S(r,f)\\
&\leq T(r,\frac{f}{g-d})-N(r,\frac{f}{g-d})+S(r,f)\\
&=m(r,\frac{g-d}{f})+N(r,\frac{g-d}{f})-N(r,\frac{f}{g-d})\\
&+S(r,f)\leq N(r,\frac{1}{f})-N(r,\frac{1}{g-d})+S(r,f)\\
&=T(r,\frac{1}{f})-T(r,\frac{1}{g-d})+S(r,f)\\
&=T(r,f)-T(r,g)+S(r,f)=S(r,f).
\end{aligned}
\end{eqnarray*}
Thus we get
\begin{eqnarray}
m(r,\frac{f-d}{g-d})=S(r,f).
\end{eqnarray}
It is easy to see that $N(r,\psi)=S(r,f)$ and that (3.12) can be rewritten as
\begin{eqnarray}
\psi=[\frac{a_{1}-d}{a_{1}-a_{2}}\frac{g'}{g-a_{1}}-\frac{a_{2}-d}{a_{1}-a_{2}}\frac{g'}{g-a_{2}}][\frac{f-d}{g-d}-1].
\end{eqnarray}
Then by (3.27) and (3.28) we can get
\begin{eqnarray}
T(r,\psi)=m(r,\psi)+N(r,\psi)=S(r,f).
\end{eqnarray}
By (3.2), (3.19), and (3.29) we get
\begin{eqnarray}
\overline{N}(r,\frac{1}{f-a_{1}})=S(r,f).
\end{eqnarray}
Moreover, by (3.2), (3.25) and (3.30), we have
\begin{eqnarray}
m(r,\frac{1}{g-G})=S(r,f),
\end{eqnarray}
which implies
\begin{eqnarray}
\overline{N}(r,\frac{1}{f-a_{2}})=m(r,\frac{1}{f-a_{2}})\leq m(r,\frac{1}{g-G})=S(r,f).
\end{eqnarray}
Then by (3.2) we obtain $T(r,f)=S(r,f)$, a contradiction.\\
{\bf Subcase 2.3}\quad $a_{1}\not\equiv G$, $a_{2}\not\equiv G$. So by (3.6), (3.10), (3.21) and Lemma 2.1, we get
\begin{eqnarray*}
\begin{aligned}
T(r,f)&\leq 2m(r,\frac{1}{f-a_{1}})+S(r,f)\leq2m(r,\frac{1}{g-G})\\
&+S(r,f)=2T(r,g)-2N(r,\frac{1}{g-G})+S(r,f)\\
&\leq\overline{N}(r,g)+\overline{N}(r,\frac{1}{g-a_{1}})+\overline{N}(r,\frac{1}{g-a_{2}})+\overline{N}(r,\frac{1}{g-G})\\
&-2N(r,\frac{1}{g-G})+S(r,f)\leq T(r,f)-N(r,\frac{1}{g-G})+S(r,f),
\end{aligned}
\end{eqnarray*}
which implies that
\begin{align}
N(r,\frac{1}{g-G})=S(r,f).
\end{align}
It follows from (3.33) and Lemma 2.12 that
\begin{eqnarray*}
\begin{aligned}
T(r,g)&\leq \overline{N}(r,g)+\overline{N}(r,\frac{1}{g-G})+\overline{N}(r,\frac{1}{g-a_{1}})+S(r,f)\\
&\leq \overline{N}(r,\frac{1}{g-a_{1}})+S(r,f)\leq T(r,g)+S(r,f),
\end{aligned}
\end{eqnarray*}
which implies that
\begin{align}
T(r,g)=\overline{N}(r,\frac{1}{g-a_{1}})+S(r,f).
\end{align}
Similarly
\begin{align}
T(r,g)=\overline{N}(r,\frac{1}{g-a_{2}})+S(r,f).
\end{align}
Then by (3.21) we get
\begin{align}
T(r,f)=2T(r,g)+S(r,f).
\end{align}
It is easy to see from (3.16) that
\begin{align}
T(r,\phi)=N(r,\phi)+S(r,f)\leq\overline{N}(r,\frac{1}{g-a_{2}})+S(r,f).
\end{align}
We claim that
\begin{align}
T(r,\phi)=\overline{N}(r,\frac{1}{g-a_{2}})+S(r,f).
\end{align}
Otherwise,
\begin{align}
T(r,\phi)<\overline{N}(r,\frac{1}{g-a_{2}})+S(r,f).
\end{align}
We can deduce from (3.2), (3.12), Lemma 2.1 and Lemma 2.3 that
\begin{eqnarray*}
\begin{aligned}
T(r,\psi)&=T(r,\frac{g'(f-g)}{(g-a_{1})(g-a_{2})})=m(r,\frac{g'(f-g)}{(g-a_{1})(g-a_{2})})+S(r,f)\notag\\
&\leq m(r,\frac{g'}{g-a_{1}})+m(r,\frac{f-a_{2}}{g-a_{2}}-1)\notag\\
&\leq m(r,\frac{g-a_{2}}{f-a_{2}})+N(r,\frac{g-a_{2}}{f-a_{2}})-N(r,\frac{f-a_{2}}{g-a_{2}})+S(r,f)\\
&\leq m(r,\frac{1}{f-a_{2}})+N(r,\frac{1}{f-a_{2}})-N(r,\frac{1}{g-a_{2}})+S(r,f)\\
&\leq T(r,f)-\overline{N}(r,\frac{1}{g-a_{2}})+S(r,f)\leq \overline{N}(r,\frac{1}{f-a_{1}})+S(r,f),
\end{aligned}
\end{eqnarray*}
which is
\begin{align}
T(r,\psi)\leq \overline{N}(r,\frac{1}{f-a_{1}})+S(r,f).
\end{align}
Then combining (3.2), (3.39)-(3.40) and the proof of (3.19), we obtain
\begin{eqnarray*}
\begin{aligned}
&\overline{N}(r,\frac{1}{f-a_{1}})+\overline{N}(r,\frac{1}{f-a_{2}})=T(r,f)+S(r,f)\notag\\
&\leq \overline{N}(r,\frac{1}{f-a_{1}})+T(r,\phi)+S(r,f),
\end{aligned}
\end{eqnarray*}
that is
\begin{align}
\overline{N}(r,\frac{1}{g-a_{2}})\leq T(r,\phi)+S(r,f),
\end{align}
a contradiction. Similarly, we can also obtain
\begin{align}
T(r,\psi)=\overline{N}(r,\frac{1}{g-a_{1}})+S(r,f).
\end{align}
By Lemma 2.5, if $T(r,h)=S(r,f)$, then by (3.10) and (3.21) we get $T(r,f)=S(r,f)$, a contradiction. Hence
\begin{align}
g=Hh+G,
\end{align}
where $H\not\equiv0$ is a small function of $h$.\\
Rewrite (3.16) as
\begin{align}
\phi\equiv\frac{g'(f-a_{1})(f-a_{2})-f'(g-a_{1})(g-a_{2})}{(f-a_{1})(f-a_{2})(g-a_{1})(g-a_{2})}.
\end{align}
Set $a=\frac{h'}{h}$. Since $N(r,h)+N(r,\frac{1}{h})=S(r,f)$, we obtain from the lemma on the logarithmic derivative that
$$T(r,a)=m(r,\frac{h'}{h})+N(r,\frac{h'}{h})=S(r,f),$$
which implies that $a$ is a small function of $f$.
Combining (2.1) with (3.44), we can set
\begin{align}
P&=g'(f-a_{1})(f-a_{2})-f'(g-a_{1})(g-a_{2})\notag\\
&=\sum_{i=0}^{5}\alpha_{i}h^{i},
\end{align}
and
\begin{align}
Q&=(f-a_{1})(f-a_{2})(g-a_{1})(g-a_{2})\notag\\
&=\sum_{j=0}^{6}\beta_{j}h^{j},
\end{align}
where $\alpha_{i}$ and $\beta_{j}$ are small functions of $h$, and $\alpha_{5}\not\equiv0$, $\beta_{6}\not\equiv0$.
If $P$ and $Q$ are two mutually prime polynomials in $h$, then by Lemma 2.6 we get $T(r,\phi)=6T(r,h)+S(r,f)$. It follows from (3.10), (3.45)-(3.46) that $T(r,f)=S(r,f)$, a contradiction.\\
If $P$ and $Q$ are not mutually prime polynomials in $h$, it is easy to see that the degree of $Q$ is larger than that of $P$.\\
According to (3.38), (3.45) and (3.46), by a simple calculation we must have
\begin{align}
\phi=\frac{C}{g-a_{2}},
\end{align}
where $C\not\equiv0$ is a small function of $f$.\\
Substituting (3.47) into (3.16), we have
\begin{align}
\frac{C(g-a_{1})-g'}{(g-a_{1})(g-a_{2})}\equiv\frac{-f'}{(f-a_{1})(f-a_{2})}.
\end{align}
By (3.43) and (3.48), we claim that $CH\equiv H'+aH$. Otherwise, combining (3.16), (3.38), (3.43) and Lemma 2.8, we can get $T(r,h)=S(r,f)$. It follows from (3.10) and (3.21) that $T(r,f)=S(r,f)$, a contradiction. Then by (3.1), (3.48) and $CH\equiv H'+aH$, we have
\begin{align}
\frac{G'-C(G-a_{1})}{Hh+G-a_{2}}\equiv\frac{(2aH+H')h+a(G-a_{1})+G'}{h(Hh+G-a_{1})+a_{1}-a_{2}}.
\end{align}
From the above equality and $CH\equiv H'+aH$, we obtain the following equalities.
\begin{align}
A\equiv(a+C)H,
\end{align}
\begin{align}
[a(G-a_{1})+G'](G-a_{2})\equiv A(a_{1}-a_{2}),
\end{align}
and
\begin{align}
a(G-a_{1})+G'+(a+C)(G-a_{2})\equiv (a+C)(G-a_{1}),
\end{align}
where $A\equiv G'-C(G-a_{1})$. By (3.50)-(3.52) we have
\begin{align}
a_{2}\equiv G+H.
\end{align}
Differentiating the above we get
\begin{align}
(H+G)' \equiv 0,
\end{align}
which implies
\begin{align}
(C-a)H+(a+C)H+C(G-a_{1})\equiv C(a_{2}+H-a_{1})\equiv0.
\end{align}
Therefore, we can see from (3.53) and (3.55) that
\begin{align}
G=2a_{2}-a_{1},
\end{align}
it follows from (3.53) that
\begin{align}
H=a_{1}-a_{2}.
\end{align}
Combining (3.43), (3.56) and (3.57), we have
\begin{align}
g=(a_{1}-a_{2})h+2a_{2}-a_{1}.
\end{align}
And then by (2.1) we have
\begin{align}
f=a_{2}+(a_{1}-a_{2})(h-1)^{2}.
\end{align}
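A routine check shows that the pair (3.58)-(3.59) is consistent with the sharing hypotheses. Indeed,
\begin{align*}
f-a_{1}&=(a_{1}-a_{2})h(h-2), & g-a_{1}&=(a_{1}-a_{2})(h-2),\\
f-a_{2}&=(a_{1}-a_{2})(h-1)^{2}, & g-a_{2}&=(a_{1}-a_{2})(h-1),
\end{align*}
so, up to the zeros and poles of $a_{1}-a_{2}$ and of $h$, which contribute only $S(r,f)$, the quotient $\frac{f-a_{1}}{g-a_{1}}=h$ has no zeros or poles, so that $a_{1}$ is shared CM, while $f-a_{2}$ has double zeros and $g-a_{2}$ simple zeros on the same set $\{h=1\}$, so that $a_{2}$ is shared IM but not CM.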
If $m(r,h)=S(r,f)$, then by (3.10) and (3.21), we deduce $T(r,f)=S(r,f)$, a contradiction.\\
This completes the proof of Theorem 1.
\section{The proof of Corollary 1}
Assume that $f\not\equiv g$. Set
$F=\frac{f-a_{1}}{a_{2}-a_{1}}$ and $G=\frac{g-a_{1}}{a_{2}-a_{1}}$. We know that $F$ and $G$ share $0$ CM almost and $1$ IM almost. Obviously, $G$ is still a differential-difference polynomial in $F$. Then by Theorem 1 and Remark 1, we have
\begin{align}
G=(a_{1}-a_{2})h+2a_{2}-a_{1},
\end{align}
and
\begin{align}
F=a_{2}+(a_{1}-a_{2})(h-1)^{2}.
\end{align}
Therefore, if $g=f^{(k)}(z+c)$, since $f$ is a transcendental meromorphic function with $\rho_{2}(f)<1$ and $f^{(k)}_{c}$ and $f$ share $ \infty$ CM, we can see from Lemma 2.1 and Lemma 2.3 that
\begin{eqnarray*}
\begin{aligned}
N(r,f^{(k)}_{c})=N(r,f)=(1+o(1))N(r,f_{c})+S(r,f),
\end{aligned}
\end{eqnarray*}
and on the other hand
\begin{eqnarray*}
\begin{aligned}
k\overline{N}(r,f_{c})+N(r,f_{c})=N(r,f^{(k)}_{c}), \quad \overline{N}(r,f_{c})=\overline{N}(r,f^{(k)}_{c})=\overline{N}(r,f),
\end{aligned}
\end{eqnarray*}
and it follows from the above equalities that $\overline{N}(r,f)=S(r,f)$. By Theorem 1, we have
\begin{align}
f=a_{2}+(a_{1}-a_{2})(e^{p}-1)^{2},
\end{align}
and
\begin{align}
f^{(k)}_{c}=(a_{1}-a_{2})e^{p}+2a_{2}-a_{1},
\end{align}
where $h=e^{p}$ with $p$ a non-constant polynomial, since $f^{(k)}_{c}$ and $f$ share $a_{1}, \infty$ CM and, by (2.1), $h$ is an entire function.
By (4.3) we have
\begin{align}
f^{(k)}_{c}=P(z)e^{2p_{c}}+Q(z)e^{p_{c}}+a_{2}^{(k)},
\end{align}
where $P(z)\not\equiv0$ and $Q(z)\not\equiv0$ are differential polynomials in $$A_{c}, A'_{c},\ldots,A^{(k)}_{c}, \quad p_{c},p'_{c},\ldots,p^{(k)}_{c},$$
and $A=a_{1}-a_{2}$. So we cannot obtain (4.4) from (4.3).
Hence, from the above discussion, we conclude that $f\equiv f^{(k)}_{c}$.
\section{The Proof of Theorem 2}
If $f(z)\equiv f(z+c)$, there is nothing to prove. Assume that $f(z)\not\equiv f(z+c)$. Since $f(z)$ is a transcendental meromorphic function of $\rho_{2}(f)<1$ and $f(z)$ and $f(z+c)$ share $a_{1}(z),\infty$ CM, there is an entire function $p(z)$ of order less than $1$ such that
\begin{eqnarray}
\frac{f(z+c)-a_{1}(z)}{f(z)-a_{1}(z)}=e^{p(z)},
\end{eqnarray}
Then by Lemma 2.1 and the fact that $a_{1}(z)$ is a periodic function with period $c$,
\begin{eqnarray}
T(r,e^{p})=m(r,e^{p})=m(r,\frac{f(z+c)-a_{1}(z+c)}{f(z)-a_{1}(z)})=S(r,f).
\end{eqnarray}
On the other hand, (5.1) can be rewritten as
\begin{eqnarray}
\frac{f(z+c)-f(z)}{f(z)-a_{1}(z)}=e^{p(z)}-1.
\end{eqnarray}
We put
\begin{align}
\varphi(z)=\frac{L(f)(f(z+c)-f(z))}{(f(z)-a_{1}(z))(f(z)-a_{2}(z))},
\end{align}
where $L(f)$ is defined as in Lemma 2.13. Then by Lemma 2.1, Lemma 2.13 and the fact that $f(z)$ and $f(z+c)$ share $a_{1}(z)$ and $\infty$ CM, and $a_{2}(z)$ IM, we have
\begin{eqnarray*}
\begin{aligned}
T(r,\varphi(z))&=m(r,\varphi(z))+N(r,\varphi(z))\\
&\leq m(r,\frac{L(f)f(z)}{(f(z)-a_{1}(z))(f(z)-a_{2}(z))})+m(r,\frac{f(z+c)-f(z)}{f(z)})\\
&+\overline{N}(r,f(z))+S(r,f)\leq \overline{N}(r,f(z))+S(r,f),
\end{aligned}
\end{eqnarray*}
which implies $T(r,\varphi(z))\leq \overline{N}(r,f(z))+S(r,f)$. If $T(r,\varphi(z))< \overline{N}(r,f(z))+S(r,f)$, then we know from the fact that $f(z)$ and $f(z+c)$ share $\infty$ CM and the definition of $\varphi(z)$ that $\overline{N}(r,f(z))=S(r,f)$. Then we see from Theorem G and Remark 1 that $f(z)\equiv f(z+c)$, a contradiction. Hence
\begin{align}
T(r,\varphi(z))= \overline{N}(r,f(z))+S(r,f).
\end{align}
Since the zeros of $f(z+c)-f(z)$ that are neither zeros of $f(z)-a_{1}(z)$ nor of $f(z)-a_{2}(z)$ are zeros of $e^{p(z)}-1$, by (5.4) we have
\begin{align}
N(r,\frac{1}{\varphi(z)})&= N(r,\frac{1}{L(f)})+N_{(2}(r,\frac{1}{f-a_{1}})+N_{(m,n)}(r,\frac{1}{f-a_{2}})\notag\\
&-\overline{N}_{(2}(r,\frac{1}{f-a_{1}})-\overline{N}_{(m,n)}(r,\frac{1}{f-a_{2}})+S(r,f).
\end{align}
On the other hand, we know from (5.1)-(5.4), Nevanlinna's first fundamental theorem, Lemma 2.2 and Lemma 2.13 that
\begin{eqnarray*}
\begin{aligned}
m(r,\frac{1}{\varphi(z)})&= m(r,\frac{1}{\varphi(z)(e^{p(z)}-1)})+S(r,f)=m(r,\frac{f-a_{2}}{L(f)})+S(r,f)\\
&=m(r,\frac{L(f)}{f-a_{2}})+N(r,\frac{L(f)}{f-a_{2}})-N(r,\frac{f-a_{2}}{L(f)})+S(r,f)\\
&=N(r,L(f))+N(r,\frac{1}{f-a_{2}})-N(r,f)-N(r,\frac{1}{L(f)})+S(r,f)\\
&=\overline{N}(r,f)+N(r,\frac{1}{f-a_{2}})-N(r,\frac{1}{L(f)})+S(r,f),
\end{aligned}
\end{eqnarray*}
where the first equality holds because $T(r,e^{p})=S(r,f)$. It follows from the above that
\begin{align}
m(r,\frac{1}{\varphi(z)})=\overline{N}(r,f)+N(r,\frac{1}{f-a_{2}})-N(r,\frac{1}{L(f)})+S(r,f).
\end{align}
It is easy to see from (5.5)-(5.7) that
\begin{eqnarray*}
\begin{aligned}
&\overline{N}(r,f(z))=T(r,\varphi(z))+S(r,f)=m(r,\frac{1}{\varphi(z)})+N(r,\frac{1}{\varphi(z)})+S(r,f)\\
&=\overline{N}(r,f(z))+N(r,\frac{1}{f(z)-a_{2}(z)})-N(r,\frac{1}{L(f)})+N(r,\frac{1}{L(f)})\\
&+N_{(2}(r,\frac{1}{f(z)-a_{1}(z)})+N_{(m,n)}(r,\frac{1}{f(z)-a_{2}(z)})-\overline{N}_{(2}(r,\frac{1}{f(z)-a_{1}(z)})\\
&-\overline{N}_{(m,n)}(r,\frac{1}{f(z)-a_{2}(z)})+S(r,f)=\overline{N}(r,f(z))+N(r,\frac{1}{f(z)-a_{2}(z)})+N_{(2}(r,\frac{1}{f(z)-a_{1}(z)})\\
&+N_{(m,n)}(r,\frac{1}{f(z)-a_{2}(z)})-\overline{N}_{(2}(r,\frac{1}{f(z)-a_{1}(z)})-\overline{N}_{(m,n)}(r,\frac{1}{f(z)-a_{2}(z)})+S(r,f),
\end{aligned}
\end{eqnarray*}
which implies
\begin{align}
N_{(2}(r,\frac{1}{f(z)-a_{1}(z)})+N(r,\frac{1}{f(z)-a_{2}(z)})=S(r,f).
\end{align}
We can know from (5.8), Lemma 2.1 and Lemma 2.3 that
\begin{align}
N(r,\frac{1}{f(z+c)-a_{2}(z+c)})=N(r,\frac{1}{f(z)-a_{2}(z)})+S(r,f)=S(r,f).
\end{align}
Set
\begin{eqnarray}
\psi(z)=\frac{f(z+c)-a_{2}(z+c)}{f(z)-a_{2}(z)}.
\end{eqnarray}
It is easy to see that
\begin{eqnarray}
N(r,\frac{1}{\psi(z)})\leq N(r,\frac{1}{f(z+c)-a_{2}(z+c)})+N(r,a_{2}(z))= S(r,f),
\end{eqnarray}
\begin{eqnarray}
N(r,\psi(z))\leq N(r,\frac{1}{f(z)-a_{2}(z)})+N(r,a_{2}(z+c))= S(r,f).
\end{eqnarray}
Hence by Lemma 2.1,
\begin{align}
T(r,\psi(z))&=m(r,\psi(z))+N(r,\psi(z))\notag\\
&\leq m(r,\frac{f(z+c)-a_{2}(z+c)}{f(z)-a_{2}(z)})+N(r,\frac{1}{f(z)-a_{2}(z)})\notag\\
&+S(r,f)\leq S(r,f).
\end{align}
Subtracting (5.10) from (5.1), we have
\begin{eqnarray}
(e^{p(z)}-\psi(z))f(z)+\psi(z)a_{2}(z)+a_{1}(z)-a_{2}(z+c)-a_{1}(z)e^{p(z)}\equiv0.
\end{eqnarray}
We discuss following two cases.\\
{\bf Case 1} \quad $e^{p(z)}\not\equiv\psi(z)$. Then by (5.2), (5.13) and (5.14) we obtain $T(r,f)=S(r,f)$, a contradiction.\\
{\bf Case 2} \quad $e^{p(z)}\equiv\psi(z)$. Then by (5.1) we have
\begin{eqnarray}
f(z+c)=e^{p(z)}(f(z)-a_{1}(z))+a_{1}(z),
\end{eqnarray}
and
\begin{eqnarray}
N(r,\frac{1}{f(z+c)-a_{2}(z)})=N(r,\frac{1}{f(z)-a_{1}(z)+\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}})=S(r,f).
\end{eqnarray}
If $a_{2}(z)$ is a periodic function of period $c$, then by (5.14) we get $e^{p(z)}\equiv1$, which implies $f(z)\equiv f(z+c)$, a contradiction. Obviously, $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\not\equiv a_{1}(z)$. Otherwise, we would deduce $a_{1}(z)\equiv a_{2}(z)$, a contradiction.\\
Next, we discuss three Subcases.
{\bf Subcase 2.1}\quad $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\not\equiv a_{2}(z)$ and $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\not\equiv a_{2}(z-c)$. Then according to (5.5), (5.6), (5.16) and Lemma 2.12, we get $T(r,f)=S(r,f)$, a contradiction.\\
{\bf Subcase 2.2}\quad $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\equiv a_{2}(z)$, but $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\not\equiv a_{2}(z-c)$. It follows that $e^{p(z)}\equiv1$. Therefore by (5.1) we have $f(z)\equiv f(z+c)$, a contradiction.
{\bf Subcase 2.3}\quad $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\not\equiv a_{2}(z)$ and $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\equiv a_{2}(z-c)$. It is easy to see that
\begin{eqnarray}
\frac{a_{1}(z)-a_{2}(z)}{a_{1}(z-c)-a_{2}(z-c)}=e^{p(z)}.
\end{eqnarray}
Furthermore, (5.14) implies
\begin{eqnarray}
\frac{a_{1}(z+c)-a_{2}(z+c)}{a_{1}(z)-a_{2}(z)}=e^{p(z)},
\end{eqnarray}
\begin{eqnarray}
\frac{a_{1}(z)-a_{2}(z)}{a_{1}(z-c)-a_{2}(z-c)}=e^{p(z-c)}.
\end{eqnarray}
It follows from (5.17) and (5.19) that
\begin{eqnarray}
e^{p(z)}=e^{p(z+c)}.
\end{eqnarray}
We also set
\begin{align}
F(z)=\frac{f(z)-a_{1}(z)}{a_{2}(z)-a_{1}(z)}, \quad G(z)=\frac{f(z+c)-a_{1}(z)}{a_{2}(z)-a_{1}(z)}.
\end{align}
Since $f(z)$ and $f(z+c)$ share $a_{1}(z)$ and $\infty$ CM, and $a_{2}(z)$ IM, $F(z)$ and $G(z)$ share $0,\infty$ CM almost, and $1$ IM almost. We claim that $F$ is not a bilinear transformation of $G$. Otherwise, we see from Lemma 2.9 that if (i) occurs, then $F(z)\equiv G(z)$, that is, $f(z)\equiv f(z+c)$, a contradiction. If (ii) occurs, we have
\begin{align}
N(r,\frac{1}{f(z)-a_{1}(z)})=S(r,f ), \quad N(r,f(z))=S(r,f).
\end{align}
Then by (5.8), (5.22) and Lemma 2.13, we can get $T(r,f)=S(r,f)$, a contradiction.\\
If (iii) occurs, we have
\begin{align}
N(r,\frac{1}{f(z)-a_{1}(z)})=S(r,f ), \quad N(r,\frac{1}{f(z)-a_{2}(z)})=S(r,f).
\end{align}
Then it follows from above, $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\not\equiv a_{1}(z)$, $a_{1}(z)-\frac{a_{1}(z)-a_{2}(z)}{e^{p(z)}}\not\equiv a_{2}(z)$ and Lemma 2.13 that $T(r,f)=S(r,f)$, a contradiction.\\
If (iv) occurs, we have $F(z)\equiv jG(z)$, that is
\begin{align}
f(z)-a_{1}(z)=j(f(z+c)-a_{1}(z)),
\end{align}
where $j\neq0,1$ is a finite constant. By (5.1) and (5.24), we have $e^{p(z)}\equiv j$. If $a_{2}(z+c)\not\equiv a_{2}(z-c)$, then by (5.9), (5.16) and Lemma 2.13, we get $T(r,f)=S(r,f)$, a contradiction. Thus, we have $a_{2}(z+c)\equiv a_{2}(z-c)$. Moreover, since $a_{1}(z)$ is a periodic function with period $c$, we can deduce $j^{2}=1$ from $e^{p(z)}\equiv j$ and (5.18)-(5.19). If $j=1$, we have $f(z+c)\equiv f(z)$, a contradiction. If $j=-1$, we obtain $f(z+c)=2a_{1}(z)-f(z)$. Then by Lemma 2.10, we know that either $f(z)\equiv f(z+c)$, a contradiction, or $N(r,f(z))=S(r,f)$; in the latter case we can see from Theorem G and Remark 1 that $f(z)\equiv f(z+c)$, again a contradiction.\\
If (v) occurs, we have
\begin{align}
N(r,\frac{1}{f(z)-a_{1}(z)})=S(r,f ).
\end{align}
Then by Lemma 2.12, (5.9), (5.16) and $a_{2}(z-c)\not\equiv a_{1}(z)$, we obtain $T(r,f)=S(r,f)$, a contradiction.\\
If (vi) occurs, we have
\begin{align}
N(r,f(z))=S(r,f).
\end{align}
And hence we can see from Theorem G and Remark 1 that $f(z)\equiv f(z+c)$, a contradiction.\\
Therefore, $F(z)$ is not a linear fractional transformation of $G(z)$. If $a_{2}(z)$ is a small function with period $2c$, that is, $a_{2}(z+c)\equiv a_{2}(z-c)$, we can set
\begin{align}
&D(z)=(f(z)-a_{2}(z))(a_{2}(z)-a_{2}(z-c))-(f(z+c)-a_{2}(z+c))(a_{2}(z+c)-a_{2}(z))\notag\\
&=(f(z)-a_{2}(z-c))(a_{2}(z)-a_{2}(z-c))-(f(z+c)-a_{2}(z))(a_{2}(z+c)-a_{2}(z)).
\end{align}
If $D(z)\equiv0$, then we have $f(z+c)\equiv a_{2}(z+c)+a_{2}(z)-f(z)$, that is to say, $F(z)$ is a linear fractional transformation of $G(z)$, a contradiction. Hence $D(z)\not\equiv0$, and by (5.9)-(5.10), (5.16) and Lemma 2.1, we have
\begin{align}
2T(r,f(z))&=m(r,\frac{1}{f(z)-a_{2}(z)})+m(r,\frac{1}{f(z)-a_{2}(z-c)})+S(r,f)\notag\\
&=m(r,\frac{1}{f(z)-a_{2}(z)}+\frac{1}{f(z)-a_{2}(z-c)})+S(r,f)\notag\\
&\leq m(r,\frac{D(z)}{f(z)-a_{2}(z)}+\frac{D(z)}{f(z)-a_{2}(z-c)})+m(r,\frac{1}{D(z)})+S(r,f)\notag\\
&\leq m(r,\frac{1}{(\psi (z)+1)(f(z)-a_{2}(z))(a_{2}(z)-a_{2}(z-c))})+S(r,f)\notag\\
&\leq m(r,\frac{1}{f(z)-a_{2}(z)})+S(r,f)\leq T(r,f)+S(r,f),
\end{align}
which implies $T(r,f)=S(r,f)$, a contradiction.
By (5.18) and the periodicity of $a_{1}(z)$, we have
\begin{align}
\frac{\Delta_{c}a_{2}(z)}{1-e^{p(z)}}+a_{2}(z)=a_{1}(z).
\end{align}
Combining (5.20) and the fact that $a_{1}(z)$ is a periodic small function with period $c$, we can get
\begin{align}
\frac{\Delta_{c}a_{2}(z+c)}{1-e^{p(z)}}+a_{2}(z+c)=a_{1}(z).
\end{align}
According to (5.29) and (5.30), we obtain
\begin{align}
e^{p(z)}=\frac{a_{2}(z+2c)-a_{2}(z+c)}{\Delta_{c}a_{2}(z)}.
\end{align}
So if $\rho(a_{2}(z))<\rho(e^{p(z)})$, it follows from (5.31) and Lemma 2.11 that
\begin{align}
\rho(e^{p(z)})=\rho(\frac{a_{2}(z+2c)-a_{2}(z+c)}{\Delta_{c}^{2}a_{2}(z)})\leq \rho(a_{2}(z))<\rho(e^{p(z)}),
\end{align}
which is a contradiction.
If $\rho(a_{2}(z))<1$, we claim that $p(z)\equiv B$ for some non-zero constant $B$. Otherwise, the order of the right-hand side of (5.31) is less than $1$, while the order of the left-hand side is at least $1$, which is impossible. Therefore, by (5.1) we know that $f(z+c)-a_{1}(z)=e^{B}(f(z)-a_{1}(z))$, that is to say, $F(z)$ is a linear fractional transformation of $G(z)$, a contradiction.
This completes the proof of Theorem 2.
\
{\bf Acknowledgements} The author would like to thank the referee for helpful comments, and also the previous referee for providing Example 4. The author would also like to express his gratitude to Tohge Kazuya for communications about this paper.
\section{Introduction}
The Peccei-Quinn solution to the strong CP problem requires the existence of a pseudo-scalar Goldstone boson called an \textit{axion}\cite{Peccei1977}. The axion is a strongly motivated particle beyond the Standard Model and is also a plausible candidate for cold dark matter\cite{Preskill1983, Abbott1983, Dine1983}. While most axion search experiments seek to observe the axion-photon interaction using the resonant cavity method, it has been suggested that axion-gluon coupling in the strong interactions may result in an electric dipole moment (EDM) from a hadron oscillating at the axion frequency\cite{Graham2011, Graham2013}. A non-zero EDM of a hadron would require $CP$-violation in strong interactions, so there have been many efforts to measure the EDM of neutrons\cite{Altarev1980, Ramsey1982, Abel2020} and plans to measure the EDM of protons and/or deuterons using a storage ring method\cite{Farley2004, Orlov2006, Anastassopoulos2016, Semertzidis2016}. Extending the experimental approach to the axion-induced oscillating EDM, a new axion-like dark matter search experiment was proposed using the storage ring method in the presence of an oscillating EDM\cite{Chang2019, Stephenson2020, Pretz2020}. Another recent study also proposed that the storage ring EDM method can be exploited to probe dark matter and dark energy\cite{Graham2021}.
The present paper also proposes using the storage ring method to search for axions, but with a different scheme. We introduce an RF Wien Filter (WF) and resonate the spin by applying the RF at the sidebands of the axion frequency and the $g-2$ frequency. The application of the WF to measure the EDM was studied in Ref. \cite{Morse2013}, but its target was a conventional static EDM, while the present study seeks to observe an oscillating EDM induced by an axion field. Also, by applying the WF at a frequency other than just $g-2$ frequency, the experiment can be freed from the severe systematics arising from beam and spin dynamics and WF misalignment issues.
\section{Spin Dynamics in Storage Rings}
The spin of a particle in a storage ring precesses as
\begin{align}\label{eqn:Spin}
\D{\mathbf{S}}{t} = \bm{\upomega}_s \times \mathbf{S},
\end{align}
where the angular spin frequency $\bm{\upomega}_s$ is given by the Thomas-BMT equation\cite{Thomas1926, Bargmann1959}:
\begin{widetext}
\begin{equation}\label{eqn:TBMT}
\begin{split}
\bm{\upomega}_s = -\frac{q}{m} \bigg[ \left(G+\frac{1}{\gamma} \right) \mathbf{B} - G \frac{\gamma}{\gamma+1} (\bm{\upbeta} \cdot \mathbf{B}) \bm{\upbeta}
- \left( G + \frac{1}{\gamma+1} \right) \frac{\bm{\upbeta} \times \mathbf{E}}{c}
+ \frac{\eta}{2} \left( \frac{\mathbf{E}}{c} - \frac{\gamma}{\gamma+1} \left( \bm{\upbeta} \cdot \frac{\mathbf{E}}{c} \right) \bm{\upbeta} + \bm{\upbeta} \times \mathbf{B} \right) \bigg],
\end{split}
\end{equation}
\end{widetext}
where $\bm{\upbeta} = \mathbf{v}/c$ is the particle velocity vector.
Here $G \equiv (g-2)/2$ is the magnetic anomaly and $\eta$ is a dimensionless EDM factor that plays the same role for the electric dipole moment as the $g$-factor does for the magnetic dipole moment. The magnetic dipole moment $\mu$ and the electric dipole moment $d$ can be written as
\begin{align} \label{eq:definition_mu_and_d}
\mu = g \frac{q}{2m} S, \qquad d = \eta \frac{q}{2mc} S,
\end{align}
where $q$, $m$ and $S$ are the charge, mass and spin of the particle, respectively.
We work in an accelerator coordinate system $(x, y, s)$, where $x$ is an in-plane radial distance from the design orbit, $y$ is an out-of-plane vertical distance from the center and $s$ is along the arc length of the storage ring. The spin angular frequency has only transverse components $(x, y)$ under a homogeneous and uniform vertical dipole magnetic field and/or radial electric field in the storage ring.
Furthermore, we employ the paraxial accelerator approximation $\bm{\upbeta} \cdot \mathbf{B} = \bm{\upbeta} \cdot \mathbf{E} = 0$ for the analytical estimates only; these terms are kept in the numerical simulations, and the results were consistent. Accordingly, the radial and vertical components of the spin angular frequency are given by
\begin{equation}\label{eqn:omega_xy}
\begin{split}
\omega_{sx} &= -\frac{q}{2m} \eta(t) \left( \frac{E_x}{c} - \beta B_y \right), \\
\omega_{sy} &= -\frac{q}{m} \left[ \left( G + \frac{1}{\gamma} \right) B_y - \left( G + \frac{1}{\gamma+1} \right) \frac{\beta E_x}{c} \right].
\end{split}
\end{equation}
Here we used the time-varying EDM for a hadron as a result of axion-gluon coupling:
\begin{align} \label{eqn:eta}
\eta(t) = \eta_\text{DC} + \eta_\text{AC} \cos(\omega_\text{axion} t + \phi_\text{axion}).
\end{align}
For the analytical calculations of beam and spin dynamics for non-reference particles with non-zero EDM in storage rings, see Refs. \cite{Silenko2006, Fukuyama2013, Abusaif2021}.
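The expressions above can be cross-checked numerically. The sketch below is illustrative only (it is not the simulation code used later in this paper); it assumes SI units, evaluates Eq. \eqref{eqn:omega_xy} for a purely magnetic ring, and takes the spin precession rate relative to the momentum, $\omega_{sy} - \omega_c$ with the cyclotron frequency taken as $\omega_c = -qB_y/(\gamma m)$, as the $g-2$ frequency:

```python
import math

Q = 1.602176634e-19   # elementary charge [C]
C = 299792458.0       # speed of light [m/s]

def omega_sx(q, m, eta, beta, E_x=0.0, B_y=0.0):
    """Radial spin angular frequency (first line of the omega equations) [rad/s]."""
    return -(q / (2.0 * m)) * eta * (E_x / C - beta * B_y)

def omega_sy(q, m, G, gamma, beta, E_x=0.0, B_y=0.0):
    """Vertical spin angular frequency (second line) [rad/s]."""
    return -(q / m) * ((G + 1.0 / gamma) * B_y
                       - (G + 1.0 / (gamma + 1.0)) * beta * E_x / C)

# Deuteron, p = 1 GeV/c, purely magnetic ring (values quoted later in the text)
mc2 = 1875.612e6                          # deuteron rest energy [eV]
m = mc2 * Q / C**2                        # mass [kg]
gamma = math.sqrt(1.0 + (1.0e9 / mc2)**2)
beta = math.sqrt(1.0 - 1.0 / gamma**2)
G, B = -0.143, 0.111

# Spin precession relative to the momentum: omega_sy minus the cyclotron term
omega_c = -Q * B / (gamma * m)
f_g2 = abs(omega_sy(Q, m, G, gamma, beta, B_y=B) - omega_c) / (2.0 * math.pi)
print(f"f_g-2 = {f_g2/1e3:.0f} kHz")      # ~121 kHz for the simulated deuteron ring
```

The difference $\omega_{sy} - \omega_c$ reduces to $-qGB_y/m$, reproducing the 121 kHz quoted in the simulation section.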
\section{Spin Resonance with RF Wien Filter}
The RF Wien Filter (WF) is a perfect candidate to drive the spin resonance without affecting the beam betatron oscillations, since it exerts no net Lorentz force on particles with the design momentum. The EDM term $\eta(t)$ contributes only to the radial component of the spin angular frequency, so we set the electromagnetic fields of the WF as follows:
\begin{equation}\label{eqn:WFfields}
\begin{split}
\mathbf{E}_\text{WF} &= E_0^\text{WF} \cos(\omega_\text{WF} t + \phi_\text{WF}) \hat{e}_x, \\
\mathbf{B}_\text{WF} &= \frac{E_0^\text{WF}}{\beta c} \cos(\omega_\text{WF} t + \phi_\text{WF}) \hat{e}_y.
\end{split}
\end{equation}
An artificial spin resonance driven by a WF in the presence of a static non-zero EDM has been well studied\cite{Morse2013, Saleev2017, Slim2016, Rathmann2020}. We extend this idea to the oscillating component of the EDM. It is intuitive to expect the vertical spin component to accumulate in one direction when the oscillation frequency of the EDM, $\omega_\text{axion}$, equals the WF frequency. It turns out, however, that the resonance occurs at the sidebands of the axion and $g-2$ frequencies: $\omega_\text{WF} = \omega_{g-2} \pm \omega_\text{axion}$.
The sidebands with the cyclotron frequency, for instance $\omega_c - (\omega_{g-2} \pm \omega_\text{axion})$, are resonance frequencies as well, because the WF normally occupies a specific azimuthal position, so the coherent spin motion seen by the WF contains aliased Fourier components. In this paper, however, we assume the WF is distributed continuously along the azimuth, which simplifies the spin equations and allows an analytical solution.
To see the resonance condition $\omega_\text{WF} = \omega_{g-2} \pm \omega_\text{axion}$ explicitly, let us solve the spin equations for a reference particle that travels the storage ring in the reference orbit. Let $E_0$ and $B_0$ be the magnitudes of a constant radial electric field and a vertical magnetic field, respectively, needed to store the particle in the storage ring. Substituting the spin components in Eq. \eqref{eqn:Spin} with Eq. \eqref{eqn:omega_xy} and using the electromagnetic field $\mathbf{E} = E_0 \hat{e}_x + \mathbf{E}_\text{WF}$ and $\mathbf{B} = B_0 \hat{e}_y + \mathbf{B}_\text{WF}$ from Eq. \eqref{eqn:WFfields} yields
\begin{align}
\dot{S}_x &= -(\omega_{g-2} + \Omega_\text{WF}(t) ) S_s, \\
\dot{S}_y &= -\omega_\eta(t) S_s, \label{eqn:Sy} \\
\dot{S}_s &= \omega_\eta(t) S_y + (\omega_{g-2} + \Omega_\text{WF}(t) ) S_x,
\end{align}
where
\begin{align}
\omega_{g-2} &= \frac{q}{m} \left[ G B_0 - \left( G - \frac{1}{\gamma^2-1} \right) \frac{E_0}{c} \right]
\end{align}
is the $g-2$ frequency, and
\begin{align}
\Omega_\text{WF} (t) &= \frac{q}{m} \frac{G+1}{\gamma^2} \frac{E_0^\text{WF}}{\beta c} \cos(\omega_\text{WF} t + \phi_\text{WF}) \\
&\equiv a_\text{WF} \cos(\omega_\text{WF} t + \phi_\text{WF})
\end{align}
is the spin angular frequency component driven by the WF fields. Here $a_\text{WF}$ is a scaled WF field strength in units of the angular frequency. Finally, $\omega_\eta$ is the EDM-related term, namely
\begin{align}
\omega_\eta (t) &= -\frac{q}{2m} \left( \frac{E_0}{c} - \beta B_0 \right) \eta(t) \equiv -\frac{d(t)}{S} E^*,
\end{align}
where $E^* \equiv E_0 - v B_0$ is the effective electric field, which is proportional to the EDM signal. With a highly relativistic beam $v \approx c$, a vertical magnetic field of 1 T provides an effective electric field of roughly $300$ MV/m by itself.
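The effective electric field can be evaluated in one line. The sketch below reproduces the $\sim$300 MV/m figure for a 1 T field with a highly relativistic beam and, for comparison, evaluates the deuteron ring simulated later in the text ($\beta = 0.47$, $B = 0.111$ T); the $\beta = 0.999$ value is an illustrative stand-in for $v \approx c$:

```python
C = 299792458.0   # speed of light [m/s]

def effective_E(E0, beta, B0):
    """Effective electric field E* = E0 - v*B0 acting on the EDM [V/m]."""
    return E0 - beta * C * B0

# Highly relativistic beam (v ~ c) in a 1 T magnetic field: |E*| ~ 300 MV/m
E_star_1T = abs(effective_E(0.0, 0.999, 1.0))
print(E_star_1T / 1e6, "MV/m")

# Deuteron ring simulated later in the text: beta = 0.47, B = 0.111 T
E_star_d = abs(effective_E(0.0, 0.47, 0.111))
print(E_star_d / 1e6, "MV/m")   # ~15.6 MV/m
```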
Given the reasonable assumptions $|\omega_\eta| \ll |a_\text{WF}|$ and $|\omega_\eta| \ll |\omega_{g-2}|$, we adopt the strategy used in Sec. \Rom{2} of Ref. \cite{Morse2013}. First we write down the exact solution for the radial and longitudinal spin components without the EDM, then plug those solutions into Eq. \eqref{eqn:Sy} to obtain the approximate time derivative of the vertical spin component. Without the EDM term, the radial and longitudinal spin components are
\begin{align}
S_x &= -\sin\left( \omega_{g-2}t + \frac{a_\text{WF}}{\omega_\text{WF}} (\sin(\omega_\text{WF}t + \phi_\text{WF}) - \sin\phi_\text{WF}) \right), \\
S_s &= \cos\left( \omega_{g-2}t + \frac{a_\text{WF}}{\omega_\text{WF}} (\sin(\omega_\text{WF}t + \phi_\text{WF}) - \sin\phi_\text{WF}) \right),
\end{align}
where the initial polarization is assumed to be longitudinal $(S_s(0) = 1)$. Then plugging this $S_s$ into Eq. \eqref{eqn:Sy} yields
\begin{align}
\dot{S_y} &\approx \frac{d(t)}{S} E^* \cos\left( \omega_{g-2}t + \frac{a_\text{WF}}{\omega_\text{WF}} \sin(\omega_\text{WF}t) \right)
\end{align}
by setting $\phi_\text{WF} = 0$ for the moment. This equation immediately shows the working principle of the frozen-spin method for measuring the static EDM: when the WF is absent ($a_\text{WF} = 0$) and the spin is frozen ($\omega_{g-2} = 0$), it reduces to $\dot{S}_y \propto \eta_\text{DC}$, so the vertical spin component accumulates with a slope proportional to the DC EDM. The argument is similar when the WF is present. We will treat the DC and AC EDM terms separately to reveal the resonance conditions more explicitly. Recalling the EDM $\eta(t)$ in Eq. \eqref{eqn:eta}, it follows that
\begin{align} \label{eqn:S_y_DC}
\left( \dot{S_y} \right)_\text{DC} &\approx \frac{d_\text{DC}}{S} E^* \cos\left( \omega_{g-2}t + \frac{a_\text{WF}}{\omega_\text{WF}} \sin(\omega_\text{WF}t) \right)
\end{align}
for the DC EDM term. One can see the resonance condition is $\omega_\text{WF} = \omega_{g-2}$, as studied in Ref. \cite{Morse2013}. To see it more manifestly, take $\omega_{g-2}t$ in the cosine bracket as a sawtooth wave with a period $2\pi/\omega_{g-2}$. The remaining term of the argument is a sine function with a period $2\pi/\omega_\text{WF}$. If the periods of the two terms are not identical, then the phase argument uniformly sweeps from 0 to $2\pi$, which leads to $\langle \dot{S}_y \rangle = 0$ on average. The resonance happens when the two periods are identical, thus $\omega_\text{WF} = \omega_{g-2}$.
Similarly, the AC EDM term reads
\begin{widetext}
\begin{align}
\left( \dot{S_y} \right)_\text{AC} &\approx \frac{d_\text{AC}}{S} E^* \cos(\omega_\text{axion} t) \cos\left( \omega_{g-2}t + \frac{a_\text{WF}}{\omega_\text{WF}} \sin(\omega_\text{WF}t) \right) \label{eqn:S_y_AC_1} \\
&= \frac{d_\text{AC}}{2S} E^* \left[ \cos\left( (\omega_\text{axion} - \omega_{g-2})t - \frac{a_\text{WF}}{\omega_\text{WF}} \sin(\omega_\text{WF}t) \right) + \cos\left( (\omega_\text{axion} + \omega_{g-2})t + \frac{a_\text{WF}}{\omega_\text{WF}} \sin(\omega_\text{WF}t) \right) \right]. \label{eqn:S_y_AC}
\end{align}
\end{widetext}
Again we set $\phi_\text{axion} = 0$ for clarity. Restoring the WF phase and the axion phase in the above equations is straightforward but of no importance at this point. Using the same argument as for the DC EDM term, we can clearly see that there are two resonance conditions: $\omega_\text{WF} = \omega_{g-2} \pm \omega_\text{axion}$. When the WF is absent, the resonance occurs when $\omega_{g-2} = \omega_\text{axion}$, which is the condition used in Ref. \cite{Chang2019}.
We point out that in the presence of the WF, the average slope of the vertical spin component $\left\langle \dot{S}_y \right\rangle$ in the resonance condition is multiplied by the following factor:
\begin{align} \label{eq:C_WF}
C_\text{WF} \equiv \left\langle \cos\left( \omega t + \frac{a}{\omega} \sin(\omega t) \right) \right\rangle,
\end{align}
where we dropped the subscript WF on the right-hand side for brevity. Interestingly, it turns out that $C_\text{WF} = -J_1 (a/\omega)$, where $J_1$ is the Bessel function of the first kind; see App. \ref{app:CWF_derivation} for the derivation. The maximum absolute value of $C_\text{WF}$ is therefore roughly $0.58$, attained when $a/\omega \approx 1.84$. Finally, we have the following expression for the vertical spin slope (EDM signal) on resonance:
\begin{align} \label{eq:EDMsignal}
\omega_d = -\frac{d_\text{AC}}{2S} E^* J_1 \left( \frac{a_\text{WF}}{\omega_\text{WF}} \right)
\end{align}
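The identity $C_\text{WF} = -J_1(a/\omega)$ and the quoted optimum can also be verified numerically without the derivation in the appendix. The sketch below uses only numpy, computing $J_1$ from its integral representation $J_n(x) = \frac{1}{\pi}\int_0^\pi \cos(n\theta - x\sin\theta)\,d\theta$:

```python
import numpy as np

def period_avg(x, samples=20001):
    """<cos(theta + x sin theta)> over one period, theta = omega*t, x = a/omega."""
    th = np.linspace(0.0, 2.0 * np.pi, samples)
    y = np.cos(th + x * np.sin(th))
    return np.sum((y[1:] + y[:-1]) * np.diff(th)) / (4.0 * np.pi)  # trapezoid / 2pi

def bessel_J1(x, samples=20001):
    """J_1(x) via its integral representation (avoids a scipy dependency)."""
    u = np.linspace(0.0, np.pi, samples)
    y = np.cos(u - x * np.sin(u))
    return np.sum((y[1:] + y[:-1]) * np.diff(u)) / (2.0 * np.pi)   # trapezoid / pi

xs = np.linspace(0.01, 4.0, 400)
cs = np.array([period_avg(x) for x in xs])
assert np.allclose(cs, [-bessel_J1(x) for x in xs], atol=1e-6)

# The optimum quoted in the text: |C_WF| peaks near a/omega ~ 1.84 at ~0.58
i = int(np.argmax(np.abs(cs)))
print(f"|C_WF| peaks at a/omega = {xs[i]:.2f} with value {abs(cs[i]):.3f}")
```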
\begin{table*}[t]
\centering
\caption{Various methods seeking to probe either the stationary (DC) or oscillating (AC) EDM in storage rings. All methods exploit the vertical spin resonance for a specific resonance condition. $s$ is the spin quantum number. `srEDM' stands for the storage ring EDM experiments.}
\label{tab:various_methods}
\begin{tabular}{lcccc}
\hline\hline
Method & srEDM & srEDM + WF & srAxionEDM & srAxionEDM + WF \\
\hline
Measurement target & $d_\text{DC}$ & $d_\text{DC}$ & $d_\text{AC}$ & $d_\text{AC}$ \\
Resonance condition & $\omega_{g-2} = 0$ & $\omega_{g-2} = \omega_\text{WF}$ & $\omega_{g-2} = \omega_\text{axion}$ & $\omega_{g-2} = | \omega_\text{axion} \pm \omega_\text{WF} |$ \vspace{4pt} \\ \vspace{4pt}
Spin vertical slope ($\omega_d$) & $\dfrac{d_\text{DC}}{s \hbar} E^*$ & $\dfrac{d_\text{DC}}{s \hbar} E^* C_\text{WF}$ & $\dfrac{d_\text{AC}}{2 s \hbar} E^*$ & $\dfrac{d_\text{AC}}{2 s \hbar} E^* C_\text{WF}$ \\
References & \cite{Anastassopoulos2016} & \cite{Morse2013} & \cite{Chang2019, Pretz2020} & This work \\
\hline\hline
\end{tabular}
\end{table*}
An overview of the various methods used to measure EDM in storage rings, including this work, is provided in Table \ref{tab:various_methods}.
\section{Spin Tracking Simulation}
In this section, a spin tracking simulation of the reference particle is presented as a proof of principle. A deuteron, whose magnetic anomaly is $G_d = -0.143$, was simulated in a purely magnetic ring with a radius of 30 m. The reference momentum was set to 1 GeV/c, for which the $g-2$ frequency is around 121 kHz. The axion frequency was arbitrarily set to 180 kHz, corresponding to an axion mass of 0.7 neV/c$^2$. The initial setup parameters are summarized in Table \ref{tab:initial_params}.
\begin{table}[t]
\centering
\caption{The initial parameters for the spin tracking simulation with the RF Wien Filter.}
\label{tab:initial_params}
\begin{tabular}{lr}
\hline\hline
Parameter\phantom{blahblahblahblah} & Type or Value \\
\hline
Particle & Deuteron \\
Magnetic anomaly ($G$) & $-0.143$ \\
Ring radius & 30 m \\
Magnetic field strength & 0.111 T \\
Reference momentum & 1 GeV/c \\
Reference velocity ($\beta$) & 0.47 \\
Reference Lorentz factor ($\gamma$) & 1.133 \\
$g-2$ frequency & 121 kHz \\
Axion frequency & 180 kHz \\
\hline\hline
\end{tabular}
\end{table}
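The table entries are mutually consistent and can be reproduced from the ring radius and reference momentum alone. A short sketch (a standard deuteron rest energy of 1875.6 MeV/c$^2$ is assumed; it is not listed in the table):

```python
import math

C = 299792458.0         # speed of light [m/s]
H = 4.135667696e-15     # Planck constant [eV s]

mc2 = 1875.612e6        # deuteron rest energy [eV]
pc = 1.0e9              # reference momentum [eV/c]
R = 30.0                # ring radius [m]

gamma = math.sqrt(1.0 + (pc / mc2)**2)   # Lorentz factor -> 1.133
beta = pc / (gamma * mc2)                # velocity -> 0.47
B = pc / (C * R)                         # bending field B = p/(qR) -> 0.111 T

# 180 kHz axion frequency <-> axion mass m_a c^2 = h f ~ 0.7 neV
m_axion_eV = H * 180e3
print(gamma, beta, B, m_axion_eV)
```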
First, the simulation was conducted without applying the WF. Fig. \ref{fig:Sy_noWF} shows the vertical spin component ($S_y$) of the reference particle in the presence of the EDM: blue for the DC component only ($\eta_\text{DC} = 10^{-6}$) and orange for the AC component only ($\eta_\text{AC} = 10^{-6}$). Fig. \ref{fig:SyFFT_noWF} shows their Fourier spectra. One can immediately see that $\eta_\text{DC}$ is responsible for the peak at the $g-2$ frequency and $\eta_\text{AC}$ for its sidebands with the axion frequency, $f_\text{axion} \pm f_{g-2}$. The peak at the higher sideband, $f_\text{axion} + f_{g-2} \approx 300$ kHz, is clearly much smaller than that at the lower sideband, $f_\text{axion} - f_{g-2} \approx 60$ kHz. The reason is straightforward: as implied in Eq. \eqref{eqn:S_y_AC_1}, without the WF ($a_\text{WF} = 0$) the time derivative of the vertical spin component is asymptotically proportional to $\cos(\omega_\text{axion}t) \cos(\omega_{g-2} t)$. Integrating this expression, one obtains two sidebands whose amplitudes are inversely proportional to their frequencies. In this case, the ratio between the lower and higher sideband amplitudes is roughly 5, which matches the magnitude ratio of the two sideband peaks in Fig. \ref{fig:SyFFT_noWF}.
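This factor-of-5 asymmetry can be reproduced independently of the tracking code: integrating $\dot S_y \propto \cos(\omega_\text{axion}t)\cos(\omega_{g-2}t)$ yields sidebands with amplitudes $\propto 1/(\omega_\text{axion} \pm \omega_{g-2})$. A numpy sketch, with the frequencies rounded to 180 kHz and 120 kHz so that both sidebands fit an integer number of cycles in the window:

```python
import numpy as np

f_ax, f_g2 = 180e3, 120e3     # axion and g-2 frequencies [Hz], rounded
T, N = 1e-3, 200000           # 1 ms window: integer cycles of both sidebands
t = np.arange(N) * (T / N)

dSy = np.cos(2 * np.pi * f_ax * t) * np.cos(2 * np.pi * f_g2 * t)
Sy = np.cumsum(dSy) * (T / N)           # crude time integration of dS_y/dt

spec = np.abs(np.fft.rfft(Sy))
freqs = np.fft.rfftfreq(N, T / N)
lo = spec[np.argmin(np.abs(freqs - (f_ax - f_g2)))]   #  60 kHz sideband
hi = spec[np.argmin(np.abs(freqs - (f_ax + f_g2)))]   # 300 kHz sideband
ratio = lo / hi
print(ratio)    # ~ (f_ax + f_g2)/(f_ax - f_g2) = 5
```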
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/Sy_noWF_etaDCAC.png}
\caption{Vertical spin component versus time, with only the DC EDM of $\eta_\text{DC} = 10^{-6}$ (blue), and only the AC EDM of $\eta_\text{AC} = 10^{-6}$ (orange).}
\label{fig:Sy_noWF}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/SyFFT_noWF_etaDCAC.png}
\caption{Fourier spectra of the vertical spin component, with only the DC EDM of $\eta_\text{DC} = 10^{-6}$ (blue) and only the AC EDM of $\eta_\text{AC} = 10^{-6}$ (orange). In the simulation the blue peak is located at $f_{g-2} \approx 120$ kHz and the orange peaks are located at $f_{g-2} \pm f_\text{axion}$ where $f_\text{axion} = 180$ kHz.}
\label{fig:SyFFT_noWF}
\end{figure}
Next, we want to observe the vertical spin resonance in the presence of the WF operating at one of the sidebands of the axion and $g-2$ frequency. The WF was assumed to occupy the full azimuth of the storage ring, and its electric field strength was set to 1 MV/m. As Fig. \ref{fig:Sy_WF} shows, the vertical spin component accumulates in one direction when the WF is applied at either of the two sidebands. The accumulation directions for the two cases are opposite; they depend on the signs of $\omega_\text{axion} \pm \omega_{g-2}$ and $\omega_\text{WF}$. The slope also depends on the sideband frequency, as in the previous argument: $\langle \dot{S}_y \rangle \propto 1/\omega_\text{WF}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/Sy_WFfapm.png}
\caption{Vertical spin component versus time, when an RF Wien Filter of 1 MV/m electric field strength was applied at the higher sideband $f_\text{axion} + f_{g-2}$ (blue) and lower sideband $f_\text{axion} - f_{g-2}$ (orange). The Wien Filter is continuously located on the storage ring in the simulation.}
\label{fig:Sy_WF}
\end{figure}
So far we have used a deuteron in the simulations. It is worth considering a proton as well, which has a magnetic anomaly $G_p = 1.793$ and a mass of about 938 MeV/c$^2$, almost half that of the deuteron. Assuming the same reference momentum $p = 1$ GeV/c and a WF electric field strength of 1 MV/m, the corresponding $g-2$ frequency for the proton becomes 3 MHz, 25 times larger than that of the deuteron. As a result, the growth rate of the vertical spin component for the proton is around an order of magnitude smaller than that of the deuteron, as shown in Fig. \ref{fig:Sy_WF_proton}.
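The factor of 25 between the proton and deuteron cases follows directly from $f_{g-2} = q|G|B/(2\pi m)$ for a purely magnetic ring; a quick check (standard rest energies assumed):

```python
import math

C = 299792458.0   # speed of light [m/s]

def f_g2(G, mc2_eV, B):
    """g-2 frequency q|G|B/(2 pi m) in a purely magnetic ring [Hz]."""
    return abs(G) * C**2 * B / mc2_eV / (2.0 * math.pi)

B = 1e9 / (C * 30.0)                 # 1 GeV/c at R = 30 m -> 0.111 T
fp = f_g2(1.793, 938.272e6, B)       # proton   -> ~3.0 MHz
fd = f_g2(-0.143, 1875.612e6, B)     # deuteron -> ~121 kHz
print(fp / 1e6, fd / 1e3, fp / fd)   # ratio ~ 25
```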
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/Sy_WF_proton_1GeV.png}
\caption{Vertical spin component versus time for a proton with a momentum 1 GeV/c, when an RF Wien Filter of 1 MV/m electric field strength was applied at the higher sideband $f_\text{axion} + f_{g-2}$ (blue) and lower sideband $f_\text{axion} - f_{g-2}$ (orange). The Wien Filter is continuously located on the storage ring in the simulation.}
\label{fig:Sy_WF_proton}
\end{figure}
We also briefly tested a WF occupying only a fraction of the storage ring azimuth instead of being distributed continuously along the ring. Figure \ref{fig:Sy_WF_WFtheta} shows the vertical spin component on resonance for different WF azimuthal occupancies; consistent with the natural expectation, the slope scales linearly with the occupancy.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/Sy_WFm.png}
\caption{Vertical spin component versus time for different Wien Filter azimuthal occupancy: continuously located along the storage ring (blue), taking only 10\% of the ring azimuth (orange) and 1\% (green).}
\label{fig:Sy_WF_WFtheta}
\end{figure}
\section{Systematic Error Studies}
\subsection{Wien Filter Misalignment}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/tilted_WF.png}
\caption{The electromagnetic field from the RF Wien Filter, tilted by an angle $\theta$. Ideally, the electric field of the Wien Filter designed for this storage ring should only have a radial component ($\hat{e}_x$) and the magnetic field should only have a vertical component ($\hat{e}_y$). But this small angle $\theta$ might be there because of the misalignment.}
\label{fig:tilted_WF}
\end{figure}
In Eqs. \eqref{eqn:S_y_DC} and \eqref{eqn:S_y_AC}, we found that the DC component of the EDM is sensitive to the $g-2$ frequency of WF, and the AC component is sensitive to the sidebands of the $g-2$ and axion frequency, respectively. It turns out that the former case is vulnerable to a systematic error from WF misalignment\cite{Morse2013}. When the RF electric and magnetic fields point in a slightly tilted direction, as illustrated in Fig. \ref{fig:tilted_WF}, the resulting radial component of the RF magnetic field can drive the resonance of the vertical spin component, mimicking the EDM signal. Specifically, the radial component of the spin angular frequency induced by WF misalignment reads
\begin{align}
\omega_{sx}^\text{WF} &= -\frac{q}{m} \left[ \left( G + \frac{1}{\gamma} \right) B_x^\text{WF} + \left( G + \frac{1}{\gamma + 1} \right) \frac{\beta E_y^\text{WF}}{c} \right] \\
&= \theta_m a_\text{WF} \cos(\omega_\text{WF} t + \phi_\text{WF}),
\end{align}
where $\theta_m$ is the tilted angle of the WF. This leads to the slope of the vertical spin component coming from the WF misalignment systematic effect,
\begin{align}
\left( \dot{S_y} \right)_\text{Syst.} = -\theta_m a_\text{WF} \cos(\omega_\text{WF} t + \phi_\text{WF}) S_s.
\end{align}
In the resonance condition $\omega_\text{WF} = \omega_{g-2}$, it leads to a systematic vertical spin slope proportional to
\begin{align} \label{eq:C_Syst}
C_\text{Syst.} &\equiv \left\langle \cos(\omega t) \cos\left( \omega t + \frac{a}{\omega} \sin(\omega t) \right) \right\rangle
\end{align}
having additional $\cos(\omega t)$ compared to Eq. \eqref{eq:C_WF}. It is also shown that $C_\text{Syst.}$ can be represented by the Bessel function of the first kind: $C_\text{Syst.} = \frac{1}{2} \left\{ J_0 \left( \frac{a}{\omega} \right) + J_2 \left( \frac{a}{\omega} \right) \right\}$. The derivation is provided in App. \ref{app:CWF_derivation}. Therefore, we have a systematic vertical spin slope, as follows.
\begin{align} \label{eq:omega_syst}
\left\langle \left( \dot{S_y} \right)_\text{Syst.} \right\rangle = -\frac{1}{2} \theta_m a \left[ J_0 \left( \frac{a}{\omega} \right) + J_2 \left( \frac{a}{\omega} \right) \right].
\end{align}
Equation \eqref{eq:omega_syst} was confirmed by the spin tracking simulation, as shown in Fig. \ref{fig:omega_syst}. The basic setup was the same as in Table \ref{tab:initial_params}, except for a WF misalignment angle of 1 $\mu$rad. Both the DC and AC EDM were set to 0 to ensure the effect was purely systematic. The slopes obtained from the spin tracking simulation are in excellent agreement with the analytical calculation. This systematic effect would be troublesome if the WF misalignment angle could not be controlled to very small tolerances, on the order of nanoradians, but the present method is largely free from it.
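The Bessel identity $C_\text{Syst.} = \frac{1}{2}[J_0(a/\omega) + J_2(a/\omega)]$ can also be checked numerically in a few lines, independently of App. \ref{app:CWF_derivation}: expanding $\cos(\omega t)\cos(\omega t + x\sin\omega t) = \frac{1}{2}[\cos(x\sin\omega t) + \cos(2\omega t + x\sin\omega t)]$ and averaging gives $\frac{1}{2}[J_0(x) + J_2(x)]$. A numpy-only sketch, with $J_n$ taken from its integral representation:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200001)

def period_avg(y):
    """Trapezoidal average of y(theta) over one period."""
    return np.sum((y[1:] + y[:-1]) * np.diff(theta)) / (4.0 * np.pi)

def bessel_J(n, x, samples=100001):
    """J_n(x) = (1/pi) * integral_0^pi cos(n*u - x*sin(u)) du."""
    u = np.linspace(0.0, np.pi, samples)
    y = np.cos(n * u - x * np.sin(u))
    return np.sum((y[1:] + y[:-1]) * np.diff(u)) / (2.0 * np.pi)

for x in (0.5, 1.0, 1.84, 3.0):
    c_syst = period_avg(np.cos(theta) * np.cos(theta + x * np.sin(theta)))
    assert abs(c_syst - 0.5 * (bessel_J(0, x) + bessel_J(2, x))) < 1e-8
print("identity C_Syst = (J0 + J2)/2 holds numerically")
```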
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/OmegaSyst_tracking_analytical.pdf}
\caption{The vertical spin slope driven by the WF misalignment, obtained from the spin tracking simulation (blue circles) and by the analytical expression in Eq. \eqref{eq:C_Syst} (red curve). Both the DC and AC EDM were set to 0 for the spin tracking. The misalignment angle $\theta_m$ is 1 $\mu$rad.}
\label{fig:omega_syst}
\end{figure}
This systematic effect can be avoided because the WF frequency, $\omega_\text{axion} \pm \omega_{g-2}$, is not close to the $g-2$ frequency, as confirmed in Fig. \ref{fig:Sy_misalignedWF}. The WF at the lower sideband frequency was applied in the presence of the AC EDM only. Each color represents the degree of misalignment, from 0 to 100 $\mu$rad. Regardless of the tilt angle $\theta_m$, the vertical spin component grows with the same average slope. The larger fluctuations in the growth for the 100 $\mu$rad case, for example, are not a problem, since the average slope is what matters for detecting the vertical spin resonance.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/Sy_WF100perc_WFmisalign.png}
\caption{The vertical spin component versus time, when there is only the AC EDM component ($\eta_\text{AC} = 10^{-6}$) and the RF Wien Filter frequency is the lower sideband $f_\text{axion} - f_{g-2}$. The Wien Filter is assumed to be perfectly aligned (blue), misaligned by 1 $\mu$rad (orange), 10 $\mu$rad (green) and 100 $\mu$rad (red). The tilted direction is the same in all cases, following Fig. \ref{fig:tilted_WF}.}
\label{fig:Sy_misalignedWF}
\end{figure}
When the misalignment angle of the electric field differs from that of the magnetic field generated by the WF, the particles experience a net Lorentz force, which can be immediately corrected through precise beam position monitoring.
\subsection{Intrinsic Resonances and Field Errors}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/WF_with_random_field_error.png}
\caption{The fluctuations of the vertical spin component versus time in the presence of random field errors. The blue curve is the reference plot without field errors. Magnetic field errors up to the octupole ($k=4$) and the 8-th azimuthal harmonic ($N=8$) were implemented, with relative strengths $b_{k, N}/B_0$ of 1 (orange) and 5 (green) parts per million (ppm), respectively. Ten independent simulation runs were conducted for each category, and the band between the maximum and minimum values of the vertical spin component at each time bin is plotted.}
\label{fig:Sy_random_field_error}
\end{figure}
There are also intrinsic systematic error sources in particle accelerators, namely the betatron tune and spin resonances. A general condition of the spin resonance is given as\cite{Mane2005, Conte2008}
\begin{align}
N_\text{spin} \nu_\text{spin} + N_x \nu_x + N_y \nu_y + N_\text{sync} \nu_\text{sync} = N,
\end{align}
where $\nu_\text{spin}, \nu_x, \nu_y, \nu_\text{sync}$ are the spin, horizontal, vertical and synchrotron tunes, respectively, and the coefficients and $N$ are integers. Although not every set of tunes satisfying the above condition leads to a resonance strong enough to depolarize the beam, it is recommended to avoid the resonances as much as possible, especially the low-order ones with relatively small $|N|$. In general, this can be done by adjusting the focusing field index and carefully setting the spin precession frequency.
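The condition above can be scanned mechanically for nearby low-order resonances when choosing a working point. The sketch below is illustrative only: $\nu_\text{spin} = G\gamma \approx -0.162$ corresponds to the simulated deuteron, while the betatron and synchrotron tunes are hypothetical placeholders, not values from this paper:

```python
from itertools import product

def low_order_resonances(nu, max_coeff=3, max_N=3, tol=1e-3):
    """Integer combinations N_spin*nu_spin + N_x*nu_x + N_y*nu_y + N_sync*nu_sync ~ N."""
    hits = []
    rng = range(-max_coeff, max_coeff + 1)
    for coeffs in product(rng, repeat=len(nu)):
        if all(c == 0 for c in coeffs):
            continue
        s = sum(c * v for c, v in zip(coeffs, nu))
        if abs(s - round(s)) < tol and abs(round(s)) <= max_N:
            hits.append((coeffs, int(round(s))))
    return hits

# nu = (nu_spin, nu_x, nu_y, nu_sync); betatron/synchrotron tunes are assumed
tunes = (-0.162, 1.44, 1.39, 0.01)
for coeffs, N in low_order_resonances(tunes, max_coeff=2):
    print(coeffs, "->", N)
```

The scan may print nothing if the chosen working point is resonance-free at this order and tolerance, which is the desired outcome.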
The field errors are closely related to the intrinsic resonances as well, since the beam experiences them periodically. The field errors can be represented using a multipole and Fourier expansion,
\begin{align}
B_x (x, y, s) &= \sum_{k=1, N=1} b_{k, N} \,\Im \left( \frac{x + iy}{r_a} \right)^{k-1} \cos \left( N \frac{s}{R} + \phi_N \right), \\
B_y (x, y, s) &= \sum_{k=1, N=1} b_{k, N} \,\Re \left( \frac{x + iy}{r_a} \right)^{k-1} \cos \left( N \frac{s}{R} + \phi_N \right),
\end{align}
which describes the normal $2k$-pole, $N$-th azimuthal harmonic magnetic field, where $r_a$ is the beam storage acceptance radius and $R$ is the storage ring radius. One can swap the real and imaginary parts and put an additional negative sign in $B_y$ to obtain the skew multipole components.
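The field-error model above can be implemented directly. Below is a sketch (not the paper's simulation code) with random stand-in coefficients $b_{k,N}$ at the 1 ppm level and illustrative phases; on the reference orbit ($x = y = 0$) only the dipole ($k = 1$) terms survive:

```python
import numpy as np

def field_error(x, y, s, b, phi, r_a=0.1, R=30.0):
    """Normal multipole field errors B_x, B_y from the expansion in the text.

    b[k-1, N-1] is the strength of the 2k-pole, N-th azimuthal harmonic [T].
    """
    Bx = By = 0.0
    K, Nmax = b.shape
    for k in range(1, K + 1):
        z = ((x + 1j * y) / r_a) ** (k - 1)
        for N in range(1, Nmax + 1):
            az = np.cos(N * s / R + phi[k - 1, N - 1])
            Bx += b[k - 1, N - 1] * z.imag * az
            By += b[k - 1, N - 1] * z.real * az
    return Bx, By

rng = np.random.default_rng(0)
b = 1e-6 * 0.111 * rng.uniform(-1, 1, (4, 8))   # 1 ppm of B0, up to octupole, N = 8
phi = rng.uniform(0, 2 * np.pi, (4, 8))

# On the reference orbit (x = y = 0) only the dipole (k = 1) terms contribute
Bx0, By0 = field_error(0.0, 0.0, 10.0, b, phi)
print(Bx0, By0)
```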
We studied the effect of the field errors by implementing the above field components with randomly generated $b_{k, N}$ up to the octupole ($k=4$) and the 8-th azimuthal harmonic ($N=8$), where $r_a$ is chosen to be 10 cm. Although the relative strength $b_{k, N}/B_0$ typically decreases as the order gets higher, we set the same scale for all $k$ and $N$ to avoid complications. The result is shown in Fig. \ref{fig:Sy_random_field_error}, where the blue curve is the nominal $S_y$ on resonance without field errors, shown as a reference. The other curves correspond to maximum values of the randomly generated relative field strength $b_{k, N}/B_0$ of 1 part-per-million (ppm) (orange) and 5 ppm (green), respectively. The two curves with field errors actually show fluctuation bands, filled between the maximum and minimum values of $S_y$ over 10 independent series of simulations for each case. The spin motion is clearly quite stable with field errors of order 1 ppm, but shows larger fluctuations at 5 ppm. These fluctuations average out over long storage times.
In general, if the proposed experiment encounters a systematic spin resonance due to one of many potential sources, such as an intrinsic tune resonance or Berry's phase, one can determine whether it is systematic noise or a real signal by readjusting the $g-2$ frequency and the Wien Filter frequency while targeting the same axion frequency.
\section{Estimation of Statistical Sensitivity} \label{sec:sensitivity}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/Sensitivity.pdf}
\caption{The projected sensitivity to the axion-gluon coupling strength in the axion parameter space for axionlike dark matter. The colored dashed lines indicate the present study, depending on the spin coherence time $\tau_p$: $10^3$ s (red), $10^4$ s (green) and $10^5$ s (blue). It assumes the given values in Eq. \eqref{eq:sigma_d_nominal_value} for the proton. The integrated measurement time at each frequency is one year. The filled regions that were already excluded by the neutron EDM (nEDM) experiment\cite{Abel2017} and the big bang nucleosynthesis\cite{Blum2014} are shown as references, as well as the QCD axion band in Eq. \eqref{eq:QCDaxion}.}
\label{fig:Sensitivity}
\end{figure}
The statistical sensitivity to the oscillating EDM is derived in a rigorous manner in App. \ref{app:Sensitivity}:
\begin{align} \label{eq:sigma_d_sensitivity}
\sigma_d = \frac{4.67 s \hbar}{P_0 A E^* C_\text{WF} \sqrt{\kappa N_\text{cyc} T_\text{exp} \tau_p}},
\end{align}
where $P_0$ is the initial beam polarization, $A$ is the analyzing power, $E^*$ is the equivalent electric field in the storage ring, $C_\text{WF}$ is the coefficient determined by the WF performance, $\kappa$ is the polarimeter efficiency, $N_\text{cyc}$ is the number of stored particles in one cycle (single measurement), $T_\text{exp}$ is the total experimental period and $\tau_p$ is the spin coherence time. Plugging in the typical experimental numbers in the ideal situation, one obtains
\begin{widetext}
\begin{align} \label{eq:sigma_d_nominal_value}
\sigma_d = 9.3 \times 10^{-31} \text{ [} e \cdot \text{cm]}
\left( \frac{s}{1/2} \right) \left( \frac{0.8}{P_0} \right) \left( \frac{0.6}{A} \right) \left( \frac{100 \text{ MV/m}}{E^*} \right) \left( \frac{0.59}{C_\text{WF}} \right) \sqrt{\left( \frac{1.1\%}{\kappa} \right) \left( \frac{10^{11}}{N_\text{cyc}} \right) \left( \frac{1 \text{ yr}}{T_\text{exp}} \right) \left( \frac{10^3 \text{ s}}{\tau_p} \right)}
\end{align}
\end{widetext}
for the case of the proton. This calculation assumes that we can obtain a large $C_\text{WF}$ for all target frequencies in the axion parameter space. In the realistic case, $C_\text{WF}$ is more restricted, depending on the WF performance. For instance, $a_\text{WF}$ should be proportional to the azimuthal fraction of the storage ring that the WF occupies. We do not cover the technical difficulties and details of achieving the maximum $C_\text{WF}$ in this paper.
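As a cross-check, the nominal value quoted in Eq. \eqref{eq:sigma_d_nominal_value} can be reproduced by direct evaluation of Eq. \eqref{eq:sigma_d_sensitivity}. The sketch below (our own function name) uses rounded physical constants, so it agrees with the quoted $9.3\times 10^{-31}$ $e\cdot$cm only to within a percent or two:

```python
import math

hbar_eV_s = 6.582e-16   # reduced Planck constant [eV s] (rounded)
yr = 3.156e7            # one year [s]

def sigma_d(s=0.5, P0=0.8, A=0.6, E_star_V_per_cm=1e6,   # 100 MV/m
            C_WF=0.59, kappa=0.011, N_cyc=1e11,
            T_exp=yr, tau_p=1e3):
    """EDM sensitivity of Eq. (sigma_d_sensitivity), in units of e*cm.
    Since eV/(V/cm) = e*cm, working in eV and V/cm gives e*cm directly."""
    num = 4.67 * s * hbar_eV_s
    den = P0 * A * E_star_V_per_cm * C_WF * math.sqrt(
        kappa * N_cyc * T_exp * tau_p)
    return num / den

print(f"{sigma_d():.1e} e*cm")   # close to the quoted 9.3e-31
```

Evaluating with the other $\tau_p$ values in the figure simply rescales the result by $\sqrt{10^3\,\text{s}/\tau_p}$.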
In a search for axionlike dark matter, we can exclude the axion-gluon coupling parameter space with a sensitivity proportional to $\sigma_d$. The QCD axion has the relationship\cite{diCortona2016}
\begin{align} \label{eq:QCDaxion}
m_a^\text{QCD} \approx 5.7 \text{ $\mu$eV } \left( \frac{10^{12} \text{ GeV}}{(f_a/C_G)^\text{QCD}} \right),
\end{align}
where $C_G$ is a model-dependent dimensionless coefficient of the axion-gluon coupling Lagrangian and $f_a$ is the symmetry breakdown scale. Exploiting this relation, the value of $C_G/f_a$ that can be excluded from the parameter space, if no EDM is discovered at the uncertainty $\sigma_d$, is given as
\begin{align} \label{eq:CGfa}
\left( \frac{C_G}{f_a} \right)_\text{exc.} = \left( \frac{C_G}{f_a} \right)^\text{QCD} \frac{\sigma_d}{\left| d_n^\text{QCD} \right|},
\end{align}
where $d_n^\text{QCD} \approx 9 \times 10^{-35} \cos(m_a t)$ [$e \cdot$ cm] holds for the QCD axion\cite{Graham2013}.
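Combining Eqs. \eqref{eq:QCDaxion} and \eqref{eq:CGfa}, the excludable coupling at a given axion mass follows from a one-line inversion. The sketch below (our own function name) uses the amplitude of the oscillating $d_n^\text{QCD}$:

```python
def cgfa_excluded(sigma_d, m_a_ueV):
    """Excludable C_G/f_a [GeV^-1] from the QCD-axion relation and
    the exclusion formula above; sigma_d in e*cm, m_a in micro-eV."""
    cgfa_qcd = (m_a_ueV / 5.7) * 1e-12   # invert m_a^QCD = 5.7 ueV * (1e12 GeV)/(f_a/C_G)
    d_n_qcd_amp = 9e-35                  # amplitude of d_n^QCD [e*cm]
    return cgfa_qcd * sigma_d / d_n_qcd_amp
```

Note that the excludable coupling scales linearly with both $\sigma_d$ and $m_a$, which is why the projected curves in Fig. \ref{fig:Sensitivity} rise toward higher axion masses.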
The projected sensitivity on $C_G/f_a$ is shown in Fig. \ref{fig:Sensitivity}, indicated by the colored dashed lines. These were obtained from Eq. \eqref{eq:CGfa} using Eq. \eqref{eq:sigma_d_nominal_value} with the values given there, except for the spin coherence time. The minimum axion frequency that one can scan is determined by the single measurement time, as long as it is free from systematic effects. Therefore, a higher spin coherence time not only improves the sensitivity at a given frequency but also widens the scanning range. The optimum single measurement time, minimizing the statistical uncertainty of the repeated measurement, is shown to be $\tau_p/2$, as derived in App. \ref{app:Sensitivity}. There are kinks in the curves, beyond which the projected sensitivity degrades rapidly. This is because beyond these points the single measurement time $T$ has to be smaller than $\tau_p/2$, as the axion phase decoheres faster than $\tau_p/2$. Explicitly, $T$ is given by
\begin{align}
T = \min \left\{ \frac{Q_\text{axion}}{f_\text{axion}}, \frac{\tau_p}{2} \right\},
\end{align}
where $Q_\text{axion}$ is the axion quality factor, which was assumed to be $10^{6}$\cite{Krauss1985, Turner1990}.
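In code, the choice of $T$ and the location of the kink can be made explicit (a sketch; the function name is ours):

```python
def single_measurement_time(f_axion, tau_p, Q_axion=1e6):
    """T = min(Q_axion / f_axion, tau_p / 2): a single measurement is
    limited either by the axion coherence time or by tau_p / 2."""
    return min(Q_axion / f_axion, tau_p / 2)

# The kink occurs where the two limits cross, f_kink = 2 * Q_axion / tau_p;
# for tau_p = 1e3 s and Q_axion = 1e6 this is 2 kHz.
```

Below the kink frequency $T$ is set by $\tau_p/2$; above it, $T$ shrinks as $1/f_\text{axion}$, which drives the rapid loss of sensitivity.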
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/Sensitivity_gd.pdf}
\caption{The projected sensitivity to the ALP-nucleon EDM coupling strength ($g_d$) in the ALP parameter space for axionlike dark matter. The colored dashed lines indicate the present study, depending on the spin coherence time $\tau_p$: $10^3$ s (red), $10^4$ s (green) and $10^5$ s (blue). It assumes the given values in Eq. \eqref{eq:sigma_d_nominal_value} for the proton. The integrated measurement time at each frequency is one year. The filled regions that were excluded by excess cooling in SN1987A (light blue) and the static EDM measurement (orange) are adapted from Ref. \citep{Graham2013}, with proper extension to the range in the present parameter space.}
\label{fig:Sensitivity_gd}
\end{figure}
On the other hand, the parameter space for the new coupling between the axion-like particles (ALPs) and the nucleon, $g_d$, can be scanned. This coupling, which is directly responsible for the oscillating nucleon EDM, appears in the Lagrangian\citep{Graham2013}
\begin{align}
\mathcal{L} \ni -\frac{i}{2} g_d a \bar{N} \sigma_{\mu\nu} \gamma_5 N F^{\mu\nu},
\end{align}
where $a$ is the ALP field interacting with the nucleon $N$. Assuming the ALP makes up all of the local dark matter with a density $\rho_\text{DM} \approx 0.3 \text{ GeV/cm$^3$}$, the nucleon EDM is given by\citep{Graham2013}
\begin{align}
d_n \approx \left( 1.4 \times 10^{-25} \; e \cdot \text{cm} \right) \left( \frac{\text{eV}}{m_a} \right) \left( \frac{g_d}{\text{GeV}^{-2}} \right) \cos(m_a t).
\end{align}
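Inverting the amplitude of this expression gives the $g_d$ reach for a given EDM sensitivity (a sketch; the function name is ours):

```python
def gd_excluded(sigma_d, m_a_eV):
    """Excludable ALP-nucleon EDM coupling g_d [GeV^-2] for an EDM
    sensitivity sigma_d [e*cm] at ALP mass m_a [eV], inverting the
    amplitude of d_n above."""
    return (sigma_d / 1.4e-25) * m_a_eV

# e.g. gd_excluded(9.3e-31, 1e-6) gives the reach at m_a = 1 micro-eV
```

Since the induced EDM amplitude falls as $1/m_a$ at fixed $g_d$, the excludable $g_d$ grows linearly with $m_a$, matching the slope of the dashed curves in Fig. \ref{fig:Sensitivity_gd}.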
Figure \ref{fig:Sensitivity_gd} shows the projected sensitivity to the EDM coupling $g_d$. The constraints from the excess cooling in SN1987A and the static EDM measurement are provided in Ref. \citep{Graham2013}, where we have set the lower bound of the constraint from the static EDM to 1/130 Hz, as the experimental time for a single shot was $t_\text{shot} = 130$ seconds\cite{Baker2006}.
Recently, a study called ALP cogenesis\cite{Co2021} offered new predictions for the ALP couplings. Its projected region lies above the QCD axion band, strengthening the motivation for scanning that part of the axion parameter space.
\section{Conclusion}
Employing an RF Wien Filter in the storage ring EDM method provides a powerful way to probe an oscillating EDM. We showed that a vertical spin resonance occurs when the Wien Filter operates at the sideband frequencies of the axion and $g-2$ frequencies, $f_\text{axion} \pm f_{g-2}$. The approximate analytic solution for the spin resonance agrees well with the simulation results. Scanning the axion frequency is straightforward, easily performed by tuning the Wien Filter frequency. This method avoids the large systematic effect that arises when the Wien Filter frequency is close to the $g-2$ frequency, as confirmed by both analytical calculations and spin tracking simulations. Even when the Wien Filter frequency must be close to the $g-2$ frequency to search for low-mass axions ($f_\text{axion} < 1$ Hz), the systematic effect from Wien Filter misalignment is well understood and can be corrected precisely.
The systematic effect of random field errors was also studied; it showed no critical influence on the vertical spin component, at least up to 5 ppm. Nonetheless, further intensive numerical studies under more realistic lattice and beam conditions are necessary to understand the details of all systematic effects before conducting an experiment.
This particular idea of introducing an RF Wien Filter to look for the axion-induced oscillating EDM might be of interest for frozen-spin proton and deuteron EDM experiments in storage rings, because it allows physics probing data to be taken simultaneously from the static DC EDM and the oscillating AC EDM. This method can be applied to existing storage rings, e.g., the muon $g-2$ experiment at Fermilab\cite{Abi2021}, by storing polarized proton or deuteron beams.
\begin{acknowledgements}
This work was supported by IBS-R017-D1-2021-a00. We appreciate informative discussions with members of the storage ring EDM collaboration.
\end{acknowledgements}
\onecolumngrid
\section*{Acknowledgements}
The work of JA is partially supported by Fondecyt 1010967, Ecos-Conicyt C01E05 and Secretaria de Estado de Universidades e
Investigaci\'on SAB2003-0238(Spain). He wants to thank A.A. Andrianov for several useful remarks; and interesting
conversations with H.A. Morales-T\'ecotl, L.F.Urrutia, D. Sudarsky, C. Kounnas, C. Bachas, V. Kazakov, A. Bilal,
M.B. Gavela, E. Alvarez and A. Gonz\'alez-Arroyo. He acknowledges the hospitality of the Perimeter Institute and Ecole Normale
Superieure,Paris.
\section{Introduction} \label{sec1}
Earth based neutrino oscillation experiments \cite{Abe:2013hdq, Ahn:2012nd, Abe:2012tg, An:2012eh, Araki:2004mb, Adamson:2008zt, Fukuda:1998mi, Ahmad:2002jz} have confirmed that neutrinos are massive, which is the first known departure from the Standard Model of particle physics where neutrinos are considered to be massless. The 3 neutrino flavor states ($\nu_e$, $\nu_{\mu}$, $\nu_{\tau}$) are quantum superpositions of the 3 mass eigenstates ($\nu_i$, with respective distinct masses, $m_i$ for $i=1,2,3$). However, because oscillation experiments use ultra relativistic neutrinos they are only sensitive to the mass-squared splittings ($\Delta m_{ij}^2 = m_i^2 - m_j^2$) and not the absolute masses, thus keeping the mass of the lightest neutrino unbounded. On the other hand, while magnitudes of $\Delta m_{21}^2$ and $\Delta m_{31}^2$ are known to considerable accuracy from the current oscillation data, sign of $\Delta m_{31}^2$ is unknown. This leads to two possible hierarchies of neutrino masses: $m_1<m_2\ll m_3$ (normal hierarchy or NH) and $m_3\ll m_1<m_2$ (inverted hierarchy or IH) depending on whether $\Delta m_{31}^2$ is positive or negative, respectively. As per the latest NuFit 4.0 \cite{Esteban:2018azc} global analysis of oscillations data, the values of the mass-squared splittings (in units of eV$^{2}$) are (limits are given at 1$\sigma$):
\begin{equation} \label{eq1}
\Delta m_{21}^2 = 7.39^{+0.21}_{-0.20}\times 10^{-5};~~ \Delta m_{31}^2= 2.525^{+0.033}_{-0.032}\times 10^{-3} (\textrm{NH});~~\Delta m_{32}^2= - 2.512^{+0.034}_{-0.032}\times 10^{-3} (\textrm{IH}).
\end{equation}
Here, the value for $\Delta m_{21}^2$ is applicable to both NH and IH, while the other values are for the particular hierarchies as mentioned in the brackets. It is to be noted that for IH, $\Delta m_{32}^2$ (whose value is negative) is provided instead of $\Delta m_{31}^2$, but since $\Delta m_{21}^2 \ll |\Delta m_{32}^2|$, the sign of $\Delta m_{31}^2$ for IH is also negative (since $\Delta m_{31}^2 = \Delta m_{32}^2 + \Delta m_{21}^2$). See \cite{Forero:2014bxa, Gonzalez-Garcia:2014bfa, Esteban:2016qun, Capozzi:2016rtj, Capozzi:2017ipn, Caldwell:2017mqu} for results from other global analyses.
A solution to the neutrino mass hierarchy problem may come from cosmology, which currently provides the strongest bounds on the absolute neutrino mass scale, defined as the sum of the three active neutrino masses,
\begin{equation} \label{eq2}
\sum m_{\nu} = m_1 + m_2 + m_3.
\end{equation}
As far as known physics goes, at temperatures $T \gg {\rm MeV}$, neutrinos remain in equilibrium with the primordial plasma through the standard model weak interactions. At around $T\sim {\rm MeV}$ neutrinos decouple from the plasma and start free streaming. When neutrinos are relativistic ($T \gg m_{\nu}$) they contribute to the radiation energy density. This continues until much later when they turn non-relativistic at temperatures $T \sim m_{\nu}$, and then they contribute to the matter energy density. Effects of massive neutrinos on cosmological observables have been extensively studied in the literature \cite{Lesgourgues:2006nd, Wong:2011ip, Lesgourgues:2012uu, Abazajian:2013oma, Lesgourgues:2014zoa, Archidiacono:2016lnv, Lattanzi:2017ubx} and these effects help in constraining the sum of neutrino masses. If we consider neutrinos with masses $\ll 1$ eV, at the time of photon decoupling they are still relativistic, and their mass has very limited effect on the photon perturbations and their evolution. Hence, for the primary CMB signal, the effect of small neutrino masses can only be seen through the background evolution, and secondary anisotropies like Integrated Sachs-Wolfe (ISW) effect, and these too can be compensated partially by varying other free parameters in the $\Lambda$CDM model. Thus, strong bounds on $\sum m_{\nu}$ cannot be obtained with CMB power spectrum data alone. On the other hand, at late times, neutrinos affect the evolution of matter perturbations to a large extent. Due to the free-streaming effect of neutrinos, i.e. large thermal velocities, neutrinos do not cluster on small length scales, and this causes increasing suppression of small scale matter power spectrum with increasing fraction of neutrino energy density with respect to the total matter density \cite{Lesgourgues:2014zoa}. 
Thus, if we augment CMB anisotropy data with data coming from Large Scale Structure (LSS), CMB lensing (the weak lensing of CMB photons due to LSS) measurements, etc., strong bounds on $\sum m_{\nu}$ can be obtained. Even so, it is currently only possible to get an upper bound on $\sum m_{\nu}$ from cosmological data alone.
Let us define the mass of the lightest neutrino mass eigenstate to be $m_0$. For normal hierarchy, $m_0 = m_1$, whereas for inverted hierarchy, $m_0 = m_3$. In terms of $m_0$, the sum of the neutrino masses can be defined as,
\begin{equation}\label{eq3}
\sum m_{\nu} = m_0 + \sqrt{\Delta m_{21}^2 + m_0^2} + \sqrt{\Delta m_{31}^2+ m_0^2} ~~~~~~~~~~~~~~~~~~~~~~~~(\textrm{NH}),
\end{equation}
and
\begin{equation} \label{eq4}
\sum m_{\nu} = m_0 + \sqrt{|\Delta m_{32}^2| + m_0^2} + \sqrt{|\Delta m_{32}^2|- \Delta m_{21}^2 + m_0^2} ~~~~~~~~~~~~~(\textrm{IH}).
\end{equation}
Putting $m_0 = 0$, one can obtain the minimum neutrino mass sums allowed by the two possible hierarchies, and these are $\sum m_{\nu} = 0.05885^{+0.00045}_{-0.00044}$ eV (1$\sigma$)(NH) and $\sum m_{\nu} = 0.100^{+0.00070}_{-0.00067}$ eV (1$\sigma$)(IH). Assuming that the normal hierarchy is the true one, the way cosmology can help is by constraining the $\sum m_{\nu}$ below the minimum mass required by inverted hierarchy with reasonable statistical significance, e.g., a determination of $\sum m_{\nu} = 0.058\pm 0.008$ eV (1$\sigma$) will exclude inverted hierarchy at 5$\sigma$ and also provide cosmological evidence for non-zero neutrino masses at 7.25 $\sigma$. Currently, the most recent and strongest bounds on $\sum m_{\nu}$ in the minimal $\Lambda$CDM+$\sum m_{\nu}$ model are around $\sum m_{\nu} <0.12$ eV (95\% C.L.) \cite{Choudhury:2018byy, Aghanim:2018eyx} with CMB and BAO data, whereas some other studies had reported bounds around $\sum m_{\nu} < 0.15$ eV (95\%) or better \cite{Vagnozzi:2017ovm, Palanque-Delabrouille:2015pga,DiValentino:2015wba,Cuesta:2015iho,Huang:2015wrx,Moresco:2016nqq,Giusarma:2016phn,Couchot:2017pvz,Caldwell:2017mqu,Doux:2017tsv,Wang:2017htc,Chen:2017ayg,Upadhye:2017hdl,Salvati:2017rsn,Nunes:2017xon,Zennaro:2017qnp,Wang:2018lun, Choudhury:2018adz, Giusarma:2018jei,Loureiro:2018pdz}. The strongest bounds are very close to the minimum $\sum m_{\nu}$ required for inverted hierarchy and thus it seems that inverted hierarchy is starting to get under pressure from cosmological data. These bounds are obtained with the assumption that all the three neutrino masses are equal ($m_i = \sum m_{\nu}/3$ for $i=1,2,3$), an approximation we denote as degenerate hierarchy (DH). 
There are also studies covering interesting aspects of measurement of neutrino hierarchy from cosmology \cite{Hannestad:2016fog, Vagnozzi:2017ovm,Xu:2016ddc,Gerbino:2016ehw,Simpson:2017qvj,Schwetz:2017fey,Long:2017dru,Gariazzo:2018pei, Heavens:2018adv,deSalas:2018bym,Loureiro:2018pdz,Mahony:2019fyb}. The current cosmological data, however, is not yet sensitive enough to make the distinction between the two hierarchies on a level that can be considered statistically significant, but there is a small preference for normal hierarchy \cite{Hannestad:2016fog, Vagnozzi:2017ovm}. It is to be noted that the bounds depend on the underlying cosmological model, and any extensions to the minimal $\Lambda\textrm{CDM}+\sum m_{\nu}$ model will usually lead to a more relaxed bound on $\sum m_{\nu}$ \cite{Choudhury:2018byy, Vagnozzi:2017ovm, Wang:2017htc, Chen:2017ayg,Hannestad:2005gj,Joudaki:2012fx,Yang:2017amu,Lorenz:2017fgo,Sutherland:2018ghu,Sahlen:2018cku,DiValentino:2017zyq}. However, it can happen that the neutrino mass bound improves when the extension to $\Lambda$CDM is done in such a way that the allowed parameter space of the new parameters prefers neutrino masses which are smaller than what we get in $\Lambda$CDM. In fact this is the case when one incorporates dynamical dark energy in the cosmological model but restricts the parameter space to non-phantom dark energy only, i.e. neutrino mass bounds in a cosmology with non-phantom dynamical dark energy are stronger than that in $\Lambda\textrm{CDM}$ \cite{Choudhury:2018byy, Choudhury:2018adz, Vagnozzi:2018jhn}. Another such model where neutrino mass bounds are stronger than that in $\Lambda\textrm{CDM}$ is holographic dark energy (HDE) \cite{Zhang:2015uhk,Wang:2016tsz}.
From now on, we shall use the abbreviations DH, NH, and IH for degenerate, normal and inverted hierarchies respectively.
While the degenerate hierarchy approximation has been predominant in the literature, and makes sense when neutrino masses are relatively large compared to the square-root of the mass-squared splittings (i.e. $m_i \gg \sqrt{\Delta m_{ij}^2}$), the cosmological neutrino mass bound is becoming strong enough that it should be replaced by a treatment using either the normal or the inverted hierarchy.
Hence, in this paper we have updated the bounds on the $\sum m_{\nu}$ while explicitly considering three different hierarchies (degenerate, normal and inverted), using latest datasets and likelihoods that are publicly available, for the minimal $\Lambda\textrm{CDM} + \sum m_{\nu}$ and some of its extensions. Except in the case of extension with the tensor-to-scalar ratio ($r$) parameter, all the other extensions studied in this paper includes new parameters which are considerably correlated with the sum of neutrino masses in the datasets considered. Details of the models are given in the next section. The neutrino mass bounds are supposed to relax in most of the extended models, and the difference between the upper limits obtained for the three hierarchies is supposed to diminish as the individual masses become much larger than the square root of mass-squared splittings. But still, our motivation to study these extended models is to see whether the latest datasets can make a difference.
To implement the normal and inverted hierarchy, we use the mean values of the mass squared splittings given in eq.(\ref{eq1}) along with the lightest neutrino mass $m_0$ to define $m_1$, $m_2$, and $m_3$, and use $m_0$ as a free parameter and $\sum m_{\nu}$ as a derived parameter. We ignore the errors in the measurement of the mass-squared splittings from oscillations data since they are small compared to the mean values and incorporating them would have a very small effect on the bounds on $\sum m_{\nu}$. For degenerate hierarchy, we simply have $\sum m_{\nu} = 3 m_0$.
It is to be understood here that the physical parameter that cosmological data is primarily sensitive to is $\sum m_{\nu}$, and not the individual neutrino masses. Even if we consider very optimistic future cosmological data, the same value of $\sum m_{\nu}$ with different neutrino mass hierarchies leads to changes which are smaller than the expected observational errors and modeling uncertainties, i.e. it will be impossible to differentiate individual neutrino masses even with future data \cite{Archidiacono:2020dvx}. Thus, while analysing cosmological data, it makes sense to vary $\sum m_{\nu}$ as the cosmological parameter. However, the same is not true for other experiments such as neutrino oscillations, kinematic measurements from beta decay, etc., and thus using the lightest mass $m_0$ instead of $\sum m_{\nu}$ allows one to be more general in approach, and leaves open the possibility of incorporating non-cosmological datasets for further analysis \cite{Gerbino:2016ehw}. If one uses $\sum m_{\nu}$ as the cosmological parameter, a good way to incorporate DH, NH, and IH is to use the following non-informative flat priors: $\sum m_{\nu} \geq 0$ (DH), $\sum m_{\nu} \geq 0.06$ eV (NH), $\sum m_{\nu} \geq 0.10$ eV (IH), with suitable upper limits. Due to eq. (\ref{eq3}) and (\ref{eq4}), if one uses a flat prior on $m_0$ instead of on $\sum m_{\nu}$, the effective prior on $\sum m_{\nu}$ is not guaranteed to be flat for the NH and IH cases, and requires further inspection. However, as shown in figure 6 of Ref. \cite{Gerbino:2016ehw}, a flat prior on the lightest mass leads to an almost flat prior on $\sum m_{\nu}$, except that the prior probability rises slightly close to the lowest possible $\sum m_{\nu}$ values in each hierarchical scenario. Thus it is expected that the bounds on $\sum m_{\nu}$ coming from the NH and IH scenarios with a flat prior on $m_0$ will be slightly stronger than what we can get by using a flat prior on $\sum m_{\nu}$.
As we shall see, the bounds on $\sum m_{\nu}$ obtained using a flat prior on $m_0$ are very close to the ones obtained with a flat prior on $\sum m_{\nu}$, and thus using $m_0$ as the varying parameter is well motivated.
Another important point is that current cosmological datasets are only able to provide an upper bound to the $\sum m_{\nu}$. This bound changes when one changes the neutrino mass hierarchy model from DH to NH to IH. This happens due to volume effects, i.e. change in the prior on $\sum m_{\nu}$. However, in case future cosmological data is able to detect $\sum m_{\nu}$, it has been shown in \cite{DiValentino:2016foa}, that irrespective of the correct neutrino mass model (i.e. NH or IH), using the unrealistic DH approximation (with the prior $\sum m_{\nu} \geq 0$) will actually lead to the recovery of the correct values of $\sum m_{\nu}$ with only a small reconstruction bias due to the assumption of wrong fitting model. A more recent study \cite{Archidiacono:2020dvx} also made a similar conclusion. Thus, the degenerate hierarchy approximation, while unrealistic, is still useful in cosmology. However, while using DH, we must be wary of any reconstruction bias in the recovered values of $\sum m_{\nu}$ in case it is detected from future data.
We would like to point it out here that the choice for a proper prior on neutrino masses has been a topic of intense debate among the concerned researchers, especially because different prior choices can lead to very different bounds on $\sum m_{\nu}$ and very different inferences on the hierarchy issue. See \cite{Hannestad:2016fog,Gerbino:2016ehw,Vagnozzi:2017ovm,Long:2017dru,Gariazzo:2018pei,Heavens:2018adv,Handley:2018gel,Simpson:2017qvj,Schwetz:2017fey} for discussions on this topic.
Among datasets, for CMB anisotropies we use the most recently released Planck 2018 likelihoods for the data on temperature, E mode polarisation and their cross-correlation. Other than Planck CMB anisotropies, we use latest data from measurements of Planck lensing, CMB B mode, BAO, and SNe Ia luminosity distance.
This paper is structured as follows. In section \ref{sec2} we provide brief details about the cosmological models and datasets used in this paper. In section \ref{sec3} we provide and explain the results of our MCMC analyses. In section \ref{sec5} we also include results from additional analyses using nested sampling and provide a comparison of bayesian evidence between NH and IH, and quantify the evidence against IH closely following \cite{Hannestad:2016fog}. In section \ref{sec4} we conclude.
\section{Methodology: Models and datasets} \label{sec2}
\subsection{Models}\label{sec2.1}
In this work we have performed our analyses using a variety of different cosmological models which we shall describe below. Note that when we label models we use the term $\sum m_\nu$ to refer to the neutrino mass. This is done in order to conform to the standard labelling used in cosmological parameter analyses. In fact all our runs use a flat prior on $m_0$, the mass of the lightest mass state, rather than a flat prior on $\sum m_\nu$.
In total we have investigated the following 7 sets of cosmological models:
\begin{itemize}
\item i) The minimal $\Lambda\textrm{CDM}+\sum m_{\nu}$ model: Below we list the vector of varying parameters in this model.
\begin{equation}
\theta \equiv \left[\omega_c, \omega_b, \Theta_{\textrm{s}}, \tau, n_{\textrm{s}}, \textrm{ln}[10^{10} A_{\textrm{s}}], m_0\right].
\end{equation}
Here the first six parameters correspond to the $\Lambda\textrm{CDM}$ model. $\omega_c = \Omega_c h^2$ and $\omega_b = \Omega_b h^2$ are the present cold dark matter and baryon energy densities respectively. $\Theta_{\textrm{s}}$ is the ratio between sound horizon $r_s$ and angular diameter distance $D_{\textrm{A}}$ at the time of photon decoupling. $\tau$ is the optical depth to re-ionization of the universe at late times. $n_{\textrm{s}}$ and $A_{\textrm{s}}$, on the other hand, relate to early universe cosmology. They are the power-law spectral index and power of the primordial scalar perturbations respectively, evaluated at the pivot scale of $k_{*} = 0.05h$ Mpc$^{-1}$. As defined in the previous section, the seventh parameter, $m_0$ is the mass of the lightest neutrino and it is the parameter of primary concern in this paper.
\item ii) The $\Lambda\textrm{CDM}+\sum m_{\nu}+r$ model: Apart from the main 7 parameters, in this model we also include the evolution of tensor perturbations in the analysis along with scalar perturbations, and add an additional free parameter $r$, which is the tensor-to-scalar ratio evaluated at the same pivot scale as $n_s$ and $A_s$.
\item iii) The $w\textrm{CDM}+\sum m_{\nu}$ model: Here, instead of a cosmological constant with a dark energy equation of state (DE EoS hereafter) fixed at $w(z) = -1$, we opt for a DE EoS $w$ that varies as a free parameter but is constant in time (i.e. $w$ can assume different values, but the value is fixed throughout the evolution history of the universe). Here $z$ denotes the cosmological redshift ($z = 1/a - 1$, where $a$ is the scale factor of the FRW metric). Here, as well as in all models including non-$\Lambda$ dark energy, we use the PPF prescription \cite{Hu:2007pj} for incorporating dark energy perturbations.
\item iv) The $w_0 w_a \textrm{CDM}+\sum m_{\nu}$ model: In this case we incorporate a dynamically varying DE EoS, i.e. $w(z)$ also varies with time. We parametrize the EoS with the $w_0-w_a$ approach. This is the Chevallier-Polarski-Linder (CPL) parametrization \cite{Chevallier:2000qy,Linder:2002et}:
\begin{equation} \label{eq5}
w(z) = w_0 + w_a (1-a) = w_0 + w_a \frac{z}{1+z}.
\end{equation}
We hereafter may simply refer to this model as DDE.
\item v) The $w_0 w_a \textrm{CDM}+\sum m_{\nu}$ model with $w(z)\geq -1$: This model has the same parametrization as the previous model, but the dynamical dark energy is forced to stay in the non-phantom range, i.e. $w(z) \geq -1$. This is achieved by noticing that $w(z)$ in eq. (\ref{eq5}) is a monotonic function, that at present day $w(z) = w_0$ (since $a = 1$ is the present value of the scale factor by convention), and that at the very early universe ($a \rightarrow 0$) we had $w(z)\rightarrow w_0 + w_a$. It thus suffices to have the following hard priors applied on the CPL parameters to keep them from crossing the phantom barrier \cite{Choudhury:2018byy,Choudhury:2018adz, Vagnozzi:2018jhn}:
\begin{equation}
w_0 \geq -1;~~~~~~~~~~~~~~~~w_0+w_a \geq -1
\end{equation}
We hereafter may simply refer to this model as NPDDE.
\item vi) The $\Lambda \textrm{CDM} + \sum m_{\nu} + A_{\textrm{Lens}}$ model: In this extended model we include $A_{\textrm{Lens}}$, which is the scaling of the lensing amplitude. In a particular model, the theoretical prediction for the gravitational potential (which generates the weak lensing of the CMB) corresponds to $A_{\textrm{Lens}} = 1$. When $A_{\textrm{Lens}}$ is varied, the weak lensing is uncoupled from the primary anisotropies which cause it, and then scaled by the value of $A_{\textrm{Lens}}$ \cite{Calabrese:2008rt}. $A_{\textrm{Lens}}$ acts as a consistency-check parameter. In a particular model, if the data prefer $A_{\textrm{Lens}} > 1$, it means they prefer more smoothing of the acoustic peaks in the power spectra (typically caused by lensing) than there theoretically should be. The physical reason for this extra smoothing (if it cannot be accounted for by statistical fluctuations in the data) may not be extra lensing but may be any new effect that mimics lensing \cite{Aghanim:2018eyx}. There is a well-known $A_{\textrm{Lens}}$ tension in the Planck high-$l$ data, but not in its measurements of lensing. In the $\Lambda\textrm{CDM}+A_{\rm Lens}$ model, with Planck 2018 TT+TE+EE+lowE data, one finds that the constraint $A_{\textrm{Lens}}=1.180\pm0.065$ (68\%) \cite{Aghanim:2018eyx} is more than 2$\sigma$ away from $A_{\rm Lens} = 1$. Resolving the $A_{\textrm{Lens}}$ anomaly with a statistical significance of 5$\sigma$ or more requires more precise measurements of the E and B mode polarization of the CMB on small scales, which will be possible with future CMB experiments like LiteBIRD \cite{Matsumura:2013aja} (space based) and stage-III ground-based experiments, provided the accurate measurement of $\tau$ from Planck is included. Optimistic stage-IV CMB experiments can even provide a resolution at 10$\sigma$, also providing significant information on the scale dependence of $A_{\textrm{Lens}}$, if any. See \cite{Renzi:2017cbg} for details.
\item vii) The $\Lambda \textrm{CDM} + \sum m_{\nu} + \Omega_k$ model: Here we move away from a flat universe to one that can be curved. The curvature of the universe is parametrized by $\Omega_k$, the curvature energy density, which we allow to vary freely in this model.
\end{itemize}
We use the publicly available Markov chain Monte Carlo package CosmoMC \cite{Lewis:2002ah} (which uses the Boltzmann solver CAMB \cite{Lewis:1999bs}) to perform a Bayesian analysis of cosmological datasets and derive constraints on $\sum m_{\nu}$ and other cosmological parameters. We use the Gelman--Rubin statistic \cite{doi:10.1080/10618600.1998.10474787} to estimate the convergence of the chains; all our chains achieved the convergence criterion of $R-1 < 0.01$. We use flat priors on all the parameters that are varied in a particular model. The priors are listed in table \ref{tab1}.
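The convergence criterion quoted above can be illustrated with a minimal sketch of the Gelman--Rubin $R$ statistic for a single parameter; this is a simplified textbook version with synthetic chains, and CosmoMC's internal implementation differs in detail:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R for one parameter.

    chains: 2-D array of shape (n_chains, n_samples), each row one MCMC
    chain for the same parameter.  Convergence is typically declared
    when R - 1 falls below a small threshold (0.01 in the text).
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # B: between-chain variance; W: mean within-chain variance.
    B = n * chain_means.var(ddof=1)
    W = chains.var(axis=1, ddof=1).mean()
    var_est = (n - 1) / n * W + B / n  # pooled posterior-variance estimate
    return np.sqrt(var_est / W)

# Four well-mixed chains drawn from the same distribution: R is close to 1.
rng = np.random.default_rng(0)
chains = rng.normal(0.0, 1.0, size=(4, 20000))
R = gelman_rubin(chains)
assert R - 1 < 0.01
```

For poorly mixed chains (e.g. chains stuck at different means), $R$ exceeds 1 noticeably, which is why $R-1<0.01$ is a stringent convergence requirement.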
We also calculate Bayesian evidences, for the purpose of Bayesian model comparison between the NH and IH cases in the above cosmological models, using the publicly available package CosmoChord \cite{Handley}, an extension to CosmoMC that uses PolyChord \cite{Handley:2015fda,10.1093/mnras/stv1911} for nested sampling. We use sufficiently broad priors on the cosmological parameters and 2000 live points for each CosmoChord run to keep the error on the estimated evidence small. The methodology for quantifying the evidence against IH closely follows \cite{Hannestad:2016fog}. The results are given in section \ref{sec5}.
\begin{table}
\begin{center}
\begin{tabular}{c c}
\hline
Parameter & Prior\\
\hline
$\omega_c$ & [0.005, 0.1]\\
$\omega_b$ & [0.001, 0.99]\\
$\Theta_{\rm s}$ & [0.5, 10]\\
$\tau$ & [0.01, 0.8]\\
$n_{\textrm{s}}$ & [0.8, 1.2]\\
$\textrm{ln}[10^{10}A_{\textrm{s}}]$ & [2, 4]\\
$m_0$ (eV) & [0, 1]\\
$r$ & [0, 1] \\
$w$ & [-3, -0.33] \\
$w_0$ & [-3, -0.33]\\
$w_a$ & [-2, 2]\\
$\Omega_k$ & [-0.3, 0.3] \\
$A_{\textrm{Lens}}$ & [0, 3]\\
\hline
\end{tabular}
\end{center}
\caption{\label{tab1} Flat priors on the main cosmological parameters constrained in this paper in the analyses with CosmoMC.}
\end{table}
\subsection{Datasets}\label{sec2.2}
~~~~~\textbf{CMB: Planck 2018.} We use the high-$l$ (30 $\leq$ $l$ $\leq$ 2508) and low-$l$ (2 $\leq$ $l$ $\leq$ 29) CMB TT likelihoods, along with the high-$l$ E-mode polarization and temperature-polarization cross-correlation likelihoods, from the recent Planck 2018 public release \cite{Aghanim:2019ame}; we call this combination ``TTTEEE". We also use the Planck low-$l$ E-mode polarization data, which we refer to in the text as ``lowE."
\textbf{CMB: Planck 2018 lensing.} While the CMB anisotropy power spectra are determined from 2-point correlation functions (TT, TE, EE), the power spectrum of the lensing potential is proportional to 4-point correlation functions such as TTTT, TTEB, and so on \cite{Aghanim:2018eyx}. We use it in our analyses as an additional CMB probe of neutrino properties, since the lensing of CMB photons is produced by the gravitational potential of large scale structure, which in turn is greatly affected by free-streaming massive neutrinos. We will refer to this dataset simply as ``lensing'' from now on.
\textbf{CMB: BICEP2/Keck array data.} While running an MCMC analysis for the $\Lambda\textrm{CDM}+\sum m_{\nu}+r$ model, we also use the latest publicly available data (taken up to and including 2015) on the CMB BB spectra from the BICEP2/Keck collaboration (spanning the range: $20 < l < 330$) \cite{Ade:2018gkx}. This dataset is referred to as ``BK15" in the paper.
\textbf{Baryon acoustic oscillation (BAO) measurements.} In this paper we use the latest measurements of the BAO signal from different galaxy surveys: SDSS-III BOSS DR12 (galaxy samples at the effective redshifts $z_{\textrm{eff}} =$ 0.38, 0.51, and 0.61) \cite{Alam:2016hwk}, the DR7 Main Galaxy Sample (MGS) at $z_{\textrm{eff}} = 0.15$ \cite{Ross:2014qpa}, and the Six-degree-Field Galaxy Survey (6dFGS) at $z_{\textrm{eff}} = 0.106$ \cite{Beutler:2011hx}. We refer to this combined dataset simply as ``BAO".
All the CMB and BAO data together constitute our ``base'' dataset: \begin{center}{\textbf{Base} $\equiv$ \textbf{Planck 2018 TTTEEE + lowE + lensing + BAO.}} \end{center}
For the $\Lambda\textrm{CDM}+\sum m_{\nu}+r$ model, the ``base" dataset shall also include BK15.
\textbf{Supernovae luminosity distance measurements.} We use the most recent Type-Ia supernovae (SNe Ia) luminosity distance measurements from the Pantheon sample \cite{Scolnic:2017caz}, which consists of distance information for 1048 SNe Ia ($0.01<z<2.3$), the largest such compilation to date. Of the 1048, 279 are from the Pan-STARRS1 (PS1) Medium Deep Survey ($0.03 < z < 0.68$), and the rest are from SDSS, SNLS, various low-$z$ samples, and HST samples. We call this dataset ``SNe".
To understand how the neutrino mass affects the cosmological observables measured by the above datasets and how we arrive at strong constraints on the same using these datasets, we refer the reader to \cite{Lattanzi:2017ubx}.
\section{Results}\label{sec3}
In this section we provide the results of our analyses of the bounds on neutrino masses, considering the three different hierarchies (degenerate, normal, and inverted). In section \ref{sec3.1} we discuss the results in the minimal $\Lambda\textrm{CDM}+\sum m_{\nu}$ model. The results in the extended models are discussed in section \ref{sec3.2}. Details about models and datasets are given in sections \ref{sec2.1} and \ref{sec2.2} respectively. All the marginalized limits are given at 68\% C.L. (1$\sigma$), whereas in cases where only upper or lower bounds are available, the bounds are given at 95\% C.L. (2$\sigma$). The main results are contained in tables \ref{tab2}-\ref{tab5}.
\subsection{Results in the minimal $\Lambda\textrm{CDM} + \sum m_{\nu}$ model}
\label{sec3.1}
In this subsection we provide results of our analyses in the $\Lambda\textrm{CDM} + \sum m_{\nu}$ model, considering three different hierarchies (degenerate, normal, and inverted) for two dataset combinations, namely Base and Base+SNe, where Base $\equiv$ Planck 2018 TTTEEE+lowE+lensing+BAO. The main results are contained in table \ref{tab2} which provides confidence limits on cosmological parameters for the Base and Base+SNe combinations.
\begin{table*}\centering
\ra{1.3}
\resizebox{\textwidth}{!}{\begin{tabular}{@{}rrrrcrrr@{}}\toprule
& \multicolumn{3}{c}{Base} & \phantom{abc} & \multicolumn{3}{c}{Base+SNe}\\
\cmidrule{2-4} \cmidrule{6-8}
& DH & NH & IH && DH & NH & IH\\ \midrule
$\Lambda\textrm{CDM}+\sum m_{\nu}$\\
$\omega_c$ & $0.1194\pm0.0009$ & $0.1192\pm 0.0009$ & $0.1191\pm 0.0009$ && $0.1193\pm 0.0009$ & $0.1191\pm 0.0009$ & $0.1189\pm 0.0009$\\
$\omega_b$ & $0.02242\pm 0.00013$ & $0.02242^{+0.00013}_{-0.00014} $ & $0.02243\pm 0.00013$&& $0.02243\pm 0.00013$& $0.02244\pm 0.00013$ & $0.02244\pm 0.00013$ \\
$\Theta_{\textrm{s}}$ & $1.04100\pm 0.00029$ & $1.04100\pm 0.00029$ & $1.04100\pm 0.00029$&& $1.04102\pm 0.00029$ & $1.04103\pm 0.00029$ & $1.04103\pm 0.00029$\\
$\tau$ & $0.0554^{+0.0068}_{-0.0076}$ & $0.0569^{+0.0066}_{-0.0076}$& $0.0585^{+0.0069}_{-0.0076}$&& $0.0556\pm0.0071$ &$0.0573^{+0.0069}_{-0.0076}$ & $0.0588^{+0.0068}_{-0.0077}$\\
$n_{\textrm{s}}$ & $0.9666\pm 0.0036$ & $0.9668\pm 0.0037$&$0.9671\pm 0.0037$&& $0.9669\pm 0.0036$& $0.9673\pm 0.0036$ & $0.9675\pm 0.0037$\\
$\textrm{ln}[10^{10}A_{\textrm{s}}]$ & $3.048^{+0.014}_{-0.015}$ & $3.051^{+0.014}_{-0.015}$ & $3.053 \pm 0.015$&& $3.046\pm0.014$ & $3.049\pm 0.014$ & $3.052^{+0.014}_{-0.015}$ \\
$m_0$ (eV) & $<0.040$ & $<0.040$& $<0.042$&& $<0.038$ & $<0.038$& $<0.039$\\
$\sum m_{\nu}$ (eV) & $<0.12$& $<0.15$& $<0.17$ && $<0.11$ & $<0.14$ & $<0.16$\\
$H_0$ (km/s/Mpc) & $67.81^{+0.54}_{-0.46}$ & $67.50^{+0.49}_{-0.44}$& $67.22\pm0.45$&&$67.89^{+0.52}_{-0.45}$& $67.59\pm0.44$ & $67.33\pm 0.43$\\
$\sigma_8$ & $0.814^{+0.010}_{-0.007}$ & $0.806^{+0.009}_{-0.006}$& $0.799^{+0.008}_{-0.006}$&& $0.815^{+0.010}_{-0.007}$ & $0.806^{+0.008}_{-0.006}$ & $0.799^{+0.008}_{-0.006}$\\
$S_8$ & $0.827\pm0.011$ & $0.823\pm0.011$& $0.820\pm0.011$&& $0.826\pm0.011$ & $0.822\pm0.011$ & $0.818\pm0.011$\\
\midrule
$\Delta \chi^2 = \chi^2 - \chi^2_{IH}$ & $-2.89$ & $-0.95$ & 0&& $-2.73$ & $-1.27$ & $0$\\
\bottomrule
\end{tabular}}
\caption{ Constraints on the cosmological parameters in the minimal $\Lambda\textrm{CDM}+\sum m_{\nu}$ model considering three different hierarchies (degenerate, normal, and inverted) with the Base and Base+SNe datasets, where Base $\equiv$ Planck 2018 TTTEEE+lowE+lensing+BAO. Here $m_0$ is the mass of the lightest neutrino in a particular hierarchy and a freely varying parameter in the model, whereas $\sum m_{\nu}$, $H_0$, $\sigma_8$, and $S_8$ are derived parameters. Marginalized constraints are given at 1$\sigma$ whereas upper or lower bounds are given at 2$\sigma$. The $\chi^2$ differences are calculated at best-fit points. Details about models and datasets are given in section \ref{sec2}.}\label{tab2}
\end{table*}
In figure \ref{fig:1} we depict the 1-D posterior distributions of $m_0$ (the mass of the lightest neutrino in a hierarchy) and $\sum m_{\nu}$ in the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model for the Base and Base+SNe datasets, considering the different hierarchies. With the Base data we recover the following 95\% bound on the mass sum: $\sum m_{\nu} < 0.12$ eV, which is the same as the bound of $\sum m_{\nu} < 0.12$ eV quoted by the Planck 2018 collaboration \cite{Aghanim:2018eyx} with the same data. The Planck 2015 result with similar data was $\sum m_{\nu} < 0.17$ eV (95\%, Planck 2015 TT,TE,EE+lowP+BAO). The main reason for this improvement with Planck 2018 is the improved measurement of $\tau$. The Planck 2015 TT,TE,EE+lowP likelihoods produced a bound of $\tau = 0.079\pm 0.017$ (68\%) in the base $\Lambda\textrm{CDM}$ model \cite{Ade:2015xua}, which improved to $\tau = 0.0544^{+0.0070}_{-0.0081}$ (68\%) with Planck 2018 TT,TE,EE+lowE \cite{Aghanim:2018eyx}. The parameters $\tau$ and $\sum m_{\nu}$ are strongly correlated in the Planck TT data and high-$l$ polarization data \cite{Choudhury:2018byy}, and this degeneracy can be broken through a better measurement of $\tau$ from the low-$l$ polarization data. Since an increase in $\sum m_{\nu}$ suppresses the matter power spectrum, which in turn reduces the gravitational lensing of the CMB photons, an increase in $\sum m_{\nu}$ leads to a decrease in the smearing of the CMB acoustic peaks (smearing that is caused by lensing). This effect of increasing neutrino masses, however, can be countered by increasing $\tau$, which exponentially suppresses power in the CMB anisotropy spectra (with other parameters kept fixed).
However, the effect of reionization is also imprinted in the low-$l$ polarization data (TE, EE, BB) in the form of a ``reionization bump", whose amplitude is proportional to $\tau^2$ in the EE and BB spectra and to $\tau$ in the TE spectrum, and this change cannot be compensated by varying other parameters in the model \cite{Reichardt2016}. Hence the improved removal of systematics in the low-$l$ polarization data with Planck 2018 helps in breaking the degeneracy between $\tau$ and $\sum m_{\nu}$, which leads to stronger upper bounds on $\sum m_{\nu}$. Beyond $\tau$, Planck 2018 also largely corrects various systematic effects previously present in the high-$l$ polarization spectra of Planck 2015, and that also contributes towards a better bound on $\sum m_{\nu}$.
\begin{figure}[tbp]
\centering
\includegraphics[width=.4963\linewidth]{m0_base_mnu_base.pdf}
\hfill
\includegraphics[width=.4963\linewidth]{mnu_base_mnu_base.pdf}
\caption{\label{fig:1}Comparison of 1-D marginalized posterior distributions for $m_0$ (eV) and $\sum m_{\nu}$ (eV) for the Base dataset in the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model for the three hierarchies. The two dashed vertical lines, at $\sum m_{\nu} = 0.06$ and 0.10 eV, represent the minimum mass sums required for the normal and inverted hierarchies respectively. }
\end{figure}
In our Base dataset combination, BAO is another effective tool for constraining neutrino masses: BAO data are useful in breaking the strong anti-correlation between the Hubble constant $H_0$ and $\sum m_{\nu}$ present in the Planck data. The comoving distance to the last scattering surface in a flat $\Lambda\textrm{CDM}+\sum m_{\nu}$ universe is defined as $\chi(z_{dec}) = \int^{z_{dec}}_0 dz/H(z)$, where $z_{dec}$ is the redshift of photon decoupling, and $H(z) = \sqrt{\omega_{\gamma}(1+z)^4 + (\omega_c + \omega_b)(1+z)^3 + \omega_{\Lambda}+\rho_{\nu}(z)h^2/\rho_{cr,0}}$ (note: $\omega_i = \Omega_i h^2$, and $i\equiv \gamma, c, b, \Lambda$ with $\gamma\equiv$ photons, $c\equiv$ CDM, $b\equiv$ baryons, $\Lambda\equiv$ cosmological constant; $\rho_{\nu}(z)$ is the neutrino energy density at redshift $z$, and $\rho_{cr,0} = 3H_0^2/8\pi G$ is the critical density today). In a flat universe $\Omega_\Lambda = 1 - \Omega_{\gamma} - (\Omega_c + \Omega_b) - \Omega_{\nu}$, and at late times $\rho_{\nu}(z)h^2/\rho_{cr,0} = \Omega_{\nu}h^2 \propto \sum m_{\nu}$. Now, given that early-universe physics remains unchanged, $\chi(z_{dec})$ is very well constrained through $\Theta_s$, the parameter best constrained by the Planck CMB anisotropies. On the other hand, $\Omega_{\gamma}$ and $(\omega_c + \omega_b)$ are also well constrained by the data. Thus any change in $\chi(z_{dec})$ due to an increase in $H_0$ (or $h = H_0/100$ km/sec/Mpc) has to be compensated by a decrease in $\sum m_{\nu}$ and vice versa, and hence there is a large anti-correlation between $H_0$ and $\sum m_{\nu}$. Thus in the $\Lambda\textrm{CDM} + \sum m_{\nu}$ model, lower values of $H_0$ correspond to higher values of $\sum m_{\nu}$, and higher values of $H_0$ to lower $\sum m_{\nu}$. BAO data break this degeneracy partially by rejecting the low $H_0$ values preferred by the Planck data, as is well studied in the previous literature \cite{Hou:2012xq,Vagnozzi:2017ovm,Choudhury:2018byy}.
For instance, in $\Lambda\textrm{CDM} + \sum m_{\nu}$, Planck 2015 TT,TE,EE+lowP prefers a value of $H_0 = 66.17^{+1.96}_{-0.81}$ km/sec/Mpc, whereas TT,TE,EE+lowP+BAO prefers $H_0=67.67^{+0.54}_{-0.51}$ km/sec/Mpc \cite{Choudhury:2018byy}.
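The geometric degeneracy described above can be made concrete with a rough numerical sketch. Here neutrinos are treated as non-relativistic matter at late times ($\omega_{\nu} \simeq \sum m_{\nu}/93.14$ eV), and all parameter values are illustrative, not taken from our chains:

```python
import numpy as np

# Flat-LambdaCDM sketch of the H0 - sum(m_nu) anti-correlation:
# chi(z_dec) = integral_0^z_dec dz/H(z) is held fixed while sum(m_nu)
# is raised, and we solve for the compensating H0.
OMEGA_G = 2.47e-5     # omega_gamma = Omega_gamma h^2 (photons)
OMEGA_CB = 0.1418     # omega_c + omega_b, held fixed
Z_DEC = 1090.0        # redshift of photon decoupling

def chi(H0, sum_mnu_eV):
    """Comoving distance to last scattering, in units of c (Mpc s/km)."""
    h = H0 / 100.0
    omega_nu = sum_mnu_eV / 93.14
    z = np.linspace(0.0, Z_DEC, 100000)
    # H(z) in km/s/Mpc; flat universe, Omega_Lambda fixed by closure.
    Hz = 100.0 * np.sqrt(OMEGA_G * (1 + z)**4
                         + (OMEGA_CB + omega_nu) * (1 + z)**3
                         + h**2 - OMEGA_G - OMEGA_CB - omega_nu)
    f = 1.0 / Hz
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoid rule

# Fix chi at (H0 = 67.8, sum m_nu = 0.06 eV), raise sum(m_nu) to 0.30 eV,
# and bisect for the H0 that keeps chi unchanged.
target = chi(67.8, 0.06)
lo, hi = 50.0, 80.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if chi(mid, 0.30) > target:   # chi decreases monotonically with H0
        lo = mid
    else:
        hi = mid
print(mid)  # < 67.8: a larger mass sum forces a smaller H0
```

The solved $H_0$ comes out below the starting value, reproducing the sign (though not the exact magnitude) of the anti-correlation discussed in the text.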
Apart from the CMB power spectra and BAO, our Base dataset also contains the Planck 2018 lensing likelihoods, which, as per the Planck 2018 collaboration \cite{Aghanim:2018eyx}, prefer a slightly increased lensing power spectrum amplitude compared to Planck 2015 and thus lead to slightly tighter neutrino mass constraints, contrary to the Planck 2015 lensing likelihoods, which used to relax the constraints. In our case, without the Planck 2018 lensing likelihoods we obtain a bound of $\sum m_{\nu} < 0.13$ eV (95\%, Planck 2018 TT,TE,EE+lowE+BAO); i.e., Planck 2018 lensing has only a small effect in this $\Lambda\textrm{CDM} + \sum m_{\nu}$ model when used along with the Planck 2018 CMB anisotropies and BAO.
\begin{figure}[tbp]
\centering
\includegraphics[width=.3\linewidth]{h0_base_mnu_base.pdf}
\hfill
\includegraphics[width=.3\linewidth]{sigma8_base_mnu_base.pdf}
\hfill
\includegraphics[width=.3\linewidth]{s8_base_mnu_base.pdf}
\caption{\label{fig:2}Comparison of 1-D marginalized posterior distributions for $H_0$ (km/sec/Mpc), $\sigma_8$, and $S_8$ for the Base dataset in the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model for the three hierarchies.}
\end{figure}
We see that the bounds on $\sum m_{\nu}$ differ significantly across the different hierarchies: with the Base dataset we get $\sum m_{\nu} < 0.15$ eV (NH) and $\sum m_{\nu} < 0.17$ eV (IH) at 95\% C.L. As previously stated, we use the mass of the lightest neutrino, $m_0$, as the varying parameter and then use the mass-squared splittings given in eq. \ref{eq1} to determine the other masses in a particular hierarchy; this implicitly puts a prior on $\sum m_{\nu}$, i.e., $\sum m_{\nu} \geq 0.06$ eV for the normal hierarchy and $\sum m_{\nu} \geq 0.10$ eV for the inverted hierarchy. The reason the bound on $\sum m_{\nu}$ differs significantly across the three hierarchies is possibly these priors imposed on $\sum m_{\nu}$ (see figure \ref{fig:1} for a visualization). We cross-check this by running MCMC chains with the degenerate hierarchy, but with the priors: i) $\sum m_{\nu} \geq 0.06$ eV, and ii) $\sum m_{\nu} \geq 0.10$ eV. We find a 95\% upper bound of $\sum m_{\nu} < 0.15$ eV in the first case, and $\sum m_{\nu} < 0.18$ eV in the second. It seems evident that the priors do play an important role in relaxing the bounds. However, it is to be noted that the method used in this paper for the normal and inverted hierarchies (i.e., with the lightest mass $m_0$ and the mass-squared splittings from oscillation experiments) produces bounds which are slightly stronger than the degenerate case with the priors $\sum m_{\nu} \geq 0.06$ eV and $\sum m_{\nu} \geq 0.10$ eV respectively; i.e., the two methods are not completely equivalent. The difference is clearer with the Planck 2018 TT,TE,EE+lowE data, with which the NH case leads to a 95\% bound of $\sum m_{\nu} < 0.29$ eV, whereas the DH case with the $\sum m_{\nu} \geq 0.06$ eV prior gives a bound of $\sum m_{\nu} < 0.32$ eV.
For the same data, the IH case produces a 95\% bound of $\sum m_{\nu} < 0.33$ eV, whereas the DH case with the $\sum m_{\nu} \geq 0.10$ eV prior yields $\sum m_{\nu} < 0.35$ eV. As discussed in the introduction, for NH and IH, with the lightest-mass ($m_0$) parametrization and a uniform prior on $m_0$, the implicit prior on $\sum m_{\nu}$ is not flat (i.e., it rises at low values of $\sum m_{\nu}$). This non-flat prior causes the low values of $\sum m_{\nu}$ to be more favoured compared to the case of a flat prior on $\sum m_{\nu}$, and this can explain the slightly tighter bounds on $\sum m_{\nu}$ with the $m_0$ parametrization.
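The hierarchy-dependent lower limits and the shape of the implicit prior discussed above can be reproduced in a few lines. The mass-squared splittings below are representative oscillation values of the order of those in eq. \ref{eq1}, and the IH branch uses a simplified convention, so this is a sketch rather than a reproduction of our parametrization:

```python
import numpy as np

# Representative mass-squared splittings (eV^2); illustrative values.
DM21_SQ = 7.4e-5
DM31_SQ = 2.5e-3

def sum_mnu(m0, hierarchy):
    """Total neutrino mass as a function of the lightest mass m0 (eV)."""
    if hierarchy == "NH":   # m0 = m1 < m2 < m3
        m2 = np.sqrt(m0**2 + DM21_SQ)
        m3 = np.sqrt(m0**2 + DM31_SQ)
        return m0 + m2 + m3
    else:                   # IH: m0 = m3 < m1 < m2 (simplified convention)
        m1 = np.sqrt(m0**2 + DM31_SQ)
        m2 = np.sqrt(m0**2 + DM31_SQ + DM21_SQ)
        return m0 + m1 + m2

# Minimum sums at m0 = 0 reproduce the implicit priors quoted in the text.
print(sum_mnu(0.0, "NH"))  # ~0.06 eV
print(sum_mnu(0.0, "IH"))  # ~0.10 eV

# A flat prior on m0 maps to a non-flat prior on sum(m_nu): the induced
# density ~ 1/(d sum / d m0), which is largest near the minimal sum.
m0 = np.linspace(1e-4, 0.1, 1000)
dsum_dm0 = np.gradient(sum_mnu(m0, "NH"), m0)
assert dsum_dm0[0] < dsum_dm0[-1]  # slope grows toward 3 as masses degenerate
```

Since the slope $d\sum m_{\nu}/dm_0$ grows from $\simeq 1$ at $m_0=0$ toward 3 in the degenerate limit, the induced prior density on $\sum m_{\nu}$ is highest near the minimal sum, consistent with the behaviour described above.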
From table \ref{tab2} we can see an important trend: in the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model with the Base dataset, as we go from degenerate to normal to inverted hierarchy there is a decrease in the preferred values of $H_0$ and $\sigma_8$. This is also visualized in figure \ref{fig:2}. The reason for the decrease in $H_0$ is simply the anti-correlation between $H_0$ and $\sum m_{\nu}$ described above, together with the fact that the normal hierarchy prefers neutrino masses larger than the degenerate case, and the inverted hierarchy prefers masses larger than the normal hierarchy. There is, again, a strong anti-correlation between $\sigma_8$ and $\sum m_{\nu}$: $\sigma_8$ is the amplitude of the linear matter power spectrum at a length scale of $8h^{-1}$ Mpc, and increasing $\sum m_{\nu}$ increases the neutrino energy density, which enhances the suppression of the small-scale matter power spectrum and leads to a lower $\sigma_8$. Thus, from degenerate to normal to inverted hierarchy, the $H_0$ tension between Planck and local measurements increases, whereas the $\sigma_8$ tension \cite{Kitching:2016hvn} between Planck and cosmic shear measurements decreases. This trend remains true for the other datasets we have considered in this model (as can be seen from table \ref{tab2}).
From figure \ref{fig:2} we also observe that the parameter $S_8 = \sigma_8 \sqrt{\Omega_m/0.3}$ also decreases, though not as strongly as $\sigma_8$, since increasing neutrino masses also causes $\Omega_m$ to increase, which compensates for the decreased $\sigma_8$; i.e., $S_8$ is defined such that its correlation with $\sum m_{\nu}$ is small in this model. Again, while $H_0$ and $\sigma_8$ are both strongly anti-correlated with $\sum m_{\nu}$, the magnitudes of the correlation coefficients ($R_{ij} = C_{ij}/\sqrt{C_{ii}C_{jj}}$, where $i$, $j$ label the two parameters and $C$ is the covariance matrix of the cosmological parameters) decrease from degenerate to normal to inverted hierarchy because of the implicit priors on $\sum m_{\nu}$; i.e., the priors help in partially breaking the degeneracies. We find that $R_{H_0,\Sigma m_{\nu}} =-$0.55 (DH), $-$0.47 (NH), $-$0.43 (IH); and $R_{\sigma_8,\Sigma m_{\nu}} =-$0.76 (DH), $-$0.72 (NH), $-$0.67 (IH).
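For concreteness, the correlation-coefficient definition used above amounts to the following; the covariance entries here are toy numbers chosen to mimic (not reproduce) the $H_0$--$\sum m_{\nu}$ anti-correlation:

```python
import numpy as np

# R_ij = C_ij / sqrt(C_ii * C_jj) from a parameter covariance matrix C.
def correlation(C, i, j):
    return C[i, j] / np.sqrt(C[i, i] * C[j, j])

# Toy 2x2 covariance for (H_0, sum m_nu) with a negative cross term.
C = np.array([[0.25, -0.015],
              [-0.015, 0.0016]])
R = correlation(C, 0, 1)
print(round(R, 2))  # -0.75
```

A value near $-1$ means the two parameters trade off almost perfectly along a degeneracy direction, which is why additional data that constrain one of them (here, BAO on $H_0$) tighten the bound on the other.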
Apart from the results with the Base dataset, table \ref{tab2} also provides results with the Base+SNe dataset. In the absence of a varying dark energy equation of state, the SNe data constrain $\Omega_m$ effectively \cite{Scolnic:2017caz}, while the Planck CMB data constrain $\Omega_m h^2$ well. Thus together they can effectively constrain $H_0$, and as found in \cite{Choudhury:2018byy}, the Planck+SNe combination actually prefers $H_0$ values higher than Planck alone; thus the SNe data can help in partially breaking the degeneracy between $H_0$ and $\sum m_{\nu}$. BAO data is, however, much more efficient in breaking the degeneracy. With Base+SNe (note that Base already contains BAO), in the DH case, we find the following 95\% bound on the neutrino mass sum in this $\Lambda\textrm{CDM}+\sum m_{\nu}$ model: $\sum m_{\nu}<0.11$ eV, which is only slightly stronger than the bound without the SNe data.
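The complementarity just described is, at heart, simple arithmetic: with $\Omega_m h^2$ from the CMB and $\Omega_m$ from SNe, $H_0$ follows. A sketch with illustrative numbers (not our best-fit values):

```python
import math

# CMB constrains omega_m = Omega_m h^2; SNe constrain Omega_m; together
# they pin down h and hence H0.  Both inputs below are illustrative.
omega_m = 0.143    # Omega_m h^2 (CMB-like value)
Omega_m = 0.31     # Omega_m (SNe-like value)
h = math.sqrt(omega_m / Omega_m)
H0 = 100.0 * h     # km/s/Mpc
print(round(H0, 1))  # 67.9
```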
\textbf{Akaike information criterion (AIC).} To compare the goodness of fit of different models to the same data, a popular method is to compute the Akaike information criterion (AIC) \cite{1100705} for each model, defined as:
\begin{equation}
\textrm{AIC} = \chi^2_{\textrm{best-fit}} + 2k
\end{equation}
where $k$ is the number of parameters in the model. Comparison with another model is done by computing the difference $\Delta \textrm{AIC} = \Delta \chi^2_{\textrm{best-fit}} + 2\Delta k$; of two models, the one with the lower AIC is preferred. The $2\Delta k$ term penalizes the model with the greater number of parameters, as such a model is usually able to provide a better fit simply because of the larger parameter space available to it. In this work, however, we are interested in comparing the quality of fits between the neutrino mass hierarchies, and since we have the same number of parameters in all the different cases, $2\Delta k = 0$ and $\Delta \textrm{AIC} = \Delta \chi^2_{\textrm{best-fit}}$. We have provided the $\Delta \textrm{AIC} = \chi^2_{\textrm{best-fit}} - \chi^2_{\textrm{IH,best-fit}}$ values for each neutrino hierarchy in table \ref{tab2}. We find that the cosmological datasets slightly prefer the normal hierarchy over the inverted one. In this $\Lambda\textrm{CDM}+\sum m_{\nu}$ model, with the Base data, we find that the NH is preferred over the IH by $\Delta \textrm{AIC} = -0.95$, which is a very mild result.
This shows that the current cosmological data are not sensitive enough to differentiate between the two hierarchies,
and is completely consistent with the findings of e.g. \cite{Hannestad:2016fog,Mahony:2019fyb}, which both find that a formal sensitivity to $\sum m_\nu$ in the 0.01-0.02 eV range is required to guarantee a distinction between the two hierarchies if the true value of $\sum m_\nu$ is sufficiently below 0.1 eV (i.e., if the neutrino masses follow the NH). This is not possible using current data, but will become possible in the near future using high-precision data from e.g.\ EUCLID (see e.g. \cite{Hamann:2012fe,Brinckmann:2018owf,Sprenger:2018tdb,Chudaykin:2019ock} for discussions on this topic). However, if the true value of $\sum m_\nu > 0.1$ eV, we will not be able to tell one hierarchy from the other even with the said sensitivity.
With the other dataset combinations the $\Delta \textrm{AIC}$ also remains mild between NH and IH. The (unphysical) degenerate hierarchy, on the other hand, produces a better fit to the data than both NH and IH for all the dataset combinations we have studied.
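The AIC bookkeeping used above reduces to elementary arithmetic; the parameter count $k$ below is a placeholder, since it cancels in the hierarchy comparison:

```python
# AIC = chi^2_best-fit + 2k, as defined in the text.  With equal k across
# hierarchies, Delta AIC reduces to the Delta chi^2 values of table 2.
def aic(chi2_bestfit, k):
    return chi2_bestfit + 2 * k

k = 7  # placeholder: same number of free parameters in NH and IH
delta_chi2_NH_minus_IH = -0.95  # from table 2, Base dataset
delta_aic = aic(delta_chi2_NH_minus_IH, k) - aic(0.0, k)
print(round(delta_aic, 2))  # -0.95: a very mild preference for NH over IH
```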
\subsection{Results in the extended models}
\label{sec3.2}
In this section we present the results in the extended cosmologies. Table \ref{tab4} contains the constraints on selected cosmological parameters in the extended models with Base and Base+SNe datasets. Marginalized constraints are given at 1$\sigma$ whereas upper or lower bounds are given at 2$\sigma$. Details about models and datasets are given in section \ref{sec2}. Note that the Base data also contains BK15 when we consider the $\Lambda\textrm{CDM}+\sum m_{\nu}+r$ model.
\begin{table*}\centering
\ra{1.3}
\resizebox{\textwidth}{!}{\begin{tabular}{@{}rrrrcrrr@{}}\toprule
& \multicolumn{3}{c}{Base} & \phantom{abc} & \multicolumn{3}{c}{Base+SNe}\\
\cmidrule{2-4} \cmidrule{6-8}
& DH & NH & IH && DH & NH & IH \\ \midrule
$\Lambda\textrm{CDM}+\sum m_{\nu}+r$\\
$r$ & $<0.0626$ & $<0.0632$& $<0.0618$&& $<0.0610$& $<0.0629$& $<0.0624$\\
$m_0$ (eV) & $<0.038$ & $<0.039$& $<0.040$&& $<0.037$& $<0.035$& $<0.038$\\
$\sum m_{\nu}$ (eV) &$<0.11$& $<0.14$ & $<0.17$&& $<0.11$& $<0.13$& $<0.16$\\
$H_0$ (km/s/Mpc) & $67.78^{+0.52}_{-0.46}$& $67.48^{+0.48}_{-0.45}$& $67.21\pm0.44$&& $67.86\pm0.48$& $67.58^{+0.44}_{-0.45}$ & $67.30^{+0.43}_{-0.44}$\\
$\sigma_8$ & $0.815^{+0.010}_{-0.007}$& $0.807^{+0.008}_{-0.006}$& $0.800^{+0.008}_{-0.006}$&& $0.816^{+0.010}_{-0.007}$& $0.808^{+0.008}_{-0.007}$& $0.800^{+0.007}_{-0.006}$ \\
$S_8$ & $0.832\pm0.011$& $0.825^{+0.011}_{-0.010}$& $0.822\pm0.011$&& $0.828\pm0.011$& $0.824\pm0.010$& $0.820\pm0.010$ \\
\midrule
$w\textrm{CDM}+\sum m_{\nu}$\\
$w$ & $ -1.042^{+0.072}_{-0.052}$ & $-1.068^{+0.071}_{-0.052}$ & $-1.089^{+0.070}_{-0.055}$ && $ -1.025^{+0.037}_{-0.033}$& $ -1.037^{+0.036}_{-0.032}$ & $ -1.048\pm0.034$\\
$m_0$ (eV) & $<0.062$ & $<0.066$ & $<0.066$&& $<0.053$& $<0.053$& $<0.054$\\
$\sum m_{\nu}$ (eV) & $<0.19$& $<0.21$ & $<0.23$&& $<0.16$& $<0.18$& $<0.20$\\
$H_0$ (km/s/Mpc) & $68.67^{+1.33}_{-1.59}$& $69.01^{+1.31}_{-1.60}$& $69.24^{+1.38}_{-1.61}$&& $68.33\pm0.82$& $68.31\pm0.82$ & $68.27\pm0.82$\\
$\sigma_8$ & $0.821^{+0.016}_{-0.017}$& $0.820\pm0.016$& $0.819\pm0.016$&& $0.818^{+0.013}_{-0.011}$& $0.814\pm0.012$& $0.810\pm0.012$ \\
$S_8$ & $0.825\pm0.011$& $0.822^{+0.012}_{-0.011}$& $0.822\pm0.011$&& $0.826\pm0.011$& $0.823\pm0.011$& $0.821\pm0.011$ \\
\midrule
$w_0 w_a \textrm{CDM}+\sum m_{\nu}$ \\
$w_0$ & $-0.68^{+0.26}_{-0.14}$ &$-0.68^{+0.26}_{-0.14}$ & $-0.68^{+0.25}_{-0.13}$&& $-0.94^{+0.08}_{-0.09}$& $-0.94\pm0.09$& $-0.93\pm0.09$\\
$w_a$ & $ -1.06^{+0.37}_{-0.79}$ & $<-0.085$& $<-0.164$ && $-0.41^{+0.46}_{-0.29}$& $-0.49^{+0.44}_{-0.30}$& $-0.56^{+0.43}_{-0.32}$\\
$m_0$ (eV) & $<0.083$ & $<0.080$& $<0.083$ && $<0.089$ & $<0.088$& $<0.088$\\
$\sum m_{\nu}$ (eV) & $<0.25$& $<0.26$ & $<0.28$ && $<0.27$& $<0.28$& $<0.29$\\
$H_0$ (km/s/Mpc) & $65.70^{+1.60}_{-2.47}$& $65.78^{+1.61}_{-2.47}$& $65.80^{+1.62}_{-2.43}$&& $68.28\pm0.83$& $68.23\pm0.84$ & $68.23\pm0.82$\\
$\sigma_8$ & $0.795^{+0.018}_{-0.023}$& $0.792^{+0.017}_{-0.023}$& $0.790^{+0.018}_{-0.023}$&& $0.817^{+0.015}_{-0.013}$& $0.813^{+0.014}_{-0.012}$& $0.811^{+0.013}_{-0.012}$ \\
$S_8$ & $0.837\pm0.014$& $0.834^{+0.014}_{-0.013}$ & $0.832\pm0.013$&& $0.827\pm0.012$& $0.826\pm0.012$& $0.824\pm0.012$ \\
\midrule
$w_0 w_a \textrm{CDM}+\sum m_{\nu}$($w(z)\geq -1)$\\
$w_0$ & $<-0.873$ & $<-0.888$& $<-0.900$&& $<-0.937$& $<-0.944$& $<-0.949$\\
$w_a$ & $0.009^{+0.057}_{-0.067}$ &$0.007^{+0.049}_{-0.058}$&$0.007^{+0.044}_{-0.050}$&& $0.028^{+0.034}_{-0.056}$& $0.022^{+0.029}_{-0.047}$ & $0.020^{+0.025}_{-0.043}$\\
$m_0$ (eV) & $<0.032$ & $<0.034$& $<0.035$&& $<0.032$& $<0.033$& $<0.035$\\
$\sum m_{\nu}$ (eV) & $<0.10$& $<0.13$ & $<0.16$&& $<0.09$& $<0.13$& $<0.16$\\
$H_0$ (km/s/Mpc) & $66.64^{+0.97}_{-0.66}$& $66.46^{+0.88}_{-0.62}$& $66.33^{+0.83}_{-0.57}$&& $67.23^{+0.63}_{-0.53}$& $67.01^{+0.57}_{-0.51}$ & $66.81^{+0.54}_{-0.48}$\\
$\sigma_8$ & $0.801^{+0.012}_{-0.010}$& $0.795^{+0.011}_{-0.009}$& $0.789^{+0.010}_{-0.008}$&& $0.807^{+0.010}_{-0.008}$& $0.799^{+0.009}_{-0.008}$& $0.793\pm0.008$ \\
$S_8$ & $0.826\pm0.011$& $0.823\pm0.011$& $0.820\pm0.011$&& $0.824\pm0.010$& $0.821\pm0.011$& $0.817\pm0.011$ \\
\midrule
$\Lambda\textrm{CDM}+\sum m_{\nu}$+$A_{\textrm{Lens}}$\\
$A_{\textrm{Lens}}$ & $1.100^{+0.046}_{-0.056}$ & $1.107^{+0.042}_{-0.055}$& $1.116^{+0.040}_{-0.050}$&& $1.098^{+0.044}_{-0.058}$&$1.106^{+0.042}_{-0.053}$&$1.116^{+0.040}_{-0.050}$ \\
$m_0$ (eV) & $<0.098$ & $<0.094$& $<0.092$&& $<0.094$& $<0.090$& $<0.087$\\
$\sum m_{\nu}$ (eV) & $<0.29$& $<0.29$ & $<0.30$&& $<0.28$& $<0.28$& $<0.29$\\
$H_0$ (km/s/Mpc) & $67.76^{+0.71}_{-0.60}$& $67.66^{+0.66}_{-0.59}$& $67.56^{+0.63}_{-0.53}$&& $67.86^{+0.68}_{-0.57}$& $67.76\pm0.59$ & $67.66^{+0.58}_{-0.51}$\\
$\sigma_8$ & $0.782^{+0.029}_{-0.018}$& $0.777^{+0.025}_{-0.013}$& $0.772^{+0.022}_{-0.012}$&& $0.784^{+0.028}_{-0.017}$&$0.779^{+0.024}_{-0.013}$& $0.773^{+0.021}_{-0.011}$ \\
$S_8$ & $0.793^{+0.023}_{-0.020}$& $0.790^{+0.022}_{-0.017}$& $0.786^{+0.020}_{-0.016}$&& $0.793^{+0.023}_{-0.019}$&$0.790^{+0.021}_{-0.017}$& $0.785^{+0.019}_{-0.016}$ \\
\midrule
$\Lambda\textrm{CDM}+\sum m_{\nu}+\Omega_{\textrm{k}}$\\
$\Omega_{\textrm{k}}$ &$0.0004\pm0.0020$ & $0.0012^{+0.0020}_{-0.0021}$& $0.0019^{+0.0019}_{-0.0021}$&& $0.0004\pm0.0021$& $0.0012\pm0.0020$& $0.0019\pm0.0020$\\
$m_0$ (eV) & $<0.050$ & $<0.051$& $<0.053$&& $<0.044$& $<0.047$& $<0.049$\\
$\sum m_{\nu}$ (eV) & $<0.15$& $<0.17$ & $<0.20$&& $<0.13$& $<0.16$& $<0.19$\\
$H_0$ (km/s/Mpc) & $67.87\pm0.67$& $67.75\pm0.67$& $67.67^{+0.67}_{-0.68}$&& $67.95\pm0.67$& $67.84\pm0.66$ & $67.77\pm0.66$\\
$\sigma_8$ & $0.813^{+0.011}_{-0.008}$& $0.807^{+0.010}_{-0.008}$& $0.801^{+0.009}_{-0.008}$&& $0.814^{+0.010}_{-0.009}$& $0.807^{+0.010}_{-0.008}$& $0.801^{+0.009}_{-0.008}$ \\
$S_8$ & $0.826\pm0.011$& $0.823\pm0.011$& $0.819\pm0.011$ && $0.825\pm0.011$& $0.822\pm0.011$& $0.818\pm0.010$ \\
\bottomrule
\end{tabular}}
\caption{Constraints on selected cosmological parameters in the extended models considering three different hierarchies (degenerate, normal, and inverted) with the Base and Base+SNe datasets. In the $\Lambda\textrm{CDM}+\sum m_{\nu}+ r$ model Base data also includes BK15.}\label{tab4}
\end{table*}
\begin{figure}[tbp]
\centering
\includegraphics[width=.4963\linewidth]{r_mnu_base_r_mnu_base_pan.pdf}
\hfill
\includegraphics[width=.4963\linewidth]{w_mnu_base_w_mnu_base_pan.pdf}
\caption{\label{fig:3} 68\% and 95\% marginalized contours for $r$ vs $\sum m_{\nu}$ in the $\Lambda\textrm{CDM}+\sum m_{\nu}+ r$ model on the left, and for $w$ vs $\sum m_{\nu}$ in the $w\textrm{CDM}+\sum m_{\nu}$ model on the right, using Base and Base+SNe datasets and considering degenerate hierarchy. For the $\Lambda\textrm{CDM}+\sum m_{\nu}+ r$ model Base data also includes BK15 data. }
\end{figure}
\begin{itemize}
\item i) \textbf{Results in the $\Lambda\textrm{CDM}+\sum m_{\nu}+r$ model:} In this model we allow the tensor perturbations to vary as well, along with the scalar ones. The CMB B-mode polarization comes from two sources: i) primordial gravitational waves, and ii) gravitational weak lensing. With the Base data (which here also includes BK15) we get the following 95\% bounds: $\sum m_{\nu} < 0.11$ eV (DH), $\sum m_{\nu} < 0.14$ eV (NH), and $\sum m_{\nu} < 0.17$ eV (IH). We do not see any big changes in the mass bounds compared to $\Lambda\textrm{CDM}+\sum m_{\nu}$. The bounds are, however, slightly tighter in the $\Lambda\textrm{CDM}+\sum m_{\nu}+r$ model with the BK15 data than without, most likely due to the additional lensing information encoded in the B-mode data of BK15, since $r$ and $\sum m_{\nu}$ are not correlated, as can be seen from the left panel of figure \ref{fig:3}.
The effect of the neutrino mass bounds getting slightly tighter with B mode data was previously noticed in \cite{Choudhury:2018byy,Choudhury:2018adz,Choudhury:2018sbz} with BK14 data \cite{Array:2016afx}, predecessor of BK15.
\item ii) \textbf{Results in the $w\textrm{CDM}+\sum m_{\nu}$ model:} In this model we move away from the cosmological constant and consider the DE EoS $w$ as a varying parameter. A well-known degeneracy exists between $w$ and $\sum m_{\nu}$ \cite{Hannestad:2005gj}, through their mutual degeneracy with $\Omega_m$, which considerably relaxes the bounds on $\sum m_{\nu}$ in this model compared to $\Lambda\textrm{CDM}+\sum m_{\nu}$. The anti-correlation between the two parameters is visualized in the right panel of figure \ref{fig:3} for the Base and Base+SNe data. We find correlation coefficients of $R_{w,\sum m_{\nu}} = -$0.57 (DH), $-$0.52 (NH), $-$0.47 (IH) with the Base data, and $R_{w,\sum m_{\nu}} = -$0.45 (DH), $-$0.35 (NH), $-$0.30 (IH) with Base+SNe. The SNe data, when used with the Base data, reduce the magnitude of the anti-correlation through better constraints on $w$. This happens because in the SNe data $w$ and $\Omega_m$ are strongly anti-correlated, whereas in the CMB data $w$ and $\Omega_m$ are strongly correlated (see, for instance, figure 20 of \cite{Scolnic:2017caz}); so including the SNe data with Base leads to a large decrease in the $w$-$\Omega_m$ correlation, which in turn decreases the magnitude of $R_{w,\sum m_{\nu}}$. Also, from figure \ref{fig:3} we see that, due to the anti-correlation, lower values of $w$ prefer higher $\sum m_{\nu}$ and higher values of $w$ prefer lower $\sum m_{\nu}$. The Base+SNe combination constrains $w$ better than Base, such that the lowest values of $w$ allowed by the Base data are rejected, and this leads to stronger bounds on $\sum m_{\nu}$ with Base+SNe, as can be seen from table \ref{tab4}. The SNe data also reject high $w$ values, but those regions prefer low $\sum m_{\nu}$ values, and hence rejecting the high-$w$ region does not help in strengthening the upper bound on $\sum m_{\nu}$.
There is also a strong degeneracy between $w$ and $H_0$, which exists because lower values of $w$ correspond to a higher present-day expansion rate and hence higher $H_0$ values. This happens since changing $w$ can change the comoving distance to the last scattering surface, $\chi(z_{dec})= \int^{z_{dec}}_0 dz/H(z)$, which is well-constrained by the CMB data, and thus any change in $\chi(z_{dec})$ needs to be compensated by another parameter. We have,
\begin{equation}
H(z) = \sqrt{\omega_{\gamma}(1+z)^4 + (\omega_c + \omega_b)(1+z)^3 + \Omega_{DE}(z)h^2+\rho_{\nu}(z)h^2/\rho_{cr,0}}.
\end{equation}
Here $\Omega_{DE}(z) = \Omega_{DE}(0) (1+z)^{3(1+w)}$ is the energy density of dark energy with EoS $w$, and at late times, when dark energy is a dominant component, decreasing the value of $w$ leads to a decrease in $H(z)$, which can be compensated by increasing either $h$ (or $H_0$) or $\sum m_{\nu}$ or both. This is why, in the $w\textrm{CDM}+\sum m_{\nu}$ model with Base data, we find that not only are $w$ and $H_0$ anti-correlated ($R_{w,H_0} =-0.91$ (DH)), but the correlation between $H_0$ and $\sum m_{\nu}$ is also inverted from what it was in $\Lambda\textrm{CDM}+\sum m_{\nu}$, i.e. here $H_0$ and $\sum m_{\nu}$ are positively correlated, with $R_{H_0,\sum m_{\nu}} =+0.30$ (DH). The SNe data, when combined with Base, constrains $w$ and $H_0$ much better and breaks the degeneracy between $H_0$ and $\sum m_{\nu}$, and we find that $R_{H_0,\sum m_{\nu}} =-0.013$ (DH) with Base+SNe. This is why, in table \ref{tab4} with the Base data, as we go from DH to NH to IH, the preference for higher and higher neutrino masses and the positive correlation between $H_0$ and $\sum m_{\nu}$ lead to a slight increase in the preferred values of $H_0$, whereas with Base+SNe the almost negligible correlation explains the lack of any big change in the preferred $H_0$ values across the different hierarchies. On the other hand, the strong degeneracy between $w$ and $H_0$ survives, but becomes slightly weaker at $R_{w,H_0} =-0.76$ (DH) with Base+SNe. This leads to the $w\textrm{CDM}+\sum m_{\nu}$ model predominantly preferring values of $w<-1$, i.e. the phantom DE region.
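The compensation described above can be checked with a short numerical sketch. All parameter values below are illustrative assumptions rather than the fitted values of this paper, and the neutrinos are treated as fully non-relativistic, with $\rho_\nu h^2/\rho_{cr,0} \approx (\sum m_\nu/93.14\,\mathrm{eV})(1+z)^3$:

```python
import math

def hubble(z, h=0.68, w=-1.0, omega_b=0.0224, omega_c=0.120,
           omega_gamma=2.47e-5, sum_mnu=0.06):
    """H(z) in units of 100 km/s/Mpc for wCDM + massive neutrinos.

    Illustrative parameter values; sum_mnu in eV, treated as
    non-relativistic (valid only at late times).
    """
    omega_nu = sum_mnu / 93.14
    # Flatness fixes the present DE density; DE scales as (1+z)^{3(1+w)}
    omega_de0 = h**2 - omega_gamma - omega_b - omega_c - omega_nu
    e2 = (omega_gamma * (1 + z)**4
          + (omega_b + omega_c) * (1 + z)**3
          + omega_nu * (1 + z)**3
          + omega_de0 * (1 + z)**(3 * (1 + w)))
    return math.sqrt(e2)

# Lowering w decreases H(z) in the DE-dominated era ...
h_lcdm = hubble(0.5, w=-1.0)
h_phantom = hubble(0.5, w=-1.3)
# ... which can be compensated by a higher h (i.e. a higher H_0)
h_comp = hubble(0.5, w=-1.3, h=0.70)
```

Since $H(0)$ is pinned to $h$ by flatness, the phantom model lowers $H(z)$ only at $z>0$, which is exactly the effect a larger $h$ (or $\sum m_{\nu}$) can offset, reproducing the $w$-$H_0$ anti-correlation discussed above.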
\end{itemize}
\begin{figure}[tbp]
\centering
\includegraphics[width=.3\linewidth]{w_wa_base_w_wa_mnu_base_pan_DH.pdf}
\hfill
\includegraphics[width=.3\linewidth]{w_wa_base_w_wa_mnu_base_pan_NH.pdf}
\hfill
\includegraphics[width=.3\linewidth]{w_wa_base_w_wa_mnu_base_pan_IH.pdf}
\caption{\label{fig:4} 68\% and 95\% marginalized contours for $w_0$ vs $w_a$ in the $w_0 w_a\textrm{CDM}+\sum m_{\nu}$ model using Base and Base+SNe datasets, for DH (left panel), NH (middle panel), and IH (right panel). The region at the right of the vertical dashed blue line and above the slanted dashed blue line is the non-phantom DE region.}
\end{figure}
\begin{itemize}
\item iii) \textbf{Results in the $w_0 w_a \textrm{CDM}+\sum m_{\nu}$ model:} In this model the DE EoS $w(z) = w_0+w_a \frac{z}{1+z}$ is dynamical in nature, but remains close to the value of $w_0+w_a$ at high redshifts and only changes significantly during late times. As in the previous model, there are degeneracies between $w(z)$ and $H_0$, and between $w(z)$ and $\sum m_{\nu}$. However, we now have two parameters, $w_0$ and $w_a$, instead of $w$, with
\begin{equation}
\Omega_{DE}(z) = \Omega_{DE}(0) (1+z)^{3(1+w_0+w_a)} \exp\left(-3w_a\frac{z}{1+z}\right).
\label{eq6}
\end{equation}
Any change in $\chi(z_{dec})$ due to $H_0$ can be readily compensated by $w_0$ and $w_a$, and hence in this model the correlation between $H_0$ and $\sum m_{\nu}$ is very small even with the Base data ($R_{H_0,\sum m_{\nu}} =+0.08$ (DH)). The parameters $w_0$ and $w_a$ are themselves anti-correlated, since the change in $\chi(z_{dec})$ due to an increase in $w_0$ can be countered by a decrease in $w_a$. This anti-correlation can be seen clearly in figure \ref{fig:4}. As can be seen from the figure, neither the Base data nor the Base+SNe combination prefers the non-phantom DE region. The data prefer regions where the DE is currently phantom or has been phantom in the past. Since the phantom DE region, $w(z)< -1$, prefers larger neutrino masses, the mass bounds are very relaxed in this model, as can be seen from table \ref{tab4}. We have $\sum m_{\nu} < 0.25$ eV (DH), 0.26 eV (NH), 0.28 eV (IH) with the Base data, whereas with Base+SNe we get $\sum m_{\nu} < 0.27$ eV (DH), 0.28 eV (NH), 0.29 eV (IH). The SNe data produce better constraints on the DE parameters. Another important thing to notice in figure \ref{fig:4} is that, as we go from DH to NH to IH, the 2D contours shift away from the non-phantom DE region due to the preference for higher and higher masses, and in the IH case the Base data rejects the non-phantom region at more than 2$\sigma$.
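A minimal sketch of the CPL parametrization used here (the present-day value $\Omega_{DE}(0)$ below is an illustrative assumption):

```python
import math

def w_cpl(z, w0, wa):
    """CPL dark-energy equation of state, w(z) = w0 + wa * z/(1+z)."""
    return w0 + wa * z / (1.0 + z)

def omega_de(z, w0, wa, omega_de0=0.31):
    """Dark-energy density of the CPL model: w(0) = w0 today and
    w -> w0 + wa at high redshift."""
    power_term = (1.0 + z)**(3.0 * (1.0 + w0 + wa))
    return omega_de0 * power_term * math.exp(-3.0 * wa * z / (1.0 + z))
```

Evaluating `omega_de` at small $z$ shows directly that increasing $w_a$ at fixed $w_0$ raises $\Omega_{DE}(z)$ at late times, the effect invoked below to explain the $w_a$-$\sum m_{\nu}$ anti-correlation.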
\begin{figure}[tbp]
\centering
\includegraphics[width=.45\linewidth]{w_mnu_base_w_wa_mnu_base_DH.pdf}
\hfill
\includegraphics[width=.45\linewidth]{wa_mnu_base_w_wa_mnu_base_DH.pdf}
\caption{\label{fig:6} 68\% and 95\% marginalized contours for $w_0$ vs $\sum m_\nu$ (eV) (left) and $w_a$ vs $\sum m_\nu$ (eV) (right) in the $w_0 w_a\textrm{CDM}+\sum m_{\nu}$ model using Base dataset, for DH.}
\end{figure}
We can understand how the individual DE parameters $w_0$ and $w_a$ are affected by $\sum m_\nu$ by looking at figure \ref{fig:6}. For the Base data, the relevant correlation coefficients are: $R_{w_0,\sum m_{\nu}} =-0.004$ (DH), $-0.018$ (NH), $-0.056$ (IH), and $R_{w_a,\sum m_{\nu}} =-0.27$ (DH), $-0.21$ (NH), $-0.17$ (IH). It is clear that $w_a$ is much more strongly anti-correlated with $\sum m_{\nu}$ than $w_0$ is. Since $w(z) = w_0 +w_a z/(1+z)$, given a particular value of $w_0$, it is $w_a$ which determines the late-time dynamics of the dark energy equation of state, and it is thus more affected by $\sum m_{\nu}$ than $w_0$. Increasing $w_a$ has the effect of increasing $\Omega_{\rm DE}(z)$ at late times ($z\ll1$), as can be seen from eq. \ref{eq6}. This can be partially compensated by decreasing $\sum m_{\nu}$, which decreases $\rho_{\nu} (z)$ at late times and keeps $\chi(z_{dec})$ fixed. It can also be compensated by decreasing $w_0$; thus $w_0$ and $w_a$ are anti-correlated, whereas $w_0$ and $\sum m_{\nu}$ are only weakly correlated in the Base data.
\item iv) \textbf{Results in the $w_0 w_a \textrm{CDM}+\sum m_{\nu}$ model with $w(z)\geq -1$ :} This is essentially the same model as in the previous case, but the parameter space is restricted to the non-phantom range only, i.e. $w(z)\geq -1$. This parameter space corresponds to dark energy field theories modelled with a single scalar field, like quintessence \cite{Linder:2007wa}, which cannot cross the phantom barrier (the $w(z)=-1$ line). Due to the degeneracy between $w$ and $\sum m_{\nu}$ (which we have discussed for the last two models), the cosmological data actually prefers smaller and smaller neutrino masses as we go deeper into the non-phantom region of the $w_0$-$w_a$ parameter space. Thus the bounds on $\sum m_{\nu}$ obtained in this model are even tighter than in the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model. This was first noticed in \cite{Vagnozzi:2018jhn}, and subsequently in \cite{Choudhury:2018byy, Choudhury:2018adz}. From table \ref{tab4}, with the Base data we find $\sum m_{\nu} < 0.10$ eV (DH), 0.13 eV (NH), 0.16 eV (IH). With the Base+SNe combination, the bounds stand at $\sum m_{\nu} < 0.09$ eV (DH), 0.13 eV (NH), 0.16 eV (IH). These are the tightest bounds on $\sum m_{\nu}$ reported in this paper for the datasets considered.
\begin{figure}[tbp]
\centering
\includegraphics[width=.4963\linewidth]{alens_mnu_base_alens_mnu_base_pan.pdf}
\hfill
\includegraphics[width=.4963\linewidth]{omegak_mnu_base_omegak_mnu_base_pan.pdf}
\caption{\label{fig:5} 68\% and 95\% marginalized contours for $\Omega_k$ vs $\sum m_{\nu}$ in the $\Lambda\textrm{CDM}+\sum m_{\nu}+ \Omega_k$ model on the right, and for $A_{\textrm{Lens}}$ vs $\sum m_{\nu}$ in the $\Lambda\textrm{CDM}+\sum m_{\nu}+A_{\textrm{Lens}}$ model on the left, using Base and Base+SNe datasets and considering degenerate hierarchy.}
\end{figure}
\item v) \textbf{Results in the $\Lambda\textrm{CDM}+\sum m_{\nu}+A_{\textrm{Lens}}$ model :} As mentioned in section \ref{sec2.1}, the $A_{\textrm{Lens}}$ parameter is used to artificially scale the lensing amplitude predicted by the underlying theoretical model. There is a well-known $A_{\textrm{Lens}}$-issue in the CMB anisotropy data, namely a preference for $A_{\textrm{Lens}}$ values which are more than 2$\sigma$ away from the theoretical expectation of $A_{\textrm{Lens}}=1$. For instance, Planck 2018 TT,TE,EE+lowE data yields $A_{\textrm{Lens}}=1.180\pm0.065$ (68\%) in the $\Lambda\textrm{CDM}+A_{\textrm{Lens}}$ model, showing a 2.8$\sigma$ tension \cite{Aghanim:2018eyx}. The $\Lambda\textrm{CDM}+A_{\textrm{Lens}}$ model also provides a significantly better fit to the Planck power spectrum data compared to $\Lambda\textrm{CDM}$. The Planck power spectrum constrains $A_{\textrm{Lens}}$ by measuring the smoothing of the CMB acoustic peaks due to the gravitational lensing of the CMB photons by large scale structure. The Planck lensing data, however, constrains $A_{\textrm{Lens}}$ directly, and adding CMB lensing data to the anisotropy power spectrum usually brings the $A_{\textrm{Lens}}$ values back closer to the theoretically expected result \cite{Ade:2015xua}. We find this to be true in this model as well. With Base data, for the DH case, we have $A_{\textrm{Lens}} = 1.100^{+0.100}_{-0.096}$ (95\%), which is almost consistent with $A_{\textrm{Lens}}=1$ at 2$\sigma$.
In this $\Lambda\textrm{CDM}+\sum m_{\nu}+A_{\textrm{Lens}}$ model, $\sum m_{\nu}$ and $A_{\textrm{Lens}}$ are strongly correlated, since increasing $\sum m_{\nu}$ reduces the lensing-induced smearing of the acoustic peaks, as a larger $\sum m_{\nu}$ increasingly suppresses the small-scale matter power. We find that $R_{A_{\textrm{Lens}},\sum m_{\nu}} = +$0.68 (DH), $+$0.60 (NH), $+$0.55 (IH) with the Base data. The positive correlation between these two parameters can be seen in figure \ref{fig:5}. This degeneracy causes the bounds on $\sum m_{\nu}$ to relax considerably compared to the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model, as can be seen from table \ref{tab4}. Compared to a 95\% bound of $\sum m_{\nu}<0.12$ eV (DH) with Base data in the minimal $\Lambda\textrm{CDM}+\sum m_{\nu}$ model, here we get a bound of $\sum m_{\nu}<0.29$ eV with the same data. In fact, this $A_{\textrm{Lens}}$-issue explains why the neutrino mass bounds are so strong in the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model, especially with the degenerate hierarchy, as a higher $\sum m_{\nu}$ would decrease the lensing of the CMB photons.
From DH to NH to IH, while using the same data, the preference for higher $A_{\textrm{Lens}}$ values occurs together with higher $\sum m_{\nu}$ values, and the discrepancy with $A_{\textrm{Lens}} = 1$ increases.
In the absence of a varying dark energy EoS, however, $H_0$ and $\sum m_{\nu}$ are strongly correlated in this model. The model also produces lower $\sigma_8$ and $S_8$ values compared to the other models considered in this paper, leading to a decrease in the $\sigma_8$ tension present between Planck and cosmic shear experiments like CFHTLenS \cite{Erben:2012zw}, KiDS-450 \cite{Hildebrandt:2016iqg}, etc.
\item vi) \textbf{Results in the $\Lambda\textrm{CDM}+\sum m_{\nu}+\Omega_{\textrm{k}}$ model :} In this model we consider the possibility of a non-flat background geometry of the universe. $\Omega_{\textrm{k}}$ and $\sum m_{\nu}$ are strongly and positively correlated with both the Base and Base+SNe combinations, as can be seen from the right panel of figure \ref{fig:5}. This causes the bounds on $\sum m_{\nu}$ to degrade compared to the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model. With Base data we have: $\sum m_{\nu} < 0.15$ eV (DH), $\sum m_{\nu} < 0.17$ eV (NH), $\sum m_{\nu} < 0.20$ eV (IH). The origin of the degeneracy again lies in the tightly constrained angular diameter distance to the last scattering surface, i.e. $D_{A}(z_{\rm dec}) \equiv \sin\left(\sqrt{K}\,\chi(z_{\rm dec})\right)/\sqrt{K}$ (where $K = -\Omega_k H_0^2$ is the curvature), leading to a three-way geometric degeneracy between $H_0$, $\Omega_k$ and $\sum m_{\nu}$ \cite{Howlett:2012mh} (note that when $\Omega_k=0$, $D_{A}(z) = \chi(z)$). In this model, the expression for $H(z)$ is given by,
\begin{equation}
H(z) = \sqrt{\omega_{\gamma}(1+z)^4 + (\omega_c + \omega_b)(1+z)^3 + \omega_{\Lambda}+\omega_k (1+z)^2 + \rho_{\nu}(z)h^2/\rho_{cr,0}}.
\end{equation}
It is to be noted, however, that the Planck data actually prefers values of $\Omega_{\textrm{k}}<0$ \cite{Aghanim:2018eyx}, since closed-universe models produce a larger lensing amplitude compared to a flat universe. However, the inclusion of lensing data usually brings the parameters back closer to a flat universe. The BAO data helps in partially breaking the three-way degeneracy by constraining $H_0$. We find that with Base data, in the DH case, $\Omega_{\textrm{k}} = 0.0004\pm0.0020$, which is perfectly consistent with $\Omega_{\textrm{k}}=0$. The correlation coefficients are $R_{\Omega_k, \sum m_{\nu}} = +0.41$, $R_{H_0, \sum m_{\nu}} = -0.18$, and $R_{\Omega_k, H_0} = +0.60$ for DH, i.e. the three parameters still remain considerably correlated with each other. From DH to NH to IH, the preference for higher neutrino masses leads to a preference for higher values of $\Omega_k$, although with the Base data $\Omega_{\textrm{k}} = 0$ is included within the 2$\sigma$ range of $\Omega_k$ for all of the hierarchies.
\end{itemize}
\textbf{Akaike information criterion (AIC).} Here we compare the goodness of fit of the degenerate, normal and inverted hierarchy scenarios in the extended models studied in this section. As in the previous section, we use the AIC as a measure of goodness of fit. The AIC values for the degenerate, normal, and inverted hierarchies are denoted as $\textrm{AIC}_{\textrm{DH}}$, $\textrm{AIC}_{\textrm{NH}}$, and $\textrm{AIC}_{\textrm{IH}}$ respectively. Since the DH, NH and IH cases all have the same number of parameters, $\Delta \textrm{AIC} = \textrm{AIC} - \textrm{AIC}_{\textrm{IH}} = \Delta \chi^2$, where $\Delta \chi^2$ is the difference between the $\chi^2$ value for a particular hierarchy (DH, NH, or IH) and the IH case, at the best-fit points. The $\Delta \textrm{AIC}$ values for each of the extended models have been listed in table \ref{tab5}, for the Base and Base+SNe datasets. We find that in most scenarios $\Delta \textrm{AIC} < 0$, i.e. the fit due to DH/NH is better than IH, whereas in a few cases $\Delta \textrm{AIC} > 0$. But in none of the models do we see any statistically significant difference between NH and IH.
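The $\Delta\textrm{AIC}$ bookkeeping above amounts to the following sketch (the parameter counts are placeholders; with equal counts the $2k$ penalty terms cancel and $\Delta\textrm{AIC}$ reduces to $\Delta\chi^2$):

```python
def delta_aic(chi2_model, chi2_ref, k_model, k_ref):
    """AIC = 2k + chi^2 at the best fit (up to a model-independent
    constant); returns AIC(model) - AIC(reference)."""
    aic_model = 2 * k_model + chi2_model
    aic_ref = 2 * k_ref + chi2_ref
    return aic_model - aic_ref
```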
\begin{table*}\centering
\ra{1.3}
\resizebox{\textwidth}{!}{\begin{tabular}{@{}rrrrcrrr@{}}\toprule
& \multicolumn{3}{c}{Base} & \phantom{abc} & \multicolumn{3}{c}{Base+SNe}\\
\cmidrule{2-4} \cmidrule{6-8}
& DH & NH & IH && DH & NH & IH\\ \midrule
$\Lambda\textrm{CDM}+\sum m_{\nu}+r$ & $-5.32$ & $-1.39$ & 0 && $-4.46$ & $-3.21$ & 0 \\
$w\textrm{CDM}+\sum m_{\nu}$ & $-1.66$ & $-1.59$ & 0 && $-0.84$ & $-0.99$ & 0 \\
$w_0 w_a \textrm{CDM}+\sum m_{\nu}$ & $+0.75$ & $+2.08$ & 0 && $-0.80$ & $-0.41$ & 0 \\
$w_0 w_a \textrm{CDM}+\sum m_{\nu}$ with $w(z)\geq -1$ & $-2.71$ & $-0.69$ & 0 && $-4.67$ &$-1.31$ & 0\\
$\Lambda\textrm{CDM}+\sum m_{\nu}+A_{\textrm{Lens}}$ & $+0.88$ & $+0.20$ & 0 && $+0.13$ & $+0.92$ & 0 \\
$\Lambda\textrm{CDM}+\sum m_{\nu}+\Omega_{\textrm{k}}$ & $-3.56$ & $-1.96$ & 0 && $-2.74$ & $-1.22$ & 0 \\
\bottomrule
\end{tabular}}
\caption{ The values of $\Delta \chi^2 = \Delta \textrm{AIC} = \chi^2 - \chi^2_{\rm IH}$ for various extended models studied in this paper, with Base and Base+SNe dataset combinations. The $\chi^2$ differences are calculated at best-fit points.}\label{tab5}
\end{table*}
\section{Bayesian Model Comparison}\label{sec5}
While in the previous section we chose a frequentist measure (i.e. $\Delta\rm AIC$) to compare NH and IH, in this section we take a Bayesian approach. Our method closely follows \cite{Hannestad:2016fog}, and that of \cite{Blennow:2013kga,Hall:2012kg}. Consider two models, $M_1$ and $M_2$, with parameter vectors $\theta_1$ and $\theta_2$, being tested against the same data $D$. The posterior probability of a particular model $M_i$ ($i = 1,2$) given the data $D$ is,
\begin{equation}
p_i \equiv P(M_i|D) = \frac{\pi_i P(D|M_i)}{P(D)},
\label{eq7}
\end{equation}
where $P(D)$ is the prior probability for the data, and $\pi_i\equiv P(M_i)$ is our prior degree of belief in model $M_i$. Since we are dealing with probabilities, if $M_1$ and $M_2$ are the only two possible models for a particular scenario (for instance, if $M_1$ and $M_2$ represent NH and IH and we know these are the only two possible mass orderings), we can write $\pi_1 + \pi_2 =1$.
$P(D|M_i)$ is the Bayesian evidence, or marginal likelihood, and is calculated by marginalizing the product of the likelihood $P(D|\theta_i,M_i)$ and the parameter prior $P(\theta_i|M_i)$ over all parameters:
\begin{equation}
P(D|M_i) = \int d\theta_i P(D|\theta_i,M_i) P(\theta_i|M_i).
\end{equation}
Again, since we can take $p_1 + p_2 =1$, we can rewrite eq.~\ref{eq7} as,
\begin{equation}
p_i \equiv P(M_i|D) = \frac{\pi_i P(D|M_i)}{\Sigma_i\pi_i P(D|M_i)}.
\end{equation}
As indicated before, we shall treat $M_1$ and $M_2$ as NH and IH, and from now on we shall use the notation $p_{\rm NH}$ and $p_{\rm IH}$ for their posterior probabilities. One can now define the quantity Odds(NH:IH) as
\begin{equation}
\mathrm{Odds(NH:IH)} \equiv \frac{p_{\rm NH}}{p_{\rm IH}}
\end{equation}
which is simply the ratio of the posterior probabilities \cite{Blennow:2013kga}. If no a priori knowledge of the $\pi_i$ exists, it is customary to take all the $\pi_i$ to be equal, i.e. $\pi_{\rm NH} = \pi_{\rm IH} = 0.5$. In that case, Odds(NH:IH) becomes equal to the Bayes factor $B_{\rm NH/IH}$, which is defined as the ratio of the evidences in the NH and IH cases.
However, in the case of the neutrino mass hierarchy, one can assign the $\pi_i$ from a Bayesian analysis of neutrino oscillations data. From the Bayesian analysis of neutrino oscillations data in \cite{deSalas:2018bym}, we see that $\ln(B_{\rm NH/IH}) = 6.5$ (taking only the central value), which corresponds to posterior $\textrm{Odds(NH:IH)} = 665:1$. If we take this as our prior belief before doing any analysis with cosmological data, we get $\pi_{\rm NH} = 0.9985$, $\pi_{\rm IH} = 0.0015$. We have used both this and the previous $\pi_{\rm NH} = \pi_{\rm IH} = 0.5$ case while calculating Odds(NH:IH).
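The conversion between the oscillation-data Bayes factor and the priors quoted above can be sketched as follows:

```python
import math

def posterior_odds(ln_bayes_factor, prior_nh=0.5, prior_ih=0.5):
    """Odds(NH:IH) = (pi_NH / pi_IH) * B_{NH/IH}."""
    return (prior_nh / prior_ih) * math.exp(ln_bayes_factor)

def cl_ih(odds):
    """Confidence level at which IH is disfavoured: 1 - p_IH."""
    p_ih = 1.0 / (1.0 + odds)
    return 1.0 - p_ih

# Oscillation data alone: ln B = 6.5 gives odds of about 665:1,
# i.e. pi_NH ~ 0.9985 when used as a prior for the cosmological analysis
odds_osc = posterior_odds(6.5)
pi_nh = odds_osc / (1.0 + odds_osc)
```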
Alternatively, following \cite{Hannestad:2016fog, Vagnozzi:2017ovm}, one can also consider the confidence limit at which IH is disfavored, as
\begin{equation}
\mathrm{CL_{IH}} \equiv 1- p_{\rm IH}.
\end{equation}
Odds(NH:IH), CL$_{\rm IH}$, and the strength of evidence as per the Jeffreys scale \cite{Jeffreys} are provided in table \ref{tab6}, where we have used only the mean values of the Bayesian evidence estimated by CosmoChord \cite{Handley} to compute the numerical quantities. It can be seen from table~\ref{tab6} that the analyses without the input from neutrino oscillations data on $\pi$ yield only ``Inconclusive'' evidence for the normal neutrino mass hierarchy, which agrees with our small $\Delta \rm AIC$ values from the previous section. On the other hand, all the evidences are ``Strong'' with $\pi_{\rm NH} = 0.9985$, $\pi_{\rm IH} = 0.0015$, i.e. the preference for the normal mass hierarchy is mostly driven by the oscillations data.
\begin{table*}\centering
\ra{1.3}
\resizebox{\textwidth}{!}{\begin{tabular}{@{}rrrccrrc@{}}\toprule
& \multicolumn{3}{c}{$\pi_{\rm NH}=\pi_{\rm IH}=0.5$} & \phantom{abc} & \multicolumn{3}{c}{$\pi_{\rm NH}=0.9985$,$\pi_{\rm IH}=0.0015$}\\
\cmidrule{2-4} \cmidrule{6-8}
& Odds(NH:IH) & CL$_{\rm IH}$ & Strength of Evidence && Odds(NH:IH) & CL$_{\rm IH}$ & Strength of Evidence\\ \midrule
$\Lambda\textrm{CDM}+\sum m_{\nu}$ & 1.43:1 & 58.8\% & Inconclusive && 950:1 & 99.895\% & Strong \\
$\Lambda\textrm{CDM}+\sum m_{\nu}+r$ & 0.93:1 & 48.1\% & Inconclusive && 617:1 & 99.838\% & Strong \\
$w\textrm{CDM}+\sum m_{\nu}$ & 1.3:1 & 56.6\% & Inconclusive && 867:1 & 99.885\% & Strong\\
$w_0 w_a \textrm{CDM}+\sum m_{\nu}$ & 1.24:1 & 55.4\% & Inconclusive && 827:1 & 99.879\% & Strong \\
$w_0 w_a \textrm{CDM}+\sum m_{\nu}$ with $w(z)\geq -1$ & 2:1 & 66.6\% & Inconclusive && 1329:1 & 99.925\% & Strong\\
$\Lambda\textrm{CDM}+\sum m_{\nu}+A_{\textrm{Lens}}$ & 0.75:1 & 42.9\% & Inconclusive && 500:1 & 99.800\% & Strong \\
$\Lambda\textrm{CDM}+\sum m_{\nu}+\Omega_{\textrm{k}}$ & 2.3:1 & 69.7\% & Inconclusive && 1530:1 & 99.935\% & Strong \\
\bottomrule
\end{tabular}}
\caption{ The values of Odds(NH:IH) and CL$_{\rm IH}$ and strength of evidence for various cosmological models studied in this paper, with Base dataset.}\label{tab6}
\end{table*}
\section{Discussion and conclusions}\label{sec4}
Presently, the upper bounds on the sum of the 3 active neutrino masses, $\sum m_{\nu}$, from analyses of cosmological data in the backdrop of the $\Lambda\textrm{CDM}+\sum m_{\nu}$ model are bordering on the minimum sum of neutrino masses required by the inverted hierarchy, which is around 0.1 eV. However, these analyses are usually done under the assumption of 3 degenerate neutrino masses, whereas terrestrial neutrino oscillation experiments have confirmed that the three neutrino masses are not equal. Thus, in this paper we update the bounds on $\sum m_{\nu}$ from the latest publicly available cosmological data while explicitly considering particular neutrino mass hierarchies, using the results on mass-squared splittings from a global analysis of neutrino oscillations data, NuFit 4.0 \cite{Esteban:2018azc}. For implementing the normal and inverted hierarchy scenarios, we use the mass of the lightest mass eigenstate, denoted by $m_0$, as the varying parameter, use the mass-squared splittings from NuFit 4.0 to determine the other masses in a particular hierarchy, and thus treat the total mass sum, i.e. $\sum m_{\nu}$, as a derived parameter. This approach puts implicit priors on the neutrino mass sum: $\sum m_{\nu} \geq 0.06$ eV for the normal hierarchy (NH) case, and $\sum m_{\nu} \geq 0.10$ eV for the inverted hierarchy (IH) case.
In the minimal $\Lambda\textrm{CDM}+\sum m_{\nu}$ model with Planck 2018 TT,TE,EE, lowE, lensing, and the latest BAO data from various galaxy surveys (we call this dataset ``Base''), we find that at 95\% C.L. $\sum m_{\nu}<0.12$ eV in the case of the degenerate mass approximation. This is similar to the bound of $\sum m_{\nu}<0.12$ eV quoted by the Planck 2018 collaboration using the same data, in the same model. We also find that in the same model $\sum m_{\nu}<0.15$ eV in the case of NH and $\sum m_{\nu}<0.17$ eV in the case of IH; i.e., the bounds vary significantly across the different mass orderings. The main reason for these differences is the implicit prior on $\sum m_{\nu}$ when we assume a particular hierarchy. The case of degenerate neutrino masses with a prior $\sum m_{\nu}\geq 0.06$ eV or $\sum m_{\nu}\geq 0.10$ eV produces almost the same bounds as the NH or IH case with the lightest-mass ($m_0$) parametrization used in this paper. The NH and IH cases produce bounds which are only slightly stronger than the degenerate case with priors of $\sum m_{\nu}\geq 0.06$ eV and $\sum m_{\nu}\geq 0.10$ eV respectively, which can be attributed to the fact that, with the lightest-mass ($m_0$) parametrization, the implicit prior on $\sum m_{\nu}$ is not completely flat (i.e. it rises at low values of $m_0$). Also, we find that the normal hierarchy is very mildly preferred over the inverted: $\Delta \chi^2 \equiv \chi^2_{\textrm{NH}}- \chi^2_{\textrm{IH}} = -0.95$ (best-fit). We also study this model against another dataset combination: Base+SNe. For the Base and Base+SNe datasets, the $\chi^2$ differences between the NH and IH cases remain statistically mild, but the bounds on $\sum m_{\nu}$ across the three different mass orderings vary considerably.
In this paper, we also provide bounds on $\sum m_{\nu}$ considering different hierarchies in various extended models: $\Lambda\textrm{CDM}+\sum m_{\nu}+r$, $w\textrm{CDM}+\sum m_{\nu}$, $w_0 w_a \textrm{CDM}+\sum m_{\nu}$, $w_0 w_a \textrm{CDM}+\sum m_{\nu}$ model with $w(z)\geq -1$, $\Lambda \textrm{CDM} + \sum m_{\nu} + A_{\textrm{Lens}}$, and $\Lambda \textrm{CDM} + \sum m_{\nu} + \Omega_k$. Here $r$ is the tensor-to-scalar ratio, $w$ is the dark energy equation of state (DE EoS) parameter, $w_0$ and $w_a$ again parametrize the DE EoS but in a dynamical manner: $w(z) = w_0 + w_a z/(1+z)$ (CPL parametrization), $A_{\textrm{Lens}}$ is the parameter for artificial scaling of the lensing amplitude, and $\Omega_k$ is the curvature energy density. We used the Base and Base+SNe dataset combinations to constrain these models.
Consistent with other studies (see, e.g., \cite{DiValentino:2019dzu} for a very recent example), we found that in some cases the formal bound on $\sum m_\nu$ could be loosened by up to a factor of two.
However, in none of the extended models could we find any statistically significant difference in the quality of fit to the data between NH and IH, i.e. the current cosmological data is not sufficiently strong to discriminate between the two hierarchies. Apart from checking the $\chi^2$ differences between NH and IH, we also confirmed this through the computation of Bayesian evidences. The results are provided in section {\ref{sec5}}. Starting with equal a priori beliefs in the NH and IH scenarios, we find no conclusive evidence for the normal hierarchy over the inverted hierarchy in any of the cosmological models with the Base dataset. This finding is consistent with previous work showing that if the actual value of $\sum m_{\nu}$ is sufficiently less than 0.1 eV (i.e. the neutrino masses follow NH), a formal sensitivity to $\sum m_\nu$ of 0.01-0.02 eV is required to guarantee a conclusive distinction between the two hierarchies \cite{Hannestad:2016fog,Mahony:2019fyb}. However, if the actual value of $\sum m_{\nu} >$ 0.1 eV, then we will not be able to determine the hierarchy, due to the fact that even the most optimistic future cosmological measurements will not be able to differentiate among the individual neutrino masses \cite{Archidiacono:2020dvx}. Hence, for a direct detection of the neutrino mass hierarchy, we have to depend on future terrestrial neutrino oscillation experiments like DUNE \cite{Acciarri:2015uup} and JUNO \cite{An:2015jdp}.
\section*{Acknowledgements}
The authors immensely thank the anonymous referee for the suggestions on improving the manuscript. SRC thanks the computing facilities at HRI (\url{http://www.hri.res.in/cluster/}) and at Aarhus University (\url{http://www.cscaa.dk/}), and the warm hospitality at Aarhus University during the completion of the project. SRC would also like to thank the Department of Atomic Energy (DAE) Neutrino Project of HRI. STH is supported by a grant from the Villum Foundation.
We explore the main processes involved in the evolution of general quantum systems by means of \textit{Diagrams of States}. The representation by diagrams of states is a novel method to graphically represent and analyze how quantum information is elaborated during computations performed by quantum circuits. In the widely-used representation by quantum circuits, horizontal lines represent the single qubits constituting the considered quantum system. In contrast, in diagrams of states we draw a horizontal line for each state of the computational basis. Diagrams of states are therefore less compact than quantum circuits, but they allow a clear and straightforward visualization of the quantum information processing.
We previously introduced this method by defining basic representations for standard quantum operations and providing examples of basic practical quantum computations \cite{FeStsdI08}. In this paper, diagrams of states are applied to explore the density matrix representation of quantum states, the time-evolution of open quantum systems by Kraus operators, and a general model for single-qubit decoherence and errors. As in previous related works \cite{FeStsdI08, FeSt06}, diagrams of states will be used both as a novel approach to investigate quantum computations, in addition to (or in substitution of) standard methods like analytical study and Feynman diagrams \cite{Fey88, Fey99}, and as an auxiliary tool to construct novel quantum computations from the desired manipulation of quantum states.
This paper is organized as follows. In Section~\ref{sec-sd-II-denskraus}, we consider the representation of quantum states by density matrices, and their partial trace and purification processes for composite systems; as practical examples, we illustrate physically feasible procedures to reduce density matrices in two-qubit systems and to purify density matrices of single-qubit systems. We conclude this section with the Kraus representation for the most general time-evolution of density matrices. In Section~\ref{sec-sd-II-deco1q}, we explore the most general transformation to model single-qubit decoherence and errors \cite{BeFeSt06}.
Finally, in Section~\ref{sec-sd-II-concl} we present our conclusions.
Throughout this paper, in order to perform the analysis of given quantum processes, we shall directly derive diagrams of states from the quantum circuits associated with the physical implementation of the processes. These diagrams can easily be rearranged into new simpler diagrams, which better visualize the overall manipulation of information from input to output: We shall refer to the former as \textit{complete} diagrams and to the latter as \textit{simplified} diagrams.
Any sequence of logic gates must be read from left (input) to right (output), both for conventional quantum circuits and for their representations by means of diagrams of states. From top to bottom, qubits run from the least significant (\textsc{lsb}) to the most significant (\textsc{msb}).
\section{Density Matrices and the Kraus Representation}\label{sec-sd-II-denskraus}
The density matrix representation (see, \textit{e.g.}, \cite{qcbook2}, pages 258-264, or \cite{CoDiLa77}, pages 295-307) is very useful whenever representing non-pure states or describing their time evolution. Any quantum state can be represented by means of a density matrix which, in turn, can be expressed by its spectral decomposition:
\begin{equation}
\rho = \sum_i \lambda_i \ket i \bra i,
\end{equation}
where $\{\ket i\}$ are the density matrix eigenvectors and $\{\lambda_i\}$ are the corresponding eigenvalues.
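As a quick numerical illustration (the density matrix below is an arbitrary example, not taken from the text), the spectral decomposition can be obtained with a standard eigensolver and used to reconstruct $\rho$:

```python
import numpy as np

# Density matrix of a mixed single-qubit state: Hermitian, unit trace,
# positive semi-definite
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])

# Spectral decomposition: eigenvalues lambda_i and eigenvectors |i>
eigvals, eigvecs = np.linalg.eigh(rho)

# Reconstruct rho = sum_i lambda_i |i><i|
rho_rec = sum(lam * np.outer(v, v.conj())
              for lam, v in zip(eigvals, eigvecs.T))
```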
\subsection{Partial trace and reduced density matrices}
The Hilbert space of a composite system which is constituted by two independent or interacting subsystems can be obtained as the tensor product of the Hilbert spaces corresponding to the two constitutive subsystems.
Given the density matrix of a general state of the composite system, a partial trace operation consists in finding a description of states which considers only elements related to a certain subsystem (denoted as subsystem $I$) while disregarding any element related to the remaining subsystem (denoted by $II$):
\begin{equation}
\rho^I \equiv \mathrm{Tr}_{II} \{\rho\} \equiv \sum_i \bra i \rho \ket i,
\end{equation}
where $i$ is the index corresponding to subsystem $II$.
\subsubsection{Reduced density matrices in the spaces of two qubits}
For the sake of simplicity, we consider a system composed of two single-qubit subsystems, whose general overall state is described by a density matrix of dimension $4 \times 4$. By performing on this density matrix the partial trace operations on each one of the two single-qubit subsystems, it is possible to define the two corresponding reduced density matrices (of dimension $ 2 \times 2 $).
The two possible partial trace operations are graphically represented in Figure \ref{sd-II-partialtr}. The processing of information involved in the partial trace operations is most clearly illustrated in the diagrams of states (left). Evidently, the diagram related to the partial trace on the least significant qubit (lower) can be directly derived from the diagram related to the partial trace on the most significant qubit (upper) by appropriately permuting the qubits constituting the system; the permutation is obtained by simply applying swap gates before and after the partial trace operation.
By appropriate immersions and permutations of diagrams of states \cite{FeStsdI08}, the method can easily be generalized to describe partial trace operations in systems of higher dimensionality and with respect to subsystems constituted by any combination of qubits.
Finally, the diagram-of-states representation for partial trace operations will be a fundamental tool for the description of decoherence and evolution of open single-qubit systems, addressed in Section \ref{sec-sd-II-deco1q}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=9.4cm]{sd-II-partialtr.eps}
\end{center}
\caption{Graphic representations of partial trace operations for a two-qubit system, on the most significant qubit (upper figures) and on the least significant qubit (lower figures). From left to right: quantum circuits, additions of matrix entries, diagrams of states. The reduced density matrix of the subsystem constituted by the least significant qubit is obtained by adding the $ 2 \times 2 $ sub-matrices highlighted in gray shading. Similarly, the reduced density matrix of the subsystem constituted by the most significant qubit is obtained by adding in pairs the entries highlighted in gray shading.}
\label{sd-II-partialtr}
\end{figure}
\subsection{Purification of a density matrix}
The purification of density matrices (see, \textit{e.g.}, \cite{qcbook2}, pages 267-270) can be considered as a sort of inverse process to partial trace operations.
A general non-pure quantum state in a system denoted by $I$ is represented by its density matrix $\rho^I$. This density matrix $\rho^I$ can be seen as a reduced density matrix obtained by partial trace from a larger density matrix $\rho$, describing a pure state in a larger system. The larger system, denoted by $I + II$, includes the initial system $I$ and a second ancillary quantum system $II$. The purification of the density matrix $\rho^I$ consists in determining a pure state of the system $I + II$ from which $\rho^I$ is recovered by the partial trace operation.
By using the Latin set of indices $\{ i, j, ...\}$ when referring to the initial quantum system $I$ and the Greek set of indices $\{\alpha, \beta, ...\}$ when referring to the ancillary quantum system $II$, a general pure state in the overall system $I+II$ is described by:
\begin{equation}
|\Phi\rangle\equiv\sum_{i \; \alpha} \, C_{i\alpha} \, |i\rangle|\alpha\rangle
\end{equation}
and by the density matrix:
\begin{equation}
\rho\equiv\sum_{i \, \alpha}\sum_{j \; \beta}C_{i\alpha} \,
C_{j\beta}^* \, |i\rangle|\alpha\rangle\langle\beta|\langle
j|.
\end{equation}
The assigned density matrix $\rho^I$ is in general not diagonal:
\begin{equation}\label{eq-sd-II-rhoI}
\rho^I\equiv\sum_{k \; l}|k\rangle\rho^I_{kl}\langle l|.
\end{equation}
By imposing that the density matrix $\rho^I$ is obtained by partial trace of the density matrix $\rho$ of the overall system with respect to the basis $\{|\gamma\rangle\}$ of the subsystem $II$, we obtain:
\begin{equation}
\rho^I\equiv\sum_{k\;l}|k\rangle\rho^I_{kl}\langle l|=\sum_\gamma
\; \left\{ \sum_{i\;j}C_{i\gamma} \, C_{j\gamma}^* \, |i\rangle\langle
j|\right\},
\end{equation}
that is, a set of equations for the entries of the density matrix $\rho^I$:
\begin{equation}\label{eq-sd-II-coefrhoI}
\rho^I_{ij}=\sum_\gamma C_{i\gamma}C_{j\gamma}^*,
\end{equation}
where the coefficients $\rho_{ij}^I$ are known, while the coefficients $C_{i\gamma}$ are to be determined.
As is well known, equations (\ref{eq-sd-II-coefrhoI}) are solvable if the space of the basis $\{\ket\gamma\}$ has a sufficiently high dimensionality. That is, given any density matrix in a defined quantum system, it is always possible to consider an extended quantum system in which the assigned non-pure state can be obtained by partial trace of a pure state in the extended system.
\subsubsection{Density matrix purification for a single-qubit system}
For the sake of simplicity, we consider the purification process for a density matrix $\rho$ assigned in a single-qubit system. A single ancillary qubit is sufficient to obtain the corresponding pure state $\ket\Psi$ in a larger quantum system.
From equations (\ref{eq-sd-II-coefrhoI}) we obtain a system of three equations in the four unknowns $C_{i\alpha}$:
\begin{equation}\label{eq-sd-II-sisrhoI}
\left\{%
\begin{array}{c}
\rho_{11}=C_{00}C_{00}^*+C_{01}C_{01}^*\nonumber\\
\rho_{12}=C_{00}C_{10}^*+C_{01}C_{11}^*=\rho^*_{21}\\
\rho_{22}=C_{10}C_{10}^*+C_{11}C_{11}^*\nonumber\\
\end{array}%
\right.
\end{equation}
We can assume the parameter $C_{00}$ to be real, since a common phase is arbitrary, and we can impose the condition $C_{01}=0$, thus obtaining:
\begin{equation}
C_{00}=\sqrt{\rho_{11}}\, , \sh
C_{01}=0\, , \sh
C_{10}=\frac{\rho^*_{12}}{\sqrt{\rho_{11}}}\, , \sh
C_{11}=\sqrt{\frac{\rho_{11}\,\rho_{22}-|\rho_{12}|^2}{\rho_{11}}}\, .
\end{equation}
Thus, the following pure two-qubit state performs a purification of the given density matrix $\rho$:
\begin{equation}\label{eq-sd-II-purrho}
\ket\Psi= \sqrt{\rho_{11}}\;\ket0\ket0
+\frac{\rho^*_{12}}{\sqrt{\rho_{11}}}\;\ket1\ket0+
\sqrt{\frac{\rho_{11}\,\rho_{22}-|\rho_{12}|^2}{\rho_{11}}}\;\ket1\ket1.
\end{equation}
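The purification formulas above can be checked numerically. The following NumPy sketch (our own illustration; the sample density matrix is an arbitrary assumption) builds the coefficients $C_{i\alpha}$ from a given $\rho$ and verifies that tracing out the ancillary qubit recovers $\rho$:

```python
import numpy as np

# Arbitrary valid single-qubit density matrix (Hermitian, unit trace,
# positive definite), chosen only for this illustration.
rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])

C00 = np.sqrt(rho[0, 0])
C10 = np.conj(rho[0, 1]) / C00
C11 = np.sqrt((rho[0, 0] * rho[1, 1] - abs(rho[0, 1]) ** 2) / rho[0, 0])

# |Psi> = C00 |00> + C10 |10> + C11 |11>, with the main qubit as MSB.
psi = np.array([C00, 0, C10, C11])
rho_all = np.outer(psi, psi.conj())

# Tracing out the ancillary qubit (LSB) recovers rho.
rho_rec = np.einsum('aibi->ab', rho_all.reshape(2, 2, 2, 2))
```

The overall state is pure ($\mathrm{Tr}\,\rho_{All}^2 = 1$) while its reduction reproduces the assigned mixed state.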
Figure \ref{sd-II-cirpurrho} shows the quantum circuit to generate the quantum state for the purification of the density matrix $\rho$. The processing of information is clearly illustrated by the complete and simplified diagrams of states in figure \ref{sd-II-diastpurrho}. In these diagrams, the active information contained in the initial states is manipulated from left to right along thick lines, while thin lines correspond to absence of information.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=6.4cm]{sd-II-cirpurrho.eps}
\end{center}
\caption{Quantum circuit representing the synthesis of a two-qubit state purifying the density matrix of a single-qubit system, by unitary operations performed on a two-qubit register. The gate array \qq A'' synthesizes the amplitude modules, while the gate array \qq B'' synthesizes the phase.}\label{sd-II-cirpurrho}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=7.6cm]{sd-II-diastpurrho.eps}
\end{center}
\caption{Complete (upper) and simplified (lower) diagrams of states representing the purification of the density matrix of a single-qubit system. Starting from the input state $\ket{00}$, the active information is processed along thick lines, while thin lines correspond to absence of information.}\label{sd-II-diastpurrho}
\end{figure}
After the application of the \qq A'' gate array of Figures \ref{sd-II-cirpurrho} and \ref{sd-II-diastpurrho}, we obtain the state:
\begin{equation}
\ket{\Psi} = \left[%
\begin{array}{c}
\cos \theta_1 \\
0 \\
\cos \theta_2 \, \sin \theta_1 \\
\sin \theta_2 \, \sin \theta_1 \\
\end{array}%
\right].
\end{equation}
Purification is then achieved by applying the \qq B'' gate array, where the controlled-phase gate has phase $\varphi = \arg \,(\rho_{12})$. Thus, the pure state $\ket\Psi$ of equation (\ref{eq-sd-II-purrho}) is generated by two-qubit unitary operations.
\subsection{The Kraus representation}
The most general time-evolution of a density matrix can be described by the Kraus representation (\cite{Kra83} or see, \textit{e.g.}, \cite{qcbook2}, pages 278-284).
This representation is schematically illustrated in Figure \ref{sd-II-evolkraus}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=6cm]{sd-II-evolkraus.eps}
\end{center}
\caption{Kraus representation for the time-evolution of a general density matrix. The overall number of qubits is preserved during the evolution process represented by the unitary matrix $U$: $n^\prime + m^\prime = n + m$.}\label{sd-II-evolkraus}
\end{figure}
For the sake of simplicity, we consider an evolving quantum system constituted by a single qubit of density matrix $\rho$, and by a single ancillary qubit on which the partial trace operation is performed. This choice causes no loss of generality, as the Kraus representation can easily be generalized for main and auxiliary systems of higher dimensionality by appropriate immersions and permutations of diagrams of states \cite{FeStsdI08}.
The processing of information involved in the time-evolution is illustrated by the diagram of states in Figure \ref{sd-II-diastevkr}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=8.4cm]{sd-II-diastevkraus.eps}
\end{center}
\caption{Quantum circuit (left) and diagram of states (right) representing the Kraus time-evolution of a single-qubit main quantum system, supported by a single-qubit ancillary system in the most significant position. In the diagram, the active lines of the unitary matrix $U$ determine the evolution of the overall system. A similar representation can be obtained for the ancillary qubit in the least significant position, by simply permuting the qubits constituting the system.}\label{sd-II-diastevkr}
\end{figure}
According to Figure \ref{sd-II-evolkraus}, the overall number of qubits is preserved during the evolution process, and in the simplest case considered here we have $n = m = n^\prime = m^\prime = 1$.
The initial density matrix $\rho^{All}$ is given by:
\begin{equation}
\rho^{All}=\ket{0}\bra{0}\otimes\rho.
\end{equation}
The evolution of the overall density matrix $\rho^{All}$ is then obtained by applying the unitary matrix $U$ appropriately divided into sub-matrices of dimension $2 \times 2$ and, subsequently, by partial tracing on the most significant qubit.
The final density matrix $\rho^\prime$ is:
$$
\rho^\prime=Tr_{MSB}\{U\,\rho^{All}\,U^\dag\}=
$$
$$= Tr_{MSB} \left\{
\left[\begin{array}{cc}A&B\\C&D\end{array}\right]\,
\rho^{All}\,
\left[\begin{array}{cc}A^\dag&C^\dag\\B^\dag&D^\dag\end{array}\right] \right\}=
$$
\begin{equation}\label{eq-sd-II-evolkraus1}
= A\,\rho \,A^\dag+C\,\rho \,C^\dag
\end{equation}
with the condition of trace preservation:
\begin{equation}\label{eq-sd-II-evolkraus1tr}
A^\dag A+C^\dag C=I.
\end{equation}
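Equations (\ref{eq-sd-II-evolkraus1}) and (\ref{eq-sd-II-evolkraus1tr}) can be verified numerically. In the following NumPy sketch (our own illustration), a random $4 \times 4$ unitary is generated by QR decomposition, an assumption made only for the example, and partitioned into the blocks $A$, $B$, $C$, $D$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4x4 unitary via QR decomposition (illustrative assumption),
# partitioned into 2x2 blocks; A and C act on the main qubit.
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(G)
A, C = U[:2, :2], U[2:, :2]

# Single-qubit state rho; ancillary qubit (MSB) prepared in |0>.
rho = np.array([[0.6, 0.2j], [-0.2j, 0.4]])
rho_all = np.kron(np.outer([1, 0], [1, 0]), rho)

# Evolve with U and trace out the most significant qubit.
evolved = U @ rho_all @ U.conj().T
rho_traced = np.einsum('ijik->jk', evolved.reshape(2, 2, 2, 2))

# Kraus form: rho' = A rho A^dag + C rho C^dag.
rho_kraus = A @ rho @ A.conj().T + C @ rho @ C.conj().T
```

The two expressions coincide, and the completeness relation $A^\dag A + C^\dag C = I$ holds by unitarity of $U$.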
\section{Single-Qubit Decoherence and Errors}\label{sec-sd-II-deco1q}
In any realistic physical implementation for quantum computers, quantum systems which process information must always be considered open, that is, never perfectly isolated from the environment. This gives rise to the well known phenomenon of decoherence, with which we here denote any quantum-noise process due to the unavoidable coupling of the main system to the environment (see, \textit{e.g.}, \cite{qcbook2}, pages 335-351).
In addition to decoherence, many sources of imprecision or perturbation have to be taken into account in all quantum information processing tasks, since any experimental implementation of quantum states and operations is unavoidably imperfect to some extent.
To the best of our knowledge, error models in the literature do not offer a complete and realistic description of the most general noise transformations for single-qubit systems.
In \cite{BeFeSt06} we considered all physically possible single-qubit errors: A general transformation of a single-qubit density matrix is determined by a set of parameters, with which we associated basic transformations, or quantum noise channels. Here we provide a detailed analysis and complete description of this single-qubit error model, representing all noise channels by means of quantum circuits, diagrams of states and geometrical visualizations of their action on pure quantum states.
\subsection{General transformation of a single-qubit density matrix}
It is well known that any quantum state:
\begin{equation}
\ket \Psi = \alpha \ket 0 + \beta \ket 1 = \left[
\begin{array}{c}
\alpha \\
\beta \\
\end{array}
\right], \sh |\alpha|^2 + |\beta|^2 = 1,
\end{equation}
can be rewritten in spherical coordinates:
\begin{equation}
\ket \Psi = \cos \frac{\theta}{2} \ket 0 + e^{i \phi }\sin \frac{\theta}{2} \ket 1 = \left[
\begin{array}{c}
\cos \frac{\theta}{2} \\
e^{i \phi }\sin \frac{\theta}{2} \\
\end{array}
\right]
\end{equation}
and represented by the Cartesian coordinates of the three-dimensional space embedding the Bloch sphere.
The quantum state $\ket \Psi$ can also be associated to its density matrix $\rho = \ket \Psi \bra \Psi$ which, in turn, can be expressed by means of Pauli operators $\mbox{\mathversion{bold}$\sigma$\mathversion{normal}}$:
\begin{equation}
\rho \equiv \frac{1}{2} \, [ I + \mbox{\mathversion{bold}$\lambda$\mathversion{normal}} \cdot \mbox{\mathversion{bold}$\sigma$\mathversion{normal}}] = \frac{1}{2} \left[
\begin{array}{cc}
1 + Z & X - i Y \\
X + i Y & 1 - Z \\
\end{array}
\right],
\sh \mbox{\mathversion{bold}$\lambda$\mathversion{normal}} = \left[
\begin{array}{c}
X \\
Y \\
Z \\
\end{array}
\right],
\end{equation}
where $\mbox{\mathversion{bold}$\lambda$\mathversion{normal}}$ is the vector of the Bloch sphere coordinates of the state $\ket \Psi$.
According to the Kraus representation, the most general evolution of a
single-qubit density matrix is given by:
\begin{equation}
\rho^\prime = \sum_{i} F_{i} \rho F_{i}^{\dagger}, \sh
\sum_{i} F_{i}^{\dagger} F_{i} = I,
\end{equation}
which leads to an affine transformation in the Bloch sphere coordinates of the state $\ket \Psi$:
\begin{equation}
\mbox{\mathversion{bold}$\lambda$\mathversion{normal}}^\prime = M \mbox{\mathversion{bold}$\lambda$\mathversion{normal}} + \mathbf{c},
\end{equation}
where $M$ is a real matrix of dimension $3 \times 3$ and $\mathbf{c}$ is a translation vector of dimension $3$.
Thus, twelve parameters are necessary and sufficient to characterize a generic
quantum-noise operation acting on a single-qubit system.
These twelve parameters cannot be chosen arbitrarily, since they are subject to the mathematical conditions of complete positivity of the quantum-noise transformations; that is, we always need to ensure the physical feasibility of the transformations determined by the parameters. This restriction may require a very difficult treatment if we rely only on analytical descriptions. On the contrary, by studying the parameters by means of equivalent quantum circuits, the mathematical conditions for completely positive transformations are inherently and straightforwardly satisfied, and the same method easily allows for extensions to higher-dimensional cases.
In order to develop descriptions by quantum circuits, we thus decompose the matrix $M$ into the product of one diagonal and two orthogonal matrices, $ M = O_1 \, D \, O_2^T ,$
and we associate the basic parameters with transformations of a general quantum state residing in the Bloch sphere. Three parameters correspond to
rotations of the sphere around the axes $x, y, z$; three parameters correspond to displacements of the sphere along the axes $x, y, z$; three parameters correspond to deformations of the sphere into an ellipsoid with $x$, $y$ or $z$ as symmetry axes. The last three parameters correspond to further rotations of the already deformed sphere, in order to obtain an ellipsoid with symmetry axis along arbitrary directions. Thus, we only need to illustrate in detail the first nine transformations, since deformations of the Bloch sphere into an ellipsoid with symmetry axis along an arbitrary direction can be obtained by appropriate compositions of the first nine transformations.
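In practice, the decomposition $M = O_1 \, D \, O_2^T$ can be computed with a singular value decomposition. A minimal NumPy sketch (the sample matrix is an arbitrary assumption for illustration):

```python
import numpy as np

# Decompose a real 3x3 matrix M as O1 @ D @ O2.T via the SVD,
# i.e., orthogonal-diagonal-orthogonal.
M = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
O1, singvals, O2T = np.linalg.svd(M)  # O2T is already O2 transposed
D = np.diag(singvals)                 # diagonal deformation part
```

The factors $O_1$ and $O_2$ are orthogonal rotations (possibly improper), while $D$ collects the deformation rates along the principal axes.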
\subsection{Rotations of the Bloch sphere around the coordinate axes}
Rotation errors are the simplest case of single-qubit errors.
They are unitary transformations and can be obtained immediately as special cases of single-qubit unitary gates, whose diagrams of states are described in detail in \cite{FeStsdI08}.
For the reader's convenience, the corresponding transformations of the Bloch sphere coordinates are:
$$
R_{x} = \left[
\begin{array}{cc}
\cos \frac{\theta}{2} & - i \sin \frac{\theta}{2} \\
- i \sin \frac{\theta}{2} & \cos \frac{\theta}{2}
\end{array}\right]
\mh
\left\{
\begin{array}{l}
X^\prime=X\\
Y^\prime=\cos\theta \, Y - \sin \theta \, Z\\
Z^\prime=\sin\theta \, Y + \cos \theta \, Z
\end{array}
\right.
$$
$$
R_{y} = \left[
\begin{array}{cc}
\cos \frac{\theta}{2} & - \sin \frac{\theta}{2} \\
\sin \frac{\theta}{2}& \cos \frac{\theta}{2}
\end{array}\right]
\mh
\left\{
\begin{array}{l}
X^\prime=\cos\theta \, X - \sin \theta \, Z\\
Y^\prime=Y\\
Z^\prime=\sin\theta \, X + \cos \theta \, Z
\end{array}
\right.
$$
\begin{equation}
R_{z} = \left[
\begin{array}{cc}
\cos \frac{\theta}{2} - i \sin \frac{\theta}{2} & 0 \\
0 & \cos \frac{\theta}{2} + i \sin \frac{\theta}{2}
\end{array}\right]
\sh
\left\{
\begin{array}{l}
X^\prime=\cos\theta \, X - \sin \theta \, Y\\
Y^\prime=\sin\theta \, X + \cos \theta \, Y\\
Z^\prime=Z
\end{array}
\right.
\end{equation}
where the error parameter $\theta$ denotes the rotation angle, that is, the intensity of the rotation error applied to the original single-qubit quantum state.
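The correspondence between a rotation gate and its Bloch-coordinate transformation can be verified directly. The following NumPy sketch (our own check; the sample Bloch vector is arbitrary) conjugates a generic density matrix with $R_z$:

```python
import numpy as np

theta = 0.5
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# R_z from the text: diag(cos(theta/2) - i sin(theta/2),
#                         cos(theta/2) + i sin(theta/2)).
Rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Conjugate an arbitrary density matrix and read off the new
# Bloch coordinates via Tr(rho sigma_i).
X, Y, Z = 0.3, 0.1, 0.4
rho = 0.5 * (np.eye(2) + X * sx + Y * sy + Z * sz)
rho_p = Rz @ rho @ Rz.conj().T
Xp = np.real(np.trace(rho_p @ sx))
Yp = np.real(np.trace(rho_p @ sy))
Zp = np.real(np.trace(rho_p @ sz))
```

The resulting coordinates match the $R_z$ transformation rules above: a rotation by $\theta$ around the $z$ axis.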
\subsection{Deformations of the Bloch sphere along the coordinate axes}
Deformation errors are non-unitary; consequently, they require a physically feasible representation involving a unitary operation in an extended quantum system.
The deformation of the Bloch sphere into an ellipsoid centered at the origin of the sphere with axis directed along $x, y$ or $z$ can be respectively implemented by means of the bit flip, bit-phase flip and phase flip channels. The corresponding quantum circuits are illustrated in Figure \ref{sd-II-deformch} (left), where the environment is represented by a single ancillary qubit. The processing of information caused by each deformation channel is clearly illustrated in the corresponding diagrams of states in Figure \ref{sd-II-deformch} (right).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=9.4cm]{sd-II-deformch.eps}
\end{center}
\caption{Quantum circuits (left) and the corresponding diagrams of states (right) representing deformations of the Bloch sphere along the $x, y$ and $z$ axes. From top to bottom, the bit flip, bit-phase flip and phase flip channels. The ancillary qubit representing the environment is set to the initial state
$|\Psi\rangle= \cos\frac{\theta}{2}|0\rangle+\sin\frac{\theta}{2}|1\rangle$, with
$0\le\theta\le \pi$.}\label{sd-II-deformch}
\end{figure}
The Kraus operators of each channel can be easily derived from the corresponding diagram of states.
For the bit flip channel we have:
\begin{equation}
F_{1} = \left| \cos\frac{\theta}{2}\right| \, I \mh F_{2} = \left|\sin\frac{\theta}{2}\right| \, \sigma_{x}.
\end{equation}
For the bit-phase flip channel we have:
\begin{equation}
F_{1} = \left|\cos\frac{\theta}{2}\right| \, I \mh F_{2} = \left|\sin\frac{\theta}{2}\right| \, \sigma_{y}.
\end{equation}
Finally, for the phase flip channel we have:
\begin{equation}
F_{1} = \left|\cos\frac{\theta}{2}\right| I \mh F_{2} = \left|\sin\frac{\theta}{2}\right| \, \sigma_{z}.
\end{equation}
The Kraus operators of the three channels induce the following transformations in the Bloch sphere coordinates of the initial state:
\begin{equation}
\left\{
\begin{array}{l}
X^\prime=X\\
Y^\prime=\cos \theta \, Y\\
Z^\prime=\cos \theta \, Z
\end{array}
\right.
\mh
\left\{
\begin{array}{l}
X^\prime=\cos \theta \, X\\
Y^\prime=Y\\
Z^\prime=\cos \theta \, Z
\end{array}
\right.
\mh
\left\{
\begin{array}{l}
X^\prime=\cos \theta \, X\\
Y^\prime=\cos \theta \, Y\\
Z^\prime=Z
\end{array}
\right.
\end{equation}
These transformations can be finally visualized in the Bloch sphere representation (Figure \ref{app-gv-def}).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{Bitflip.eps}
\includegraphics[width=2.4cm]{Bitflip2.eps}
\\
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{Bitphflip.eps}
\includegraphics[width=2.4cm]{Bitphflip2.eps}
\\
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{Phflip.eps}
\includegraphics[width=2.4cm]{Phflip2.eps}
\end{center}
\caption{Graphic visualization of deformations of the Bloch sphere along the $x$, $y$ and $z$ axes by means of the bit flip channel, the bit-phase flip channel and the phase flip channel, respectively. The Bloch sphere (left) represents initial unperturbed states, while the ellipsoids (right) show how the Bloch sphere is affected by the noise channels, for increasing values of the error parameter ($\theta = 0, \frac{\pi}{4}, \frac{\pi}{3}$).}\label{app-gv-def}
\end{figure}
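The transformation rules of the deformation channels can be verified directly from their Kraus operators. A NumPy sketch for the bit flip channel (our own check; the Bloch vector chosen is arbitrary):

```python
import numpy as np

theta = 0.8
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Bloch coordinates of a single-qubit density matrix.
def bloch(rho):
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

# Bit flip channel Kraus operators, as given in the text.
F1 = abs(np.cos(theta / 2)) * I2
F2 = abs(np.sin(theta / 2)) * sx

# Arbitrary valid Bloch vector (|lambda| <= 1).
X, Y, Z = 0.3, -0.4, 0.5
rho = 0.5 * (I2 + X * sx + Y * sy + Z * sz)
rho_p = F1 @ rho @ F1.conj().T + F2 @ rho @ F2.conj().T
```

The output Bloch vector is $(X, \cos\theta\, Y, \cos\theta\, Z)$, since $\cos^2\frac{\theta}{2} - \sin^2\frac{\theta}{2} = \cos\theta$; the bit-phase flip and phase flip channels are checked analogously by replacing $\sigma_x$ with $\sigma_y$ or $\sigma_z$.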
\subsection{Displacements of the Bloch sphere along the coordinate axes}
As with deformations, displacement errors are non-unitary and, consequently, require a physically feasible representation involving a unitary operation in an extended quantum system.
The most important feature of all displacement channels is that they must perform at the same time a correlated deformation, so that the resulting matrix $\rho^\prime$ is still a density matrix representing a physically feasible state.
The transformations induced by displacements of the center of the Bloch sphere along the axes $x, y$ or $z$ can be implemented by means of the amplitude damping channel and its appropriate modifications to reverse or rotate the displacement direction\footnote{We remark that the amplitude damping channel actually describes the most general deformation errors: It performs the best possible deformation of the Bloch sphere in relation to the desired displacement of its center along the considered coordinate axis, as given by the high degree of tangency of the resulting ellipsoid to the initial Bloch sphere.}.
The quantum circuits for the standard and modified amplitude damping channels are illustrated in Figures \ref{sd-II-qcampldamp} and \ref{sd-II-displxy}(left), where the environment is again represented by a single ancillary qubit. The processing of information caused by the different instances of the amplitude damping channel is clearly illustrated by the diagrams of states in Figures \ref{sd-II-sdampldamp} and \ref{sd-II-displxy} (right).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=10.8cm]{sd-II-qcampldamp.eps}
\end{center}
\caption{Quantum circuits representing displacements of the Bloch sphere along the axis $z$. From left to right, the amplitude damping channel and its reversion to achieve the displacement in the positive and negative directions of the axis $z$, respectively.}\label{sd-II-qcampldamp}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=9.4cm]{sd-II-sdampldamp.eps}
\end{center}
\caption{Diagrams of states representing displacements of the Bloch sphere along the axis $z$. From top to bottom, from left to right: complete and simplified diagrams of states of the amplitude damping channel, and diagram of states of its reversion.}\label{sd-II-sdampldamp}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=10.6cm]{sd-II-displxy.eps}
\end{center}
\caption{Quantum circuits (left) and diagrams of states (right) representing displacements of the Bloch sphere along the axes $x$ and $y$. Circuits and diagrams are derived by rotating the displacement direction along which the amplitude damping channel and its reversion act. Rotations are obtained by applying the appropriate unitary matrices $U$ and $U^{\dagger}$, while \qq D'' denotes the gate array corresponding to the amplitude damping channel, as shown in Figures \ref{sd-II-qcampldamp} and \ref{sd-II-sdampldamp}.}\label{sd-II-displxy}
\end{figure}
The Kraus operators for each instance of the amplitude damping channel can be easily derived from the corresponding diagram of states.
For the standard amplitude damping channel we have:
\begin{equation}
F_0=
\left[
\begin{array}{cc}
1 & 0\\
0 & \cos\frac{\theta}{2}
\end{array}
\right], \mh F_1=
\left[
\begin{array}{cc}
0 & \sin\frac{\theta}{2} \\
0 & 0
\end{array}
\right].
\end{equation}
For the reversed amplitude damping channel, we have:
\begin{equation}
F_0=
\left[
\begin{array}{cc}
\cos\frac{\theta}{2} & 0\\
0 & 1
\end{array}
\right], \mh F_1=
\left[
\begin{array}{cc}
0 & 0 \\
\sin\frac{\theta}{2} & 0
\end{array}
\right].
\end{equation}
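Two basic properties of the standard amplitude damping channel can be checked numerically from the Kraus operators above: trace preservation, and the displacement of the center of the Bloch sphere along the positive $z$ axis. A NumPy sketch (our own check):

```python
import numpy as np

theta = 0.8
c, s = np.cos(theta / 2), np.sin(theta / 2)

# Standard amplitude damping Kraus operators, as given in the text.
F0 = np.array([[1, 0], [0, c]])
F1 = np.array([[0, s], [0, 0]])

# Trace preservation: F0^dag F0 + F1^dag F1 = I.
completeness = F0.T @ F0 + F1.T @ F1

# The maximally mixed state (Bloch vector at the origin) is displaced
# along the positive z axis by the standard channel.
rho = np.eye(2) / 2
rho_p = F0 @ rho @ F0.T + F1 @ rho @ F1.T
z_shift = rho_p[0, 0] - rho_p[1, 1]
```

The center of the sphere is shifted by $\sin^2\frac{\theta}{2}$; the reversed channel produces the same shift in the negative direction.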
To rotate the amplitude damping channel for displacements along the axes $x$ and $y$, we apply respectively the unitary transformations:
\begin{equation}
U_{(x)}=\frac{1}{\sqrt{2}}
\left[
\begin{array}{cc}
1 & \pm 1\\
\mp 1 & 1
\end{array}
\right], \mh U_{(y)}=\frac{1}{\sqrt{2}}
\left[
\begin{array}{cc}
1 & \pm i\\
\pm i & 1
\end{array}
\right].
\end{equation}
Thus we obtain the following Kraus operators for $x$ displacements:
$$
F_0=
\frac{1}{2}\left[
\begin{array}{cc}
1+\cos\frac{\theta}{2} & \pm (1-\cos\frac{\theta}{2})\\
\pm (1-\cos\frac{\theta}{2}) & 1+\cos\frac{\theta}{2}
\end{array}
\right],
$$
\begin{equation}
F_1=
\frac{1}{2}\left[
\begin{array}{cc}
\mp \sin\frac{\theta}{2} & \sin\frac{\theta}{2} \\
-\sin\frac{\theta}{2} & \pm \sin\frac{\theta}{2}
\end{array}
\right]
\end{equation}
and for $y$ displacements:
$$ F_0=
\frac{1}{2}\left[
\begin{array}{cc}
1+\cos\frac{\theta}{2} & \pm i (1-\cos\frac{\theta}{2})\\
\mp i (1-\cos\frac{\theta}{2}) & 1+\cos\frac{\theta}{2}
\end{array}
\right],
$$
\begin{equation}
F_1=
\frac{1}{2}\left[
\begin{array}{cc}
\pm i \sin\frac{\theta}{2} & \sin\frac{\theta}{2} \\
\sin\frac{\theta}{2} & \mp i \sin\frac{\theta}{2}
\end{array}
\right].
\end{equation}
The Kraus operators induce the following transformations in the Bloch sphere coordinates of the initial state, respectively for the negative and positive directions of the axes $x, y, z$:
$$
\left\{
\begin{array}{l}
X^\prime=\mp \sin^2 \frac{\theta}{2} + \cos^2 \frac{\theta}{2} \, X\\
Y^\prime=\cos \frac{\theta}{2} \, Y\\
Z^\prime=\cos \frac{\theta}{2} \, Z
\end{array}
\right.
\mh
\left\{
\begin{array}{l}
X^\prime=\cos \frac{\theta}{2} \, X\\
Y^\prime=\mp \sin^2 \frac{\theta}{2} + \cos^2 \frac{\theta}{2} \, Y\\
Z^\prime=\cos \frac{\theta}{2} \, Z
\end{array}
\right.
$$
\begin{equation}
\left\{
\begin{array}{l}
X^\prime=\cos \frac{\theta}{2} \, X\\
Y^\prime=\cos \frac{\theta}{2} \, Y\\
Z^\prime=\mp \sin^2 \frac{\theta}{2} + \cos^2 \frac{\theta}{2} \, Z
\end{array}
\right.
\end{equation}
These transformations can be finally visualized in the Bloch sphere representation in Figure \ref{app-gv-displ}.
\begin{figure}[!p]
\begin{center}
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{DisplX1.eps}
\includegraphics[width=2.4cm]{DisplX2.eps}
\includegraphics[width=2.4cm]{DisplX3.eps}
\\
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{Displ-X1.eps}
\includegraphics[width=2.4cm]{Displ-X2.eps}
\includegraphics[width=2.4cm]{Displ-X3.eps}
\\
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{DisplY1.eps}
\includegraphics[width=2.4cm]{DisplY2.eps}
\includegraphics[width=2.4cm]{DisplY3.eps}
\\
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{Displ-Y1.eps}
\includegraphics[width=2.4cm]{Displ-Y2.eps}
\includegraphics[width=2.4cm]{Displ-Y3.eps}
\\
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{DisplZ1.eps}
\includegraphics[width=2.4cm]{DisplZ2.eps}
\includegraphics[width=2.4cm]{DisplZ3.eps}
\\
\includegraphics[width=2.4cm]{SferaBloch.eps}
\includegraphics[width=2.4cm]{Displ-Z1.eps}
\includegraphics[width=2.4cm]{Displ-Z2.eps}
\includegraphics[width=2.4cm]{Displ-Z3.eps}
\end{center}
\caption{Graphic visualization of displacements of the Bloch sphere along the $x$, $y$ and $z$ axes. The Bloch sphere (left) represents initial unperturbed states, while the ellipsoids (right) show how the Bloch sphere is affected by the noise channel, for increasing values of the error parameter ($\theta = 0, \frac{\pi}{6},
\frac{\pi}{4}, \frac{\pi}{3}$).}\label{app-gv-displ}
\end{figure}
\subsection{The depolarizing channel}
After describing the complete set of basic single-qubit noise operations, we conclude this section by mentioning a well known and widely used single-qubit decoherence model: the depolarizing channel (see, \textit{e.g.}, \cite{qcbook2}, pages 340-343).
The depolarizing channel can be described by means of the quantum circuit in Figure \ref{sd-II-depolch} (left).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=10.4cm]{sd-II-depolch.eps}
\end{center}
\caption{Quantum circuit (left) and diagram of states (right) representing a generalized depolarizing channel. Depending on the choice of the parameters $|\alpha|, |\beta|$ and $|\gamma|$, several noise operations can be described as special cases of this channel.}\label{sd-II-depolch}
\end{figure}
The processing of information performed by the depolarizing channel is clearly illustrated in the diagram of states in Figure \ref{sd-II-depolch} (right).
The main single-qubit system is coupled with the environment, which is now represented by two ancillary qubits, set in the pure state:
\begin{equation}
\ket \Psi = \alpha |00\rangle + \beta |01\rangle + \gamma |10\rangle + \delta |11\rangle,
\end{equation}
with the normalization condition:
\begin{equation}
|\alpha|^2 + |\beta|^2 + |\gamma|^2 + |\delta|^2 = 1.
\end{equation}
The channel applies one of the operators $I, \sigma_x, \sigma_y$ or $\sigma_z$ to the least significant qubit, depending on whether the two most significant qubits are in one (or in a superposition) of the states $|00\rangle, |01\rangle, |10\rangle$ or $|11\rangle$.
The final state of the single-qubit system is obtained by tracing over the environmental qubits.
Depending on the choice of the parameters $|\alpha|, |\beta|$ and $|\gamma|$, several noise operations can be described as special cases of this channel (for instance, the bit flip or the phase flip channels).
The standard depolarizing channel is obtained when:
\begin{equation}
|\alpha|^2=\cos^2\theta, \sh |\beta|^2 =
|\gamma|^2 = |\delta|^2 = \frac{\sin^2\theta}{3}.
\end{equation}
In this case, the Kraus operators become:
$$
F^{(1)}= \cos \theta \, I, \mh
F^{(2)}=\frac{\sin\theta}{\sqrt3}
\, \sigma_x,
$$
\begin{equation}
F^{(3)}=\frac{\sin\theta}{\sqrt3} \, \sigma_y, \mh
F^{(4)}=\frac{\sin\theta}{\sqrt3} \, \sigma_z,
\end{equation}
and the Bloch sphere coordinates of the initial state are transformed as follows:
\begin{equation}
\left\{
\begin{array}{l}
X^\prime= \left(1-\frac{4}{3}\sin^2\theta\right) X\\
Y^\prime= \left(1-\frac{4}{3}\sin^2\theta\right) Y\\
Z^\prime= \left(1-\frac{4}{3}\sin^2\theta\right) Z
\end{array}
\right.
\end{equation}
Thus, the Bloch sphere is deformed into an ellipsoid,
centered at the origin of the Bloch sphere, whose axes are directed along
$x, y$ and $z$, the deformation rate being the same along each axis.
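The uniform shrink factor can be verified numerically from the Kraus operators. A NumPy sketch (our own check; the Bloch vector chosen is arbitrary):

```python
import numpy as np

theta = 0.6
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Kraus operators of the standard depolarizing channel, as in the text.
k = np.sin(theta) / np.sqrt(3)
F = [np.cos(theta) * I2, k * sx, k * sy, k * sz]

# Apply the channel to a generic state and compute the shrink factor.
X, Y, Z = 0.2, 0.3, -0.4
rho = 0.5 * (I2 + X * sx + Y * sy + Z * sz)
rho_p = sum(Fi @ rho @ Fi.conj().T for Fi in F)
shrink = 1 - 4 * np.sin(theta) ** 2 / 3
```

All three Bloch coordinates are multiplied by the same factor $1 - \frac{4}{3}\sin^2\theta$, so the sphere contracts isotropically toward the maximally mixed state.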
Thus, the depolarizing channel cannot be considered a general model to describe and study single-qubit decoherence effects, since it only includes a small subset of specific noise operations, even though it involves a higher-dimensional auxiliary system than the previously illustrated single-qubit noise channels.
\section{Conclusions and Future Developments}\label{sec-sd-II-concl}
We have explored the main processes involved in the evolution of general quantum systems by means of \textit{Diagrams of States}, a novel method to graphically represent and analyze how quantum information is elaborated during computations performed by quantum circuits.
We have offered complete and detailed descriptions by diagrams of states of partial trace operations, density matrix purification and evolution by Kraus operators, single-qubit decoherence and noise channels.
In our opinion, diagrams of states can be used as an auxiliary or as an alternative approach to standard methods, both to investigate and to conceive quantum computations. Analytical study and Feynman diagrams alone are often too compact to clearly visualize how quantum information is processed by computations. On the contrary, the dimension of the graphic representation of states grows exponentially with the dimension of the examined quantum system, thus offering a complete and detailed visualization of the computational process.
Diagrams of states appear to be most useful whenever the quantum operations to be analyzed are described by very sparse matrices, since only non-null entries of matrices are associated with diagram lines which contain active information. This way, the resulting diagrams show clearly and immediately the significant pattern along which quantum information is processed by the computation from input to output. Indeed, several quantum computations actually involve operations satisfying this requirement, and evidence of this is also provided by the processes illustrated in this paper.
Further computations are going to be explored by this graphic representation, among which quantum algorithms \cite{FeStsdIV} and entanglement-based practical applications \cite{FeStsdIII}.
\section*{Acknowledgments}
The authors wish to thank Samuel L. Braunstein
and Roberto Suardi for their kind contributions to the development and improvement of this paper.
Sara Felloni acknowledges support by ERCIM, as this work was partially carried out during the tenure of an ERCIM ``Alain Bensoussan'' Fellowship Programme.
\section{Introduction}
Affect recognition based on a subject’s facial expressions has been a topic of major research in the attempt to generate machines that can understand the way subjects feel, act and react.
The problem of affect analysis and recognition constitutes a key issue in behavioural modelling, human computer/machine interaction and affective computing.
There are a number of related applications spread across a variety of fields, such as medicine, health monitoring, driver fatigue monitoring, e-learning, marketing, entertainment, lie detection and law \cite{acharya2018automated,kim2016deep,nasser2019artificial,greene2016survey,zepf2020driver}.
In the past, due to the unavailability of large amounts of data captured in real-life situations, research mainly focused on controlled environments. Recently, however, social media platforms have been widely used and large amounts of data have become available. Moreover, deep learning has emerged as a means to solve visual analysis and recognition problems. Thus, major research effort has been devoted during the last few years to the development and use of deep learning techniques and deep neural networks \cite{lecun2015deep,goodfellow2016deep} in various applications, including affect recognition in-the-wild, i.e., in unconstrained environments.
Moreover, apart from affect analysis and recognition, generation of facial affect is of great significance, in many real life applications, such as for synthesis of affect on avatars that interact with humans, in computer games, in augmented and virtual environments, in educational and learning contexts \cite{thies2016face2face,thies2018headon,pham2018generative,pumarola2018ganimation}. The ABAW Workshop exploits these advances and makes significant contributions for affect analysis, recognition and synthesis in-the-wild.
Ekman \cite{ekman2002facial} was the first to systematically study human facial expressions. His study categorizes the prototypical facial expressions, apart from neutral expression, into six classes representing anger, disgust, fear, happiness, sadness and surprise. Furthermore, facial expressions are related to specific movements of facial muscles, called Action Units (AUs). The Facial Action Coding System (FACS) was developed, in which facial changes are described in terms of AUs \cite{darwin1998expression}.
Apart from the above categorical definition of facial expressions and related emotions, in the last few years there has been great interest in dimensional emotion representations, which are of great interest in human computer interaction and human behaviour analysis. Dimensional emotion representations are used to tag emotional states in continuous mode, usually in terms of the arousal and valence dimensions, i.e. in terms of how active or passive, positive or negative is the human behaviour under analysis \cite{frijda1986emotions,whissel1989dictionary,russell1978evidence}.
The third ABAW Competition, to be held in conjunction with the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2022 is a continuation of the first \cite{kollias2020analysing} and second \cite{kollias2021analysing} ABAW Competitions held in conjunction with the IEEE Conference on Face and Gesture Recognition (IEEE FG) 2020 and with the International Conference on Computer Vision (ICCV) 2021, respectively, which targeted dimensional (in terms of valence and arousal) \cite{deng2020multitask, li2021technical, zhang2020m, do2020affective,chen2017multimodal,weichi,deng2021towards,zhang2021prior,vu2021multitask,wang2021multi,zhang2021audio,xie2021technical,jin2021multi,antoniadis2021audiovisual,oh2021causal}, categorical (in terms of the basic expressions) \cite{kuhnke2020two,gera2020affect,dresvyanskiy2020audio,youoku2020multi,liu2020emotion,gera2021affect,mao2021spatial} and facial action unit analysis and recognition \cite{pahl2020multi,ji2020multi,kollias2021distribution,han2016incremental,kollias2019face,deng2020fau,saito2021action,vu2021multitask}. The third ABAW Competition contains four Challenges, which are based on the same in-the-wild database: (i) the uni-task Valence-Arousal Estimation Challenge; (ii) the uni-task Expression Classification Challenge (for the 6 basic expressions plus the neutral state plus the 'other' category that denotes expressions/affective states other than the 6 basic ones); (iii) the uni-task Action Unit Detection Challenge (for 12 action units); (iv) the Multi-Task Learning Challenge (for joint learning and prediction of valence-arousal, 8 expressions -6 basic plus neutral plus 'other'- and 12 action units). These Challenges constitute a significant step forward when compared to previous events.
In particular, they use the Aff-Wild2 \cite{kollias2021analysing,kollias2020analysing,kolliasexpression,kollias2021affect,kollias2018aff2,kollias2018multi,kollias2021distribution,kollias2019face,kollias2018deep,zafeiriou,zafeiriou1}, the first comprehensive benchmark for all three affect recognition tasks in-the-wild: the Aff-Wild2 database extends the Aff-Wild \cite{kollias2018deep,zafeiriou,zafeiriou1}, with more videos and annotations for all behavior tasks.
The remainder of this paper is organised as follows. We introduce the Competition corpora in Section \ref{corpora}, the Competition evaluation metrics in Section \ref{metrics}, the developed baseline, along with the obtained results in Section \ref{baseline}, before concluding in Section \ref{conclusion}.
\section{Competition Corpora}\label{corpora}
The third Affective Behavior Analysis in-the-wild (ABAW) Competition relies on the Aff-Wild2 database, which is the first ever database annotated for all three main behavior tasks: valence-arousal estimation, action unit detection and expression classification. These three tasks constitute the basis of the four Challenges.
In all Challenges, the Aff-Wild2 database is split into training, validation and test sets. At first, the training and validation sets (i.e., the videos), along with their corresponding annotations, are made available to the participants, so that they can develop and test their own methodologies. Furthermore, to facilitate training, especially for participants who do not have access to face detection/tracking algorithms, we provide bounding boxes and landmarks for the face(s) in the videos (we also provide the aligned faces). At a later stage, the test set without annotations will be given to the participants; again, we will provide bounding boxes, landmarks and aligned faces.
In the following, we provide a short overview of each Challenge's dataset and refer the reader to the original work for a more complete description. Finally, we describe the pre-processing steps that we carried out for cropping and aligning the images of Aff-Wild2. The cropped and aligned images have been utilized in our baseline experiments.
\subsection{Valence-Arousal Estimation Challenge}
This Challenge's corpora include $567$ videos in Aff-Wild2 that contain annotations in terms of valence and arousal. Sixteen of these videos display two subjects, both of which have been annotated. In total, $2,816,832$ frames, with $455$ subjects, $277$ of which are male and $178$ female, have been annotated by four experts using the method proposed in \cite{cowie2000feeltrace}. Valence and arousal values range continuously in $[-1,1]$.
\subsection{Expression Recognition Challenge}
This Challenge's corpora include $548$ videos in Aff-Wild2 that contain annotations in terms of the 6 basic expressions, plus the neutral state, plus a category 'other' that denotes expressions/affective states other than the 6 basic ones. Seven of these videos display two subjects, both of which have been annotated. In total, $2,603,921$ frames, with $431$ subjects, $265$ of which are male and $166$ female, have been annotated by seven experts in a frame-by-frame basis.
\subsection{Action Unit Detection Challenge}
This Challenge's corpora include $547$ videos that contain annotations in terms of 12 AUs, namely AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25 and AU26. Seven of these videos display two subjects, both of which have been annotated. In total, $2,603,921$ frames, with $431$ subjects, $265$ of which are male and $166$ female, have been annotated in a semi-automatic procedure (that involves manual and automatic annotations). The annotation has been performed in a frame-by-frame basis. Table \ref{au_distr} shows the names of the twelve annotated action units and the actions they are associated with.
\begin{table}[h]
\centering
\caption{The twelve AUs annotated in Aff-Wild2 and their associated actions}
\label{au_distr}
\begin{tabular}{|c|c|}
\hline
Action Unit \# & Action \\ \hline
\hline
AU 1 & inner brow raiser \\ \hline
AU 2 & outer brow raiser \\ \hline
AU 4 & brow lowerer \\ \hline
AU 6 & cheek raiser \\ \hline
AU 7 & lid tightener \\ \hline
AU 10 & upper lip raiser \\ \hline
AU 12 & lip corner puller \\ \hline
AU 15 & lip corner depressor \\ \hline
AU 23 & lip tightener \\ \hline
AU 24 & lip pressor \\ \hline
AU 25 & lips part \\ \hline
AU 26 & jaw drop \\ \hline
\end{tabular}
\end{table}
\subsection{Multi-Task-Learning Challenge}
For this Challenge's corpora, we have created a static version of the Aff-Wild2 database, named s-Aff-Wild2, which contains specific frames/images selected from Aff-Wild2.
In total, 172,360 images are used that contain annotations in terms of valence-arousal; 6 basic expressions, plus the neutral state, plus the 'other' category; 12 action units (as described in the previous subsections).
\subsection{Aff-Wild2 Pre-Processing: Cropped \& Cropped-Aligned Images} \label{pre-process}
At first, all videos are split into independent frames. Then they are passed through the RetinaFace detector \cite{deng2020retinaface} so as to extract, for each frame, face bounding boxes and 5 facial landmarks. The images were cropped according to the bounding box locations and then provided to the participating teams.
The 5 facial landmarks (two eyes, nose and two mouth corners) were used to perform similarity transformation. The resulting cropped and aligned images were additionally provided to the participating teams. Finally, the cropped and aligned images were utilized in our baseline experiments, described in Section \ref{baseline}.
All cropped and cropped-aligned images were resized to $112 \times 112 \times 3$ pixel resolution and their intensity values were normalized to $[-1,1]$.
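The final resizing and normalization step can be sketched as follows. This is illustrative only: the nearest-neighbour resampling and the uint8 input format are our assumptions, not details stated above.

```python
import numpy as np

def preprocess(img):
    """Resize a cropped face image to 112x112x3 (nearest neighbour, for
    illustration; the actual pipeline may use proper interpolation) and
    normalise uint8 intensities from [0, 255] to [-1, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(112) * h // 112          # source row indices
    cols = np.arange(112) * w // 112          # source column indices
    resized = img[rows][:, cols].astype(np.float32)
    return resized / 127.5 - 1.0              # map [0, 255] -> [-1, 1]

# Hypothetical cropped face of arbitrary size
face = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)
x = preprocess(face)
print(x.shape, float(x.min()), float(x.max()))
```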
\section{Evaluation Metrics Per Challenge}\label{metrics}
Next, we present the metrics that will be used for assessing the performance of the developed methodologies of the participating teams in each Challenge.
\subsection{Valence-Arousal Estimation Challenge}
\noindent The performance measure is the average between the Concordance Correlation Coefficient (CCC) of valence and arousal. CCC evaluates the agreement between two time series (e.g., all video annotations and predictions) by scaling their correlation coefficient with their mean square difference. CCC takes values in the range $[-1,1]$; high values are desired. CCC is defined as follows:
\begin{equation} \label{ccc}
\rho_c = \frac{2 s_{xy}}{s_x^2 + s_y^2 + (\bar{x} - \bar{y})^2},
\end{equation}
\noindent
where $s_x$ and $s_y$ are the variances of all video valence/arousal annotations and predicted values, respectively, $\bar{x}$ and $\bar{y}$ are their corresponding mean values and $s_{xy}$ is the corresponding covariance value.
Therefore, the evaluation criterion for the Valence-Arousal Estimation Challenge is:
\begin{equation} \label{va}
\mathcal{P}_{VA} = \frac{\rho_a + \rho_v}{2}
\end{equation}
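An illustrative computation of the CCC and of $\mathcal{P}_{VA}$ follows (a sketch, not the official evaluation code; the toy annotation values are made up):

```python
import numpy as np

def ccc(x, y):
    """Concordance Correlation Coefficient between annotations x and
    predictions y, following Eq. (1): 2*s_xy / (s_x^2 + s_y^2 + (mx - my)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))   # covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Toy valence/arousal sequences (hypothetical values in [-1, 1])
v_true, v_pred = [0.1, -0.2, 0.5], [0.1, -0.2, 0.5]
a_true, a_pred = [0.0, 0.4, -0.1], [0.1, 0.3, -0.2]
print(ccc(v_true, v_pred))                            # perfect agreement -> 1.0
print(0.5 * (ccc(v_true, v_pred) + ccc(a_true, a_pred)))   # P_VA
```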
\subsection{Expression Recognition Challenge}\label{evaluation}
\noindent The performance measure is the average F1 Score across all 8 categories (i.e., macro F1 Score). The $F_1$ score is a weighted average of the recall (i.e., the ability of the classifier to find all the positive samples) and precision (i.e., the ability of the classifier not to label as positive a sample that is negative). The $F_1$ score takes values in the range $[0,1]$; high values are desired. The $F_1$ score is defined as:
\begin{equation} \label{f1}
F_1 = \frac{2 \times precision \times recall}{precision + recall}
\end{equation}
Therefore, the evaluation criterion for the Expression Recognition Challenge is:
\begin{equation} \label{expr}
\mathcal{P}_{EXPR} = \frac{\sum_{expr} F_1^{expr}}{8}
\end{equation}
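A sketch of the macro $F_1$ computation over the 8 expression categories (toy labels for illustration; returning $0$ for a class with no true positives is our convention, not stated above):

```python
def f1_per_class(y_true, y_pred, cls):
    """F1 score of one class from per-frame integer labels."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0                       # convention when precision/recall undefined
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred, n_classes=8):
    """P_EXPR: unweighted mean of per-class F1 over the 8 expression categories."""
    return sum(f1_per_class(y_true, y_pred, c) for c in range(n_classes)) / n_classes

print(macro_f1([0, 1, 2, 0], [0, 1, 1, 0]))
```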
\subsection{Action Unit Detection Challenge}\label{evaluation2}
The performance measure is the average F1 Score across all 12 AUs (i.e., macro F1 Score).
Therefore, the evaluation criterion for the Action Unit Detection Challenge is:
\begin{equation} \label{au}
\mathcal{P}_{AU} = \frac{\sum_{au} F_1^{au}}{12}
\end{equation}
\subsection{Multi-Task-Learning Challenge}\label{mtl}
The performance measure is the sum of: the average CCC of valence and arousal; the average F1 Score of the 8 expression categories; the average F1 Score of the 12 action units (as defined above).
Therefore, the evaluation criterion for the Multi-Task-Learning Challenge is:
\begin{align} \label{mtll}
\mathcal{P}_{MTL} &= \mathcal{P}_{VA} + \mathcal{P}_{EXPR} + \mathcal{P}_{AU} \nonumber \\
&= \frac{\rho_a + \rho_v}{2} + \frac{\sum_{expr} F_1^{expr}}{8} + \frac{\sum_{au} F_1^{au}}{12}
\end{align}
\section{Baseline Networks and Results} \label{baseline}
All baseline systems rely exclusively on existing open-source machine learning toolkits to ensure the reproducibility of the results. All systems have been implemented in TensorFlow; training time was around six hours on a Titan X GPU, with a learning rate of $10^{-4}$ and with a batch size of 256.
In this Section, we first describe the baseline systems developed for each Challenge and then we report their obtained results.
\begin{table*}[h]
\caption{Valence-Arousal Challenge Results on Aff-Wild2's validation set; evaluation criterion is the mean CCC of valence-arousal}
\label{comparison_sota_va}
\centering
\begin{tabular}{ |c||c|c|c| }
\hline
\multicolumn{1}{|c||}{\begin{tabular}{@{}c@{}} Baseline \end{tabular}} &
\multicolumn{1}{c|}{\begin{tabular}{@{}c@{}} CCC-Valence \end{tabular}} &
\multicolumn{1}{c|}{\begin{tabular}{@{}c@{}} CCC-Arousal\end{tabular}}
&
\multicolumn{1}{c|}{\begin{tabular}{@{}c@{}} $\mathcal{P}_{VA}$ \end{tabular}}
\\
\hline
\hline
ResNet50
& \begin{tabular}{@{}c@{}} 0.31 \end{tabular}
& \begin{tabular}{@{}c@{}} 0.17 \end{tabular} & 0.24 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[h]
\caption{Expression Challenge Results on Aff-Wild2's validation set; evaluation criterion is the average F1 Score of 8 expressions}
\label{comparison_sota_expr}
\centering
\scalebox{1.}{
\begin{tabular}{ |c||c| }
\hline
\multicolumn{1}{|c||}{\begin{tabular}{@{}c@{}} Baseline \end{tabular}} &
\multicolumn{1}{c|}{\begin{tabular}{@{}c@{}} $\mathcal{P}_{EXPR}$ \end{tabular}}
\\
\hline
\hline
VGGFACE & 0.23 \\
\hline
\end{tabular}
}
\end{table*}
\begin{table*}[h]
\caption{Action Unit Challenge Results on Aff-Wild2's validation set; evaluation criterion is the average F1 Score of 12 AUs}
\label{comparison_sota_au}
\centering
\scalebox{1.}{
\begin{tabular}{ |c||c| }
\hline
\multicolumn{1}{|c||}{\begin{tabular}{@{}c@{}} Baseline \end{tabular}} &
\multicolumn{1}{c|}{\begin{tabular}{@{}c@{}} $\mathcal{P}_{AU}$ \end{tabular}}
\\
\hline
\hline
VGGFACE & 0.39 \\
\hline
\end{tabular}
}
\end{table*}
\begin{table*}[h]
\caption{Multi-Task-Learning Challenge Results on Aff-Wild2's validation set; evaluation criterion is the sum of each task's independent performance metric}
\label{comparison_sota_mtl}
\centering
\scalebox{1.}{
\begin{tabular}{ |c||c| }
\hline
\multicolumn{1}{|c||}{\begin{tabular}{@{}c@{}} Baseline \end{tabular}} &
\multicolumn{1}{c|}{\begin{tabular}{@{}c@{}} $\mathcal{P}_{MTL}$ \end{tabular}}
\\
\hline
\hline
VGGFACE & 0.30 \\
\hline
\end{tabular}
}
\end{table*}
\subsection{Baseline Systems}
\paragraph{Valence-Arousal Estimation Challenge}
The baseline network is a 50-layer ResNet, pre-trained on ImageNet (ResNet50), with a (linear) output layer that gives the final estimates for valence and arousal.
\paragraph{Expression Recognition Challenge}
The baseline network is a VGG16 network with fixed (i.e., non-trainable) convolutional weights (only the 3 fully connected layers were trainable), pre-trained on the VGGFACE dataset and with an output layer equipped with softmax activation function which gives the 8 expression predictions.
\paragraph{Action Unit Detection Challenge}
The baseline network is a VGG16 network with fixed convolutional weights (only the 3 fully connected layers were trained), pre-trained on the VGGFACE dataset and with an output layer equipped with sigmoid activation function which gives the 12 action unit predictions.
\paragraph{Multi-Task-Learning Challenge}
The baseline network is a VGG16 network with fixed convolutional weights (only the 3 fully connected layers were trained), pre-trained on the VGGFACE dataset. The output layer consists of 22 units: 2 linear units that give the valence and arousal predictions; 8 units equipped with softmax activation function that give the expression predictions; 12 units equipped with sigmoid activation function that give the action unit predictions.
\subsection{Results}
Table \ref{comparison_sota_va} presents the CCC evaluation of valence and arousal predictions on the Aff-Wild2 validation set, of the baseline network (ResNet50).
Table \ref{comparison_sota_expr} presents the performance, in the Expression Classification Challenge, on the validation set of Aff-Wild2, of the baseline network (VGGFACE).
Table \ref{comparison_sota_au} presents the performance, in the Action Unit Detection Challenge, on the validation set of Aff-Wild2, of the baseline network (VGGFACE).
Table \ref{comparison_sota_mtl} presents the performance, in the Multi-Task-Learning Challenge, on the validation set of Aff-Wild2, of the baseline network (VGGFACE).
\section{Conclusion}\label{conclusion}
In this paper we have presented the third Affective Behavior Analysis in-the-wild Competition (ABAW) 2022 to be held in conjunction with IEEE CVPR 2022. This Competition followed the first and second ABAW Competitions held in conjunction with IEEE FG 2020 and ICCV 2021, respectively. This Competition comprises four Challenges targeting: i) uni-task valence-arousal estimation, ii) uni-task expression classification, iii) uni-task action unit detection and iv) multi-task-learning. The database utilized for this Competition has been derived from Aff-Wild2, the first large-scale database annotated for all three behavior tasks.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\label{sec:intro}
One approach to investigate the physical processes in active galactic nuclei (AGN) at scales not
reachable by present-day telescopes is to study the correlation between radiative energy in different frequencies. \citet{arshakian10} (hereafter A10) investigated the radio-optical correlation between the VLBA core emission at 15\,GHz and the optical nuclear emission at 5100\,\AA\ for a statistically complete sample of 135 compact jets to test a single production mechanism for radio and optical continuum emission on scales of submilliarcseconds. The present study is an extension of that research with a sample of 250 compact extragalactic radio sources at 15\,GHz compiled by \citet{kovalev05}, which includes 135 compact AGN from the flux-density limited MOJAVE-1 sample \citep{lister09}. The sample comprises 188 quasars, 36 BL Lacs, 20 radio galaxies, and 6 sources with no optical identification.
The majority of AGN in the sample have relativistic jets oriented close to the line of sight, observed with an unprecedented resolution of 0.5 milliarcseconds at 15\,GHz. Note that this sample is not complete with respect to the limiting radio flux.
\begin{table*}[!t]\centering
\setlength{\tabnotewidth}{1.0\textwidth}
\setlength{\tabcolsep}{1.1\tabcolsep}
\tablecols{11}
\caption{Kendall's $\tau$ correlation analysis between radio and optical luminosities \tabnotemark{a}}
\label{tab:1}
\begin{tabular}{lllcccccccc}
\toprule
& & & \multicolumn{2}{c}{All} & & \multicolumn{2}{c}{Quasars} & & \multicolumn{2}{c}{BL Lac}\\
\cmidrule{4-5}\cmidrule{7-8}\cmidrule{10-11}
A1 & A2 & A3 & $\tau$ & $P$ & & $\tau$ & $P$ && $\tau$ & $P$ \\
\midrule
$L_{5100}$ & $ L_{\rm VLBA}$ & z & \textbf{ 0.259 } & \textbf{ 3.95E-14 } & & \textbf{ 0.237 } & \textbf{ 7.67E-10 } & & 0.140 & 0.173 \\
$L_{5100}$ & $ L_{\rm un}$ & z & \textbf{ 0.242 } & \textbf{ 6.14E-12 } & & \textbf{ 0.198 } & \textbf{ 9.02E-07 } & & 0.150 & 0.149 \\
$L_{5100}$ & $ L_{\rm jet}$ & z & \textbf{ 0.246 } & \textbf{ 2.07E-12 } & & \textbf{ 0.218 } & \textbf{ 6.53E-08 } & & \textbf{ 0.389 } & \textbf{ 9E-04 } \\
\bottomrule
\tabnotetext{a}{The columns are arranged as follows: A1 and A2 are the independent variables for which the Kendall's $\tau$ correlation analysis is performed, and A3 is the dependent variable, in this case the redshift. ``All'' refers to the 233 sources in the sample; the 181 quasars and 31 BL Lacs were also analyzed separately. $\tau$ is the correlation coefficient and $P$ is the probability of a chance correlation. The correlations considered to be significant are marked in bold face.}
\end{tabular}
\end{table*}
\begin{figure*}[!t]
\includegraphics[width=0.45\textwidth]{Lopt_LVLBA}%
\hspace*{\columnsep}%
\includegraphics[width=0.45\textwidth]{Lopt_Ljet}%
\caption{\small{The total radio luminosity of the VLBA jet at 15\,GHz against optical continuum luminosity at 5100\,\AA\, (left panel), and radio luminosity of the jet (unresolved core subtracted) against optical continuum luminosity at 5100\,\AA\ (right panel) for quasars, BL Lacs and radio galaxies.}}
\label{fig:1}
\end{figure*}
\section{Radio-Optical Correlations: analysis and summary}
\label{sec:results}
The total luminosity of the VLBA component ($L_{\rm VLBA}$), the unresolved core ($L_{\rm un}$) and the jet luminosity ($L_{\rm jet}$) at 15 GHz were estimated as described in A10 \S~3. Optical nuclear luminosities at 5100 \AA\,($L_{5100}$) corrected for stellar contribution (Equation~(10) in A10) were estimated for 233 AGN of our sample from a homogeneous calibration using the B-band in the standard Johnson's photometric system.
The partial Kendall's $\tau$ test \citep{akritas96} is used to check the correlation between the radio luminosities of the unresolved core and the jet and the optical nuclear luminosity. We consider the correlations to be significant for the samples of all 233 AGN and 181 quasars if the chance probability of the correlation $P<0.02$ (or confidence level $>98\,\%)$, and $P<0.05$ (or confidence level $>95\,\%)$ for the sample of 31 BL Lacs.
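As an illustration of the partial correlation coefficient itself (the Akritas--Siebert test additionally provides the significance level; the first-order partial-$\tau$ formula and the toy luminosity values below are assumptions made for this sketch):

```python
from itertools import combinations
import math

def kendall_tau(x, y):
    """Plain Kendall tau-a (no tie correction, for illustration)."""
    n = len(x)
    s = 0
    for i, j in combinations(range(n), 2):
        d = (x[i] - x[j]) * (y[i] - y[j])
        s += 1 if d > 0 else (-1 if d < 0 else 0)   # concordant minus discordant
    return 2.0 * s / (n * (n - 1))

def partial_kendall_tau(x, y, z):
    """First-order partial tau of x and y, controlling for z.
    Undefined when |tau_xz| = 1 or |tau_yz| = 1 (zero denominator)."""
    txy, txz, tyz = kendall_tau(x, y), kendall_tau(x, z), kendall_tau(y, z)
    return (txy - txz * tyz) / math.sqrt((1 - txz**2) * (1 - tyz**2))

# Hypothetical log-luminosities with redshift z as the controlled variable
L_opt = [44.1, 44.9, 45.3, 46.0]
L_rad = [43.0, 44.5, 43.8, 45.1]
z = [2.0, 0.3, 1.2, 0.8]
print(partial_kendall_tau(L_opt, L_rad, z))
```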
The outcome of the statistical analysis is summarized in Table~\ref{tab:1} and they are in agreement with the results reported in A10. There is a significant positive correlation for 233 AGN between optical nuclear emission and radio emission originated in the jet at milliarcseconds scales (Figure~\ref{fig:1}; Table~\ref{tab:1}). For quasars, correlations hold also between optical nuclear luminosity and radio luminosities of the unresolved core (at sub-milliarcseconds scales) and the whole jet. For BL Lacs, the optical luminosity correlates positively only with the jet luminosity at the high confidence level of $\sim\,99.9\%$ (Figure~\ref{fig:1}).
The dispersion of the radio-optical correlation can be caused by non-simultaneous observations, distributions of intrinsic luminosities, and Doppler factors. The larger dispersion for BL Lacs could be a result of stronger variability in the radio and optical bands as well as a wider range of intrinsic luminosities, while for radio galaxies the dimming of optical continuum emission by an obscuring dusty torus can vary significantly.
Our results, together with the apparent speed -- optical luminosity diagram and the aspect curves derived from the relativistic beaming theory (see Figure~9 of A10), support the idea that the optical emission is generated in the relativistic jet. In particular, the optical emission in quasars with superluminal jets originates in the innermost part of the jet at sub-parsec scales, while in the BL Lacs it is generated in the parsec-scale jet.
\bibliographystyle{rmaa}
\section{Introduction}
\subsection{Background}
For a prime $p$ and an integer $u$ with $\gcd(u,p)=1$
the {\it Fermat quotient\/} $q_p(u)$ is defined as the unique integer
with
$$
q_p(u) \equiv \frac{u^{p-1} -1}{p} \pmod p, \qquad 0 \le q_p(u) \le p-1,
$$
and we also define
$$
q_p(kp) = 0, \qquad k \in {\mathbb Z}.
$$
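A direct way to compute $q_p(u)$ is a single modular exponentiation modulo $p^2$, since $u^{p-1} \equiv 1 + q_p(u)\,p \pmod{p^2}$ when $\gcd(u,p)=1$. The following sketch illustrates this; it is not one of the algorithms analyzed later in the paper:

```python
def fermat_quotient(u, p):
    """q_p(u) in {0, ..., p-1} via one modular exponentiation mod p^2.

    For gcd(u, p) = 1 we have u^(p-1) mod p^2 = 1 + q_p(u) * p,
    and by definition q_p(kp) = 0."""
    if u % p == 0:
        return 0
    return (pow(u, p - 1, p * p) - 1) // p

print(fermat_quotient(2, 7))     # 2, since (2^6 - 1)/7 = 9 and 9 mod 7 = 2
print(fermat_quotient(2, 1093))  # 0: 1093 is a Wieferich prime
```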
It is well-known that the divisibility of Fermat
quotients $q_p(a)$ by $p$ has numerous applications, which include
Fermat's Last Theorem and squarefreeness testing,
see~\cite{ErnMet2,Fouch,Gran1,Len}.
In particular, the smallest value $\ell_p$ of $u\ge 1$ for which
$q_p(u) \not \equiv 0 \pmod p$ plays a prominent role in
these applications; for it, the
following estimates are given in~\cite{BFKS}:
$$
\ell_p \le \left\{\begin{array}{lll}
(\log p)^{463/252 + o(1)} &\quad \text{for all}\ p, \\
(\log p)^{5/3 + o(1)} &\quad \text{for almost all}\ p,
\end{array}\right.
$$
(where almost all $p$ means for all $p$ but a set of relative density zero),
which improve the previous estimates of the form
$\ell_p = O\( (\log p)^2\)$ of~\cite{Fouch,Gran2,Ihara,Len}.
It is widely believed that $\ell_p = 2$ for all primes $p$,
except for a very thin set of so-called {\it Wieferich primes\/},
for which one expects $\ell_p =3$ (in particular, it is expected
that $\ell_p \le 3$ for all primes). The behaviour (and even the infinitude)
of Wieferich primes is still very poorly understood, although
several interesting results relating Wieferich primes
to other number theoretic problems are known, see~\cite{GrSo,MoMu,Silv}.
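These quantities are easy to explore numerically; the sketch below computes $\ell_p$ (note that $q_p(1)=0$ always, so the search starts at $u=2$) and checks the two known Wieferich primes:

```python
def q(u, p):
    """Fermat quotient q_p(u), via one modular exponentiation mod p^2."""
    return 0 if u % p == 0 else (pow(u, p - 1, p * p) - 1) // p

def ell(p):
    """Smallest u >= 2 with q_p(u) != 0 (q_p(1) = 0 trivially)."""
    u = 2
    while q(u, p) == 0:
        u += 1
    return u

print([p for p in (1093, 3511) if q(2, p) == 0])  # both known Wieferich primes
print(ell(5), ell(1093))   # ell_p = 2 typically; 3 for the Wieferich prime 1093
```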
There are also several results about
the distribution of Fermat quotients. For instance,
Heath-Brown~\cite{H-B} has proved that the Fermat quotients $q_p(u) $ are asymptotically uniformly distributed (after
scaling by $1/p$ and mapping them into $q_p(u)/p \in [0,1]$)
for $u = M+1, \ldots, M+N$ for any integers $M$
and $N \ge p^{1/2 +\varepsilon}$ for some fixed $\varepsilon$ and
$p\to \infty$. Note that~\cite[Theorem~2]{H-B} gives this
only for $N \ge p^{3/4 +\varepsilon}$ but using the full
strength of the Burgess bound one can lower this
threshold down to $N \ge p^{1/2 +\varepsilon}$, see
Lemma~\ref{lem:HB} below and
also~\cite[Section~4]{ErnMet2}.
It is also shown in~\cite[Proposition~2.1]{Fouch}
that for any integer $a$ the number of
solutions to the equation $q_p(u) = a$, $0 \le u < p$, is at most
\begin{equation}
\label{eq:Rep}
\# \{u \in \{0, \ldots, p-1\}\ :\ q_p(u) = a\} \le p^{1/2 + o(1)}.
\end{equation}
Finally, we also recall several results on congruences
involving Fermat quotients, see~\cite{AgoSkul,Di-BFa,Sun} and references
therein.
\subsection{Our results}
Here we consider the dynamical system generated by Fermat quotients.
That is, we fix a sufficiently large prime $p$
and, for an initial value $u_0 \in \{0, \ldots, p-1\}$
we consider the sequence
\begin{equation}
\label{eq:FermDyn}
u_n = q_p(u_{n-1}), \qquad n =1, 2, \ldots\,.
\end{equation}
Clearly, there is some $t$ such that $u_t = u_k$
for some $k < t$. Then $u_{n+t} = u_{n+k}$ for any $n \ge 0$.
Accordingly, for the smallest value of $t$ with the above
condition, we call $u_0, \ldots, u_{t-1}$ the orbit
of the initial value $u_0$.
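The tail and cycle lengths of an orbit can be found by iterating until the first repeated value; the following sketch is illustrative only, not the experimental code used later:

```python
def q(u, p):
    """Fermat quotient q_p(u)."""
    return 0 if u % p == 0 else (pow(u, p - 1, p * p) - 1) // p

def orbit(u0, p):
    """Iterate u_n = q_p(u_{n-1}) until the first repeated value.

    Returns (k, t - k): u_0, ..., u_{k-1} is the tail and the cycle
    has length t - k, with u_{n+t} = u_{n+k} for all n >= 0."""
    seen = {}                # value -> index of its first occurrence
    u, n = u0, 0
    while u not in seen:
        seen[u] = n
        u, n = q(u, p), n + 1
    k = seen[u]
    return k, n - k

tail, cycle = orbit(2, 101)   # small illustrative prime
print(tail, cycle)
```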
Here we address various questions concerning the
sequences generated by~\eqref{eq:FermDyn} such as
the number of fixed points, image size and
the ``typical'' orbit length.
In particular, we compare their characteristics with
those expected from random maps, see~\cite{FlOdl}.
All our numerical results support the natural expectation that
the map
$u\mapsto q_p(u)$ behaves very similarly to a random map on
the set $\{0, \ldots, p-1\}$.
We also investigate
their distribution and other characteristics which are
relevant to their use as pseudorandom number generators.
As we have mentioned, a result of Heath-Brown~\cite{H-B}
implies that the fractions $q_p(u)/p$ are uniformly
distributed for $u = M+1, \ldots, M+N$,
provided that $N \ge p^{1/2 +\varepsilon}$ for some
fixed $\varepsilon > 0$. However, the method of~\cite{H-B},
based on bounds of multiplicative character sums,
such as the
P\'olya--Vinogradov and Burgess bounds,
see~\cite[Theorems~12.5 and~12.6]{IwKow},
does not seem to apply to studying the distribution of
several consecutive elements (as it is essentially equivalent
to estimating short sums of multiplicative characters
modulo $p^2$ with polynomial
arguments).
Here we use a
different approach,
to study the distribution of points
\begin{equation}
\label{eq:Points}
\(\frac{q_p(u)}{p},\ldots, \frac{q_p(u+s-1)}{p}\), \qquad u = M+1, \ldots, M+N,
\end{equation}
in the $s$-dimensional cube, which is nontrivial
provided that $N \ge p^{1 +\varepsilon}$ for any fixed real $\varepsilon > 0$
and integer $s \ge 1$.
We also obtain a nontrivial lower bound on the linear
complexity of the sequence $q_p(u)$ which is
also a very important characteristic of any sequence
relevant to its applications to both cryptography and
Quasi-Monte Carlo methods, see~\cite{CDR,MOV,TopWin}.
Besides theoretic estimates, we also present results
of several numerical tests. Some of these tests
are based
on a modification of an algorithm described
in~\cite{ErnMet1,ErnMet2}, which seems to be more
computationally efficient. We also address some
other algorithmic aspects of computation with
Fermat quotients. In particular, we give asymptotic estimates
of several new algorithms which
we design for this purpose.
We note that all heuristic predictions
concerning various conjectures about Fermat quotients
(for example, the expected number of Wieferich primes
up to $x$ as $x \to \infty$) are based on the assumption
of the pseudorandomness of the map $u \mapsto q_p(u)$. Our results
provide some theoretic and experimental support to this assumption
which seems to be never systematically verified prior to our
work.
Finally, motivated by the pseudorandom nature of the map
$u \mapsto q_p(u)$, we also discuss some possibilities of using
Fermat quotients for designing cryptographically useful
hash functions.
We remark that Smart and Woodcock~\cite{SmWo}
have considered iterations of a related function
\begin{equation}
\label{eq:L fun}
L_p(u) = \frac{u^{p} -u}{p}
\end{equation}
in the ring of $p$-adic integers. However, the setting
of~\cite{SmWo} (where $p$ is fixed, for example $p = 2$)
and our setting, where $p$ is the main growing parameter, are
very different.
\subsection{Acknowledgement}
The authors are very grateful to Sergei Konyagin for his comments which have led to a significant improvement of the preliminary
version of Theorem~\ref{thm:Image Size}.
Thanks also go to Daniel Sutantyo for
his help with Magma programs and Tauno Mets{\"a}nkyl{\"a} for
his comments and encouragement.
During the preparation of this paper,
A.~O. was supported in part by
the Swiss National Science Foundation Grant~121874
and I.~S. by
the Australian Research Council
Grant~DP0556431.
\section{Preparations}
\subsection{General Notation}
Throughout the paper, $p$ always denotes prime
numbers, while $k$, $m$ and $n$ (in both the upper and
lower cases) denote positive integer
numbers.
For integers $a$, $b$ and $m \ge 1$ with $\gcd(b,m)=1$,
we write
$$
c = a/b~\mathrm{\, rem~} m
$$
for the unique integer $c$
with $bc \equiv a \pmod m$ and $0 \le c < m$.
We also define
$$
{\mathbf{\,e}}_p(z) = \exp(2 \pi i z/p).
$$
The implied constants in the symbols `$O$'
and `$\ll$'
may occasionally depend on an integer parameter $s$
and are absolute otherwise.
We recall that the notations $U = O(V)$ and $U \ll V$ are both
equivalent to the assertion that the inequality $|U|\le cV$ holds for some
constant $c>0$.
\subsection{Discrepancy and linear complexity}
Given a sequence $\Gamma$ of $N$ points
\begin{equation}
\label{eq:GenSequence}
\Gamma = \left\{(\gamma_{n,1}, \ldots, \gamma_{n,s})_{n=0}^{N-1}\right\}
\end{equation}
in the $s$-dimensional unit cube $[0,1)^s$
it is natural to measure the level of its statistical uniformity
in terms of the {\it discrepancy\/} $\Delta(\Gamma)$.
More precisely,
$$
\Delta(\Gamma) = \sup_{B \subseteq [0,1)^s}
\left|\frac{T_\Gamma(B)} {N} - |B|\right|,
$$
where $T_\Gamma(B)$ is the number of points of $\Gamma$
inside the box
$$
B = [\alpha_1, \beta_1) \times \ldots \times [\alpha_{s}, \beta_{s})
\subseteq [0,1)^s
$$
and the supremum is taken over all such boxes, see~\cite{DrTi,KuNi}.
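For $s = 1$ the discrepancy of a finite point set can be computed exactly. The following Python sketch (an illustration added here, not part of the original text) uses the classical formula for the one-dimensional star discrepancy $D_N^*$ of a sorted sample; it determines $\Delta(\Gamma)$ up to a factor of two, since $D_N^* \le \Delta(\Gamma) \le 2 D_N^*$.

```python
def star_discrepancy_1d(points):
    """Exact one-dimensional star discrepancy of points in [0, 1),
    computed from the sorted sample; the extreme discrepancy Delta
    of the text satisfies D* <= Delta <= 2 D*."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def fermat_quotient(u, p):
    # q_p(u) = (u^{p-1} - 1)/p mod p, via exponentiation modulo p^2
    return (pow(u, p - 1, p * p) - 1) // p % p

# discrepancy of the normalised Fermat quotients q_p(u)/p, u = 1, ..., p-1
p = 1009
sample = [fermat_quotient(u, p) / p for u in range(1, p)]
print(star_discrepancy_1d(sample))
```

The small observed value for the full period is consistent with the equidistribution results discussed below.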
Typically the bounds on the discrepancy of a
sequence are derived from bounds of exponential sums
with elements of this sequence.
The relation is made explicit in
the celebrated {\it Erd\"os-Turan-Koksma
inequality\/}, see~\cite[Theorem~1.21]{DrTi},
which we present in the following form.
\begin{lemma}
\label{lem:ETK} For any
integer $H > 1$ and any sequence $\Gamma$ of $N$ points~\eqref{eq:GenSequence}
the discrepancy $\Delta(\Gamma)$
satisfies the following bound:
$$
\Delta(\Gamma) = O\( \frac{1}{H}
+ \frac{1}{N}\sum_{ 0 < |\vec{h}| \le H}
\prod_{j=1}^s \frac{1}{ |h_j| + 1}
\left| \sum_{n=0}^{N-1} \exp \( 2 \pi i\sum_{j=1}^{s}h_j\gamma_{n,j} \)
\right| \),
$$
where the sum is taken over all integer vectors
$\vec{h} = (h_1, \ldots, h_s) \in {\mathbb Z}^s$
with $|\vec{h}| = \max_{j = 1, \ldots, s} |h_j| \le H$.
\end{lemma}
Finally, we recall that the {\it linear complexity\/} $L$ of
an $N$-element sequence $s_0, \ldots, s_{N-1}$ in
a ring ${\mathcal R}$ is defined as the smallest $L$ such that
$$
s_{u+L}=c_{L-1}s_{u+L-1}+\ldots+c_0s_u, \qquad
0\le u\le N-L-1,
$$
for some $c_0, \ldots, c_{L-1} \in {\mathcal R}$, see~\cite{CDR,MOV,TopWin}.
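Over the field ${\mathbb F}_p$ this quantity can be computed with the Berlekamp--Massey algorithm. The sketch below is an illustration (it is not used in any of the proofs) and returns the smallest such $L$ for a sequence of residues modulo a prime $p$.

```python
def linear_complexity(s, p):
    """Berlekamp--Massey over GF(p): smallest L such that the sequence s
    satisfies a linear recurrence of order L, matching the definition of
    linear complexity given in the text."""
    C, B = [1], [1]          # current / previous connection polynomials
    L, m, b = 0, 1, 1
    for n, s_n in enumerate(s):
        # discrepancy between s_n and the value predicted by C
        d = s_n
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, -1, p) % p
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:       # length change
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L

# the Fibonacci numbers modulo 17 satisfy a recurrence of order 2
print(linear_complexity([0, 1, 1, 2, 3, 5, 8, 13], 17))   # prints 2
```

The same routine can be applied to segments of the sequence $q_p(u)$ to test the lower bounds of Section 4 experimentally.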
\subsection{Exponential sums}
First, we recall the bound of Heath-Brown~\cite{H-B} on
exponential sums with $q_p(u)$. Although here we use it
only with $\nu = 2$ (exactly as it is given in~\cite{H-B})
we formulate it in full generality.
As we have mentioned, the method of Heath-Brown~\cite{H-B} combined with the
P\'olya-Vinogradov bound (when $\nu = 1$) and the Burgess
bound (when $\nu \ge 2$),
see~\cite[Theorems~12.5 and~12.6]{IwKow}, implies the following
generalisation of~\cite[Theorem~2]{H-B}:
\begin{lemma}\label{lem:HB}
For any fixed integer $\nu \ge 1$, we have
$$
\max_{\gcd(a, p) =1}
\left|\sum_{u=M+1}^{M+N}
{\mathbf{\,e}}_p\(a q_p(u)\) \right| \ll N^{1-1/\nu}p^{(\nu+1)/2\nu ^2+o(1)},
$$
as $p\to \infty$, uniformly over $M$ and $N\ge 1$.
\end{lemma}
We now recall the following well-known bound,
see~\cite[Bound~(8.6)]{IwKow}.
\begin{lemma}
\label{eq:Incompl}
For any integers $K$ and $r$, we have
$$
\sum_{k=0}^{K-1} {\mathbf{\,e}}_p(kr) \ll \min\left\{K, \frac{p}{\|r\|}\right\},
$$
where
$$
\|r\| = \min_{s \in {\mathbb Z}} |r - sp|
$$
is the distance between $r$ and the closest multiple of $p$.
\end{lemma}
\subsection{Basic properties of Fermat quotients}
Most of our results are based on the following two
well-known properties of Fermat quotients.
For any integers $k$, $u$ and $v$ with $\gcd(uv,p) = 1$
we have
\begin{equation}
\label{eq:add struct1}
q_p(uv) \equiv q_p(u) + q_p(v) \pmod p
\end{equation}
and
\begin{equation}
\label{eq:add struct2}
q_p(u+kp) \equiv q_p(u) - ku^{-1} \pmod p,
\end{equation}
see, for example,~\cite[Equations~(2) and~(3)]{ErnMet2}.
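These two congruences are easy to check numerically. The following Python sketch (a sanity check added here, not taken from the original text) computes $q_p(u)$ directly from the definition via exponentiation modulo $p^2$ and verifies both relations for a small prime:

```python
def fermat_quotient(u, p):
    """q_p(u) = (u^{p-1} - 1)/p mod p for gcd(u, p) = 1, computed via
    repeated squaring modulo p^2."""
    return (pow(u, p - 1, p * p) - 1) // p % p

p = 13
for u in range(1, p):
    for v in range(1, p):
        # multiplicativity: q_p(uv) = q_p(u) + q_p(v)  (mod p)
        assert fermat_quotient(u * v, p) == \
            (fermat_quotient(u, p) + fermat_quotient(v, p)) % p
    for k in range(1, p):
        # shift rule: q_p(u + kp) = q_p(u) - k * u^{-1}  (mod p)
        assert fermat_quotient(u + k * p, p) == \
            (fermat_quotient(u, p) - k * pow(u, -1, p)) % p
print("both congruences hold for p =", p)
```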
\section{Dynamical Properties}
\subsection{Computation of $q_p(u)$}
As we have mentioned, computing each individual
value of $q_p(u)$ can be done in $O(\log p)$
arithmetic operations
on $O(\log p)$-bit integers via a repeated squaring computation
of $u^{p-1}$ modulo $p^2$; we refer to~\cite{vzGG}
for a background on modular arithmetic and
the complexity of various algorithms.
In particular, one can easily reformulate
our complexity estimates in terms of
bit operations.
Thus computing all values of $q_p(u)$, $0 \le u < p$,
requires $O(p\log p)$ arithmetic operations
on $O(\log p)$-bit integers. Such a computation is necessary,
for example, to find all fixed points of the map $u \mapsto q_p(u)$
or to find the image size.
Here we show that there is a slightly more efficient
algorithm which is based on~\eqref{eq:add struct1}
and~\eqref{eq:add struct2}.
We assume that we are given a primitive root $g$ modulo $p$.
It can be obtained at
the pre-computation stage, which we keep outside of the algorithm
(in any case, a primitive root can be found in $p^{1/4+o(1)}$
arithmetic operations
on $O(\log p)$-bit integers, see~\cite{Shp1}, which is cheaper than the
remaining parts of the algorithm).
\begin{algorithm}[Generating $q_p(u)$, $0 \le u \le p-1$] {}\qquad \newline
\label{alg:qu all}
\begin{description}
\item {\bf Input:} A prime $p$ and a primitive root $g$ modulo $p$ with $1< g < p$.
\item {\bf Output:} A permuted sequence of the values $q_p(u)$, $0 \le u \le p-1$.
\end{description}
\begin{enumerate}
\item Set $q_p(0) = 0$ and $q_p(1) = 0$.
\item Compute $q_p(g)$ using the repeated squaring modulo $p^2$.
\item Set $b_1 = g$ and $c_1 = g^{-1} \mathrm{\, rem~} p$.
\item For $i = 2, \ldots, p-2$ compute
\begin{enumerate}
\item $b_i = g b_{i-1}\mathrm{\, rem~} p$ and $c_i = c_{i-1}g^{-1}\mathrm{\, rem~} p$;
\item $k_i = (g b_{i-1}-b_i)/p$;
\item $q_p(b_i)= q_p(g) + q_p(b_{i-1}) + k_i c_i \mathrm{\, rem~} p$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
\begin{theorem}
\label{thm:qu all}
Algorithm~\ref{alg:qu all} computes every value
$q_p(u)$, $0 \le u \le p-1$, in $O\(p\)$
arithmetic operations
on $O(\log p)$-bit integers.
\end{theorem}
\begin{proof} The complexity estimate is immediate.
The correctness of the algorithm follows from
the congruences
\begin{eqnarray*}
q_p(b_i) \equiv q_p(g b_{i-1} - k_ip)
& \equiv & q_p(g b_{i-1}) + k_i (g b_{i-1})^{-1} \\
& \equiv & q_p(g) + q_p(b_{i-1}) + k_i c_i \pmod p,
\end{eqnarray*}
which in turn follow from~\eqref{eq:add struct1}
and~\eqref{eq:add struct2}.
\end{proof}
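A direct Python rendering of Algorithm~\ref{alg:qu all} may be helpful; this is an illustrative sketch, with variable names following the algorithm, checked against the defining formula:

```python
def fermat_quotient(u, p):
    # direct computation via repeated squaring modulo p^2
    return (pow(u, p - 1, p * p) - 1) // p % p

def all_fermat_quotients(p, g):
    """Algorithm 1: q_p(u) for all 0 <= u <= p-1, indexed by u,
    using O(p) multiplications instead of O(p log p) operations."""
    q = [0] * p                      # q_p(0) = q_p(1) = 0
    q[g] = fermat_quotient(g, p)     # the single repeated-squaring step
    g_inv = pow(g, -1, p)
    b, c = g, g_inv                  # b_i = g^i rem p, c_i = b_i^{-1} rem p
    for _ in range(2, p - 1):
        b_new = g * b % p
        c = c * g_inv % p
        k = (g * b - b_new) // p     # k_i = (g b_{i-1} - b_i) / p
        q[b_new] = (q[g] + q[b] + k * c) % p
        b = b_new
    return q

p, g = 13, 2                         # 2 is a primitive root modulo 13
assert all_fermat_quotients(p, g) == \
    [0] + [fermat_quotient(u, p) for u in range(1, p)]
```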
Note that the algorithm of~\cite{ErnMet1,ErnMet2} is very similar,
except that it uses $g=2$ instead of a primitive root. This makes
each step faster, but if $2$ is not a primitive root
modulo $p$ it requires going through all cosets of the
group generated by $2$ modulo $p$, and thus requires more ``administration''
of data and also more memory.
Unfortunately Algorithm~\ref{alg:qu all} does not help to compute
$q_p(u)$ for a given value of $u$ unless all values
$q_p(v)$, $0 \le v \le p-1$, are precomputed and stored in a
table, after which $q_p(u)$ can simply be read from
it. We now describe a trade-off algorithm which
requires less memory but for which the computation of $q_p(u)$
is more expensive than a simple table look-up.
It depends on a parameter $z\ge 2$,
which can be adjusted to particular algorithmic
needs.
For a real $V< p$ we use ${\mathcal Q}_p(V)$ to denote the table of
the values of $q_p(v)$ with $v \in [0, V]$.
We see from Theorem~\ref{thm:qu all} that
${\mathcal Q}_p(V)$ can be computed in $O\(\min\{p, V \log p\}\)$
arithmetic operations on $O(\log p)$-bit integers.
Furthermore, for an integer $m$, we
use ${\mathcal I}_m(V)$ to denote the table of
the values $v^{-1} \mathrm{\, rem~} m$ with $v \in [1, V]$ and
$\gcd(v,m)=1$.
Since by Euler's theorem $v^{-1} \equiv v^{\varphi(m)-1} \pmod m$,
where $\varphi(m)$ is the Euler function, we see that
${\mathcal I}_m(V)$ can be computed in $O\(V \log m\)$
arithmetic operations on $O(\log m)$-bit integers
(there are even more efficient modular
inversion algorithms with a better bound on the
number of bit operations, see~\cite{vzGG}; however using them
does not change the overall complexity of our algorithm).
\begin{algorithm}[Computing $q_p(u)$ for a given {$u\in [0, p-1]$}] {}\qquad \newline
\label{alg:qu one}
\begin{description}
\item {\bf Input:} A prime $p$, a real $z\ge 2$, the tables ${\mathcal Q}_p(p/z)$,
${\mathcal I}_p(p/z)$, ${\mathcal I}_{p^2}(z)$ and
an integer $u\in \{0, \ldots, p-1\}$.
\item {\bf Output:} The value of $q_p(u)$.
\end{description}
\begin{enumerate}
\item If $u =0$ set $q_p(u) = 0$.
\item Find integers $v$ and $w$ with $u \equiv v/w \pmod p$ and such
that $1 \le v \le 2p/z$ and $|w| \le z$.
\item Recall $t = w^{-1} \mathrm{\, rem~} p^2$ if $w > 0$
or $t = -((-w)^{-1} \mathrm{\, rem~} p^2)$ if $w < 0$ from
the table ${\mathcal I}_{p^2}(z)$.
\item Compute $s$ with $s \equiv vt \equiv v/w \pmod {p^2}$ and
such that $0\le s < p^2$.
\item Compute $k = (s-u)/p$.
\item Recall $r = v^{-1} \mathrm{\, rem~} p$ from the table ${\mathcal I}_p(p/z)$.
\item Recall $q_p(v)$ and $q_p(w)$ from the table ${\mathcal Q}_p(p/z)$.
\item Compute $q_p(u) = (q_p(v) - q_p(w) + kr w) \mathrm{\, rem~} p$.
\end{enumerate}
\end{algorithm}
\begin{theorem}
\label{thm:qu one}
For any integer $u$ with $0 \le u < p-1$,
Algorithm~\ref{alg:qu one}
computes
$q_p(u)$ in $O\(\log z\)$
arithmetic operations on $O(\log p)$-bit integers.
\end{theorem}
\begin{proof}
The correctness of the algorithm follows from
the congruences
\begin{eqnarray*}
q_p(u) & \equiv & q_p(s - kp)
\equiv q_p(s) + k s^{-1} \\
& \equiv & q_p(v) - q_p(w) + k v^{-1} w
\equiv q_p(v) - q_p(w) + k r w \pmod p
\end{eqnarray*}
which in turn follow from~\eqref{eq:add struct1}
and~\eqref{eq:add struct2}.
It remains to estimate the complexity of finding the $v$ and $w$
with $u \equiv v/w \pmod p$.
We can also assume that $z < p$ since otherwise the
result is trivial.
We start by computing the
continued fraction convergents $a_i/b_i$, $\gcd(a_i, b_i)=1$,
$i =1, 2, \ldots$, to $u/p$; see, for example,~\cite{Ste} for basic
properties of continued fractions.
We define $j$ by the condition
$$
b_j \le z < b_{j+1}.
$$
By the well-known property of continued fractions, we have
$$\left| \frac{a_j}{b_j} - \frac{u}{p} \right| \le \frac{1}{b_jb_{j+1}}
\le \frac{1}{b_jz}.
$$
We now define
$$
v=|a_jp-b_ju|
$$
and note that (since $z < p$)
$$0<v = b_jp
\left| \frac{a_j}{b_j} - \frac{u}{p} \right|
\leq \frac{p}{z}.
$$
Furthermore $u w \equiv v \pmod p$, and hence $u \equiv v/w \pmod p$,
for either $w = b_j$ or $w = -b_j$. Finally,
since the denominators of the convergents grow
at least exponentially, we see that $j = O(\log b_j) =
O(\log z)$ and thus we find $a_j$ and $b_j$ in
$O(\log z)$ steps, each of which requires computations
with $O(\log p)$-bit integers.
\end{proof}
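Step~2 of Algorithm~\ref{alg:qu one} can be implemented by truncating the extended Euclidean algorithm, whose quotients are exactly those of the continued fraction of $u/p$ used in the proof. The following sketch is illustrative (function name and interface are ours, not from the text):

```python
def small_ratio(u, p, z):
    """Find (v, w) with u = v/w (mod p), 1 <= v <= p/z and 1 <= |w| <= z,
    by stopping the extended Euclidean algorithm for (p, u) early;
    the quotients produced are those of the continued fraction of u/p."""
    r0, r1 = p, u % p
    w0, w1 = 0, 1                 # invariant: r_i = w_i * u (mod p)
    while r1 > p / z:
        quot = r0 // r1
        r0, r1 = r1, r0 - quot * r1
        w0, w1 = w1, w0 - quot * w1
    return r1, w1                 # v = r1, w = w1

# check the claimed properties for a small prime
p, z = 101, 10
for u in range(1, p):
    v, w = small_ratio(u, p, z)
    assert (w * u - v) % p == 0 and 1 <= v <= p / z and 1 <= abs(w) <= z
```

The bound $|w| \le z$ follows from the classical identity relating the cofactors and remainders of the Euclidean algorithm, mirroring the continued fraction argument in the proof.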
We see from Theorem~\ref{thm:qu one},
taken with $z = \exp\(\sqrt{\log p}\)$, that after
evaluating (in time $p \exp\(-(1 + o(1))\sqrt{\log p}\)$)
and storing $p \exp\(-(1 + o(1))\sqrt{\log p}\)$ values of Fermat
quotients, we can compute any other value in time
$(\log p)^{1/2 + o(1)}$.
\subsection{Fixed Points}
Let $F(p)$ denote the number of fixed points of the map $u \mapsto q_p(u)$,
that is,
$$
F(p) = \# \{u \in \{0, \ldots, p-1\}\ : \
q_p(u) = u\}.
$$
We derive a nontrivial estimate on $F(p)$ from
Lemmas~\ref{lem:ETK} and~\ref{lem:HB}.
\begin{theorem}\label{thm:FP}
We have
$$
F(p) \ll p^{11/12+o(1)}
$$
as $p\to \infty$.
\end{theorem}
\begin{proof} Let us choose some positive integer
parameter $N \in [1, p-1]$ and for an integer $M$
we denote by $T(p;M,N)$ the number of integers
$u \in [M+1, M+N]$ with $q_p(u) \in [M+1, M+N]$.
Considering the discrepancy of the fractions
$q_p(u)/p$, $u = M+1, \ldots, M+N$ and
combining Lemma~\ref{lem:ETK} (taken with $s=1$)
with Lemma~\ref{lem:HB} (taken with $\nu=2$), we
immediately conclude
$$
T(p;M,N) = \frac{N^2}{p} + O\(N^{1/2} p^{3/8+o(1)}\).
$$
Clearly every $u = M+1, \ldots, M+N$ which is a fixed
point contributes to $T(p;M,N)$. Covering the interval
$[0,p-1]$ with at most $(p/N + 1)$ intervals of length
$N$ we obtain
$$
F(p) \le \(\frac{p}{N} + 1\) \(\frac{N^2}{p} + O\(N^{1/2} p^{3/8+o(1)}\)\).
$$
Choosing $N = \rf{p^{11/12}}$, we conclude
the proof.
\end{proof}
There is little doubt that the bound of Theorem~\ref{thm:FP}
is very imprecise. It is easy to see that in the full range
$0 \le u \le p^2-1$ the relation~\eqref{eq:add struct2}
implies
$$
\# \{u \in \{0, \ldots, p^2-1\}\ : \
q_p(u) \equiv u \pmod p\} = 2p-1.
$$
Indeed, it is enough to write $u = v+kp$ with $v,k\in \{0, \ldots, p-1\}$
and notice that
\begin{itemize}
\item either $v = 0$, and then $k$ can take any value,
\item or $v > 0$, and then the relation~\eqref{eq:add struct2}
identifies $k$ uniquely.
\end{itemize}
Thus one can expect that $F(p) = O(1)$.
In fact it seems reasonable to expect
that the map $u\mapsto q_p(u)$ behaves similarly to a random
map. We recall that for a random map on $m$ elements,
the probability of having exactly $k$ fixed points is
$$
\frac{1}{m^m} \binom{m}{k} (m-1)^{m-k} \to \frac{1}{ek!}
$$
as $m \to \infty$.
Below we present numerical results giving the numbers $N(k)$
of primes $p\in [50000, 200000]$ for which the map
$u\mapsto q_p(u)$ has exactly $F(p) = k$ fixed points
(note that we discard the ``artificial'' fixed point $u=0$).
We also give the proportions of such primes $\rho(k) = N(k)/N$
where $N = 12851$ is the total number of primes
$p\in [50000, 200000]$ and compare them with $\rho_0(k) =(e k!)^{-1}$
for $k =0,\ldots, 6$.
We note that in the above range $N(k)= 0$ for $k \ge 7$.
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
$k$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\
\hline
$\rho_0(k)$ & 0.368 & 0.368 & 0.184 & 0.0613 &
0.0153 & 0.00306 & 0.000511\\
\hline
$N(k)$ & 4770 & 4697 & 2327 & 844 & 174 & 36 & 3 \\
$\rho(k)$ & 0.371 & 0.365 & 0.181 & 0.0656 &
0.0135 & 0.00280 & 0.000233 \\
\hline
\end{tabular}
\vskip 10pt
{\it Statistics of fixed points}
\end{center}
These numerical results appear to indicate a reasonable agreement between
the prediction and actual results.
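The counts above can be reproduced directly; the short script below (an illustration, with a much smaller range of primes than in the table) tallies $F(p)$ by brute force:

```python
def fermat_quotient(u, p):
    return (pow(u, p - 1, p * p) - 1) // p % p

def fixed_points(p):
    """F(p): number of u in [1, p-1] with q_p(u) = u
    (the trivial fixed point u = 0 is discarded, as in the table)."""
    return sum(1 for u in range(1, p) if fermat_quotient(u, p) == u)

from collections import Counter

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

# distribution of F(p) over a small range of primes
counts = Counter(fixed_points(p) for p in primes_up_to(1000) if p > 100)
print(dict(counts))
```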
\subsection{Concentration of values}
For integers $k$ and $h\ge 1$ we denote
by $U(p;k,h)$ the number of $u \in \{0, \ldots, p-1\}$
for which $q_p(u) \equiv z \pmod p$ for some $z \in [k+1, k+h]$.
As in the proof of Theorem~\ref{thm:FP}, a
combination of Lemma~\ref{lem:HB} (which we take
with $N =p$ and $\nu =2$) with Lemma~\ref{lem:ETK}
gives the following asymptotic formula
\begin{equation}
\label{eq:image asymp}
U(p;k,h) = h + O(p^{7/8 + o(1)})
\end{equation}
as $p \to \infty$.
On the other hand, using~\eqref{eq:Rep}, we
trivially obtain
$$
U(p;k,h) \le hp^{1/2 + o(1)},
$$
which improves~\eqref{eq:image asymp} for $h \le p^{3/8}$.
We now obtain a better upper bound, which
improves~\eqref{eq:image asymp} for $h \le p^{3/4}$.
\begin{theorem}\label{thm:Distr}
For any integers $k$ and $h\ge 1$, we have
$$
U(p;k,h) \le h^{1/2}p^{1/2 + o(1)}
$$
as $p\to \infty$.
\end{theorem}
\begin{proof} Let ${\mathcal U}$ be the set of $u \in \{0, \ldots, p-1\}$,
which are counted by $U(p;k,h)$.
Using~\eqref{eq:add struct1} we see that any $w$ of the form $w = uv$
with $u, v \in {\mathcal U}$ satisfies $0 \le w \le p^2 -1$ and
\begin{equation}
\label{eq:cong w}
q_p(w) \equiv z \pmod p
\end{equation}
for some $z \in [2k+2, 2k+2h]$.
For a fixed integer $z$, there are $O(p)$ values of
$w \in \{0, \ldots, p^2-1\}$ satisfying~\eqref{eq:cong w},
which follows immediately from~\eqref{eq:add struct2}
(see also the proof of~\cite[Proposition~2.1]{Fouch}).
So there are at most $O(hp)$ values of $w$
satisfying~\eqref{eq:cong w} with some $z \in [2k+2, 2k+2h]$.
Using the classical estimate
$$
\tau(w) = w^{o(1)}, \qquad w \to \infty,
$$
on the divisor function $\tau(w)$
(see~\cite[Bound~(1.81)]{IwKow} with $k=2$),
we deduce that each $w = uv$ can be obtained from no more than $p^{o(1)}$
distinct pairs $(u,v) \in {\mathcal U}^2$. Therefore
$\(\# {\mathcal U}\)^2 \le hp^{1+o(1)}$, which concludes the proof.
\end{proof}
\subsection{Image size}
Let $M(p)$ be the image size of the map $u \mapsto q_p(u)$ for $0 \le u \le p-1$,
that is
$$
M(p) = \# \{q_p(u)\ : \ 0 \le u \le p-1\}.
$$
The bound~\eqref{eq:Rep} immediately implies $M(p) \ge p^{1/2+o(1)}$.
In fact more precise bounds
$$
\sqrt{p} -1 \le M(p) \le p- \sqrt{(p-1)/2}
$$
can be obtained from~\eqref{eq:add struct1}
and~\eqref{eq:add struct2}, see~\cite[Section~3]{ErnMet2}.
We now obtain a stronger lower bound on $M(p)$.
\begin{theorem}
\label{thm:Image Size}
We have
$$
M(p) \ge (1+o(1)) \frac{p}{(\log p)^{2}},
$$
as $p\to \infty$.
\end{theorem}
\begin{proof} Let $Q(p,a)$ be the number of primes $\ell \in \{1, \ldots, p-1\}$
with $q_p(\ell) = a$ (note that we have discarded $u =0$).
Clearly
\begin{equation}
\label{eq:1st Moment}
\sum_{a=0}^{p-1} Q(p,a) = \pi(p-1)
\end{equation}
where, as usual, $\pi(x)$ denotes the number of primes $\ell \le x$,
and also
\begin{equation}
\label{eq:2nd Moment}
\sum_{a=0}^{p-1} Q(p,a)^2 = \# {\mathcal R}(p),
\end{equation}
where
$$
{\mathcal R}(p) = \{(\ell,r)\ : \ 1 \le \ell,r\le p-1, \ \ell,r~\mathrm{primes},\ q_p(\ell) = q_p(r)\}.
$$
We see from~\eqref{eq:add struct1} that if $(\ell,r) \in {\mathcal R}(p)$
and
\begin{equation}
\label{eq:wuv}
w\equiv \ell/r\pmod {p^2}
\end{equation}
then
$$
q_p(w) \equiv q_p(\ell) - q_p(r) \equiv 0 \pmod p.
$$
Since all $w$ with $q_p(w) \equiv 0 \pmod p$ and $\gcd(w,p)=1$
have
$$
w^{p-1} \equiv 1 \pmod {p^2},
$$
they are elements of the group ${\mathcal G}_p$ of the
$p$th power residues modulo $p^2$.
Thus we see from~\eqref{eq:wuv} that
$$
\# {\mathcal R}(p) \le N(p),
$$
where $N(p)$ is the number of solutions
$(\ell, r, w)$ to
\begin{equation}
\label{eq:wlr}
w\ell \equiv r \pmod {p^2}, \qquad\text {where } \ell,r\le p-1, \
\ell,r\ \text{primes},\ w\in {\mathcal G}_p.
\end{equation}
We note that for $w \equiv 1 \pmod {p^2}$ there are exactly
$\pi(p-1)$ pairs $(\ell,r)$ with $\ell = r$ that
satisfy~\eqref{eq:wlr}.
For any other $w\in {\mathcal G}_p$ if~\eqref{eq:wlr} is
satisfied for $(\ell_1,r_1)$ and $(\ell_2,r_2)$
then
$$
\ell_1 r_2 \equiv \ell_2 r_1 \pmod {p^2}
$$
which in turn implies the equation
\begin{equation}
\label{eq:l12r12}
\ell_1 r_2 = \ell_2 r_1
\end{equation}
(since $1 \le \ell_1, \ell_2, r_1,r_2\le p-1$, so both sides are less than $p^2$).
Because $\ell_1, \ell_2, r_1,r_2$ are primes, we
see from~\eqref{eq:l12r12} that either
$(\ell_1,\ell_2) = (r_1, r_2)$, which is
impossible for $w \not \equiv 1 \pmod {p^2}$,
or $(\ell_1,r_1) = (\ell_2, r_2)$, which means that when
$w\in {\mathcal G}_p \setminus \{1\}$ is fixed, then~\eqref{eq:wlr}
is satisfied for at most one pair of primes $(\ell,r)$.
Therefore
\begin{equation}
\label{eq:W bound}
\# {\mathcal R}(p) \le N(p) \le \pi(p-1) + \#{\mathcal G}_p-1 = p + O(p/\log p).
\end{equation}
Now, since by the Cauchy inequality we have
$$
\(\sum_{a=0}^{p-1} Q(p,a)\)^2 \le M(p) \sum_{a=0}^{p-1} Q(p,a)^2,
$$
recalling~\eqref{eq:1st Moment} and~\eqref{eq:2nd Moment}
and using~\eqref{eq:W bound}, we obtain
$$
M(p) \ge (1+o(1)) \pi(p-1)^2 p^{-1},
$$
which concludes the proof.
\end{proof}
Clearly the bound of Theorem~\ref{thm:Image Size}
is not tight. The image size $M_m$ of a random map on
an $m$ element
set is expected to be
$$
M_m = \(1-\frac{1}{e}\) m = 0.63212\ldots m
$$
see~\cite[Theorem~2]{FlOdl},
and thus it is reasonable to expect that $M(p)/p \approx 1-1/e$.
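For small primes this ratio is easy to examine directly; the following sketch (an illustrative check, not part of the original computations) determines $M(p)$ by brute force:

```python
def fermat_quotient(u, p):
    return (pow(u, p - 1, p * p) - 1) // p % p

def image_size(p):
    """M(p): number of distinct values of q_p(u) for 0 <= u <= p-1,
    with the convention q_p(0) = 0."""
    return len({0} | {fermat_quotient(u, p) for u in range(1, p)})

for p in [101, 1009, 10007]:
    m = image_size(p)
    # the random map heuristic suggests m / p is near 1 - 1/e = 0.632...
    print(p, m / p)
```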
We now give
the average value of $M(p)/p$
taken over primes $p$ in the intervals
\begin{equation}
\label{eq:Int Ji}
{\mathcal J}_i = [50000i, 50000(i+1)], \qquad i =1,2,3.
\end{equation}
and
the whole interval
\begin{equation}
\label{eq:Int J}
{\mathcal J} = [50000, 200000].
\end{equation}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Range & ${\mathcal J}_1$ &
${\mathcal J}_2$ & ${\mathcal J}_3$ & ${\mathcal J}$ \\
\hline
\# of primes & 4459 & 4256 & 4136 & 12851\\
\hline
$M(p)/p$ & 0.63212 & 0.63208 & 0.63212 & 0.63211 \\
\hline
\end{tabular}
\vskip 10pt
{\it Statistics of image sizes}
\end{center}
\subsection{Distribution of orbit lengths}
For any map $f$ defined on an $m$ element
set,
and any initial value $u_0$ from this set, we consider
the iterations $u_i =f(u_{i-1})$, $i =1,2,\ldots$.
Then for some $\rho > \mu \ge 0$ we have
$u_\rho = u_\mu$. The smallest value of $\rho$ is
called the {\it orbit length\/} and
the corresponding (and thus uniquely defined)
value of $\mu$ is called the {\it tail length\/}.
By~\cite[Theorem~3]{FlOdl} the expected values $\rho_m$ and $\mu_m$ of
the orbit and tail length, taken over all random maps and
initial values $u_0$, satisfy
$$
\frac{\rho_m}{\sqrt{m}} = \sqrt{\pi/2} + o(1)
\qquad\mbox{and}\qquad
\frac{\mu_m}{\sqrt{m}} = \sqrt{\pi/8}+ o(1),
$$
as $m \to \infty$.
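The orbit and tail lengths of the map $u \mapsto q_p(u)$ can be found with a simple hash-table walk, as in the sketch below (an illustration; Floyd's or Brent's cycle detection would use less memory, and the convention $q_p(u) = 0$ for $p \mid u$ follows the algorithms above):

```python
def fermat_quotient(u, p):
    if u % p == 0:
        return 0                  # convention: q_p(u) = 0 when p | u
    return (pow(u, p - 1, p * p) - 1) // p % p

def orbit_and_tail(p, u0):
    """Return (rho, mu) with u_rho = u_mu and rho minimal, for the
    iteration u_i = q_p(u_{i-1}) started at u_0."""
    seen, u, i = {}, u0, 0
    while u not in seen:
        seen[u] = i
        u = fermat_quotient(u, p)
        i += 1
    return i, seen[u]             # rho = i, mu = index of first occurrence

print(orbit_and_tail(7, 2))       # u = 2 is a fixed point modulo 7: (1, 0)
```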
Here we present the results of computation of
the average values of
the orbit and the tail lengths, scaled by $\sqrt{p}$,
for the sequence~\eqref{eq:FermDyn}
taken over primes $p$ in the intervals ${\mathcal J}_1, {\mathcal J}_2, {\mathcal J}_3$ and
${\mathcal J}$, given by~\eqref{eq:Int Ji} and~\eqref{eq:Int J}, respectively,
and a randomly chosen initial
value $u_0 \in [1, p-1]$.
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Range & ${\mathcal J}_1$ &
${\mathcal J}_2$ & ${\mathcal J}_3$ & ${\mathcal J}$ \\
\hline
\# of primes & 4459 & 4256 & 4136 & 12851\\
\hline
$\rho/\sqrt{p} $& 1.2423 & 1.2445 & 1.2444 & 1.2437 \\
$\mu/\sqrt{p}$ & 0.62179 & 0.62200 & 0.61806 & 0.62066 \\
\hline
\end{tabular}
\vskip 10pt
{\it Statistics of orbit and the tail lengths, random $u_0$}
\end{center}
Since the values $q_p(2)$ are of special interest,
we also present similar data where the initial value is
always chosen as $u_0 = 2$.
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Range & ${\mathcal J}_1$ &
${\mathcal J}_2$ & ${\mathcal J}_3$ & ${\mathcal J}$ \\
\hline
\# of primes & 4459 & 4256 & 4136 & 12851\\
\hline
$\rho/\sqrt{p} $& 1.2381 & 1.2507 & 1.2401 & 1.2429 \\
$\mu/\sqrt{p}$ & 0.61778 & 0.63004 & 0.62060 & 0.62275 \\
\hline
\end{tabular}
\vskip 10pt
{\it Statistics of orbit and the tail lengths, $u_0=2$}
\end{center}
The results show quite satisfactory matching with
the expected values of
$$
\sqrt{\pi/2} = 1.2533\ldots \qquad\mbox{and}\qquad \sqrt{\pi/8} = 0.62665 \ldots.
$$
Furthermore, we also give similar average values for $C(p)/\sqrt{p}$,
where $C(p)$ is the total number of cyclic points in all possible
trajectories of the map $u\mapsto q_p(u)$ on the set $\{0, \ldots, p-1\}$,
taken over primes from the same intervals ${\mathcal J}_1, {\mathcal J}_2, {\mathcal J}_3$ and ${\mathcal J}$.
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
Range & ${\mathcal J}_1$ &
${\mathcal J}_2$ & ${\mathcal J}_3$ & ${\mathcal J}$ \\
\hline
\# of primes & 4459 & 4256 & 4136 & 12851\\
\hline
$C(p)/\sqrt{p} $& 1.2413 & 1.2527 &1.23706 & 1.2437 \\
\hline
\end{tabular}
\vskip 10pt
{\it Statistics of cyclic points}
\end{center}
By~\cite[Theorem~2]{FlOdl} the number $C_m$ of cyclic nodes
of a random map on
an $m$ element
set is expected to be
$$
C_m =\sqrt{\pi m/2} = (1.2533\ldots)\sqrt{m},
$$
which again is very close to the observed average values.
\section{Pseudorandomness}
\label{eq:pseudo}
\subsection{Joint distribution}
For integers $M$, $N \ge 1$, $s\ge 1$ and an integer vector
$\vec{a} = (a_0, \ldots, a_{s-1})$ we consider the exponential sums
$$
S_{s,p}(M,N;\vec{a}) =
\sum_{u=M+1}^{M+N} {\mathbf{\,e}}_p\(\sum_{j=0}^{s-1} a_j q_p(u+j)\).
$$
Thus the above sums are generalisations of those of Lemma~\ref{lem:HB}
that correspond to the case $s = 1$. However the method of
Heath-Brown~\cite{H-B} does not seem to apply to the sums
$S_{s,p}(M,N;\vec{a})$ as it requires
good estimates of multiplicative character sums with
polynomials, which are not currently known (see however~\cite{Chang}
for some potential approaches in the case $s = 2$).
We are now ready to prove an estimate on $S_{s,p}(M,N;\vec{a})$
which together with Lemma~\ref{lem:ETK} implies an
upper bound on the discrepancy of
points~\eqref{eq:Points}.
\begin{theorem}
\label{thm:Exp Sum}
For any integer $s\ge 1$, we have
$$
\max_{\gcd(a_0, \ldots, a_{s-1}, p) =1}
\left|S_{s,p}(M,N;\vec{a}) \right| \ll
s p \log p
$$
uniformly over $M$ and $p^2 > N\ge 1$.
\end{theorem}
\begin{proof}
Select any $\vec{a} = (a_0, \ldots, a_{s-1}) \in {\mathbb Z}^s$ with
$\gcd(a_0, \ldots, a_{s-1} ,p) = 1$ and take $K=\fl{N/p}$. We get
\begin{eqnarray*}
S_{s,p}(M,N;\vec{a})&=& \sum_{u=M+1}^{M+Kp} {\mathbf{\,e}}_p\(\sum_{j=0}^{s-1} a_j q_p(u+j)\)+O(p)\\
&=&\sum_{u=1}^{Kp} {\mathbf{\,e}}_p\(\sum_{j=0}^{s-1} a_j q_p(u+M+j)\)+O(p) \\
&=&\sum_{v=1}^{p}\sum_{k=0}^{K-1}{\mathbf{\,e}}_p\(\sum_{j=0}^{s-1} a_j q_p(v+M+j+kp)\)+O(p).
\end{eqnarray*}
Let ${\mathcal V}$ be the set of $v = 1, \ldots, p$ with
$v\not \equiv -M - j \pmod p$ for any $j = 0, \ldots, s-1$.
Therefore, using~\eqref{eq:add struct2}, we obtain:
\begin{equation}
\label{eq:S and W}
S_{s,p}(M,N;\vec{a}) = W +O(p+sK),
\end{equation}
where
\begin{eqnarray*}
W & = & \sum_{v\in {\mathcal V}}\sum_{k=0}^{K-1}{\mathbf{\,e}}_p\(\sum_{j=0}^{s-1} (a_j q_p(v+M+j)-a_jk(v+M+j)^{-1})\)
\\
& = & \sum_{v\in {\mathcal V}}{\mathbf{\,e}}_p\(\sum_{j=0}^{s-1}a_j q_p(v+M+j)\)
\sum_{k=0}^{K-1}{\mathbf{\,e}}_p\(-k\sum_{j=0}^{s-1}a_j(v+M+j)^{-1}\).
\end{eqnarray*}
Taking now the absolute value, we obtain
$$
\left|W\right|\le \sum_{v\in {\mathcal V}}
\left|\sum_{k=0}^{K-1}{\mathbf{\,e}}_p\(k \sum_{j=0}^{s-1}a_j(v+M+j)^{-1}\)\right|.
$$
Recalling Lemma~\ref{eq:Incompl}, we deduce
$$
\left|W\right|\le \sum_{v\in {\mathcal V}}
\min\left\{K, \frac{p}{\|F_{\vec{a},s}(v)\|}\right\},
$$
where
$$
F_{\vec{a},s}(V) = \sum_{j=0}^{s-1}\frac{a_j}{V+M+j}.
$$
Examining the poles of $F_{\vec{a},s}(v)$, we see that
if $\gcd(a_0, \ldots, a_{s-1}, p) =1$ then it is a nonconstant
rational function of degree $O(s)$
modulo $p$. Thus every residue modulo $p$ occurs $O(s)$ times
among the values $F_{\vec{a},s}(v)$, $v \in {\mathcal V}$. Hence
$$
\left|W\right|\ll s \sum_{u = 0}^{p-1}
\min\left\{K, \frac{p}{\|u\|}\right\} \ll s p \log p
$$
which concludes the proof.
\end{proof}
Using Lemma~\ref{lem:ETK},
we immediately obtain:
\begin{cor}
\label{cor:Exp Sum}
For any fixed $s$, the discrepancy $\Delta_{p,s}(M,N)$ of
points~\eqref{eq:Points} satisfies
$$
\Delta_{p,s}(M,N) \ll N^{-1}p(\log p)^{s+1},
$$
uniformly over $M$ and $p^2 > N\ge 1$.
\end{cor}
\subsection{Linear complexity}
Here we estimate the linear complexity for a sufficiently long
sequence of consecutive values of $q_p(u)$.
\begin{theorem}
\label{thm:LCN}
For $p^2> N \ge 1$ the linear complexity $L_p(N)$ of the sequence
$q_p(u)$, $u =0, \ldots, N-1$, satisfies
$$
L_p(N) \ge \frac{1}{2} \min\{p-1, N-p-1\}.
$$
\end{theorem}
\begin{proof} Assume that
\begin{equation}
\label{eq:Rec RelN}
\sum_{j=0}^L c_j q_p(u+j) \equiv 0 \pmod p, \qquad
0\le u\le N-L-1,
\end{equation}
for some integers $c_0, \ldots, c_{L-1}$ and $c_L = -1$.
Let $R = \min\{p-L, N-L-p\}$.
Then we see from~\eqref{eq:Rec RelN} that for $1 \le u \le R-1$ we have
\begin{equation}
\label{eq:Rel1N}
\sum_{j=0}^L c_j q_p(u+p+j) \equiv 0 \pmod p.
\end{equation}
Recalling~\eqref{eq:add struct2} and using~\eqref{eq:Rec RelN} again,
we now see that
\begin{equation}
\begin{split}
\label{eq:Rel2N}
\sum_{j=0}^L c_j q_p(u+p+j) \equiv
\sum_{j=0}^L c_j &\(q_p(u+j) - (u+j)^{-1}\) \\
& \equiv - \sum_{j=0}^L c_j (u+j)^{-1} \pmod p.
\end{split}
\end{equation}
Comparing~\eqref{eq:Rel1N} and~\eqref{eq:Rel2N} we see that
$$
\sum_{j=0}^L c_j (u+j)^{-1} \equiv 0 \pmod p, \qquad 1 \le u \le R-1.
$$
We can assume that $L < p$ since otherwise there is nothing to prove.
Clearing the denominators, we obtain a nontrivial polynomial congruence
$$
\sum_{j=0}^L c_j \prod_{\substack{h=0\\ h \ne j}}^L (u+h) \equiv 0 \pmod p,
$$
of degree $L$, which has at least $R-1$ solutions (to see that it is nontrivial
it is enough to substitute $u=0$ in the polynomial on the left hand side).
Therefore $L \ge R-1$ and the result follows.
\end{proof}
The argument used in the proof of Theorem~\ref{thm:LCN} can also
be used to estimate the linear complexity of arbitrary segments
of the sequence $q_p(u)$, although the resulting bound is slightly weaker.
\begin{theorem}
\label{thm:LCMN}
For $M$ and $p^2> N \ge 1$ the linear complexity $L_p(M;N)$ of the sequence
$q_p(u)$, $u =M+1, \ldots, M+N$, satisfies
$$
L_p(M;N)\ge \min\left\{ \frac{p-1}{2} , \frac{N-p-1}{3}\right\}.
$$
\end{theorem}
\begin{proof} Assume that
\begin{equation}
\label{eq:Rec RelMN}
\sum_{j=0}^L c_j q_p(u+M+j) \equiv 0 \pmod p, \qquad
1\le u\le N-L,
\end{equation}
for some integers $c_0, \ldots, c_{L-1}$ and $c_L = -1$.
Let $R = \min\{p, N-L-p\}$.
Then we see from~\eqref{eq:Rec RelMN} that for $1 \le u \le R$ we have
\begin{equation}
\label{eq:Rel1MN}
\sum_{j=0}^L c_j q_p(u+M+p+j) \equiv 0 \pmod p.
\end{equation}
Recalling~\eqref{eq:add struct2} and using~\eqref{eq:Rec RelMN} again,
we now see that for any integer $u$ with
$u \not \equiv -M - j \pmod p$, $j = 0, \ldots, L$, we have
\begin{equation}
\begin{split}
\label{eq:Rel2MN}
\sum_{j=0}^L c_j q_p(u+M+p+j) \equiv
\sum_{j=0}^L c_j &\(q_p(u+M+j) - (u+M+j)^{-1}\) \\
& \equiv - \sum_{j=0}^L c_j (u+M+j)^{-1} \pmod p.
\end{split}
\end{equation}
Comparing~\eqref{eq:Rel1MN} and~\eqref{eq:Rel2MN} we see that
$$
\sum_{j=0}^L c_j (u+M+j)^{-1} \equiv 0 \pmod p,
$$
for at least $R-L-1$ values of $u$ with
$$1 \le u \le R\qquad\mbox{and}\qquad u \not \equiv -M - j \pmod p, \ j = 0, \ldots, L.
$$
As before we can assume that $L < p$ since otherwise there is nothing to prove.
Clearing the denominators, we obtain a nontrivial polynomial congruence
$$
\sum_{j=0}^L c_j \prod_{\substack{h=0\\ h \ne j}}^L (u+M+h) \equiv 0 \pmod p
$$
of degree $L$, which has at least $R-L-1$ solutions (to see that it is nontrivial
it is enough to substitute $u=-M$ in the polynomial on the left hand side).
Therefore $L \ge R-L-1$ and the result follows.
\end{proof}
\section{Hash Functions from Fermat Quotients}
\subsection{General Construction}
In this section we propose a new construction of hash functions based on iterations of Fermat quotients. A similar idea, however based on
a very different family of functions, has
been previously introduced
by D.~X.~Charles, E.~Z.~Goren and
K.~E.~Lauter~\cite{CGL}.
Let $n$ and $r$ be two positive integers. Choose $2^r$ random
$(n+1)$-bit primes $p_0,\ldots,p_{2^r-1}$.
We also consider a random initial $n$-bit integer ${u}_0$.
The hash function is built from a sequence of iterations of Fermat
quotients modulo the primes $p_0,\ldots,p_{2^r-1}$.
As in~\cite{CGL}, the input of the hash function is used to decide
modulo which prime the next Fermat quotient is computed. More precisely,
given an input bit string $\Sigma$, we perform the following steps:
\begin{itemize}
\item Pad $\Sigma$ with at most $r-1$ zeros on the left to make
sure that its length $L$ is a multiple of $r$.
\item Split $\Sigma$ into blocks $\sigma_j$, $j =1, \ldots,J$,
where $J = L/r$, of length $r$, and interpret each block $\sigma_j$
as an integer $\ell_j \in [0, 2^r-1]$.
\item Starting at the point $u_0$, apply the Fermat quotient
maps
$q_{p_{\ell_j}}$ iteratively, using the $n$ least significant bits
of $u_{j-1}$ to form an $n$-bit integer $w_{j-1}$ and then
computing
$$
u_{j} = q_{p_{\ell_j}}(w_{j-1}).
$$
\item Compute the last element of the above sequence, that is,
$u_J=q_{p_{\ell_J}}(w_{J-1})$, and output its $n$ least significant bits
as the value of the hash function.
\end{itemize}
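The construction can be sketched in a few lines of Python. All concrete parameters below (the primes, $n$, $r$ and $u_0$) are hypothetical toy choices for illustration only, far too small for any cryptographic use:

```python
def fermat_quotient(u, p):
    if u % p == 0:
        return 0
    return (pow(u, p - 1, p * p) - 1) // p % p

def fq_hash(bits, primes, u0, n, r):
    """Toy sketch of the proposed hash: `primes` lists 2^r primes of
    n+1 bits each; blocks of r input bits select which prime is used
    at each iteration."""
    bits = [0] * ((-len(bits)) % r) + list(bits)  # left-pad to multiple of r
    u = u0
    for j in range(0, len(bits), r):
        ell = int("".join(map(str, bits[j : j + r])), 2)  # block -> index
        w = u % (1 << n)                          # n least significant bits
        u = fermat_quotient(w, primes[ell])
    return u % (1 << n)

# toy parameters: n = 4, r = 2, four 5-bit primes
primes = [17, 19, 23, 29]
digest = fq_hash([1, 0, 1, 1, 0], primes, u0=7, n=4, r=2)
print(digest)
```

Note that the sketch also exhibits the padding collision discussed below: prepending a zero to an input whose length is not a multiple of $r$ leaves the digest unchanged.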
\subsection{Collision Resistance}
We remark that the initial element $u_0$
is fixed and, in particular, does not depend on the input of the hash function. Furthermore, the collision resistance is based on the difficulty of deciding which
Fermat quotient has been applied at each step when one attempts to trace back
from a given output to the initial element $u_0$ and thus
produce two distinct strings $\Sigma_1$ and
$\Sigma_2$ of the same length $L$ with the same output.
Note that for strings of different lengths, say of $L$ and $L+1$, a collision can easily be created. It is enough to take
$\Sigma_2 = (0, \Sigma_1)$ (that is, $\Sigma_2$ is obtained
from $\Sigma_1$ by augmenting it by $0$). If $L \not \equiv 0 \pmod r$
then they lead to the same output.
Certainly any practical implementation has to take care of
things like this.
We also note that the results of Section~\ref{eq:pseudo}
suggest that the above hash functions exhibit rather
chaotic behaviour, which is close to the behaviour of
a random function. It is probably too early to make
any suggestions about the applicability of Fermat quotients
for hashing, but this direction definitely deserves further
study, both experimental and theoretical.
\section{Comments}
Unfortunately we are not able to give any estimates
on the discrepancy or linear complexity of the
orbits~\eqref{eq:FermDyn}, which is a very interesting,
but possibly hard, question.
Obtaining analogues of Theorems~\ref{thm:Exp Sum},
\ref{thm:LCN} and~\ref{thm:LCMN},
which are nontrivial for $N < p$ is another interesting question.
The method of proof of Theorems~\ref{thm:LCN}
and~\ref{thm:LCMN} does not apply
to the {\it nonlinear complexity\/}. We recall that
the nonlinear complexity of degree $d$ of
an $N$-element sequence $s_0, \ldots, s_{N-1}$ of elements in
a ring ${\mathcal R}$
is the smallest $L$ such that
$$
s_{u+L}=\psi(s_{u+L-1},\ldots,s_u), \qquad
0\le u\le N-L-1,
$$
where $\psi\in{\mathcal R}[Y_1,\ldots,Y_L]$ is a polynomial of
total degree at most $d$. Estimating the nonlinear complexity
of Fermat quotients is of great interest.
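For concreteness, the definition can be verified by brute force for very small parameters. The following sketch works over the prime field ${\mathbb F}_p$ only (not a general ring ${\mathcal R}$) and decides, via the solvability of a linear system in the coefficients of $\psi$, the smallest $L$ admitting a recurrence of total degree at most $d$; it illustrates the definition and is not an efficient algorithm.

```python
from itertools import combinations_with_replacement

def _consistent_mod_p(rows, rhs, p):
    """Decide whether rows * x = rhs has a solution mod a prime p
    (Gaussian elimination on the augmented matrix)."""
    m = [list(r) + [b % p] for r, b in zip(rows, rhs)]
    nrows, ncols = len(m), len(m[0])
    rank_col = 0
    for col in range(ncols - 1):
        piv = next((i for i in range(rank_col, nrows) if m[i][col] % p), None)
        if piv is None:
            continue
        m[rank_col], m[piv] = m[piv], m[rank_col]
        inv = pow(m[rank_col][col], p - 2, p)   # modular inverse (p prime)
        m[rank_col] = [(x * inv) % p for x in m[rank_col]]
        for i in range(nrows):
            if i != rank_col and m[i][col] % p:
                f = m[i][col]
                m[i] = [(a - f * b) % p for a, b in zip(m[i], m[rank_col])]
        rank_col += 1
        if rank_col == nrows:
            break
    # inconsistent iff some row reads [0 ... 0 | c] with c != 0
    return all(any(r[:-1]) or r[-1] % p == 0 for r in m)

def nonlinear_complexity(s, d, p):
    """Smallest L such that s_{u+L} = psi(s_{u+L-1}, ..., s_u) mod p
    for some polynomial psi of total degree <= d (tiny inputs only)."""
    N = len(s)
    for L in range(1, N):
        # all monomials in L variables of total degree <= d
        monos = [e for k in range(d + 1)
                 for e in combinations_with_replacement(range(L), k)]
        rows, rhs = [], []
        for u in range(N - L):
            window = s[u:u + L][::-1]   # (s_{u+L-1}, ..., s_u)
            row = []
            for e in monos:
                v = 1
                for idx in e:
                    v = (v * window[idx]) % p
                row.append(v)
            rows.append(row)
            rhs.append(s[u + L])
        if _consistent_mod_p(rows, rhs, p):
            return L
    return N
```

For example, the sequence generated by $s_{u+1} = 2 s_u + 1 \bmod 5$ has nonlinear complexity $1$ already for degree $d=1$.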
Finally, we remark that one can also study
the sums
$$
T_{p}(M,N;\chi) =
\sum_{u=M+1}^{M+N} \chi\(q_p(u)\)
$$
with a nonprincipal multiplicative character $\chi$ modulo $p$.
Arguing as in the proof of Theorem~\ref{thm:Exp Sum}
we get
$$
|T_{p}(M,N;\chi)| \ll
\sum_{v= M+1}^{M+p-1}
\left|\sum_{k=0}^{K-1}\chi\(q_p(v+M) -k (v+M)^{-1}\)\right| + p,
$$
where $K = \fl{N/p}$.
One can now apply the Burgess bound,
see~\cite[Theorem~12.6]{IwKow},
and get a nontrivial estimate on $T_{p}(M,N;\chi)$,
starting with $N \ge p^{5/4 + \varepsilon}$ for
any fixed $\varepsilon > 0$, see~\cite{Shp2}.
However it is natural to expect
that one can take advantage of additional
averaging over $v$ and get a nontrivial bound for smaller
values of $N$. Furthermore, using~\eqref{eq:add struct1}
it is possible to estimate bilinear character sums
$$
W_p({\mathcal A}, {\mathcal B}, U,V;\chi) = \sum_{0 \le u \le U} \sum_{0 \le v \le V}
\alpha_u \beta_v \chi\(q_p(uv)\)
$$
with arbitrary complex weights ${\mathcal A} = \(\alpha_u\)$
and ${\mathcal B} = \(\beta_v\)$, and then using
the Vaughan identity, see~\cite[Section~13.4]{IwKow},
estimate the character sums with Fermat quotients at prime
arguments, see~\cite{Shp2} for details.
Furthermore, we remark that studying the map
$x \mapsto (x^{p-1} - 1)/p$ in the field of $p$-adic numbers,
is also of great interest, see~\cite{SmWo} where a similar
question is considered for the maps
given by~\eqref{eq:L fun}. The other way around, it is also quite
natural to study the map~\eqref{eq:L fun} modulo $p$.
Finally, analogues of Fermat quotients modulo a composite
number are certainly an exciting object of study with their
own twists, see~\cite{Agoh,ADS,BLS,Dilch}.
\section{Introduction}
More than $3,500$ exoplanets have been discovered over the last 25 years \citep{Schneider2011}\footnote{From \url{http://www.exoplanet.eu}, as of 26 September 2016.}. This has allowed us to compare the observed exoplanet populations with formation theories and evolutionary models \citep[e.g.,][]{Mordasini2009A, Mordasini2009B, Alibert2011, Mordasini2012}.
One of the highly-discussed topics in exoplanetary science is the so-called ``sub-Jovian desert'', which describes a significant dearth of exoplanets with masses lower than $\sim$300 Earth masses and orbital periods below 2--4 days
\citep{Szabo2011, Beauge2013, Mazeh2016}.
Whereas lower-mass planets get reduced in size due to photo-evaporation \citep{Lundkvist2016}, hot Jovian planets, more massive than Jupiter and with orbital periods below 4 days, tend to be inflated. A detailed empirical study of these radius anomalies was conducted by \citet{Laughlin2011}, who found a clear correlation between the planets' orbit-averaged effective temperatures and the observed inflation. \citet{Laughlin2011} suggested that Ohmic heating might account for the observed inflation. This effect could influence the upper border of this ``desert'' related to the radius. But as \citet{Mazeh2016} showed, this desert is also present in the mass regime. Recent theoretical studies on planet formation give additional explanations for the boundaries of the desert using \emph{in situ} formation \citep{Batygin2016}, as well as planet migration theories \citep{Matsakos2016}. Unfortunately, the lack of well-characterized planets in the regime close to the sub-Jovian desert does not allow us at the moment to place strict constraints on its border. The upper border seems to be well defined thanks to the large number of planets detected with ground-based transit surveys, but, as a comparison with Kepler planets shows, the detection bias of these ground-based surveys does not allow us to extrapolate the upper border of the sub-Jovian desert to the regime of planets smaller than 0.8~\Rjup. The number of well-characterized Kepler planets, on the other hand, is also very limited. A better empirical definition of the sub-Jovian desert and its boundaries might allow further constraints to be placed on planet formation and evolution models.
Here we report our results on \pnameone\ and \pnametwo, both short-period planets with orbital periods of $\sim$3 days. The small mass and size of \pnameone\ put this planet close to the sub-Jovian desert and thus might help to better restrict its boundaries in the future. \pnametwo, on the other hand, is a highly inflated planet. It belongs to the class of inflated hot Jupiters, but is one of only a few orbiting a subgiant host star.
Planet \pnameone\ has been recently reported as a planet candidate by \citet{Crossfield2016} and validated using high resolution imaging by \citet{Schmitt2016}. However, the planet has not been characterized before in terms of mass and bulk density.
\section{Observations}
\subsection{K2 photometry and transit detection}
The Kepler space observatory, launched in 2009, was designed to provide precise photometric monitoring of over $150,000$ stars in a single field and to detect transiting Earth-sized planets with orbital periods of up to one year \citep{2010Sci...327..977B}. In spring 2013, after 4 years of operation in space, the failure of the second reaction wheel caused the end of the mission, as it was no longer possible to precisely point the telescope. At the end of 2013, operation of the Kepler space telescope re-started with a new concept that uses the remaining reaction wheels, the spacecraft thrusters, and solar wind pressure to point the telescope. The new mission, called K2 \citep{K2}, enables the continued use of the Kepler spacecraft with limited pointing accuracy. In contrast to the Kepler mission, K2 observes different fields located along the ecliptic for a duration of about three consecutive months per field. EPIC~206038483 (\snameone) was observed by the K2 mission in campaign 3 from 2014 November until 2015 February. EPIC~216468514 (\snametwo) was observed in campaign 7, between 2015 October and December.
To detect transit signals in K2 campaigns 3 and 7, we used the light curves extracted by \citet{Vanderburg2014} from the K2 data. We used the same algorithms and vetting tools described in \citet{cabrera2012} and \citet{Grziwa12,Grziwa16b}. These algorithms have been widely used by our team to detect and confirm planets in other K2 fields \citep{Barragan2016,Grziwa16,Johnson2016,Smith2016}. For the modeling of the transit light curves we used our own optimized photometry, employing a similar approach as in \citet{Vanderburg2014}, which allowed us to reduce strong systematics by choosing optimal segment sizes when splitting the light curve for de-correlation. The photometry was performed using a fixed aperture for each object, as shown in Fig.~\ref{fig1}. For \snameone\ we selected an aperture of 33 pixels, as the star is isolated. \snametwo, in contrast, lies in a field close to the galactic center and thus very crowded. We minimized the contamination effects arising from nearby sources by using a fixed aperture of only 9 pixels (Fig.~\ref{fig1}). As in the Kepler pipeline and in \citet{Vanderburg2014}, each light curve was split into segments to remove correlated noise. The length of these segments influences the quality of the de-correlation. We found an optimal segment size to be twice the orbital period of the planet; this way we avoided splitting the light curve within any transit signal. These short segments were individually de-correlated against the relative motion of the star, given in the \texttt{POS\_CORR} columns\footnote{Due to the strong correlation between \texttt{POS\_CORR1} and \texttt{POS\_CORR2}, it was sufficient to use \texttt{POS\_CORR1} for the de-correlation.}. To remove long-term trends, we also de-correlated these segments in the time domain, after ruling out the existence of ellipsoidal variations in the phase-folded light curve that might hint at an eclipsing binary system.
The resulting light curves, in the time domain and phase folded, are shown in Figures~\ref{fig2} and \ref{fig3}.
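The segment-wise de-correlation described above can be illustrated with the following simplified sketch; the polynomial degree and the divide-out scheme are assumptions made for illustration and do not reproduce the exact pipeline.

```python
import numpy as np

def decorrelate_segment(flux, pos_corr, time, deg=2):
    """Illustrative de-correlation of one light-curve segment:
    divide out a low-order polynomial trend of the flux against the
    star's relative motion (POS_CORR1), then against time."""
    # trend induced by the spacecraft's pointing drift
    trend_pos = np.polyval(np.polyfit(pos_corr, flux, deg), pos_corr)
    flux = flux / trend_pos
    # remaining long-term trend in the time domain
    trend_time = np.polyval(np.polyfit(time, flux, deg), time)
    return flux / trend_time
```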
\begin{figure}
\resizebox{0.5\textwidth}{!}{\plottwo{f1}{f2}}
\caption{K2 stamps of \snameone\ (left) and \snametwo\ (right). The yellow lines represent the adopted photometric apertures. The pixel scale of the Kepler spacecraft is $3.98\arcsec$ per pixel. The stamp of \snameone\ has a size of 12x10 pixels, whereas the stamp of \snametwo\ has a size of 8x7 pixels. The gray scale represents the counts per pixel. \label{fig1}}
\end{figure}
\begin{figure}
\resizebox{0.5\textwidth}{!}{\plotone{f3}}
\caption{Corrected and normalized light curve of \snameone. The upper plot shows the normalized light curve over time. The lower plot displays the phase folded light curve.\label{fig2}}
\resizebox{0.5\textwidth}{!}{\plotone{f4}}
\caption{Corrected and normalized light curve of \snametwo. Notation as in Figure \ref{fig2}.\label{fig3}}
\end{figure}
\subsection{High Dispersion Spectroscopy}
\label{RV_Follow_up}
We acquired five and eight high-resolution spectra ($R$\,$\approx$\,67,000) of \snameone\ and \snametwo, respectively, with the FIbre-fed \'Echelle Spectrograph \citep[FIES;][]{Frandsen1999,Telting2014} between June and September 2016. FIES is mounted at the 2.56m Nordic Optical Telescope (NOT) of Roque de los Muchachos Observatory (La Palma, Spain). We adopted the same observing strategy as in \citet{Buchhave2010} and \citet{Gandolfi2015}, i.e., we bracketed each science observation with long-exposed ThAr spectra ($T_\mathrm{exp}$\,$\approx$\,35\,sec). The exposure time was set to 1800\,--\,3600~sec -- according to sky conditions and scheduling constraints -- leading to a signal-to-noise ratio (S/N) of 25--35 per pixel at 5500\,\AA. The FIES data were reduced using standard \texttt{IRAF} and \texttt{IDL} procedures. Radial velocity measurements were extracted via multi-order cross-correlation with the RV standard stars \object{HD\,50692} and \object{HD\,182572} \citep{Udry1999} observed with the same instrument set-up as the target stars.
We also took three additional high-resolution spectra of \snameone\ in July 2016 with the HARPS-N spectrograph \citep[R\,$\approx$\,115,000;][]{Cosentino2012} mounted at the 3.58m Telescopio Nazionale Galileo (TNG) at Roque de los Muchachos Observatory (La Palma, Spain). The exposure times were set to 1200\,--\,1500 seconds leading to a S/N of 15--20 per pixel at 5500\,\AA\ for the extracted spectra. We used the second fiber to monitor the Moon background and reduced the data with the HARPS-N dedicated pipeline. Radial velocities were extracted by cross-correlating the extracted spectra with a G2 numerical mask \citep{Baranne96,Pepe02}.
The FIES and HARPS-N RVs and their uncertainties are listed in Table~\ref{rvs}, along with the bisector span (BIS) of the cross-correlation function (CCF). Time stamps are given in barycentric Julian day in barycentric dynamical time (BJD$_\mathrm{TDB}$).
We searched for possible correlations between the RV and BIS measurements that might unveil activity-induced RV variations and/or the presence of blended eclipsing binary systems \citep{Queloz01}. The Pearson correlation coefficient between the RV and BIS measurements of \snameone\ is 0.11 with a p-value of 0.79. For \snametwo\ the Pearson correlation coefficient is 0.10 with a p-value of 0.81. Adopting a threshold of 0.05 for the p-value confidence level \citep{Lucy1971}, the lack of significant correlation between the RV and BIS measurements of either star further confirms that the observed Doppler variations are induced by the orbiting planets.
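As a minimal sketch of the statistic used in this test, the Pearson correlation coefficient of two equal-length samples can be computed as follows (the associated p-value additionally requires a Student's $t$ test, which is omitted here):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```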
\begin{table}
\caption{FIES and HARPS-N RV measurements of \snameone\ and \snametwo.\label{rvs}}
\begin{tabular}{lccrr}
\hline
\hline
BJD$_\mathrm{TDB}$ & RV & $\sigma_{\mathrm{RV}}$ & BIS & Instr. \\
$-$2,450,000 & (\kms) & (\kms) & (\kms) & \\
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{\texttt{\snameone}} \\
7568.72048 & $-$45.505 & 0.012 & 0.007 & FIES \\
7569.72143 & $-$45.394 & 0.024 & $-$0.006 & FIES \\
7570.71124 & $-$45.490 & 0.012 & 0.022 & FIES \\
7577.71171 & $-$45.532 & 0.012 & 0.014 & FIES \\
7578.64730 & $-$45.422 & 0.030 & 0.018 & FIES \\
7585.67140 & $-$45.324 & 0.010 & $-$0.036 & HARPS-N \\
7586.68055 & $-$45.362 & 0.006 & $-$0.023 & HARPS-N \\
7587.70160 & $-$45.260 & 0.007 & 0.007 & HARPS-N \\
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{\texttt{\snametwo}}\\
7565.58753 & $-$8.276 & 0.025 & 0.069 & FIES \\
7566.56965 & $-$8.404 & 0.023 & 0.012 & FIES \\
7567.57489 & $-$8.438 & 0.016 & 0.025 & FIES \\
7568.59817 & $-$8.272 & 0.021 & 0.044 & FIES \\
7570.54490 & $-$8.465 & 0.017 & 0.043 & FIES \\
7628.44921 & $-$8.273 & 0.029 & $-$0.007 & FIES \\
7637.39240 & $-$8.392 & 0.016 & 0.004 & FIES \\
7640.40979 & $-$8.429 & 0.016 & 0.035 & FIES \\
\hline
\end{tabular}
\end{table}
\subsection{Imaging}
Imaging with spatial resolution higher than that of K2 is used to detect potential nearby eclipsing binaries that could mimic planetary transit-like signals. It also enables us to measure the fraction of contaminating light arising from potential unresolved nearby sources whose light leaks into the photometric mask of K2, thus diluting the transit signal. \citet{Schmitt2016} observed \snameone\ using the adaptive optics facility at the Keck telescope. They excluded faint contaminating stars as close as $0\farcs25$ and up to 4 magnitudes fainter than the target star.
We observed \snametwo\ on 2016 September 13 (UT) with the ALFOSC camera mounted at the Nordic Optical Telescope. We used the standard Johnson $R$-band filter and acquired 16 images of 6~sec and 2 images of 20~sec. The data were bias subtracted and flat-fielded using dusk sky flats. The co-added 6-sec ALFOSC exposures are shown in Figure~\ref{fig5}. We detected two nearby faint stars located $4.3\arcsec$ North-East and $6.0\arcsec$ South-East of \snametwo. They are 6.3 and 6.5 magnitudes fainter than the target and fall inside the photometric aperture that we used to extract the light curve of \snametwo\ from the K2 images. We measured a contribution of the contaminating sources of $0.005\pm0.001$ to the total flux of \snametwo. Our observations exclude additional contaminants down to a separation of $2\arcsec$ and up to 6 magnitudes fainter than the target. We compared our findings with the first data release of GAIA \citep{GAIA_DR1} and found a contamination factor of 0.0043, in agreement with our estimate. No additional sources are present in the GAIA catalog \citep{Lindegren2016} within a radius of $10\arcsec$. The resolving power of GAIA is well below $1\arcsec$.
\begin{figure}
\epsscale{0.9}
\plotone{f5}
\caption{ALFOSC@NOT R-band image of \snametwo. We can resolve sources as close as $2\arcsec$ to our target star. \snametwo\ and its two contaminants are marked with green circles.\label{fig5}}
\end{figure}
\section{Analysis}
\subsection{Spectral Analysis}
We derived the spectroscopic parameters of \snameone\ and \snametwo\ from the co-added spectra used to extract the RVs of the stars (Sect.~\ref{RV_Follow_up}). The stacked FIES and HARPS-N spectra of \snameone\ have a S/N of 62 and 32 per pixel at 5500~\AA, respectively; the co-added FIES data of \snametwo\ have a S/N of 76 per pixel at 5500~\AA. The analysis was carried out in three independent ways.
The first technique uses \texttt{ATLAS~9} model spectra \citep{Castelli2004} to fit spectral features that are sensitive to different photospheric parameters. We adopted the calibration equations of \citet{Bruntt2010} and \citet{Doyle2014} to determine the microturbulent (\vmic) and macroturbulent (\vmac) velocities. We mainly used the wings of the H$_\alpha$ and H$_\beta$ lines to estimate the effective temperature ($T_\mathrm{eff}$), and the Mg\,{\sc i}~5167, 5173, and 5184~\AA, Ca\,{\sc i}~6162 and 6439~\AA, and Na\,{\sc i}~D lines to determine the surface gravity \logg. We simultaneously fit different spectral regions to measure the metal abundance [M/H]. The projected rotational velocity \vsini\ was determined by fitting the profiles of many isolated and unblended metal lines.
For the second method, the microturbulent (\vmic) and macroturbulent (\vmac) velocities, as well as the projected stellar rotational velocity \vsini, were determined as described above. For the spectral analysis this method relies on the package SME (Spectroscopy Made Easy; we used version 4.43) \citep{Valenti1996,Valenti2005}. For a set of given stellar parameters, SME calculates synthetic stellar spectra using a grid of model atmospheres (we used \texttt{ATLAS~12}) and fits them to the observed high-resolution spectra with a $\chi^2$-minimization procedure.
The third method uses the equivalent width (EW) method to derive the stellar atmospheric parameters: {\it i}\,) \teff\ is measured by eliminating trends between the abundances of the chemical elements and the respective excitation potentials; {\it ii}\,) \logg\ is derived by assuming the ionization equilibrium condition, i.e., requiring that, for a given species, the same abundance (within the uncertainties) is obtained from lines of two ionization states (typically, neutral and singly ionized lines); {\it iii}\,) the microturbulent velocity is set by minimizing the slope of the relationship between abundance and the logarithm of the reduced EWs. We measured the equivalent widths using the \texttt{DOOp} program \citep{Cantat2014}, a wrapper of \texttt{DAOSPEC} \citep{Stetson2008}. We derived the photospheric parameters with the program \texttt{FAMA} \citep{Magrini2013}, a wrapper of \texttt{MOOG} \citep{Sneden2012}. The adopted atomic parameters are the public version of those prepared for the Gaia-ESO Survey \citep{Heiter2015}, based on the VALD3 data \citep{Ryabchikova2011}. We typically used 200 Fe\,{\sc i} lines and 10 Fe\,{\sc ii} lines for the determination of the stellar parameters.
The three methods provide consistent results within two sigma (see Table \ref{tab:stellar1}). The final adopted values are the weighted mean of the three independent determinations, using the error bars to calculate the weighting factor. The stellar parameters for both systems are listed in Table~\ref{tab:stellar}, along with the main identifiers and optical and near-infrared magnitudes.
\begin{table*}[!th]
\begin{center}
\caption{Effective temperature, surface gravity, and metallicity from different spectral analysis methods.\label{tab:stellar1}}
\begin{tabular}{l|ccc|ccc}
\hline
\hline
\noalign{\smallskip}
&& \snameone && & \snametwo &\\
Method & \teff\ (K) & \logg\ (cgs) & [Fe/H] (dex) & \teff\ (K) & \logg\ (cgs) & [Fe/H] (dex) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Method 1 & $5480 \pm 85 $ & $4.05 \pm 0.15$ & $-0.10 \pm 0.10$ & $6050 \pm 110$ & $3.95 \pm 0.10$ & $0.08 \pm 0.06$\\
Method 2 & $5350 \pm 90 $ & $3.95 \pm 0.10$ & $-0.10 \pm 0.08$ & $5970 \pm 100$ & $4.30 \pm 0.15$ & $0.08 \pm 0.08$\\
Method 3 & $5625 \pm 115$ & $4.22 \pm 0.07$ & ~~$0.24 \pm 0.15$ & $6080 \pm 150$ & $3.95 \pm 0.05$ & $0.13 \pm 0.16$\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[!th]
\begin{center}
\caption{Main identifiers, coordinates, magnitudes, and spectroscopic parameters of both systems.\label{tab:stellar}}
\begin{tabular}{lccc}
\hline
\hline
\noalign{\smallskip}
Parameter & {\texttt{\snameone}} & {\texttt{\snametwo}} & Unit\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
RA & 22$^h$34$^m$25$^s$.49 & 18$^h$59$^m$56$^s$.49 & h \\
DEC & -13$\degr$43$^\prime$54$^{\prime \prime}$.13 & -22$\degr$17$^\prime$36$^{\prime \prime}$.25 & deg\\
2MASS ID & 22342548-1343541 & 18595649-2217363 & \ldots\\
EPIC ID & 206038483 & 216468514 & \ldots\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Effective Temperature \teff & \steffone & \stefftwo & K\\
Surface Gravity \logg & \sloggone & \sloggtwo & cgs\\
Metallicity [Fe/H] & \smetone & \smettwo & dex \\
\vsini & $2.2 \pm 0.5$& $4.6 \pm 0.5$& \kms \\
Spectral Type & G4\,V & F9\,IV & \ldots\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
B mag (UCAC4) & $13.56 \pm 0.01$ & $13.64 \pm 0.01$ & mag\\
V mag (UCAC4) & $12.79 \pm 0.02$ & $12.92 \pm 0.01$ & mag\\
J mag (2MASS)& $11.41 \pm 0.02$ & $11.56 \pm 0.02$ & mag\\
H mag (2MASS)& $11.09 \pm 0.03$ & $11.26 \pm 0.03$ & mag\\
K mag (2MASS)& $10.99 \pm 0.02$ & $11.21 \pm 0.02$ & mag\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Joint Analysis of Photometric and Radial Velocity Measurements}
We used the Transit Light Curve Modelling (\texttt{TLCM}) code (\citealt{Csizmadia2015}; Csizmadia et al. in prep.) for the simultaneous analysis of the detrended light curves and radial velocity measurements. \texttt{TLCM} uses the \cite{M&A} model to fit planetary transit light curves. The RV measurements are modeled with a Keplerian orbit. The fit is optimized using first a genetic algorithm and then a simulated annealing chain.
The fitted parameters are the semi-major axis $a/R_*$ and planet radius $R_\mathrm{p}/R_*$, both scaled to the radius of the star, the orbital inclination $i$, the limb darkening coefficients $u_+=u_1+u_2$ and $u_- = u_1 -u_2$, the radial velocity semi amplitude $K$ and the systemic $\gamma$-velocity. The period ($P_\mathrm{orb}$) and epoch of mid-transit ($T_0$) are allowed to vary slightly around the values determined already by the detection.
For \pnametwo\ the model did not converge to the global minimum when leaving all nine parameters completely free; instead it seemed to converge to a broader local minimum. We thus first modeled the light curve keeping the epoch and period, as well as the limb darkening coefficients, fixed, using estimates from \citet{Claret2011}. This gave us first estimates of the inclination, planet-to-star radius ratio, and semi-major axis. In a second step we fitted all nine free parameters as for \pnameone, but restricted the parameter space with priors given by our first fit. To verify our results, we also modeled the light curve with different fixed inclinations, leaving all other parameters free. This confirmed our result of a high impact parameter.
We also fit the data for non-circular orbits. The best fitting eccentricity for \snameone\ is $0.09 \pm 0.03$ with a p-value of 0.90; as for \snametwo, we obtained $0.06 \pm 0.05$ with a p-value of 0.57. Both p-values are larger than the 0.05 level of significance suggested by \citet{Lucy1971}. We concluded that the RV measurements do not allow us to prefer the eccentric solutions over the circular ones and thus fixed the orbit eccentricities to zero. This assumption is reasonable given the fact that short period orbits are expected to have circularized. Using the equations from \citet{Leconte2010}, we calculated the tidal time-scales for the eccentricity evolution of the two systems\footnote{The rotation periods of the stars are estimated from the stellar radii and \vsini, assuming that the objects are seen equator-on.}. Assuming a modified tidal quality factor of $Q^\prime_\star=10^{6.5}$ for the stars and $Q^\prime_\mathrm{p}=10^{5.5}$ for the planets \citep{Jackson2008}, the timescales are $\sim$400 and $\sim$25~Myr for \snameone\ and \snametwo, respectively. These time scales are shorter than the estimated ages of the two host stars (Table~\ref{tab:param}).
We also fitted for radial velocity trends that might unveil the presence of additional orbiting companions in the systems. We obtained radial accelerations that are consistent with zero.
The best fitting transit model and circular RV curve of \pnameone\ are shown in Figures~\ref{fig6} and \ref{fig7}, along with the photometric and RV data. Results for \pnametwo\ are displayed in Figures~\ref{fig8} and \ref{fig9}. We checked our results by performing a joint fit to the photometric and RV data using the MCMC code \texttt{pyaneti} \citep{Barraganprep}. Following the same method outlined in \citet{Barragan2016}, we set uninformative uniform priors in a wide range for each parameter and explored the parameter space with 500 chains. The final parameter estimates are consistent within 1-$\sigma$ with those obtained using \texttt{TLCM}.
From the results of the spectral analysis and joint data modeling, we used Yonsei-Yale \citep{Yi2001,Demarque2004} and Dartmouth \citep{Dotter2008} isochrones to estimate masses, radii, and ages of \snameone\ and \snametwo. We obtained results that are in agreement regardless of the adopted set of isochrones. For the final results we used the Yonsei-Yale isochrones \citep{Yi2001,Demarque2004}. From the fundamental parameters of the host stars we calculated radii and masses of the two transiting planets. The parameter estimates are listed in Table~\ref{tab:param} for both systems.
\begin{figure}
\resizebox{.5\textwidth}{!}{\plotone{f6}}
\caption{Phase folded light curve and best fitting transit model (red line) of \pnameone. Residuals to the fit are shown in the lower panel.\label{fig6}}
\resizebox{.5\textwidth}{!}{\plotone{f7}}
\caption{FIES (blue circles) and HARPS-N (orange circles) RV measurements of \pnameone\ and best fitting circular model. Residuals to the fit are shown in the lower panel.\label{fig7}}
\end{figure}
\begin{figure}
\resizebox{.5\textwidth}{!}{\plotone{f8}}
\caption{Phase folded light curve and the best fitting transit model (red line) of \pnametwo. Residuals to the fit are shown in the lower panel.\label{fig8}}
\resizebox{.5\textwidth}{!}{\plotone{f9}}
\caption{FIES RV measurements of \pnametwo\ and best fitting circular model. Residuals to the fit are shown in the lower panel.\label{fig9}}
\end{figure}
\section{Discussion and Summary}
\subsection{\pnameone}
\pnameone\ is a transiting sub-Jovian planet with an orbital period of $3.00267 \pm 0.00006$~days. It orbits a G4 main sequence star. The planet's calculated effective temperature is \pteffone~K. With a radius of \pradone~\Rjup\ and a mass of \pmassone~\Mjup, it is denser than expected: the radius anomaly, based on the difference between the model-estimated and observed radius as described in \citet{Laughlin2011}, is $-0.46$. Adaptive optics imaging by \citet{Schmitt2016} shows that there is no light contamination that could cause an underestimation of the planetary radius. We can exclude ellipsoidal variations with amplitudes above 0.05\,mmag in the light curve. There is no obvious trend in the radial velocity data, although we cannot exclude radial accelerations lower than 0.002~km~s$^{-1}$day$^{-1}$.
The short orbital period and high effective temperature of the planet, along with its sub-Jovian size, put \pnameone\ close to the so-called sub-Jovian desert. Fig.~\ref{fig11} shows the known transiting planets with their radii plotted against their calculated effective temperatures, as given by the equation in \citet{Laughlin2011}
\begin{equation}
T_\mathrm{eff} = \left(\frac{R_{S}}{2a}\right)^{1/2} \frac{T_S}{(1-e^2)^{1/8}}.
\end{equation}
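As a numerical illustration of this relation (for a circular orbit, with the stellar radius and semi-major axis in the same units; the example values below are rounded assumptions, not the adopted parameters):

```python
import math

def planet_teff(r_star, a, t_star, e=0.0):
    """Orbit-averaged planetary effective temperature,
    T_eff = (R_S / (2 a))**0.5 * T_S / (1 - e**2)**(1/8),
    with r_star and a in the same units."""
    return math.sqrt(r_star / (2.0 * a)) * t_star / (1.0 - e * e) ** 0.125
```

For a star of $T_S \approx 6000$~K and a scaled semi-major axis $a/R_S \approx 5.75$ (values close to those of \snametwo), this gives $T_\mathrm{eff} \approx 1770$~K.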
\begin{figure*}
\plotone{f11}
\caption{Planet radius over its orbit-averaged effective temperature. The gray dots show all planets. The blue dots mark planets that have been detected by the Kepler spacecraft (Kepler mission or K2 mission). The red dot denotes \pnameone\ and the green dot \pnametwo. The exoplanet data are taken from Extrasolar Planets Encyclopaedia (\url{www.exoplanets.eu}).\label{fig11}}
\end{figure*}
There is a clear lack of hot sub-Jovian planets. Due to the different observational biases of exoplanet surveys (e.g., most of the inflated hot Jupiters have been detected by ground-based surveys, which might not be able to detect sub-Jovian planets with the same efficiency), the upper border is not as well defined as it may seem. This can be seen by looking only at confirmed planets of the Kepler spacecraft (blue points in Fig.~\ref{fig11}). Nevertheless, all observations suggest that the sub-Jovian desert exists, although its borders are not well defined. Only a few planets are known in this regime \citep[e.g.][]{Sato2005, Bonomo2014}. \pnameone\ might help in the future to better constrain its borders.
\subsection{\pnametwo}
\pnametwo\ is a Jovian planet on a short orbital period of $3.31392\pm 0.00002$ days. The planet orbits an F9 star about to leave the main sequence. It is one of only a few transiting planets known to orbit subgiants \citep[e.g.][]{Smith2016, Pepper2016, Eylen2016, Almenara2015}. The planet's calculated effective temperature is \ptefftwo~K, its radius is \pradtwo~\Rjup, and its mass is \pmasstwo~\Mjup. The radius anomaly is $+0.21$, making \pnametwo, in contrast to \pnameone, a highly inflated gaseous planet. Such a high inflation has already been observed for other giant planets with similar effective temperatures (see Figure~\ref{fig11}). As suggested by \citet{Laughlin2011}, Ohmic heating might be at least partly responsible for the inflation of the planet.
Since it is projected against the galactic center, \snametwo\ lies in a relatively crowded stellar region. Using seeing-limited imaging and the GAIA public archive (DR1), we identified two faint stars within $\sim$$10\arcsec$. The resulting contamination factor of 0.005 has been taken into account when modeling the light curve. The radial velocity data do not show any significant eccentricity or long-term trend larger than 0.001~km s$^{-1}$day$^{-1}$.
The light curve of \snametwo\ shows no ellipsoidal variation with an amplitude larger than 0.1~mmag.
\begin{table*}[!th]
\begin{center}
\caption{Parameters from light curve and RV data analysis.\label{tab:param}}
\begin{tabular}{lccc}
\hline
\hline
\noalign{\smallskip}
Parameter & \texttt{\snameone} & {\texttt{\snametwo}}& Unit\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Orbital period $P_\mathrm{orb}$ & $3.00265 \pm 0.00004$ & $3.31392 \pm 0.00002$ & days\\
Transit epoch $T_0$ & $6928.0593 \pm 0.0007$ & $7304.5244 \pm 0.0002$ & BJD$_\mathrm{TDB}-2450000$\\
Transit Duration & $3.08 \pm 0.10$ & $3.19 \pm 0.45$ & hours\\
Scaled semi-major axis $a/R_*$ & $7.78 \pm 0.17$ & $5.75 \pm 0.31$ & \\
Semi-major axis $a$ & $ 0.045 \pm 0.003$ & $0.048 \pm 0.005$ & au\\
Scaled planet radius $R_P/R_*$ & $0.063 \pm 0.001$ & $0.083 \pm 0.001$& \\
Orbital inclination angle $i$ & $88.49 \pm 0.96$ & $81.9 \pm 0.7$& $\deg$\\
Impact parameter $b$ & $0.21 \pm 0.13$ & $0.81 \pm 0.08$& \\
Limb-darkening coefficient $u_+$ & $0.63 \pm 0.08$ & $0.51 \pm 0.08$& \\
Limb-darkening coefficient $u_-$ & $0.29 \pm 0.10$ & $0.17 \pm 0.07$& \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Radial velocity semi-amplitude $K$ & $61.0 \pm 2.6$ & $95.5 \pm 1.3$ & $ \ms $\\
Systemic radial velocity $\gamma$ & $-45.475 \pm 0.003$ & $-8.364 \pm 0.001$ & \kms \\
Radial velocity offset between FIES and HARPS & $0.158 \pm 0.004$ & - & \kms \\
Eccentricity $e$& 0 (fixed) & 0 (fixed) & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Stellar mass $M_*$ & \smassone & \smasstwo & \Msun\\
Stellar radius $R_*$ & \sradone & \sradtwo & \Rsun\\
$M^{1/3}_*/R_*$ & $0.89 \pm 0.02$ & $0.61\pm 0.03$& Solar units\\
Stellar mean density $\rho_*$ & $0.99 \pm 0.06$ & $0.33 \pm 0.05$ & \gcm \\
Stellar surface gravity \logg\footnote{\label{note}Derived from the light curve modeling, effective temperature, metal content, and isochrones.} & \slogglcone & \slogglctwo & cgs\\
Age & \sageone & \sagetwo & Gyr\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Planetary mass $M_P$ & \pmassone & \pmasstwo & $M_\mathrm{Jup}$\\
Planetary radius $R_P$ & \pradone & \pradtwo & $R_\mathrm{Jup}$ \\
Planetary mean density $\rho_P$ & $1.7 \pm 0.3$ & $0.35 \pm 0.1$ & \gcm \\
Planetary surface gravity \loggp & \ploggone & \ploggtwo & cgs\\
Planetary calculated effective temperature & \pteffone & \ptefftwo& K\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\acknowledgments
We would like to express our deepest gratitude to the NOT and TNG staff members for their unique support during the observations and scheduling of our runs. Szilard Csizmadia thanks the Hungarian OTKA Grant K113117. Hans Deeg and David Nespral acknowledge support by grant ESP2015-65712-C5-4-R of the Spanish Secretary of State for R\& D\&i (MINECO). This research was supported by the Ministerio de Economia y Competitividad under project FIS2012-31079. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2013-2016) under grant agreement No.~312430 (OPTICON) and from the NASA K2 Guest Observer Cycle 1 program under grant NNX15AV58G to The University of Texas at Austin. Based on observations obtained with the Nordic Optical Telescope (NOT), operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos (ORM) on the island of La Palma, and with the Italian Telescopio Nazionale Galileo (TNG) operated also at the ORM (IAC) by the INAF - Fundaci\'on Galileo Galilei. The data presented here were obtained in part with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOTSA. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
We are happy to acknowledge the continued involvement of N. Piskunov and J. Valenti, with help and upgrades to the Spectroscopy Made Easy (SME) program package. SME makes use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna \citep{Ryabchikova2015}.
\facility{NOT (FIES, ALFOSC), TNG (HARPS-N), Kepler (K2), GAIA}.
\software{\texttt{IDL}, \texttt{IRAF}, \texttt{SME}, \texttt{DOOp}, \texttt{FAMA}, \texttt{TLCM}, \texttt{pyaneti}.}
\section{Introduction}
Homogenization problems have been studied for many years, both for their intrinsic mathematical interest and for their many applications in different sciences (e.g. the study of heterogeneous media). In particular, stochastic homogenization arises whenever
the system depends on some random variable at the microscopic level but a deterministic behaviour can be expected at the macroscopic level.\\
In this paper, we study the asymptotics of a special class of {\em degenerate} (i.e. non-coercive) first-order Hamilton-Jacobi equations with random coefficients taking the form
\begin{equation}
\label{equation1}
\left\{
\begin{aligned}
&
u_t+H\left(x, \sigma(x)\nabla u,\omega\right)=0,\ t>0,\ x\in\mathbb{R}^N,\ \omega \in \Omega,\\
& u(0,x,\omega)=g(x),\ x\in \mathbb{R}^N, \;\omega \in \Omega,
\end{aligned}
\right.
\end{equation}
where $(\Omega, {\mathcal F},{\mathbb P})$ is a given probability space, and $\sigma:\mathbb{R}^N\to \mathbb{R}^{m\times N}$ with $m\le N$. Even though $H(x,q,\omega)$ is coercive and convex in the variable $q=\sigma(x)p\in \mathbb{R}^m,$ the map
$p\mapsto H\left(x, \sigma(x)p,\omega\right)$ is in general not coercive, because $\sigma(x)$ may have a nontrivial kernel.
The illustrative example is the Heisenberg group, which is topologically $\mathbb{R}^3$ but endowed with a different algebraic structure (see Section~$2$, e.g. \eqref{MatrixHeisenberg}).\\
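To fix ideas, we point out (as an illustration) how the degeneracy arises in the Heisenberg group: with $\sigma$ as in \eqref{MatrixHeisenberg}, at every point $x$ the vector
$$
p(x)=\Big(\frac{x_2}{2},\,-\frac{x_1}{2},\,1\Big)^T
$$
satisfies $\sigma(x)p(x)=0$, so $p\mapsto H\left(x, \sigma(x)p,\omega\right)$ is constant along the whole line $p+\mathbb{R}\,p(x)$ and hence cannot be coercive in $p$.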
Equations such as \eqref{equation1} can be understood in the framework of Carnot groups, i.e. non-commutative stratified nilpotent Lie groups (see Section 2 for more details).
In particular these groups satisfy the H\"ormander condition:
they are endowed with a family of vector fields that, together with all their associated commutators,
span the whole tangent space at any point of the original manifold.
For the associated homogenization problem, the Carnot group structure suggests a natural anisotropic rescaling of $\mathbb{R}^N,$ denoted by $\delta_{1/\varepsilon}(x)$ for $x\in \mathbb{R}^N.$ \\
Then the homogenization problem can be formulated as follows:\\
under some assumptions made precise later (see Section 3), find the equation solved by the (locally uniform) limit of $u^\varepsilon(t,x,\omega)$ where $u^{\varepsilon}$ are viscosity solutions of
\begin{equation}
\label{equation2}
\left\{
\begin{aligned}
&
u^{\varepsilon}_t+H\left(\delta_{1/\varepsilon}(x), \sigma(x)\nabla u^{\varepsilon},\omega\right)=0,\ t>0,\ x\in\mathbb{R}^N,\ \omega \in \Omega,\\
& u^{\varepsilon}(0,x,\omega)=g(x),\ x\in \mathbb{R}^N, \;\omega \in \Omega.
\end{aligned}
\right.
\end{equation}
In other words, the aim is to identify $\overline H:\mathbb{R}^m\to \mathbb{R}$ such that the viscosity solutions of \eqref{equation2} converge, locally uniformly in $t$ and $x$ and almost surely in $\omega$, to a deterministic function $u(t,x)$ which can be characterized as the unique viscosity solution of a problem of the form
\begin{equation}
\label{LimitProblem}
\left\{
\begin{aligned}
&u_t+\overline{H}\left(\sigma(x)\nabla u\right)=0,\ t>0,\ x\in\mathbb{R}^N,\\
&u(0,x)=g(x),\ x\in\mathbb{R}^N.
\end{aligned}
\right.
\end{equation}
In the case of the Heisenberg group, the anisotropic rescaling is
$\delta_{1/\varepsilon}(x_1,x_2,x_3)=(\varepsilon^{-1}x_1,\varepsilon^{-1}x_2,\varepsilon^{-2}x_3).$ This is consistent with the geometric structure of the Heisenberg group, but the anisotropy can also be understood heuristically in another way: at each point, some directions are ``forbidden'', i.e. paths of the associated control problem can move only in a two-dimensional subspace. By varying their direction often (i.e. by the use of non-trivial commutators from the H\"ormander condition) they are able to reach any given point, but the cost of ``zig-zagging'' to move in the forbidden direction is higher, so typically they move more slowly in these directions, which makes a faster rescaling necessary.\\
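This heuristic can be made concrete by a direct computation, which we include for the reader's convenience. Writing the rows of the matrix \eqref{MatrixHeisenberg} as vector fields, $X_1=\partial_{x_1}-\frac{x_2}{2}\partial_{x_3}$ and $X_2=\partial_{x_2}+\frac{x_1}{2}\partial_{x_3}$, one finds
$$
[X_1,X_2]=X_1X_2-X_2X_1=\partial_{x_3},
$$
so the ``forbidden'' direction $x_3$ is generated by a single bracket: displacements of order $\varepsilon$ along $X_1$ and $X_2$ produce, through the commutator, a displacement of order $\varepsilon^2$ in the $x_3$-direction, which is exactly the scaling $\varepsilon^{-2}x_3$ appearing in $\delta_{1/\varepsilon}$.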
Note that, in~\eqref{equation2}, $\sigma(x)$ is not rescaled so this is in principle a problem with a fast and a slow variable, but the equation is degenerate if the slow variables are frozen. Obviously, general non-coercive equations have no homogenization, so considering a cell problem with a frozen variable is not the way to tackle this problem.\\
Instead, our approach is based on the use of a variational formulation for the viscosity solutions of \eqref{equation2}, that has been introduced in the coercive case by Souganidis \cite{S1} and Rezakhanlou-Tarver \cite{RT}. This variational approach is motivated by $\Gamma$-convergence methods for the random Lagrangian. In order to define the associated variational problem from the Hamilton-Jacobi equation, some form of convexity is needed, but it should be noted that due to the degeneracy the relation is more subtle than the Euclidean Legendre transform, see \cite{BCP}.
Moreover the approach developed in \cite{RT, S1} fails, since the idea of using the Subadditive Ergodic Theorem indirectly requires the existence of curves invariant under translation and rescaling (as straight lines are w.r.t. the Euclidean translations). In our anisotropic geometries, this property holds only for curves that have constant horizontal speed (i.e. velocity constant w.r.t. a given family of left-invariant vector fields, see Section 2 for more details). Unfortunately those lines are too few to cover the whole space: they only generate an $m$-dimensional submanifold of $\mathbb{R}^N$, where usually $N>m$.\\
The main idea of the proof of the convergence theorem is to apply the techniques from \cite{RT, S1} (for the periodic case see also \cite{E}) to a lower-dimensional constrained variational problem (Section 5), the constraint being to belong to the $m$-dimensional manifold mentioned above. Then by an approximation argument (Section 6) we write the original variational problem \eqref{Lepsilon}
as a limit of sums of lower-dimensional constrained variational problems. The key step in the whole argument is to approximate any horizontal curve by a suitable family of piecewise horizontal lines with constant speed, using the H\"ormander condition to move everywhere in the space.\\
Here our a priori bounds on the Lagrangian ensure that the cost of connecting any two points can be bounded by a function of the geodesic distance. This allows us to estimate the difference in cost for connecting nearby points,
a property which makes up for the lack of uniform continuity of the Lagrangian caused by the rescaling in space.
\\
This is to our knowledge the first paper which connects two previously separate branches of homogenization theory: Stochastic homogenization on the one hand, which so far has not been considered in sub-Riemannian geometries, and homogenization in the sub-elliptic setting, which so far has been restricted to a suitable generalization of periodic environments, i.e. essentially in a compact setting.
For homogenization in subelliptic settings in the periodic case see for example \cite{BW, BMT, Franchi1, Franchi2, MS, STR1}, and for homogenization with singular perturbation see \cite{AB1,ABM}.
Since the first results on stochastic homogenization for first-order Hamilton-Jacobi equations (\cite{S1,RT}), it has been a difficult question to determine which conditions on the deterministic structure of the Hamiltonian are necessary, with convexity and coercivity being sufficient. The non-convex case has recently been understood better, see e.g. \cite{Armstrong-Carda, FS, Ziliotto}. This paper instead exhibits a very general class of examples which are convex (though not strictly convex) but non-coercive. This homogenization result is in line with the folk theorem that, in order to have homogenization, characteristics have to be able to go everywhere: our degeneracy is related to H\"ormander vector fields, which have the property that admissible paths (see Section 2) can connect any two given points.\\
$\Gamma$-convergence for random functionals, which is used here, was first studied in a general setting by Dal Maso and Modica \cite{DalMasoModica} and has recently been extended to non-convex integrands \cite{DG}.
Alternatives to the variational approach for obtaining stochastic homogenization results in the Euclidean setting for both first and second order equations and the simultaneous effect of homogenization and vanishing viscosity (i.e. singular perturbation) have been developed subsequently, see for example \cite{KRV, LS05, LS10}. Extending these methods to the sub-Riemannian setting will be a challenge for further research.\\
This paper is organized as follows.\\
In Section 2 we introduce some basic notions for Carnot groups, in particular the dilations in the group and some norms and distances related to both the geometric and the algebraic structure of Carnot groups.
In this section we also introduce horizontal curves, horizontal velocity and study some properties, which will be very useful in later proofs.\\
In Section 3 we state the problem and we explain the meaning of some assumptions on the Hamiltonian; in particular the stationary ergodicity assumption, which is crucial in order to get a deterministic limit problem.
In the same section we also introduce the variational formulation for the solutions of the $\varepsilon$-problem.
\\
In Section 4 we study several properties for the variational problem. In particular we prove local uniform continuity.\\
In Section 5 we prove the convergence for the constrained variational problem, i.e. for the minimizing problem for an integral cost under the additional $m$-dimensional constraint.\\
In Section 6 we prove our main convergence result for the unconstrained variational problem by the introduction of a suitable approximation argument.\\
In Section 7 we apply the convergence proved in Section 6 to the family of non-coercive Cauchy-Hamilton-Jacobi problems \eqref{equation2} via variational formula.\\
In the Appendix (Section 8) we give a proof for the well-posedness of the $\varepsilon$-problem \eqref{equation2} in the viscosity sense.
\section{Preliminaries: Carnot groups.}
Carnot groups are non-commutative Lie groups: they are endowed both with a non-commutative algebraic structure and with a manifold structure. The lack of commutativity in the algebraic structure reflects on the manifold structure as restrictions on the admissible motions. This means that the allowed curves are constrained to have their velocities in a lower dimensional subspace of the tangent space of the manifold. Then the associated manifold structure is not Riemannian but sub-Riemannian. We refer the reader to \cite{BLU} for an overview on Carnot groups and sub-Riemannian manifolds. Here we only recall the definitions and some of the main properties, which will be crucial in the later proofs.
\begin{defi}[Carnot group]\label{defG}
A Carnot group $(\mathbb{G}, \circ)$ of step $r$ is a simply connected, nilpotent Lie group whose Lie algebra $g$ of left-invariant vector fields admits a stratification, i.e. there exist nonzero subspaces $\{V_i\}$, $i=1,\dots, r$, such that $g=\bigoplus_{i=1}^r V_i$, $[V_1, V_i]=V_{i+1}\neq 0$ for $i=1,\dots, r-1$, and $[V_1,V_r]=0$. $V_1$ is called the first layer.\\
Any such group is isomorphic to a homogeneous Carnot group in $\mathbb{R}^N$, that is, a triple $(\mathbb{R}^N, \circ, \delta_{\lambda})$ where $\mathbb{R}^N=\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\times\dots\times\mathbb{R}^{n_r}$, $\circ$ is a group operation whose identity is $\mathrm{e}$ and such that $(x,y)\mapsto y^{-1}\circ x$ is smooth (where $y^{-1}$ denotes the inverse of $y$), and $\delta_{\lambda}:\mathbb{R}^N\rightarrow \mathbb{R}^N$ is the dilation:
\begin{equation}
\label{Dilations}
\delta_{\lambda}(x)=
\delta_{\lambda} \left(x^{(1)},x^{(2)},\cdots, x^{(r)}\right):=\left(\lambda\,x^{(1)},\lambda^2\,x^{(2)}, \cdots,\lambda^r x^{(r)}\right),\ x^{(i)}\in \mathbb{R}^{n_i},
\end{equation}
is an automorphism of the group $(\mathbb{R}^N, \circ)$ for all $\lambda>0$ and there are $m:=n_1$ smooth vector fields $X_1$, $\cdots$, $X_m$ on $\mathbb{R}^N$ invariant with respect to the left translation
$$
L_{\beta}(x):=\beta\circ x
$$
for all $\beta\in\mathbb{R}^N$ and such that
they generate a Lie algebra with rank $N$ at every point $x\in\mathbb{R}^N$.
The vector fields $X_1$, $\cdots$, $X_m$ are called the generators of the Carnot group, or horizontal vector fields, and the $m\times N$ matrix whose rows are the transposes of these vector fields is denoted by $\sigma$.
For $x\in\mathbb{R}^N$, we shall also use the notation $x=(x^1,x^2)$ with $x^1\in\mathbb{R}^m$, $x^2\in \mathbb{R}^{N-m}$ and $x^1:=\pi_m(x)$.
\end{defi}
The definition of dilations (which replace the multiplication of a point by a scalar in the Euclidean case) gives a good notion of rescaling in these geometries.\\
Note that we are interested only in the case where $\mathbb{G}=\mathbb{R}^N$ for some $N\geq 3$ (in fact non-commutative Carnot groups of dimension less than 3 do not exist).
\begin{ex}
\label{Heisenberg}
The simplest example of a Carnot group is the so called Heisenberg group.
The $N$-dimensional Heisenberg group $\mathbb{H}^N$ is a Carnot group of step 2 (i.e. $r=2$ in the stratification) defined in $\mathbb{R}^{2N+1}$ (with $N\geq 1$). In particular if $N=1$ the stratification
is $V_1\bigoplus V_2$, where $V_1= \mathbb{R}^2$ and $V_2=\mathbb{R}$.
In this last case
the group operation is
$$
x\circ y:=\left(x_1+y_1, x_2+y_2, x_3+y_3+\frac{x_1y_2- x_2y_1}{2}\right)
$$
where $x=(x_1, x_2, x_3)$ and $y=(y_1, y_2, y_3)$ are two points in $\mathbb{R}^{3}$ and
the generators are the two vector fields
$$ X_1(x)=\left(\begin{array}{c}1 \\0 \\-\frac{x_2}{2}\end{array}\right),\
X_2(x)=\left(\begin{array}{c}0 \\1 \\\frac{x_1}{2}\end{array}\right).
$$
\end{ex}
In the Heisenberg group $\mathbb{H}^1$ the dilations that give the natural rescaling are
$$
\delta_{\lambda}(x)=\delta_{\lambda} (x_1,x_2,x_3)=(\lambda\,x_1,\lambda\,x_2, \lambda^2\,x_3).
$$
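One can verify directly that $\delta_\lambda$ is an automorphism of $(\mathbb{R}^3,\circ)$ (a computation we spell out for convenience): the first two components of $\delta_{\lambda}(x)\circ\delta_{\lambda}(y)$ are clearly $\lambda(x_i+y_i)$, $i=1,2$, while the third one is
$$
\lambda^2 x_3+\lambda^2 y_3+\frac{(\lambda x_1)(\lambda y_2)-(\lambda x_2)(\lambda y_1)}{2}
=\lambda^2\Big(x_3+y_3+\frac{x_1y_2-x_2y_1}{2}\Big),
$$
i.e. $\delta_{\lambda}(x)\circ\delta_{\lambda}(y)=\delta_{\lambda}(x\circ y)$.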
To make the paper more easily readable for mathematicians not used to working in Carnot groups,
we will explain most of the notions and
properties of Carnot groups using the 1-dimensional Heisenberg group $\mathbb{H}=\mathbb{H}^1$ as reference model.\\
Another family of algebraic objects which will play a crucial role in our homogenization problem are the translations. Since the group law is not commutative, in general left translations and right translations will be different. We will always translate points using only the left translations.\\
Using the stratification, a Carnot group can be endowed with a homogeneous norm that induces a homogeneous distance. The homogeneous norm and the homogeneous distance are very important in homogenization problems since they are compatible with rescaling under dilations (as we will see in the properties below).
\begin{defi}[Homogeneous norm and homogeneous distance]\label{homogenousNorm}
A homogeneous norm $\|\cdot\|_h$ is a continuous function from $\mathbb{G}$ to $[0,+\infty)$ such that
\begin{enumerate}
\item
$\|x\|_h=0\ \iff\ x=\mathrm{e}$
\item
$\|x^{-1}\|_h=\|x\|_h$
\item $\|\delta_{\lambda}(x)\|_h=\lambda\|x\|_h, \forall x\in \mathbb{G}, \lambda>0$
\item
$\|x\circ y\|_h\leq \|x\|_h+\|y\|_h, \forall x,y\in \mathbb{G}.$
\end{enumerate}
The homogeneous distance between two points $x,y\in \mathbb{G}$ is
$$
d_h(x,y)=\|y^{-1}\circ x\|_h.
$$
From $\|x\|_h=\|x^{-1}\|_h$ we have that $d_h(x,y)=d_h(y,x)$ and obviously $d_h(x,x)=0$ for all $x,y\in \mathbb{G}$.\\
Moreover, given two points $x, y\in \mathbb{G}\equiv \mathbb{R}^N$, $\big(\delta_{\lambda}(x)\big)^{-1}=\delta_{\lambda}(x^{-1})$ and \\
$\delta_{\lambda}(x)\circ \delta_{\lambda}(y) =\delta_{\lambda}(x\circ y)$.
This implies that
$$d_h\big(\delta_{\lambda}(x),\delta_{\lambda}(y) \big)=\lambda\,d_h(x,y).
$$
\end{defi}
In the case of the 1-dimensional Heisenberg group $\mathbb{H}$ we have
$$
\norma{x}_h=\norma{(x_1,x_2,x_3)}_h=\big((x_1^2+x_2^2)^2+x_3^2\big)^{1/4}.
$$
Moreover it is easy to check that $\mathrm{e}=(0,0,0)$ and $x^{-1}=(-x_1,-x_2,-x_3)$ so
$$
d_h(x,y)=\big(((x_1-y_1)^2+(x_2-y_2)^2)^2+(x_3-y_3)^2\big)^{1/4}.
$$
One can easily check all the properties listed above in the case of the 1-dimensional Heisenberg group.\\
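As a sample, the homogeneity property 3 of Definition \ref{homogenousNorm} follows from
$$
\|\delta_{\lambda}(x)\|_h=\big((\lambda^2x_1^2+\lambda^2x_2^2)^2+\lambda^4x_3^2\big)^{1/4}
=\lambda\,\big((x_1^2+x_2^2)^2+x_3^2\big)^{1/4}=\lambda\|x\|_h.
$$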
For later use, it is very useful to introduce the $m\times N$ matrix associated to the vector fields
$$
\sigma(x):=\big(X_1(x),\dots,X_m(x)\big)^T
$$
e.g. in $\mathbb{H}^1$ the matrix $\sigma(\cdot)$ is the $2\times 3$-matrix given by
\begin{equation}
\label{MatrixHeisenberg}
\sigma(x_1,x_2,x_3)=
\begin{pmatrix}
1 &0& -\frac{x_2}{2}\\
0&1&\frac{x_1}{2}
\end{pmatrix}.
\end{equation}
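Left-invariance of $X_1$ and $X_2$ can be checked directly from the group law (a computation we spell out for convenience): the differential of the left translation $L_z$ is
$$
DL_z(x)=\begin{pmatrix}
1&0&0\\
0&1&0\\
-\frac{z_2}{2}&\frac{z_1}{2}&1
\end{pmatrix},
$$
and applying it to the rows of \eqref{MatrixHeisenberg}, e.g. $X_1(x)=\big(1,0,-\frac{x_2}{2}\big)^T$, gives
$$
DL_z(x)\,X_1(x)=\Big(1,\,0,\,-\frac{z_2+x_2}{2}\Big)^T=X_1(z\circ x),
$$
since the second component of $z\circ x$ is $z_2+x_2$.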
From now on we will always consider Carnot groups written in exponential (or canonical) coordinates.
In fact, in exponential coordinates the vector fields (and so the associated matrix $\sigma(x)$) assume a special form, as shown in the following lemma.
\begin{lemma}
Given a Carnot group in exponential (or canonical) coordinates, the vector fields can be arranged as the rows of an $m\times N$ matrix $\sigma(x)$ of the form
\begin{equation}\label{matrixC}
\sigma(x)=\begin{pmatrix}Id_{m\times m}&A(x)\end{pmatrix}
\end{equation}
where $Id_{m\times m}$ is the $m\times m$ identity matrix and $A(x)$ is an $m \times (N-m)$ matrix whose coefficients are smooth functions depending only on $x_1,\dots, x_m$.\\
Moreover the non-vanishing coefficients of $A(x)=(a_{j,i}(x))$, with $i=1,\dots,N-m$ and $j=1,\dots,m$, are polynomial functions of degree $k-1$ whenever the $(m+i)$-th component rescales as $\lambda^k$ under the dilations $\delta_{\lambda}$ defined in \eqref{Dilations}.
\end{lemma}
For a proof we refer the reader to \cite{BLU}; in particular see \cite[Proposition 1.3.5, Corollary 1.3.19] {BLU} for the polynomial structure and the corresponding homogeneity degree. Remember that $\delta_{\lambda}$-homogeneity corresponds to Euclidean homogeneity whenever the functions depend only on the first $m$ components.\\
The previous lemma is easy to check in the 1-dimensional Heisenberg group (see \eqref{MatrixHeisenberg}), in fact
$a_{1,1}(x_1,x_2)=-\frac{x_2}{2} $ and $a_{2,1}(x_1,x_2)=\frac{x_1}{2} $ are both polynomials of degree 2-1=1. We now give another example for a step 3 Carnot group.
\begin{ex}[Engel group in exponential coordinates]
The Engel group is a Carnot group of step 3 defined on $\mathbb{R}^4$. It can be written as an extension of the Heisenberg group, but for us it is crucial to write it in exponential coordinates (see e.g. \cite{LeDonne}).
The rescaling in the Engel group is given by
$$
\delta_{\lambda}(x_1,x_2,x_3,x_4)=\big(\lambda x_1,\lambda x_2,\lambda^2 x_3,\lambda^3 x_4
\big).
$$
In exponential coordinates the vector fields generating $V_1$ can be written as
$$
X_1(x_1,x_2,x_3,x_4)=\frac{\partial}{\partial x_1}\;\textrm{and}
\;X_2(x_1,x_2,x_3,x_4)
=\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial x_3}+\frac{x_1^2}{2}\frac{\partial}{\partial x_4}.
$$
In this case the corresponding $2\times 4$ matrix consists of the $2\times 2$ identity matrix and a $2\times 2$ matrix $A(x)$ whose coefficients are $a_{1,1}(x)=0=a_{1,2}(x)$, while $a_{2,1}(x)=x_1$, which is a polynomial of degree 1 (in fact the component $2+1=3$ rescales with $k=2$), and
$a_{2,2}(x)=\frac{x_1^2}{2}$, which is a polynomial of degree 2 (in fact the component $2+2=4$ rescales with $k=3$). Thus the previous lemma (see \eqref{matrixC}) is easily verified.
\end{ex}
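In this example the H\"ormander condition can also be verified by hand (we add the computation for completeness):
$$
[X_1,X_2]=\partial_{x_3}+x_1\,\partial_{x_4},\qquad
\big[X_1,[X_1,X_2]\big]=\partial_{x_4},
$$
so $X_1$, $X_2$, $[X_1,X_2]$ and $[X_1,[X_1,X_2]]$ span $\mathbb{R}^4$ at every point, confirming that the step is $r=3$.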
So far, we have briefly recalled the algebraic structure of Carnot groups.
Since Carnot groups are also sub-Riemannian manifolds there is also another important distance to consider: the so called Carnot-Carath\'eodory distance.
Before defining the Carnot-Carath\'eodory distance and its relations with the homogeneous distance and the Euclidean distance, we need to introduce the sub-Riemannian manifold structure associated to a Carnot group.
Consider the left-invariant vector fields $X_1,\dots, X_m$ introduced above on $\mathbb{R}^N$; by identifying the tangent space at the origin with the Lie algebra $g$ (see Definition \ref{defG}), and at any other point by left-translation, the vector fields $X_1,\dots, X_m$ satisfy the H\"ormander condition with step $r$. We recall that the H\"ormander condition states that the Lie algebra induced by the vector fields has to be equal, at any point, to the whole tangent space at that point.\\
Denoting by $\mathcal{H}_x=\textrm{Span}\big(X_1,\dots,X_m\big)$ the distribution spanned by the given left-invariant vector fields, it is possible to define a Riemannian metric
on $\mathcal{H}_x$ induced by the vector fields by taking $<v,w>=\alpha\cdot\beta$, where $\alpha$ and $\beta$ are the $m$-valued coordinate vectors of $v$ and $w$ respectively, w.r.t. the given vector fields.\\
The triple $\big(\mathbb{R}^N,\mathcal{H}_x,<\cdot,\cdot>\big)$ is a sub-Riemannian manifold.
For more details on sub-Riemannian manifolds in general and the manifold structure associated to Carnot groups in particular, we refer respectively to \cite{montgomery} and \cite{BLU}. \\
Next we recall the notion of horizontal (or admissible) curve that will play a crucial role in defining the Carnot-Carath\'eodory distance and
later in the variational formulas.
\begin{defi}\label{horcurv}
An absolutely continuous curve $\xi:[0,T]\to \mathbb{R}^N$ is called horizontal if
there exists $\alpha^{\xi}:[0,T ]\to \mathbb{R}^m$ measurable such that
\begin{equation}
\label{EQ_Horizontal}
\dot{\xi}(s)=\sum_{i=1}^{m}\alpha_i^{\xi}(s)X_i(\xi(s)),\quad a.e. \;s\in (0,T),
\end{equation}
where the vector fields $X_i$ are those introduced in Definition \ref{defG}.\\
The vector $\alpha^{\xi}$ is called {\em horizontal velocity} of the curve.
\end{defi}
\begin{rem}
\label{RemarkLinearIndepedent}
Note that whenever $X_1,\dots, X_m$ are linearly independent, as they are always in the case of Carnot groups (see e.g. \cite[Ch. 1]{BLU}), the vector $\alpha^{\xi}$ is unique up to a measure zero set.
\end{rem}
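To illustrate Definition \ref{horcurv} and the ``zig-zag'' heuristic from the Introduction, here is a small numerical sketch (not part of the original text) integrating the horizontal ODE in $\mathbb{H}^1$ with the vector fields of \eqref{MatrixHeisenberg}: a constant horizontal velocity produces a straight line staying in the plane $\{x_3=0\}$, while the rotating velocity $\alpha(s)=(\cos s,\sin s)$ returns to the origin horizontally after one period but gains $x_3=\pi$.

```python
# Numerical sketch (illustration only): horizontal curves in the Heisenberg
# group H^1, with the vector fields of \eqref{MatrixHeisenberg}:
#   xi' = (a1, a2, (x1*a2 - x2*a1)/2).
# A constant horizontal velocity yields a straight line in {x3 = 0}, while
# the rotating velocity (cos s, sin s) comes back to the origin horizontally
# but gains x3 = pi over one period -- motion in the "forbidden" direction.
import math

def rhs(s, x, alpha):
    """Right-hand side of the horizontal ODE in H^1."""
    a1, a2 = alpha(s)
    return (a1, a2, 0.5 * (x[0] * a2 - x[1] * a1))

def integrate(alpha, T, n=20000):
    """Classical RK4 integration of the horizontal ODE from the origin."""
    h = T / n
    x, s = (0.0, 0.0, 0.0), 0.0
    for _ in range(n):
        k1 = rhs(s, x, alpha)
        k2 = rhs(s + h/2, tuple(x[i] + h/2 * k1[i] for i in range(3)), alpha)
        k3 = rhs(s + h/2, tuple(x[i] + h/2 * k2[i] for i in range(3)), alpha)
        k4 = rhs(s + h, tuple(x[i] + h * k3[i] for i in range(3)), alpha)
        x = tuple(x[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                  for i in range(3))
        s += h
    return x

line = integrate(lambda s: (1.0, 2.0), 1.0)                   # stays flat
loop = integrate(lambda s: (math.cos(s), math.sin(s)), 2 * math.pi)
```

The first curve shows why constant-horizontal-speed lines only sweep an $m$-dimensional submanifold; the second shows how commutators let admissible paths reach the remaining directions.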
Let us define the Carnot-Carath\'eodory distance (briefly C-C distance) associated to a family of vector fields $\mathcal{X}=\{X_1,\dots,X_m\}$.
\begin{defi}
\label{CC-distance_definiton}
Given two points $x,y\in \mathbb{R}^N$ and a family of smooth vector fields $X_1,\dots, X_m$, we define the Carnot-Carath\'eodory distance as the minimal-length (or geodesic) distance among all horizontal curves joining $x$ to $y$, that is
$$
d_{CC}(x,y)=\inf\left\{
\int_0^T|\alpha^{\xi}(t)|\,dt\,\bigg|\, \xi(0)=x,\,\xi(T)=y\, \textrm{and $\xi$ is horizontal}
\right\},
$$
where $|\alpha^{\xi}(t)|$ is the Euclidean norm of the $m$-valued horizontal velocity.
\end{defi}
Whenever $X_1,\dots, X_m$ satisfy the H\"ormander condition (as in our case of Carnot groups),
$d_{CC}(x,y)<+\infty$ for all $x,y\in\mathbb{R}^N$, and $d_{CC}$ is continuous w.r.t. the Euclidean topology on $\mathbb{R}^N$.\\
We denote by $\|x\|_{CC}:=d_{CC}(x,0)$ the Carnot-Carath\'eodory norm.\\
\begin{rem}
The Carnot-Carath\'eodory distance is globally equivalent to the so-called minimal-time (or control) distance that is defined as
$$\hat{d}(x,y):=\inf\{T\geq 0 \,|\, \exists\ \xi\ \textrm{subunit horizontal in}\ [0,T]\ \textrm{with}\ \xi(0)=x,\ \xi(T)=y\},$$
where an absolutely continuous curve $\xi:[0,T]\to \mathbb{R}^N$ is called {\em subunit horizontal} if it satisfies \eqref{EQ_Horizontal} and $|\alpha^{\xi}(t)|\leq 1$ for a.e. $t\in [0,T]$.\\
\end{rem}
Note that, even if it is possible to give an explicit formulation for the Carnot-Carath\'eodory distance in $\mathbb{H}$ (by computing the geodesics), it is extremely complicated, so we omit it.\\
Since we will need to use both the Carnot-Carath\'eodory distance and the homogeneous distance, it is important to recall the relation between these two distances, and between them and the standard Euclidean distance in $\mathbb{R}^N$.
\begin{lemma}
\label{ReelationDistances}
Let $d_h$ and $d_{CC}$ be the homogeneous distance and the Carnot-Carath\'eodory distance defined respectively in Definitions \ref{homogenousNorm} and \ref{CC-distance_definiton}.
Then for any compact $K\subset\mathbb{R}^N$ there exists a positive constant $C_K$ such that
$$ C^{-1}_K|x-y|\leq d_{CC}(x,y)\leq C_K |x-y|^{1/r},$$
where $r$ is the step of the Carnot group and $|x-y|$ denotes here the standard Euclidean distance in $\mathbb{R}^N$.\\
The same statement holds with $d_{CC}$ replaced by $d_h$.\\
Moreover $d_h$ and $d_{CC}$ are equivalent distances on compact sets, i.e. for any compact $K\subset\mathbb{R}^N$ there exists a positive constant $c_K$ such that
$$ c^{-1}_Kd_{h}(x,y)\leq d_{CC}(x,y)\leq c_K d_{h}(x,y).$$
\end{lemma}
For the proof we refer to the monograph \cite{BLU}.\\
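The exponent $1/r$ in Lemma \ref{ReelationDistances} is already attained in $\mathbb{H}^1$ (where $r=2$): for $x=(0,0,t)$ one has
$$
\|x\|_h=\big(0+t^2\big)^{1/4}=|t|^{1/2},
$$
while the Euclidean norm of $x$ is $|t|$; by the equivalence of $d_h$ and $d_{CC}$ on compact sets, $d_{CC}(x,0)$ behaves like $|t|^{1/2}$ as well.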
In the following Lemma we collect several properties of horizontal curves that will be very useful later.
\begin{lemma}\label{leftinv}
Let
$\xi$ be a horizontal curve with velocity
$\alpha^{\xi}(s)$
such that
$\xi(0)=x$ and $\xi(t)=y$.
Then the following properties hold:
\begin{enumerate}
\item[(i)] For any $z\in\mathbb{R}^N$, $\widetilde\xi(s):=z\circ\xi(s)$ is still horizontal
with
$\alpha^{\widetilde\xi}(s)=\alpha^{\xi}(s)$,
$\widetilde\xi(0)=z\circ x$ and $\widetilde\xi(t)=z\circ y$.
\item[(ii)] For any $C>0$, $\eta(s):=\xi(Cs)$ is still horizontal
with
$\alpha^{\eta}(s)=C\alpha^{\xi}(Cs)$,
$\eta(0)=x$ and $\eta(t/C)=y$.
\item[(iii)] For any $\lambda>0$, $\hat\xi(s):=\delta_\lambda(\xi(s))$
is still horizontal
with
$\alpha^{\hat\xi}(s)=\lambda\alpha^{\xi}(s)$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item[(i)] Denote by $L_z$ the left translation w.r.t. $z$ (i.e. $L_z(x)=z\circ x$) and by $DL_z$ the differential of the left translation $L_z$. We have
\begin{eqnarray*}
\dot{\widetilde\xi}(s)&=&DL_z(\xi(s))\;\dot{\xi}(s)=
DL_z(\xi(s))\;\bigg(
\sum_{i=1}^{m}\alpha_i^{\xi}(s)X_i(\xi(s))\bigg)=\\
&=&\sum_{i=1}^{m}\alpha_i^{\xi}(s)DL_z(\xi(s))\;X_i(\xi(s))=
\sum_{i=1}^{m}\alpha_i^{\xi}(s)X_i(z\circ\xi(s))=\\
&=& \sum_{i=1}^{m}\alpha_i^{\xi}(s)X_i(\widetilde\xi(s)),
\end{eqnarray*}
where we have used the fact that the vector fields $X_i$ are left-invariant by definition, i.e.
$DL_z(\xi(s))\; X_i(\xi(s))=X_i(z\circ\xi(s))$, for all $z$.\\
\item[(ii)]
For any $C>0$, given $\eta(s)=\xi(Cs)$, then
\begin{equation}
\label{cambiooriz}
\dot{\eta}(s)=C\dot{\xi}(C s)= C \bigg(\sum_{i=1}^{m}\alpha_i^{\xi}(C s)
X_i(\xi(C s))\bigg)=\sum_{i=1}^{m}C\alpha^{\xi}_i(C s)
X_i(\eta(s)),
\end{equation}
so $\eta$ is horizontal with $\alpha^{\eta}(s)=C\alpha^{\xi}(Cs)$.\\
\item[(iii)]
Using the fact that we are in exponential coordinates and the definition of dilations as automorphisms of the group by the exponential map, that is:
$$
\delta_{\lambda}\left(
\textrm{exp}\left(
\sum_{i=1}^r\sum_{j=1}^{m_i}g_{j,i} X_{j,i}
\right)
\right)
=
\textrm{exp}
\left(
\sum_{i=1}^r\sum_{j=1}^{m_i}\lambda^ig_{j,i} X_{j,i}
\right),
$$
where $X_{j,i}$ for $j=1,\dots, m_i$ are a basis for the layer $V_i$,
and
$g_{j,i}$ are the associated exponential coordinates of the point $g\in \mathbb{G}=\mathbb{R}^N$.
From the previous formula written for horizontal curves, that means $i=1$ and $j=1,\dots, m_1=m$, it follows immediately that
$\hat\xi(s):=\delta_\lambda(\xi(s))$
is horizontal
and
$\alpha^{\hat\xi}(s)=\lambda\alpha^{\xi}(s)$.
\end{enumerate}
\end{proof}
The following lemma shows that the supremum distance between two horizontal curves can be controlled by the $L^1$-norm of the difference of the associated horizontal velocities.
\begin{lemma}
\label{aprroxCurveLemma}
Consider two measurable functions $\alpha,\beta:[0,T]\to \mathbb{R}^m$ and the associated horizontal curves $\xi^{\alpha},\xi^{\beta}$ starting from the same initial point, i.e.
$$
\dot{\xi}^{\alpha}(s)=\sum_{i=1}^m \alpha_i (s)X_i\big(\xi^{\alpha}(s)\big),\quad
\dot{\xi}^{\beta}(s)=\sum_{i=1}^m \beta_i (s)X_i\big(\xi^{\beta}(s)\big),
\quad \xi^{\alpha}(0)=\xi^{\beta}(0).
$$
If $\alpha,\beta$ are equi-bounded in $L^1(0,T)$, then there exists a positive constant $C>0$ such that
$$
\norma{\xi^\alpha-\xi^\beta}_{\infty}\leq C \norma{\alpha-\beta}_{L^1(0,T)}.
$$
\end{lemma}
\begin{proof}
The proof is trivial. In fact:
$$
\xi^\alpha(t)-\xi^\beta(t)=\int_0^t \big[\dot{\xi}^\alpha(s)-\dot{\xi}^\beta(s)\big]d s=
\sum_{i=1}^m\int_0^t \left[ \alpha_i (s)X_i\big(\xi^{\alpha}(s)\big)- \beta_i (s)X_i\big(\xi^{\beta}(s)\big) \right] d s
$$
At this point we add and subtract $\alpha_i (s)X_i\big(\xi^{\beta}(s)\big)$, and we use that the vector fields are smooth
(so in particular locally Lipschitz continuous) and that $\alpha,\beta$ are equi-bounded in $L^1(0,T)$,
which implies that $\xi^{\alpha},\xi^{\beta}$ are equi-bounded as well.
Hence by Gronwall's inequality one can easily conclude.
\end{proof}
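The estimate of Lemma \ref{aprroxCurveLemma} can be observed numerically. The following Python sketch (illustrative only) integrates two horizontal curves for the Heisenberg vector fields with nearby controls and checks the linear control of the sup distance by the $L^1$ distance; the factor $5$ is an empirical choice for this configuration, not the constant of the lemma.

```python
import numpy as np

def sigma(x):
    # rows: coefficients of the Heisenberg vector fields X1, X2
    return np.array([[1.0, 0.0, -x[1] / 2.0],
                     [0.0, 1.0,  x[0] / 2.0]])

def horizontal_curve(ctrl, T, n):
    # Euler scheme for xi'(s) = sigma(xi(s))^T ctrl(s), xi(0) = 0
    dt = T / n
    xi = np.zeros((n + 1, 3))
    for k in range(n):
        xi[k + 1] = xi[k] + dt * sigma(xi[k]).T @ ctrl(k * dt)
    return xi

T, n = 1.0, 10000
a = lambda s: np.array([1.0, np.sin(2 * np.pi * s)])
b = lambda s: np.array([1.0 + 0.1 * s, np.sin(2 * np.pi * s) - 0.05])

xa = horizontal_curve(a, T, n)
xb = horizontal_curve(b, T, n)
sup_dist = np.max(np.linalg.norm(xa - xb, axis=1))
l1_dist = sum(np.linalg.norm(a(k * T / n) - b(k * T / n)) * T / n for k in range(n))

# sup-norm gap controlled linearly by the L^1 gap of the controls;
# the constant 5 is empirical for this configuration
assert sup_dist <= 5.0 * l1_dist
```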
The manifold structure is crucial when one works with PDEs. In fact vector fields allow us to define naturally derivatives of any order, just considering how a vector field acts on smooth functions. Since we are interested in first-order Hamilton-Jacobi equations we introduce only the first derivatives.
\begin{defi}[Horizontal gradient] For a Carnot group defined on $\mathbb{R}^N$, consider the family of vector fields $\mathcal{X}=\{X_1,\dots,X_m\}$ associated to the Carnot group.
The horizontal gradient is defined as
$$
\mathcal{X} u:=\big(X_1u\big)X_1+\dots +\big(X_mu\big)X_m.
$$
\end{defi}
\begin{rem}
Note that $\mathcal{X} u$ is always an element of the distribution $\mathcal{H}$ since it is defined as a linear combination of the vector fields that span the distribution.\\
For the sake of simplicity we will often identify the horizontal gradient (which is an $N$-dimensional object in $\mathcal{H}$) with its coordinate vector $\nabla_{\mathcal{X}} u=(X_1 u,\dots ,X_mu)^T$, which is instead an element of $\mathbb{R}^m$.
Trivially $\nabla_{\mathcal{X}}u=\sigma(x) \nabla u$ where $\nabla u$ denotes the standard (Euclidean) gradient of $u$ and
$\sigma$ is defined in \eqref{matrixC}.
\end{rem}
\begin{ex}
In the case of $\mathbb{H}$ the horizontal gradient can be explicitly written as
$$
\nabla_{\mathcal{X}} u=\begin{pmatrix}
u_{x_1}-\frac{x_2}{2}u_{x_3}\\
u_{x_2}+\frac{x_1}{2} u_{x_3}
\end{pmatrix}
=
\begin{pmatrix}
1 &0& -\frac{x_2}{2}\\
0&1&\frac{x_1}{2}
\end{pmatrix}\nabla u
\in \mathbb{R}^2,
$$
while $\mathcal{X} u=\left(u_{x_1}-\frac{x_2}{2}u_{x_3}
\right) X_1(x_1,x_2,x_3)+\left(u_{x_2}+\frac{x_1}{2}u_{x_3}
\right) X_2(x_1,x_2,x_3)
\in \mathbb{R}^3
$.
\end{ex}
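The identity $\nabla_{\mathcal{X}}u=\sigma(x)\nabla u$ of the preceding example can be tested numerically. A minimal Python sketch, in which the test function $u$ is an arbitrary choice:

```python
import numpy as np

def sigma(x):
    # coefficient matrix of the Heisenberg vector fields X1, X2
    return np.array([[1.0, 0.0, -x[1] / 2.0],
                     [0.0, 1.0,  x[0] / 2.0]])

u = lambda x: x[0] ** 2 + x[1] * x[2]     # arbitrary smooth test function

def euclidean_grad(f, x, h=1e-6):
    # central finite-difference Euclidean gradient
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([1.0, 2.0, 3.0])
hg = sigma(x) @ euclidean_grad(u, x)      # coordinates of the horizontal gradient

# explicit formulas: X1 u = u_{x1} - (x2/2) u_{x3} = 2 - 1*2 = 0,
#                    X2 u = u_{x2} + (x1/2) u_{x3} = 3 + 0.5*2 = 4
assert np.allclose(hg, [0.0, 4.0], atol=1e-4)
```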
\section{Statement of the problem.}
For any $\e>0$, we look at the following family of randomly perturbed problems:
\begin{equation}
\label{ApproxPr}
\left\{
\begin{aligned}
&
u^{\varepsilon}_t+H\left(\delta_{1/\varepsilon}(x), \sigma(x)\nabla u^{\varepsilon},\omega\right)=0,\ x\in\mathbb{R}^N, \;\omega \in \Omega, \ t>0,\\
& u^{\varepsilon}(0,x,\omega)=g(x),\ x\in \mathbb{R}^N, \;\omega \in \Omega,
\end{aligned}\right.
\end{equation}
where $\big(\Omega,\mathcal{F},\mathbb{P}\big)$ is a probability space, $\sigma(x)$ is a smooth $m\times N$ matrix (with $m\leq N$) whose rows are the coefficients of the vector fields $X_1, X_2,\dots,X_m$ associated to a Carnot group, and $\delta_{\lambda}(\cdot)$ are the anisotropic dilations defined by the Carnot group structure.\\
We assume that
the Hamiltonian $H:\mathbb{R}^N\times \mathbb{R}^m\times \Omega\to \mathbb{R}$ satisfies the following assumptions w.r.t. $x\in \mathbb{R}^N$, $q=\sigma(x)p\in \mathbb{R}^m$ and $\omega\in \Omega$:
\begin{description}
\item[{\bf (H1)}] $ q\mapsto H(x,q,\omega)$ is convex in $q$;
\item[{\bf (H2)}] $\exists \; \overline{C}_1>0, \; \overline{\lambda}>1$ such that
$$ \overline{C}^{-1}_1(|q|^{\overline{\lambda}}-1)
\leq H(x,q,\omega)
\leq \overline{C}_1(|q|^{\overline{\lambda}}+1), \; \forall\; (x,q,\omega)\in \mathbb{R}^N\times\mathbb{R}^m\times \Omega;
$$
\item[{\bf(H3)}] there exists $\overline{m}:[0,+\infty)\to [0,+\infty)$ concave, monotone increasing
with $\overline{m}(0^+)=0$ and such that for all $x,y\in \mathbb{R}^N,q\in \mathbb{R}^m,\omega\in \Omega$
$$
|H(x,q,\omega)-H(y,q,\omega)|\leq \overline{m}\big(\|-x\circ y\|_h(1+|q|^{\overline{\lambda}})\big);$$
\item[{\bf(H4)}] $ \forall\, q\in \mathbb{R}^m$ the function $(x,\omega)\mapsto H(x,q,\omega)$
is a stationary, ergodic random field on $\mathbb{R}^N\times \Omega$ w.r.t. the unitary translation operator associated to
the Carnot group structure.
\end{description}
Recall that $|q|$ is the usual Euclidean norm in $\mathbb{R}^m$ while $\norma{x}_h$ is the homogeneous distance in Definition \ref{homogenousNorm}, and $x^{-1}=-x$ in exponential coordinates.
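The identity $x^{-1}=-x$ in exponential coordinates can be made concrete in the Heisenberg group, where the group law in these coordinates is $(x\circ y)_i=x_i+y_i$ for $i=1,2$ and $(x\circ y)_3=x_3+y_3+\frac{1}{2}(x_1y_2-x_2y_1)$. A small Python check (illustrative only):

```python
import numpy as np

def heis(x, y):
    # Heisenberg group law in exponential coordinates
    return np.array([x[0] + y[0],
                     x[1] + y[1],
                     x[2] + y[2] + (x[0] * y[1] - x[1] * y[0]) / 2.0])

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 0.7, -1.1])
z = np.array([-0.4, 1.2, 2.0])

# in exponential coordinates the group inverse is the Euclidean negative
assert np.allclose(heis(x, -x), np.zeros(3))
assert np.allclose(heis(-x, x), np.zeros(3))
# the group law is associative
assert np.allclose(heis(heis(x, y), z), heis(x, heis(y, z)))
```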
\begin{ex}[Main model]
\label{exampleMainModels} {\rm
The main model that we have in mind is:
\begin{equation}
\label{model1}
H(x,q,\omega)= a(x,\omega)\frac{|q|^{\beta}}{\beta} +V(x,\omega),
\end{equation}
with $\beta>1$, where $V$ and $a(x,\omega)$ are bounded,
satisfy {\bf(H3)} and {\bf(H4)}, and $a$ is also uniformly strictly positive.
}\end{ex}
\begin{rem} [Non-coercivity of the Hamiltonian] Note that the main difference between these assumptions and the assumptions in \cite{S1}
is that {\em the Hamiltonian is not anymore coercive w.r.t. the gradient but only w.r.t. the lower dimensional horizontal gradient} (assumption {\bf(H2)}).
This lack of coercivity w.r.t. the total gradient variable $p$ is what makes the approach in \cite{S1} and \cite{RT} fail and leads to the main technical difficulties.
\end{rem}
\begin{rem}
Assumption {\bf(H3)} is adapted to the anisotropic dilations in the group. Nevertheless, using Lemma \ref{ReelationDistances}, this assumption can be rewritten in terms of the standard Euclidean distance (with an exponent depending on the step of the group). In particular, if $H(x,p,\omega)$ is Lipschitz continuous in $x$ w.r.t. the homogeneous distance, then it is only H\"older continuous in $x$ w.r.t. the standard Euclidean distance, with exponent $1/r$, where $r$ is the step of the Carnot group; e.g. in the Heisenberg group it would be $1/2$-H\"older continuous.
\end{rem}
We next write more explicitly assumption {\bf (H4)} to show how this adapts to the algebraic structure of the Carnot group.\\
Assumption {\bf (H4)} means that there exists a family of measure-preserving maps $\tau_x:\Omega\to \Omega,$ indexed by either $x$ in the Carnot group or $x$ in a discrete version of the Carnot group ($\mathbb Z^N$ as subset of the Carnot group equipped with the group operation of the Carnot group) with the following properties:
\begin{itemize}
\item $\tau_0=id$
\item $\tau_x(\tau_y(\omega))=\tau_{x\circ y}(\omega)$ for all $x,y\in \mathbb R^N$ (or $\mathbb Z^N$, respectively) and almost all $\omega\in \Omega.$
\item (Stationarity) $H(x,q,\omega)=H(0,q,\tau_x(\omega))$ for all $x\in \mathbb R^N$ (or $\mathbb Z^N$, respectively) and almost all $\omega\in \Omega.$
\item (Ergodicity) If $A\subseteq\Omega$ is such that $\tau_x(A)=A$ for all $x\in \mathbb R^N$ (or $\mathbb Z^N$, respectively), then ${\mathbb P}(A)\in \{0,1\}.$
\end{itemize}
Examples are short-correlated Euclidean-stationary random fields (by Borel-Cantelli) or (for the discrete Heisenberg group) Heisenberg-periodic sets where an independent identically distributed random variable is chosen for each cell.
\begin{rem}We would like to add some remarks concerning the assumptions on ergodicity and stationarity.
\begin{enumerate}
\item Note that we apply the ergodic theorem only to a one-dimensional {\em Abelian} subgroup of the Carnot group ($\mathcal{X}$-lines), so for convergence we are completely in the framework of classical ergodic theory, see proof of Theorem \ref{convzero}. The ergodicity with respect to actions of the full group is only used to establish that the limit is deterministic.
\item In the case of Example \ref{model1}, and short-correlated random coefficients, the convergence to a deterministic limit in Theorem \ref{convzero} follows already from the law of large numbers.
\item For examples of ergodic actions of the Heisenberg group on a probability space $(\Omega, {\mathcal F}, \mathbb{P})$ see e.g. \cite{Da}. In order to obtain from this a model like Example \ref{model1} with $a(x,\omega)= 1$,
take a bounded random variable $V:\Omega\to \mathbb{R}$ and set $V(x,\omega)= V(\tau_x(\omega))$.
\end{enumerate}
\end{rem}
\begin{ex}
An explicit example for a model satisfying {\bf (H4)} in the special case of Example \ref{model1} and in Heisenberg group $\mathbb{H}$ can be constructed in the following way:
\noindent
Take three independent random fields on $\mathbb{R},$ $f_i(x,\omega):\mathbb{R}\times \Omega\to \mathbb{R},$ for $i=1,2,3,$ such that they are stationary ergodic w.r.t.
the action of $\mathbb{R}.$ Then for a Borel-measurable bounded function $G:\mathbb{R}^3\to \mathbb{R}$ the random potential
$$ V(x_1,x_2,x_3,\omega):=G(f_1(x_1,\omega),f_2(x_2,\omega), f_3(x_3,\omega))$$
is Heisenberg-stationary. Indeed, by independence and one-dimensional stationarity we have for any open intervals $(a_1,a_2), (b_1,b_2), (c_1,c_2)$ that
\begin{eqnarray*}
&&{\mathbb P}\{(f_1(x_1+r),f_2(x_2+s),f_3(x_3+t+1/2(x_1s-x_2r)))\in (a_1,a_2)\times(b_1,b_2)\times(c_1,c_2)\}\\&& =
{\mathbb P}\{f_1(x_1+r)\in (a_1,a_2)\}{\mathbb P}\{f_2(x_2+s)\in (b_1,b_2)\}\\&& \times{\mathbb P}\{f_3(x_3+t+1/2(x_1s-x_2r))\in(c_1,c_2)\}\\ &&=
{\mathbb P}\{f_1(x_1)\in (a_1,a_2)\}{\mathbb P}\{f_2(x_2)\in (b_1,b_2)\}{\mathbb P}\{f_3(x_3)\in(c_1,c_2)\}\\ &&={\mathbb P}\{(f_1(x_1),f_2(x_2),f_3(x_3))\in (a_1,a_2)\times(b_1,b_2)\times(c_1,c_2)\}.
\end{eqnarray*}
Since open rectangles generate the Borel-$\sigma$-algebra, the result follows.
\end{ex}
We introduce
\begin{equation}\label{rapprepsilon}
u^{\varepsilon}(t,x,\omega):=\inf\limits_{y\in \mathbb{R}^N}\big\{g(y)+L^{\varepsilon}(x,y,t,\omega)\big\}
\end{equation}
with
\begin{equation}\label{Lepsilon}
L^{\varepsilon}(x,y,t,\omega):=\inf\limits_{\xi\in \mathcal{A}^t_{y,x}} \int_0^t H^*\left(\delta_{1/\varepsilon}(\xi(s)), \alpha^{\xi}(s),\omega\right)ds
\end{equation}
where
$$
\mathcal{A}^t_{y,x}:=\big\{\xi\in W^{1,\infty}\big((0,t)\big)\,\big|\, \xi \ \;\textrm{horizontal curve such that}\; \xi(0)=y\;\textrm{and}\;\xi(t)=x\big\},
$$
while $H^*$ denotes the Legendre-Fenchel transform of $H$ w.r.t. $q\in \mathbb{R}^m$, that is
$$H^*(x,q,\omega):=\sup_{p\in \mathbb{R}^m}\{p\cdot q-H(x,p,\omega)\}$$ and
$\alpha^{\xi}(s)$ is the $m$-valued measurable function corresponding to the horizontal velocity of $\xi(s)$ defined in \eqref{EQ_Horizontal}.
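In the special case of an $x$-independent Hamiltonian on the Abelian group $(\mathbb{R},+)$, formula \eqref{rapprepsilon} reduces to the classical Hopf-Lax inf-convolution. The following Python sketch (a simplified one-dimensional Euclidean illustration, not the Carnot-group formula itself) checks the discrete inf-convolution against the closed-form solution for $H(p)=p^2/2$, $L(q)=q^2/2$ and $g(y)=y^2$, for which $u(t,x)=x^2/(1+2t)$.

```python
import numpy as np

def hopf_lax(g, L, t, x, ys):
    # discrete inf-convolution u(t,x) = min_y [ g(y) + t * L((x - y)/t) ]
    return np.min(g(ys) + t * L((x - ys) / t))

g = lambda y: y ** 2           # quadratic initial datum
L = lambda q: q ** 2 / 2.0     # Lagrangian of H(p) = p^2 / 2

ys = np.linspace(-10.0, 10.0, 400001)
t, x = 0.5, 1.3
u_num = hopf_lax(g, L, t, x, ys)
u_exact = x ** 2 / (1.0 + 2.0 * t)   # closed-form solution for this datum
assert abs(u_num - u_exact) < 1e-6
```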
\begin{theorem}\label{th1}
We assume $g$ bounded and Euclidean Lipschitz continuous,
and that the Hamiltonian $H$ satisfies {\bf(H1)-(H3)}. Then, for all fixed $\omega\in \Omega$, $u^{\varepsilon}(t,x,\omega)$ given by formula \eqref{rapprepsilon} is the unique $BUC$ viscosity solution of problem \eqref{ApproxPr}.
\end{theorem}
The proof of this theorem will be given in the Appendix.\\
From now on we use the notation:
\begin{equation}
\label{Lagrangian}
L(x,q,\omega):=H^*(x,q,\omega)=\sup_{p\in \mathbb{R}^m} \{ p\cdot q-H(x,p,\omega)\};
\end{equation}
$
L(x,\cdot,\omega)
$ is the Legendre-Fenchel transform of the Hamiltonian $H(x,\cdot ,\omega)$ taken w.r.t. $q\in\mathbb{R}^m$, for each $x\in \mathbb{R}^N$ and $\omega\in \Omega$ fixed, and it is called {\em Lagrangian} associated to the given Hamiltonian.
In the following lemma we show how the properties of the Hamiltonian pass to the associated Lagrangian.
\begin{lemma}\label{PropertiesLagrangian}
If $H(x,q,\omega)$ satisfies {\bf (H1)-(H4)} and $
L(x,q,\omega)
$ is the associated Lagrangian defined by \eqref{Lagrangian}, then $L$ satisfies
\begin{description}
\item[{\bf (L1)}]
$ q\mapsto L(x,q,\omega)$ is convex, for all $(x,\omega)\in \mathbb{R}^N\times \Omega$;
\item[{\bf (L2)}]
there exists $C_1>0$ such that
$$
C^{-1}_1(|q|^{\lambda}-1)
\leq L(x,q,\omega)
\leq C_1(|q|^{\lambda}+1),
\quad\forall\; (x,q,\omega)\in \mathbb{R}^N\times\mathbb{R}^m\times \Omega,$$
where $\lambda=\overline{\lambda}^*:=\frac{\overline{\lambda}}{\overline{\lambda}-1}$ (i.e., the conjugate of the constant $\overline{\lambda}$ defined in assumption {\bf (H2)});
\item[{\bf (L3)}] for all $R>0$, there exists $ m_R:[0,+\infty)\to [0,+\infty)$
concave, monotone increasing, with $m_R(0^+)=0$ and such that for all
$x,y\in \mathbb{R}^N,\omega\in \Omega$
$$ |L(x,q,\omega)-L(y,q,\omega)|\leq
m_R\big(\|-x\circ y\|_h\big),\quad \forall\;q\in B_R(0),
$$
where $B_R(0)$ is a (Euclidean) ball in $\mathbb{R}^m$ of radius $R$;
\item[{\bf (L4)}]
$\forall\, q\in \mathbb{R}^m$ the function $(x,\omega)\mapsto L(x,q,\omega)$
is a stationary, ergodic random
field on $\mathbb{R}^N\times \Omega$ w.r.t. the unitary translation operator associated to
the Carnot group structure;
\item[{\bf (L5)}]
$ L^*=H$.
\end{description}
\end{lemma}
\begin{proof} Properties {\bf (L1)} and {\bf (L5)} follow immediately by definition of $L=H^*$.
Property {\bf (L2)} comes trivially from {\bf (H2)} and \eqref{Lagrangian} taking $\lambda=\overline{\lambda}^{\;*}$, the conjugate exponent of $\overline \lambda$.
The proofs of {\bf (L3)} and {\bf (L4)} are also immediate: one can find a detailed proof of {\bf (L3)} in \cite[Theorem A.2.6]{CS} while {\bf (L4)} comes directly by the definition of $L=H^*$.
\end{proof}
\begin{ex}
\label{ExampleModelsLagrangian}
In the case of the model \eqref{model1} the associated Lagrangian is
\begin{equation}
\label{model1Lagrangian}
L(x,q,\omega)=
b(x,\omega)\frac{|q|^{\beta^*}}{\beta^*}-V(x,\omega),
\end{equation}
with $\beta^*=\frac{\beta}{\beta-1}$ and $b(x,\omega)=\frac{1}{a(x,\omega)^{\frac{1}{\beta -1}}}.$
\end{ex}
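The conjugate-exponent computation behind Example \ref{ExampleModelsLagrangian} can be verified numerically for the kinetic part (the potential only shifts the supremum by a constant). The following Python sketch evaluates $\sup_{p}\{p\cdot q-a|p|^{\beta}/\beta\}$ by a one-dimensional search over $r=|p|$, which is justified by radial symmetry, and compares it with $b\,|q|^{\beta^*}/\beta^*$, $b=a^{-1/(\beta-1)}$; the values of $a$ and $\beta$ are arbitrary choices.

```python
import numpy as np

a, beta = 2.0, 3.0
beta_star = beta / (beta - 1.0)          # conjugate exponent
b = a ** (-1.0 / (beta - 1.0))

def legendre(q_norm, rs):
    # sup_p { p.q - a|p|^beta / beta }, reduced to r = |p| by radial symmetry
    return np.max(rs * q_norm - a * rs ** beta / beta)

rs = np.linspace(0.0, 10.0, 2000001)
for q_norm in [0.5, 1.0, 2.5]:
    num = legendre(q_norm, rs)
    exact = b * q_norm ** beta_star / beta_star
    assert abs(num - exact) < 1e-6
```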
Here we state the main results of this paper. The proof will be given in Section \ref{HomogSection}.
\begin{theorem}\label{mainTH}
Consider the problem \eqref{ApproxPr} with $g:\mathbb{R}^N\to \mathbb{R}$ bounded and Lipschitz continuous. Assume that the Hamiltonian
$H(x,p,\omega)$ satisfies assumptions {\bf(H1)-(H4)} and for $L=H^*$
the following additional assumption holds:
\begin{description}
\item[{(L6)}] there exist constants $ C>0$ and $ \lambda>1$ such that
$$
|L(x,p,\omega)-L(x,s\,p,\omega)|
\le C\big|1-|s|^\lambda\big| \,|L(x,p,\omega)|,
$$
for all $ s\in \mathbb{R},\;x\in \mathbb{R}^N, p\in\mathbb{R}^m, \; \omega\in\Omega$.
\end{description}
Then the viscosity solutions $u^{\varepsilon}(t,x,\omega)$ of problem~\eqref{ApproxPr} converge locally uniformly in $x$ and $t>0$ and a.s. in $\omega\in \Omega$ to the unique solution $u(t,x)$ of the deterministic problem \eqref{LimitProblem}, where the effective Hamiltonian $\overline H(q)$ is defined as $\overline H(q)= \overline L ^*(q)$ and $\overline L(q)$ is the effective Lagrangian defined by limit \eqref{EffectiveLagrangian}.
\end{theorem}
Now we give a class of operators where we can apply the previous result.
\begin{corollary}\label{cor}
Consider the problem \eqref{ApproxPr} with $g:\mathbb{R}^N\to \mathbb{R}$ bounded and Lipschitz continuous. Assume that the Hamiltonian
$H(x,p,\omega)$ satisfies assumptions {\bf(H1)-(H4)} and moreover
\begin{equation}
\label{LambdaHomogeneity}
H(x,p,\omega)=H_1(x,p,\omega)+H_2(x,\omega),
\end{equation}
with $H_1(x,p,\omega)$ $\lambda$-homogeneous in $p$ (in the sense that $H_1(x,s\,p,\omega)=|s|^{\lambda}H_1(x,p,\omega)$ for all $s\in \mathbb{R}$). Then
the viscosity solutions $u^{\varepsilon}(t,x,\omega)$ of \eqref{ApproxPr} converge locally uniformly in $x$ and $t>0$ and a.s. in $\omega\in \Omega$ to the unique solution $\bar u(t,x)$ of the deterministic problem \eqref{LimitProblem}, where the effective Hamiltonian $\overline H(q)$ is defined as $\overline H(q)= \overline L ^*(q)$ and $\overline L(q)$ is the effective Lagrangian defined by limit \eqref{EffectiveLagrangian}.
\end{corollary}
\begin{proof}[Proof of Corollary \ref{cor}]
We need only to remark that, whenever $H$ satisfies \eqref{LambdaHomogeneity}, then the associated Lagrangian has the same structure (by taking $\lambda=\overline{\lambda}^*$), and such a structure for $L$ implies assumption {\bf (L6)}.
Hence Theorem \ref{mainTH} immediately implies the result.
\end{proof}
\begin{example}
In particular our main model of Hamiltonian~\eqref{model1} satisfies the assumptions of Corollary \ref{cor}.
\end{example}
\section{Properties of $L^{\varepsilon}(x,y,t,\omega)$.}
\label{SectionEstimates}
In this section we will investigate several properties of the variational problem $L^{\varepsilon}(x,y,t,\omega)$ defined by \eqref{Lepsilon}.
\begin{lemma}\label{lemma71}
Under assumption {\bf(L2)}, we have
\begin{equation*}
L^\varepsilon(x,y,t,\omega)\leq C_1 t +C_1\frac{d_{CC}(x,y)^\lambda}{t^{\lambda-1}}, \qquad \forall x,y\in \mathbb{R}^N,t>0,\omega\in \Omega,
\end{equation*}
where $C_1$ and $\lambda$ are the constants introduced in assumption {\bf(L2)}.
\end{lemma}
\begin{proof}
We consider a geodesic $\eta$ from $y$ to $x$ in time $t$ (parametrized with constant speed), so that $\eta\in {\cal A}^t_{y,x}$ with $|\alpha^\eta(s)|=d_{CC}(x,y)/t$. By assumption {\bf(L2)} we have
\[
L^\varepsilon(x,y,t,\omega)\leq\int_0^t L(\delta_{1/\varepsilon}(\eta(s)),\alpha^\eta(s), \omega)\, ds\leq \int_0^t C_1(1+d_{CC}(x,y)^\lambda t^{-\lambda})\, ds,
\]
which easily entails the statement.
\end{proof}
\begin{proposition}\label{prp71}
Under assumption {\bf(L2)}, we have
\begin{equation}
L^\varepsilon(x,y,t,\omega)=\inf\limits_{\eta}\left\{\int_0^t L(\delta_{1/\varepsilon}(\eta(s)),\alpha^\eta(s),\omega)\, ds \right\},
\end{equation}
where the infimum is taken over the curves
\begin{equation}\label{7.1}
\eta\in{\cal A}^t_{y,x} \qquad \textrm{such that }\|\alpha^\eta\|_{L^\lambda(0,t)}\leq \widetilde{C},
\end{equation}
where $\widetilde{C}= [(C_1^2+C_1+1)t^\lambda +C_1^2 d_{CC}(x,y)^\lambda]^{1/\lambda}t^{\frac{1}{\lambda}-1}$.\end{proposition}
\begin{proof}
Since $L^\varepsilon(x,y,t,\omega)$ is defined as a greatest lower bound, there exists a curve $\eta\in{\cal A}^t_{y,x}$
such that
\[
\int_0^t L(\delta_{1/\varepsilon}(\eta(s)), \alpha^\eta(s),\omega)\, ds\leq L^\varepsilon(x,y,t,\omega) +t.
\]
By the bound from below in {\bf(L2)} and using Lemma~\ref{lemma71}, we have
\begin{equation*}
C_1^{-1}\int_0^t\left(|\alpha^\eta (s)|^\lambda-1 \right)ds\leq L^\varepsilon(x,y,t,\omega) +t
\leq (C_1+1)t +C_1 \frac{d_{CC}(x,y)^\lambda}{t^{\lambda-1}},
\end{equation*}
and consequently
\begin{equation*}
\|\alpha^\eta\|_{L^\lambda(0,t)}^\lambda\leq (C_1^2+C_1+1)t +C_1^2 \frac{d_{CC}(x,y)^\lambda}{t^{\lambda-1}},
\end{equation*}
which is equivalent to relation~\eqref{7.1}.
\end{proof}
\begin{corollary}
\label{corollary71}
Under assumption {\bf (L2)}, the infimum in $L^{\varepsilon}(x,y,t,\omega)$ is attained over admissible curves $\eta$ such that
\begin{description}
\item[(i)] $ \|\alpha^\eta\|_{L^{\gamma}(0,t)}\leq C_2$, for all $ 1\leq \gamma\leq \lambda$ and for all $t<T$, where $\lambda$ is the constant given in assumption {\bf (L2)} and $C_2$ depends only on the constants in assumption {\bf(L2)} and the bound $T$,
\item[(ii)]$ \|\eta\|_{\infty}\leq C_3$, where $C_3$ depends only on $\sigma(x)$, the constants in assumption {\bf(L2)} and $T$ for all $t<T$.
\end{description}
\end{corollary}
\begin{proof} This follows easily by Proposition \ref{prp71}, the standard embedding properties of $L^\lambda (0,t)$ and
$\xi(t)=\xi(0)+\int_0^t\sigma^{T}(\xi(s))\,\alpha^\xi(s)\, ds.$
\end{proof}
We now prove the uniform continuity of $L^\varepsilon$, uniformly w.r.t. $\varepsilon>0$. To this purpose, we adapt some arguments from \cite{RT}.
\begin{lemma}\label{lemma41}
Under assumption {\bf(L2)}, we have
\begin{equation}\label{43}
L^{\varepsilon}(x,y,t+h,\omega)\leq L^{\varepsilon}(x,y,t,\omega)+C_1h,
\end{equation}
for all
$ \varepsilon>0, x,y\in \mathbb{R}^N,t>0,\,\omega\in \Omega$ and where $C_1$ is the constant introduced in {\bf(L2)}.
\end{lemma}
\begin{proof}
We proceed as in \cite{RT}.
Let $\xi$ be an admissible path for $L^{\varepsilon}(x,y,t,\omega)$; we introduce
\[
\xi_1(s):=\left\{\begin{array}{ll}
\xi(s) &0\leq s\leq t,\\
x&t\leq s\leq t+h.
\end{array}\right.
\]
Note that $\xi_1$ is an admissible path for $L^{\varepsilon}(x,y,t+h,\omega)$. Hence, we have
\begin{eqnarray*}
L^{\varepsilon}(x,y,t+h,\omega)&\leq &\int_0^{t+h}L(\delta_{1/\varepsilon}(\xi_1(s)), \alpha^{\xi_1}(s),\omega)ds\\
&\leq & \int_0^tL(\delta_{1/\varepsilon}(\xi(s)), \alpha^\xi(s),\omega)ds +\int_t^{t+h}L(\delta_{1/\varepsilon}(x), 0,\omega)ds.
\end{eqnarray*}
Taking the infimum over $\xi$ and by assumption {\bf(L2)}, we obtain the statement.
\end{proof}
\begin{lemma}\label{lemma2}
Under assumption {\bf(L2)}, we have
\begin{equation}\label{4.2}
L^{\varepsilon}(x\circ v,y,t+\|v\|_{CC},\omega) \leq L^{\varepsilon}(x,y,t,\omega)+ 2C_1 \|v\|_{CC},
\end{equation}
for all
$ \varepsilon>0, x,y,v\in \mathbb{R}^N,t>0,\,\omega\in \Omega$ and
where $C_1$ is the constant introduced in assumption {\bf(L2)}.
Moreover, for each compact $K\subset \subset \mathbb{R}^N$, there exists a constant $C$ (depending only on $K$ and on the assumptions of the problem, so in particular independent of $\varepsilon$) such that
\begin{equation}\label{4.3}
L^{\varepsilon}(x\circ v,y,t+\|v\|_{h},\omega) \leq L^{\varepsilon}(x,y,t,\omega)+ C \|v\|_{h}\qquad \forall v\in K,
\end{equation}
and for all
$ \varepsilon>0, x,y\in \mathbb{R}^N,t>0,\,\omega\in \Omega$.
\end{lemma}
\begin{proof}
For any curve $\xi \in {\cal A}^t_{y,x}$, we define
\[
\eta_\xi(s):=\left\{\begin{array}{ll}
\xi(s)&\quad 0\leq s\leq t\\
\tilde\gamma (s)&\quad t\leq s\leq t+\|v\|_{CC}
\end{array}\right.
\]
where $\tilde\gamma (s)=\gamma(s-t)$ and $\gamma$ is a geodesic from $x$ to $x\circ v$ in time $\|v\|_{CC}$.
Since $\eta_\xi\in {\cal A}^{t+\norma{v}_{CC}}_{y,x\circ v}$, then we have
\begin{eqnarray*}
&& L^{\varepsilon}(x\circ v,y,t+\|v\|_{CC},\omega) \leq
\int_0^{t+\|v\|_{CC}}L(\delta_{1/\varepsilon}(\eta_\xi(s)),\alpha^{\eta_\xi}(s),\omega)ds\\
&&\qquad = \int_0^{t}L(\delta_{1/\varepsilon}(\xi(s)),\alpha^\xi(s),\omega)ds+
\int_t^{t+\|v\|_{CC}}L(\delta_{1/\varepsilon}(\tilde \gamma(s)),\alpha^{\tilde \gamma}(s),\omega)ds\\
&&\qquad \leq \int_0^{t}L(\delta_{1/\varepsilon}(\xi(s)),\alpha^\xi(s),\omega)ds+ 2C_1 \|v\|_{CC},
\end{eqnarray*}
where the last inequality is due to assumption {\bf(L2)} and the fact that by arc-length parametrisation we can assume $|\alpha^{\tilde\gamma}(s)|=1$. Taking the infimum over $\xi$, we get the bound \eqref{4.2}.
It remains to prove \eqref{4.3}.
We argue as before defining
\[
\eta_\xi(s):=\left\{\begin{array}{ll}
\xi(s)&\quad 0\leq s\leq t\\
\tilde\gamma_1 (s)&\quad t\leq s\leq t+\|v\|_{h}
\end{array}\right.
\]
where $\tilde\gamma_1 (s):=\gamma_1 (s-t)$ and $\gamma_1$ is a geodesic from $x$ to $x\circ v$ in time $\|v\|_{h}$. We have $|\alpha^{\gamma_1}(s)|=\|v\|_{CC}/\|v\|_{h}$. Hence, as before, we get
\begin{equation*}
L^{\varepsilon}(x\circ v,y,t+\|v\|_{h},\omega) \leq
\int_0^{t}L(\delta_{1/\varepsilon}(\xi(s)),\alpha^\xi(s),\omega)ds+
C_1\|v\|_{h}\left(1+\frac{\|v\|_{CC}^\lambda}{\|v\|_{h}^\lambda}\right).
\end{equation*}
Recall that for each compact $K\subset\subset\mathbb{R}^N$, there exists a constant $c$ such that $c^{-1}\|v\|_{h}\leq \|v\|_{CC}\leq c\|v\|_{h}$ (see Lemma \ref{ReelationDistances}); hence
\[
L^{\varepsilon}(x\circ v,y,t+\|v\|_{h},\omega) \leq
\int_0^{t}L(\delta_{1/\varepsilon}(\xi(s)),\alpha^\xi(s),\omega)ds+ C_1(1+c^\lambda)\|v\|_{h}
\]
and, taking the infimum over $\xi$, we get \eqref{4.3}.
\end{proof}
\begin{lemma}\label{prop1} Under assumptions {\bf(L2)} and {\bf(L6)},
$L^{\varepsilon}(x,y,t,\omega)$ is locally uniformly continuous in $t$ away from $0$, locally uniformly w.r.t. $x$ and $y$ and uniformly w.r.t. $\varepsilon$. More precisely
for any $\delta\in(0,1)$ there exists $C_{\delta}>0$
(depending only on the constants in assumptions {\bf(L2)} and {\bf(L6)} and going to $+\infty$ as $\delta\to 0^+$) such that
\begin{equation}\label{contunift}
|L^{\varepsilon}(x,y,t+h,\omega)-L^{\varepsilon}(x,y,t,\omega)|\leq C_{\delta}h\quad \forall \varepsilon>0,
\end{equation}
and for any $t, t+h \in [\delta, 1/\delta]=:I_{\delta}$, $h>0$ and for any
$(x,y) \in A_{\delta}$ where $$A_{\delta}:=\{ (x,y)\in\mathbb{R}^N\times \mathbb{R}^N\,|\, d_{CC}(x,y)<1/\delta\}.$$
\end{lemma}
\begin{proof}
By Lemma \ref{lemma41} we know that for any $h>0$
$$L^{\varepsilon}(x,y,t+h,\omega)-L^{\varepsilon}(x,y,t,\omega)\leq C_1h.$$
It remains to show the opposite inequality, i.e.
\begin{equation}\label{9settembre}
L^{\varepsilon}(x,y,t+h,\omega)-L^{\varepsilon}(x,y,t,\omega)\geq -C_{\delta}h.
\end{equation}
Take $(x,y)\in A_{\delta}$ and a curve $\eta$ admissible for $L^{\varepsilon}(x,y,t+h,\omega)$ and such that $\|\alpha^\eta\|_{L^\lambda(0,t+h)}\leq \widetilde{C}$, where $\widetilde{C}=\widetilde{C}(\delta)$ is the constant introduced in Proposition~\ref{prp71}.
We define
$\xi_{\eta}(s)= \eta(\frac{t+h}{t} s)$. By Lemma~\ref{leftinv}, $\xi_{\eta}(s)$ is still horizontal with
$$\alpha^{\xi_{\eta}}(s) = \frac{t+h}{t}\alpha^{\eta}\left(\frac{t+h}{t}s\right), \;\xi_{\eta}(0)=\eta(0)=y, \;\xi_{\eta}(t)= \eta\left(\frac{t+h}{t}t\right)=x;$$
so, $\xi_{\eta}(s)$ is admissible for $L^{\varepsilon}(x,y,t,\omega)$.
We observe that
\begin{eqnarray*}
L^{\varepsilon}(x,y,t,\omega)&\leq&
\int_0^{t }L(\delta_{1/\varepsilon}(\xi_{\eta}(s)), \alpha^{\xi_{\eta}}(s),\omega)ds\\
&=&\frac{t}{t+h}\int_0^{t+h }L\left(\delta_{1/\varepsilon}(\eta(s)), \frac{t+h}{t}\alpha^{\eta}(s),\omega\right)ds.
\end{eqnarray*}
Define
$$
I:=\frac{t}{t+h}\int_0^{t+h }L\left(\delta_{1/\varepsilon}(\eta(s)), \frac{t+h}{t}\alpha^{\eta}(s),\omega\right)ds-
\int_0^{t+h }L(\delta_{1/\varepsilon}(\eta(s)), \alpha^{\eta}(s),\omega)ds,
$$
so that
\begin{equation}
\label{chicago}
\int_0^{t+h }L(\delta_{1/\varepsilon}(\eta(s)), \alpha^{\eta}(s),\omega)ds-L^{\varepsilon}(x,y,t,\omega)\geq -I.
\end{equation}
Assume for the moment that
\begin{equation}\label{9sett}
I\leq C_\delta h.
\end{equation}
Then
passing to the infimum over $\eta$ in \eqref{chicago} we obtain relation \eqref{9settembre}. \\
Let us now prove \eqref{9sett}.
Writing $\frac{t}{t+h}=1-\frac{h}{t+h}$, we have
\begin{multline}\label{9sett1}
I=\int_0^{t+h }\left(L\left(\delta_{1/\varepsilon}(\eta(s)), \frac{t+h}{t}\alpha^{\eta}(s),\omega\right)-L\left(\delta_{1/\varepsilon}(\eta(s)), \alpha^{\eta}(s),\omega\right)\right)ds\;+\\ -\frac{h}{t+h} \int_0^{t+h }L\left(\delta_{1/\varepsilon}(\eta(s)), \frac{t+h}{t}\alpha^{\eta}(s),\omega\right)ds
\end{multline}
To get \eqref{9sett} we estimate $|I|$. We start by estimating the modulus of the second integral.
Note that {\bf(L2)} implies that
\begin{equation}
\label{GrowthConditionModulus}
|L(x,q,\omega)|\leq C_1\big(|q|^\lambda+1\big)+C^{-1}_1 \big||q|^\lambda-1\big|\leq C(|q|^\lambda+1)
\end{equation}
where we have used that $\max\{A,B\}\leq |A|+|B|$ and $C=C_1+C^{-1}_1$.\\
Then by \eqref{GrowthConditionModulus} and for $h$ sufficiently small (so that $\delta<t+h<\frac{1}{\delta}$), we have
\begin{eqnarray}\label{412}
&&\frac{h}{t+h} \int_0^{t+h}\bigg|L\left(\delta_{1/\varepsilon}(\eta(s)), \frac{t+h}{t}\alpha^{\eta}(s),\omega\right)\bigg|ds\nn\\
&&\qquad \leq \frac{h}{t+h} \int_0^{t+h} C\left(1+\left(\frac{t+h}{t}\right)^\lambda |\alpha^\eta(s)|^\lambda \right)ds\nn\\
&&\qquad \leq \frac{h}{\delta}C
\left(\frac{1}{\delta}+\left(\frac{t+h}{t}\right)^\lambda \|\alpha^\eta\|^{\lambda}_{L^\lambda (0,t+h)}\right)\nn\\
&&\qquad \leq h\frac{C}{\delta}
\left(\frac{1}{\delta}+\frac{\widetilde{C}^{\,\lambda}}{\delta^{2\lambda}}\right)=:C_\delta h.
\label{9sett2}
\end{eqnarray}
where the last inequality is due to Proposition~\ref{prp71}.
On the other hand, by assumptions {\bf(L2)} and {\bf(L6)}, we have
\begin{eqnarray}\label{413}
&&\int_0^{t+h}\left|L\left(\delta_{1/\varepsilon}(\eta(s)), \frac{t+h}{t}\alpha^{\eta}(s),\omega\right)-L(\delta_{1/\varepsilon}(\eta(s)), \alpha^{\eta}(s),\omega)\right|ds\nn\\
&&\qquad\leq \int_0^{t+h} C_1 \left|\left(\frac{t+h}{t}\right)^\lambda -1 \right|\left|\alpha^\eta(s)\right|^\lambda ds \nn\\
&&\qquad \leq C_1\delta^{-\lambda} \big|(t+h)^\lambda -t^\lambda
\big|\,\|\alpha^\eta\|_{L^\lambda(0,t+h)}^\lambda\nn\\
&&\qquad \leq C_{\delta} h \label{9sett3}
\end{eqnarray}
where the last inequality is due to Proposition \ref{prp71}.\\
Using \eqref{9sett2} and \eqref{9sett3} in \eqref{9sett1}, we get \eqref{9sett}.
\end{proof}
\begin{lemma}\label{NDrem}
For later use, we collect some consequences of the previous estimates:
\begin{enumerate}
\item[1.] $L^\varepsilon(x\circ v,x,\|v\|_{CC},\omega)\le C\|v\|_{CC}$. \\
\item [2.] For $0<\rho \ll 1$ we have
\begin{eqnarray*}
|L^\varepsilon(x,y,t,\omega)-L^\varepsilon(x,y,(1+\rho)t,\omega)|
&\le& C\rho|L^\varepsilon(x,y,t,\omega)|,
\end{eqnarray*}
by choosing $h=\rho t$ in the proof of
Lemma \ref{prop1}.
\item [3.] $L^\varepsilon(x,z,t+s,\omega)\le L^\varepsilon(x,y,t,\omega)+
L^\varepsilon(y,z,s,\omega)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Point (1) follows from Lemma \ref{lemma2} by choosing $x=y$ and $t=0$.
We prove now point (2). We note that w.l.o.g. we can replace assumption {\bf (L2)} with
\begin{equation*}
C^{-1}_1(|q|^{\lambda}+1) \leq L(x,q,\omega) \leq C_1(|q|^{\lambda}+1).
\end{equation*}
Indeed, if we increase $L$ by a constant then all the $u^\varepsilon$ increase by the same constant making no relevant change in the problem. By this assumption, Lemma~\ref{lemma71} implies $C_1^{-1}t\leq L^\varepsilon(x,y,t,\omega)$.
We now easily conclude the proof following the same arguments of Lemma \ref{prop1}: As $h(t+h)^{-1}$ becomes $\rho(1+\rho)^{-1}$ for $h=\rho t,$ the integral in \eqref{412}
can be estimated for $0 \le \rho\le 1$ by
$$
\rho \int_0^{(1+\rho)t}C\left(1+(1+\rho)^\lambda|\alpha(s)|^\lambda\right)ds\le \rho\, C L^\varepsilon(x,y,(1+\rho)t,\omega)\le \hat C\rho L^\varepsilon(x,y,t,\omega),
$$ where the first inequality follows from our lower bounds on the Lagrangian by choosing $\alpha$ as a minimizer. The second inequality is a consequence of {\bf(L6)} for $s=1+\rho\le 2$.
The integral in \eqref{413} can be estimated by
$$
C_1|(1+\rho)^\lambda-1|\|\alpha^\eta\|_{L^\lambda(0,(1+\rho)t)}^\lambda.
$$For $\rho$ sufficiently small, we find a constant $C_\lambda$ such that
$
|(1+\rho)^\lambda-1|\le C_\lambda\rho.$ Now we conclude as in the previous step.
\noindent Point (3) is obvious.
\end{proof}
As a direct consequence we have the following lemma:
\begin{lemma}\label{replacement}
Consider $x_1, y_1, x_2, y_2\in \mathbb{R}^N$ and $t>0$, with $\norma{-x_1\circ x_2}_{CC}+\norma{-y_1\circ y_2 }_{CC}\ll t.$
Then
\begin{multline*}
|L^\varepsilon(x_1,y_1,t,\omega)-L^\varepsilon(x_2,y_2,t,\omega)|\\
\le C\big(L^\varepsilon(x_1,y_1,t,\omega)+L^\varepsilon(x_2,y_2,t,\omega)\big)\left(\| -x_1\circ x_2 \|_{CC}+\|-y_1 \circ y_2\|_{CC}\right).
\end{multline*}
\end{lemma}
\begin{proof}
By applying twice Lemma \ref{NDrem} part 3, we deduce
\begin{multline*}
L^\varepsilon\big(x_1,y_1,t+\norma{-x_1\circ x_2}_{CC}+\norma{-y_1\circ y_2}_{CC},\omega\big)\\
\le L^\varepsilon\big(x_1,x_2,\norma{-x_1 \circ x_2}_{CC},\omega\big)+L^\varepsilon(x_2,y_2,t,\omega)+L^\varepsilon(y_2,y_1,\norma{-y_1\circ y_2 }_{CC},\omega).
\end{multline*}
By applying twice Lemma \ref{NDrem} part 1, taking first $v=-x_1\circ x_2 $ and then $v=-y_1\circ y_2 ,$ and by Lemma \ref{NDrem} part 2 with
$$\rho=\frac{\|-x_1 \circ x_2\|_{CC}+\|-y_1 \circ y_2\|_{CC}}{t},$$
and interchanging the role of $x_1,x_2$ with that of $y_1,y_2$, the claim follows.
\end{proof}
\begin{theorem}\label{UniformContinuityFunctional}
Under assumptions {\bf (L2)} and {\bf (L6)}, for all fixed $\omega\in \Omega$, $L^{\varepsilon}(x,y,t,\omega)$ is locally uniformly continuous w.r.t. $x,y\in \mathbb{R}^N$ and $t$ away from 0, uniformly w.r.t. $\varepsilon>0$.
\end{theorem}
\begin{proof}
By Lemma \ref{prop1} we have the local uniform continuity with respect to $t$.
It remains to show the local uniform continuity with respect to $x$ (w.r.t $y$ is the same and so omitted).
We need to show that, for every $\delta>0$, there exists a constant $\overline{C}_{\delta}>0$ such that
\begin{equation}\label{12sett}\left|L^{\varepsilon}(x,y,t,\omega)-L^{\varepsilon}(\tilde x,y,t,\omega) \right|\leq \overline{C}_\delta\,\norma{-x\circ \tilde x}_h\qquad \forall \varepsilon>0
\end{equation}
and for any $t\in [\delta, 1/\delta]$ and for any
$x,\tilde x, y $ with CC-norm smaller than $\frac{1}{\delta}$.\\
Indeed, we have
\begin{multline*}
\left|L^{\varepsilon}(x,y,t,\omega)-L^{\varepsilon}(\tilde x,y,t,\omega) \right|\leq
\left|L^{\varepsilon}(x,y,t,\omega)-L^{\varepsilon}(\tilde x,y,t+\|-x\circ \tilde x\|_{h},\omega) \right|\\+
\left|L^{\varepsilon}(\tilde x,y,t+\|-x\circ \tilde x\|_{h},\omega)-L^{\varepsilon}(\tilde x,y,t,\omega)\right|.
\end{multline*}
We observe that Lemma \ref{lemma2} (with $v=-x\circ \widetilde{x}$) and Lemma~\ref{prop1} give
$$
\left|L^{\varepsilon}(x,y,t,\omega)-L^{\varepsilon}(\tilde x,y,t,\omega) \right|\leq(C_1+C_\delta)
\norma{-x\circ \widetilde{x}}_h,
$$
where $C_1$ is the constant introduced in {\bf(L2)} while $C_\delta$ is the constant introduced in Lemma \ref{prop1}.
\end{proof}
\begin{lemma}\label{lemmapalle}
Under assumption {\bf(L2)}, we have
\begin{equation}
L^{\varepsilon}(x,y,t,\omega)\geq C_1^{-1}t^{1-\lambda}(d_{CC}(x,y))^{\lambda}-C_1^{-1}t,
\end{equation}
for all $\varepsilon>0$, $t>0$ and $x,y\in \mathbb{R}^N$,
where $C_1$ and $\lambda$ are the constants introduced in {\bf(L2)}.
\end{lemma}
\begin{proof}
By the definition of $L^{\varepsilon}(x,y,t,\omega)$, assumption {\bf(L2)} and Jensen's inequality, we obtain
\begin{eqnarray*}
L^{\varepsilon}(x,y,t,\omega)&\geq& C_1^{-1}\inf\limits_{\xi\in\mathcal{A}^t_{y,x}} \left\{\int_0^{t }(|\alpha^{\xi}(s)|^{\lambda}-1)ds\right\}\\
&=&C_1^{-1}t \inf\limits_{\xi\in\mathcal{A}^t_{y,x}} \left\{\left(\frac{1}{t}\int_0^{t}|\alpha^{\xi}(s)|^{\lambda}ds\right)\right\} - C_1^{-1}t\\
&\geq&C_1^{-1} t^{1-\lambda}\bigg(\inf\limits _{\xi\in\mathcal{A}^t_{y,x}} \left\{\int_0^{t }|\alpha^{\xi}(s)|ds\right\}\bigg)^{\lambda}-C_1^{-1}t
\end{eqnarray*}
which is equivalent to the statement because of the definition of $d_{CC}(x,y)$.
\end{proof}
\begin{proposition}\label{prop3}
Assume that the Lagrangian~$L$ satisfies {\bf(L2)} and that the initial datum~$g$ satisfies
\begin{equation}\label{gdcc}
g(x)\geq -C(1+d_{CC}(x,0))\qquad \forall x\in \mathbb{R}^N.
\end{equation}
Then the infimum $\inf\limits_{y\in \mathbb{R}^N}\{g(y)+L^{\varepsilon}(x,y,t,\omega)\}$ is attained in a ball.
\end{proposition}
\begin{proof}
As in \cite[Lemma 3.4]{RT}, we want to prove that the infimum outside a suitable ball is greater than the infimum over the entire space.
From {\bf(L2)} we have that $L(x,0,\omega)\leq C_1$, so
$L^{\varepsilon}(x,x,t,\omega)\leq C_1t$, which implies
\begin{equation}\label{*}
\inf\limits_{y\in \mathbb{R}^N}\left\{g(y)+L^{\varepsilon}(x,y,t,\omega)\right\}\leq
g(x)+L^{\varepsilon}(x,x,t,\omega)\leq g(x)+C_1t,
\end{equation}
where the second inequality is obtained choosing the constant curve $\xi(s)=x$ for any $s\in(0,t)$ in the definition of $L^{\varepsilon}(x,x,t,\omega)$.\\
From \eqref{gdcc} and the triangle inequality, we have
\begin{align*}
g(y)+C_1^{-1}t^{1-\lambda}d_{CC}(x,y)^{\lambda}-C_1^{-1}t\geq & -C-C\,t\frac{d_{CC}(x,y)}{t}-Cd_{CC}(x,0)\\
&+C_1^{-1}t\left(\frac{d_{CC}(x,y)}{t}\right)^{\lambda}-C_1^{-1}t.
\end{align*}
Since the right-hand side, for any fixed $x$ and $t$, goes to $+\infty$ as $d_{CC}(x,y)\to +\infty$, there exists $R>0$ such that
\begin{equation}\label{**}
g(y)+C_1^{-1}t^{1-\lambda}d_{CC}(x,y)^{\lambda}-C_1^{-1}t \geq g(x)+C_1t \qquad \forall y\in \mathbb{R}^N\backslash D_R
\end{equation}
where
$D_R:=\{y\in \mathbb{R}^N: d_{CC}(x,y)\leq R\}.$
By using both inequalities (\ref{*}) and (\ref{**}), we get
\begin{eqnarray*}
\inf\limits_{y\in \mathbb{R}^N}\{g(y)+L^{\varepsilon}(x,y,t,\omega)\}&\leq&
\inf\limits_{ \mathbb{R}^N\backslash D_R}\left\{ g(y)+C_1^{-1}t\left(\frac{d_{CC}(x,y)}{t}\right)^{\lambda}-C_1^{-1}t\right\}\\
&\leq& \inf\limits_{\mathbb{R}^N\backslash D_R}\big\{ g(y)+L^{\varepsilon}(x,y,t,\omega)\big\}
\end{eqnarray*}
where the last inequality is due to Lemma \ref{lemmapalle}. To conclude, we just remark that, by the H\"ormander condition, $D_R$ is contained in a Euclidean ball (up to a different radius), and the proposition is proved.
\end{proof}
\begin{proposition}\label{propu4}
Assume that $L(x,p,\omega)$ satisfies {\bf(L2)} and that $g$ is bounded.
Then the function $u^{\varepsilon}$ defined in \eqref{rapprepsilon} can be written as
$$u^{\varepsilon}(x,t,\omega)=\min\limits_{y\in \mathbb{R}^N, d_{CC}(x,y)\leq C}\{g(y)+L^{\varepsilon}(x,y,t,\omega)\}$$
where $C>0$ is a constant depending only on $\|g\|_{\infty}$ and the constants in {\bf(L2)}.
\end{proposition}
\begin{proof}
The boundedness of $u^{\varepsilon}$ is proved in
Theorem \ref{th1}. Moreover
for each $\eta>0$ there exists a $\overline y\in\mathbb{R}^N$ such that
\begin{eqnarray*}
&&u^{\varepsilon}(x,t,\omega)=\inf\limits_{y\in \mathbb{R}^N}\{g(y)+L^{\varepsilon}(x,y,t,\omega)\}\geq
g(\overline y)+L^{\varepsilon}(x,\overline y,t,\omega)-\eta\\
&&\geq -\|g\|_{\infty}+ C_1^{-1}t^{1-\lambda}(d_{CC}(x,\overline y))^{\lambda}-C_1^{-1}t-\eta.
\end{eqnarray*}
So we get
\begin{equation}\label{stimau4}
(d_{CC}(x,\overline y))^{\lambda}\leq
C_1\bigg(\|u^{\varepsilon}\|_{\infty} +\|g\|_{\infty} +C_1^{-1}t+\eta\bigg)t^{\lambda-1},
\end{equation}
which implies the statement, by recalling that $\|u^{\varepsilon}\|_{\infty}$ is bounded by $\|g\|_{\infty}$.\end{proof}
\section{A lower dimensional constrained problem to \\
determine the effective Lagrangian.}
Inspired by the approach of \cite{S1}, we now study the convergence of the functional $L^{\varepsilon}(x,y,t,\omega)$ as $\varepsilon \to 0^+$,
by using the Subadditive Ergodic Theorem.
First we introduce a special family of horizontal curves which can be used as initial condition to build a subadditive stationary process.
For this purpose we use curves having constant horizontal velocity w.r.t. the given family of vector fields $\mathcal{X}=\{X_1,\dots,X_m\}$, namely $\mathcal{X}$-lines.
For more details on these curves see \cite{BD1,BD2}.
\begin{definition}
\label{Defi_Lines}
We call \emph{$\mathcal{X}$-line} any absolutely continuous curve $\xi:[0,t]\to \mathbb{R}^N$ satisfying
\begin{equation}
\label{lines}
\dot{\xi}(s)=\sum_{i=1}^mq_i X_i(\xi(s))=\sigma(\xi(s))q,\quad\textrm{a.e.}\ s\in (0,t),
\end{equation}
for some constant vector $q\in \mathbb{R}^m$.
Using notation coherent with \cite{S1} we denote by $l^\mathcal{X}_q(s)$ the $\mathcal{X}$-line starting from the origin associated to the horizontal constant velocity $q\in \mathbb{R}^m$.
\end{definition}
\begin{rem}$\quad$
\begin{enumerate}
\item
Since the vector fields associated to Carnot groups are smooth, the $\mathcal{X}$-lines are smooth curves, so relation~\eqref{lines} holds for all $s\in (0,t)$.
\item Since the vector fields are linearly independent at every point (see Remark \ref{RemarkLinearIndepedent}), for any fixed $q\in \mathbb{R}^m$ and any fixed starting point $x$, there is a unique $\mathcal{X}$-line starting from the point $x$ and associated to the horizontal constant velocity $q$.
\item $\mathcal{X}$-lines starting from a given point $x$ are curves in $\mathbb{R}^N$ depending only on $m$ parameters with $m<N$. Hence, while there always exists a horizontal curve joining two given points $x$ and $y$, in a Carnot group an $\mathcal{X}$-line joining $x$ to $y$ may not exist in general.
\end{enumerate}
\end{rem}
To study the convergence of $L^{\varepsilon}(x,y,t,\omega)$ we need to use $\mathcal{X}$-lines, so we first restrict our attention to the points $x$ and $y$ that can be connected by an $\mathcal{X}$-line. Following the notations in \cite{BD2}, we define the $\mathcal{X}$-plane associated to a point $x$, which is, roughly speaking, the union of all the $\mathcal{X}$-lines starting from $x$.
\begin{definition}\label{Vx}
We call $\mathcal{X}$-plane associated to the point $x$ the set of all the points that one can reach from $x$ through a
$\mathcal{X}$-line, i.e.
\begin{equation*}
V_x:=\{y\in\mathbb{R}^N\,|\,\exists\,q\in\mathbb{R}^m \;\textrm{and}\; \xi^{q}\; \mathcal{X}\mbox{-line such that}\; \xi^{q}(0)=x, \xi^{q}(1)=y\}.
\end{equation*}
\end{definition}
\begin{rem}[$\mathcal{X}$-lines in Carnot groups]
Note that in the Heisenberg group the $\mathcal{X}$-lines form a subset of the Euclidean straight lines, but in general the structure of $\mathcal{X}$-lines can look very different from Euclidean straight lines (see \cite{BD1,BD2} for some examples). Still, if we assume that the vector fields are associated to a Carnot group in exponential coordinates (see \eqref{matrixC}), then at least the first $m$ components are Euclidean affine. This implies that in Carnot groups, whenever $y\in V_x$, the unique horizontal velocity $q$ such that $\xi^{q}(0)=x$ and $ \xi^{q}(1)=y$ is given by $q=\pi_m(y-x)$.
\end{rem}
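To make this concrete, the $\mathcal{X}$-lines can be computed explicitly in the first Heisenberg group; in the example below we assume the standard choice of generating vector fields in exponential coordinates.
\begin{ex}
In the first Heisenberg group $\mathbb{H}^1$ (so $N=3$ and $m=2$), with the standard vector fields
$$X_1=\partial_{x_1}-\frac{x_2}{2}\,\partial_{x_3},\qquad X_2=\partial_{x_2}+\frac{x_1}{2}\,\partial_{x_3},$$
the $\mathcal{X}$-line from the origin with horizontal velocity $q=(q_1,q_2)$ solves $\dot\xi_1=q_1$, $\dot\xi_2=q_2$ and $\dot\xi_3=-\frac{\xi_2}{2}q_1+\frac{\xi_1}{2}q_2$. Since $\xi_1(s)=q_1s$ and $\xi_2(s)=q_2s$, the third equation reduces to $\dot\xi_3=-\frac{q_2s}{2}q_1+\frac{q_1s}{2}q_2=0$, hence
$$l^\mathcal{X}_q(s)=(q_1s,\,q_2s,\,0),$$
a Euclidean straight line lying in the plane $\{x_3=0\}$. In particular $V_0=\{x_3=0\}$ and $q=\pi_2(y)$ for every $y\in V_0$, consistently with the formula $q=\pi_m(y-x)$.
\end{ex}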
Let us define
\begin{equation}\label{defB}\mathcal{B}^q_{a,b}\!\!:=\left\{\xi:[a,b]\to \mathbb{R}^N\,|\,\xi\in W^{1,\infty}\big((a,b)\big)\; \textrm{horizontal,} \,\xi(a)=l^\mathcal{X}_q(a),\, \xi(b)=l^\mathcal{X}_q(b)\right\}.
\end{equation}
For any interval $[a,b)$ we define the following stochastic process (similar to \cite{S1}):
\begin{equation}\label{miq}
\mu_q([a,b),\omega):=
\inf_{\xi\in \mathcal{B}^q_{a,b}} \;\int_a^b L (\xi(s), \alpha^{\xi}(s),\omega)ds.
\end{equation}
To apply the Subadditive Ergodic Theorem, for a fixed slope $q\in \mathbb{R}^m$ we need to consider the action of $\mathbb{Z}$ on the process $\mu_q$ as an additive translation in time. More precisely:
\begin{definition}
\label{tau_z} Given $a,b\in \mathbb{R}$ with $a<b$, $q\in \mathbb{R}^m$ and $z\in \mathbb{Z}$, we define
$$\tau_z\,\mu_q([a,b),\omega):=\mu_q(z+[a,b),\omega)
=
\inf_{\xi\in\mathcal{B}^q_{a+z,b+z}}\;\int_{a+z}^{b+z} L(\xi(s), \alpha^{\xi}(s),\omega)ds.
$$
\end{definition}
\begin{lemma}\label{lemma1}
For every $q\in\mathbb{R}^m$, $z\in \mathbb{Z}$, let $\mu_q$ be defined by \eqref{miq} and $\tau_z$ the additive action introduced in Definition \ref{tau_z}. Under assumptions {\bf(L1)-(L4)}, we have
\begin{equation}\label{eqmi}
\tau_z\,\mu_q([a,b),\omega)=\mu_q([a,b),\tau_{z_q}\omega)
\end{equation}
where $z_q={l{^\mathcal{X}}_{q}(z)}$ and $l^{\mathcal{X}}_q$ is the $\mathcal{X}$-line defined in Definition \ref{Defi_Lines}.
\end{lemma}
\begin{proof}
For any $\xi\in \mathcal{B}^q_{a+z,b+z}$ we consider
\begin{equation*}
\label{scuolabus}
\widetilde{\xi}(s): =[l^\mathcal{X}_q(z)]^{-1}\circ \xi (s+z).
\end{equation*}
By Lemma
\ref{leftinv} part (i),
$\widetilde{\xi}(s)$ is still horizontal and $\alpha^{\widetilde\xi}(s)=\alpha^{\xi}(s+z)$.\\
Moreover note that $\tilde \xi(a)=[l^\mathcal{X}_q(z)]^{-1}\circ l^\mathcal{X}_q(a+z)$ and $\tilde \xi(b)=[l^\mathcal{X}_q(z)]^{-1}\circ l^\mathcal{X}_q(b+z)$. \\
We claim that
\begin{equation}
\label{bumbum}
\widetilde{\xi}(a)=l^\mathcal{X}_q(a)\quad \textrm{and}\quad\widetilde{\xi}(b)=l^\mathcal{X}_q(b).
\end{equation}
In fact, consider the two curves
$l^\mathcal{X}_q(s)$ and $\widetilde l^\mathcal{X}_q(s):=[l^\mathcal{X}_q(z)]^{-1}\circ l^\mathcal{X}_q (s+z)$: both the curves start from the origin since
$\widetilde l^\mathcal{X}_q(0)=[l^\mathcal{X}_q(z)]^{-1}\circ l^\mathcal{X}_q (z)=0=l^\mathcal{X}_q(0)$.
Moreover they have the same horizontal velocity since
$\alpha^{l^\mathcal{X}_q}(s)=q$ (by definition) and $\alpha^{\widetilde l^\mathcal{X}_q}(s)=\alpha^{l^\mathcal{X}_q}(s+z)=q$ (by
Lemma
\ref{leftinv}). Hence by standard uniqueness for ODEs with smooth data, the two curves coincide and in particular \eqref{bumbum} holds.\\
This implies that for each $\xi\in \mathcal{B}^q_{a+z,b+z}$, the curve $\widetilde{\xi}\in \mathcal{B}^q_{a,b}$. Then, by the change of variable $ \widetilde s=s-z$,
\begin{align*}
\tau_z\,\mu_q([a,b),\omega)
&=
\inf_{\widetilde{\xi} \in \mathcal{B}^q_{a,b}}
\int_{a+z}^{b+z} L([l^\mathcal{X}_q(z)]\circ \widetilde{\xi} (s-z), \alpha^{\widetilde\xi}(s-z),\omega)ds\\
&=
\inf_{\widetilde\xi \in \mathcal{B}^q_{a,b}}
\int_{a}^{b} L([l^\mathcal{X}_q(z)]\circ \widetilde{\xi} (s), \alpha^{\widetilde\xi}(s),\omega)ds.
\end{align*}
We set $z_q={l{^\mathcal{X}}_{q}(z)}$ and use assumption {\bf(L4)} to conclude
$$\tau_z\,\mu_q([a,b),\omega)=
\inf_{\xi \in \mathcal{B}^q_{a,b}} \int_{a}^{b} L(\xi (s), \alpha^{\xi}(s),\tau_{z_q}\omega)ds
=\mu_q([a,b),\tau_{z_q}\omega).$$
\end{proof}
\begin{lemma}[Subadditivity] For $n\in \mathbb{N}$, there holds
$$
\mu_q([0,n),\omega)\le \sum_{k=1}^n\mu_q(k-1+[0,1),\omega).
$$
\end{lemma}
\begin{proof}
If $\xi_k\in \mathcal{B}^q_{k-1,k}$, then we can construct a continuous horizontal curve $\xi$ in $ \mathcal{B}^q_{0,n}$ such that
$\xi(s)=\xi_k(s)$ for $s\in [k-1,k]$, $k\in \{1,\ldots,n\}$. The claim then follows from the definition of the infimum.
\end{proof}
Under assumptions {\bf(L1)-(L4)} the
Subadditive Ergodic Theorem applies to the process defined in
\eqref{miq}.
We will use it in the following form, which is taken from \cite[Prop. 1]{DalMasoModica} and based on Akcoglu and Krengel's theorem \cite{AK}. We state it in one dimension. First we get the existence of a limit which may still depend on $\omega$, and then we use ergodicity to show independence of $\omega$. From \cite{DalMasoModica} we recall:
\begin{defi}
We denote by ${\mathcal U}_0$ the family of all bounded measurable subsets of $\mathbb{R}$. For $A\in {\mathcal U}_0$, its Lebesgue measure is $|A|$.
We denote by ${\mathcal M}$ the family of subadditive functions $m:{\mathcal U}_0\to \mathbb{R}$ such that, for some $c>0$, there holds
$$
0\leq m(A)\leq c|A| \qquad \forall A\in {\mathcal U}_0.
$$
\end{defi}
\begin{theorem}[{\bf Subadditive Ergodic Theorem, \cite{DalMasoModica}}]
\label{DMM}
Let $\mu:\Omega\to {\mathcal M}$ be a subadditive process. If $\mu$ and $\tau_x(\mu)$ have the same law for every $x\in \mathbb Z,$ then there exists a set of full measure $\Omega'$ and a measurable function~$\phi$ such that on $\Omega'$
$$
\lim_{t\to \infty}t^{-1}|I|^{-1}\mu(\omega)(tI)=\phi(\omega)
$$ for every interval $I,$ where $|I|$ denotes its length.
\end{theorem}
We look at the points $y\in V_0\subset \mathbb{R}^N$. Let us recall from Definition \ref{defG} that we write $y=(y^1, y^2)\in \mathbb{R}^m\times\mathbb{R}^{N-m}$ and $y^1=\pi_m(y)$. If $y\in V_0$ then $y^2=y^2(y^1)\in\mathbb{R}^{N-m}$, where $y^2(\cdot)$ is an $(N-m)$-dimensional vector-valued function associated to the vector fields. E.g. in the case of the $n$-dimensional Heisenberg group $N=2n+1$ and $m=2n$, so $y^2=0\in \mathbb{R}$ for all $y^1\in \mathbb{R}^{2n}$. (See \cite[Lemma 2.2]{BD2} for more details.)\\
We are now ready to give a first pointwise convergence result.
Note that, differently from the Euclidean case, the following theorem gives the asymptotic behaviour of
$L^{\varepsilon}(0,y,t,\omega)$ only under the additional $(N-m)$-dimensional constraint expressed by $y\in V_0$.
\begin{theorem}
\label{convzero}
Under assumptions {\bf(L1)-(L4)}, for each $t>0$ and $y\in V_0$ fixed,
\begin{enumerate}
\item The following limit exists a.s. in $\omega$
\begin{equation}
\label{limitvalueconstrained}
L^{\varepsilon}(y,0,t,\omega)
\longrightarrow^{\varepsilon\to 0^+}
t\;\overline{\mu}\left(\,\frac{y^1}{t}, \omega \right),
\end{equation}
where $\overline{\mu}:\mathbb{R}^m\times \Omega\rightarrow \mathbb{R}$ is a measurable function.
\item The limit value $\overline{\mu}$ in \eqref{limitvalueconstrained} is constant in $\omega.$
\end{enumerate}
\end{theorem}
\begin{proof} {\em We first prove part 1:}
fix $q\in\mathbb{R}^m$; the first step is to apply the Subadditive Ergodic Theorem \ref{DMM} to $\mu_{q}$, which gives
\begin{equation}\label{convergenza1}
\frac{1}{\tau}\mu_{q}([0,\tau),\omega)\longrightarrow^{\tau\to +\infty} \overline{\mu}(q,\omega), \ a.s.\ \omega\in \Omega.
\end{equation}
Note that the definition of $\mu_{q}$ involves only a one-dimensional subgroup of translations, $\{\tau_{\ell^\mathcal{X}_q(z)}\}_{z\in\mathbb{Z}},$ the subgroup that leaves invariant the ${\mathcal X}$-line with direction $q$, passing through the origin.
Now we rewrite the functional $L^{\varepsilon}(y,0,t,\omega)$ defined by \eqref{Lepsilon} in terms of $\mu_{q}\big([a,b),\omega\big)$: let us prove
$$
L^{\varepsilon}(y,0,t,\omega)=\varepsilon\mu_{{\frac{y^1}{t}}}([0,\varepsilon^{-1}t),\omega).
$$
For any $\xi \in \mathcal{A}^t_{0,y}$, we define
$\widetilde{\xi}(s):=\delta_{1/\varepsilon}(\xi(s))$.
Using Lemma \ref{leftinv}, part (iii)
\begin{equation*}
L^{\varepsilon}(y,0,t,\omega)=\inf_{\widetilde{\xi}\in \mathcal{A}^t_{0,y_{\varepsilon}}} \int_0^t L(\widetilde{\xi}(s), \varepsilon\alpha^{\widetilde{\xi}}(s),\omega)ds,
\end{equation*}
where $y_{\varepsilon}= \delta_{1/\varepsilon}(y)$. By the change of variable
$\widetilde{s}=s/\varepsilon$ (which we call again $s$) the previous identity becomes
\begin{equation*}
L^{\varepsilon}(y,0,t,\omega)=\varepsilon\inf_{\widetilde{\xi}\in \mathcal{A}^{t/{\varepsilon}}_{0,y_{\varepsilon}}}
\int_0^{t/\varepsilon} L(\widetilde{\xi}(\varepsilon s), \varepsilon\alpha^{\widetilde{\xi}}(\varepsilon s),\omega)d s.
\end{equation*}
Take now $\eta( s):= \widetilde{\xi}(\varepsilon \, s)$ and note that by Lemma \ref{leftinv} $\eta(s)$ is still a horizontal curve and
$\alpha^{\eta}(s)=\varepsilon\,\alpha^{\widetilde{\xi}}(\varepsilon s)$, $\eta(0)=\delta_{1/\varepsilon}(y)$ and $\eta(t/\varepsilon)=0$. Then
\begin{equation}\label{Dean}
L^{\varepsilon}(y,0,t,\omega)=\varepsilon\inf_{\eta\in \mathcal{A}^{t/\varepsilon}_{0,y_{\varepsilon}}} \int_0^{t/\varepsilon} L(\eta({s}),\alpha^{\eta}(s),\omega)\, d{s}.
\end{equation}
Now fix $t>0$, $y\in V_0$ and $y^1=\pi_m(y)$.
To use the convergence result in \eqref{convergenza1} it remains to show that
$$
\mathcal{A}_{0,y_{\varepsilon}}^{t/\varepsilon}=\mathcal{B}^q_{a,b}\;\textrm{with}\; q=\frac{y^1}{t},\; a=0\; \textrm{and}\; b=\frac{t}{\varepsilon}.
$$
For this purpose, consider the
$\mathcal{X}$-line joining $0$ to $y$ with constant horizontal velocity $q_i=\frac{y^1_i}{t}$ for $i=1,\dots,m$, i.e. $l^\mathcal{X}_q$ is the unique solution of
$$\dot{l}^\mathcal{X}_q(s)=\sum_{i=1}^{m}
\frac{y^1_i}{t} X_i(l^\mathcal{X}_q(s)),\quad
l^\mathcal{X}_q(0)=0,$$
(recall that $l^\mathcal{X}_q(t)=y$).\\
{\em CLAIM:} for every constant $C>0$
\begin{equation}\label{chiline}
l^\mathcal{X}_q(Ct)= \delta_C\big(l^\mathcal{X}_q(t)\big)= \delta_C(y).
\end{equation}
To prove claim \eqref{chiline}, let us introduce the two curves
$l_1(s):=l^\mathcal{X}_q(Cs)$ and $l_2(s):=\delta_C(l^\mathcal{X}_q(s))$.
Note that by Lemma \ref{leftinv}, parts (ii) and (iii), we have
$$l_1(0)=l_2(0)=0, \quad
\alpha^{l_1}(s)=C\alpha^{l^\mathcal{X}_q}(s)= C\frac{y^1}{t}\quad \textrm{and}\quad
\alpha^{l_2}(s)=C\alpha^{l^\mathcal{X}_q}(s)= C\frac{y^1}{t}.$$
This means that $l_1(\cdot)$ and $l_2(\cdot)$ both solve the ODE problem
$$\dot{x}(s)=\sum_{i=1}^{m}C\frac{y_i^1}{t} X_i(x(s)),\quad \quad x(0)=0.$$
By standard uniqueness for ODEs with smooth data, we deduce
$l_1(s)=l_2(s)$. This implies in particular $l_1(t)=l_2(t)$ which gives \eqref{chiline}.
Note that here it is crucial that the horizontal velocity of the two curves $l_1$ and $l_2$ is constant in time.\\
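For instance, claim \eqref{chiline} can be verified by hand in the first Heisenberg group, assuming the standard exponential coordinates in which the anisotropic dilations read $\delta_C(x_1,x_2,x_3)=(Cx_1,Cx_2,C^2x_3)$: the $\mathcal{X}$-line from the origin with constant horizontal velocity $q=(q_1,q_2)$ is the straight line $l^\mathcal{X}_q(s)=(q_1s,\,q_2s,\,0)$, hence
$$l^\mathcal{X}_q(Cs)=(Cq_1s,\,Cq_2s,\,0)=\delta_C\big(l^\mathcal{X}_q(s)\big).$$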
The claim \eqref{chiline} implies that $\mathcal{A}_{0,y_{\varepsilon}}^{t/\varepsilon}=\mathcal{B}^q_{0,t/\varepsilon}$ with $q=\frac{y^1}{t}$, thus equation \eqref{Dean} gives
\begin{equation}\label{piccione}
L^{\varepsilon}(y,0,t,\omega)= \varepsilon\mu_{{\frac{y^1}{t}}}([0,t/\varepsilon),\omega).
\end{equation}
So, for fixed $t>0$, $y\in V_0$, $y^1=\pi_m(y)$ and $q=\frac{y^1}{t}$, we can rewrite \eqref{convergenza1} as
\begin{equation}
L^{\varepsilon}(y,0,t,\omega)=\varepsilon\mu_{\frac{y^1}{t}}([0,t/\varepsilon),\omega)\longrightarrow^{\varepsilon\to 0^+}
t\overline{\mu}\left(\frac{y^1}{t},\omega\right), \ a.s.\ \omega\in \Omega.
\end{equation}
\noindent
{\em We now prove part 2:} we show the independence of $\overline{\mu}$ from $\omega.$
Fix $z\in\mathbb{R}^N$ and define $l_q^z(s):=-z\circ l^{\mathcal{X}}_q(s),$ then by Lemma \ref{leftinv} this is still an ${\mathcal X}$-line.
We have by stationarity of the coefficients
$$
t\overline{\mu}\left(\frac{y^1}{t},\tau_z(\omega)\right)=\lim_{\varepsilon\to 0^+}\varepsilon\mu_{\frac{y^1}{t}}([0,t/\varepsilon),\tau_z(\omega)),
$$
and by \eqref{miq} and stationarity
\begin{eqnarray*}
\mu_{\frac{y^1}{t}}([0,t/\varepsilon),\tau_z(\omega))&=&
\inf_{\xi\in \mathcal{B}^q_{0,t/\varepsilon}} \;\int_0^{t/\varepsilon} L (\xi(s), \alpha^{\xi}(s),\tau_z(\omega))ds\\
&=&\inf_{\xi\in \mathcal{B}^q_{0,t/\varepsilon}} \;\int_0^{t/\varepsilon} L (-z\circ \xi(s), \alpha^{\xi}(s),\omega)ds\\
&=&\inf_{\overline{\xi}\in \mathcal{B}^{q,z}_{0,t/\varepsilon}} \;\int_0^{t/\varepsilon} L (\overline{\xi}(s), \alpha^{\overline{\xi}}(s),\omega)ds
\end{eqnarray*}
where
\begin{multline*}
\mathcal{B}^{q,z}_{0,t/\varepsilon}:=\\
\left\{\overline{\xi}:[0,t/\varepsilon]\to \mathbb{R}^N\,|\,\overline{\xi}\in W^{1,\infty}\big((0,t/\varepsilon)\big)\; \textrm{horizontal,} \,\overline{\xi}(0)=-z\circ l^\mathcal{X}_q(0),\, \overline{\xi}(t/\varepsilon)=-z\circ l^\mathcal{X}_q(t/\varepsilon)\right\}.
\end{multline*}
We have to show
\begin{equation}\label{limitinv}
\lim_{\varepsilon\to 0^+}\varepsilon\inf_{\xi\in \mathcal{B}^q_{0,t/\varepsilon}} \;\int_0^{t/\varepsilon} L (\xi(s), \alpha^{\xi}(s),\omega)ds
=\lim_{\varepsilon\to 0^+}\varepsilon\inf_{\overline{\xi}\in \mathcal{B}^{q,z}_{0,t/\varepsilon}} \;\int_0^{t/\varepsilon} L (\overline{\xi}(s), \alpha^{\overline{\xi}}(s),\omega)ds
\end{equation}
for all $z\in \mathbb{R}^N$; then $ \overline{\mu}\left(\frac{y^1}{t},\tau_z(\omega)\right)=\overline{\mu}\left(\frac{y^1}{t},\omega\right)$, so by ergodicity w.r.t. the group action
$\overline{\mu}\left(\frac{y^1}{t},\omega\right) $ does not depend on $\omega.$\\
We show that both infima have the same limit by connecting the endpoints $x_1:=l^\mathcal{X}_q(a)$ to $y_1:=-z\circ l^\mathcal{X}_q(a)$ and $x_2:=l^\mathcal{X}_q(b)$ to $y_2:=-z\circ l^\mathcal{X}_q(b)$ by geodesics of length of order $C(q)\norma{z}_{CC}.$ Indeed, by Lemma \ref{replacement}
the difference in cost vanishes in the limit $\varepsilon\to 0.$ (See Figure \ref{Figura-A}.)\\
This means any path in $\mathcal{B}^{q,z}_{0,t/\varepsilon}$ can be made into a path in $\mathcal{B}^{q}_{0,t/\varepsilon}$ by paying a cost of order $|z|$ (for a similar argument, we refer the reader also to the proof of Lemma~\ref{LemmaLiminfnuovo}).
This extra cost vanishes in the limit after multiplication by $\varepsilon.$
\end{proof}
\begin{figure} [htbp]
\begin{center}
\includegraphics[width=3.0 in, height=5.2 in, angle=90]{figura-A.jpg}
\end{center}
\caption{The picture shows two $\mathcal{X}$-lines with constant horizontal velocity $q$, connecting respectively $x_1$ with $x_2$ and $y_1=-z\circ x_1$ with $y_2=-z\circ x_2$. The curves $\xi$ and $\bar{\xi}$ are admissible curves joining the two pairs of points.}
\label{Figura-A}
\end{figure}
\begin{remark}
Note that in the case of {\em short-correlated} random coefficients the proof of part 2 is unnecessary, as the independence of $\omega$ follows already from the fact that the restriction of the random field to an $\mathcal{X}$-line is again short-correlated, together with the zero-one law for independent random variables.
\end{remark}
\begin{remark} Note that the convergence result \eqref{limitvalueconstrained} means that, for all $t>0$ and $y\in V_0$ fixed,
there exists $\Omega^{t,y}\subset \Omega$ with $\mathbb{P}(\Omega^{t,y})=1$ such that
$L^{\varepsilon}(y,0,t,\omega)\rightarrow
t\overline{\mu}\left(\frac{\pi_m(y)}{t}\right)$, for all $\omega\in \Omega^{t,y}$. This convergence result is enough to define the effective Lagrangian but it is still too weak to obtain the convergence of the solutions of the homogenization problem.
\end{remark}
\begin{defi}[Effective Lagrangian]
We define the effective Lagrangian \\$\overline{L}:\mathbb{R}^m\to \mathbb{R}$ as
\begin{equation}
\label{EffectiveLagrangian}
\overline{L}(q):=\overline\mu(q)=\lim_{\varepsilon\to 0^+} L^{\varepsilon}\big((q,y_q^2),0,1,\omega\big),
\end{equation}
where the point $y_q^2\in \mathbb{R}^{N-m}$ is uniquely determined by $y^1=q\in \mathbb{R}^m$ for all points $(q,y_q^2)\in V_0\subset \mathbb{R}^N$.
\end{defi}
\begin{ex}[Heisenberg group]
In the 1-dimensional Heisenberg group there holds:
$\overline{L}(q_1,q_{2}):=\lim_{\varepsilon\to 0^+} L^{\varepsilon}\big((q_1,q_2,0),0,1,\omega\big)$.
\end{ex}
Using the definition of effective Lagrangian introduced in \eqref{EffectiveLagrangian} we can rewrite the limit in Theorem \ref{convzero} as follows: for all $y\in V_0$ and $t>0$ fixed, there exists a set $\Omega^{t,y}\subset \Omega$ with $\mathbb{P}( \Omega^{t,y})=1$ such that
\begin{equation}
\label{limiteZero1}
\lim_{\varepsilon\rightarrow 0^+} L^{\varepsilon}(y,0,t,\omega)=t\overline{L}\left(\frac{\pi_m(y)}{t}\right),\quad \forall\; \omega\in \Omega^{t,y}.
\end{equation}
Next we want to derive the local uniform convergence for $L^{\varepsilon}(x,y,t,\omega)$ under the constraint $y\in V_x$.
The following proof is a simple adaptation of the ideas developed by Souganidis in \cite{S1} and later by the same author and co-authors in \cite{{Armstrong-Souga1-Carda},{Armstrong-Souga2},{Armstrong-Souga3}}.
The main difference is that we work
directly with the functional $ L^{\varepsilon}(x,y,t,\omega)$ and not with the solutions $u^{\varepsilon}(t,x)$.
This will guarantee at once both the uniform convergence in $y$ (essential to pass to the infimum in the limit) and the uniform convergence in $x$ and $t$ (which will allow us to apply our approximation argument in Section \ref{SecApproxX-lines}).
\begin{theorem}\label{Lsegnato}
Under assumptions {\bf(L1)-(L4)} and {\bf (L6)} and the additional constraint $x\in V_y$, we have that
\begin{equation}
\label{UniformLimit}
\lim_{\varepsilon\rightarrow 0^+} L^{\varepsilon}(x,y,t,\omega)=
t\overline L \left(\frac{\pi_m(-y\circ x)}{t}\right)
\end{equation}
locally uniformly in $x,y,t$ and a.s. $\omega$, where $\overline{L}$ is the effective Lagrangian defined by \eqref{EffectiveLagrangian}.
\end{theorem}
\begin{proof}
We first show that
\begin{equation}
\label{gabbiani}
L^{\varepsilon}(x,y,t,\omega)= L^{\varepsilon}\left(-y\circ x,0,t,\tau_{\delta_{1/\varepsilon}(y)}(\omega)\right).
\end{equation}
Note that
$x\in V_y\ \text {if and only if } -y\circ x \in V_0$ (recall that $y^{-1}=-y$ in exponential coordinates).
To prove \eqref{gabbiani}, for each $\xi\in \mathcal{A}^t_{y,x}$ we define $\eta(s):=-y\circ \xi(s)$. By Lemma \ref{leftinv}-(i) we have
$\alpha^{\eta}(s)=\alpha^{\xi}(s)$, $\eta(0)=-y\circ x$, $\eta(t)=0$, hence
\begin{eqnarray*}
L^{\varepsilon}(x,y,t,\omega)&&
=\inf\limits_{\mathcal{A}^t_{0,-y\circ x}} \int_0^t L\big(\delta_{1/\varepsilon}(y\circ\eta(s)), \alpha^{\eta}(s),\omega\big)ds\\
&&=\inf\limits_{\mathcal{A}^t_{0,-y\circ x}} \int_0^t L\big(\delta_{1/\varepsilon}(y)\circ\delta_{1/\varepsilon}(\eta(s)), \alpha^{\eta}(s),\omega\big)ds
\\
&&=\inf\limits_{\mathcal{A}^t_{0,-y\circ x}} \int_0^t L\big(\delta_{1/\varepsilon}(\eta(s)), \alpha^{\eta}(s),\tau_{\delta_{1/\varepsilon}(y)}\omega\big)ds\\
&&=L^{\varepsilon}\left(-y\circ x,0,t,\tau_{\delta_{\frac{1}{\varepsilon}}(y)}(\omega)\right),
\end{eqnarray*}
where we have used property {\bf(L4)}.\\
By combining the estimates found in Section \ref{SectionEstimates} with Egoroff's Theorem and the Ergodic Theorem we can conclude.
Since the argument is standard and has been used already in several papers,
we recall only the main
steps.\\
Since $L^{\varepsilon}(x,y,t,\omega)$ are equi-uniformly continuous in $t>0$ and $x,y\in \mathbb{R}^N$ (see Theorem \ref{UniformContinuityFunctional}), using the density of $\mathbb{Q}$ in $\mathbb{R}$ we can restrict our attention only to points of the form $t_z\in (0,+\infty)\cap \mathbb{Q}=\mathbb{Q}^+$, $x_z\in \mathbb{Q}^N$ and $y_z\in \mathbb{Q}^N$.
We then define the following set:
$$
\Omega_0:=\bigcap_{t_z\in \mathbb{Q}^+,x_z\in \mathbb{Q}^N,\,y_z\in \mathbb{Q}^N}\Omega^{t_z}_{x_z,y_z}.
$$
Note that $\Omega_0$ does not depend anymore on $t$, $x$ and $y$ and $\mathbb{P}(\Omega_0)=1$.\\
Using the structure of the Carnot group in exponential coordinates, which implies
$\pi_m(-y\circ x)=\pi_m(x)-\pi_m(y)$, and noting that $-y_z\circ x_z\in \mathbb{Q}^N$, by \eqref{limiteZero1}
we know that
$$
\lim_{\varepsilon\rightarrow 0^+} L^{\varepsilon}(-y_z\circ x_z,0,t_z,\omega)= t_z\,\overline{L}\left(\frac{\pi_m(x_z)-\pi_m(y_z)}{t_z}\right),
\quad \textrm{for all} \;\omega\in\Omega_0.
$$
Applying Egoroff's Theorem, we find a ``very big'' subset of $\Omega_0$ where the convergence is uniform in $\omega$. More precisely, for any
fixed $\delta>0$, there exists $A_{\delta}\subset \Omega_0$ such that
$\mathbb{P}(\Omega_0\setminus A_{\delta})\leq \delta$ (i.e. $\mathbb{P}( A_{\delta})\geq 1-\delta$) and
$$
\lim_{\varepsilon\rightarrow 0^+} L^{\varepsilon}(-y_z\circ x_z,0,t_z,\omega)= t_z\,\overline{L}\left(\frac{\pi_m(x_z)-\pi_m(y_z)}{t_z}\right),$$
uniformly for all $t_z$, $x_z$, $y_z$ and all $\omega\in A_{\delta}$.\\
To conclude, one can use the Ergodic Theorem to show that with very high probability $\tau_{\delta_{1/\varepsilon}(y)}(\omega)\in A_{\delta}$.
The application of the Ergodic Theorem
is quite technical, so we refer to Lemma 5.1 in \cite{Armstrong-Souga1-Carda} for the detailed argument. We remark that by Lemma \ref{ReelationDistances} one can easily replace the Euclidean ball with the homogeneous ball (and vice versa), up to considering a different power for the radius, which depends only on the step of the Carnot group.\\
This argument, together with the estimates in Section \ref{SectionEstimates} (where we found a uniform modulus of continuity depending only on the assumptions on $H$ and on the Carnot group), concludes the proof.
\end{proof}
\begin{corollary}
Under the assumptions of Theorem \ref{Lsegnato}, we have
\begin{equation}
\label{PartialInf}
\lim_{\varepsilon\to 0^+}\inf_{y\in V_x} \left[g(y)+L^{\varepsilon}( x,y,t,\omega)\right]=
\inf_{y\in V_x} \left[g(y)+t \overline{L}\left(\frac{\pi_m(x)-\pi_m(y)}{t}\right)\right].
\end{equation}
\end{corollary}
\begin{proof} We just add $g(y)$ to both sides of \eqref{UniformLimit} and take the infimum over $y\in V_x$, using the uniform convergence in $y$.
\end{proof}
Note that the right-hand side in \eqref{PartialInf} coincides with the Hopf-Lax formula introduced in \cite[Theorem 1.1]{BCP}. Then, whenever the initial condition satisfies the additional assumption $g(x)\geq g(\pi_m(x))$ for all $x\in \mathbb{R}^N$, the right-hand side is the unique viscosity solution of
the associated Hamilton-Jacobi Cauchy problem (defining $\overline{H}=\overline{L}^*$ and proving convexity for $\overline{L}$, see Section \ref{HomogSection}).
Unfortunately in general
$v_{\varepsilon}(t,x)=\inf_{y\in V_x} \left[g(y)+L^{\varepsilon}( x,y,t,\omega)\right]$
does not solve the $\varepsilon$-problem \eqref{ApproxPr}.
Hence it is crucial to get rid of the additional constraint $y\in V_x$.
For this purpose, in Section \ref{SecApproxX-lines} we will introduce a novel approximation argument, by using a suitable construction by $\mathcal{X}$-lines.\\
We conclude the section investigating some properties for the effective Lagrangian that will be used later.
\begin{lemma}\label{Iq}
For any $y=(y^1,y^2)\in V_0$, we have
$$
\inf\limits_{\xi} \int_0^t |\alpha^{\xi}(s)| ds\geq |y^1|,
$$
where the infimum is taken over all the horizontal curves $\xi(s)$ such that $\xi(0)=(y^1, y^2)$ and $\xi(t)=0$.
\end{lemma}
\begin{proof}
Given any horizontal curve $\xi$ such that $\xi(0)=(y^1,y^2)\in \mathbb{R}^N$, $\xi(t)=0\in \mathbb{R}^N$, we define $\eta:[0,t]\rightarrow \mathbb{R}^m$ as $\eta(s):=\pi_m(\xi(s))$.
Then $\eta(0)=y^1\in\mathbb{R}^m$, $\eta(t)=0\in\mathbb{R}^m$.
Moreover, from the structure of $\sigma$ (see \eqref{matrixC}),
we have
$\dot{\eta}(s)= (\dot{\xi}_1(s), \dots, \dot{\xi}_m(s))=\alpha^{\xi}(s)$.\\
Then, since $\eta$ is a curve in the Euclidean space $\mathbb{R}^m$ joining $y^1$ to $0$ in time $t$,
$$
\int_0^t |\alpha^{\xi}(s)| ds\geq \inf\limits_{\eta} \int_0^t |\dot{\eta}(s)| ds=|y^1|,
$$ and we can conclude by taking the infimum on the left-hand side.
\end{proof}
\begin{proposition}\label{prop5}
$\overline L(q)$ is continuous and superlinear in $q$, i.e.
\begin{equation}\label{superlinear}
\overline L(q)\geq C_1^{-1}(|q|^{\lambda}-1)
\end{equation}
where $C_1$ and $\lambda$ are the constants introduced in {\bf(L2)}.
\end{proposition}
\begin{proof}
The continuity follows from the uniform convergence of $L^{\varepsilon}$ in \eqref{EffectiveLagrangian}.
For each $q\in\mathbb{R}^m$, take $y^1=q$, $y=(q, y^{2})\in V_0$, $t=1$ and $x=0$. Then
$$
L^{\varepsilon}(0,y,1,\omega)=\inf\limits_{\xi\in \mathcal{A}^1_{y,0}} \int_0^1 L(\delta_{1/\varepsilon}(\xi(s)), \alpha^{\xi}(s),\omega)ds.
$$
From assumption {\bf(L2)}, Jensen's inequality and Lemma \ref{Iq}, we get
\begin{eqnarray*}
L^{\varepsilon}(0,y,1,\omega)&\geq&
C_1^{-1}\inf\limits_{\xi\in \mathcal{A}^1_{y,0}} \left(\int_0^1 |\alpha^{\xi}(s)|^{\lambda} ds\right)-C_1^{-1}\\
& \geq & C_1^{-1} \inf\limits_{\xi\in \mathcal{A}^1_{y, 0}} \left(\int_0^1 |\alpha^{\xi}(s)| ds\right)^{\lambda}-C_1^{-1}\\
& \geq & C_1^{-1}|q|^{\lambda}-C_1^{-1},
\end{eqnarray*}
which implies \eqref{superlinear}, passing to the limit as $\varepsilon\to 0^+$.
\end{proof}
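The Jensen step used above, namely $\int_0^1 |\alpha|^{\lambda}\,ds\geq \big(\int_0^1|\alpha|\,ds\big)^{\lambda}$ for $\lambda\geq 1$ on the unit time interval, can be checked numerically; the exponent and the speed profile below are illustrative assumptions.

```python
import math

lam = 1.7          # any lambda >= 1 works; this value is illustrative
n = 20000
h = 1.0 / n
# Sampled speeds |alpha(s)| of an arbitrary velocity profile on [0, 1].
speeds = [abs(math.sin(7.0 * k * h) + 0.4) for k in range(n)]
lhs = h * sum(v ** lam for v in speeds)   # quadrature of int |alpha|^lam
rhs = (h * sum(speeds)) ** lam            # (int |alpha|)^lam
assert lhs >= rhs
```

For uniform weights the discrete inequality is exactly the power-mean inequality, so the assertion holds up to floating-point rounding for any sample.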
\section{Approximation by $\mathcal{X}$-lines and convergence of the variational problem.}\label{SecApproxX-lines}
To remove the constraint $y\in V_x$ the idea is to apply Theorem \ref{Lsegnato} to suitable step-$\mathcal{X}$-lines, i.e. horizontal curves whose horizontal velocity is step-constant w.r.t. the given vector fields.
More precisely we want to approximate the horizontal velocity $\alpha(t)\in \mathbb{R}^m$ in $L^1$ by step-constant functions.
(Recall that if two horizontal velocities are close in the $L^1$-norm, then the associated horizontal curves are close in the $L^{\infty}$-norm; see Lemma \ref{aprroxCurveLemma}.)
We will treat the liminf and the limsup separately. Both are treated in the same spirit as the $\Gamma$-liminf and the $\Gamma$-limsup for integral functionals. One of the technical difficulties here is how to approximate limits of a sequence of minimizing paths by $\mathcal{X}$-lines. Due to the fast oscillations of our integrands in $\xi$, this is not straightforward; we refer to the discussion in \cite{E}. As we cannot assume that our limit paths are smooth, but only that they belong to some Sobolev space, we have to work with Lebesgue points of the horizontal velocity. Here this is more subtle than in the Euclidean case.
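The parenthetical remark above can be illustrated numerically in the first Heisenberg group: an $L^1$-perturbation of size $\varepsilon$ of the horizontal velocity moves the curve by $O(\varepsilon)$ in the sup norm. The velocities, the perturbation size and the constant $10$ below are illustrative assumptions for bounded data on $[0,1]$, not quantities from the paper.

```python
import math

def hpath(alpha, T=1.0, n=4000):
    """Euler integration of the horizontal ODE in H^1 from the origin."""
    h = T / n
    x1 = x2 = x3 = 0.0
    pts = [(x1, x2, x3)]
    for k in range(n):
        a1, a2 = alpha(k * h)
        x3 += h * 0.5 * (x1 * a2 - x2 * a1)
        x1 += h * a1
        x2 += h * a2
        pts.append((x1, x2, x3))
    return pts

eps = 0.01                                               # L^1 size of the perturbation
alpha = lambda s: (math.cos(2.0 * s), math.sin(s))
beta = lambda s: (math.cos(2.0 * s) + eps, math.sin(s))  # ||alpha - beta||_{L1(0,1)} = eps
p, q = hpath(alpha), hpath(beta)
sup_dist = max(math.dist(a, b) for a, b in zip(p, q))    # sup-norm distance of the curves
assert 0.0 < sup_dist <= 10.0 * eps
```

The constant in the bound depends on the sup-norm of the curves and velocities, exactly as in a Gronwall-type argument; here it is generously overestimated.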
\begin{lemma}\label{Lebesgueapprox}
Suppose $\dot \xi(s)=\alpha_1(s)X_1(\xi(s))+\ldots+\alpha_m(s)X_m(\xi(s))$ and $t_0\in \mathbb{R}$ is a Lebesgue point for $\alpha_1,\dots,\alpha_m$, that means
\begin{equation}\label{lebesguepointconv}
\lim_{\delta\to 0}\max_{i=1,\ldots,m}\delta^{-1}\int_{t_0-\delta}^{t_0+\delta}|\alpha_i(s)-\alpha_i(t_0)| ds=0.
\end{equation}
Consider the $\mathcal{X}$-line $\ell(s):=l^{\alpha(t_0)}(s)$ i.e.
$$
\dot \ell(s)=\alpha_1(t_0)X_1(\ell(s))+\ldots+\alpha_m(t_0)X_m(\ell(s)),\quad \ell(t_0)=\xi(t_0).
$$
Then for any $\varepsilon>0$ there exists $\delta_0>0$ such that for all $\delta<\delta_0$
\begin{equation}\label{xlineapprox}
\sup_{[t_0-\delta,t_0+\delta]}d_{CC}(\xi(s),\ell(s))<\varepsilon\left(\delta+\int_{t_0-\delta}^{t_0+\delta}|\alpha(s)|\,d\,s \right).
\end{equation}
\end{lemma}
\begin{proof} We use the exponential representation of the Carnot group. Moreover, since the Carnot-Carath\'eodory distance is locally equivalent to the homogeneous distance, we estimate $\|-\ell(s)\circ \xi(s)\|_h$.
{\em Case 1: the Heisenberg group $\mathbb{H}$.}
We prove the claim in the Heisenberg group $\mathbb{H}$ by explicit computations. W.l.o.g. we assume $t_0=0$; for the first two coordinates we have
$$
\xi_i(t)=\ell_i(0)+t\alpha_i(0)+t\ \underbrace{\left(
\frac{1}{t}\int_0^t(\alpha_i(s)-\alpha_i(0))ds\right)}_{:=r_i(t)}=\ell_i(t)+t\,r_i(t).
$$
For the third coordinate we have
\begin{eqnarray*}
\ell_3(t)&=&\frac{t}{2}\left(\ell_1(0)\alpha_2(0)-\ell_2(0)\alpha_1(0)\right)+\ell_3(0)\\
\xi_3(t)&=&\int_0^t\frac{1}{2}\left(\xi_1(s)\alpha_2(s)-\xi_2(s)\alpha_1(s)\right)ds+\ell_3(0).
\end{eqnarray*}
Writing $\alpha_2(s)=\alpha_2(s)\pm \alpha_2(0)$, we get
\begin{eqnarray*}
\int_0^t\xi_1(s)\alpha_2(s)ds
&=&\frac{t^2}{2}\alpha_1(0)\alpha_2(0)+t\ell_1(0)\alpha_2(0)+\ell_1(0)t\,r_2(t)+\\&&+\int_0^t s\, r_1(s)\alpha_2(s)ds+\alpha_1(0)\int_0^ts(\alpha_2(s)-\alpha_2(0))ds,
\end{eqnarray*}
and as $\int_0^t\xi_2(s)\alpha_1(s)ds$ can be treated similarly, we have
\begin{eqnarray*}
\xi_3(t)&=&\ell_3(t)+\frac{t}{2}\left(\ell_1(0)r_2(t)-\ell_2(0)r_1(t)\right)\\
&&+\int_0^ts\big(r_1(s)\alpha_2(s)-r_2(s)\alpha_1(s)\big)ds\\&& +\int_0^ts\left[\alpha_1(0)\big(\alpha_2(s)-\alpha_2(0)\big)-\alpha_2(0)\big(\alpha_1(s)-\alpha_1(0)\big)\right]ds.
\end{eqnarray*}
Let us denote the last two lines by $R(t)$. Since
$$
(-\ell(t)\circ \xi(t))_3=\xi_3(t)-\ell_3(t)+\frac{\ell_2(t)\xi_1(t)-\ell_1(t)\xi_2(t)}{2},
$$
we get
$$
(-\ell\circ \xi)_3=R(t)+t^2\frac{\alpha_2(0)r_1(t)-\alpha_1(0)r_2(t)}{2}.
$$
All error terms can be estimated by $t^2\|\alpha\|_\infty\sup_{[0,t]}\max_{i=1,2}|r_i|$, but we need an estimate whose constant depends only on $\|\alpha_i\|_{L^1}$.\\
Note that $|R(t)|$ can be estimated by
\begin{align*}
&\left|\int_0^t s\alpha_1(0)\big(\alpha_2(s)-\alpha_2(0)\big)ds\right|\le
t\,|\alpha_1(0)|\int_0^t|\alpha_2(s)-\alpha_2(0)|\,ds \le|\alpha_1(0)||r_2(t)| t^2\\
&\left|\int_0^tsr_1(s)\big(\alpha_2(s)-\alpha_2(0)+\alpha_2(0)\big)ds\right|
\le |\alpha_2(0)|\sup_{[0,t]}|r_1|t^2+|r_2(t)|\sup_{[0,t]}|r_1|t^2,
\end{align*}
and similarly for the remaining terms.
Denoting by $r(t)$ a term vanishing with $|r_1(t)|+|r_2(t)|$, we have to show that terms of the form $\sqrt{|\alpha_i(0)|}\;t\,r(t)$ can be estimated in a way which can be summed over a partition of the unit interval. Since for $t\in[0,1]$
$$
\sqrt{|\alpha_i(0)|}\;t\le \frac{1}{2}|\alpha_i(0)|t+
\frac{1}{2},
$$
and
$$
|\alpha_i(0)|t\le \int_0^t|\alpha_i(s)-\alpha_i(0)|ds
+\int_0^t|\alpha_i(s)|ds =t|r_i(t)|+\int_0^t|\alpha_i(s)|ds,
$$
the claim follows by applying these estimates on both sub-intervals $(t_0-\delta,t_0)$ and $(t_0,t_0+\delta)$.
{\em Case 2: general case.}
{\em Step $1$.} As the left-translation leaves the CC-distance between two points invariant, we may assume w.l.o.g. $t_0=0$ and $\xi(0)=\ell(0)=0$.\\
For a general path $\eta:[0,T]\to \mathbb{R}^m$ we define the 1-variation norm in the following way: let $\Delta[0,T]$ be the family of all partitions of the interval $[0,T].$ Then
$$
|\eta|_{1-var[0,T]}=\sup_{\Delta[0,T]}\sum_{(t_k,t_{k+1})\in \Delta[0,T]}|\eta(t_{k+1})-\eta(t_{k})|.
$$
It is easy to see that
\begin{equation}\label{bus}
|\eta|_{1-var[0,T]}=\sup_{\Delta[0,T]}\sum\left|\int_{t_k}^{t_{k+1}}\dot \eta(s) ds\right|\le\int_0^T|\dot \eta(s)|ds.
\end{equation}
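As a numerical sanity check of \eqref{bus} (outside the proof): for an absolutely continuous path, the variation over any partition never exceeds the integral of the speed. The path below is an arbitrary illustrative choice.

```python
import math
import random

eta = lambda s: math.sin(3.0 * s)   # a smooth scalar path on [0, 1]; illustrative
n = 2000
# Midpoint quadrature of int_0^1 |eta'(s)| ds, with eta'(s) = 3 cos(3s).
total = sum(3.0 * abs(math.cos(3.0 * (k + 0.5) / n)) / n for k in range(n))

random.seed(0)
max_var = 0.0
for _ in range(200):
    # A random partition of [0, 1] with about 30 interior points.
    pts = sorted({0.0, 1.0} | {random.random() for _ in range(30)})
    var = sum(abs(eta(b) - eta(a)) for a, b in zip(pts, pts[1:]))
    max_var = max(max_var, var)
assert max_var <= total + 1e-6      # |eta|_{1-var} <= int_0^1 |eta'|
```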
We define two paths in ${\mathbb R}^m$ as the projections of $\xi$ and $\ell$ onto the first $m$ components, and we denote them respectively by $\eta^\xi$ and $\eta^\ell$. Using the structure given by \eqref{matrixC}, we have $\dot \eta^\xi(t)=\alpha(t)$ and $\dot \eta^\ell(t)=\alpha(0)$; then, for $\delta$ sufficiently small, \eqref{bus} implies
$$
|\eta^\xi-\eta^\ell|_{1-var[0,\delta]}\le \delta \varepsilon
$$
because by assumption $t_0=0$ is a Lebesgue point for $\alpha$.\\
{\em Step $2$.} By \cite[Proposition 7.63]{FV},
we have for the signature of the path (i.e. for the difference of all iterated integrals)
\begin{align*}
&\sup_{k=1,\ldots, n-m}\sup_{0<t_1<\delta}\left|\int_0^{t_1}\int_0^{t_2}\ldots\int_0^{t_k}d\eta^\xi(s_1)\otimes \ldots\otimes d\eta^\xi(s_k)ds_1 \ldots ds_k\right.\\& \left.-\int_0^{t_1}\int_0^{t_2}\ldots\int_0^{t_k}d\eta^\ell(s_1)\otimes \ldots \otimes d\eta^\ell(s_k)ds_1 \ldots ds_k\right|\delta^{-k}<C\varepsilon.
\end{align*}
{\em Step $3$.} The Chen-Strichartz formula, which is a deep generalization of the Baker-Campbell-Hausdorff formula, allows one to compute the solution of flows driven by absolutely continuous paths by multiplying the terms appearing in the signature by the corresponding commutators of the vector fields. Adapting the notation of \cite{Baudoin}, Ch.~2, to the notation used here, we have, for a path $\xi$ as in \eqref{EQ_Horizontal} starting from the origin (in exponential coordinates),
$$
\xi =\sum_{k=1}^r
\sum_{
I=\{i_1,...,i_k\}}
\Lambda_I(\alpha_1,\ldots,\alpha_m)X_I.
$$
Here
$$
X_I:=[X_{i_1} , [X_{i_2} , \ldots, [X_{i_{k-1}} , X_{i_k} ]\ldots].
$$
Moreover for $t_1<\ldots<t_k<\delta$
$$
\Lambda_I:=\sum_{\sigma\in S_k}
\frac{(-1)^{e(\sigma)}}{k^2
\left(\begin{array}{c}k-1\\ e(\sigma)\end{array}\right)}
\int_0^{t_1}\int_0^{t_2}\ldots\int_0^{\delta}\alpha_{i_1}(t_{i_1})
\ldots \alpha_{i_k}(t_{i_k})dt_{i_1} \ldots dt_{i_k},
$$ where $S_k$ is the symmetric group on $k$ elements and $e(\sigma)$ is a nonnegative integer depending only on the permutation $\sigma\in S_k$ (see \cite{Baudoin}, Ch.~1).
Note that all sums are finite as the Lie algebra is nilpotent, and that the projection of the solution on the $k$-th layer of the graded algebra is a multiple of a $k$-times iterated integral of the $\alpha_i$, $i=1,\ldots,m$.
Combining Step 2 and Step 3, the desired estimate in the homogeneous (and hence Carnot-Carath\'eodory) distance follows.
\end{proof}
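The conclusion of Lemma \ref{Lebesgueapprox} can be probed numerically in the first Heisenberg group: freezing the velocity at the Lebesgue point $t_0=0$ and measuring the discrepancy in a homogeneous distance, the error divided by $\delta$ decreases as $\delta$ shrinks, consistently with \eqref{xlineapprox}. The homogeneous norm $\max(|x'|,|x_3|^{1/2})$ used below is one of the equivalent choices, and the velocity is an illustrative assumption.

```python
import math

def hpath(alpha, T, n=4000):
    """Euler integration of the horizontal ODE in H^1, started at the origin."""
    h = T / n
    x1 = x2 = x3 = 0.0
    pts = [(x1, x2, x3)]
    for k in range(n):
        a1, a2 = alpha(k * h)
        x3 += h * 0.5 * (x1 * a2 - x2 * a1)
        x1 += h * a1
        x2 += h * a2
        pts.append((x1, x2, x3))
    return pts

def hom_dist(p, q):
    """Homogeneous distance ||(-p) o q||_h in H^1; third component as in the proof."""
    d1, d2 = q[0] - p[0], q[1] - p[1]
    d3 = q[2] - p[2] + 0.5 * (p[1] * q[0] - p[0] * q[1])
    return max(math.hypot(d1, d2), math.sqrt(abs(d3)))

alpha = lambda s: (math.cos(s), math.sin(2.0 * s))  # smooth, so t0 = 0 is a Lebesgue point
xline = lambda s: (1.0, 0.0)                        # frozen velocity alpha(0)

def scaled_error(delta):
    xi, ell = hpath(alpha, delta), hpath(xline, delta)
    return max(hom_dist(p, q) for p, q in zip(xi, ell)) / delta

e_big, e_small = scaled_error(0.2), scaled_error(0.02)
assert e_small < e_big   # the error is o(delta) relative to the interval length
```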
In the following lemma we build a partition using sub-intervals where we can apply the previous lemma up to a set of Lebesgue-measure arbitrarily small.
\begin{lemma}\label{covering}
For any $\rho>0$ and $\delta>0$ there exist natural numbers $N_1$ and $N_2$, and a partition of $[0,1)$ formed by the union of the intervals $I_k=[t_k-\ell_k,t_k+\ell_k)$ for $k=1,\dots, {N_1}$ and the intervals $J_k=[t_k'-\ell_k',t_k'+\ell_k')$ for $k=1,\dots, {N_2}$ such that
\begin{itemize}
\item $0<\ell_k<\rho$ for $k=1,\ldots, N_1$,
\item for $k=1,\dots,N_1$ and $i=1,\ldots,m$ we have
$$\int_{t_k-r}^{t_k+r}|\alpha_i(s)-\alpha_i(t_k)|ds<r\delta,\quad \textrm{for }0<r<\ell_k,
$$
\item $\sum_{k=1}^{N_2}|J_k|<\delta.$
\end{itemize}
\end{lemma}
\begin{proof} By the Lebesgue point theorem, there exists a set ${\mathcal N}$ of zero Lebesgue measure such that every $\tau\in [0,1]\setminus{\mathcal N}$ is a joint Lebesgue point of $\alpha_1,\ldots,\alpha_m$. By the definition of the Lebesgue measure, ${\mathcal N}$ can be covered by a countable union of intervals with total length smaller than $\delta$. For each $\tau\in [0,1]\setminus{\mathcal N}$ there exists $\rho_\tau>0$ such that
$$
\int_{\tau-r}^{\tau+r}|\alpha_i(s)-\alpha_i(\tau)|ds<r\delta,\quad \textrm{for }i=1,\ldots,m \textrm{ and } 0<r<\rho_\tau.
$$
In this way we obtain an open cover of the compact unit interval, from which we extract a finite subcover. These finitely many intervals can be ordered according to their centers and turned into a partition by shortening them, starting from the leftmost center, until the desired partition is obtained.
\end{proof}
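A toy version (for intuition only) of the good/bad classification behind Lemma \ref{covering}: for a velocity that is constant except for one jump, only the few intervals meeting the jump fail the oscillation condition, and their total length is small. The jump location, the parameters, and the fact that we test the oscillation condition only at the largest radius $r=\ell_k$ are all simplifying assumptions of this sketch.

```python
def alpha(s):
    """A hypothetical horizontal velocity: constant except for one jump."""
    return 0.0 if s < 0.505 else 1.0

K, delta = 50, 0.05          # K intervals of length 1/K; tolerance delta
half = 0.5 / K               # common half-length l_k = rho = 0.01

def oscillation(center, r, n_sub=200):
    # Midpoint quadrature of int_{c-r}^{c+r} |alpha(s) - alpha(center)| ds.
    step = 2.0 * r / n_sub
    return step * sum(abs(alpha(center - r + (j + 0.5) * step) - alpha(center))
                      for j in range(n_sub))

# "Bad" intervals are those where the Lebesgue-point condition fails at r = l_k.
bad_length = sum(2.0 * half for k in range(K)
                 if oscillation((2 * k + 1) * half, half) >= half * delta)
assert 0.0 < bad_length < delta   # the bad set is nonempty but of total length < delta
```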
\begin{figure} [htbp]
\begin{center}
\includegraphics[scale=0.5]{liminf_Paola.pdf}
\end{center}
\caption{This picture illustrates Steps 3 and 4 of Lemma \ref{LemmaLiminfnuovo}.}
\label{Figura-C}
\end{figure}
In the following lemma we prove the main lower bound for the liminf of $L^\varepsilon$.
\begin{lemma}
\label{LemmaLiminfnuovo}
Let us assume that $L(x,q,\omega)$ satisfies assumptions {\bf(L1)-(L4)} and {\bf (L6)}.
Then, locally uniformly in $t>0$, $x,y\in \mathbb{R}^N$ and a.s. in $\omega\in \Omega$
\begin{equation}
\label{LIMINFN}
\liminf_{\varepsilon\to 0^+}L^{\varepsilon}(x,y,t,\omega)
\geq t\inf_{\alpha\in \mathcal{F}^t_{y,x} } \int_{0}^{t} \overline{L}(\alpha(s)) ds ,
\end{equation}
where $\mathcal{F}^t_{y,x}$ is the set of all the $m$-valued measurable functions $\alpha:[0,t]\to \mathbb{R}^m$ such that the corresponding horizontal curve $\xi^{\alpha}(s)$ joins $y$ to $x$ in a time $t$, and $\overline L(q)$ is the effective Lagrangian defined by limit \eqref{EffectiveLagrangian}.
\end{lemma}
\begin{proof}
{\em Step 1:} For the sake of simplicity we assume that $t=1$ and that for each $\varepsilon$ there exists a minimizing curve $\xi^\varepsilon$ for $L^\varepsilon(x,y,1,\omega)$.
We observe that, by Corollary~\ref{corollary71}, the sequence $\{\xi^\varepsilon\}_\varepsilon$ is equibounded in $L^\infty(0,1)$ and in $W^{1,\lambda}(0,1)$.
In particular, it is equibounded in the H\"older norm with H\"older exponent $\gamma <1-\frac{1}{\lambda}$. Hence, by the Ascoli-Arzel\`a theorem and the Sobolev embedding theorem, up to a subsequence, $\xi^\varepsilon$ converges uniformly to some H\"older curve~$\bar \xi$.
We claim that $\bar \xi$ is horizontal. Actually, by Proposition~\ref{prp71}, there holds $\|\alpha^{\xi^\e}\|_{L^\lambda}<C_2$; hence, possibly passing to a subsequence, $\{\alpha^{\xi^\e}\}$ weakly converges to some $\bar \alpha$ in $L^\lambda$. Moreover, there holds
\[
\xi^\e(t)=\sum_{i=1}^m\int_0^t \alpha^{\xi^\e}_i(s) X_i(\xi^\e(s))\, ds;
\]
so, taking into account that $\xi^\e$ uniformly converge to $\bar \xi$, we infer
\[
\bar\xi(t)=\sum_{i=1}^m\int_0^t \bar\alpha_i(s) X_i(\bar \xi(s))\, ds,
\]
which means that $\bar \xi$ is horizontal with $\alpha^{\bar\xi}=\bar\alpha$. Finally, smoothing the horizontal velocity $\alpha^{\bar \xi}$, we obtain a family of smooth horizontal curves uniformly approximating $\bar \xi$. Therefore, from now on, w.l.o.g. we assume that $\bar \xi$ is admissible.
{\em Step 2:} Choose a partition of $[0,1]$ in intervals $I_k$ and $J_k$ as in Lemma \ref{covering} and error $\delta$. Denote by
$$
{\mathcal B}:=\bigcup_{k=1}^{N_2}J_k
$$
the bad set, whose total length is smaller than $\delta$.
By the a-priori bounds on $\|\alpha\|_{L^1}$ (see Corollary~\ref{corollary71}), assumption~{\bf (L2)} and the continuity of $\bar L$ (see Proposition~\ref{prop5}), there exists $r(\delta)\to 0$ as $\delta\to 0$ such that
\begin{eqnarray}\notag
L^{\varepsilon}(x,y,1,\omega)&=&\int_0^1 L\left(\delta_{1/\varepsilon}(\xi^{\varepsilon}(s)),\alpha^{\xi^{\varepsilon}}(s),\omega\right) ds\\
\label{pizza}
&\ge &\int_{[0,1]\setminus {\mathcal B}} L\left(\delta_{1/\varepsilon}(\xi^{\varepsilon}(s)),\alpha^{\xi^{\varepsilon}}(s),\omega\right) ds-r(\delta),
\\
\notag
\int_{0}^{1} \overline{L}(\alpha^{\overline\xi}(s)) ds&\ge&
\int_{[0,1]\setminus {\mathcal B}}\overline{L}(\alpha^{\overline\xi}(s)) ds-r(\delta).
\end{eqnarray}
Hence we can ignore the bad intervals.\\
As the number of good intervals $N_1$ is fixed and finite and $\xi^\varepsilon$ uniformly converges to $\bar \xi$, we can choose $\varepsilon$ sufficiently small such that
\begin{equation}\label{convunifdCC}
\max_{k=1,\ldots N_1}\sup_{I_k}|I_k|^{-1}d_{CC}(\xi^{\varepsilon}(s), \overline\xi(s))<\delta.
\end{equation}
In the interval $I_k$, consider the constant velocity $\alpha^{\overline\xi}(t_k)$.
By the continuity of $\overline L$, the integral on the right-hand side of \eqref{LIMINFN} can be approximated by Riemann sums, ignoring the bad intervals:
\begin{equation}
\label{pizza3}
\sum_{k=1}^{N_1}|I_k|\overline{L}\left(
\alpha^{\overline\xi}(t_k)
\right)\to \int_0^1\overline{L}(\alpha^{\overline\xi}(s)) ds, \quad \textrm{as}\; N_1\to +\infty.
\end{equation}
{\em Step 3:}
Let us now consider one ``good'' interval, w.l.o.g. denoted by $I=[t-\ell,t+\ell]$, and consider the $\mathcal{X}$-line through $\overline\xi(t)$ with velocity $\alpha^{\overline\xi}(t)$, which we denote by $l^{\overline\xi}$ (see Figure \ref{Figura-C}).
Let us consider a curve $\overline\xi^{\varepsilon}$ with
$\overline\xi^{\varepsilon}(t-\ell)=l^{\overline\xi}(t-\ell)$ and $\overline\xi^{\varepsilon}(t+\ell)=l^{\overline\xi}(t+\ell)$ which is the minimizer of
$$
L^\varepsilon\left(l^{\overline\xi}(t+\ell), l^{\overline\xi}(t-\ell),I,\omega\right)=\int_{t-\ell}^{t+\ell}
L\left(\delta_{1/\varepsilon}(\overline \xi^{\varepsilon}(s)),\alpha^{\overline \xi^{\varepsilon}}(s),\omega\right) ds,
$$
where $L^\varepsilon\left(x,y,I,\omega\right)$ is defined as in \eqref{Lepsilon}, with the infimum taken over the admissible curves with $\xi(t-\ell)=y$ and $\xi(t+\ell)=x$.\\
From Theorem~\ref{Lsegnato}, we can choose $\varepsilon$ sufficiently small such that
\begin{eqnarray}
\label{pizza2}
\int_{t-\ell}^{t+\ell} L\left(\delta_{1/\varepsilon}(\overline\xi^{\varepsilon}(s)),\alpha^{\overline\xi^{\varepsilon}}(s),\omega\right) ds&\ge& |I| \overline{L}(\alpha^{\overline\xi}(t))-|I|\delta.
\end{eqnarray}
This can be done uniformly for all good intervals $I_k$, as their number is already fixed.
{\em Step 4: } We now claim that
\begin{multline}\label{claim2}
\int_{I} L\left(\delta_{1/\varepsilon}(\xi^{\varepsilon}(s)),\alpha^{\xi^{\varepsilon}}(s),\omega\right) ds \geq\\
\int_{I} L\left(\delta_{1/\varepsilon}(\overline\xi^{\varepsilon}(s)),\alpha^{\overline\xi^{\varepsilon}}(s),\omega\right) ds-C\left(|I|+\|\alpha^{\overline\xi}\|_{L^1(I)}\right)\delta.
\end{multline}
Let us now prove the claim \eqref{claim2}: by Lemma \ref{Lebesgueapprox} we know that
\begin{equation}\label{distanzacurve}
\sup_{I}d_{CC}(l^{\overline\xi}, \overline\xi)<C\left(|I|+\|\alpha^{\overline\xi}\|_{L^1(I)}\right)\delta.
\end{equation}
Consider the points
$$P_1:=\overline\xi^\varepsilon(t-\ell)=l^{\overline{\xi}}(t-\ell),\quad
P_2:= \xi^{\varepsilon}(t-\ell), \quad
P_3:=\xi^{\varepsilon}(t+\ell),\quad
P_4:=\overline \xi^{\varepsilon}(t+\ell)=l^{\overline{\xi}}(t+\ell)
$$
(see Figure \ref{Figura-C}).
Then
by \eqref{convunifdCC},
and \eqref{distanzacurve}
$$
d_{CC}(P_1,P_2)\le d_{CC}(l^{\overline \xi}(t-\ell),\overline\xi(t-\ell))+d_{CC}(\overline{\xi}(t-\ell),\xi^\varepsilon(t-\ell))\le C\delta\left(
|I|+\|\alpha^{\overline\xi}\|_{L^1(I)}
\right),
$$
and analogously for $
d_{CC}(P_3,P_4)$. By Lemmas \ref{replacement} and \ref{lemma71} we have
$$
L^\varepsilon(P_3,P_2,I,\omega)\ge L^\varepsilon(P_4,P_1,I,\omega)-\delta C\,
\left(|I|+\|\alpha^{\overline\xi}\|_{L^1(I)}\right).
$$
Since $\xi^{\varepsilon}$ is admissible for
$
L^\varepsilon(P_3,P_2,I,\omega)
$
then
$$
L^\varepsilon(P_3,P_2,I,\omega)\le\int_{I} L\left(\delta_{1/\varepsilon}(\xi^{\varepsilon}(s)),\alpha^{\xi^{\varepsilon}}(s),\omega\right) ds.
$$
Combining the last two inequalities, \eqref{claim2} is shown.
{\em Step 5:}\\
Since the claim \eqref{claim2} holds for each of the good intervals $I_k$, we can easily conclude. Indeed, using respectively that $\xi^{\varepsilon}$ is a minimizer of $ L^\varepsilon(x,y,1,\omega)$, \eqref{pizza}, Definition~\ref{CC-distance_definiton} and \eqref{pizza2},
\begin{align*}
L^\varepsilon(x,y,1,\omega)&=
\int_0^1 L\left(\delta_{1/\varepsilon}(\xi^{\varepsilon}(s)),\alpha^{\xi^{\varepsilon}}(s),\omega\right) ds
\\
&\ge \sum_{k=1}^{N_1}
\int_{I_k} L\left(\delta_{1/\varepsilon}(\xi^{\varepsilon}(s)),\alpha^{\xi^{\varepsilon}}(s),\omega\right) ds - r(\delta)
\\
&\geq
\sum_{k=1}^{N_1}\int_{I_k}\!\!\!\! L\left(\delta_{1/\varepsilon}(\overline\xi_k^{\varepsilon}(s)),\alpha^{\overline\xi_k^{\varepsilon}}(s),\omega\right) ds -\!\!\!\sum_{k=1}^{N_1}C\delta\left(|I_k|+\|\alpha^{\overline\xi}\|_{L^1(I_k)}\right)\!\!- r(\delta)\\
&\ge
\sum_{k=1}^{N_1}\int_{I_k} L\left(\delta_{1/\varepsilon}(\overline\xi_k^{\varepsilon}(s)),\alpha^{\overline\xi_k^{\varepsilon}}(s),\omega\right) ds -C\delta(1+d_{CC}(x,y)) - r(\delta)\\
&\ge
\sum_{k=1}^{N_1}|I_k|\overline{L}\left(\alpha^{\overline\xi}(t_k)\right)-C\delta(1+d_{CC}(x,y))-r(\delta)
\\
&\ge
\int_0^1\overline{L}(\alpha^{\overline\xi}(s)) ds -r(\delta) -C\delta(1+d_{CC}(x,y))\\
&\longrightarrow \int_0^1\overline{L}(\alpha^{\overline\xi}(s)) ds,\quad \textrm{ as}\; \delta\to 0.
\end{align*}
\end{proof}
In the following lemma we prove the upper bound for the limsup of $L^{\varepsilon}$.
\begin{lemma}
\label{LemmaLimsup}
Let us assume that $L(x,q,\omega)$ satisfies assumptions {\bf(L1)-(L4)} and {\bf (L6)}.
Then locally uniformly in $t>0$, $x,y\in \mathbb{R}^N$ and a.s. in $\omega\in \Omega$
\begin{equation}
\label{LIMSUP}
\limsup_{\varepsilon\to 0^+}L^{\varepsilon}(x,y,t,\omega)
\leq t\inf_{\alpha\in \mathcal{F}^t_{y,x} } \int_{0}^{t} \overline{L}(\alpha(s)) ds ,
\end{equation}
where $\mathcal{F}^t_{y,x}$ is the set of all the $m$-valued measurable functions $\alpha:[0,t]\to \mathbb{R}^m$ such that the corresponding horizontal curve $\xi^{\alpha}(s)$ joins $y$ to $x$ in a time $t$ and $\overline L(q)$ is the effective Lagrangian defined by limit \eqref{EffectiveLagrangian}.\end{lemma}
\begin{proof}
W.l.o.g. we show the result for $t=1$.
Let us choose $\overline{\alpha}\in \mathcal{F}_{y,x}^1$ which realizes the infimum on the right-hand side of \eqref{LIMSUP}. We
assume that $\overline{\alpha}$ is smooth; indeed, we can uniformly approximate $\overline{\alpha}$ by smooth horizontal velocities and use the continuity of
$\overline{L}$.\\
We fix a partition $\pi$ of the interval $[0,1]$ in $n$ equal length intervals $\left(\frac{i-1}{n},\frac{i}{n}\right)$ and we set
$$\overline{\alpha}^i=\overline{\alpha}\left(\frac{2i-1}{2n}
\right),\quad i=1,\dots,n.$$
We define the step-function
$$
\overline{\alpha}^{\pi}(s):=\overline{\alpha}^i=\textrm{constant},\quad \forall \, s\in \left(\frac{i-1}{n},\frac{i}{n}\right),\;\textrm{for}\; i=1,\dots,n.
$$
By
Taylor expansion, we know that
\begin{equation}\label{corvo}
\norma{\overline{\alpha}-\alpha^{\pi}}_{L^1(0,1)}=O\left(\frac{1}{n}\right).
\end{equation}
Note that the constants in the Taylor expansion depend on the higher derivatives of $\overline \alpha$, but these are fixed throughout the proof; in particular, they do not depend on $\varepsilon$.
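The estimate \eqref{corvo} can be checked numerically: replacing a smooth velocity by its midpoint values on $n$ equal intervals gives an $L^1$ error of order $1/n$, so doubling $n$ roughly halves the error. The scalar velocity below is an illustrative assumption.

```python
import math

alpha_bar = lambda s: math.sin(2.0 * math.pi * s)  # a smooth scalar velocity (illustrative)

def l1_error(n, fine=200):
    """||alpha_bar - alpha_bar^pi||_{L1(0,1)}, alpha_bar^pi = midpoint values on n intervals."""
    err, h = 0.0, 1.0 / n
    for i in range(n):
        mid_val = alpha_bar((i + 0.5) * h)         # the value alpha_bar((2i-1)/(2n))
        step = h / fine
        err += step * sum(abs(alpha_bar(i * h + (j + 0.5) * step) - mid_val)
                          for j in range(fine))
    return err

e1, e2 = l1_error(20), l1_error(40)
assert e2 < 0.7 * e1   # doubling n roughly halves the L^1 error, consistent with O(1/n)
```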
We define the following sequence of points in $\mathbb{R}^N$:
$$
z^0=y , \quad z^{i}=\zeta^i(1/n),
$$
where $\zeta^i:\left[0,\frac{1}{n}\right]\to \mathbb{R}^N$ is the unique $\mathcal{X}$-line starting from $z^{i-1}$ with constant horizontal velocity $\overline\alpha^i$ (i.e. $\dot{\zeta^i}(s)=\sum_{j=1}^m\overline\alpha_j^iX_j(\zeta^i(s))$, for $s\in\left[0,\frac{1}{n}\right]$, with $\zeta^i(0)=z^{i-1}$).
Note that $z^i\in V_{z^{i-1}}$, for all $i=1,\dots,n$.\\
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.4]{limsup_Paola.pdf}
\end{center}
\caption{This picture illustrates the arguments of the proof of Lemma \ref{LemmaLimsup}.}
\label{Figura-N}
\end{figure}
Since in general $\mathcal{X}$-lines do not minimize the integral functional between the two points $z^{i-1}$ and $z^{i}$, we consider the
curves $\eta_i^\varepsilon(s)$ which are minimizers of $L^{\varepsilon}\left(z^i ,z^{i-1},\frac{1}{n},\omega\right)$.
We look at the two curves (see Figure \ref{Figura-N}):
\begin{align*}
& \overline{\xi}\in \mathcal{A}^1_{y,x}\;\textrm{horizontal curve associated to the horizontal velocity}\;\overline{\alpha}(s),\\
& \overline{\eta}^\varepsilon\in \mathcal{A}^1_{y,x'}\;\textrm{horizontal curve defined as the union of the curves $\eta^\varepsilon_i$, for $i=1,\dots,n$}
\end{align*}
where $x':=\zeta^n(1/n)$ satisfies, by \eqref{corvo} and Lemma \ref{aprroxCurveLemma},
\begin{equation}\label{civetta}
|x-x'|=O\left(\frac{1}{n}\right).
\end{equation}
Note that $\overline{\eta}^\varepsilon$ is an admissible curve, i.e. $\overline{\eta}^\varepsilon\in \mathcal{A}^1_{y,x'}$, but it
may not be a minimizer for $L^{\varepsilon}\left(x',y,1,\omega\right)$. Moreover, the curve $\overline{\eta}^\varepsilon$ depends on $\overline{\alpha}$, on the partition $\pi$ (i.e. on $n$) and on $\varepsilon$; nevertheless, these dependences do not affect our final estimate.\\
From \eqref{civetta} and the continuity of $L^{\varepsilon}$ in $y$, uniformly
w.r.t. $\varepsilon$, denoting by $o(1)$ a quantity which goes to zero as $n\to +\infty$,
we get
\begin{eqnarray}
\notag
L^{\varepsilon}\left(x,y,1,\omega\right)&=&
L^{\varepsilon}\left(x',y,1,\omega\right)+o(1)\\
\notag
&\leq&
\int_0^1L\left(\delta_{1/\varepsilon}(\overline{\eta}^\varepsilon(s)), \alpha^{\overline{\eta}^\varepsilon}(s),\omega\right) ds+
o(1)\nn\\
\notag
&=&
\sum_{i=1}^n\int_{0}^{1/n}L\left(\delta_{1/\varepsilon}({\eta}^\varepsilon_i(s)), \alpha^{{\eta}^\varepsilon_i}(s),\omega\right) ds+o(1)\\
\label{speriamo}
&=&
\sum_{i=1}^nL^{\varepsilon}\left(z^{i},z^{i-1},\frac{1}{n},\omega\right)
+o(1)
\end{eqnarray}
where we have used the definition of $\overline{\eta}^\varepsilon$ as a union of minimizers on each interval of the partition.
Now we first choose $n$ large enough that the $o(1)$ term in the last line is smaller than the desired error. In the next step we then choose $\varepsilon$ depending on $n$.\\
Let us assume for the moment the following claim
\begin{equation}\label{convetai}
L^{\varepsilon}\left(z^i, z^{i-1},\frac{1}{n},\omega\right)=\frac{1}{n}\left(\overline L(\overline\alpha^i)+r( \varepsilon)\right),
\end{equation}
where $r(\varepsilon)$ is a function which goes to zero as $\varepsilon\to 0$.
Hence,
by \eqref{speriamo} and \eqref{convetai},
we get
\begin{eqnarray*}
L^{\varepsilon}\left(x,y,1,\omega\right)&\leq&
\sum_{i=1}^n\frac{1}{n}\left(\overline L(\overline\alpha^i)+ r(\varepsilon)\right)+ o(1)\\
&=&
\sum_{i=1}^n \frac{1}{n}\overline L(\overline\alpha^i) + r(\varepsilon)+ o(1)
\longrightarrow
\int_{0}^{1} \overline{L}(\overline\alpha(s)) ds,
\end{eqnarray*}
letting first $\varepsilon\to 0^+$ and then $n\to +\infty$.
It remains only to prove \eqref{convetai}.
By stationarity (see also \eqref{gabbiani}) we have
$$
L^{\varepsilon}\left(z^i,z^{i-1},\frac{1}{n},\omega\right)
=L^{\varepsilon}\left(-z^{i-1}\circ z^{i},0,\frac{1}{n},\tau_{\delta_{1/\varepsilon}(z^{i-1})}(\omega)\right).
$$
Using the relation between the Euclidean distance and the Carnot-Carath\'eodory distance and the fact that $\mathcal{X}$-lines are horizontal curves, we get
$$
|-z^{i-1}
\circ z^i |\leq C
d_{CC}(z^{i-1},z^{i})\leq C\int_0^{\frac{1}{n}}
|\overline{\alpha}^i| d s=C_i\frac{1}{n},$$
where $C_i=C|\overline{\alpha}^i|$.\\
Then, up to a constant, we can write
$-z^{i-1}
\circ z^i=\frac{\overline{z}}{n}$ where $|\overline{z}|\leq 1$ (Euclidean norm in $\mathbb{R}^N$). Setting $\overline{z}^1=\pi_m(\overline{z})$ and using identity \eqref{gabbiani}
\begin{eqnarray*}
L^{\varepsilon}\left(z^i,z^{i-1},\frac{1}{n},\omega\right)
&=&L^{\varepsilon}\left(\frac{\overline z}{n},0,\frac{1}{n},\tau_{\delta_{1/\varepsilon}(z^{i-1})}(\omega)\right)\\
&=&\varepsilon\mu_{(-\overline z^1)}\left(\left[0,\frac{1}{\varepsilon n}\right),\tau_{\delta_{1/\varepsilon}(z^{i-1})}(\omega)\right)\\&=&
\frac{1}{n}(\varepsilon n)\mu_{\overline z^1}\left(\left[0,\frac{1}{\varepsilon n}\right),\tau_{\delta_{1/\varepsilon}(z^{i-1})}(\omega)\right)\\&=&
\frac{1}{n}\big(\overline\mu(\overline z^1)+ r(\varepsilon)\big)=
\frac{1}{n}\big(\overline L(\overline\alpha^i)+r(\varepsilon)\big),
\end{eqnarray*}
where one can use the same argument
as in the proof of Theorem \ref{Lsegnato} to show that
$
\frac{1}{n}(\varepsilon n)\mu_{\overline z^1}([0,\frac{1}{\varepsilon n}),\tau_{\delta_{1/\varepsilon}(z^{i-1})}(\omega))\approx\frac{1}{n}(\varepsilon n)\mu_{\overline z^1}([0,\frac{1}{\varepsilon n}),\omega)
$, as $\varepsilon\to 0^+$.
\end{proof}
Combining Lemmas \ref{LemmaLiminfnuovo} and \ref{LemmaLimsup} we are finally able to prove our main convergence result.
\begin{theorem}\label{Thconvvar}
Let us assume that $L(x,q,\omega)$ satisfies assumptions {\bf(L1)-(L4)} and {\bf (L6)}.
\begin{enumerate}
\item Then
\begin{equation}\label{PensoPositivo}
\lim_{\varepsilon\to 0^+}L^{\varepsilon}(x,y,t,\omega)=t\inf_{\alpha\in \mathcal{F}^t_{y,x} } \int_{0}^{t} \overline{L}(\alpha(s)) ds ,
\end{equation}
locally uniformly in $t>0$ and $x,y\in \mathbb{R}^N$, and for almost every $\omega\in \Omega$,
where $\mathcal{F}^t_{y,x}$ is the set of all the $\mathbb{R}^m$-valued measurable functions $\alpha:[0,t]\to \mathbb{R}^m$ such that the corresponding horizontal curve $\xi^{\alpha}(s)$ joins $y$ to $x$ in time $t$.\\
\item
Let $g:\mathbb{R}^N\to \mathbb{R}$ be uniformly continuous and let $u^{\varepsilon}(x,t,\omega)$ be defined by \eqref{rapprepsilon}. Then
\begin{equation}
\label{LIMITFINALE}
\lim_{\varepsilon\to 0^+}u^{\varepsilon}(x,t,\omega)=
\inf_{y\in \mathbb{R}^N}
\left[g(y)+
t\inf_{\alpha\in \mathcal{F}^t_{y,x} } \int_{0}^{t} \overline{L}(\alpha(s)) ds
\right],
\end{equation}
locally uniformly in $t>0$ and $x\in \mathbb{R}^N$, and for almost every $\omega\in \Omega$.
\end{enumerate}
\end{theorem}
\section{Homogenization for the Hamilton-Jacobi problem}\label{HomogSection}
We want to use Theorem \ref{Thconvvar} to derive the convergence of the viscosity solutions
of problem \eqref{ApproxPr} to the unique solution of the deterministic problem \eqref{LimitProblem}.
Our strategy is to use the Hopf-Lax variational formula from \cite{BCP}.\\
The key point is the convexity of the effective Lagrangian $\overline L(q)$ defined in \eqref{EffectiveLagrangian}.
In the Euclidean case this is an easy consequence of the Dynamic Programming Principle, but in our degenerate case this strategy fails, since it is not possible to find three points related by a convex combination that simultaneously satisfy the associated constraints.
\begin{proposition}\label{ConvexityTh}
Let us suppose that
$L(x,q,\omega)$ satisfies assumptions {\bf(L1)-(L4)} and {\bf (L6)}.
Then $\overline L(p)$ defined in \eqref{EffectiveLagrangian} is convex in $\mathbb{R}^m$.
\end{proposition}
\begin{proof}
For the sake of simplicity, we prove midpoint convexity (which, since $\overline L$ is continuous, is equivalent to convexity),
i.e. we want to prove
\begin{equation}\label{TH}
\overline L\left(\frac{p+q}{2}\right)\leq \frac{1}{2}\overline L(p)+\frac{1}{2}\overline L(q)\qquad \forall p,q\in\mathbb{R}^m.
\end{equation}
By definition of $\overline{L}$ we have that
\begin{eqnarray}\nn
\overline L\left(\frac{p+q}{2}\right)
&=&\lim_{\varepsilon\to 0^+}L^{\varepsilon}\left(y^{\frac{p+q}{2}},0,1,\omega\right)\\
\label{defL} &=&\lim_{\varepsilon\to 0^+}\, \inf_{\xi\in \mathcal{A}_{0,y^{(p+q)/2}}} \int_0^1
L\left(\delta_{1/\varepsilon}(\xi(s)), \alpha^{\xi}(s),\omega\right)ds
\end{eqnarray}
where $y^{\frac{p+q}{2}}= l^{(p+q)/2}(1)$ and
$l^{(p+q)/2}$ is the $\mathcal{X}$-line starting from $0$ with horizontal velocity $\frac{p+q}{2}$.
We define the curve $\xi_n$ as the horizontal curve with $\xi_n(0)=0$ and horizontal velocity
\begin{eqnarray*}
\alpha^{\xi_n}(s)=
\left\{\begin{array}{cl}p, & \textrm{if } s\in \left[\frac{i-1}{2n}, \frac{i}{2n}\right]\; \textrm{and}\ i\ \mbox{even}, \\
q, & \textrm{if } s\in \left[\frac{i-1}{2n}, \frac{i}{2n}\right]\; \textrm{and}\ i\ \mbox{odd},
\end{array}\right.
\end{eqnarray*}
for $i=1,\dots,2n$.
We call $x_k= \xi_n(\frac{k}{2n})$, $k=0,\dots,2n$ (see Figure \ref{Figura-D}).
We observe that
\begin{equation}\label{xi}
x_i\in\ V_{x_{i+1}}, \quad \forall \; i=1,\dots,2n.
\end{equation}
We claim that
\begin{equation}\label{vicini!}
|\xi_n(1)-y^{(p+q)/2}|=O\left(\frac{1}{n}\right),
\end{equation}
where $|\cdot|$ denotes the Euclidean norm.\\
Assume for the moment that claim \eqref{vicini!} is true.
Then by the uniform continuity of $L^{\varepsilon}$ (see Theorem \ref{UniformContinuityFunctional}) we can deduce
\begin{equation}\label{conconv}
L^{\varepsilon}\big(y^{(p+q)/2},0,1,\omega\big)= L^{\varepsilon}\big( \xi_n(1), 0,1,\omega\big)+
O\left(\frac{1}{n}\right).
\end{equation}
We now consider the curve $\xi_{n}^{\varepsilon}$ obtained by concatenating the curves
$\xi_{i,n}^{\varepsilon}$, defined on $\left[\frac{i-1}{2n}, \frac{i}{2n}\right]$,
that are the minimizers for
$L^{\varepsilon}\left( x_{i+1},x_i ,\frac{1}{2n},\omega\right)$.
Observe that $\xi_{n}^{\varepsilon}$ is an admissible curve between $0$ and $\xi_n(1)$.
Hence
\begin{align}\nn
&L^{\varepsilon}(\xi_n(1), 0,1,\omega)
\leq \int_0^1 L\left(\delta_{1/\varepsilon}(\xi_{n}^{\varepsilon}(s)), \alpha^{\xi_{n}^{\varepsilon}}(s),\omega\right)ds\\ \nn
&=\sum_{i\textrm{ odd}}\int_{\frac{i-1}{2n}}^{\frac{i}{2n}} L\left(\delta_{1/\varepsilon}(\xi_{i,n}^{\varepsilon}(s)), \alpha^{\xi_{i,n}^{\varepsilon}}(s),\omega\right)ds\\
&+\sum_{i\textrm{ even}}\int_{\frac{i-1}{2n}}^{\frac{i}{2n}} L\left(\delta_{1/\varepsilon}(\xi_{i,n}^{\varepsilon}(s)), \alpha^{\xi_{i,n}^{\varepsilon}}(s),\omega\right)ds\nn\\
&=\sum_{i\textrm{ odd}}L^{\varepsilon}\left( x_{i+1},x_i,\frac{1}{2n},\omega\right)+
\sum_{i\textrm{ even}}L^{\varepsilon}\left(x_{i+1},x_i, \frac{1}{2n},\omega\right),\label{conv1}
\end{align}
where the last identity comes from the definition of
$\xi_{i,n}^{\varepsilon}$.
From \eqref{xi}, we can apply~\eqref{convetai}, obtaining
\begin{equation}
\label{Lconvexity2}
L^{\varepsilon}\left(x_{i+1},x_i, \frac{1}{2n},\omega\right)=
\left\{\begin{aligned}
&\frac{1}{2n}\big(\overline L(p)+r(\varepsilon)\big),\quad \textrm{if }i\textrm{ is even} ,\\
&\frac{1}{2n}\big(\overline L(q)+r(\varepsilon)\big), \quad \textrm{if }i\textrm{ is odd}
\end{aligned}
\right.
\end{equation}
where $r(\varepsilon)\to 0$ as $\varepsilon\to 0^+$.
By applying \eqref{defL}, \eqref{conconv}, \eqref{conv1} and \eqref{Lconvexity2}, we have
\begin{eqnarray*}
&&\overline L\left(\frac{p+q}{2}\right)=
\lim_{\varepsilon\to 0^+}L^{\varepsilon}\left( y^{\frac{p+q}{2}},0,1,\omega\right)=
\lim_{\varepsilon\to 0^+}L^{\varepsilon}\big(\xi_n(1), 0,1,\omega\big)+
O\left(\frac{1}{n}\right)\\
&&\leq\lim_{\varepsilon\to 0^+}\left(
\sum_{i\textrm{ odd}}L^{\varepsilon}\left(x_{i+1},x_i, \frac{1}{2n},\omega\right)+
\sum_{i\textrm{ even}}L^{\varepsilon}\left( x_{i+1},x_i,\frac{1}{2n},\omega\right)
\right)+O\left(\frac{1}{n}\right)\\
&&=\lim_{\varepsilon\to 0^+}\left(
\frac{1}{2}\overline L(p)+\frac{1}{2}\overline L(q)+r(\varepsilon)
\right)+
O\left(\frac{1}{n}\right).
\end{eqnarray*}
Passing to the limit as $n\to +\infty$, we get \eqref{TH}.
Now it remains to prove claim \eqref{vicini!}.
First of all we estimate the distance between
$x_2$ and $l^{(p+q)/2}(1/n)$ (recall that $x_2=\xi_n(2/2n)=\xi_n(1/n)$).
For $a=(a_1,\dots,a_m)\in\mathbb{R}^m$, we set $X_a:= a_1X_1+\cdots+a_mX_m$.
Using the exponential coordinates,
we can write
$$x_2=\exp\left(\frac{1}{2n}X_p\right)\circ \exp\left(\frac{1}{2n}X_q\right)$$
and
$$
l^{(p+q)/2}\left(\frac{1}{n}\right)= \exp\left(\frac{1}{n}X_{\frac{p+q}{2}}\right).
$$
The Baker-Campbell-Hausdorff formula (see \cite{BLU}) allows us to write:
\begin{eqnarray}\label{BCH}
&&x_2=\exp\left(\frac{1}{2n}X_p\right)\circ \exp\left(\frac{1}{2n}X_q\right)=\sum_{k_1,k_2}\frac{1}{k_1!k_2!}\left(\frac{1}{2n}X_p\right)^{k_1}\left(\frac{1}{2n}X_q\right)^{k_2}.
\end{eqnarray}
Moreover
\begin{eqnarray}\label{serie}
l^{(p+q)/2}\left(\frac{1}{n}\right)= \exp\left(\frac{1}{n}X_{\frac{p+q}{2}}\right)=\sum_{k}\frac{1}{k!}\left(\frac{1}{2n}X_{p+q}\right)^{k}.
\end{eqnarray}
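For the reader's convenience, the two expansions can be compared directly via the Baker-Campbell-Hausdorff formula; keeping only the leading commutator term (a sketch: higher-order terms behave in the same way), and using that $X_p+X_q=X_{p+q}$ by linearity of $a\mapsto X_a$, we have
\begin{equation*}
\exp\left(\frac{1}{2n}X_p\right)\circ \exp\left(\frac{1}{2n}X_q\right)
=\exp\left(\frac{1}{2n}X_{p+q}+\frac{1}{8n^2}[X_p,X_q]+O\left(\frac{1}{n^3}\right)\right).
\end{equation*}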
Hence considering the first three terms of expansions \eqref{BCH} and \eqref{serie}
we obtain that
$\big|x_2-l^{(p+q)/2}(1/n)\big|= O(1/n^2)$. Iterating, we get $\big|x_{2i}-l^{(p+q)/2}(i/n)\big|= i\,O(1/n^2)$; in particular, since $x_{2n}=\xi_n(1)$ and $y^{\frac{p+q}{2}}= l^{(p+q)/2}(1)$, for $i=n$ we obtain the claim \eqref{vicini!}.
\end{proof}
\begin{figure} [htbp]
\begin{center}
\includegraphics[scale=0.4]{convexity_Paola.pdf}
\end{center}
\caption{This picture illustrates the argument of the proof of Proposition \ref{ConvexityTh}.}
\label{Figura-D}
\end{figure}
We can now prove Theorem \ref{mainTH}, i.e. the homogenization result for non-coercive Hamilton-Jacobi problem.
\begin{proof}[Proof of Theorem \ref{mainTH}]
Note that by Lemma \ref{PropertiesLagrangian}, assumptions {\bf (H1)-(H4)} imply {\bf (L1)-(L4)}.
By Theorem~\ref{th1}, the function $u^{\varepsilon}(x,t,\omega)$ on the left-hand side of
\eqref{LIMITFINALE} is the unique viscosity solution of \eqref{ApproxPr}.
We denote by $\bar u$ the right-hand side of \eqref{LIMITFINALE}, i.e.
\begin{equation*}
\bar u(t,x):=\inf_{y\in \mathbb{R}^N} \left[g(y)+ t\inf_{\alpha\in \mathcal{F}^t_{y,x} } \int_{0}^{t} \overline{L}(\alpha(s)) ds
\right].
\end{equation*}
We define the effective Hamiltonian $\overline{H}(q):=\overline{L}^*(q)$.
The convexity and the superlinearity of $\bar L$ (see Proposition \ref{ConvexityTh} and Proposition~\ref{prop5}) imply $\overline{L}(q) =(\overline{L}^*)^{*}(q)=\overline{H}^*(q)$.
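Explicitly, with the usual convention for the convex conjugate, this means
\begin{equation*}
\overline{H}(p)=\sup_{q\in\mathbb{R}^m}\big\{p\cdot q-\overline{L}(q)\big\},
\qquad
\overline{L}(q)=\sup_{p\in\mathbb{R}^m}\big\{p\cdot q-\overline{H}(p)\big\},
\end{equation*}
the second identity being the biconjugation $\overline{L}=(\overline{L}^*)^*$, which holds because $\overline{L}$ is convex and finite (hence continuous) on $\mathbb{R}^m$.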
By the Hopf-Lax formula in \cite[Theorem 3.4]{BCP}, the function~ $\bar u (t,x)$ is the unique viscosity solution of \eqref{LimitProblem}.
The convergence easily follows from \eqref{LIMITFINALE}.
\end{proof}
\section{Appendix}
In this appendix we prove Theorem~\ref{th1} on the well-posedness of problem \eqref{ApproxPr}.
\begin{proposition}\label{UC}
Under the assumptions of Theorem \ref{th1}, let $u^\varepsilon$ be the function defined in \eqref{rapprepsilon}.
Then $u^{\varepsilon}$ is uniformly continuous in $[0,T]\times \mathbb{R}^N$.
\end{proposition}
\begin{proof}
We want to prove that for any $\overline \eta>0$ there exists $\overline \delta>0$ such that
\begin{equation}\label{UCue}
|u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(s,y,\omega)|<\overline \eta,\qquad\textrm{if } |t-s|+\|-y\circ x\|_{CC}<\overline\delta.
\end{equation}
{\em Step 1.} We claim that for any $\eta >0$ there exists $\delta>0$ such that
\begin{equation}
\label{Step1_Appendix}
|u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(0,y,\omega)|<\eta + m(d_{CC}(x,y)),\ \forall t\in[0,\delta], \forall x,y\in\mathbb{R}^N.
\end{equation}
Indeed, by definition of $u^{\varepsilon}$ and {\bf (L2)} we have
\begin{equation*}
u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(0,x,\omega)\leq L^{\varepsilon}(x,x,t,\omega)\leq \int_0^t L(\delta_{1/\varepsilon}(x),0,\omega)ds\leq C_1t.
\end{equation*}
On the other hand, by the assumption on $g$, for any $\overline \eta>0$ there exists $\overline y\in\mathbb{R}^N$ such that
\begin{equation*}
u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(0,x,\omega)
\geq -m(d_{CC}(\overline y,x))+L^{\varepsilon}(x,\overline y,t,\omega)-\overline\eta.
\end{equation*}
Moreover, by {\bf (L2)}, for any $\overline\eta$ there exists $\xi\in \mathcal{A}^t_{\overline y,x}$ such that
\begin{eqnarray*}
&&L^{\varepsilon}(x,\overline y,t,\omega)
\geq \int_0^t C_1^{-1}\left(|\alpha^{\xi}(s)|^{\lambda}-1\right)ds-\overline\eta \geq -C_1^{-1}t-\overline\eta.
\end{eqnarray*}
By the last two inequalities and by \eqref{stimau4}, we get
\begin{eqnarray*}
&&u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(0,x,\omega)\geq\\
&&-m\bigg(C_1^{1/\lambda}(\|u^{\varepsilon}\|_{\infty}+\|g\|_{\infty}+C_1^{-1}t+1)^{1/\lambda}t^{1/\lambda-1}\bigg)-C_1^{-1}t-2\overline\eta,
\end{eqnarray*}
where the bound on $\|u^\varepsilon\|_\infty$ is proved in Theorem~\ref{th1}.
Hence, for $t$ sufficiently small and by the assumption on $g$, we get \eqref{Step1_Appendix}.
{\em Step 2.} We claim that, for any $\delta_1 >0$, there exists a modulus of continuity $m_{\delta_1}$ such that
\begin{equation}\label{Step2_Appendix}
|u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(s,x,\omega)|<m_{\delta_1}(|t-s|),\ \forall t,s \geq \delta_1, \forall x\in\mathbb{R}^N.
\end{equation}
Indeed, for any $\eta>0$ there exists $\overline y\in\mathbb{R}^N$ such that
\begin{eqnarray*}
&&u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(s,x,\omega)
\geq L^{\varepsilon}(x,\overline y,t,\omega)-L^{\varepsilon}(x,\overline y,s,\omega)-\eta.
\end{eqnarray*}
The other inequality is similar. Using Lemma \ref{prop1}, we get \eqref{Step2_Appendix}.
{\em Step 3.} We claim that for any $\delta_2 >0$ there exists a modulus of continuity $m_{\delta_2}$ such that
\begin{equation}\label{Step3_Appendix}
|u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(t,y,\omega)|<m_{\delta_2}(\|-y\circ x\|_{CC}),\ \forall t\geq \delta_2, \forall x,y\in\mathbb{R}^N.
\end{equation}
Indeed, arguing as in Step~2, it is enough to prove
$$|L^{\varepsilon}(x,z,t,\omega)-L^{\varepsilon}(y,z,t,\omega)|\leq m_{\delta_2}(\|-y\circ x\|_{CC}).$$
Actually, adding and subtracting $L^{\varepsilon}(y,z,t+\|-y\circ x\|_{CC},\omega)$, and using Lemma~\ref{lemma41}, we can conclude \eqref{Step3_Appendix}.
{\em Step 4.}
W.l.o.g. assume $s\geq t$ and $\delta$ sufficiently small. For $0\leq t\leq s\leq \delta$, from Step 1, we have
\begin{equation*}
|u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(s,y,\omega)|
\leq 2\eta+m_{\delta}(d_{CC}(x,y)).
\end{equation*}
For $s>\delta$ and $|t-s|<\bar \delta$ (with $\bar \delta<\delta/2$), by Step~2 and Step~3, we get
$$|u^{\varepsilon}(t,x,\omega)-u^{\varepsilon}(s,y,\omega)|\leq m_{\bar\delta}(\|-y\circ x\|_{CC}+|t-s|).$$
To conclude it suffices to choose $\bar \delta$ sufficiently small.
\end{proof}
Here we state the following result, which will play a crucial role in the proof of Theorem \ref{th1}.
\begin{lemma}
\label{Step2}
For every $R, T>0$ there exists $\mu=\mu(T,R)>0$ such that for every $(t,x)\in (0,T)\times B_R(0)$ there holds
$$
u^{\varepsilon}(t,x,\omega)=\inf\big\{g(\xi(0))+\int_0^t H^*\left(\delta_{1/\varepsilon}(\xi(s)), \alpha^{\xi}(s),\omega\right)ds\big\}$$
where the infimum is taken over all $\alpha\in \mathcal{F}^t_x$ with $|\alpha^{\xi}|\leq \mu(R,T)$,
and $ \mathcal{F}^t_x$ is the set of
all the $\mathbb{R}^m$-valued measurable functions $\alpha$ such that the corresponding horizontal curve $\xi^{\alpha}$ satisfies $\xi^{\alpha}(t)=x$.
\end{lemma}
\begin{proof}
The proof is the same as that of \cite[Theorem 2.1]{BCP}, using \cite[Theorem 7.4.6]{CS}.
\end{proof}
Let us now prove Theorem \ref{th1}.
\begin{proof}[Theorem \ref{th1}]
We only sketch the proof; for the detailed calculations we refer the reader to \cite[Section 10.3.3]{Evans} and to \cite{BCP}.
First of all we prove that $u^{\varepsilon}$ is a solution of \eqref{ApproxPr}.
We observe that by Lemma \ref{Step2} $u^\varepsilon$ satisfies the following optimality condition: for any $0\leq h\leq t$ we have
$$
u^{\varepsilon}(t,x,\omega)=\inf\left\{\int_{t-h}^t L\left(\delta_{1/\varepsilon}(\xi(s)), \alpha^{\xi}(s),\omega\right)ds+ u^{\varepsilon}(t-h,\xi(t-h),\omega)\right\},$$
where the infimum is over all the $\alpha\in \mathcal{F}^t_x$ with $|\alpha^{\xi}|\leq \mu(R,T)$.\\
From assumption {\bf(L2)}, Proposition \ref{prp71} and \cite[Lemma 10.3.3]{Evans}, we get
\begin{equation*}\label{bnd}
\|u^{\varepsilon}\|_{\infty}\leq C,\qquad
\textrm{for any compact }K\subset \mathbb{R}^N,\quad
\|u^{\varepsilon}\|_{W^{1,\infty}([0,T]\times K)}\leq C_K.
\end{equation*}
Following the same arguments as in \cite[Theorem 2, Section 10.3.3]{Evans} and \cite{BCP}, we get that $u^\varepsilon$ fulfills $u^{\varepsilon}(0,x)=g(x)$ and is a viscosity solution of
$$u_t+\mathcal H(x, Du)=0,$$
where
$\mathcal H(x, p)=\max_{a\in\mathbb{R}^m,\, |a|\leq \mu(R,T)}\{p\cdot \sigma(x)a-L(x,a)\}$.\\
Arguing as in \cite[equation (45) and proof of Theorem 3.2]{BCP} we get that, if $u$ is differentiable, then $\mathcal H(x, Du)=H(x,\sigma Du)$.
Applying this property to a smooth test function, we conclude that
$u^{\varepsilon}$ is a viscosity solution of problem \eqref{ApproxPr}.
The uniqueness of the solution follows from the uniform continuity of $u^\varepsilon$ (see Proposition~\ref{UC}) and applying the result of Biroli \cite[Theorem 4.4]{Bir}.
\end{proof}
Acknowledgments:
The first author was partially supported by the Leverhulme Trust via grant RPG-2013-261 and by EPSRC via grant EP/M028607/1.
The second author was partially supported by the EPSRC Grant ``Random Perturbations of ultra-parabolic PDEs under rescaling''.
The third and fourth authors are members of GNAMPA-INdAM and were partially supported also by the research project of the University of Padova ``Mean-Field Games and Nonlinear PDEs'' and by the Fondazione CaRiPaRo Project ``Nonlinear Partial Differential Equations: Asymptotic Problems and Mean-Field Games''.\\
The authors would also like to thank P.E. Souganidis and R. Monti for the many interesting conversations.
\newcommand\id[1][]{\ensuremath{\mathrm{id}_{#1}}}
\def{\textstyle \bigotimes}{{\textstyle \bigotimes}}
\newcommand\sxleftarrow[1]{\xleftarrow{\smash{#1}}}
\newcommand\sxrightarrow[1]{\xrightarrow{\smash{#1}}}
\newcommand\xto[1]{\xrightarrow{#1}}
\newcommand\sxto[1]{\sxrightarrow{#1}}
\renewcommand{\-}[0]{\nobreakdash-\hspace{0pt}}
\DeclareMathOperator{\Ob}{Ob}
\newcommand\ket[1]{\ensuremath{| #1 \rangle}}
\newcommand\bra[1]{\ensuremath{\langle #1 |}}
\newcommand\C{\ensuremath{\mathbb{C}}}
\newcommand{\inprod}[2]{\ensuremath{\langle #1\hspace{0.5pt}|\hspace{0.5pt}#2 \rangle}}
\newcommand{\ensuremath{\mathop{\downarrow}}}{\ensuremath{\mathop{\downarrow}}}
\newcommand\Hom{\ensuremath{\textrm{Hom}}}
\newcommand\pdag{{\phantom{\dagger}}}
\DeclareMathOperator{\CP}{CPM}
\DeclareMathOperator{\CPstar}{CP^*}
\newcommand{\ensuremath{p}}{\ensuremath{p}}
\newcommand{\ensuremath{i}}{\ensuremath{i}}
\DeclareMathOperator{\Prob}{Prob}
\newcommand{\textit{i.e.}\xspace}{\textit{i.e.}\xspace}
\newcommand{\textit{e.g.}\xspace}{\textit{e.g.}\xspace}
\newcommand\ud{\ensuremath{\mathrm{d}}}
\usepackage{color}
\newcommand{\todo}[1]{{\color{red}#1}}
\def-145{-145}
\def-35{-35}
\def145{145}
\def35{35}
\def0.8{0.8}
\def0.75{0.75}
\ignore{
\usepackage{mathtools}
\tikzstyle{dot}=[inner sep=0.7mm,minimum width=0pt,minimum
height=0pt,fill=black,draw=black,shape=circle]
\tikzstyle{black dot}=[dot,fill=black]
\tikzstyle{white dot}=[dot,fill=white]
\tikzstyle{gray dot}=[dot,fill=gray!40!white]
}
\section{Introduction}
\subsection{Overview}
Pairs of complementary dagger-Frobenius algebras play an important role in the high-level characterization of quantum phenomena~\cite{vicary-tqa, bobross}, as the algebraic content of mutually unbiased bases. In Section~\ref{sec:unitary}, we show that if such a pair is equipped with a self-conjugate comonoid homomorphism onto one of the algebras, a \textit{unitary} map can be constructed that has the same abstract structure as an \textit{oracle} in the theory of quantum algorithms. This gives insight into the logical structure of quantum algorithms and opens up a new avenue for their generalization.
Most known quantum algorithms are constructed using these black-box quantum oracles, whose structure can be depicted graphically in the following way:
\begin{equation}
\label{eq:qoracle}
\begin{aligned}
\begin{tikzpicture}[string,yscale=0.8,yscale=0.8]
\node (dot) [blackdot] at (0,1) {};
\node (f) [morphism, wedge] at (0.7,2) {$f$};
\node (m) [whitedot] at (1.4,3) {};
\draw (0,0.25)
node [below] {$x$}
to (0,1)
to [out=left, in=south] (-0.7,2)
to (-0.7,3.75)
node [above] {$x$};
\draw (0,1)
to [out=right, in=south] (f.south);
\draw (f.north)
to [out=up, in=left] (1.4,3)
to [out=right, in=up] +(0.7,-1)
to (2.1,0.25)
node [below] {$y$};;
\draw (m.center) to +(0,0.75) node [above] {$y \oplus f(x)$};
\end{tikzpicture}
\end{aligned}
\end{equation}
Here we read the diagram from bottom to top, defining a map of type $\mathbb{C}^n\otimes\mathbb{C}^m \to \mathbb{C}^n\otimes\mathbb{C}^m$ that acts as $|x\rangle \otimes |y \rangle\mapsto|x\rangle \otimes |y\oplus f(x)\rangle$ for a group product $\oplus$. Section~\ref{sec:unitary} contains a full abstract description. Oracle-based algorithms include the Deutsch-Jozsa, Grover, and hidden subgroup algorithms. In the Deutsch-Jozsa and Grover algorithms the oracle implements a function \mbox{$f:S\to\{0,1\}$} where $S$ is a finite set. In the hidden subgroup algorithm, the oracle implements a function $f:G \to S$ where $G$ is a finite group and $S$ is a finite set. In \cite{vicary-tqa} it was shown that the unitary oracle described in Section \ref{sec:unitary} characterizes the structure of these well-known algorithms.
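As a quick numerical sanity check (a sketch outside the formalism of the paper; the particular function $f$ below is a hypothetical example), the oracle is a permutation matrix for any $f$, and hence unitary:

```python
import numpy as np

def oracle(f, n, m):
    """Oracle on C^n (x) C^m acting as |x,y> -> |x, (y + f(x)) mod m>."""
    U = np.zeros((n * m, n * m))
    for x in range(n):
        for y in range(m):
            # Column |x,y> is sent to the basis vector |x, y + f(x) mod m>.
            U[x * m + (y + f(x)) % m, x * m + y] = 1.0
    return U

# Hypothetical example: f : Z_4 -> Z_2, f(x) = x mod 2.
U = oracle(lambda x: x % 2, 4, 2)

# For each fixed x, y -> y + f(x) is a bijection, so U is a permutation
# matrix and therefore unitary.
assert np.allclose(U @ U.T, np.eye(8))
```

Unitarity here only reflects the invertibility of $y\mapsto y\oplus f(x)$ for each fixed $x$; the point of Section~\ref{sec:unitary} is to establish the same fact at the abstract level.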
\def\licsscale{0.70}
For these oracles to be physically implementable, they must be \textit{unitary operators}. In this paper we give an abstract proof of unitarity for these operators using categorical algebra. In Section~\ref{sec:algorithm} we apply this result to develop a new quantum algorithm for the identification of group homomorphisms into an abelian group, in a number of queries which is equal to the number of simple factors of the target group. The graphical approach provides a simple proof of correctness of the algorithm, and leads to an algorithm which is more general than existing work in the literature~\cite{hoyer-conjops}.
In Section~\ref{sec:signalflow} we investigate an application to the theory of signal-flow networks~\cite{baezerbele, fong-transfer, sobocinski}. We show that the formalism contains dagger-Frobenius algebras equipped with self-conjugate homomorphisms, and that, as a consequence, the network representing a single resistor is unitary.
\paragraph{Acknowledgements.} We are grateful to John Baez and Pawel Sobocinski for useful discussions about signal-flow networks. Section~4 of this paper has some technical overlap with~\cite{sobocinski} and was prepared independently. We are grateful to the authors for pointing out their work to us in the prepublication phase of this article. Will Zeng acknowledges the support of the Rhodes Trust in funding this work.
\subsection{Frobenius monoids and complementarity}
In this Section we collect some standard results from the literature~\cite{bobross}. We assume some familiarity with the graphical calculus for symmetric monoidal dagger-categories~\cite{selinger}. We use a notation in which morphisms are drawn from bottom-to-top.
\begin{defn}
In a monoidal category, a \textit{comonoid} is a triple \whitecomonoid{A} of an object $A$, a morphism $\tinycomult[whitedot] : A \to A \otimes A$ called the comultiplication, and a morphism $\tinycounit[whitedot] : A \to I$ called the counit, satisfying coassociativity and counitality equations:
\def\frobscale{0.5}
\begin{calign}
\begin{aligned}
\begin{tikzpicture}[thick, scale=0.7, yscale=-1]
\draw (0,0) to [out=up, in=-145] (0.5, 1);
\draw (1,0) to [out=up, in=-35] (0.5,1);
\draw (2,0) to [out=up, in=-35] (1.25,2);
\draw (0.5,1) to [out=up, in=-145] (1.25, 2);
\draw (1.25,2) to (1.25, 3);
\node [whitedot] at (0.5,1) {};
\node [whitedot] at (1.25,2) {};
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[xscale=-1, thick, scale=0.7, yscale=-1]
\draw (0,0) to [out=up, in=-145] (0.5, 1);
\draw (1,0) to [out=up, in=-35] (0.5,1);
\draw (2,0) to [out=up, in=-35] (1.25,2);
\draw (0.5,1) to [out=up, in=-145] (1.25, 2);
\draw (1.25,2) to (1.25, 3);
\node [whitedot] at (0.5,1) {};
\node [whitedot] at (1.25,2) {};
\end{tikzpicture}
\end{aligned}
&\qquad\qquad\qquad
\begin{aligned}
\begin{tikzpicture}[thick, scale=0.7, yscale=-1]
\draw (0,-1.5) to (0,-0.5) to [out=up, in=-145] (0.75,0.5) node [whitedot] {} to (0.75,1.5);
\draw (1.5,-0.5) node [whitedot] {} to [out=up, in=-35] (0.75,0.5);
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[thick, scale=0.7, yscale=-1]
\draw (0,0) to (0,3);
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[thick, xscale=-1, scale=0.7, yscale=-1]
\draw (0,-1.5) to (0,-0.5) to [out=up, in=-145] (0.75,0.5) node [whitedot] {} to (0.75,1.5);
\draw (1.5,-0.5) node [whitedot] {} to [out=up, in=-35] (0.75,0.5);
\end{tikzpicture}
\end{aligned}
\end{calign}
\end{defn}
\noindent
In a monoidal dagger-category, we can apply the dagger operation to these structures to obtain the associated monoid. We can then ask for the comonoid and monoid to interact in various ways.
\begin{defn}
In a monoidal dagger-category, a comonoid \whitecomonoid{A} is \emph{dagger-Frobenius} when the following equation holds:
\begin{equation}\label{eq:frobenius}
\begin{aligned}
\begin{tikzpicture}[scale=0.7, thick]
\draw (0,0) to (0,1) to [out=up, in=-145] (0.5,2) node [whitedot] {} to (0.5,3);
\draw (0.5,2) to [out=-35, in=145] (1.5,1) node [whitedot] {};
\draw (1.5,0) to (1.5,1) to [out=35, in=down] (2,2) to (2,3);
\end{tikzpicture}
\end{aligned}
\quad = \quad
\begin{aligned}
\begin{tikzpicture}[scale=0.7, thick, xscale=-1]
\draw (0,0) to (0,1) to [out=up, in=-145] (0.5,2) node [whitedot] {} to (0.5,3);
\draw (0.5,2) to [out=-35, in=145] (1.5,1) node [whitedot] {};
\draw (1.5,0) to (1.5,1) to [out=35, in=down] (2,2) to (2,3);
\end{tikzpicture}
\end{aligned}
\end{equation}
\end{defn}
\begin{defn}
In a symmetric monoidal dagger-category, a \textit{classical structure} is a commutative dagger-Frobenius comonoid \whitecomonoid{A} satisfying the \textit{specialness} condition:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[thick, scale=0.7]
\draw (0,0.25) to (0,1) node [whitedot] {} to [out=145, in=down] (-0.5,1.5) to [out=up, in=-145] (0,2) node [whitedot] {} to (0,2.75);
\draw (0,1) to [out=35, in=down] (0.5,1.5) to [out=up, in=-35] (0,2);
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, thick, yscale=0.7]
\draw (-0.5,0) to (-0.5,3.2);
\end{tikzpicture}
\end{aligned}
\end{equation}
\end{defn}
\begin{defn}
In a symmetric monoidal dagger-category, a dagger-Frobenius comonoid is \emph{symmetric} when the following condition holds:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}
\draw (0,0) node [whitedot] {} to (0,0.5) node [whitedot] {} to [out=145, in=down] (-0.5,1.0) to [out=up, in=down] (0.5,2);
\draw (0,0.5) to [out=35, in=down] (0.5,1) to [out=up, in=down] (-0.5,2);
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}
\draw (0,0) node [whitedot] {} to (0,0.5) node [whitedot] {} to [out=145, in=down] (-0.5,1.0) to [out=up, in=down] (-0.5,2);
\draw (0,0.5) to [out=35, in=down] (0.5,1) to [out=up, in=down] (0.5,2);
\end{tikzpicture}
\end{aligned}
\end{equation}
\end{defn}
\begin{defn}
In a symmetric monoidal dagger-category, the \emph{dimension} $\ud(A)$ of an object $A$ equipped with a dagger-Frobenius comonoid \whitecomonoid{A} is given by the following composite:
\begin{equation}\label{eq:dim}
\ud(A)
\quad := \quad
\begin{aligned}\begin{tikzpicture}[yscale=0.8,xscale=-1]
\node (0) at (0,0) {};
\node[whitedot] (1) at (0,0.66) {};
\node (2) at (-0.5,1.2) {};
\node (3) at (0.5,1.2) {};
\node (4) at (-0.5,2.0) {};
\node (5) at (0.5,2.0) {};
\draw[string] (0.center) to (1.center);
\draw[string, out=180, in=270] (1.center) to (2.center);
\draw[string, out=0, in=270] (1.center) to (3.center);
\draw[string, out=90, in=270] (2.center) to (5.center);
\draw[string, out=90, in=270] (3.center) to (4.center);;
\node[whitedot] (6) at (0,2.54) {};
\node (7) at (0,3.2) {};
\draw[string] (0.center) node [whitedot] {} to (1);
\draw[string] (6.center) to (7.center) node [whitedot] {$$};
\draw[string, in=left, out=up] (4.center) to node [auto] {$$} (6.center);
\draw[string, in=right, out=up] (5.center) to node [auto, swap] {$$} (6.center);
\end{tikzpicture}\end{aligned}
\end{equation}
\end{defn}
\noindent
When the algebra is commutative and special, equation~\eqref{eq:dim} can be simplified to the composition of the unit and counit.
\begin{defn}[Complementarity]
\label{def:complementarity}
In a symmetric monoidal dagger-category, two special symmetric dagger-Frobenius comonoids \whitecomonoid{A} and \graycomonoid{A} are \emph{complementary} when the following equation holds:
\begin{equation}
\label{eq:complementarity}
\ud(A) \,\,
\begin{pic}[string, yscale=0.8, xscale=0.75]
\draw (-0.5,0.25) to (-0.5,1) node [graydot] {} to [out=left, in=right] (-1,2) node [graydot] {} to [out=left, in=right] (-1.5,1.5) node [whitedot] {} to [out=left, in=down] (-2,2) to [out=up, in=left] (-0.75,3) node (a) [whitedot] {} to [out=right, in=right] (-0.5,1);
\draw (a.center) to +(0,0.75);
\end{pic}
\quad=\quad\,\,\,
\begin{pic}[string, yscale=0.8, xscale=0.75]
\draw (0,0.25) to (0,1) node [graydot] {};
\draw (0,3) node [whitedot] {} to (0,3.75);
\end{pic}
\end{equation}
\end{defn}
\noindent
Note that this is not a symmetric condition between the gray and white structures. However, thanks to the symmetric property of the dagger-Frobenius algebras, it is equivalent to the following alternative condition:
\begin{equation}
\ud(A) \,\,
\begin{pic}[string, yscale=0.8, xscale=0.75, yscale=-1]
\draw (-0.5,0.25) to (-0.5,1) node [whitedot] {} to [out=left, in=right] (-1,2) node [whitedot] {} to [out=left, in=right] (-1.5,1.5) node [graydot] {} to [out=left, in=down] (-2,2) to [out=up, in=left] (-0.75,3) node (a) [graydot] {} to [out=right, in=right] (-0.5,1);
\draw (a.center) to +(0,0.75);
\end{pic}
\quad=\quad\,\,\,
\begin{pic}[string, yscale=0.8, xscale=0.75]
\draw (0,0.25) to (0,1) node [graydot] {};
\draw (0,3) node [whitedot] {} to (0,3.75);
\end{pic}
\end{equation}
The daggers of these equations give rise to two further equivalent conditions.
\begin{defn}
In a monoidal dagger-category, a comonoid homomorphism \mbox{$f:\blackcomonoid{A} \to \graycomonoid{B}$} between dagger-Frobenius comonoids is \emph{self-conjugate} when the following property holds:
\begin{equation}
\label{eq:comonoidhomomorphismselfconjugate}
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75, thick]
\node [morphism, wedge] (f) at (2,1) {$f$};
\draw (0,-1) to [out=up, in=left, in looseness=0.9] (1,2) node [graydot] {} to (1,2.5) node [graydot] {};
\draw (1,2) to [out=right, in=up] (f.north);
\draw (f.south) to [out=down, in=left] (3,0) node [blackdot] {} to [out=right, in=down, out looseness=0.9] (4,3);
\draw (3,0) to (3,-0.5) node [blackdot] {};
\node [graydot] at (1,2) {};
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[string]
\node (f) at (0,0) [morphism, wedge, hflip] {$f$};
\draw (0,-1.5) to (f.south);
\draw (f.north) to (0,1.5);
\end{tikzpicture}
\end{aligned}
\end{equation}
\end{defn}
\begin{lemma}
\label{lem:comonoidhomomorphismselfconjugate}
In {\bf Hilb}, comonoid homomorphisms $f:\blackcomonoid{A} \to \graycomonoid{B}$ of classical structures are self-conjugate.
\end{lemma}
\begin{proof}
Recall that comonoid homomorphisms between classical structures in {\bf Hilb} are exactly classical functions between the copyable points~\cite{OrthBasis:2008}. The linear maps on either side of~\eqref{eq:comonoidhomomorphismselfconjugate} will be the same if and only if their matrix elements are the same, obtained by composing with $\ket i$ at the bottom and $\bra j$ at the top. On the left-hand side, this gives the following result:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75, thick]
\node [morphism, wedge] (f) at (2,1) {$f$};
\draw (0,0) node [state] {$i$} to [out=up, in=left, in looseness=0.9] (1,2) node [graydot] {} to (1,2.5) node [graydot] {};
\draw (1,2) to [out=right, in=up] (f.north);
\draw (f.south)
to [out=down, in=left] (3,0)
node [blackdot] {}
to [out=right, in=down, out looseness=0.9] (4,2)
node [state, hflip] {$j$};
\draw (3,0) to (3,-0.5) node [blackdot] {};
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[string]
\node (f) [morphism, wedge] at (0,0) {$f$};
\draw (0,-0.75) node [state] {$j$} to (f.south);
\draw (0,0.75) node [state, hflip] {$i$} to (f.north);
\end{tikzpicture}
\end{aligned}
\quad=\quad
\left\{
\begin{array}{ll}
1 & \text{ if } i=f(j), \\
0 & \text{ if } i \neq f(j).
\end{array}
\right.
\end{equation}
On the right we can do this calculation:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[string, yscale=0.8, xscale=0.75]
\node (f) [morphism, wedge, hflip] at (0,0) {$f$};
\draw (0,-0.75) node [state] {$i$} to (f.south);
\draw (0,0.75) node [state, hflip] {$j$} to (f.north);
\end{tikzpicture}
\end{aligned}
\quad=\quad
\left(
\begin{aligned}
\begin{tikzpicture}[string]
\node (f) [morphism, wedge] at (0,0) {$f$};
\draw (0,-0.75) node [state] {$j$} to (f.south);
\draw (0,0.75) node [state, hflip] {$i$} to (f.north);
\end{tikzpicture}
\end{aligned}
\right) ^\dag
\quad=\quad
\left\{
\begin{array}{ll}
1 & \text{ if } i=f(j) \\
0 & \text{ if } i \neq f(j)
\end{array}
\right\}^\dag
\quad = \quad
\left\{
\begin{array}{ll}
1 & \text{ if } i=f(j), \\
0 & \text{ if } i \neq f(j).
\end{array}
\right.
\end{equation}
This is the same result as for the left-hand side, and so expression~\eqref{eq:comonoidhomomorphismselfconjugate} holds.
\end{proof}
\section{Unitary oracles}
\label{sec:unitary}
\subsection{Complementarity via unitarity}
A pair of symmetric dagger-Frobenius algebras can be used to build a linear map in the following way:
\begin{equation}
\label{eq:generalizedcnot}
\sqrt{\ud(A)}\,\,
\begin{aligned}
\begin{tikzpicture}[yscale=0.8,string]
\node (b) [graydot] at (0,0) {};
\node (w) [whitedot] at (1,1) {};
\draw (-0.75,2) to [out=down, in=left] (b.center);
\draw (b.center) to [out=right, in=left] (w.center);
\draw (w.center) to (1,2);
\draw (b.center) to (0,-1);
\draw (w.center) to [out=right, in=up] (1.75,-1);
\end{tikzpicture}
\end{aligned}
\end{equation}
Here we have assumed that we operate in a category where square roots of scalars exist. The two algebras are complementary exactly when this composite is unitary, as we show in the following theorem.
\begin{theorem}[Complementarity via a unitary]
\label{thm:complementarityunitary}
In a dagger symmetric monoidal category, two symmetric dagger-Frobenius algebras are complementary if and only if the composite~\eqref{eq:generalizedcnot} is unitary.
\end{theorem}
\begin{proof}
Composing~\eqref{eq:generalizedcnot} with its adjoint in one order, we obtain the following:
\tikzset{every picture/.style={scale=0.95,yscale=0.8}}
\begin{equation}
\label{eq:generalizedcnotunitaryproof}
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75,string]
\node at (-1.5,-2.6) {$\ud (A)$};
\node (A) at (0,0) {};
\node (B) at (1.75,0) {};
\node (b1) [graydot] at (0,-1) {};
\node (w1) [whitedot] at (1,-2) {};
\node (w2) [whitedot] at (1,-3) {};
\node (b2) [graydot] at (0,-4) {};
\node (C) at (0,-5) {};
\node (D) at (1.75,-5) {};
\draw (A.center) to (b1.center);
\draw (b1.center) to [out=right, in=left] (w1.center);
\draw (w1.center) to (w2.center);
\draw (w2.center) to [out=left, in=right] (b2.center);
\draw (b2.center) to (C.center);
\draw (w2.center) to [out=right, in=up] (D.center);
\draw (w1.center) to [out=right, in=down] (B.center);
\draw (b1.center) to [out=left, in=left] (b2.center);
\end{tikzpicture}
\end{aligned}
\,\,=
\,\,
\hspace{-3pt}
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75,string]
\node at (-2.1,-2.6) {$\ud (A)$};
\node (A) at (-1.75,0) {};
\node (B) at (0.5,0) {};
\node (w1) [whitedot] at (0.5,-1.0) {};
\node (w2) [whitedot] at (1.25,-3) {};
\node (w3) [whitedot] at (0,-2) {};
\node (b1) [graydot] at (-1,-2) {};
\node (b2) [graydot] at (0,-4) {};
\node (b3) [graydot] at (-0.5,-1) {};
\node (C) at (0,-5) {};
\node (D) at (2,-5) {};
\draw (A.center) to [out=down, in=left] (b1.center);
\draw (w1.center) to [out=right, in=up] (w2.center);
\draw (w2.center) to [out=left, in=right] (b2.center);
\draw (b2.center) to (C.center);
\draw (w2.center) to [out=right, in=up] (D.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=left] (b2.center);
\draw (b1.center) to [out=right, in=left] (b3.center);
\draw (b3.center) to [out=right, in=left] (w3.center);
\draw (w3.center) to [out=right, in=left] (w1.center);
\end{tikzpicture}
\end{aligned}
\,\,= \,\,
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75,string,yscale=0.833]
\node at (-1.6,-3.1) {$\ud (A)$};
\node (A) at (-1,0) {};
\node (B) at (1.5,0) {};
\node (w1) [whitedot] at (1.5,-1.0) {};
\node (w2) [whitedot] at (1.0,-2) {};
\node (w3) [whitedot] at (0.5,-3) {};
\node (b1) [graydot] at (0.5,-4) {};
\node (b2) [graydot] at (0,-5) {};
\node (b3) [graydot] at (0,-2) {};
\node (C) at (0,-6) {};
\node (D) at (2.5,-6) {};
\draw (A.center) to [out=down, in=left, in looseness=0.6] (b2.center);
\draw (w1.center) to [out=left, in=up] (w2.center);
\draw (w2.center) to [out=right, in=right] (b1.center);
\draw (b2.center) to (C.center);
\draw (w1.center) to [out=right, in=up, out looseness=0.6] (D.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=right] (b2.center);
\draw (b1.center) to [out=left, in=left] (b3.center);
\draw (b3.center) to [out=right, in=left] (w3.center);
\draw (w2.center) to [out=left, in=right] (w3.center);
\end{tikzpicture}
\end{aligned}
\end{equation}
If the complementarity condition~\eqref{eq:complementarity} holds, then this is clearly the identity on \mbox{$A \otimes A$}. The other composite can be shown to be the identity in a similar way, and so~\eqref{eq:generalizedcnot} is unitary.
Conversely, suppose~\eqref{eq:generalizedcnot} is unitary. Then the final expression of~\eqref{eq:generalizedcnotunitaryproof} certainly equals the identity on $A \otimes A$:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75,string,yscale=0.833]
\draw (0.5,-5) to (0.5,1);
\draw (-1.5,-5) to (-1.5,1);
\end{tikzpicture}
\end{aligned}
\quad=\quad\hspace{-8pt}
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75,string,yscale=0.833]
\node at (-1.6,-3.1) {$\ud (A)$};
\node (A) at (-1,0) {};
\node (B) at (1.5,0) {};
\node (w1) [whitedot] at (1.5,-1.0) {};
\node (w2) [whitedot] at (1.0,-2) {};
\node (w3) [whitedot] at (0.5,-3) {};
\node (b1) [graydot] at (0.5,-4) {};
\node (b2) [graydot] at (0,-5) {};
\node (b3) [graydot] at (0,-2) {};
\node (C) at (0,-6) {};
\node (D) at (2.5,-6) {};
\draw (A.center) to [out=down, in=left, in looseness=0.6] (b2.center);
\draw (w1.center) to [out=left, in=up] (w2.center);
\draw (w2.center) to [out=right, in=right] (b1.center);
\draw (b2.center) to (C.center);
\draw (w1.center) to [out=right, in=up, out looseness=0.6] (D.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=right] (b2.center);
\draw (b1.center) to [out=left, in=left] (b3.center);
\draw (b3.center) to [out=right, in=left] (w3.center);
\draw (w2.center) to [out=left, in=right] (w3.center);
\end{tikzpicture}
\end{aligned}
\end{equation}
Composing with the black counit at the top-left and the white unit at the bottom-right then gives back the complementarity condition~\eqref{eq:complementarity}, as required:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75,string,yscale=0.833]
\node (w) [whitedot] at (-0.5,-4) {};
\node (b) [graydot] at (-1.5,0) {};
\draw (-0.5,-4) to (-0.5,1);
\draw (-1.5,-5) to (-1.5,0);
\end{tikzpicture}
\end{aligned}
\,\,\,\,=
\begin{aligned}
\begin{tikzpicture}[yscale=0.8, xscale=0.75,string,yscale=0.833]
\node at (-1.6,-3.1) {$\ud (A)$};
\node (w) [graydot] at (-1,0) {};
\node (b) [whitedot] at (2.5,-6) {};
\node (A) at (-1,0) {};
\node (B) at (1.5,0) {};
\node (w1) [whitedot] at (1.5,-1.0) {};
\node (w2) [whitedot] at (1.0,-2) {};
\node (w3) [whitedot] at (0.5,-3) {};
\node (b1) [graydot] at (0.5,-4) {};
\node (b2) [graydot] at (0,-5) {};
\node (b3) [graydot] at (0,-2) {};
\node (C) at (0,-6) {};
\node (D) at (2.5,-6) {};
\draw (A.center) to [out=down, in=left, in looseness=0.6] (b2.center);
\draw (w1.center) to [out=left, in=up] (w2.center);
\draw (w2.center) to [out=right, in=right] (b1.center);
\draw (b2.center) to (C.center);
\draw (w1.center) to [out=right, in=up, out looseness=0.6] (D.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=right] (b2.center);
\draw (b1.center) to [out=left, in=left] (b3.center);
\draw (b3.center) to [out=right, in=left] (w3.center);
\draw (w2.center) to [out=left, in=right] (w3.center);
\end{tikzpicture}
\end{aligned}
\,\,= \,\, \ud(A)
\begin{pic}[string, yscale=1, xscale=0.75]
\draw (-0.5,0.25) to (-0.5,1) node [graydot] {} to [out=left, in=right] (-1,2) node [graydot] {} to [out=left, in=right] (-1.5,1.5) node [whitedot] {} to [out=left, in=down] (-2,2) to [out=up, in=left] (-0.75,3) node (a) [whitedot] {} to [out=right, in=right] (-0.5,1);
\draw (a.center) to +(0,0.75);
\end{pic}
\end{equation}
This completes the proof.
\end{proof}
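For intuition, this theorem can be checked numerically in \cat{FHilb}. Taking both algebras on $\mathbb{C}^2$, with the gray algebra copying the computational basis and the white algebra given by the group algebra of $\mathbb{Z}_2$, the composite~\eqref{eq:generalizedcnot} is the CNOT gate. The following sketch (assuming NumPy; the scalar factor is folded into the unnormalized copy and multiplication maps, so the exact normalization convention is not reproduced here) verifies unitarity:

```python
import numpy as np

d = 2  # dimension of A

# Gray algebra: copying in the computational basis, delta|i> = |i,i>.
copy = np.zeros((d * d, d))
for i in range(d):
    copy[i * d + i, i] = 1

# White algebra: group multiplication of Z_2, m|i,j> = |i + j mod 2>.
mult = np.zeros((d, d * d))
for i in range(d):
    for j in range(d):
        mult[(i + j) % d, i * d + j] = 1

# Composite (1 (x) white-mult) o (gray-copy (x) 1), as in the diagram.
U = np.kron(np.eye(d), mult) @ np.kron(copy, np.eye(d))

# The result is the CNOT gate, which is unitary.
assert np.allclose(U, np.eye(4)[[0, 1, 3, 2]])
assert np.allclose(U @ U.conj().T, np.eye(d * d))
```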
\subsection{Families of unitary oracles}
This pair of complementary observables automatically gives rise to a much larger family of unitaries, one for each self-conjugate comonoid homomorphism into one of the classical structures in the pair. See equation~\eqref{eq:comonoidhomomorphismselfconjugate} for the definition of the self-conjugacy property. Lemma~\ref{lem:comonoidhomomorphismselfconjugate} demonstrated that in \cat{FHilb}, every comonoid homomorphism of classical structures is self-conjugate.
\begin{defn}[Oracle]
\label{oracle}
In a dagger symmetric monoidal category, given a dagger-Frobenius comonoid $\blackcomonoid{A}$, a pair of complementary symmetric dagger-Frobenius comonoids \graycomonoid{B} and \whitecomonoid{B}, and a self-conjugate comonoid homomorphism $f : \blackcomonoid{A} \to \graycomonoid{B}$, the \emph{oracle} is defined to be the following endomorphism of $A \otimes B$:
\begin{equation}
\label{eq:oracle}
\sqrt{\ud(A)}
\begin{aligned}
\begin{tikzpicture}[string,yscale=0.8,yscale=0.8]
\node (dot) [blackdot] at (0,1) {};
\node (f) [morphism, wedge] at (0.7,2) {$f$};
\node (m) [whitedot] at (1.4,3) {};
\draw (0,0.25)
node [below] {$A$}
to (0,1)
to [out=left, in=south] (-0.7,2)
to (-0.7,3.75)
node [above] {$A$};
\draw (0,1)
to [out=right, in=south] (f.south);
\draw (f.north)
to [out=up, in=left] (1.4,3)
to [out=right, in=up] +(0.7,-1)
to (2.1,0.25)
node [below] {$B$};
\draw (m.center) to +(0,0.75) node [above] {$B$};
\end{tikzpicture}
\end{aligned}
\end{equation}
\end{defn}
\begin{theorem}
\label{thm:familyofunitaries}
Oracles are unitary.
\end{theorem}
\begin{proof}
To demonstrate that the oracle~\eqref{eq:oracle} is unitary, we must compose it with its adjoint on both sides and show that we get the identity in each case. In one case, we obtain the following, making use of the Frobenius laws, self-conjugacy of $f$, associativity and coassociativity, the fact that $f$ preserves comultiplication, the complementarity condition, the fact that $f$ preserves the counit, and the unit and counit laws:
\tikzset{every picture/.style={scale=0.9,yscale=0.9}}
\begin{align*}
&\ud(A)
\begin{aligned}
\begin{tikzpicture}[yscale=0.6,string,xscale=1]
\node (A) at (0,2) {};
\node (B) at (1.75,0) {};
\node (b1) [blackdot] at (0,1) {};
\node (w1) [whitedot] at (1,-1) {};
\node (w2) [whitedot] at (1,-2) {};
\node (b2) [blackdot] at (0,-4) {};
\node (C) at (0,-5) {};
\node (D) at (1.75,-5) {};
\node (f1) [morphism, wedge] at (0.5,-3) {$f$};
\node (f2) [morphism, wedge, hflip] at (0.5,0) {$f$};
\draw (A.center) to (b1.center);
\draw (b1.center) to [out=right, in=up] (f2.north);
\draw (f2.south) to [out=down, in=left] (w1.center);
\draw (w1.center) to (w2.center);
\draw (w2.center) to [out=left, in=up] (f1.north);
\draw (b2.center) to (C.center);
\draw (w2.center) to [out=right, in=up] (1.5,-3 |- f1.north) to (1.5,-5);
\draw (w1.center) to [out=right, in=down] (1.5,1 |- f2.south) to (1.5,2);
\draw (b1.center) to [out=left, in=up] (-0.5,0 |- f2.north) to (-0.5,0 |- f1.south) to [out=down, in=left] (b2.center);
\draw (f1.south) to [out=down, in=right] (b2.center);
\end{tikzpicture}
\end{aligned}
=\,\,
\ud(A)
\hspace{-3pt}
\begin{aligned}
\begin{tikzpicture}[yscale=0.7,string,xscale=1.2]
\node (f1) [morphism, wedge] at (0.5,-4) {$f$};
\node (f2) [morphism, wedge, hflip] at (-0.5,-2) {$f$};
\node (A) at (-2,0) {};
\node (B) at (0.5,0) {};
\node (w1) [whitedot] at (0.5,-2.0) {};
\node (w2) [whitedot] at (1.0,-3) {};
\node (w3) [whitedot] at (-0.125,-3) {};
\node (b1) [blackdot] at (-1.5,-3) {};
\node (b2) [blackdot] at (0,-5) {};
\node (b3) [blackdot] at (-0.75,-1) {};
\node (C) at (0,-6) {};
\node (D) at (1.5,-6) {};
\draw (A.center) to +(0,-1.5) to [out=down, in=left] (b1.center);
\draw (w1.center) to [out=right, in=up] (w2.center);
\draw (w2.center) to [out=left, in=up] (f1.north);
\draw (f1.south) to [out=down, in=right] (b2.center);
\draw (b2.center) to (C.center);
\draw (w2.center) to [out=right, in=up] (D.center |- f1.north) to (D.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=left] (b2.center);
\draw (b1.center) to [out=right, in=left] (b3.center);
\draw (f2.south) to [out=down, in=left] (w3.center);
\draw (w3.center) to [out=right, in=left] (w1.center);
\draw (b3.center) to [out=right, in=up] (f2.north);
\end{tikzpicture}
\end{aligned}
=\,\,
\ud(A)
\hspace{-3pt}
\begin{aligned}
\begin{tikzpicture}[yscale=0.7,string,xscale=1.2]
\node (f1) [morphism, wedge] at (0.5,-4) {$f$};
\node (f2) [morphism, wedge] at (-1,-2) {$f$};
\node (A) at (-2,0) {};
\node (B) at (0.68,0) {};
\node (w1) [whitedot] at (0.68,-2.0) {};
\node (w2) [whitedot] at (1.0,-3) {};
\node (w3) [whitedot] at (0.25,-3) {};
\node (b1) [blackdot] at (-1.5,-3) {};
\node (b2) [blackdot] at (0,-5) {};
\node (b3) [graydot] at (-0.5,-1) {};
\node (C) at (0,-6) {};
\node (D) at (1.5,-6) {};
\draw (A.center) to +(0,-1.5) to [out=down, in=left] (b1.center);
\draw (w1.center) to [out=right, in=up] (w2.center);
\draw (w2.center) to [out=left, in=up] (f1.north);
\draw (f1.south) to [out=down, in=right] (b2.center);
\draw (b2.center) to (C.center);
\draw (w2.center) to [out=right, in=up] (D.center |- f1.north) to (D.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=left] (b2.center);
\draw (w3.center) to [out=left, in=right] (b3.center);
\draw (f2.south) to [out=down, in=right] (b1.center);
\draw (w3.center) to [out=right, in=left] (w1.center);
\draw (b3.center) to [out=left, in=up] (f2.north);
\end{tikzpicture}
\end{aligned}
\\
&
\hspace{2cm}
= \,\,\ud(A)
\begin{aligned}
\begin{tikzpicture}[yscale=0.8,string,xscale=1.2]
\node (f1) [morphism, wedge] at (-0.25,-3) {$f$};
\node (f2) [morphism, wedge] at (1.25,-3) {$f$};
\node (A) at (-1,0) {};
\node (B) at (1.5,0) {};
\node (w1) [whitedot] at (1.5,-1.0) {};
\node (w2) [whitedot] at (1.0,-2) {};
\node (w3) [whitedot] at (0.5,-2.5) {};
\node (b1) [blackdot] at (0.5,-4) {};
\node (b2) [blackdot] at (0,-5) {};
\node (b3) [graydot] at (0,-2) {};
\node (C) at (0,-6) {};
\node (D) at (2.25,-6) {};
\draw (A.center) to [out=down, in=left, in looseness=0.59] (b2.center);
\draw (w1.center) to [out=left, in=up] (w2.center);
\draw (w2.center) to [out=right, in=up] (f2.north);
\draw (b2.center) to (C.center);
\draw (w1.center) to [out=right, in=up, out looseness=0.45] (D.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=right] (b2.center);
\draw (b1.center) to [out=left, in=down] (f1.south);
\draw (b3.center) to [out=right, in=left] (w3.center);
\draw (w2.center) to [out=left, in=right] (w3.center);
\draw (f1.north) to [out=up, in=left] (b3.center);
\draw (f2.south) to [out=down, in=right] (b1.center);
\end{tikzpicture}
\end{aligned}
= \,\,\ud(A)
\begin{aligned}
\begin{tikzpicture}[yscale=0.8,string,xscale=1.2]
\node (f) [morphism, wedge] at (0.5,-4) {$f$};
\node (A) at (-0.5,0) {};
\node (B) at (1.5,0) {};
\node (w1) [whitedot] at (1.5,-1.0) {};
\node (w2) [whitedot] at (1.0,-2) {};
\node (w3) [whitedot] at (0.5,-2.5) {};
\node (b1) [graydot] at (0.5,-3.25) {};
\node (b2) [blackdot] at (0,-5) {};
\node (b3) [graydot] at (0,-2) {};
\node (C) at (0,-6) {};
\node (D) at (2,-6) {};
\draw (A.center) to +(0,-4) to [out=down, in=left] (b2.center);
\draw (w1.center) to [out=left, in=up] (w2.center);
\draw (w2.center) to [out=right, in=right] (b1.center);
\draw (b2.center) to (C.center);
\draw (D.center) to +(0,4) to [out=up, in=right] (w1.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=up] (f.north);
\draw (f.south) to [out=down, in=right] (b2.center);
\draw (b1.center) to [out=left, in=left] (b3.center);
\draw (b3.center) to [out=right, in=left] (w3.center);
\draw (w2.center) to [out=left, in=right] (w3.center);
\end{tikzpicture}
\end{aligned}
\\
&
\hspace{2cm}
=\hspace{5pt}
\begin{aligned}
\begin{tikzpicture}[yscale=0.7,string,xscale=1]
\node (f) [morphism, wedge] at (0.5,-4) {$f$};
\node (A) at (-0.5,0) {};
\node (B) at (1.5,0) {};
\node (w1) [whitedot] at (1.5,-1.0) {};
\node (w2) [whitedot] at (1.0,-2) {};
\node (b1) [graydot] at (0.5,-3) {};
\node (b2) [blackdot] at (0,-5) {};
\node (C) at (0,-6) {};
\node (D) at (2,-6) {};
\draw (A.center) to +(0,-4) to [out=down, in=left] (b2.center);
\draw (w1.center) to [out=left, in=up] (w2.center);
\draw (b2.center) to (C.center);
\draw (D.center) to +(0,4) to [out=up, in=right] (w1.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=up] (f.north);
\draw (f.south) to [out=down, in=right] (b2.center);
\end{tikzpicture}
\end{aligned}
\hspace{5pt}=\hspace{5pt}
\begin{aligned}
\begin{tikzpicture}[yscale=0.7,string,xscale=1]
\node (A) at (-0.5,0) {};
\node (B) at (1.5,0) {};
\node (w1) [whitedot] at (1.5,-1.0) {};
\node (w2) [whitedot] at (1.0,-2) {};
\node (b1) [blackdot] at (0.5,-4) {};
\node (b2) [blackdot] at (0,-5) {};
\node (C) at (0,-6) {};
\node (D) at (2,-6) {};
\draw (A.center) to +(0,-4) to [out=down, in=left] (b2.center);
\draw (w1.center) to [out=left, in=up] (w2.center);
\draw (b2.center) to (C.center);
\draw (D.center) to +(0,4) to [out=up, in=right] (w1.center);
\draw (w1.center) to [out=up, in=down] (B.center);
\draw (b1.center) to [out=down, in=right] (b2.center);
\end{tikzpicture}
\end{aligned}
\hspace{5pt}=\hspace{5pt}
\begin{aligned}
\begin{tikzpicture}[yscale=0.7,string,xscale=1]
\draw (0,0) to (0,6);
\draw (1.5,0) to (1.5,6);
\end{tikzpicture}
\end{aligned}
\end{align*}
A similar argument shows that the other composite is also the identity.
\end{proof}
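Concretely, in \cat{FHilb} with $A = \mathbb{C}[G]$ and $B = \mathbb{C}[H]$ carrying the basis-copying and group-multiplication structures, the oracle acts on basis states as $|x\rangle|y\rangle \mapsto |x\rangle|f(x)\cdot y\rangle$, recovering the familiar quantum oracle. A small numerical sketch (illustrative choice of groups, assuming NumPy):

```python
import numpy as np

# Illustrative instance: f : Z_4 -> Z_2, f(x) = x mod 2, a group homomorphism.
G, B = 4, 2
f = lambda x: x % 2

# Oracle on C[Z_4] (x) C[Z_2]: |x, y> -> |x, y + f(x) mod 2>.
U = np.zeros((G * B, G * B))
for x in range(G):
    for y in range(B):
        U[x * B + ((y + f(x)) % B), x * B + y] = 1

# The oracle is a permutation of basis states, hence unitary.
assert np.allclose(U @ U.T, np.eye(G * B))
```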
\section{Identifying group homomorphisms into abelian groups}
\label{sec:algorithm}
\subsection{Introduction}
In this section we construct a new deterministic quantum algorithm to identify group homomorphisms.
\begin{defn}[Group homomorphism identification problem]
Given finite groups $G$ and $A$ where $A$ is abelian, and a blackbox function $f:G\to A$ that is promised to be a group homomorphism, identify the homomorphism $f$.
\end{defn}
\noindent
We will define a quantum algorithm that solves the group homomorphism identification problem with a number of queries equal to the number of simple factors of the abelian group $A$.
For comparison, we can consider the obvious classical algorithm for this problem.
\begin{lemma}
Given finite groups $G$ and $A$, where $A$ is abelian and $G$ has a generating set of order $m$, and a blackbox function $f:G\to A$ that is promised to be a group homomorphism, a classical algorithm can determine $f$ with $m$ oracle queries.
\end{lemma}
\begin{proof}
Once we have evaluated $f$ classically on each element of the generating set of $G$, we have fully characterized~$f$: every element of $G$ is a product of generators, and $f$ preserves products.
\end{proof}
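As an illustration of this classical procedure, here is a sketch with a hypothetical instance $G = \mathbb{Z}_2 \times \mathbb{Z}_2$ (generated by $(1,0)$ and $(0,1)$, so $m=2$) and $A = \mathbb{Z}_4$; all names are illustrative:

```python
# Classical identification: evaluate the hidden homomorphism on a generating
# set; the homomorphism property then determines it everywhere.
def hidden_f(g):            # blackbox, promised to be a homomorphism
    return (2 * g[0] + 2 * g[1]) % 4

generators = [(1, 0), (0, 1)]
images = {g: hidden_f(g) for g in generators}   # m = 2 oracle queries

def reconstructed_f(g):     # write g = g0*(1,0) + g1*(0,1)
    return (g[0] * images[(1, 0)] + g[1] * images[(0, 1)]) % 4

assert all(reconstructed_f((a, b)) == hidden_f((a, b))
           for a in range(2) for b in range(2))
```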
\noindent
We are unable to prove optimality in either the quantum or the classical case. However, we note that the query complexities of these quantum and classical algorithms depend on different and unrelated parameters of the problem. Instances where the order of the generating set of $G$ is larger than the number of factors in the target group $A$ will demonstrate a quantum advantage.
In the simpler case where $G$ is an abelian group, this quantum algorithm was previously described by H\o yer \cite{hoyer-conjops}, though his algebraic presentation differs significantly from ours. H\o yer also notes that the algorithm by Bernstein and Vazirani in~\cite{bernstein-qcomplex} is an instance of the group homomorphism identification problem where $G=\mathbb{Z}_2^n$ and $A=\mathbb{Z}_2$. Independently, Cleve et~al.~\cite{cleve-qAlgRevisited} also presented an algorithm for the abelian case where $G=\mathbb{Z}_2^n$ and $A=\mathbb{Z}_2^m$.
We will proceed using the abstract structure defined earlier, but will now work in the dagger symmetric monoidal category \cat{FHilb}. Any choice of orthonormal basis for an object $A$ in \cat{FHilb} endows it with a dagger-Frobenius algebra $(A,\tinymult[blackdot],\tinyunit[blackdot])$, whose copying map $d: A \to A\otimes A$ is defined as the linear extension of $d(|i\rangle)=|i\rangle\otimes|i\rangle$. Any finite group $G$ induces a different dagger-Frobenius algebra on the object $A=\mathbb{C}[G]$, the Hilbert space with orthonormal basis given by the elements of $G$, with multiplication given by the linear extension of the group multiplication; we represent this structure as~$(A, \tinymult[whitedot], \tinyunit[whitedot])$. These two Frobenius algebras are complementary.
\def\Mat{\mathrm{Mat}}
Since $G$ is finite, its representations can be characterized as the homomorphisms \mbox{$G \sxto \rho \Mat(n)$}. The homomorphism conditions take the following form~\cite[Section~A.7]{vicary-tqa}:
\begin{calign}
\label{eq:rhocopied}
\begin{aligned}
\begin{tikzpicture}[thick, scale=\licsscale]
\draw (-0.7,-1) node [below] {$G$} to [out=up, in=-145] (0,0);
\draw (0.7,-1) node [below] {$G$} to [out=up, in=-35] (0,0);
\draw (0,0) to (0,0.75);
\node (m) at (0,0) [whitedot] {};
\node (rho) at (0,0.75) [morphism, wedge, width=0, anchor=south] {$\rho$};
\draw ([xshift=5pt] rho.north) to +(0,0.70);
\draw ([xshift=-5pt] rho.north) to +(0,0.70);
\node at (0,2.25) [anchor=south] {$\Mat(n)$};
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[thick, scale=\licsscale]
\node (r1) at (0,1.5) [morphism, wedge] {$\rho$};
\node (r2) at (1.5,1.5) [morphism, wedge] {$\rho$};
\draw (0,0) node [below] {$G$} to (r1.south);
\draw (1.5,0) node [below] {$G$} to (r2.south);
\draw ([xshift=5pt] r1.north) to [out=up, in=up] ([xshift=-5pt] r2.north);
\draw ([xshift=-5pt] r1.north) to [out=up, in=down, in looseness=1] (0.55,3.25);
\draw ([xshift=5pt] r2.north) to [out=up, in=down, in looseness=1] (0.95,3.25);
\node [above] at (0.75,3.25) {$\Mat(n)$};
\end{tikzpicture}
\end{aligned}
&
\begin{aligned}
\begin{tikzpicture}[thick, scale=\licsscale]
\draw (0,-0.55) node [whitedot] {} to +(0,1) node (r1) [morphism, wedge, anchor=south] {$\rho$};
\draw ([xshift=-5pt] r1.north) to +(0,1);
\draw ([xshift=5pt] r1.north) to +(0,1);
\node at (0.0,2.25) [above] {$\Mat(n)$};
\node [below, white] at (0,-1) {$G$};
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[thick, scale=\licsscale]
\draw (0,0) to (0,-1) to [out=down, in=down, looseness=2] (0.5,-1) to (0.5,0);
\node at (0.25,0) [above] {$\Mat(n)$};
\node [below, white] at (0.25,-3.25) {$G$};
\end{tikzpicture}
\end{aligned}
\end{calign}
These will be essential for our proofs below.
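In \cat{FHilb} these conditions say precisely that $\rho(gh) = \rho(g)\,\rho(h)$ and that $\rho$ sends the group unit to the identity matrix. As a quick numerical sanity check, here is an illustrative example not taken from the text, the permutation representation of $S_3$, assuming NumPy:

```python
import numpy as np
from itertools import permutations

# Permutation representation of S_3: rho(sigma) is the permutation matrix
# sending |i> to |sigma(i)>.
def rho(sigma):
    P = np.zeros((3, 3))
    for i in range(3):
        P[sigma[i], i] = 1
    return P

def compose(s, t):          # (s o t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

group = list(permutations(range(3)))

# rho preserves multiplication and the unit, as the diagrams above require.
assert all(np.allclose(rho(compose(s, t)), rho(s) @ rho(t))
           for s in group for t in group)
assert np.allclose(rho((0, 1, 2)), np.eye(3))
```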
\subsection{The algorithm}
The structure of the quantum algorithm that solves the group homomorphism identification problem is given by the topological diagram~\eqref{eq:theAlg} below. Here $\sigma:G\to\mathbb{C}$ is a normalized irreducible representation of $G$, representing the result of the measurement, and $\rho:A\to\mathbb{C}$ is a normalized irreducible representation of $A$. The representation $\rho$ is one-dimensional as $A$ is an abelian group. Physically, we are able to produce the input state $\rho$ efficiently, using $O(\log n)$ time steps, via the quantum Fourier transform for any finite abelian group~\cite{cleve-parallelQFT}. The measurement result $\sigma$ arises from a measurement in the Fourier basis, which can, by a similar procedure for any finite group~\cite{childs-qalgebraic}, also be implemented efficiently.
\begin{align}
\label{eq:theAlg}
\begin{aligned}
\begin{tikzpicture}[string, yscale=1]
\node (dot) [blackdot] at (0,1) {};
\node (f) [morphism, wedge] at (0.7,2) {$f$};
\node (m) [whitedot] at (1.4,3) {};
\node (topsig) [morphism, fill=white, wedge, anchor=south] at (-0.7,3.6) {$\sigma$};
\draw ([xshift=5pt] topsig.north) to +(0,0.3);
\draw ([xshift=-5pt] topsig.north) to +(0,0.3);
\draw (0,0.4)
node [blackdot] {}
node [anchor=20] {$\frac 1 {\sqrt{|G|}}$}
to (0,1)
to [out=left, in=south] (-0.7,2)
to (topsig.south);
\draw (0,1)
to [out=right, in=south] (f.south);
\draw (f.north)
to [out=up, in=left] (1.4,3)
to [out=right, in=up] +(0.7,-1)
to (2.1,0.4)
node [morphism, wedge, hflip, anchor=north] {$\rho$};
\draw (m.center) to (1.4,4.4)
node [above] {};
\draw [thin, lightgray] (-1.25,0.7) to (7.5,0.7);
\draw [thin, lightgray] (-1.25,3.3) to (7.5,3.3);
\node at (3,0) [anchor=west] {Prepare initial states};
\node at (3,2) [anchor=west] {Apply a unitary map};
\node at (3,4) [anchor=west] {Measure the left system};
\node at (-0.7,2) [anchor=east] {$\sqrt{|G|}$};
\end{tikzpicture}
\end{aligned}
\end{align}
We can compare the structure of this algorithm to that of the standard quantum algorithm for the hidden subgroup problem. There, the second system is prepared in a state given by the identity element of the group, corresponding to a uniform linear combination of the irreducible representations. A later measurement of this second system---which is not a part of the standard hidden subgroup algorithm, but can be done without changing the result of the procedure---would collapse this combination to a classical mixture of these representations. The hidden subgroup algorithm therefore contains an amount of classical nondeterminism in its initial setup. In principle removing this, and selecting the input representation strategically, can only improve performance, and we take advantage of this here.
We analyze the effect of our new algorithm as follows.
\begin{lemma}
The algorithm defined by~\eqref{eq:theAlg} gives output $\sigma$ with probability given by the squared norm of~$\sigma\circ f^*\circ\rho^*$.
\end{lemma}
\begin{proof}
Using the fact that $\rho$ is a group homomorphism, the fact that representations are copyable points for the group multiplication, and simple diagrammatic rewrites defined in~\cite[Section~A.9]{vicary-tqa}, we show the following:
\begin{align}
\label{simplifyAlg}
\begin{aligned}
\begin{tikzpicture}[string]
\draw [use as bounding box, draw=none] (-0.3,0.6) rectangle +(3.45,3.7);
\node (f) [morphism, wedge] at (1.25,2) {$f$};
\node (s) [morphism, wedge] at (0,3.5) {$\sigma$};
\node (r) at (2.5,0.75) [morphism, wedge, hflip] {$\rho$};
\draw (f.south) to [out=down, in=right] +(-0.625,-0.5) node (b) [blackdot] {} to [out=left, in=down] +(-0.625, 0.5) to (s.south);
\draw (b.center) to +(0,-0.5) node [blackdot] {};
\draw ([xshift=4pt] s.north) to +(0,0.5);
\draw ([xshift=-4pt] s.north) to +(0,0.5);
\draw (f.north) to [out=up, in=left] +(0.625,0.5) node (w) [whitedot] {} to [out=right, in=up] +(0.625,-0.5) to (r.north);
\draw (w.center) to +(0,1.5);
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[string]
\draw [use as bounding box, draw=none] (-0.3,0.6) rectangle +(3.45,3.7);
\node (s) [morphism, wedge] at (0,3.5) {$\sigma$};
\node (f) [morphism, wedge] at (1.25,2) {$f$};
\node (r) at (1.25, 2.75) [morphism, wedge] {$\rho$};
\node (r2) at (2.5,3.5) [morphism, wedge, hflip] {$\rho$};
\draw (r2.north) to +(0,0.5);
\draw ([xshift=4pt] s.north) to +(0,0.5);
\draw ([xshift=-4pt] s.north) to +(0,0.5);
\draw (r.south) to (f.north);
\draw (f.south) to [out=down, in=right] +(-0.625,-0.5) node (b) [blackdot] {} to [out=left, in=down] +(-0.625, 0.5) to (s.south);
\draw (b.center) to +(0,-0.5) node [blackdot] {};
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[string]
\draw [use as bounding box, draw=none] (-0.66,0.6) rectangle +(2.75,3.7);
\node (r) [morphism, wedge, hflip, vflip] at (0,2) {$\rho$};
\node (s) [morphism, wedge] at (0,3.5) {$\sigma$};
\node (f) [morphism, wedge, hflip, vflip] at (0,2.75) {$f$};
\node (r2) at (1.5,3.5) [morphism, wedge, hflip] {$\rho$};
\draw ([xshift=4pt] s.north) to +(0,0.5);
\draw ([xshift=-4pt] s.north) to +(0,0.5);
\draw (s.south) to (f.north);
\draw (f.south) to (r.north);
\draw (r2.north) to +(0,0.5);
\end{tikzpicture}
\end{aligned}
\end{align}
The left-hand system is thus in the state $\sigma\circ f^*\circ\rho^*$, and by the Born rule, the squared norm of this state gives the probability of this experimental outcome.
\end{proof}
\begin{lemma}\label{lem:irrep}
The composite $\rho\circ f$ is an irreducible representation of $G$.
\end{lemma}
\begin{proof}
The map $f$ is a homomorphism, so $\rho\circ f:G\to\mathbb{C}$ is a one-dimensional representation of $G$. All one-dimensional representations are irreducible, so $\rho\circ f$ is an irreducible representation.
\end{proof}
\begin{lemma}
\label{lem:equaliso}
One-dimensional representations are equivalent only if they are equal.
\end{lemma}
\begin{proof}
Let $\rho_1,\rho_2:G\to \mathbb{C}$ be one-dimensional representations of $G$. If they are equivalent, then there exists an invertible linear map $\mathcal{L}:\mathbb{C}\to\mathbb{C}$, i.e.\ multiplication by some nonzero complex number, such that for all $g\in G$
$$\mathcal{L}\rho_1(g) = \rho_2(g)\mathcal{L}.$$
Cancelling $\mathcal{L}$, we see that $\rho_1(g) = \rho_2(g)$ for all $g\in G$.
\end{proof}
\begin{theorem}[Structure theorem for finite abelian groups]
\label{thm:structure}
Every finite abelian group is isomorphic to a direct product of cyclic groups of prime power order.
\end{theorem}
\begin{proof}
See~\cite[Theorem 6.4]{artin-algebra} for a proof of this standard result.
\end{proof}
\begin{theorem}\label{rightCyclic}
For a finite group $G$ and cyclic group of prime power order $\mathbb{Z}_{p^n}$, the algorithm~\eqref{eq:theAlg} identifies a group homomorphism $f:G\to \mathbb{Z}_{p^n}$ in a single query.
\end{theorem}
\begin{proof}
Choose the input representation $\rho$ to be the fundamental representation of $\mathbb{Z}_{p^n}$. This representation is faithful, which means exactly that
\[ \rho\circ f = \rho\circ f' \qquad \Leftrightarrow \qquad f=f'. \]
Thus $\rho\circ f$ and $\rho\circ f'$ are different irreducible representations if and only if $f$ and $f'$ are different group homomorphisms. The single measurement on the state $(\rho\circ f)^*$ is performed by the algorithm in the representation basis of $G$, allowing us to determine $\rho\circ f$ up to isomorphism. By Lemma~\ref{lem:equaliso} we know that each equivalence class contains only one representative, and thus we can determine $f$ with a single query.
\end{proof}
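The single-query behaviour of this theorem can be simulated directly. The sketch below (illustrative instance $G=\mathbb{Z}_6$, $A=\mathbb{Z}_3$, $f(x)=x\bmod 3$, assuming NumPy) prepares the post-oracle state of the left-hand system derived in the rewrite~\eqref{simplifyAlg}, measures in the character basis of $G$, and finds a deterministic outcome identifying $\rho\circ f$, up to the conjugation conventions:

```python
import numpy as np

# Illustrative instance: G = Z_6, A = Z_3, hidden homomorphism f(x) = x mod 3,
# input representation rho = fundamental character of Z_3 (faithful).
G, A = 6, 3
f = lambda x: x % 3
rho = lambda a: np.exp(2j * np.pi * a / A)

# After the oracle, the left-hand system carries sum_x rho(f(x)) |x>,
# normalized; this is the state computed by the diagrammatic rewrite.
state = np.array([rho(f(x)) for x in range(G)]) / np.sqrt(G)

# Measurement in the character (Fourier) basis of Z_6.
chars = np.array([[np.exp(2j * np.pi * k * x / G) for x in range(G)]
                  for k in range(G)]) / np.sqrt(G)
probs = np.abs(chars.conj() @ state) ** 2

# Deterministic outcome: the character chi_2(x) = e^{2 pi i 2x/6} equals
# rho(f(x)), so the measurement identifies rho o f, and hence f.
assert np.isclose(probs[2], 1)
```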
\begin{theorem}\label{thm:intoAbThm}
For any two finite groups $G$ and $A$, where $A$ is abelian with $n$ simple factors, the quantum algorithm~\eqref{eq:theAlg} can identify a group homomorphism $f:G \to A$ with $n$ oracle queries.
\end{theorem}
\begin{proof}
We prove the result by induction.
\newline\newline
\noindent{\bf Base case.} When $A=\mathbb{Z}_{p^n}$ is simple, then by Theorem~\ref{rightCyclic} we can identify the homomorphism with a single query.
\newline\newline
\noindent{\bf Inductive step.} If $A$ is not simple, then we must have $A=H_1\times H_2$ by Theorem~\ref{thm:structure}, where the following hold:
\begin{enumerate}
\item The product $\times$ is the direct product whose projectors ($p_1,p_2$) are homomorphisms.
\item $H_1$ and $H_2$ are groups with $n_1$ and $n_2$ factors respectively such that the theorem holds, i.e. homomorphisms of the type $f_{1}:G\to H_1$ and $f_{2}:G\to H_2$ can be identified in $n_1$ and $n_2$ queries respectively.
\end{enumerate}
Since $p_1\circ f$ and $p_2\circ f$ are homomorphisms, we can run subroutines of the algorithm to determine them. Hence we recover $f$ as
\begin{align*}
f(x) = ( (p_1\circ f)(x),(p_2\circ f)(x) ).
\end{align*}
The first subroutine will require $n_1$ queries and the second will require $n_2$ queries, so the total number of queries will be $n_1+n_2$, which is the number of factors of $H_1\times H_2$.
\end{proof}
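The componentwise recovery in this induction can be illustrated classically. The following Python sketch (our own illustration, not the quantum algorithm; all names are hypothetical) recovers a homomorphism $f:\mathbb{Z}_{12}\to\mathbb{Z}_4\times\mathbb{Z}_3$ from the projected homomorphisms $p_1\circ f$ and $p_2\circ f$, running one subroutine per factor:

```python
# Classical illustration of the inductive structure of the proof above:
# a homomorphism f : Z_m -> Z_{q1} x Z_{q2} is recovered componentwise
# from the projected homomorphisms p_k o f.
def make_hom(m, q, c):
    # f(x) = c*x mod q is a homomorphism Z_m -> Z_q iff c*m = 0 (mod q)
    assert (c * m) % q == 0
    return lambda x: (c * x) % q

m = 12
f1 = make_hom(m, 4, 3)        # Z_12 -> Z_4, x -> 3x
f2 = make_hom(m, 3, 2)        # Z_12 -> Z_3, x -> 2x
f = lambda x: (f1(x), f2(x))  # the unknown oracle

def identify_cyclic(oracle, m, q):
    # Identify a homomorphism into one cyclic factor from oracle access.
    # Classically, one query on a generator of Z_m suffices here; the
    # quantum algorithm achieves one query per factor for arbitrary finite G.
    c = oracle(1)             # the value on the generator determines f
    return make_hom(m, q, c)

g1 = identify_cyclic(lambda x: f(x)[0], m, 4)
g2 = identify_cyclic(lambda x: f(x)[1], m, 3)
g = lambda x: (g1(x), g2(x))
assert all(g(x) == f(x) for x in range(m))  # f recovered from its projections
```

The two subroutine calls mirror the two recursive invocations on $H_1$ and $H_2$; their query counts simply add.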
\subsection{Extension to the non-abelian case}
We now consider the more general case where the target group $A$ is non-abelian. We do not know how to extend the algorithm described above to this case. Nevertheless, it is instructive to analyze this scenario in our graphical approach.
Irreducible representations of a non-abelian group $A$ are not necessarily one dimensional, though we are still able to compute them via the Fourier transform efficiently \cite{childs-qalgebraic}. In this case the algorithm has the following structure, where $\psi$ represents the initial state of the right-hand system in the representation space:
\begin{equation}
\label{eq:NonAbAlg}
\begin{aligned}
\begin{tikzpicture}[string]
\draw [use as bounding box, draw=none] (-0.3,-0.3) rectangle +(3.45,4.55);
\node (f) [morphism, wedge] at (1.25,2) {$f$};
\node (s) [morphism, wedge] at (0,3.5) {$\sigma$};
\node (r) at (2.5,0.75) [morphism, wedge, hflip] {$\rho$};
\draw (f.south) to [out=down, in=right] +(-0.625,-0.5) node (b) [blackdot] {} to [out=left, in=down] +(-0.625, 0.5) to (s.south);
\draw (b.center) to +(0,-0.5) node [blackdot] {} node [anchor=east] {$\frac {1} {\sqrt{G}}$};
\draw ([xshift=4pt] s.north) to +(0,0.5);
\draw ([xshift=-4pt] s.north) to +(0,0.5);
\draw (f.north) to [out=up, in=left] +(0.625,0.5) node (w) [whitedot] {} to [out=right, in=up] +(0.625,-0.5) to (r.north);
\draw (w.center) to +(0,1.5);
\draw ([xshift=4pt] r.south) to +(0,-0.3);
\draw ([xshift=-4pt] r.south) to +(0,-0.3);
\node [morphism, wedge, anchor=north] at ([yshift=-0.3cm] r.south) {$\psi$};
\end{tikzpicture}
\end{aligned}
\quad=\quad
\begin{aligned}
\begin{tikzpicture}[string]
\draw [use as bounding box, draw=none] (-0.3,-0.3) rectangle +(3.45,4.55);
\node (s) [morphism, wedge] at (0,3.5) {$\sigma$};
\node (f) [morphism, wedge] at (1.25,1.5) {$f$};
\node (r) at (1.25, 2.25) [morphism, wedge] {$\rho$};
\node (r2) at (2.5,3.5) [morphism, wedge, hflip] {$\rho$};
\draw (r2.north) to +(0,0.5);
\draw ([xshift=4pt] s.north) to +(0,0.5);
\draw ([xshift=-4pt] s.north) to +(0,0.5);
\draw (r.south) to (f.north);
\draw (f.south) to [out=down, in=right] +(-0.625,-0.5) node (b) [blackdot] {} to [out=left, in=down] +(-0.625, 0.5) to (s.south);
\draw (b.center) to +(0,-0.5) node [blackdot] {};
\node (psi) [morphism, wedge, anchor=north] at (2.5,0.2) {$\psi$};
\draw ([xshift=4pt] psi.north) to ([xshift=4pt] r2.south);
\draw ([xshift=-4pt] psi.north) to ([xshift=-4pt] r.north -| r2.south) to [out=up, in=up] ([xshift=4pt] r.north);
\draw ([xshift=-4pt] r.north) to [out looseness=1.3, out=up, in=down, in looseness=0.8] ([xshift=-4pt] r2.south);
\end{tikzpicture}
\end{aligned}
\end{equation}
We notice two additional features in this case. First, it is clear that the left and right systems are no longer in a product state at the end of the protocol, as they were in the final diagram of \eqref{simplifyAlg}. Second, we now have an additional choice when preparing the input representation $\rho$; in order to construct a state from a representation $\rho$ we also must choose the state $\psi$.
While this provides a clear description of the algorithm in this more general setting, it is not clear that it would identify homomorphisms into non-abelian groups. Complications include the lack of a structure theorem that satisfies the conditions for Theorem~\ref{thm:intoAbThm}, and that Lemma~\ref{lem:irrep} no longer applies. In this setting it may be useful to make the problem easier by restricting to the identification of homomorphisms up to \emph{natural isomorphism}, i.e. where two homomorphisms $f_1,f_2:G\to H$ are considered equivalent when there exists some $\eta\in H$ such that, for all $g\in G$, we have $\eta f_1(g) \eta^{-1} = f_2(g)$.
\section{Application to signal-flow calculus}
\label{sec:signalflow}
\subsection{Introduction}
\label{sec:signalintroduction}
Signal-flow diagrams are a notation in electrical engineering that describes the flow of information in electrical circuits, including rich phenomena such as feedback. Various authors~\cite{fong-transfer, baezerbele, sobocinski} have developed a categorical approach to modelling signal-flow diagrams, based on a category of linear relations on vector spaces over a field $k$. We show in this Section that unitary oracles exist in their setup, in the sense of our Definition~\ref{oracle}, and discuss the consequences of this.
We begin with a brief introduction to the theory, following the terminology of~\cite{baezerbele}.
\def\sto{\rightsquigarrow}
\begin{defn}
The category $\cat{FinRel}_k$ of \textit{linear relations} is defined in the following way, for any field~$k$:
\begin{itemize}
\item \textbf{Objects} are finite dimensional $k$-vector spaces
\item A \textbf{morphism} $f:V \sto W$ is a \textit{linear relation}, defined as a subspace $S_f \hookrightarrow V \oplus W$
\item \textbf{Composition} of linear relations $f:U \sto V$ and $g: V \sto W$ is defined as the following subspace of $U \oplus W$:
\begin{equation}
\{ (u,w) | \exists v \in V \text{ with } (u,v) \in S_f \text{ and } (v,w) \in S_g \}
\end{equation}
It can be verified that this defines a linear subspace of $U \oplus W$.
\end{itemize}
Note that a linear relation is in particular an ordinary relation, and that composition of linear relations is the same as for ordinary relations. The category $\cat{FinRel}_k$ can be given a monoidal structure in a natural way, using the direct sum of vector spaces.
\end{defn}
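Over a finite field, a linear relation is a finite set of pairs, so the composition in this definition can be computed extensionally. The following Python sketch (a toy model over $\mathbb{F}_5$, with hypothetical helper names of our own) composes two linear relations and verifies that the composite is again a subspace:

```python
# A finite toy model of FinRel_k over k = F_5: a linear relation V ~> W
# is represented extensionally as a set of (input tuple, output tuple) pairs.
p = 5
k = range(p)

def compose(S_f, S_g):
    # {(u, w) : there exists v with (u, v) in S_f and (v, w) in S_g}
    return {(u, w) for (u, v) in S_f for (v2, w) in S_g if v == v2}

# two sample linear relations on k, given by their graphs
two   = {((a,), ((2 * a) % p,)) for a in k}   # x ~> 2x
three = {((a,), ((3 * a) % p,)) for a in k}   # x ~> 3x

comp = compose(two, three)
assert comp == {((a,), ((6 * a) % p,)) for a in k}   # x ~> 6x = x (mod 5)

def is_subspace(S, p=p):
    # the graph of a linear relation must be a subspace of V + W;
    # over F_p, closure under addition suffices (scalars are repeated sums)
    vecs = {u + w for (u, w) in S}   # concatenate the tuples
    add = lambda x, y: tuple((a + b) % p for a, b in zip(x, y))
    return all(add(x, y) in vecs for x in vecs for y in vecs)

assert is_subspace(comp)   # the composite is again a linear relation
```

This also makes visible the remark above that composition of linear relations is the same as for ordinary relations.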
For every linear relation, we can define a converse as follows.
\begin{defn}
\label{def:converse}
Given a linear relation $f: U \sto V$ defined as the subspace $S_f \hookrightarrow U \oplus V$, its \emph{converse} is the linear relation $f ^\dag : V \sto U$ defined as the subspace $S_f \hookrightarrow U \oplus V \sxto{\text{swap}} V \oplus U$.
\end{defn}
\noindent
This makes $\cat{FinRel}_k$ into a monoidal dagger-category. Following the usual convention~\cite{selinger}, we depict the dagger of a linear relation as the original morphism flipped about a horizontal axis.
Certain canonical linear relations play an important role in the theory. We define them here, along with the graphical symbol we will use to denote them.
\def\br{\text{\textit{\textbf{r}}}}
\begin{defn}
\label{defn:basicrelations}
The \textit{addition}, \textit{zero}, \textit{copying}, \textit{deletion} and \textit{multiplier} linear relations are defined as follows, where the definitions in the last line are valid for all $a,b \in k$, and where the multiplier relation takes a parameter given by some $r \in k$:
\begin{equation}
\begin{array}{c@{\qquad}c@{\qquad}c@{\qquad}c@{\qquad}c}
\begin{aligned}
\begin{tikzpicture}[string]
\node (n) [uptriangle] at (0,0) {};
\draw (0,1) to (0,0);
\draw [shorten >=-5pt] (-0.7,-1) to [out=up, in=-140] (n.corner 2);
\draw [shorten >=-5pt] (0.7,-1) to [out=up, in=-40] (n.corner 3);
\end{tikzpicture}
\end{aligned}
&
\begin{aligned}
\begin{tikzpicture}[string]
\draw [white] (0,-1) to (0,1);
\node [circle, draw, fill=black, minimum width=10pt] at (0,0) {};
\draw (0,0) to (0,1);
\end{tikzpicture}
\end{aligned}
&
\begin{aligned}
\begin{tikzpicture}[string]
\node (n) [downtriangle, fill=white] at (0,0) {};
\draw (0,-1) to (0,0);
\draw [shorten >=-5pt] (-0.7,1) to [out=down, in=140] (n.corner 3);
\draw [shorten >=-5pt] (0.7,1) to [out=down, in=40] (n.corner 2);
\node (n) [downtriangle, fill=white] at (0,0) {};
\end{tikzpicture}
\end{aligned}
&
\begin{aligned}
\begin{tikzpicture}[string]
\draw [white] (0,-1) to (0,1);
\draw (0,0) to (0,-1);
\node [circle, draw, fill=white, minimum width=10pt] at (0,0) {};
\end{tikzpicture}
\end{aligned}
&
\begin{aligned}
\begin{tikzpicture}[string]
\draw (0,-1) to (0,1);
\node [circle, draw, inner sep=0pt, minimum width=13pt, fill=white] at (0,0) {\br};
\end{tikzpicture}
\end{aligned}
\\
\text{Addition}
&
\text{Zero}
&
\text{Copying}
&
\text{Deletion}
&
\text{Multiplier}
\\
\blacktriangle : k \oplus k \sto k
& \newmoon : \{0\} \sto k
& \nabla : k \sto k \oplus k
& \ocircle: k \sto \{0\}
& \br:k \sto k
\\
(a,b,a+b) \in \blacktriangle
& (0,0) \in \newmoon
& (a,a,a) \in \nabla
& (a,0) \in \ocircle
& (a,ra) \in \br
\end{array}
\end{equation}
\end{defn}
\noindent
The authors of~\cite{baezerbele} use this theory to model resistors in electrical circuits, via the following network:
\begin{equation}
\label{eq:resistor}
\begin{aligned}
\begin{tikzpicture}[string, yscale=1, xscale=1]
\node (black) [uptriangle] at (0,1.2) {};
\node (white) [downtriangle, fill=white] at (-1,-0.2) {};
\draw [shorten <=-1pt, shorten >=-1pt] (black.corner 2) to [out=-150, in=30] (white.corner 2);
\draw [shorten >=-1pt] (1,-1.2) node [below] {$v\vphantom{i}$} to [out=up, in=-30, out looseness=0.5] (black.corner 3);
\draw [shorten <=-1pt] (white.corner 1) to (-1,-1.2) node [below] {$i$};
\draw [shorten >=-1pt] (-2,2.2) node [above] {$i$} to [out=down, in=150, out looseness=0.5] (white.corner 3);
\draw [shorten <=-1pt] (black.corner 1) to (0,2.2) node [above] {$v+ir$};
\node (r) [circle, draw, inner sep=0pt, minimum width=13pt, fill=white] at (-0.5,0.5) {\br};
\end{tikzpicture}
\end{aligned}
\end{equation}
The left-hand wire represents the current variable, and the right-hand wire represents the voltage variable. The initial current-voltage pair $(i,v)$ is mapped to the output current-voltage pair $(i,v+ir)$. This respects the usual law for resistors in electrical circuits, whereby if $\delta v$ is the change in voltage over a resistor, $i$ is the current through the resistor, and the value of the resistance is $r$, then $\delta v = i r$.
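This behaviour can be checked concretely in the extensional model. The following Python sketch (our own toy computation over $k=\mathbb{F}_7$ with the arbitrary choice $r=3$; both values are illustrative assumptions) wires the generators of Definition~\ref{defn:basicrelations} together by hand, as in the network above, and confirms the law $(i,v)\mapsto(i,v+ir)$:

```python
# Assemble the resistor relation of the diagram above over k = F_7 from
# the copying, multiplier and addition generators, and check Ohm's law.
p, r = 7, 3
k = range(p)

copy_rel = {((a,), (a, a)) for a in k}                        # copying
mult_rel = {((a,), ((r * a) % p,)) for a in k}                # multiplier by r
add_rel  = {((a, b), ((a + b) % p,)) for a in k for b in k}   # addition

# wire the network by hand:
#   (i, v) -> copy the current        -> (i, i, v)
#          -> multiply the middle wire -> (i, r*i, v)
#          -> add the last two wires   -> (i, v + r*i)
resistor = set()
for i in k:
    for v in k:
        (i1, i2), = {out for (inp, out) in copy_rel if inp == (i,)}
        (ri,),    = {out for (inp, out) in mult_rel if inp == (i2,)}
        (out_v,), = {out for (inp, out) in add_rel if inp == (v, ri)}
        resistor.add(((i, v), (i1, out_v)))

assert resistor == {((i, v), (i, (v + r * i) % p)) for i in k for v in k}
```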
It has been recognized in~\cite{baezerbele} that the linear relations given in Definition~\ref{defn:basicrelations} satisfy many interesting relationships, which we summarize here without proof:
\begin{lemma}
\label{lem:initialproperties}
In $\cat{FinRel}_k$, the following relationships hold between the addition, zero, copying, deletion and multiplier linear relations:
\begin{enumerate}
\item Addition and zero together form a commutative monoid.
\item Copying and deletion together form a commutative comonoid.
\item This monoid and comonoid together form a bialgebra.
\item The multiplier relation is a monoid homomorphism for addition, and a comonoid homomorphism for copying.
\end{enumerate}
\end{lemma}
\subsection{Complementary dagger-Frobenius structure}
In this section we prove new results about the structures introduced in Section~\ref{sec:signalintroduction}. We begin by establishing the existence of dagger-Frobenius properties of the addition and copying operations.
\begin{lemma}
\label{lem:signalfrobenius}
In $\cat{FinRel}_k$, the addition and copying linear relations separately form commutative dagger-Frobenius algebras.
\end{lemma}
\begin{proof}
That addition and zero forms a commutative monoid, and copying and deletion forms a commutative comonoid, is established in Lemma~\ref{lem:initialproperties}. It remains to demonstrate that the dagger-Frobenius conditions hold for each of these structures.
We first evaluate the action of the following composite linear relation, which is one side of the dagger-Frobenius condition for the copying linear relation:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[string, scale=1, yscale=1]
\node (black) [uptriangle, fill=white] at (0,1) {};
\node (white) [downtriangle, fill=white] at (1,0.2) {};
\draw [shorten <=-1pt, shorten >=-1pt] (black.corner 3) to [out=-30, in=150] (white.corner 3);
\draw [shorten >=-1pt] (-0.8,-0.5) to [out=up, in=-150] (black.corner 2);
\draw [shorten <=-1pt] (white.corner 1) to (1,-0.5);
\draw [shorten >=-1pt] (1.8,1.7) to [out=down, in=30] (white.corner 2);
\draw [shorten <=-1pt] (black.corner 1) to (0,1.7);
\node [below] at (-0.8,-0.5) {$a\vphantom{|}$};
\node [below] at (1,-0.5) {$b$};
\node [below] at (0.3,0.8) {$b$};
\node [above] at (0,1.7) {$a$};
\node [above] at (-1,1.63) {(if $a=b$)};
\node [above] at (1.8,1.7) {$b$};
\end{tikzpicture}
\end{aligned}
\end{equation}
We see that this composite relation can be defined as $\forall a, (a,a) \smallwhitediagram (a,a)$, and similarly it can be shown that $\forall a, (a,a) \smallwhitediagramflip (a,a)$. Hence we have demonstrated the dagger-Frobenius condition $\smallwhitediagram = \smallwhitediagramflip$.
For the addition linear relation, we calculate the left side of the dagger-Frobenius condition as follows:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[string, scale=1.1, yscale=1.1]
\node (black) [uptriangle] at (0,1) {};
\node (white) [downtriangle] at (1,0.2) {};
\draw [shorten <=-1pt, shorten >=-1pt] (black.corner 3) to [out=-30, in=150] (white.corner 3);
\draw [shorten >=-1pt] (-0.8,-0.5) to [out=up, in=-150] (black.corner 2);
\draw [shorten <=-1pt] (white.corner 1) to (1,-0.5);
\draw [shorten >=-1pt] (1.8,1.7) to [out=down, in=30] (white.corner 2);
\draw [shorten <=-1pt] (black.corner 1) to (0,1.7);
\node [below] at (-0.8,-0.5) {$a\vphantom{|}$};
\node [below] at (1,-0.5) {$b$};
\node [left] at (0.65,0.4) {$\forall c, c$};
\node [above] at (0,1.7) {$a+c$};
\node [above] at (-1,1.63) {$\forall c,$};
\node [above] at (1.8,1.7) {$b-c$};
\end{tikzpicture}
\end{aligned}
\end{equation}
We can write this action succinctly as $\forall c,(a,b) \smallblackdiagram (a+c,b-c)$. Similarly, the other composite can be shown to have action $\forall c,(a,b) \smallblackdiagramflip (a-c,b+c)$. Making the substitution $c':=-c$, we can rewrite this second definition as $\forall c',(a,b) \smallblackdiagramflip (a+c',b-c')$. This demonstrates that $\smallblackdiagram = \smallblackdiagramflip$ as linear relations, verifying the dagger-Frobenius condition for the addition linear relation.
\end{proof}
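The relational actions computed in this proof can be verified mechanically over a small field. The following Python sketch (a brute-force check over $\mathbb{F}_5$; the helper names are ours) builds both Frobenius composites for the addition relation and confirms they equal the action $\forall c,\ (a,b)\rightsquigarrow(a+c,b-c)$:

```python
# Brute-force check over k = F_5 of the dagger-Frobenius condition for the
# addition relation, matching the action: forall c, (a,b) ~> (a+c, b-c).
p = 5
k = range(p)

add = {((a, b), ((a + b) % p,)) for a in k for b in k}   # the addition relation
dag = {(y, x) for (x, y) in add}                          # its converse

def compose(S, T):
    # relational composition: apply S, then T
    return {(x, z) for (x, y) in S for (y2, z) in T if y == y2}

def tensor(S, T):
    # monoidal product: place relations side by side (concatenate tuples)
    return {(x1 + x2, y1 + y2) for (x1, y1) in S for (x2, y2) in T}

idk = {((a,), (a,)) for a in k}                           # identity on k

left   = compose(tensor(dag, idk), tensor(idk, add))  # (add† x 1) then (1 x add)
right  = compose(tensor(idk, dag), tensor(add, idk))  # (1 x add†) then (add x 1)
middle = compose(add, dag)                            # add then add†

expected = {((a, b), ((a + c) % p, (b - c) % p))
            for a in k for b in k for c in k}
assert left == right == middle == expected
```

The analogous check for the copying relation reproduces the action $\forall a,\ (a,a)\rightsquigarrow(a,a)$ in the same way.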
Furthermore, these Frobenius algebras interact as complementary structures.
\begin{lemma}
In $\cat{FinRel}_k$, the addition and copying linear relations form complementary dagger-Frobenius algebras.
\end{lemma}
\begin{proof}
We have already established the Frobenius properties in Lemma~\ref{lem:signalfrobenius}. It remains to demonstrate the complementarity condition.
We evaluate the action of the following composite relation:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[string]
\node (black) [uptriangle] at (0,1) {};
\node (white) [downtriangle, fill=white] at (1,0.2) {};
\draw [shorten <=-1pt, shorten >=-1pt] (black.corner 3) to [out=-30, in=150] (white.corner 3);
\draw [shorten >=-1pt] (-0.8,-0.5) to [out=up, in=-150] (black.corner 2);
\draw [shorten <=-1pt] (white.corner 1) to (1,-0.5);
\draw [shorten >=-1pt] (1.8,1.7) to [out=down, in=30] (white.corner 2);
\draw (0,1) to (0,1.7);
\node [below] at (-0.8,-0.5) {$a$};
\node [below] at (1,-0.5) {$b$};
\node [below] at (0.3,0.8) {$b$};
\node [above] at (0,1.7) {$a+b$};
\node [above] at (1.8,1.7) {$b$};
\end{tikzpicture}
\end{aligned}
\end{equation}
Writing $K$ for this linear relation, we see that $K$ is given by $\forall a,b\in k, (a,b) K (a+b,b)$. By Definition~\ref{def:converse} of the converse relation, we see that $K ^\dag$ is defined as $\forall a,b\in k,(a+b,b)K^\dag (a,b)$, or equivalently $\forall a,b \in k,(a,b) K^\dag (a-b,b)$. Since $K$ is a bijection on $k \oplus k$, the relations $K$ and $K^\dag$ are mutually inverse, as explicit calculation confirms. By Theorem~\ref{thm:complementarityunitary}, it follows that addition and copying are complementary.
\end{proof}
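The invertibility of $K$ used in this proof can likewise be checked by exhaustive computation. This short Python sketch (a sanity check over $\mathbb{F}_5$, names ours) verifies that $K$ and $K^\dag$ compose to the identity relation in both orders:

```python
# Check over F_5 that the complementarity composite K : (a,b) -> (a+b, b)
# and its converse are mutually inverse as relations.
p = 5
k = range(p)
K    = {((a, b), ((a + b) % p, b)) for a in k for b in k}
Kdag = {(y, x) for (x, y) in K}    # the converse relation

def compose(S, T):
    return {(x, z) for (x, y) in S for (y2, z) in T if y == y2}

ident = {((a, b), (a, b)) for a in k for b in k}
assert compose(K, Kdag) == ident and compose(Kdag, K) == ident
```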
The final property that we establish is that multipliers are self-conjugate.
\begin{lemma}
In $\cat{FinRel}_k$, a multiplier $\br : k \sto k$ is a self-conjugate morphism.
\end{lemma}
\begin{proof}
We must verify that $\br$ is equal to the transpose of its dagger:
\begin{equation}
\begin{aligned}
\begin{tikzpicture}[string, yscale=1.25]
\draw (0,0) node [below] {$a$} to node [circle, draw, inner sep=0pt, minimum width=13pt, fill=white] {\br} (0,3.4) node [above] {$ra$};
\end{tikzpicture}
\end{aligned}
\qquad=\qquad
\begin{aligned}
\begin{tikzpicture}[string, yscale=1.25]
\draw [lightgray] (-1.3,-0.8) to (2.3,-0.8);
\draw [lightgray] (-1.3,-0.2) to (2.3,-0.2);
\draw [lightgray] (-1.3,0.5) to (2.3,0.5);
\draw [lightgray] (-1.3,1.2) to (2.3,1.2);
\draw [lightgray] (-1.3,1.8) to (2.3,1.8);
\node (black) [uptriangle] at (0,1.2) {};
\node (white) [downtriangle] at (1,-0.2) {};
\draw [shorten <=-1pt, shorten >=-1pt] (black.corner 3) to [out=-30, in=150] (white.corner 3);
\draw [shorten >=-1pt] (-1,-1.2) to [out=up, in=-150, out looseness=0.5] (black.corner 2);
\draw [shorten <=-1pt] (white.corner 1) to (1,-0.8);
\draw [shorten >=-1pt] (2,2.2) to [out=down, in=30, out looseness=0.5] (white.corner 2);
\draw [shorten <=-1pt] (black.corner 1) to (0,1.7);
\node (r) [circle, draw, inner sep=0pt, minimum width=13pt, fill=white] at (0.5,0.5) {$\br \smash{\scriptstyle {}^\dagger}$};
\node [below] at (-1,-1.2) {$a$};
\node [left] at (0.55,0.1) {$b\vphantom{,|}$};
\node at (1.3,-0.55) {$0\vphantom{,|}$};
\node at (1.9,0.1) {$-b\vphantom{,|}$};
\node [left] at (-1.2,0.1) {$\forall b,\vphantom{,|}$};
\node [left] at (-0.8,0.1) {$a\vphantom{,|}$};
\node [left] at (-1.2,0.8) {$\forall b,\vphantom{,|}$};
\node [left] at (1.3,0.8) {$b/r\vphantom{,|}$};
\node [left] at (1.5,1.5) {$a+b/r\vphantom{,|}$};
\node at (2.2,0.8) {$-b\vphantom{,|}$};
\node at (2.35,1.5) {$-b\vphantom{,|}$};
\node [left] at (-0.5,0.8) {$a\vphantom{,|}$};
\node [left] at (-1.2,1.5) {$\forall b,\vphantom{,|}$};
\node [above] at (2,2.2) {$ra$};
\node [circle, draw, fill=black] at (0,1.8) {};
\node [circle, draw, fill=black] at (1,-0.8) {};
\end{tikzpicture}
\end{aligned}
\end{equation}
On the right-hand side we see that $a$ is related to $-b$, with the constraint that $a+b/r=0$, i.e. that $-b=ra$. This is equal as a linear relation to that of $\br$ itself, given on the left-hand side. This establishes the result.
\end{proof}
Given these results, we are motivated to make the following definitions which generalize the motivating example of the theory of signal-flow diagrams in $\cat{FinRel}_k$.
\begin{defn}
In a symmetric monoidal dagger-category, a \emph{signal-flow structure} is an object $A$ equipped with a pair of commutative dagger-Frobenius algebras, which interact as a bialgebra. A \emph{multiplier} for this signal-flow structure is a self-conjugate morphism $\br:A \to A$ which is a monoid and comonoid homomorphism for both structures.
\end{defn}
\begin{defn}
Given a signal-flow structure equipped with a multiplier $\br$, the \emph{resistor} associated to $\br$ is the composite given by diagram~\eqref{eq:resistor}.
\end{defn}
\noindent
We then apply our earlier result to show that resistors are always unitary.
\begin{corollary}
Given a signal-flow structure equipped with a multiplier, its resistor is unitary.
\end{corollary}
\begin{proof}
An immediate application of Theorem~\ref{thm:familyofunitaries}.
\end{proof}
\noindent
The appearance of this unitary structure in both quantum algorithms and the signal-flow calculus highlights the general role that this abstract structure can play in different process theories.
\bibliographystyle{eptcs}
\section{Introduction}
Let $(R,{\mathfrak m})$ be a local ring containing a field of positive characteristic $p>0$. If $I$ is an ideal in $R$, then $I^{[q]}=(i^q: i \in I)$, where $q=p^e$ is a power of the characteristic. Let $R^{\circ} = R \setminus \cup P$, where $P$ runs over the set of all minimal primes of $R$. An element $x$ is said to belong to the {\it tight closure} of the ideal $I$ if there exists $c \in R^{\circ}$ such that $cx^q \in I^{[q]}$ for all sufficiently large $q=p^e$. The tight closure of $I$ is denoted by $I^\ast$. By a ${\it parameter \ ideal}$ we mean an ideal generated by a full system of parameters in $R$. For an ${\mathfrak m}$-primary ideal $I$, one can consider the Hilbert-Samuel multiplicity and the Hilbert-Kunz multiplicity. A ring $R$ is called unmixed if
${\rm dim} (R/Q) = {\rm dim} (R)$, for all associated primes $Q$ of $R$.
\begin{Definition}
Let $I$ be an ${\mathfrak m}$-primary ideal in a $d$-dimensional local ring $(R,{\mathfrak m})$.
In what follows $\operatorname{\lambda}(-)$ denotes the length function.
{\it The Hilbert-Kunz multiplicity of $R$ at $I$} is defined by $\operatorname{e} _{HK} (I)= \operatorname{e} _{HK}(I,R): = \displaystyle\lim_{q \to \infty} \frac{\operatorname{\lambda}(R/I^{[q]})}{q^d}$. Monsky has shown that this limit exists and is positive. If $I ={\mathfrak m}$, then we call $\operatorname{e}_{HK} ({\mathfrak m}, R)$ the Hilbert-Kunz multiplicity of $R$ and denote it by $\operatorname{e}_{HK}(R)$.
{\it The Hilbert-Samuel multiplicity of $R$ at $I$} is defined by $\operatorname{e} (I)= \operatorname{e} (I,R) := \displaystyle\lim_{n \to \infty} d! \frac{\operatorname{\lambda}(R/I^n)}{n^d}$. This limit also exists and is positive; as before, $\operatorname{e} ({\mathfrak m}, R)$ is simply denoted $\operatorname{e}(R)$ and called the Hilbert-Samuel multiplicity of $R$.
\end{Definition}
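As a purely illustrative numerical sketch of this definition (not part of the paper's argument), the following Python code computes $\operatorname{\lambda}(R/{\mathfrak m}^{[q]})$ for the one-dimensional hypersurface $R = k[x,y]/(x^2+y^2)$ over $k=\mathbb{F}_3$ by linear algebra: the colength equals $q^2$ minus the rank of the span of the reductions of $x^a y^b (x^2+y^2)$ modulo $(x^q, y^q)$. Here $\operatorname{\lambda}(R/{\mathfrak m}^{[q]}) = 2q-1$, so the normalized colengths approach $\operatorname{e}_{HK}(R) = \operatorname{e}(R) = 2$ (in dimension one the two multiplicities agree):

```python
# Numerical illustration of the Hilbert-Kunz limit for the one-dimensional
# hypersurface R = k[x,y]/(x^2 + y^2) over k = F_3.
P = 3  # the characteristic

def rank_mod_p(rows, p):
    # plain Gaussian elimination over the prime field F_p
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)   # pivot inverse mod p
        rows[rank] = [(v * inv) % p for v in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                c = rows[i][col]
                rows[i] = [(v - c * w) % p for v, w in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def colength(q, p=P):
    # lambda(R/m^[q]) for R = k[x,y]/(x^2+y^2), via the monomial basis
    # x^a y^b (0 <= a,b < q) of k[x,y]/(x^q, y^q)
    idx = lambda a, b: a * q + b
    rows = []
    for a in range(q):
        for b in range(q):
            row = [0] * (q * q)     # reduction of x^a y^b * (x^2 + y^2)
            if a + 2 < q:
                row[idx(a + 2, b)] = 1
            if b + 2 < q:
                row[idx(a, b + 2)] = 1
            if any(row):
                rows.append(row)
    return q * q - rank_mod_p(rows, p)

for e in (1, 2):
    q = P ** e
    print(q, colength(q), colength(q) / q)   # the ratio tends to e_HK = 2
```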
It is known that for parameter ideals $I$, one has $\operatorname{e}(I) = \operatorname{e}_{HK}(I)$. The following sequence of inequalities is also known to hold:
$${\rm max} \{ 1, \frac{1}{d!} \operatorname{e} (I) \} \leq \operatorname{e}_{HK} (I) \leq \operatorname{e}(I)$$
for every ${\mathfrak m}$-primary ideal $I$.
By a result of Watanabe and Yoshida \cite{WY1},
an unmixed local ring $R$ of characteristic $p>0$ is
regular if and only if the Hilbert-Kunz multiplicity,
\[
\operatorname{e}_{HK}(R)= 1.
\]
A short proof of this was given by Huneke and Yao in~\cite{HY}.
In~\cite{BE}, Blickle and Enescu have started a first investigation of the number
\[
\epsilon_{HK}(d,p) = \inf\{\operatorname{e}_{HK}(R)-1 : \text{$R$ non-regular,
unmixed, $\dim R = d$, $\operatorname{char} R = p$} \}
\]
by showing that $\epsilon_{HK}(d,p)$ is always
\emph{strictly} positive, i.e.\ the Hilbert-Kunz multiplicity of a
non-regular ring of fixed dimension and characteristic cannot be
arbitrarily close to one. They have raised the natural question of whether
$\epsilon_{HK}(d,p)$ is attained and, if so, what the significance of
rings with minimal Hilbert-Kunz multiplicity might be.
In~\cite{WY2}, Watanabe and Yoshida have formulated the following conjecture
\begin{Conjecture}[Watanabe-Yoshida]
\label{conjecture}
Let $d \geq 2$ and $p \neq 2$ prime. Put
$$R_{p,d}: = k[[X_0,...,X_d]]/(X_0^2+ \cdots + X_d^2).$$
Let $\text{$(R,\fm,k)$ }$ be a $d$-dimensional unmixed local ring and let $k = \overline{\mathbf{F}}_p$. Then the following statements hold:
\item
$(1)$ If $R$ is not regular, then $\operatorname{e}_{HK}(R) \geq \operatorname{e}_{HK}(R_{p,d})$.
\item
$(2)$ If $\operatorname{e}_{HK}(R) = \operatorname{e}_{HK}(R_{p,d})$, then the ${\mathfrak m}$-adic completion of $R$ is isomorphic to $R_{p,d}$ as local rings.
\end{Conjecture}
The case $d=2$ has been solved affirmatively (see~\cite{WY1, BE}). The cases $d=3,4$ are more difficult and have been answered affirmatively by Watanabe and Yoshida~\cite{WY2}. The case $d=1$ is easy to interpret since $\operatorname{e}_{HK} (A) = \operatorname{e} (A)$.
In this paper we would like to prove part (1) of the Conjecture for complete intersections.
We would like to finish the introduction by mentioning two results that will be needed later.
\begin{Proposition}[Kunz, 3.2 in~\cite{K1} and 3.9 in~\cite{K2}]
\label{kunz}
Let $\text{$(R,\fm,k)$ } \to \text{$(S,\fn,k)$}$ be a flat local homomorphism of Noetherian rings of characteristic $p$ such that $S/{\mathfrak m} S$ is regular.
\item
$(1)$ If $x$ is part of a system of parameters on $R$ then $\operatorname{e}_{HK} (R) \leq \operatorname{e}_{HK}(R/xR)$.
\item
$(2)$ $\operatorname{e}_{HK}(R) = \operatorname{e}_{HK}(S)$.
\end{Proposition}
We should note that Watanabe and Yoshida (\cite{WY1}) gave an alternate proof of (1) under the assumption that $x$ is a nonzerodivisor on $R$.
An element $f \in A[[t]]$ over a local ring $\text{$(A,\fm)$}$ is called a ${\it distinguished \ polynomial}$ if $f = a_0 + a_1 t + \cdots + a_{n-1} t^{n-1}+ t^n$, for some integer $n$ and $a_i \in {\mathfrak m}$ for $0 \leq i \leq n-1$.
In what follows we will need the following classical result:
\begin{Theorem} [Weierstrass Preparation Theorem,~\cite{G}]
Let $\text{$(A,\fm)$}$ be a complete local ring and let $B=A[[t]]$. If $f= \sum_{i=0}^{\infty} a_i t^i \in B$ and if there exists $n \in \mathbf{N}$ such that $a_i \in {\mathfrak m} $ for all $i < n$ and $a_n \notin {\mathfrak m}$, then $f = u f_o$ where $u$ is a unit in $B$ and $f_o$ is a distinguished polynomial of degree $n$. Also, $u$ and $f_o$ are uniquely determined by $f$.
\end{Theorem}
We would like to thank Paul C.~Roberts for valuable advice with regard to this paper. We are grateful to the referee for helpful comments that enhanced our exposition. In particular, Lemma~\ref{claim} was suggested by the referee.
Also, Ian Aberbach and C\u at\u alin Ciuperc\u a have informed us that they have obtained Theorem~\ref{main} independently. While their methods do not use the dense upper-semicontinuity of the Hilbert-Kunz multiplicity, they resemble ours in spirit.
\section{Dense upper-semicontinuity of the Hilbert-Kunz Multiplicity}
Let $R$ be an equidimensional ring of characteristic $p >0$ such that $R$ is finite over $R^p$, i.e. $R$ is $F$-finite. Kunz has shown that if $R$ is $F$-finite, then $R$ is excellent.
We would like to discuss here several aspects of the Hilbert-Kunz multiplicity.
E.~Kunz has shown that the function $f_e : \operatorname{Spec}(R) \to \mathbf{Q}$ where $$f_e(P) = \operatorname{\lambda} (R_P/ P^{[p^e]}R_P) / p^{e \operatorname{height}(P)}$$ is upper semi-continuous on $\operatorname{Spec}(R)$ (Corollary 3.4 in~\cite{K2}).
\begin{Definition}
Let $\operatorname{e}_{HK} : \operatorname{Spec}(R) \to \mathbf{R}$, defined by $$\operatorname{e}_{HK}(P) : = \operatorname{e}_{HK}(PR_P, R_P).$$ We caution the reader that, although one can talk about the Hilbert-Kunz multiplicity of an ideal primary to the maximal ideal in a local ring, the notation just introduced will always refer to the Hilbert-Kunz multiplicity of a local ring, $R_P$, at its maximal ideal.
Clearly, $\operatorname{e}_{HK}(P) = \lim_{e \to \infty} f_e(P)$.
\end{Definition}
\begin{Question}
Is $\operatorname{e}_{HK}$ an upper semi-continuous function on $\operatorname{Spec}(R)$?
\end{Question}
It is known that $\operatorname{e}_{HK}(P) \leq \operatorname{e}_{HK}(Q)$ if $P \subset Q$ are prime ideals in $R$ (Proposition 3.3 in~\cite{K2}). However, this does not immediately imply that $\operatorname{e}_{HK}$ is upper semi-continuous.
\begin{Definition}
Let $T$ be a topological space. A function $f : T \to \mathbf{R}$ is called dense upper semi-continuous if for every $x$ in $T$ one can find a dense subset $U$ of $T$ containing $x$ such that $f(y) \leq f(x)$ for every $y \in U$.
\end{Definition}
We would like to introduce some more definitions before stating our next result. In what follows, by a variety, we always mean an irreducible, reduced scheme defined over an algebraically closed field. For a linear system $\Gamma$ (complete or not) on a variety $X$ we can define a rational map $\phi_{\Gamma} : X \dasharrow \mathbf{P}^N$ by sending $x \in X$ to $[s_0 (x): \cdots : s_N(x)]$, where the $s_i$ form a $K$-basis of the system. $\Gamma$ is said to be composed of a pencil if the image of this map is one-dimensional.
\begin{Lemma}[First Theorem of Bertini,~\cite{FOV}, Theorem 3.4.10 ]
Let $X$ be a variety over $K$ and let $\Gamma$ be a linear system which is not composed of a pencil such that its base locus has codimension at least $2$. Then the generic member of $\Gamma$ is irreducible.
\end{Lemma}
\begin{Corollary}
Let $X$ be an $n$-dimensional variety over $K$. Then for every $x, y$ in $X$ there is an irreducible curve $C$ that passes through $x$ and $y$.
\end{Corollary}
\begin{proof}
If $X$ is a curve then there is nothing to prove. Assume that $\dim X \geq 2$.
Consider the linear system $\Gamma$ consisting of all the hyperplane sections that pass through $x$ and $y$. Then by Bertini there is an irreducible member $X_1 \in \Gamma$ such that $x, y \in X_1$. Take the reduced structure of $X_1$ so that it is a variety, denoted by $(X_1)_{red}$. Again apply Bertini to $(X_1)_{red}$ to get an irreducible $X_2$ chosen from the linear system consisting of all the hyperplanes passing through $x, y$ in $(X_1)_{red}$. Repeating this procedure, we obtain a chain of closed subvarieties, say
$$X \supseteq (X_1)_{red} \supseteq \cdots \supseteq (X_{n-1})_{red}$$
such that $(X_{n-1})_{red}$ is one-dimensional, irreducible, and contains $x, y$.
Hence $(X_{n-1})_{red}$ is our desired curve.
\end{proof}
\begin{Theorem}
Let $K$ be an uncountable algebraically closed field and $R$ a finitely generated $K$-algebra which is equi-dimensional. Let $\operatorname{Sing}(R) \subset \operatorname{Max}(R)$ be the singular locus. Then $\operatorname{e}_{HK} : \operatorname{Max}(R) \to \mathbf{R}$ is dense upper semi-continuous on each component of $\operatorname{Max}(R)$. In particular, $\operatorname{e}_{HK} : \operatorname{Max}(R) \to \mathbf{R}$ is dense upper semi-continuous on each irreducible component of $\operatorname{Sing}(R)$.
\end{Theorem}
\begin{proof}
$R$ is an excellent ring and hence the regular locus of $R$ is open.
The case when $R$ is a domain goes as follows: the regular locus is non-empty (the zero ideal is in it) and, for each maximal ideal $Q$, one can take $\Lambda = \operatorname{Reg}(R) \cup \{ Q \}$. This is a dense set, and $\operatorname{e}_{HK} (P) = 1 \leq \operatorname{e}_{HK} (Q)$ for every regular $P \in \Lambda$.
Now if $R$ is not a domain (and in particular if the regular locus happens to be empty) we have to argue differently:
We know that for every $e$ and every $Q$ there exists an open set $\Lambda_e$ containing $Q$ such that $f_e (P) \leq f_e (Q)$ for every $P \in \Lambda_e$ (see Corollary 3.4 in \cite{K1}).
We will take $\Lambda: = \cap _e \Lambda_e$ and show that $\Lambda$ is dense.
In the following, since we work on one component of $\operatorname{Max}(R)$, we may assume that $\operatorname{Max}(R)$ is irreducible but possibly non-reduced.
We need to show that, for every $x \in \operatorname{Max}(R)$ and every open set $U$ containing $x$, we have $U \cap \Lambda \neq \emptyset$; in other words, $U \cap \bigcap_e \Lambda_e \neq \emptyset$. By the Corollary applied to $\operatorname{Max}(R)_{red}$, there is an irreducible curve $C$ that passes through $x$ and $Q$; set $\lambda_e = C \cap \Lambda_e$. Each $\lambda_e$ is open in $C$ and hence is the complement of a finite set.
The set $U \cap C$ is open in $C$ and contains $x$; we claim that $(U \cap C) \cap \bigcap_e \lambda_e \neq \emptyset$. Otherwise, $U \cap C$ would be contained in the union of the complements of the $\lambda_e$, which is a countable set. But $U \cap C$ is a non-empty open subset of the curve $C$ and hence uncountable, since $K$ is uncountable; this is a contradiction.
We have shown that $(U \cap C) \cap \bigcap_e \lambda_e \neq \emptyset$, which implies that $U \cap \bigcap_e \Lambda_e \neq \emptyset$ as well.
The second statement follows from a similar argument, applying Bertini to each irreducible component of $\operatorname{Sing}(R)_{red}$.
\end{proof}
Let $R_o =k[[x_1,...,x_n]]/(f)$ be an $(n-1)$-dimensional hypersurface ring and define an $n$-dimensional hypersurface ring $R= k[[x_1,...,x_n]][t]/(f+tg)$, where $g$ is a formal power series with $g \neq 0, g(0) =0, g \notin k \cdot f$. Obviously, $t$ is a nonzerodivisor on $R$.
In this section, we would like to study the behavior of the Hilbert-Kunz multiplicity of the fibers of the natural homomorphism $k[t] \to R= k[[x_1,...,x_n]][t]/(f+tg)$. We will assume that $k$ is uncountable and algebraically closed, so that all the maximal ideals of $k[t]$ are of the form $(t-\alpha)$, with $\alpha \in k$.
Let $t_{\alpha}= t-\alpha$. Note that $R/(t_{\alpha})$ is a local ring isomorphic to $R_{\alpha} =k[[x_1,...,x_n]]/(f + \alpha g)$, which is an $(n-1)$-dimensional hypersurface; in particular, $t_\alpha$ is a nonzerodivisor on $R$ for every $\alpha \in k$. We would also like to note that every maximal ideal of $R$ is of the form ${\mathfrak m}_{\alpha} = (x_1,...,x_n,t-\alpha)$ with $\alpha \in k$.
\begin{Theorem}
\label{sc-hyp}
Assume that we are in the situation described above.
One can find a dense subset $\Lambda \subset k$ such that, for every $\alpha \in \Lambda$, $$\operatorname{e}_{HK}( (R/t_{\alpha}R)_{{\mathfrak m} _{\alpha}}) = \operatorname{e}_{HK}\left(\frac{k[[x_1,...,x_n]]}{(f + \alpha g)}\right) \leq \operatorname{e}_{HK} ((R/tR)_{{\mathfrak m} _0}) = \operatorname{e}_{HK}\left( \frac{k[[x_1,...,x_n]]}{(f)}\right),$$
where ${\mathfrak m}_0 = (x_1,...,x_n,t)$.
\end{Theorem}
\begin{proof}
As remarked earlier, $R/t_{\alpha}R$ is already local with maximal ideal ${\mathfrak m} _{\alpha}$.
If $\text{$(A,\fm)$}$ is a local ring of dimension $d$, then $\operatorname{e}_{HK} (A) = \lim _{q \to \infty} \operatorname{\lambda} (A/ {\mathfrak m} ^{[q]}) / q^d$. Since $R/t_{\alpha}R$ and $R/tR$ have the same dimension, to prove the inequality in the statement it suffices to prove the corresponding inequality between the lengths.
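For orientation, we recall how this limit behaves in the simplest singular example; the following computation is standard and included only for illustration. For the node $R = k[[x,y]]/(xy)$, of dimension $d = 1$, the images of the monomials $1, x, \ldots, x^{q-1}, y, \ldots, y^{q-1}$ form a $k$-basis of $R/{\mathfrak m}^{[q]}$, so

```latex
$$ \operatorname{\lambda}\left( \frac{k[[x,y]]}{(xy,\, x^q,\, y^q)} \right) = 2q - 1,
\qquad
\operatorname{e}_{HK}\bigl( k[[x,y]]/(xy) \bigr) = \lim_{q \to \infty} \frac{2q-1}{q} = 2, $$
```

consistent with the general fact that $\operatorname{e}_{HK} > 1$ for singular rings.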
Let us observe that, for every $\alpha$, $R / ({\mathfrak m} _{\alpha} ^{[q]} + t_{\alpha})R = (R/ (x_1,...,x_n)^{[q]}) \otimes _{k[t]} k[t]/(t_{\alpha})$.
Moreover, let $A= R/ (x_1,...,x_n)^{[q]}$ and note that this is a finitely generated module over $k[t]$. So, if we localize at the multiplicative set $k[t] \setminus (t_\alpha)$ we get that $A_{(t_\alpha)}$ is a finitely generated module over $k[t]_{(t_\alpha)}$. Moreover, $A/(t_\alpha)$ is already local and we have that
$A/(t_\alpha) \simeq (A/(t_\alpha)) _{(t_\alpha)}$.
Since $k$ is algebraically closed, $\operatorname{\lambda} (R / ({\mathfrak m} _{\alpha} ^{[q]} + t_{\alpha})R)$ equals the dimension of the $k$-vector space $R / ({\mathfrak m} _{\alpha} ^{[q]} + t_{\alpha})R = A/(t_\alpha)$. This, by Nakayama's lemma, equals the minimal number of generators of $(R/ (x_1,...,x_n)^{[q]}) _{(t_{\alpha})} = A_{(t_\alpha)}$ over $k[t] _{(t_{\alpha})}$.
So, if we start with a set of minimal generators of $A _{(t)}$ over $k[t]_{(t)}$ we can find an open set $\Lambda_q$ in $k$, containing $0$, where we can extend these generators.
Let $\Lambda = \cap _{q} \Lambda_q$. Since $k$ is uncountable and the complements of $\Lambda_q$ are all finite we see that $\Lambda$ must be an uncountable set and hence dense in $k$ in the Zariski topology.
For all $\alpha \in \Lambda$ we have that, for all $q$,
$$ \operatorname{\lambda} (R / ({\mathfrak m} _{\alpha} ^{[q]} + t_{\alpha})R) \leq \operatorname{\lambda} (R / ({\mathfrak m} _{0} ^{[q]} + t_{0})R), $$
and this gives the inequality that we want.
\end{proof}
We would like to close this section by discussing an example by Monsky that shows that one cannot hope to replace dense upper semi-continuity by upper semi-continuity in Theorem~\ref{sc-hyp}.
First we would like to recall Monsky's example (\cite{M}):
\begin{Theorem}[Monsky]
Let $k$ be an algebraically closed field of characteristic $2$ and $R_\alpha=k[[x,y,z]]/(f+\alpha g)$, where $f = z^4+xyz^2+(x^3+y^3)z$, $g=x^2y^2$ and $0 \neq \alpha \in k$.
Then $\operatorname{e}_{HK} (R_\alpha) = 3+ 4^{-m_\alpha}$, where $m_\alpha$ is computed as follows. Write $\alpha = \beta ^2 + \beta$ with $\beta \in k$.
$(1)$ If $\alpha$ is algebraic over $\mathbf{Z}/2\mathbf{Z}$, then $m_\alpha$ is the degree of $\beta$ over $\mathbf{Z}/2\mathbf{Z}$.

$(2)$ If $\alpha$ is not algebraic over $\mathbf{Z}/2\mathbf{Z}$, then $m_\alpha = \infty$.
\end{Theorem}
We would like to consider the case when $k$ is the algebraic closure of $(\mathbf{Z}/2\mathbf{Z}) (w)$, where $w$ is an indeterminate. Let $R = k[[x,y,z,t]]/(f + tg)$. We see that $R_\alpha = R/ (t-\alpha)$, where $\alpha \in k$.
We would like to show that $\operatorname{e}_{HK}$ is not necessarily upper semi-continuous in fibers over $k[t]$. More precisely, we will find $\alpha_0 \in k$ such that there exists no open subset $U$ of $k$ containing $\alpha_0$ with $\operatorname{e}_{HK} (R_\alpha) \leq \operatorname{e}_{HK}(R_{\alpha_0})$ for every $\alpha \in U$. If such a $U$ existed, then $\operatorname{e}_{HK} (R_\alpha) > \operatorname{e}_{HK}(R_{\alpha_0})$ could hold for only finitely many $\alpha$, since nonempty open subsets of $k$ are cofinite. Take $\alpha_0 = w$; then $\operatorname{e}_{HK}(R_{\alpha_0}) = 3$, because $w$ is not algebraic over $\mathbf{Z}/2\mathbf{Z}$. However, there are infinitely many elements $\alpha \in k$ that are algebraic over $\mathbf{Z}/2\mathbf{Z}$, and $\operatorname{e}_{HK}(R_\alpha) > 3$ for all such $\alpha$.
In conclusion, this example shows that if one wants to study the upper semi-continuity of the Hilbert-Kunz multiplicity of the fibers of $k[t] \to R$, a weaker notion of upper semi-continuity must be considered. One such notion is ours, which replaces open sets by dense sets.
In what follows we will show how this notion can be exploited to prove a conjecture of Watanabe and Yoshida on the minimal Hilbert-Kunz multiplicity of non-regular rings.
\section{Minimal Hilbert-Kunz multiplicity: the hypersurface case}
\begin{Lemma}
\label{claim}
Let $k$ be a field such that $1/2 \in k$ and put $A=k[[x_1,...,x_d]]$. Consider $B = A[[x_0]]$
and $F = x_0^2 + \cdots + x_d^2 +G$ with $G \in m_B^3$, where $m_B$ is the maximal ideal of $B$.
Then there exist a unit $v_0$ in $B$, $a_0 \in (x_1,...,x_d)B$ and $G_1 \in (x_1,...,x_d)^3B$ such that
$$F = v_0(x_0+a_0)^2+x_1^2+ \cdots + x_d^2 +G_1.$$
\end{Lemma}
\begin{proof}
Write
$$ G = \sum_{i=0}^{\infty} c_i x_0^i,$$ where $c_i \in A$, with $c_0 \in m_A^3$, $c_1 \in m_A^2$ and $c_2 \in m_A$, $m_A$ being the maximal ideal of $A$.
Let $v_0 = (1+c_2) + \sum_{i=1}^{\infty} c_{i+2}x_0^i$ and note that this is a unit in $B$.
Moreover,
$$F = v_0x_0^2+c_1x_0+c_0+x_1^2+ \cdots+x_d^2.$$
Now, let $a_0 = 2^{-1}v_0^{-1}c_1$ and $G_1=c_0-v_0a_0^2$ and note that the conclusion of the Lemma follows.
\end{proof}
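To illustrate the completing-the-square step in a small case (this computation is purely illustrative and assumes $1/2 \in k$), take $d = 1$ and $F = x_0^2 + x_1^2 + x_0 x_1^2$, so that $G = x_0 x_1^2$, $c_0 = 0$, $c_1 = x_1^2$, $c_2 = 0$ and $v_0 = 1$. Then $a_0 = \tfrac{1}{2} x_1^2$ and

```latex
$$ F = \left( x_0 + \tfrac{1}{2} x_1^2 \right)^2 + x_1^2 - \tfrac{1}{4} x_1^4,
\qquad
G_1 = -\tfrac{1}{4} x_1^4 \in (x_1)^3 B, $$
```

in accordance with the conclusion of the Lemma.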
\begin{Theorem}
\label{hypersurface}
For any $d$-dimensional singular hypersurface $R = k[[x_0,...,x_d]]/(f)$ over an uncountable algebraically closed field $k$ of characteristic different from $2$, we have that
$$\operatorname{e}_{HK}(k[[x_0,...,x_d]]/(\sum_{i=0}^{d} x_i^2)) \leq \operatorname{e}_{HK}(R).$$
\end{Theorem}
\begin{proof}
We can write $f = \sum_{i=0}^{\infty} f_i$, where each $f_i$ is a homogeneous polynomial of degree $i$ and $f_0=f_1 =0$, since $R$ is singular.
Since the characteristic of $k$ is different from $2$, we can make a change of variables so that $f_2 = \sum_{i=0}^{l} x_i^2$ for some $-1 \leq l \leq d$, where $l =-1$ means that $f_2 =0$.
Let us take $g_\alpha : = \alpha (x_{l+1}^2 + \cdots + x_d ^2)$ with $\alpha \in k$. By Theorem~\ref{sc-hyp}, the Hilbert-Kunz multiplicity of $f$ is greater than or equal to that of $F_\alpha = f+ g_\alpha$ for a dense set of $\alpha$'s in $k$. For $\alpha \neq 0$ we can rescale our indeterminates and assume that $F_\alpha = x_0^2+ \cdots + x_d^2 + G$, where $G$ contains only terms of degree at least $3$.
Apply Lemma~\ref{claim} to $F_\alpha$ and write $F_{\alpha} = v_0(x_0+a_0)^2+x_1^2+\cdots+x_d^2+G_1$, with $G_1$ an element of $(x_1,...,x_d)^3.$ We can continue now with $x_1^2+\cdots+x_d^2+G_1$ and by applying Lemma~\ref{claim} recursively we see that eventually we can write $F_\alpha = \sum_{i=0}^{d} v_i x_i^2$, where $v_i$ are all units, after a suitable change of variables.
Since we are working over an algebraically closed field of characteristic different from $2$, we can find units $w_i$ in $k[[x_0,...,x_d]]$ such that $w_i^2 = v_i$ (see Lemma~\ref{powers}). This allows us to transform $F_\alpha$ isomorphically into $\sum_{i=0}^d x_i^2$.
In conclusion, we get that
$$\operatorname{e}_{HK}(k[[x_0,...,x_d]]/(\sum_{i=0}^{d} x_i^2)) \leq \operatorname{e}_{HK}(R).$$
\end{proof}
\begin{Lemma}
\label{powers}
If $A$ is a ring with $1/2 \in A$ and $f = \sum u_i x^i$ is a formal power series in $A[[x]]$ whose constant term $u_0$ is a unit in $A$ admitting a square root in $A$, then we can find $g \in A[[x]]$ such that $g^2 =f$. In particular, if $f \in k[[x_0,...,x_d]]$ is a unit and $k$ is algebraically closed of characteristic different from $2$, then there exists $g \in k[[x_0,...,x_d]]$ such that $g^2 =f$.
\end{Lemma}
\begin{proof}
The first statement amounts to solving a system of equations where the unknowns are the coefficients of $g$.
The second statement reduces to the first by induction on $d$, thinking of $f \in A[[x_d]]$ where $A=k[[x_0,...,x_{d-1}]]$: since $f$ is a unit, its constant term (when thinking of it as a power series in $x_d$ only) is a unit in $A$ and, by induction, has a square root in $A$. Applying the first statement, we can find a power series $g \in A[[x_d]]=k[[x_0,...,x_d]]$ such that $g^2 =f$.
\end{proof}
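The system of equations in the first statement is triangular; the following spells out the recursion, a routine elaboration of the proof above. Writing $g = \sum_i g_i x^i$ and comparing coefficients in $g^2 = f$ gives

```latex
$$ g_0^2 = u_0, \qquad 2 g_0 g_1 = u_1, \qquad 2 g_0 g_2 + g_1^2 = u_2, \qquad \ldots $$
```

so $g_0$ is a chosen square root of $u_0$, and each subsequent coefficient is obtained by dividing by the unit $2g_0$. For instance, for $f = 1 + x$ this recovers the familiar expansion $g = 1 + \tfrac{1}{2} x - \tfrac{1}{8} x^2 + \cdots$.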
Using an argument similar to the one in the proof of Theorem~\ref{hypersurface}, one can show the following:
\begin{Theorem} Let $(R,{\mathfrak m},k)$ be a $d$-dimensional singular hypersurface complete local ring of characteristic $p>0$ with $p \ne 2,3$. Then one of the following is true.
$(1)$ $R \cong k[[x_{0},...,x_{d}]]/(\sum_{i=0}^{d} x^{2}_{i})$, or

$(2)$ $\operatorname{e}_{HK}(R) \ge \operatorname{e}_{HK}(k[[x_{0},...,x_{d}]]/(x^{2}_{0}+\cdots+x^{2}_{d-1}+x^{3}_{d})).$
\end{Theorem}
\begin{proof}
Suppose that $R$ is defined by some $f \in k[[x_{0},...,x_{d}]]$.
Assume $(1)$ is not the case. Then, as in the proof of Theorem~\ref{hypersurface}, we can make a change of variables so that $f_{2}=\sum_{i=0}^{l}x^{2}_{i}$ in the homogeneous decomposition $f=\sum_{i=0}^{\infty}f_{i}$ of $f$. Since $(1)$ is not the case, we have $l<d$.
Let us take $g_{\alpha}:=\alpha (x^{2}_{l+1}+\cdots+x^{2}_{d-1}+x^{3}_{d})$ with $\alpha\in k$. Then $F_{\alpha}:=f+g_{\alpha}$ is of the form $x^{2}_{0}+\cdots+x^2_l+ \alpha x^{2}_{l+1}+\cdots+\alpha x^{3}_{d}+G$ for $\alpha \ne 0$, where $G$ contains only terms of degree greater than 2.
Now we can follow the proof of Theorem~\ref{hypersurface} without any change to obtain $F_{\alpha}=v_{0}x^{2}_{0}+\cdots+v_{d-1}x^{2}_{d-1}+v_{d}x^{3}_{d}$, where the $v_{i}$ are all units. Since $k$ is algebraically closed of characteristic different from $2$ and $3$, we can apply Lemma~\ref{powers} to solve the system of equations in $w_{i}$: $w^{2}_{0}=v_{0}$, ..., $w^{2}_{d-1}=v_{d-1}$, and $w^{3}_{d}=v_{d}$ (this is where $p \ne 3$ is used). Therefore $F_{\alpha}$ can be transformed isomorphically into $x^{2}_{0}+\cdots+x^{2}_{d-1}+x^{3}_{d}$.
By dense upper semi-continuity, we get that
$$\operatorname{e}_{HK}(R) \ge \operatorname{e}_{HK}(k[[x_{0},...,x_{d}]]/(x^{2}_{0}+\cdots+x^{2}_{d-1}+x^{3}_{d})).$$
\end{proof}
Much has been learned about the Hilbert-Kunz multiplicity in Noetherian rings by comparing it to the more classical notion of Hilbert-Samuel multiplicity. Indeed, in many instances these two multiplicities behave similarly.
A natural way of approaching the conjecture of Watanabe and Yoshida is to show that for any equidimensional local ring $R$ there is a hypersurface $S$ of same dimension such that $\operatorname{e}_{HK} (S) \leq \operatorname{e}_{HK}(R)$. A well-known result on the Hilbert-Samuel multiplicity says that for every ring $R$ of dimension $d$ one can naturally construct, through Noether normalization, a $d$-dimensional hypersurface $S$ such that $\operatorname{e}(R) = \operatorname{e}(S)$. In this section, we will show that, for such an $S$, $\operatorname{e}_{HK}(S)$ will turn out to be greater than $\operatorname{e}_{HK}(R)$ in many instances.
We would like to outline this construction in a specific example.
Let $\text{$(R,\fm,k)$ }$ be the ring obtained by killing the $2 \times 3$-minors of a generic matrix, say $R = k[[x,y,z,u,v,w]]/(xv-uy,yw-vz,xw-uz)$. This ring is Cohen-Macaulay of dimension $4$, with $x, u-y, z-v, w$ a system of parameters. In fact, $R$ is $F$-regular.
Let $A= k[[x, u-y, z-v, w]] \subset R$ be a Noether normalization. For computational purposes, let $a= u-y, b = z-v$. With this change of variables $A= k[[x,a,b,w]] \subset R = k[[x,a,b,w,y,v]]/(y^2-xv+ay,yw-vb -v^2, xw-ab-yv-av-yb)$. Note that $Q(A) \subset Q(R)$ is a simple field extension generated by $y$. Indeed, $v = \frac{1}{x} (y^2 + ay)$.
Look now at $A[[y]] \to R$. The kernel of this map is a principal ideal generated by some $f$; hence we have constructed a hypersurface $\text{$(S,\fn,k)$}$ inside $R$. It is known that $\operatorname{e}(S) = \operatorname{e}(R)$. We would like to compare the Hilbert-Kunz multiplicities of $R$ and $S$.
Since $R$ is finite over $S$, we have that $\operatorname{e}_{HK}({\mathfrak n}, S) = \operatorname{e}_{HK}({\mathfrak n} R, R)/ r$, where $r$ is the rank of $Q(R)$ over $Q(S)$ (by Theorem 2.7 in~\cite{WY1}). But $Q(S) = Q(R)$ and so $r=1$. We can also note that ${\mathfrak n} R \subset {\mathfrak m} $, which implies that $\operatorname{e}_{HK} ({\mathfrak n} R, R) \geq \operatorname{e}_{HK} ({\mathfrak m}, R) = \operatorname{e}_{HK}(R)$. Moreover, $R$ is $F$-regular and so $ {\mathfrak n} R = ({\mathfrak n} R)^{*} \neq {\mathfrak m}$, which shows that $\operatorname{e}_{HK} (S) > \operatorname{e}_{HK}(R)$. (As the referee points out, the reader can note that $\operatorname{e}_{HK}(R) = 13/8$ by applying the results of Section 5 in~\cite{WY3}.)
Examples like this are likely to abound. We have only used that $R$ is $F$-regular and that the finite extension $S \operatorname{\hookrightarrow} R$ has rank $1$.
\section{Complete intersections}
In this section, we give an affirmative answer to part (1) of Conjecture~\ref{conjecture} in the case of complete intersections. We do this by reducing the study of complete intersections to that of hypersurfaces, the case solved in the previous section.
We first state a prime avoidance result that will be used later in the section (\cite{Ei}, Exercise 3.19).
\begin{Lemma}[Prime Avoidance]
\label{prime}
Suppose that $R$ is a ring containing a field $k$, and let $I_1,...,I_m$ be ideals.
If $f_1,...,f_n \in R$ are such that $(f_1,...,f_n) \nsubseteq I_i$
for each $i$,
then there exists a nonzero homogeneous polynomial $H(Z_1,...Z_n) \in k[Z_1,...,Z_n]$ such that
$$\sum_{i=1}^{n}a_if_i \notin \bigcup_i I_{i}$$
for all $(a_1,...,a_n) \in k^n$ with $H(a_1,...,a_n) \ne 0$.
\end{Lemma}
The Lemma will be used in the proof of the following
\begin{Proposition}
\label{sc-ci}
Let $k$ be an uncountable algebraically closed field of characteristic $p >0$. Let $A = k[[X_1,...,X_n]]$, let $\tilde R : = A/(f_1,...,f_l)$ be a complete intersection ring, and let $f,g \in A$ form a regular sequence on $\tilde R$. Let $0 \neq h \in \tilde R$.
Then there exists a dense subset $V \subset k $ such that $ah +f, g$ form a regular sequence on $\tilde R$ and
$$ \operatorname{e}_{HK} (\tilde R/ (f,g) ) \geq \operatorname{e}_{HK} (\tilde R/ (a h+f, g)),$$
for all $a \in V$.
\end{Proposition}
\begin{proof}
Since $f,g$ form a regular sequence on $\tilde R$, we note that $(h,f) \not\subseteq P$ for every associated prime $P$ of $\tilde R/ (g)$. Hence, we can find a nonzero homogeneous polynomial $H(Z_1,Z_2)$ such that $$a h +f \notin P$$
for every $P$ associated prime of $\tilde R/ (g)$ and every $a$ in the open non-empty subset $U: = \{ a \in k: H(a,1) \neq 0 \}$. That is, $ah +f$ and $ g$ form a regular sequence on $\tilde R$. Let us consider the natural ring homomorphism
$$k [t] \to \tilde R [t] / (th+f, g).$$
The fiber over each $a \in U$ is of dimension $n-l-2$. As in the proof of
Theorem~\ref{sc-hyp} we can find a dense subset $V$ in $U$ such that
$$\operatorname{e}_{HK} (\tilde R / (f, g)) \geq \operatorname{e}_{HK} (\tilde R/ (ah+f, g)),$$
for all $a \in V$.
\end{proof}
\begin{Theorem}
\label{ci}
Let $(R,{\mathfrak m},k)$ be a non-regular complete intersection whose residue field is
an uncountable algebraically closed field of characteristic $p>0$.
Then there exists a non-regular hypersurface $k[[X_1,...,X_{d+1}]]/(F)$ such that
$$\operatorname{e}_{HK}(k[[X_1,...,X_{d+1}]]/(F)) \le \operatorname{e}_{HK}(R).$$
\end{Theorem}
\vspace{0.3cm}
\begin{proof}
Let $R$ be a non-regular complete intersection of dimension $d$.
Passing to the completion, which does not change the Hilbert-Kunz multiplicity, we may assume that $R$ is isomorphic to
$$k[[X_1,...,X_{d+e}]]/(f_1,...,f_e),$$
where $(f_1,...,f_e)$ is a regular sequence.
\vspace{0.2cm}
($e=1$): In this case $R$ is already a hypersurface, so we are done.
\vspace{0.2cm}
($e>1$): We give a proof by induction on the length of the regular sequence.
The idea is to work on the regular sequence itself: in each step, we produce
another regular sequence whose corresponding residue ring is of dimension $d$, non-regular, and has Hilbert-Kunz multiplicity
smaller than or equal to that of the residue ring corresponding to the regular sequence obtained in the previous step.
First of all,
we will apply the following procedures to the ring $R$.
\vspace{0.2cm}
(1): Suppose that some $f_i$ ($1 \le i \le e$) defines a regular hypersurface ring. Then, by Cohen's structure theorem, there is an isomorphism $$k[[Y_1,...,Y_{d+e-1}]] \cong k[[X_1,...,X_{d+e}]]/(f_i),$$ where $k[[Y_1,...,Y_{d+e-1}]]$ is a power series ring.
Then there is an isomorphism $$k[[Y_1,...,Y_{d+e-1}]]/(f'_1,...,f'_{i-1},f'_{i+1},...,f'_e) \cong k[[X_1,...,X_{d+e}]]/(f_1,...,f_e),$$
where $f'_j$ is the inverse image of $f_j$. Note that $(f'_1,...,f'_{i-1},f'_{i+1},...,f'_e)$ is generated by a regular sequence of length $e-1$.
Repeating this procedure, we can shrink the length of the regular sequence as much as possible, and therefore we may assume that none of the $f_i$'s defines a regular hypersurface.
\vspace{0.2cm}
(2): After (1) is done, by making a linear change of $X_1,...,X_{d+e}$, we can assume that each $f_i$ contains a term $c_iX_{1}^{t_i}$ with $0\ne c_i\in k$, and that the order of $f_i$ is equal to $t_i$, for each $i$.
The coefficient of $X_1^{t_i}$ in $f_i$ is of the form $c_i+m_i$ with $m_i$ in the maximal ideal of $k[[X_2,...,X_{d+e}]]$.
Then, by the Weierstrass preparation theorem, each $f_i$ can be written uniquely in the form $$f_i=u_i(X_1^{t_i}+a_{t_i-1}X_1^{t_i-1}+\cdots+a_0),$$ where $u_i$ is a unit, and the $a_j$ are in the maximal ideal of $k[[X_2,...,X_{d+e}]]$.
\vspace{0.3cm}
Since we are only concerned with the ideals generated, we may ignore the units $u_i$;
hence we may put $$f_i=X_1^{t_i}+a_{t_i-1}X_1^{t_i-1}+\cdots+a_0, \qquad R:=k[[X_1,...,X_{d+e}]]/(f_1,...,f_{e}).$$
To apply the induction step, let us prove the following proposition.
\begin{Proposition}
\label{udsc}
Let $\tilde R:=k[[X_1,...,X_n]]/(f_1,...,f_l)$ be a complete intersection and $f$, $g$ be elements of $A:=k[[X_1,...,X_n]]$ that form a regular sequence on $\tilde R$.
Assume that both $A/(f)$ and $A/(g)$ are non-regular, and that $f$, $g$ are distinguished polynomials with respect to $X_1$, that is, they can be written as $f=X_1^{t}+a_{t-1}X_1^{t-1}+\cdots+a_0$, $g=X_1^{s}+b_{s-1}X_1^{s-1}+\cdots+b_0$,
where $a_i$, $b_i$ are in the maximal ideal of $k[[X_2,..,X_n]]$.
Then there exists a regular sequence $f', g'\in k[[X_1,...,X_n]]$ on $\tilde R$ such that
$$ \operatorname{e}_{HK}(\tilde R/(f,g)) \ge \operatorname{e}_{HK}(\tilde R/(f',g')),$$
and such that the following holds:
$f'$ (or $g'$) contains a linear term in $X_1$; that is, $f' = u' X_1 + v'$ with
$u'$ a unit in $\tilde{R}$ and $v' \in k[[X_2,..., X_n]]$.
Moreover, one can arrange that $\tilde R/(f', g')$ is non-regular.
\end{Proposition}
\begin{Remark}
By Kunz (Proposition~\ref{kunz}), we note that $e_{HK}(\tilde R/(f)), e_{HK}(\tilde R/(g)) \le e_{HK}(\tilde R/(f, g))$; hence $\tilde R/(f, g)$ is also non-regular. In the same manner, if one of $f'$ and $g'$ defines a non-regular hypersurface, then $\tilde R/(f', g')$ is also non-regular.
\end{Remark}
\begin{proof}[Proof of the Proposition]
The plan is to start with the ideal $(f,g)$ in $\tilde{R}$ and perform transformations on $f$ or $g$ to decrease the degree of $X_1$ in either $f$ or $g$ until we come to one of the cases described below.
The first step is natural and easy to describe: Without loss of generality, we may assume $t \ge s$.
Then $F':=f-X_1^{t-s}g$ has $deg_{X_1}(F') < t$, where $deg_{X_1}$ denotes the degree with respect to $X_1$.
So we have $(f,g)=(F',g)$ as ideals.
Since every $a_i$ and $b_i$ is in the maximal ideal, the top coefficient of $F'$ is also in the maximal ideal.
We see that $F', g$ is a regular sequence by the vanishing of Koszul homology.
Let us put $t':=deg_{X_1}(F')$, $s':=deg_{X_1}(g)$, and $G':=g$. So starting with $f, g$, we obtained $F', G'$.
This first step is an instance of the general procedure described next:
We have two elements $F, G \in k[[X_1,...,X_n]]$ whose images form a regular sequence on $\tilde R$ such that
$$ \operatorname{e}_{HK}(\tilde R/(f,g)) \ge \operatorname{e}_{HK}(\tilde R/(F,G)),$$ and at least one of them, say $F$, has leading term in $X_1$ of the form $u X_1^{t}$, with $u$ a unit in $\tilde{R}$.
We would like to show that one can construct $F', G'$ such that
$$ \operatorname{e}_{HK}(\tilde R/(F,G)) \ge \operatorname{e}_{HK}(\tilde R/(F',G')),$$ and
$deg_{X_1}(F)+deg_{X_1}(G) > deg_{X_1}(F')+deg_{X_1}(G')$, such that either $F'$ (or $G'$) has the leading term in $X_1$ of the form $u' X_1^{t'}$ (or $u' X_1^{s'}$) with $u'$ a unit.
The first step described above is a particular case of the general procedure if one takes $F:=f, G:=g$.
Let us explain now how to make $F', G'$ from the given $F, G$. Let $deg_{X_1}(F) =t $ and $deg_{X_1}(G) =s$ and as above $F = u X_1^t + \cdots$, with $u$ a unit in $\tilde{R}$ and $G = vX_1^s +\cdots$, with $v$ not necessarily a unit.
We have two cases to consider for the ideal $(F,G)$ as follows.
\vspace{0.2cm}
($\alpha$): If $t \le s$, we can take $$G':=G-vX_1^{s-t}{u}^{-1}F,~F':=F,$$ and put $t':=deg_{X_1}(F')$, and $s':=deg_{X_1}(G')$. Then we see that $deg_{X_1}(G)>deg_{X_1}(G')$ and that $(F',G')=(F,G)$.
Again $F', G'$ is a regular sequence on $\tilde R$.
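A small illustrative instance of step ($\alpha$), chosen only to exhibit the degree drop: take $F = X_1^2 + X_2^2$ (so $u = 1$, $t = 2$) and $G = X_2 X_1^3 + X_2^2 X_1$ (so $v = X_2$, $s = 3$); then

```latex
$$ G' = G - X_2 X_1 F
     = (X_2 X_1^3 + X_2^2 X_1) - (X_2 X_1^3 + X_2^3 X_1)
     = X_2^2 X_1 - X_2^3 X_1, $$
```

so $deg_{X_1}(G') = 1 < 3 = deg_{X_1}(G)$, while $(F', G') = (F, G)$ as ideals.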
\vspace{0.2cm}
($\beta$): If $t \ge s$, then we cannot use $G$ to eliminate the leading term in $X_1$ of $F$, since $v$ might not be a unit. Hence we will use Proposition~\ref{sc-ci} to replace $G$ by another power series $G_1$ whose leading term in $X_1$ is of the form $v_1 X_1^s$ with $v_1$ a unit in $\tilde{R}$.
Consider the sequence ${a}X_1^{s}+G$, $F$, where $a \in k$.
Note that the top coefficient of $G_1: = {a}X_1^{s}+G$ is a unit in $A$ unless
$a=0$.
We apply Proposition~\ref{sc-ci} for $A$, $\tilde R$ and the regular sequence $F, G$ on $\tilde R$: there is a dense subset $V \subseteq \operatorname{Max}(k[t]) \simeq k$ for which
$$\operatorname{e}_{HK}(\tilde R/(F, G)) \ge \operatorname{e}_{HK}(\tilde R/(aX_1^{s}+G, F))$$ holds for all $a \in V$, and $aX_1^{s}+G, F$ form a regular sequence.
Working with the new sequence $(F, G_1=a X_1^{s}+G)$ for some $a\ne 0$ with $a \in V$, we obtain a new regular sequence $F', G'$ such that
$$ F': = F - u X_1^{t-s}v_1^{-1}G_1, \ G' :=G_1 $$ where $v_1$ is the top coefficient of $G_1$. Also we remark that $(F', G')=(F, G_1)$ as ideals, and $deg_{X_1}(F)>deg_{X_1}(F').$
One can see that in either case $F'$ (or $G'$) has leading term in $X_1$ of the form $u' X_1^{t'}$ (or $u' X_1^{s'}$) with $u'$ a unit.
Moreover, the new pair $F',G'$ satisfies $deg_{X_1}(F')+deg_{X_1}(G') < deg_{X_1}(F)+deg_{X_1}(G)$. We also note that whenever we apply Proposition~\ref{sc-ci}, the ideal $(F', G')$ is different from the ideal $(F, G)$.
Once we have $F', G'$, we continue by applying the procedure to $F', G'$ themselves.
We would like to show that by doing this repeatedly we will eventually reach one of the forms stated in the conclusion of the Proposition.
Both $f,g$ belong to ${\mathfrak m}_A^2$. We notice that if $F, G$ belong to ${\mathfrak m}_A ^2$, then $F', G'$ also belong to ${\mathfrak m}_A ^2$ unless $\min(deg_{X_1}(F), deg_{X_1}(G))=1$. Once this situation occurs, we stop the procedure at once: if, say, $deg_{X_1}(F) =1$, then by changing the coefficient of $X_1$ with the help of Proposition~\ref{sc-ci} if necessary, we end up in the case described.
If we never encounter the situation where $\min(deg_{X_1}(F), deg_{X_1}(G))=1$, then we eventually end up with $f'$ (or $g'$) $\in k[[X_2,...,X_n]]$. But then, using Proposition~\ref{sc-ci}, we may add $uX_1$ to $f'$ (or $g'$) and again end up in the situation described in the conclusion
of our Proposition.
To end the proof, it is enough to note that at least one of $f'$ and $g'$ lies in ${\mathfrak m}_A^2$; this guarantees that $\tilde{R}/(f',g')$ is non-regular.
\end{proof}
\vspace{0.8cm}
Now let us go back to the proof of the theorem.
We apply Proposition~\ref{udsc} with $A:=k[[X_1,...,X_{d+e}]]$ and $l:=e-2$ to $f_1,...,f_{e}$
inductively.
Start with $f_1$ and $f_2$ and put $\tilde R:=k[[X_1,...,X_{d+e}]]/(f_3,...,f_{e})$.
Then we can find such $F_1, F_2$ as stated in the Proposition.
Once we reach the conclusion of the Proposition, we can eliminate $X_1$ and find the desired hypersurface by applying the induction step on the length of the regular sequence, so we are done.
\end{proof}
We would like to close this section by proving part (1) of the Conjecture of Watanabe and Yoshida stated in the introduction for complete intersections.
\begin{Theorem}
\label{main}
Let $d \geq 2$, let $ p \neq 2$ be a prime, and let $k$ be a field of characteristic $p$. If $(R, {\mathfrak m}, k)$ is a non-regular complete intersection, then $\operatorname{e}_{HK}(R) \geq \operatorname{e}_{HK}(R_{d,p})$.
\end{Theorem}
\begin{proof}
We can enlarge the residue field so that we have an uncountable algebraically closed field $K$.
By Theorems~\ref{hypersurface} and~\ref{ci} we see that over $K$, $\operatorname{e}_{HK} (R \otimes_k K) \geq \operatorname{e}_{HK} (R_{d,p} \otimes_k K)$ which implies the result over $k$.
\end{proof}
\begin{Remark}
{\rm Although we stated Propositions~\ref{udsc} and~\ref{sc-ci} for the case of complete intersection only, this assumption was in fact not needed in their corresponding proofs. We kept this as hypothesis for the convenience of the reader, since this section deals only with complete intersections.}
\end{Remark}
\section{Remarks on the general case}
In this section, we would like to show how ideas related to the upper semi-continuity of the Hilbert-Kunz multiplicity can provide insight into the general case of the Conjecture stated in Section 1. A local ring $S$ such that $\dim (S) - \operatorname{depth} (S) =1 $ is called \emph{almost Cohen-Macaulay}.
\begin{Proposition}
Let $\text{$(R,\fm,k)$ }$ be a catenary unmixed non-regular ring of characteristic $p >0$. Then there exists a non-regular unmixed ring $\text{$(S,\fn,k)$}$ of the same dimension, Cohen-Macaulay or almost Cohen-Macaulay, such that $$\operatorname{e}_{HK} (S) \leq \operatorname{e}_{HK} (R).$$
\end{Proposition}
\begin{proof}
Let $x_1, \cdots, x_n$ be a maximal regular sequence on $R$ and let $P$ be a minimal prime over $(x_1, \cdots, x_n)$. We have that $\operatorname{e}_{HK} (R_P) \leq \operatorname{e}_{HK}(R)$ by Theorem 3.8 in~\cite{K2} (this is where the catenary hypothesis is needed). If $R_P$ is not regular we are done, since we can adjoin a finite number of indeterminates to $R_P$ to obtain a Cohen-Macaulay ring $S$ with $\operatorname{e}_{HK} (S) = \operatorname{e}_{HK} (R_P) \leq \operatorname{e}_{HK} (R)$ (the first equality comes from Proposition~\ref{kunz}).
If $R_P$ is regular, then consider $P \subset Q$ such that $\operatorname{height}(Q/P)=1$. Localize at $Q$ and get $\operatorname{e}_{HK}(R_Q) \leq \operatorname{e}_{HK} (R)$. Since $x_1, \cdots, x_n$ is a maximal regular sequence, we see that $R_Q$ is almost Cohen-Macaulay. As before, by adjoining a number of indeterminates to $R_Q$ we obtain an example of the same dimension as $R$.
\end{proof}
We would like to show that part (1) of the Conjecture can be reduced to the case of an isolated singularity:
Assume that $\text{$(R,\fm,k)$ }$ is excellent and unmixed. It is immediate that $\operatorname{e}_{HK}(R) \geq \operatorname{e}_{HK}(R_{red})$ and hence we can pass to $R_{red}$ and assume that $R$ is excellent and reduced.
By induction on the dimension of $R$ we can assume that for all non-regular unmixed rings $A$ of smaller dimension one can find a hypersurface $B$ of same dimension such that $\operatorname{e}_{HK} (B) \leq \operatorname{e}_{HK}(A)$.
Let $\operatorname{Sing}(R)$ be the singular locus of $\text{$(R,\fm,k)$ }$. It is a non-empty closed set defined by an ideal $J$. If $J$ is ${\mathfrak m}$-primary, then there is nothing to prove. Otherwise, let $P_i$, $i=1, \cdots, n$, be the collection of all minimal primes of $J$. Let $P$ be one such minimal prime $P_i$ with height less than the dimension of $R$.
Then $\operatorname{e}_{HK} (R_{P}) \leq \operatorname{e}_{HK} (R)$. By induction, we can find a hypersurface $S$ such that $\operatorname{e}_{HK} (S) \leq \operatorname{e}_{HK} (R_P)$. By adjoining a finite number of indeterminates to $R_P$ we obtain a hypersurface, relabeled $S$, of dimension equal to $\dim (R)$ with $\operatorname{e}_{HK}(S) \leq \operatorname{e}_{HK} (R)$.
Our result Theorem~\ref{hypersurface} shows that among hypersurfaces $\sum_{i=0}^{d} x_i ^2$ is the one with minimal Hilbert-Kunz multiplicity.
We would like to close now with an observation related to the questions addressed in this paper: Let $A$ be a finitely generated $K$-algebra which is non-regular and locally unmixed. Is there a minimal value for the Hilbert-Kunz multiplicity of $A_P$ where $P$ is a non-regular prime?
\begin{Proposition}
Let $A$ be an excellent, non-regular, locally unmixed ring. Then $\operatorname{e}_{HK} : \operatorname{Spec} (A) \to \mathbf{R}$ attains a minimum when restricted to the non-regular locus of $\operatorname{Spec} (A)$.
\end{Proposition}
\begin{proof}
$A$ is excellent and hence its singular locus is defined by an ideal $J$. For any prime $Q$ containing $J$ we can find a minimal prime $P$ of $J$ with $P \subseteq Q$ such that $\operatorname{e}_{HK} (A_P) \leq \operatorname{e}_{HK} (A_Q)$.
Since there are only finitely many minimal primes over $J$ we are done.
\end{proof}
\section{INTRODUCTION}
The ATLAS collaboration consists of around 1900 scientific authors, from 165
institutes in 35 countries. The detector is roughly cylinder-shaped with
a height of 46$\,\mathrm{m}$ and 25$\,\mathrm{m}$ in diameter. It is installed in a cavern 92$\,\mathrm{m}$ below
ground at CERN. A `ship-in-a-bottle' assembly has been performed, as the
cavern is just large enough for the detector. This document focuses on the
progress made in commissioning the detector since the report at HCP2007 \cite{Amelung}.
\section{ATLAS DETECTOR COMPONENTS}
\begin{figure*}[t]
\centering
\includegraphics[width=110mm]{ATLAS_grey2.eps}
\caption{The ATLAS detector. From inside to outside: Inner detector with
Pixel, SCT and TRT, then calorimeters (Liquid argon and scintillating tiles),
and muon spectrometer outside with toroid magnets.} \label{figure:1}
\end{figure*}
The detector's scale and architecture are determined by the requirements of
the physics goals of the LHC programme; most notably, excellent energy resolution of
the calorimeters is required, as well as very good muon momentum resolution,
and inner detector performance for heavy flavour identification.
A detailed description of the ATLAS detector is given in \cite{TechProposal}.
An overview is given below:
\begin{itemize}
\item {\bf Inner Detector} with silicon pixel detector (Pixel) closest to the beam-line,
silicon strips detector (SCT) and transition radiation tracker (TRT), all
located inside a 2~T solenoidal magnetic field. These detectors provide
precision tracking of charged particles and secondary vertex finding in the pseudo-rapidity
region $|\eta| <$ 2.5.
\item {\bf Calorimeters}: Liquid argon calorimeter (LAr) for
electromagnetic barrel and endcap calorimeters, and hadronic endcaps (HEC) and
forward calorimeters (FCAL). The electromagnetic calorimeter
provides coverage for $|\eta| <$ 3.2; the limit
of hermeticity is $|\eta| =$ 4.9. Steel and scintillator tiles are used
in the barrel region of the hadronic calorimeter (Tile).
\item {\bf Muon spectrometer}:
consisting of monitored drift tubes (MDT) in the barrel and endcap regions
for precision measurements, and cathode strip chambers (CSC) in the forward region,
integrated into the air core toroid magnet system. Embedded are the fast-responding
muon trigger chambers, which are Resistive Plate Chambers (RPC)
in the barrel and thin gap chambers (TGC) in the forward region.
The main requirement is precise muon momentum measurements within $|\eta| <$ 2.7.
\item {\bf Trigger and Data Acquisition} architecture: Three-level data
selection architecture. The first level (Level-1) in custom-build
parallel-processing
pipelined hardware is separated into calorimeter and muon Level-1 trigger.
The high level triggers (HLT) consisting of the
second level (Level-2) and third level (Event Filter) are
implemented in software running on large PC farms with dedicated network
infrastructure.
The event rate of 40 MHz from the detector is reduced to 200 Hz for event-data recording.\\
Readout is performed using custom-built buffers in
PCs (Readout System: ROS). Data acquisition (DAQ) software is used to
control, configure and monitor all systems
\cite{TechProposal}.
\end{itemize}
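As a rough illustration of the rate reduction performed by the three-level selection, the sketch below computes per-level and overall rejection factors. The intermediate rates (75~kHz after Level-1, 3~kHz after Level-2) are illustrative assumptions; only the 40~MHz input and 200~Hz output rates are quoted above.

```python
# Illustrative three-level trigger cascade: (name, input rate, output rate) in Hz.
# The intermediate rates are assumptions for illustration only.
LEVELS = [
    ("Level-1 (hardware)", 40_000_000, 75_000),
    ("Level-2 (software)", 75_000, 3_000),
    ("Event Filter",       3_000,     200),
]

def rejection_factors(levels):
    """Return the per-level and overall rate-reduction factors."""
    factors = {name: rate_in / rate_out for name, rate_in, rate_out in levels}
    overall = levels[0][1] / levels[-1][2]
    return factors, overall

factors, overall = rejection_factors(LEVELS)
print(f"overall reduction: {overall:.0f}x")  # 40 MHz -> 200 Hz is a factor 200000
```

Each stage only needs a modest rejection factor; the large overall reduction comes from chaining them.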
\section{WORKING TOWARDS DATA-TAKING}
During the last year, the focus of the commissioning effort has evolved
from single detector operation to combined running and integration.
Monthly integration weeks are scheduled
to integrate detector, trigger and data acquisition into one
global setup for each group of sub-detectors (calorimeters, muon detectors and
inner detector), which are then combined together for the milestone weeks.
All experts are brought together for those weeks, of which the sixth one, M6,
took place in April, and M7 in May 2008.
In addition, technical runs
are performed, feeding simulated and recorded data into the data acquisition
system to perform full-rate tests at Level-1 trigger rates of up to 50 kHz.
The start-up schedule as of May 2008 expects the ATLAS detector to be
closed by mid-July, with first LHC injections by the end of July, and high-energy
proton-proton collisions at 10 TeV by September 2008. A few weeks of
stable running are planned, producing a few $\,\mathrm{pb}^{-1}$ of data.
The M6 milestone week included all detector parts apart from the Pixel
detector, with sizable percentages of each sub-detector included into the
data acquisition, especially almost the entire complement of
barrel calorimeters, muon barrel
system and the Level-1 trigger systems for both.
Cosmic muons are very useful for the detector commissioning. They do,
however, occur at a very low rate and do not originate in the
interaction region, so they cannot really mimic LHC events \cite{Amelung}. Different
strategies are followed to make the detection of cosmic muons as useful as
possible, modifying the standard data acquisition and reconstruction
tools, for example by increasing the recorded time-interval and
using calorimeter
data to form track-like objects. Other important commissioning tools are the
calibration pulser systems built into the detector front-end electronics,
and a laser calibration system for the scintillating tile calorimeter.
\section{CALORIMETER COMMISSIONING}
The Liquid Argon calorimeter endcaps were fully switched on during the April
Calo Week after the M6 run. Cosmic muons were recorded both in a specially
extended 32-sample mode and in the 5-sample physics mode.
The former records 32 samples of 25~ns each, whereas the 5-sample mode
foreseen for normal LHC operation reads out the identified bunch-crossing
itself plus the two leading and two trailing samples, with filter coefficients
applied in the calorimeters.
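The amplitude reconstruction with filter coefficients can be pictured as a linear combination of pedestal-subtracted samples. The coefficients and sample values below are placeholders for illustration, not real optimal-filtering constants, which are derived from the measured pulse shape and noise.

```python
def filter_amplitude(samples, coeffs, pedestal=0.0):
    """Linear combination of pedestal-subtracted ADC samples.

    In LAr-style optimal filtering the coefficients come from the known
    pulse shape and noise autocorrelation; the values used below are
    made-up placeholders.
    """
    if len(samples) != len(coeffs):
        raise ValueError("need one coefficient per sample")
    return sum(a * (s - pedestal) for a, s in zip(coeffs, samples))

# Five samples around a pulse peak (arbitrary ADC counts) and placeholder
# coefficients weighting the peak sample most strongly.
samples = [50, 120, 200, 150, 80]
coeffs = [-0.1, 0.2, 0.8, 0.2, -0.1]
amplitude = filter_amplitude(samples, coeffs, pedestal=50)
```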
Timing studies with the trigger systems, both the first stage (Level-1) and the high
level triggers (HLT), have been performed, as well as monitoring and data quality tools
being tested and improved.
The full Liquid Argon calorimeter system was operational and was read out during the M7
week. Events triggered by the Level-1 calorimeter trigger were studied, and
good timing alignment was achieved for the whole detector.
Comparison with simulated pulse shapes showed good agreement.
The Level-1 calorimeter trigger algorithms foreseen for LHC operations were
set-up and adjusted, and trigger objects and results were analysed
and verified using samples recorded in combined data-taking.
Cosmic muon tracks and pulse shapes were studied using event display and monitoring tools.
The hadronic barrel calorimeter, consisting of scintillating tiles,
was operational at 95\% during M6 with the remaining modules undergoing power supply
refurbishment.
The detector uses a specially developed algorithm ('TileMuonFitter') for
commissioning, which forms track-like objects from the calorimeter cell data,
as they would be expected from a cosmic muon passing through. The
reconstructed tracks show a clear peak in the energy density distribution,
and no top-bottom response bias
has been seen in the detector. A laser calibration system is commissioned
to send light into the photomultiplier tubes (PMTs) to align the timing,
and its operation integrated into the
data acquisition. The pulse-shapes from the ADC counts are inspected and
compared with the Level-1 trigger pulse-shapes. Fig. \ref{figure:7} shows
the pulses as analog-to-digital converter
(ADC) counts and fitted pulse-shapes for both the LAr hadronic endcap (HEC)
and the electromagnetic barrel LAr calorimeter (EMB), along with
an event display of the energy depositions.
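The idea of forming a track-like object from calorimeter cells can be illustrated by a toy energy-weighted straight-line fit. This is only a sketch of the general idea, not the actual 'TileMuonFitter' algorithm, and the cell representation is invented for illustration.

```python
def fit_track(cells):
    """Energy-weighted least-squares line z = a*y + b through cell centres.

    cells: list of (y, z, energy) tuples -- a crude stand-in for
    calorimeter cell positions and deposited energies.
    """
    w = sum(e for _, _, e in cells)
    ybar = sum(y * e for y, _, e in cells) / w
    zbar = sum(z * e for _, z, e in cells) / w
    num = sum(e * (y - ybar) * (z - zbar) for y, z, e in cells)
    den = sum(e * (y - ybar) ** 2 for y, z, e in cells)
    a = num / den
    return a, zbar - a * ybar  # slope and intercept
```

A cosmic muon crossing the calorimeter leaves a roughly collinear set of energetic cells, so such a fit recovers its direction.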
\begin{figure*}[t]
\centering
\includegraphics[width=120mm]{CaloPulsesM6moreRes_grey.eps}
\caption{LAr Calorimeter ADC counts and fitted pulse-shapes
hadronic endcap (HEC) and electromagnetic barrel (EMB) systems
from commissioning in M6 Milestone week
and energy deposition in event display (April 2008).} \label{figure:7}
\end{figure*}
\section{MUON DETECTOR COMMISSIONING}
The barrel and endcap muon spectrometer (MDT) consists of 16 sectors,
out of which 12 were ready to operate in the milestone week M6.
The upper sectors have already been commissioned with cosmic muons.
This effort continues with the remaining four sectors in the lower half.
An example of cosmic data analysis is given in Fig. \ref{figure:muon}, where
clusters from the muon trigger chambers (RPC) are compared with tracks in the
precision muon spectrometer chambers
(MDT), with a very good correlation observed. The residual distribution
shows a width of 9~mm. The MDT system
shows very good track quality for cosmic muons, with six hits per track, the
residuals centred at zero, and a spread in the distribution (RMS) of 160$\,\upmu\mathrm{m}$.
Timing calibration of
the resistive plate chambers (RPC) as part of the Level-1 muon trigger system
is performed, where the
trigger settings are aligned between the planes using offsets from the
time-of-flight distribution. Monitoring and event display tools have been
used to achieve synchronised read-out and investigate hits counts, noise levels
and track quality.
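The quoted residual widths are spreads of distributions of hit-minus-track distances; a toy computation of such an RMS might look as follows (the hit model and track parameters are invented for illustration).

```python
import math

def residuals_rms(hits, track):
    """RMS of hit residuals about a straight track z = a*y + b.

    hits: list of (y, z) measured positions; track: (a, b) parameters.
    """
    a, b = track
    res = [z - (a * y + b) for y, z in hits]
    mean = sum(res) / len(res)
    return math.sqrt(sum((r - mean) ** 2 for r in res) / len(res))
```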
\begin{figure*}[t]
\centering
\begin{minipage}{145mm}
\includegraphics[width=60mm, angle=0]{MDTRPCCorrelation_grey.eps}
\end{minipage}
\caption{
Correlation of muon spectrometer (MDT) and embedded
Level-1 muon trigger chambers (RPC) from commissioning with cosmic muons
(February 2008). {\bf Top:} Correlation of extrapolated z coordinate from
both systems in $\,\mathrm{mm}$, {\bf bottom:} Distance between clusters in $\,\mathrm{mm}$.
}\label{figure:muon}
\end{figure*}
\section{INNER DETECTOR COMMISSIONING}
The inner tracking detector commissioning efforts have been seriously
disrupted by the break-down of the cooling compressor at the beginning of May
after only 5 days of
Pixel detector commissioning in-situ. The compressors are being repaired.
The SCT has not participated in later milestone weeks as it shares its cooling
infrastructure with the Pixel detector.
The Pixel detector has not been in combined cosmics running due to the problems
with the cooling system, and also as its commissioning was queued behind the
SCT commissioning.
Combined performance studies of the SCT and TRT have been performed using
cosmic data taken during earlier milestone weeks.
Alignment and calibration studies and improvements reduce the
spread of the residual distribution considerably, e.g. for the TRT from
450$\,\upmu\mathrm{m}$ to 270$\,\upmu\mathrm{m}$. In a further step, the track position
as measured by the inner detector (SCT and TRT) and in addition by
the precision muon spectrometer (MDT)
were combined, this is shown in Fig. \ref{figure:indet}, along with an event
display of a cosmic muon track seen in all three sub-detectors.
This uses the top-half muon spectrometer available in this milestone week.
For the angle $\phi$, the correlation width of $\sigma$ = 10.3
mrad is achieved, while for angle $\Theta$ the width is $\sigma$ = 10.7 mrad.
Such an analysis requires all the systems to be operational, timed-in and
read-out successfully.
\begin{figure*}[t]
\centering
\includegraphics[width=120mm]{TRTSCTCombinedAnalysis_grey.eps}
\caption{{\bf Left:} Combined analysis of track parameters $\phi$ and $\Theta$
for inner detector (SCT and TRT) and muon spectrometer (MDT),
using data from M6 Milestone week
(April 2008), {\bf Right:} Event display of a reconstructed cosmic muon track
seen in both the inner detector and muon spectrometer.} \label{figure:indet}
\end{figure*}
\section{TRIGGER AND DATA ACQUISITION}
The trigger and data acquisition architecture
is described in detail in \cite{TechProposal}. The Level-1
calorimeter trigger signals need to be thoroughly tested before access to
the detector ends. Level-1 muon trigger commissioning is done sector by
sector as they are connected to their respective gas and power supplies.
The timing is being
addressed to ensure all signals used for the trigger decision are
synchronous, throughout the whole system.
Data analysis shows that the hits and clusters recorded in the trigger system are
well correlated with those reconstructed from the detector read-out.
The high-level trigger (HLT) computing farms are being commissioned and tested,
at a rate of about ninety rack-mounted PCs per week, and they perform second and third
stage trigger algorithm (Level-2 and Event Filter) tasks during the
commissioning runs.
Five server-level PCs, with sufficient buffer disk space to hold
many hours of data, form the last stage of the on-site data acquisition.
Then the data is transferred via a dedicated link to the CERN computing center,
where it is reconstructed and made available for analysis.
The track trigger implemented into the second trigger stage (Level-2)
was studied in cosmic data from the M6 milestone week,
regarding its efficiency for events with high-momentum reconstructed
tracks and its ability to select events into different output streams.
Dedicated TDAQ technical runs use simulated events, as well as recorded cosmic
data from the earlier M4 and M5 weeks, at the expected rate of data at LHC
collisions.
The high-level trigger infrastructure and algorithms are commissioned
to correctly identify cosmic-ray tracks when compared to the reconstructed
quantities. These studies use a transparent trigger mode, where events are
flagged with high-level trigger results, but not discarded accordingly,
as it would happen in standard operation.
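The transparent mode described above can be pictured as a pass-through stage that records the trigger decision rather than acting on it. The event representation and the decision function below are made up for illustration.

```python
def transparent_trigger(events, decision):
    """Tag each event with the HLT decision instead of dropping it.

    In standard operation events failing the decision would be
    discarded; in the transparent commissioning mode every event is
    kept, flagged, and compared offline with full reconstruction.
    """
    return [dict(event, hlt_accept=decision(event)) for event in events]

# Toy events and a toy decision: accept tracks above 5 GeV momentum.
events = [{"track_pt": 2.0}, {"track_pt": 8.5}]
tagged = transparent_trigger(events, lambda ev: ev["track_pt"] > 5.0)
```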
In the technical runs,
stable operation was achieved for hours without intervention, with the system
controlling 1500 applications in 350 nodes.
\section{THE FUTURE}
Looking ahead, the activities during beam commissioning in single-beam
operation will include validation of the beam protection systems, first
synchronisation with the LHC clock and detailed timing and alignment
studies, and feedback to the machine. With collisions, the trigger
systems will be fully synchronised with the LHC. Full understanding of the
whole detector has to be achieved, using well-known physics processes
\cite{TatjanaTalk}.
\section{SUMMARY}
ATLAS is in the process of commissioning the detector using cosmic rays and calibration
systems. A sizeable fraction of the sub-detectors is already integrated with
the trigger and data acquisition systems, allowing for stable combined data-taking
and combined data analysis.
An intense commissioning programme still lies ahead to bring
the components not yet integrated into combined running into operation
and to achieve stable data taking with the full detector
in time for the LHC start-up.
\begin{acknowledgments}
The results presented here are the result of the work of many ATLAS
colleagues, and their contributions are gratefully acknowledged. Material
has been provided by the ATLAS Commissioning Working Group chaired by
Jamie Boyd and Maria Costa, by Ludovico Pontecorvo, Jose Maneira, Pippa
Wells, Stephen Hillier and more. I'd like to thank the organisers of the
conference for providing an inspiring atmosphere in a beautiful location.
\\
This work is supported by the Science and Technology Facilities
Council (STFC) in the UK.
\end{acknowledgments}
\part{Background material}
\input{2._review1}
\input{2._review2}
\input{2._review3}
\input{2._review4}
\input{2._setting}
\input{3._endom-action_1}
\input{3._endom-action_2}
\input{3._endom-action_3}
\part{Determining the trivial relations}
\input{4._local_monodromy_and_polarization}
\input{4._trivialrelations1}
\input{4._trivialrelations2}
\input{4._trivialrelations3}
\input{4._trivialrelations4}
\part{Constructing non-trivial relations}
\input{5._pseudo-cm_notation}
\input{5._pseudo-cm_setting}
\input{6._involutions}
\input{7._relations_1}
\input{7._relations_2}
\input{7._relations_3}
\input{7._relations_4}
\input{8._non-trivial}
\part{Towards globality}
\input{9._inertia_and_filtration_1}
\input{9._inertia_and_filtration_2}
\input{10._good_models_1}
\input{10._good_models_2}
\input{11._local_monodromy_1}
\input{11._local_monodromy_2}
\input{11._local_monodromy_3}
\input{11._local_monodromy_4}
\input{11._local_monodromy_5}
\input{12._conditions}
\input{12._good_reduction_1}
\input{12._good_reduction_2}
\input{13._v-adic_proximity}
\input{14._the_final_proof}
\input{15._cm-points_0}
\input{15._cm-points_1}
\input{19._examples_1}
\input{19._examples_2}
\part*{Appendix}
\subsection{The main Theorem}
\textbf{Our setting:} Let $K$ be a number field, let $S'$ be a smooth geometrically irreducible complete curve over $K$, let $\Sigma_S\subset S'(K)$ be a finite set of $K$-points of $S'$, and fix an element $s_0$ of $\Sigma_S$. Let us consider $S$ to be the curve $S'\backslash\Sigma_S$, let $X$ be a smooth variety over $K$, and let $f:X\rightarrow S$ be a smooth projective morphism, also defined over $K$, whose fibers have dimension $n$.
For each $i\in\{0,\ldots, 2n\}$ the morphism $f$ defines variations of Hodge structures on the analytification $S^{an}$ of $S$, namely the variations given by $R^{i}f^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}}\otimes \mathcal{O}_{S^{an}_\mathbb{C}}$. We focus on the variation with $i=n$ and set $\mathbb{V}:= R^{n}f^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}}$. We furthermore assume that there exists a smooth $K$-scheme $X'$ and a projective morphism $f':X'\rightarrow S'$ such that:\begin{enumerate}
\item $f'$ is an extension of $f$, and
\item $Y=f'^{-1} (s_0)$ is a union of transversally crossing smooth divisors $Y_i$, each entering the fiber with multiplicity $1$.
\end{enumerate}
Let $\Delta\subset S'^{an}_{\mathbb{C}}$ be a small disk centered at $s_0$ such that $\Delta^{*}\subset S^{an}_{\mathbb{C}}$. From work of Katz it is known that the residue at $s_0$ of the Gauss-Manin connection of the relative de Rham complex with logarithmic poles along $Y$ is nilpotent if we have $(2)$ above. From this it follows, by \cite{steenbrink} Theorem $2.21$, that the local monodromy around $s_0$ acts unipotently on the limit Hodge structure $H^n_{\mathbb{Q}-lim}$. By the theory of the limit Hodge structure we then get the weight monodromy filtration $W_{\bullet}$. We let $h:=\dim_{\mathbb{Q}} W_{0}$.\\
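For the reader's convenience we recall the standard characterization of this filtration, included here only as a sketch of well-known facts (see for instance \cite{steenbrink}): writing $T$ for the local monodromy operator around $s_0$ acting on $H^n_{\mathbb{Q}-lim}$ and $N:=\log T$, which is nilpotent since $T$ is unipotent, $W_{\bullet}$ is the unique increasing filtration such that\begin{center}
$N W_{k}\subset W_{k-2}$ for all $k$, \quad and \quad $N^{k}:\operatorname{gr}^{W}_{n+k}\rightarrow \operatorname{gr}^{W}_{n-k}$ is an isomorphism for all $k\geq 0$,
\end{center}the filtration being centered at the weight $n$ of the variation.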
\textbf{Main result:} Our main result is the following theorem.
\begin{theorem}\label{maintheorem}Let $S'$, $s_0$, and $f:X\rightarrow S$ be as above and all defined over a number field $K$. We assume that the dimension $n$ of the fibers is odd, that the Hodge conjecture holds, and that a good arithmetic model, in the sense of \ref{section:canmodels}, exists for the morphism $f$ over $\mathcal{O}_K$.
For the variation whose sheaf of flat sections is given by $\mathbb{V}:=R^nf^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}}$ we assume the following hold true:\begin{enumerate}
\item the generic special Mumford-Tate group of the variation is $Sp(\mu,\mathbb{Q})$, where $\mu=\dim_{\mathbb{Q}} \mathbb{V}_z$ for any $z\in S^{an}$, and
\item $h\geq 2$.
\end{enumerate}
Let $\Sigma\subset S(\bar{\mathbb{Q}})$ be the set of points for which the decomposition $\mathbb{V}_s=V_1^{m_1}\oplus\cdots\oplus V_r^{m_r}$ of $\mathbb{V}_s$ into simple polarized sub-$\mathbb{Q}$-HS and the associated algebra $D_s:=M_{m_1}(D_1)\oplus \cdots \oplus M_{m_r}(D_r)$ of Hodge endomorphisms are such that:\begin{enumerate}
\item $s$ satisfies condition $\star$ in \ref{section:conditions}, and either
\item $h> \frac{\dim_\mathbb{Q} V_j}{[Z(D_j):\mathbb{Q}]}$ for some $j$, or
\item there exists at least one $D_i$ that is of type IV in Albert's classification and $h\geq \min\{ \frac{\dim_\mathbb{Q} V_i}{[Z(D_i):\mathbb{Q}] } : i \text{ such that } D_i=\End_{HS}(V_i) \text{ is of type IV } \}$.
\end{enumerate}
Then, there exist constants $C_{1}$, $C_2>0$ such that for all $s\in\Sigma$ we have\begin{center}
$h(s)\leq C_1 [K(s):K]^{C_2}$,
\end{center}where $h$ is a Weil height on $S'$.\end{theorem}
\begin{rmk}
We note that CM-points of the variation will be in the set $\Sigma$ of \ref{maintheorem}. We can also create concrete examples of possible algebras of Hodge endomorphisms for which the conditions that guarantee $s\in\Sigma$ above can be checked fairly easily, once we have information on the weight monodromy filtration defined by the local monodromy around the point of degeneration $s_0$. We return to this issue in \ref{section:examples}.
\end{rmk}
\section{Introduction}
\subsection{History}
In \cite{andre1989g} Y.~Andr\'e considers an abelian scheme $f:X\rightarrow S$, where $S=S'\backslash \{s_0\}$ with $S'$ a smooth connected curve defined over a number field $K$ and $s_0\in S'(K)$. He then considers a Weil height $h$ on the curve $S'$. He assumes that the generic fiber $X_{\eta}$ is a simple abelian variety of odd dimension $g>1$, and that the scheme has completely multiplicative reduction at the point $s_0$. He then shows that for any point $s$ in the set $\{ s\in S(\bar{\mathbb{Q}}): \End X_s\not\hookrightarrow M_g(\mathbb{Q})\} $ the height $h(s)$ is bounded from above by an effectively computable power of $[K(s):K]+1$.
Recently, in \cite{daworr}, C.~Daw and M.~Orr prove an analogous result for the case $g>1$ under some stronger assumptions, see Theorem $9.1$ in \cite{daworr}. Using this height bound together with the Masser--W\"ustholz isogeny theorem, in \cite{daworr} and \cite{daworr2}, they prove unconditionally a so-called ``Large Galois Orbit conjecture''. The existence of large Galois orbits allows them to prove unconditionally a significant part of the Zilber-Pink conjecture for curves such as $S$ embedded in $\mathcal{A}_2$, i.e. curves whose Zariski closure in the Baily-Borel compactification of $\mathcal{A}_2$ intersects the $0$-dimensional stratum of the boundary of the compactification. Namely, they show that there are only finitely many points $s\in S(\bar{\mathbb{Q}})\subset\mathcal{A}_2(\bar{\mathbb{Q}})$ for which the corresponding abelian surface has quaternionic multiplication or is of the form $E\times {CM}$. An abelian surface is said to be of the form $E\times CM$ if it is isogenous to $E_1\times E_2$, where $E_1$ and $E_2$ are non-isogenous elliptic curves only one of which has complex multiplication. Most recently, in \cite{daworr3}, Daw and Orr employ the same G-function method to establish some cases of the Zilber-Pink conjecture unconditionally for curves as above in $\mathcal{A}_g$ for $g$ even.
Their method follows the general strategy set out by Pila and Zannier in the breakthrough paper \cite{pilazannier}. The Pila-Zannier method mainly rests on comparing two bounds: on the one hand, an upper bound for the number of rational points on a transcendental variety, and on the other hand a lower bound for the Galois orbits of certain points of interest. Establishing the existence of large Galois orbits seems, at least to the author, to be perhaps the biggest missing piece in solving several problems in the theory of unlikely intersections via the Pila-Zannier method.
With these thoughts in mind, and motivated by the formulation of the Zilber-Pink conjecture in the setting of mixed variations of Hodge structures by B.~Klingler in \cite{klingler2017hodge}, we prove height bounds for certain ``exceptional'' points on a curve $S$ with respect to a geometric variation of Hodge structures over $S^{an}$. The vague term ``exceptional point'' reflects the fact that the Hodge structure attached to each of the points we study has a smaller Mumford-Tate group than the generic Mumford-Tate group of the variation of Hodge structures supported on the curve $S$. These results depend on the validity of the Hodge conjecture and on the existence of what we refer to as ``good arithmetic models''.
\subsection{Organization of the paper- A summary of the proof}
We start by reviewing some aspects of the theory of G-functions in \ref{section:reviewgfunctions}. The method that Andr\'e uses to obtain his height bounds hinges on two results from the theory of G-functions. The first is the fact that among the relative $n$-periods associated to the morphism $f:X\rightarrow S$ there are some that are G-functions; namely, they are the ones that can be written as $\int_{\gamma}\omega$ where $\gamma_z\in\im{(2\pi i N^{*})^{n}}$ for $z\in\Delta^{*}$, where $\Delta^{*}$ and $N^{*}$ are the aforementioned punctured disc and nilpotent endomorphism. The second main result we will need can be described as a ``Hasse principle'' for the values of G-functions. This is what will ultimately allow us to extract height bounds.
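Although the precise definitions are deferred to \ref{section:reviewgfunctions}, we recall for orientation Siegel's classical notion, included here only as a convenience: a power series $y=\sum_{n\geq 0}a_n x^n\in\bar{\mathbb{Q}}[[x]]$ is a G-function if\begin{enumerate}
\item $y$ satisfies a linear differential equation with coefficients in $\bar{\mathbb{Q}}(x)$,
\item there exists a constant $C>0$ such that the maximum of the absolute values of the conjugates of $a_n$ is at most $C^{n+1}$ for all $n$, and
\item there exists a sequence of positive integers $d_n$, with $d_n\leq C^{n+1}$, such that $d_n a_m$ is an algebraic integer for all $m\leq n$.
\end{enumerate}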
We then move on in \ref{section:hodgereview} where we review some standard facts about the structure of the algebra of Hodge endomorphisms of a Hodge structure. After this, in \ref{section:notations} we fix some general notation with the hope of making the exposition easier.
In \ref{section:derhamendo} we address some technical issues that appear later on in our exposition. Namely we consider the isomorphism between algebraic de Rham and singular cohomology for a smooth projective variety $Y/k$, where $k$ is a subfield of $\bar{\mathbb{Q}}$\begin{center}
$P^n:H^n_{DR}(Y/k)\otimes_k\mathbb{C}\rightarrow H^n(Y^{an},\mathbb{Q})\otimes_\mathbb{Q} \mathbb{C}$.
\end{center}The singular cohomology is endowed with a Hodge structure and we consider its algebra of Hodge endomorphisms $D$. Later on we will want to create splittings of both de Rham and singular cohomology with respect to actions of $D$ on these vector spaces. To do that we will need to have an action of $D$ on $H^n_{DR}(Y/k)$, which a priori we do not. We show that assuming the absolute Hodge conjecture we may base change $Y$ by a finite extension $L$ of $k$ to obtain such an action that will be compatible with the action of $D$ on $H^n(Y^{an},\mathbb{Q})$ via the isomorphism $P^n$. We also show that this field extension may be chosen so that its degree is bounded from above only in terms of the dimension $\dim_{\mathbb{Q}} H^n(Y^{an},\mathbb{Q})$. We believe these results are known to experts in the field, however being unable to find a reference for these arguments we include them here for the sake of completeness.
Our next goal, realized in \ref{section:trivialrelations}, is to describe the trivial relations among those relative $n$-periods associated to the morphism $f$ which are G-functions. This amounts to describing the polynomials defining the $\bar{\mathbb{Q}}[x]$-Zariski closure of a certain $h\times \mu$ matrix, where $x$ here is a local parameter of $S'$ at the point $s_0$. This is achieved by a monodromy argument using Andr\'e's Theorem of the Fixed part.
The next part of our exposition, mainly \ref{section:pseudocmrelations}, consists of creating relations among the values of the G-functions in question at certain exceptional points that are ``non-trivial''. That means that these do not come from specializing the trivial relations we described earlier.
The last part of our exposition is dedicated to showing that the relations we created are ``global'', see \ref{section:hasseprinciple} for the term. To achieve this we need to assume the existence of certain good arithmetic models. We discuss these models in \ref{section:canmodels}.
As a first step we study the relation between the algebra of Hodge endomorphisms $D_s=\End_{HS}(H^n(X_{s}^{an},\mathbb{Q}))$ and the algebra of inertia-invariant endomorphisms of the \'etale cohomology group $H^n_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l)$. In \ref{section:inertiaendom} we prove that, assuming the Hodge conjecture, the former algebra naturally injects into the latter.
This forces an interplay between the algebra of Hodge endomorphisms and the endomorphisms of the graded quotients of the monodromy filtration of $H^n_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l)$. Taking advantage of this interplay we establish conditions in \ref{section:conditions} that guarantee the impossibility of the point $s$ being $v$-adically close to the degeneration $s_0$. Establishing this rests on the comparison between the local monodromy representation and the representation defined by inertia which follows from the theorem on the Purity of the branch locus. To employ this comparison we need to assume the existence of the arithmetic models of \ref{section:canmodels}.
After this we put all the aforementioned ideas together in \ref{section:proofoftheresult}. In summary, the relations created in \ref{section:pseudocmrelations} are shown to be non-trivial and global, under the aforementioned conditions. Applying the ``Hasse Principle'' for the values of G-functions, we obtain the height bounds we want.
We finish with a section centered around examples of algebras where \ref{maintheorem} applies. In particular we study the CM-points of variations satisfying the conditions of \ref{maintheorem} and establish that these are in fact points of the set $\Sigma$.
We have also included an appendix on polarizations. The main result we need about polarizations in the text is a description of the relations they define among the $n$-periods. This description in the case where the weight of the Hodge structures is $1$ already appears in \cite{andre1989g}. The description in the case of arbitrary odd weight is not different at all. We include it in this appendix for the sake of completeness.\\
\textbf{Acknowledgements: }The author thanks his advisor Jacob Tsimerman for countless helpful discussions and suggestions.
\section{Good models and $v$-adic proximity}\label{section:canmodels}
We return from now on to the notation of \ref{section:notations}. So let us fix for the remainder a G-admissible variation of $\mathbb{Q}$-HS given by the map $f:X\rightarrow S$ with all the relevant notions of \ref{section:notations} defined over some fixed number field $K$. Let us also fix a local parameter $x\in K(S')$ of $S'$ at the point $s_0$.
We start by considering a regular proper model $\tilde{S}$ of the curve $S'$ over the ring of integers $\mathcal{O}_K$. We also consider a point $s\in S(L)$, where $L/K$ is some finite extension. By the valuative criterion of properness $s_0$ and $s$ give sections, which we denote by $\tilde{s}_0$ and $\tilde{s}$, of the arithmetic pencil\begin{center}
$\tilde{S}\underset{\mathcal{O}_K}{\times} \mathcal{O}_L\rightarrow \mathcal{O}_L$.
\end{center}
\begin{lemma}\label{lemmamultiply} The basis $\omega_i$ of $H^{n}_{DR}(X/S)$ in \ref{existence} may be chosen so that the following property holds:
\begin{center}
For any finite extension $L/K$ and any $s\in S(L)$, if $s$ is $v$-adically close to $s_0$ for some $v\in \Sigma_{L,f}$, then $\tilde{s}$ and $\tilde{s}_0$ have the same image in $\tilde{S}(\mathbb{F}_{q(v)})$.
\end{center}\end{lemma}
\begin{proof}
This follows from the discussion on page 209 of \cite{andre1989g}. In essence, to ensure the property above we might need to multiply a given basis of sections of $H^n_{DR}(X/S)$ by a factor of the form $\frac{\zeta}{\zeta -x}$, for an appropriately chosen $\zeta\in K^{\times}$. This amounts to multiplying the G-functions by the same factor.
We still obtain G-functions, but these will possibly have smaller local radii at a finite set of finite places.
\end{proof}
\subsection{Good arithmetic models}
We aim to apply the results of \S4 of \cite{pst}. To apply these results we need to assume the existence of good arithmetic models over $\mathcal{O}_K$ both for the curve $S$ and the morphism $f$. We consider an indexing of the elements of $\Sigma_S=S'\backslash S=\{ s_0,\ldots, s_g\}$.\\
\begin{defn}\label{conjarithm}
We say that $S$ has a \textbf{good arithmetic model} over $\mathcal{O}_K$ if for the triple $(S',S,\Sigma_S)$ there exists a model $(\tilde{S},\mathcal{C}, D)$ over $\mathcal{O}_K$ such that $\tilde{S}$ is smooth and proper and $D$ is a normal crossings divisor.
Furthermore, we assume that for all $i\neq 0$ and all primes $q$ we have that $(\tilde{s}_i)_{\bar{\mathbb{F}}_q}\neq (\tilde{s}_0)_{\bar{\mathbb{F}}_q}$, where $\tilde{s}_i$ are the sections $\mathcal{O}_K\rightarrow \tilde{S}$ defined by these points as above.
\end{defn}
The smooth proper $K$-morphism $f:X\rightarrow S$ provides us, as we have seen, with a weight $n=(\dim X-1)$ variation of Hodge structures on the $\mathbb{C}$-manifold $S^{an}_\mathbb{C}$. We are also provided with a $\mathbb{Z}$-local system $\mathbb{V} := R^nf^{an}_{*} \mathbb{Z}_{X^{an}_{\mathbb{C}}}$, contained in the local system of flat sections of the variation of Hodge structures we study.
On the other hand, we know that for any prime $l$ the morphism $f$ defines an associated $l$-adic \'etale local system (lisse $l$-adic sheaf) over $S$ which we denote by $\mathbb{V}_l:= \underset{\leftarrow}{\lim} R^nf_{*} (\mathbb{Z}/l^m\mathbb{Z})$. We note that the analytification of $\mathbb{V}_l$ is nothing but the $l$-adic completion of the local system $\mathbb{V}$.
From the proper base change theorem in \'etale cohomology and Artin's comparison theorem we know that for each $s\in S(L)$ we have an isomorphism\begin{equation}\label{eq:supercomparison}
H^n_{\text{\'et}}(\bar{X}_s,\mathbb{Z}_l)=(\mathbb{V}_l)_{\bar{s}}\simeq (\mathbb{V}_l)^{an}_{\bar{s}} = H^n(X^{an}_{s},\mathbb{Z}_l).
\end{equation}
\subsubsection{Extending the \'etale local system}
Let us fix some notation. For a finite place $v$ of an extension $L/K$, let $p=p(v)$ be the characteristic of the residue field $\mathcal{O}_{L_v}/m_{L_v}$, and $l\neq p(v)$ a prime. We also let $M:=L^{ur}_v$ and $\mathcal{O}_M$ be its ring of integers. We consider $(\tilde{S},\mathcal{C},D)$ to be as in \ref{conjarithm}, and let $\mathbb{V}$ and $\mathbb{V}_l$ be as in the previous section.\\
For our argument to work, we need to have the analogue of Lemma $4.3$ in \cite{pst}. In other words we would like to be able to extend the $l$-adic \'etale local system $\mathbb{V}_l$ to some $l$-adic \'etale local system $\widetilde{\mathbb{V}}_l$ on $\mathcal{C}_{\mathcal{O}_M}:=\mathcal{C}\times_{\spec(\mathcal{O}_K)} \spec (\mathcal{O}_M)$.
In \cite{pst} this extension is achieved by a standard spreading out argument, which assumes that there is at least one CM point that is integral.
Since we want to be able to deal with all of the finite places $v$ of the field $L$, we cannot employ a similar tactic.
With that in mind we start with the following definition.
\begin{defn}[Arithmetic models for $f$] Let $f:X\rightarrow S$ be a smooth projective morphism of $K$-varieties with $S$ a curve, as above. We say $f$ has \textbf{a good arithmetic model over }$\mathcal{O}_K$ if there exists a good arithmetic model for the triple $(S',S,\Sigma_S)$ over $\mathcal{O}_K$, such that for each $L/K$ finite and each $v\in\Sigma_{L,f}$ we have that \begin{enumerate}
\item there exists an $\mathcal{O}_{M}$-scheme $\mathcal{X}_v$ such that $(\mathcal{X}_v)_{L^{ur}_v}=X_{L^{ur}_v}$, and
\item there exists a smooth proper morphism $\tilde{f}_v:\mathcal{X}_v\rightarrow \mathcal{C}_{\mathcal{O}_M}$ of $\mathcal{O}_M$-schemes whose generic fiber is the morphism $f_v$ (see below).
\end{enumerate}
\end{defn}
Let us fix a finite extension $L/K$ and a place $v\in\Sigma_{L,f}$. Assume the existence of such a pair $(\mathcal{X}_v,\tilde{f}_v)$ and let $f_v$ be the base change of $f$ via the morphism $\spec L^{ur}_v\rightarrow \spec K$. We then define $\widetilde{\mathbb{V}}_l$ to be the $l$-adic \'etale sheaf on $\mathcal{C}_{\mathcal{O}_M}$ given by \begin{center}
$ R^n(\tilde{f}_v)_{*} (\mathbb{Z}_l) = \underset{\leftarrow}{\lim} \text{ }R^n(\tilde{f}_v)_{*} (\mathbb{Z}/l^m\mathbb{Z})$.
\end{center}
\begin{lemma}\label{extendingthesheaff}Let $f:X\rightarrow S$ over $K$ be as above. Assume that there exists a good arithmetic model $(\mathcal{X},\tilde{f})$ for the morphism $f$ over $\mathcal{O}_K$. Then the $l$-adic \'etale sheaf $\widetilde{\mathbb{V}}_l:= R^n(\tilde{f}_v)_{*} (\mathbb{Z}_l)$ on $\mathcal{C}_{\mathcal{O}_M}$ is an $l$-adic \'etale local system that extends the $l$-adic \'etale local system $R^n(f_v)_{*} (\mathbb{Z}_l)$ on $S_{M}$.
\end{lemma}
\begin{proof}From the proper base change theorem for lisse $l$-adic sheaves we have that $R^n(\tilde{f}_v)_{*} (\mathbb{Z}_l)$ is an extension of the sheaf $R^n(f_v)_{*} (\mathbb{Z}_l)$.
The fact that $\widetilde{\mathbb{V}}_l$ is also an $l$-adic \'etale local system on $\mathcal{C}_{\mathcal{O}_M}$ follows from the smooth proper base change theorem for lisse $l$-adic sheaves.
\end{proof}
\section{A comparison of monodromy operators}
The crux of our argument rests on a comparison between the action of inertia and that of the local monodromy around the degeneration $s_0$. To be able to establish this connection we need to assume the existence of the aforementioned good arithmetic models.
\subsection{Review on the monodromy weight filtration}\label{section:reviewmonodromy}
We present a short review of the two monodromy operators we want to compare.
\subsubsection*{The local monodromy operator}
Let $f:X\rightarrow S$ be a G-admissible variation defined over the number field $K$ and, as usual, $\Delta^{*} \subset S^{an}_{\mathbb{C}}$ a small disk, relative to a fixed embedding $K\hookrightarrow \mathbb{C}$, centered at the point of degeneration $s_0\in S'(K)$.
We have by our assumptions that the local monodromy around $s_0$ on $\mathbb{V}_z:=H^n(X^{an}_z,\mathbb{Q})$ is in fact unipotent for any $z\in\Delta^{*}$. We denote by $N_z : \mathbb{V}_z\rightarrow \mathbb{V}_z$ the corresponding nilpotent endomorphism that is the logarithm of the unipotent endomorphism that defines this action.
We know that $N_z$ has degree of nilpotency $\leq n+1$ and hence, by $(6.4)$ of \cite{schmid}, we get an ascending filtration
\begin{equation}\label{eq:weightmonodromyfiltration}
0\subset W^B_0\subset W^B_1\subset \ldots \subset W^B_{2n-1} \subset W^B_{2n} = \mathbb{V}_z.
\end{equation}
The filtration $W^B_{\bullet}$ is called the weight monodromy filtration. By Lemma $(6.4)$ of \cite{schmid}, we know that this is the unique ascending filtration characterized by the properties\begin{enumerate}
\item $N_z(W^B_i)\subset W^B_{i-2}$ for all $i$, and
\item for all $1\leq i\leq n$ the endomorphism $N_{z}^{i}$ defines an isomorphism $\gr_{n+i} ^{W^B_{\bullet}} \rightarrow \gr_{n-i} ^{W^B_{\bullet}}$.
\end{enumerate}
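To fix ideas, here is the smallest non-trivial instance of such a filtration (an illustration, not drawn from the variation above): take $n=1$, $\mathbb{V}_z=\mathbb{Q}^2$, with local monodromy generated by the unipotent matrix $\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$, so that $N_z=\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)$ and $N_z^2=0$. The unique filtration satisfying properties $(1)$ and $(2)$ is \begin{center}
$W^B_0=\im{N_z}=\langle e_1\rangle$, \quad $W^B_1=\ker N_z=W^B_0$, \quad $W^B_2=\mathbb{V}_z$,
\end{center}with $N_z$ itself inducing the isomorphism $\gr_{2}^{W^B_{\bullet}}\rightarrow \gr_{0}^{W^B_{\bullet}}$; here the graded pieces $\gr_0^{W^B_{\bullet}}$ and $\gr_2^{W^B_{\bullet}}$ are one-dimensional, while $\gr_1^{W^B_{\bullet}}=0$. This is the familiar picture for a family of elliptic curves degenerating to a nodal curve.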
We define from now on $h^{B}_i:=\dim_{\mathbb{Q}} \gr_i^{W^B_{\bullet}}$ and let $h^{B}_{\max}:=\max\{h^B_j:0\leq j\leq 2n\}$. We note that the number $h$, the dimension of the local system $M_0R^n(f^{an}_\mathbb{C})_{*} \mathbb{Q}_{X^{an}_{\mathbb{C}}}|_{\Delta^{*}}$, is equal to $h^B_0$.
We note that while the matrix $N_z$ depends on the point $z$, the numbers $h^B_j$ do not.
\subsubsection*{The monodromy operator from inertia}
Let $s\in S(L)$, with $L/K$ finite, be a point of the G-admissible variation of $\mathbb{Q}$-HS given by the morphism $f:X\rightarrow S$.
Let $v\in \Sigma_{L,f}$ be a finite place of $L$. We then have the nilpotent endomorphism $N_v$, given by the action of some subgroup of finite index of the inertia group $I_{L_v}$, acting on the \'etale cohomology group $H^n_{\text{\'et}}(\bar{X}_{s,v},\mathbb{Q}_l)$, where $l\neq p(v)$.
Let $W^{\text{\'et}}_{\bullet}$ be the monodromy filtration defined on $H^n_{\text{\'et}}(\bar{X}_{s,v},\mathbb{Q}_l)$ via the action of the above $N_{v}$. We also let $h^{\text{\'et}}_{i}:= \dim_{\mathbb{Q}_l} \gr_{i}^{W^{\text{\'et}}_{\bullet}}$.
\subsection{$v$-adic proximity and comparison of operators}
Consider $f:X\rightarrow S$ some G-admissible variation of Hodge structures.
Throughout this subsection we consider $s\in S(L)$ a fixed point, where $L/K$ is some finite extension, and $v\in\Sigma_{L,f}$ some fixed finite place of $L$. We let $p=p(v)$ be the characteristic of the finite field $\mathcal{O}_{L_v}/m_v$ and we fix some $l\neq p$.
We assume that a good arithmetic model, in the sense of \ref{section:canmodels}, $(\tilde{S},\mathcal{C},D)$ over $\mathcal{O}_K$ exists for $S$ and that a good arithmetic model for the morphism $f$ exists over $\mathcal{O}_K$ with respect to this triple.
Motivated by the exposition in \S $4$ of \cite{pst} we prove the following lemma.
\begin{lemma}\label{comparisonlemma}
Under the above assumptions, if $s$ is $v$-adically close to $s_0$, then $h^{\text{\'et}}_i =h^B_{i+n}$ for all $i$, where $h^B_j$ and $h^{\text{\'et}}_j$ are as in \ref{section:reviewmonodromy}.
\end{lemma}
\begin{proof}
We start with a bit of notation following the exposition in \S $4$ of \cite{pst}. First of all, for any group $G$ we will denote by $G_{(l)}$ its maximal pro-$l$ quotient. As always we fix a punctured unit disk $\Delta^{*}\subset S^{an}$ centered at $s_0$.
We let $y$ and $y_0$ be the sections of the arithmetic pencil $\tilde{S}\times_{\mathcal{O}_K} \mathcal{O}_L \rightarrow \mathcal{O}_L$ whose generic fibers are the points $s$ and $s_0$ respectively. By \ref{lemmamultiply} we may assume without loss of generality that $y_{\bar{\mathbb{F}}_p} =y_{0,\bar{\mathbb{F}}_p}\in D(\bar{\mathbb{F}}_p)$. We also let $t$ be an element that cuts out $\tilde{s}_0\subset D$ in $\tilde{S}$. Since, by assumption, $(\tilde{s}_i)_{\bar{\mathbb{F}}_q}\neq (\tilde{s}_0)_{\bar{\mathbb{F}}_q}$ for all $s_i \in \Sigma_{S}$ with $i\neq 0$ and all primes $q$, this choice of $t$ yields \begin{center}
$\widehat{\mathcal{O}}_{\tilde{S}_v, y_{0,\bar{\mathbb{F}}_p}}\cong \mathcal{O}_{L_v} [[t]]$.
\end{center}
We have, by our assumptions, that the local monodromy $\pi_1(\Delta^{*}, z) $ acts unipotently on the fiber $(R^nf^{an}_{*}\mathbb{Z})_z$ for all $z\in \Delta^{*}$. Letting $\gamma_0\in \pi_1(\Delta^{*}, z) $ be a generator, we denote by $U_0\in \GL(H^n(X^{an}_z,\mathbb{Z}))$ the unipotent endomorphism it maps to via the local monodromy representation. We define $N_z$ to be the nilpotent logarithm of $U_0$.
By Grothendieck's quasi-unipotent action theorem we know that there exists a finite extension $F/L_v$ such that the Galois representation $\rho_l:G_{L_v}\rightarrow \GL(H^n_{\acute{e}t} (\bar{X}_{s,v}, \mathbb{Z}_l))$ restricts to a unipotent representation of the inertia group $I_F$. In other words $\rho_l|_{I_F}$ is given by $\sigma \mapsto \exp(N_v t_l(\sigma))$ with $N_v$ nilpotent with degree of nilpotency $\leq n+1$.
We let $M:=F^{ur}$ and $\mathcal{O}_M$ be its ring of integers. We consider the rings $R_1:= \mathcal{O}_M[[t]]\big[\frac{1}{t}\big]$, $R_2:=M[[t]]\big[\frac{1}{t}\big] $, and $R_3:=\mathbb{C}[[t]]\big[\frac{1}{t}\big]$. We note that, after fixing an inclusion $\mathcal{O}_M\hookrightarrow \mathbb{C}$, these define the following commutative diagram \begin{center}
\begin{tikzcd}
\spec R_3 \arrow[r] \arrow[d, "g_3"'] & \spec R_2 \arrow[r] \arrow[d, "g_2"] & \spec R_1 \arrow[d, "g_1"] \\
S_\mathbb{C}\arrow[r]& S_M\arrow[r] & \mathcal{C}_{\mathcal{O}_M}
\end{tikzcd}
\end{center}with $g_1$, $g_2$ and $g_3$ being \'etale.
In fact, note that the $\spec R_i$ are \'etale neighborhoods of the geometric point $\bar{y}_M=\bar{s}$. A priori, $\spec{R_1}$ is an \'etale neighborhood of $\bar{y}_M$ in $\tilde{S}_M$, but it does not intersect the divisor $D$, by construction and by our assumption that for all $s_i \in \Sigma_{S}$ and all primes $q$ we have $(\tilde{s}_i)_{\bar{\mathbb{F}}_q}\neq (\tilde{s}_0)_{\bar{\mathbb{F}}_q}$ when $i\neq 0$.\\
Since $y_{\bar{\mathbb{F}}_p}=y_{0,\bar{\mathbb{F}}_p}$, we get that $t$ pulls back to an element $t_M$ of the maximal ideal of $\mathcal{O}_M$ via $\spec(\mathcal{O}_M) \overset{y_{\mathcal{O}_M}}{\rightarrow } \tilde{S}_{\mathcal{O}_M}$. From this we get a morphism $f_1 : \mathcal{O}_M[[t]]\big[\frac{1}{t}\big] \rightarrow M$, since $v_M(t_M)>0$.
By Lemma $4.2$ of \cite{pst} we get an induced map \begin{center}
$\phi_{1} : G_M^{(p)} \rightarrow \pi_1^{\acute{e}t} (\spec R_1, \bar{y}_M)^{(p)}$,
\end{center} between the prime-to-$p$ Galois groups, whose image is completely determined by $v_M(t_M)>0$. In particular we get a map \begin{center}
$ \phi_{1,l} : (G_M)_{(l)} \rightarrow \pi_1^{\acute{e}t} (\spec R_1, \bar{y}_M)_{(l)}$.
\end{center}
With respect to the above embedding $\mathcal{O}_M\hookrightarrow \mathbb{C}$ we get a map $f_2:R_1\rightarrow R_3$ which induces an isomorphism of prime-to-$p$ Galois groups, and hence of their maximal pro-$l$ quotients, which we denote by \begin{center}
$\phi_{2,l}: \pi_1^{\acute{e}t} (\spec R_3, \bar{y}_M)_{(l)} \rightarrow \pi_1^{\acute{e}t} (\spec R_1, \bar{y}_M)_{(l)} $.
\end{center}We also consider the composition $\phi_l:= \phi_{2,l}^{-1}\circ \phi_{1,l}: (G_M)_{(l)}\rightarrow \pi_1^{\acute{e}t} (\spec R_3, \bar{y}_M)_{(l)} $.
Finally, we let $F:R_3\rightarrow R_3$ be the map defined by $t\mapsto t^{v_M(t_M)}$; by abuse of notation we also let $F:\spec(R_3)\rightarrow \spec(R_3)$ be the \'etale cover induced from $F$. Letting $\bar{y}_1$ be any geometric point in the fiber of $\bar{y}_M=\bar{s}$ over $F$, we then get from $F$ an induced morphism \begin{center}
$ \psi_l: \pi_1^{\acute{e}t} (\spec R_3, \bar{y}_1)_{(l)} \rightarrow \pi_1^{\acute{e}t} (\spec R_3, \bar{y}_M)_{(l)}$.
\end{center}We also note that by construction we have that $\psi_l$ has the same image as $\phi_l$.\\
On $S_{M}$ we have the lisse $l$-adic sheaf $\mathbb{V}_l:=( \underset{\leftarrow }{\lim } R^n(f_v)_{*} (\mathbb{Z}/l^i\mathbb{Z}))$. Its analytification $\mathbb{V}_l^{an}$ is nothing but the local system $R^nf^{an}_{*} \mathbb{Z}_{X^{an}_{\mathbb{C}}} \otimes_\mathbb{Z} \mathbb{Z}_l$ on $S^{an}_\mathbb{C}$, which is the $l$-adic completion of the $\mathbb{Z}$-local system that underlies the variation of $\mathbb{Z}$-HS we are studying. The assumption that good arithmetic models for the morphism $f$ exist over $\mathcal{O}_K$ then implies, see \ref{extendingthesheaff}, that we have that $\mathbb{V}_l$ extends to a lisse $l$-adic sheaf $\widetilde{\mathbb{V}}_l$ on $\mathcal{C}_{\mathcal{O}_M}$. We let $\widetilde{\mathbb{V}}_{l,1}$ be the pullback of this sheaf via $g_1$. Note that the pullback of $\mathbb{V}_l$ via the morphism $\spec(R_2) \overset{g_2}{\rightarrow} S_M$, which we will denote by $\mathbb{V}_{l,1}$, is nothing but the generic fiber of the lisse $l$-adic sheaf $\widetilde{\mathbb{V}}_{l,1}$. By abuse of notation we also denote by $\mathbb{V}_{l,1}$ the pullback of this last sheaf via the morphism $\spec(R_3)\rightarrow \spec(R_2)$ above. Finally, we denote by $\mathbb{V}_{l,2}$ the pullback of $\mathbb{V}_{l,1}$ via the map $F$.\\
For any $z\in \Delta^{*}$ we have the local monodromy representation given by $\rho: \pi_1(\Delta^{*},z)\rightarrow \GL((\mathbb{V}_l^{an})_z)$. By our assumptions this action is unipotent, given by $\gamma_0\mapsto U_0$, where $\gamma_0$ is a generator of $\pi_1(\Delta^{*},z)$. From the tower of \'etale covers $(\spec R_3)^{an}\overset{F^{an}}{\rightarrow } (\spec R_3)^{an}\overset{g_3^{an}}{\rightarrow } S^{an}_\mathbb{C}$ we get an inclusion $\pi_1(\spec(R_3)^{an}, {z}_2)\hookrightarrow \pi_1(\spec(R_3)^{an}, {z}_1)\hookrightarrow \pi_1(\Delta^{*},z)$ where $z_1\in (g_3^{an})^{-1}(z)$ and $z_2\in (F^{an})^{-1}(z_1)$. Letting $\gamma_i\in \pi_1(\spec(R_3)^{an}, {z}_i)$ for $i=1,2$ be generators of these groups we can identify these with $\gamma_0^{a_i}$, where $a_i\in\mathbb{Z}$ and $a_1|a_2$. Without loss of generality we may and do assume that $a_i>0$.
By construction we then have the following commutative diagrams for $i=1,2$\begin{center}
\begin{tikzcd}
\pi_1(\spec(R_3)^{an},z_i) \arrow[r,"\rho_i"] \arrow[d] & \GL( (\mathbb{V}_{l,i})^{an}_{z_i} ) \arrow[d] \\
\pi_1(\Delta^{*}, z) \arrow[r, "\rho"] & \GL( (\mathbb{V}_{l})_{z})
\end{tikzcd}
\end{center}from which we see that $\gamma_i$ maps to $U_{i,z}:=U_0^{a_i}$. In particular, the nilpotent logarithm $N_{i,z}$ of $U_{i,z}$ is $a_iN_z$.\\
\section{Conditions for $v$-adic proximity}\label{section:conditions}
Motivated by \ref{filtrationsandendomorphisms} we make the following definition.
\begin{defn}Let $f:X\rightarrow S$ be a G-admissible variation defined over the number field $K$. Let $s\in S(\bar{\mathbb{Q}})$ be a point of the variation whose associated algebra of Hodge endomorphisms is $D_s=M_{m_1}(D_1)\oplus \ldots\oplus M_{m_r}(D_r)$. Let us also define $F_i:= Z(D_i)$, $d_i^2:=[D_i:F_i]$, and $f_i:=[F_i:\mathbb{Q}]$.
We say that the point $s$ satisfies condition $\star$ if it satisfies \underline{any} of the following \begin{enumerate}
\item[$\star_{1}$] if we have that \begin{equation}\label{eq:starta2} \tag{$\star_{1}$}
\Sum{i=1}{r} m_if_i> \mu-\dim_{\mathbb{Q}} \im{N^B}.
\end{equation}
\item[$\star_{2}$] if there exists $i$ such that for the set\begin{center}
$\Pi^{i}_{D_s}:=\{l\in \Sigma_{\mathbb{Q},f} : \exists w\in\Sigma_{F_i,f}, \text{ with } w|l \text{ and } [F_{i,w}:\mathbb{Q}_l] >\frac{h^B_{\max}}{m_i} \}$
\end{center}
we have that
\begin{equation}\label{eq:starta1} \tag{$\star_{2}$}
|\Pi^{i}_{D_s}|\geq 2.
\end{equation}
\item[$\star_3$] if we have that $\exists i\in \{ 1,\ldots, r\}$ such that
\begin{enumerate}
\item $d_i m_i \geq h^B_{\max}$, and
\item for the sets $P^i_{D_s}:=\{ l\in \Sigma_{\mathbb{Q},f} : l \text{ is totally split in } F_i \}$ and $Q^{i}_{D_s} := \{ l\in \Sigma_{\mathbb{Q},f} : \exists w\in \Sigma_{F_i,f}\text{, }w|l \text{ and } \inv_w (D_i)\notin \mathbb{Z} \}$ we have that $|P^{i}_{D_s}\cap Q^{i}_{D_s}|\geq 2$.
\end{enumerate}
\item[$\star_{4}$] if for some $i$ we have that $D_i$ is a quaternion algebra, and letting
$R^i_{D_s}:= \{ l\in \Sigma_{\mathbb{Q},f}: \exists w\in \Sigma_{F_i,f}\text{, } w|l \text{, } \inv_w(D_i)\not\in \mathbb{Z} \text{, and } 4m_i[F_{i,w}:\mathbb{Q}_l] \nmid h^B_j \text{ }\forall j \}$, we have that $|R^i_{D_s} |\geq 2$.
\item[$\star_{5}$] if for some $i$ we have that $D_i$ is a quaternion algebra, and letting
$S^i_{D_s}:= \{ l\in \Sigma_{\mathbb{Q},f}: \exists w\in \Sigma_{F_i,f}, \text{ } w|l \text{, } \inv_w(D_i)\in \mathbb{Z} \text{, and } 2m_i[F_{i,w}:\mathbb{Q}_l] \nmid h^B_j \text{ }\forall j \}$, we have that $|S^i_{D_s} |\geq 2$.
\item[$\star_{6}$] if for some $i$ we have that $D_i$ is a quaternion algebra and for the above sets $|R^i_{D_s}\cup S^{i}_{D_s}|\geq 2$.
\item[$\star_7$] if for some $i$ we have that $D_i$ is of Type IV with $d_i^2=[D_i:F_i]$ and for the set \begin{center}
$T^i_{D_s}:= \{ l\in \Sigma_{\mathbb{Q},f}: \exists w\in \Sigma_{F_i,f}, \text{ } w|l \text{ and } m_id_i[F_{i,w}:\mathbb{Q}_l] \nmid h^B_j \text{ }\forall j \}$,
\end{center}we have that $|T^i_{D_s} |\geq 2$.
\end{enumerate}
\end{defn}
\begin{rmk}1. We remark that all conditions above depend only on data coming from the original G-admissible variation $f:X\rightarrow S$. They should be thought of as demanding that the point has ``a lot of endomorphisms''; this should be contrasted with the fact that the generic algebra of Hodge endomorphisms is just $\mathbb{Q}$.\\
2. The necessity of demanding that several of the sets of primes in these conditions have at least two elements arises from \ref{comparisonlemma}. As we will see in the proof of \ref{vadicproximity}, once we fix the place $v$, and by extension the prime $p$ with $v|p$, we need a condition such as those in $\star_2$-$\star_7$ to hold for some prime $l\neq p$ in order to apply the aforementioned result.\\
3. We note that the sets in conditions $\star_2$, $\star_5$, $\star_6$, and $\star_7$ may have infinitely many elements, while the sets in $\star_3$ and $\star_4$ have at most finitely many elements. This rests on the fact that $\inv_w(D_i)\notin \mathbb{Z}$ for only finitely many places $w$.\\
4. We note that, assuming that the extension $F_i/\mathbb{Q}$ is cyclic, condition $\star_{5}$ is equivalent to the existence of some $i$ for which $2m_i[F_i:\mathbb{Q}] \nmid h^B_j$ for all $j$.\\
5. The different treatment needed for general type IV algebras, addressed in $\star_7$, stems from the fact that even if we know that the restriction of the division algebra at some place $w$ is not of the form $M_t(F_{i,w})$ for some $t$, it might still be of the form $M_t(D')$ with $D'$ a central division algebra with $[D':F_{i,w}]>1$ and $t>1$. This issue does not appear in the case of quaternion algebras, since the only two possibilities are $D_i\otimes_{F_i} F_{i,w}\simeq M_{2} (F_{i,w})$ or $D_i\otimes_{F_i} F_{i,w}$ a quaternion algebra over $F_{i,w}$.
\end{rmk}
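As a standard illustration of this dichotomy (an example external to our setting): for $F_i=\mathbb{Q}$ and $D_i=\left(\frac{-1,-1}{\mathbb{Q}}\right)$ the Hamilton quaternions, one has \begin{center}
$\inv_w(D_i)=\frac{1}{2}\in\mathbb{Q}/\mathbb{Z}$ for $w\in\{2,\infty\}$, \quad while $D_i\otimes_{\mathbb{Q}}\mathbb{Q}_w\simeq M_2(\mathbb{Q}_w)$, i.e. $\inv_w(D_i)=0$, for every odd prime $w$.
\end{center}
In particular, the set of finite places with $\inv_w(D_i)\notin\mathbb{Z}$ is the singleton $\{2\}$ here, in accordance with point 3 of the remark.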
\subsection{Some linear algebraic lemmas}
To prove the result we want, first we will need some elementary lemmas from linear algebra and the theory of linear algebraic groups.
\begin{lemma}\label{goodunipotent}Let $N\in M_n(k)$, where $k$ is a field of characteristic $0$, be a non-zero nilpotent upper-triangular matrix with $\dim_k \im{N}=r$. Then there exist unipotent upper-triangular matrices $Q_L,Q_R\in \GL_n(k)$ such that the matrix $N_{red}:=Q_LNQ_R$ is strictly upper triangular with at most one non-zero entry in each row and column.
\end{lemma}
\begin{proof}We employ row and column reduction together with induction.
\end{proof}
\begin{rmk}
Another way of phrasing the above lemma is that $N_{red}=(\epsilon_{i,j})$ has exactly $r$ non-zero entries $\epsilon_{i,j}$, which we may take without loss of generality to be equal to $1$, and these entries lie in pairwise distinct rows and columns: if $\epsilon_{i,j},\epsilon_{i',j'}\neq 0$ and $(i,j)\neq (i',j')$, then $i\neq i'$ and $j\neq j'$.
\end{rmk}
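The reduction procedure behind the lemma can be checked numerically. The following sketch (in Python; the function name \texttt{reduce\_nilpotent} and the overall organisation are ours, not taken from any reference) clears each pivot column above its bottom-most non-zero entry by left multiplications with $I+cE_{i,\mathrm{piv}}$, $i<\mathrm{piv}$, and then clears the pivot row to the right by right multiplications with $I+cE_{j,j_2}$, $j<j_2$; both factors are unipotent upper-triangular, as the lemma requires.

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def reduce_nilpotent(N):
    """Return (QL, QR, Nred) with QL, QR unipotent upper-triangular and
    Nred = QL * N * QR having at most one non-zero entry in each row and
    each column.  N is assumed strictly upper-triangular (nilpotent)."""
    n = len(N)
    A = [[Fraction(x) for x in row] for row in N]
    QL, QR = identity(n), identity(n)
    for j in range(n):
        # pivot: the bottom-most non-zero entry of column j
        piv = max((i for i in range(n) if A[i][j] != 0), default=None)
        if piv is None:
            continue
        # clear column j above the pivot; each step is a left
        # multiplication by I + c*E_{i,piv} with i < piv (unipotent)
        for i in range(piv):
            if A[i][j] != 0:
                c = -A[i][j] / A[piv][j]
                for k in range(n):
                    A[i][k] += c * A[piv][k]
                    QL[i][k] += c * QL[piv][k]
        # clear row piv to the right of column j; each step is a right
        # multiplication by I + c*E_{j,j2} with j < j2 (unipotent)
        for j2 in range(j + 1, n):
            if A[piv][j2] != 0:
                c = -A[piv][j2] / A[piv][j]
                for r in range(n):
                    A[r][j2] += c * A[r][j]
                    QR[r][j2] += c * QR[r][j]
    return QL, QR, A
```

Since columns already processed have a single non-zero entry in a cleared pivot row, later operations never reintroduce entries there, which is the inductive step of the proof.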
\begin{lemma}\label{toricriterion}
Let $V$ be an $n$-dimensional vector space over an algebraically closed field $k$ of characteristic $0$ and let $N\in\End(V)$ be a nilpotent linear operator with $r:=\dim_{k}\im{N}>0$. Let $T$ be a sub-torus of the algebraic group $\GL(V)^{N}$ of automorphisms of $V$ commuting with $N$. Then $\dim_k T\leq n-r$.
\end{lemma}
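Before giving the proof, we record the smallest example: $V=k^2$ and $N=\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)$, so that $r=1$. A direct computation shows that \begin{center}
$\GL(V)^{N}=\left\{\begin{pmatrix} a & b\\ 0 & a\end{pmatrix} : a\in k^{\times},\ b\in k\right\}$,
\end{center}
whose maximal sub-torus is the scalar torus $\mathbb{G}_m$; its dimension is $1=n-r$, so the bound of the lemma is attained.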
\begin{proof}We may assume that the sub-torus $T$ is maximal in $G_{N}:=\GL(V)^{N}$. The torus $T$ will be contained in a maximal sub-torus $T_m$ of the group $\GL(V)$. From the theory of linear algebraic groups\footnote{The results we use from the theory of linear algebraic groups can be found in Chapter 6 of \cite{springer}.} all maximal tori of $\GL(V)$ are conjugate. Hence, there exists $P\in \GL(V)(k)$ such that $PT_mP^{-1}$ is the torus of diagonal matrices in $\GL(V)$, with respect to a fixed basis $\{\vec{e}_i:1\leq i\leq n\}$.
We notice that, setting $N_P:=PNP^{-1}$, we have that $P\GL(V)^{N}P^{-1}=\GL(V)^{N_P}$, and the sub-torus $T$ will be isomorphic to the sub-torus $PTP^{-1}$. The element $N_P$ is nilpotent with $r=\dim_{k}\im{N}=\dim_{k}\im{N_P}$. We may thus assume from now on, replacing $T$ by $PTP^{-1}$ and $N$ by $N_P$, that $T$ consists of diagonal matrices and is contained in the subgroup of diagonal matrices $\mathbb{G}_m^n\subset\GL_n$.
\subsection{Ruling out $v$-adic proximity}
With an eye towards ``globality'' of relations among the values of G-functions at a certain point, we establish the following proposition.
\begin{prop}\label{vadicproximity}
Let $f:X\rightarrow S$ be a G-admissible variation of $\mathbb{Q}$-HS defined over some number field $K$. Let $s\in S(L)$, where $L/K$ is some finite extension, and let $v\in \Sigma_{L,f}$ be a finite place of $L$.
If $s$ satisfies condition $\star$ then, assuming the Hodge conjecture holds and that a good arithmetic model exists for the morphism $f$, the point $s$ cannot be $v$-adically close to the degeneration $s_0$.
\end{prop}
\begin{proof} \textbf{Step 1:} Assume that $s$ is $v$-adically close to $s_0$. We then get by \ref{comparisonlemma} that $h^{\text{\'et}}_i =h^B_{i+n}$, assuming the existence of a good arithmetic model for $f$ over $\mathcal{O}_K$.
From \ref{endinert}, assuming the validity of the Hodge conjecture, we get that\begin{equation}\label{eq:ladicembed}
D_s\otimes_{\mathbb{Q}}\mathbb{Q}_l \hookrightarrow \End(H^n_{\text{\'et}} (\bar{X}_{s,v},\mathbb{Q}_l))^{N_{v}},
\end{equation}where $l\neq p(v)$. We are thus in a position to use \ref{filtrationsandendomorphisms} for the algebra $D_s\otimes \mathbb{Q}_l$ and the nilpotent endomorphism $N_v$ on the space $H^n_{\text{\'et}} (\bar{X}_{s,v},\mathbb{Q}_l)$.
From this we get that for all $1\leq i\leq r$ and all $w\in \Sigma_{F_i,f}$ with $w|l$ there exists $j(i,w)$ such that \begin{equation}\label{eq:ladicembed2}
M_{m_i}(D_i\otimes_{F_i} F_{i,w})\hookrightarrow M_{h^B_{j(i,w)} } (\mathbb{Q}_l). \end{equation}
\textbf{Step 2:} We start by ruling out points that satisfy $\star_{1}$. This follows from \ref{toricriterion}. Indeed, the dimension of the maximal subtorus of $(D_s\otimes \bar{\mathbb{Q}}_l)^{\times}$ is $\Sum{i=1}{r} m_i f_i$. On the other hand, the maximal subtorus of $\GL(H^n_{\text{\'et}} (\bar{X}_{s,v},\mathbb{Q}_l))^{N_{v}}$ has dimension $\leq \mu -\dim_{\mathbb{Q}_l} \im{N_v}$. The result then follows from \ref{comparisonlemma}, which implies that $\dim_{\mathbb{Q}_l} \im{N_v}=\dim_\mathbb{Q} \im{N^B}>0$.\\
Assume now that $s$ satisfies condition $\star_{2}$. We then get that there exists $i$, a prime $l\neq p(v)$, and a place $w\in \Sigma_{F_i,f}$ with $w|l$ for which $[F_{i,w}:\mathbb{Q}_l]m_i>h^B_{\max}$. This contradicts the validity of \eqref{eq:ladicembed2}. Indeed, the maximal commutative semisimple subalgebra of $M_{m_i}(D_i\otimes_{F_i} F_{i,w})$ has dimension $\geq [F_{i,w}:\mathbb{Q}_l]m_i$ over $\mathbb{Q}_l$, while that of $M_{h^B_{\max}}(\mathbb{Q}_l)$ has dimension $h^{B}_{\max}$ over $\mathbb{Q}_l$.\\
Assume that $s$ satisfies condition $\star_{3}$ and choose $i$ as in $\star_{3}$. Then we have that there exists $l\in\Sigma_{\mathbb{Q},f}$ that is totally split in $F_i$ with $l\neq p(v)$. Therefore we have that $F_{i,w}=\mathbb{Q}_l$ for all $w\in\Sigma_{F_i,f}$ with $w|l$.
Also by $\star_{3}$ we know that we can find $w\in \Sigma_{F_i,f}$ with $w|l$ such that $\inv_w (D_i)\notin \mathbb{Z}$. Since once again by assumption we have $m_id_i\geq h^B_{\max}$, we get that \eqref{eq:ladicembed2} is an isomorphism in this case, with $h^B_{j(i,w)}=h^B_{\max}$. This would imply that $\inv_w(D_i)=0\in\mathbb{Q}/\mathbb{Z}$, contradicting the above assumption.\\
Assume that $s$ satisfies condition $\star_{4}$. Then by assumption there exists a prime $l\neq p(v)$ such that there exists $w\in \Sigma_{F_i,f}$ for which $D_{i,w}=D_i\otimes_{F_i} F_{i,w}$ is a quaternion algebra over $F_{i,w}$. If a $j(i,w)$ satisfying \eqref{eq:ladicembed2} existed for the simple summand $M_{m_i}(D_{i,w})$ of $D_s\otimes_\mathbb{Q} \mathbb{Q}_l$, we would have $m_i\dim_{\mathbb{Q}_l} D_{i,w}| h^B_{j(i,w)}$ which contradicts our assumptions. Indeed, such an embedding would imply an isomorphism of $M_{m_i}(D_{i,w})$-modules $\mathbb{Q}_l^{h^B_{j(i,w)}}\cong( D_{i,w}^{m_i})^r$ for some $r$. Comparing $\mathbb{Q}_l$-dimensions the contradiction follows.\\
The argument for $\star_{5}$ and $\star_{6}$ is practically identical to that of condition $\star_{4}$.\\
Finally, the proof in the case where $s$ satisfies condition $\star_7$ is practically identical to that of $\star_4$ but has a small twist. Let us assume that $|T^i_{D_s}|\geq 2$ and let $l\in T^i_{D_s}$ and $w|l$ be such that $l\neq p(v)$ and $m_id_i [F_{i,w}:\mathbb{Q}_l]\not|h^B_j$ for all $j$.
If $\inv_w(D_i)\neq 0$ we have that $D_i\otimes_{F_i} F_{i,w}\simeq M_{r}(D')$ with $D'$ a division algebra with center $F_{i,w}$. Let $d':=\sqrt{[D':F_{i,w}]}$. If an index $j(i,w)$ satisfying \eqref{eq:ladicembed2} existed, by the same argument as above we would have that $m_ir \dim_{\mathbb{Q}_l} (D')\mid h^B_{j(i,w)}$. Since $rd'=d_i$ and $\dim_{\mathbb{Q}_l} (D')= d'^2 [F_{i,w}:\mathbb{Q}_l]$ we get the contradiction we wanted.
The case $\inv_w(D_i)=0$ follows from the same argument, though we do not have to introduce a new division algebra $D'$, since $D_i\otimes_{F_i} F_{i,w}\simeq M_{d_i} (F_{i,w})$.
\end{proof}
\begin{rmk}The proof in the case where condition $\star_7$ is satisfied shows that $\star_7$ is weaker than the strongest condition we can actually impose on these algebras.
In fact the proof shows that we would need to guarantee the impossibility of the occurrence of \begin{center}
$rd'^2 \cdot [F_{i,w}:\mathbb{Q}_l]\: |\: h^B_j$ for some $j$,
\end{center}where $r$ and $d'$ are as in the proof of $\star_7$.
\end{rmk}
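For the reader's convenience we spell out the dimension count behind $\star_4$, which recurs in $\star_5$--$\star_7$. An embedding as in \eqref{eq:ladicembed2} makes $\mathbb{Q}_l^{h^B_{j(i,w)}}$ a module over the simple algebra $M_{m_i}(D_{i,w})$, whose unique simple module is $D_{i,w}^{m_i}$. Hence \begin{center}
$\mathbb{Q}_l^{h^B_{j(i,w)}}\simeq (D_{i,w}^{m_i})^{\oplus r}$ for some $r\geq 1$, \quad so \quad $h^B_{j(i,w)}= r\, m_i \dim_{\mathbb{Q}_l} D_{i,w} = 4\,r\, m_i\, [F_{i,w}:\mathbb{Q}_l]$,
\end{center}
and in particular $4m_i[F_{i,w}:\mathbb{Q}_l]$ divides $h^B_{j(i,w)}$, which is precisely what $\star_4$ forbids.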
\section{Proof of \ref{maintheorem}}\label{section:proofoftheresult}
Finally, we combine all parts of our exposition to prove \ref{maintheorem}.
\begin{proof}[Proof of \ref{maintheorem}:] Let $s\in S(L)$ be a point satisfying the conditions in \ref{maintheorem} for the variation $\mathbb{V}=R^nf^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}}$ where $L/K$ is some finite extension. We let $\xi:=x(s)$, where $x$ is the local parameter of $S'$ at $s_0$ with respect to which the $y_i$ are written as power series.
By \ref{propendodr} there exists a finite extension $\hat{L}/L$ such that $D_s$ acts on $H^n_{DR}(X_{s}\times_{L}\hat{L}/\hat{L})$. From \ref{propdegreebound} we also know that $\hat{L}$ may be chosen so that $[\hat{L}:L]$ is bounded only in terms of $\mu:=\dim_{\mathbb{Q}} H^n(X^{an}_{s},\mathbb{Q})$.
Let $y_1,\ldots , y_{h\mu}$ be the G-functions that comprise the first $h$ columns of the relative $n$-period matrix associated to the morphism $f$. We then have the polynomials \eqref{eq:actualrelation} with coefficients in $\hat{F}_s$ and degree $\leq [\hat{L}:\mathbb{Q}]$. By \ref{proofnottrivial} we know that these polynomials define relations among the values of the G-functions in question at $\xi$ that are non-trivial, as long as there exists at least one archimedean place $v\in \Sigma_{\hat{L},\infty}$ for which $s$ is $v$-adically close to $s_0$.
Consider now $v\in \Sigma_{\hat{L},f}$ to be any finite place of $\hat{L}$. By \ref{vadicproximity} we know that $s$ cannot be $v$-adically close to $s_0$, in other words that $|\xi|_v\geq \min \{ 1, R_v(\vec{y})\}$, where $R_v(\vec{y})$ is the local radius of convergence of the $y_i$.
Now we split into two cases.\\
\textbf{Case 1:} For all archimedean places $v\in \Sigma_{\hat{L}, \infty}$ we have that $|\xi|_v\geq \min \{ 1, R_v(\vec{y})\}$.\\
Combining this assumption with the above result we get that \begin{center}
$h(\xi^{-1}) \leq \rho(\vec{y})$,
\end{center}
where $\rho(\vec{y})$ is the global radius of the collection of power series $y_i$. Combining Lemma 2 of I.$\S 2.2$ of \cite{andre1989g} with the Corollary of VI.$\S 5$ of loc.cit., we get that $h(s)=h(\xi)=h(\xi^{-1}) \leq \rho(\vec{y})<\infty$.
This concludes this case.\\
\textbf{Case 2: } There exists at least one archimedean place $v\in \Sigma_{\hat{L},\infty}$ for which $s$ is $v$-adically close to $s_0$.\\
In this case the relation defined by the polynomials in \eqref{eq:actualrelation} among the values at $\xi$ of the $y_i$ is by construction global, since there are no other places $v\in \Sigma_{\hat{L},f}$ for which $s$ is $v$-adically close to $s_0$.
Since we know that the relation \eqref{eq:actualrelation} is both non-trivial and global we get from \ref{hasse} that \begin{equation}\label{eq:heightboundprefinal}
h(\xi)\leq c_1(\vec{y}) \delta^{3\mu h-1}(\log \delta +1),
\end{equation}where $\delta$ is the degree of the polynomial \eqref{eq:actualrelation} in $\bar{\mathbb{Q}}[x_1,\ldots x_{h\mu}]$.
By construction of \eqref{eq:actualrelation} we know that $\delta\leq [\hat{L}:\mathbb{Q}]=[\hat{L}:L] \cdot[L:\mathbb{Q}]$. On the other hand \ref{propdegreebound} gives the bound $[\hat{L}:L]\leq ( (6.31) \mu^2)^{\mu^2} $. Combining these remarks with \eqref{eq:heightboundprefinal} we get that there exist positive constants $C_1, C_2$, independent of the point $s$, such that \begin{equation}\label{eq:heightboundfinal}
h(\xi)\leq C_1 ([L:\mathbb{Q}]+1)^{C_2},
\end{equation}as we wanted.\\
The result follows by combining the two above cases, or simply by replacing $C_1$ in \eqref{eq:heightboundfinal} by $\max\{C_1, \rho(\vec{y})\}$.
\end{proof}
\section{Complex Multiplication and other examples}\label{section:examples}
We construct examples of possible algebras of endomorphisms of Hodge structures where the conditions for points $s$ to be in the set $\Sigma$ of \ref{maintheorem} are easy to check. We start with the case of CM-Hodge structures and then construct infinite families of possible such algebras of all possible types, i.e. type I-IV in Albert's classification \ref{albert}.
\subsection{Hodge Structures with complex multiplication}\label{section:reviewcm}
Hodge structures with complex multiplication, or simply CM Hodge structures, play the same role for Hodge structures of weight $\geq 2$ that abelian varieties with complex multiplication play for weight-$1$ Hodge structures. For an introduction to these we point the interested reader to \cite{ggkbook} and \cite{moonennotes}.
The main ingredient we will need about CM Hodge structures is the following lemma that describes their algebra of Hodge endomorphisms. Following \cite{ggkbook} we write ``CMpHS'' as an abbreviation of the term ``polarized Hodge structure with complex multiplication''.
\begin{lemma}\label{cmstructure}
Let $V$ be a CMpHS. Then there is a unique decomposition of $V$ into simple strongly CMpHS's $V=V_1^{\oplus m_1}\oplus\cdots\oplus V_r^{\oplus m_r}$. In particular for the algebra $D$ of Hodge endomorphisms of $V$ we have that \begin{center}
$D\cong \Bigsum{i=1}{r}M_{m_i}(K_i)$,
\end{center}where $K_i\cong D_{i}$, the algebra of Hodge endomorphisms of $V_i$, is a CM field with $[K_i:\mathbb{Q}]=\dim_\mathbb{Q} V_i$.
\end{lemma}
\begin{proof}See \cite{ggkbook} Ch.V and especially the facts on page $195$. For a proof see \cite{moonennotes} where the necessary machinery is developed. \end{proof}
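The weight-$1$ case of \ref{cmstructure} recovers a classical fact: for a simple abelian variety $A$ of dimension $g$ with complex multiplication, the polarized Hodge structure $V=H^1(A^{an},\mathbb{Q})$ is a simple strongly CMpHS whose algebra of Hodge endomorphisms is a CM field $K$ with \begin{center}
$[K:\mathbb{Q}]=2g=\dim_{\mathbb{Q}} V$,
\end{center}
i.e. the case $r=1$, $m_1=1$ of the decomposition above.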
Motivated by this we make the following definition.
\begin{defn}Let $\mathbb{V}$ be a polarized variation of $\mathbb{Q}$-HS over some base $T$. We say that the point $t\in T(\mathbb{C})$ is a \textbf{CM point of the variation}, or a \textbf{special point of the variation}, if the Hodge structure $\mathbb{V}_t$ is a CMpHS.
\end{defn}
\begin{rmks} 1. Consider a CMpHS $(V,\phi)$ as in \ref{cmstructure}. Let $E$ be the maximal commutative semi-simple subalgebra of the algebra $D$. We have by \ref{cmstructure} that \begin{equation}\label{eq:strcmmmaxss}
E=K_1^{m_1}\times \cdots\times K_r^{m_r}.
\end{equation}We also have that $\dim_\mathbb{Q} E=\Sum{j=1}{r} m_j [K_j:\mathbb{Q}]$. Noting that
$\dim_\mathbb{Q} V= \Sum{j=1}{r} m_j \dim_\mathbb{Q} V_j$ and that $\dim_\mathbb{Q} V_j =[K_j:\mathbb{Q}]$ we get that \begin{equation}\label{eq:dimcmmaxss}
\dim_\mathbb{Q} V =\dim_\mathbb{Q} E.\end{equation}
2. The above property guarantees that CM-points satisfy the conditions of \ref{constpseudocm}. Indeed, from the above and \ref{remedy} we have that CM-points satisfy \eqref{eq:conditionoflemma1} and so we get non-trivial relations among the values of the G-functions we study at $\xi=x(s)$.
The globality of these relations follows from the fact that CM-points satisfy condition $\star_{2}$.
\end{rmks}
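As a minimal illustration of \ref{cmstructure} and of \eqref{eq:dimcmmaxss} we recall the standard weight $1$ example, included here only for orientation: let $A$ be an elliptic curve with complex multiplication by an imaginary quadratic field $K$ and set $V:=H^1(A^{an},\mathbb{Q})$. Then $V$ is a simple CMpHS with $r=1$, $m_1=1$, and $D=E=K$, so indeed \begin{center}
$\dim_\mathbb{Q} V=2=[K:\mathbb{Q}]=\dim_\mathbb{Q} E$.
\end{center}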
\subsubsection{CM-points have potentially good reduction}
Given a CM abelian variety $A$ defined over a number field $K$ it is well known, see \cite{serretate}, that $A$ will have potentially good reduction at each finite place of $K$. The linear algebraic lemma \ref{toricriterion} has an interesting consequence: it shows that a similar picture holds for smooth projective varieties defined over a number field $K$ whose polarized Hodge structure is CM. Even though the terms ``good reduction'' and ``potentially good reduction'' cannot be given their usual geometric meaning in general, since we no longer have N\'eron models at our disposal, they still make sense from the point of view of Galois representations.
\begin{prop}\label{cminertiaistrivial}
Let $f:X\rightarrow S$ be a G-admissible variation of $\mathbb{Q}$-HS defined over the number field $K$. Let $L/K$ be a finite extension, let $s\in S(L)$ be a CM point for this variation, and let $v\in \Sigma_{L,f}$ be a finite place of $L$.
Assume the Hodge conjecture holds. Then the $l$-adic Galois representation of $G_{L_v}$ on $H^n_{\text{\'et}}(\bar{X}_{s,v}, \mathbb{Q}_l)$ has potentially good reduction for all $l\neq p(v)$.
\end{prop}
\begin{proof}
Let $D_s$ be the algebra of Hodge endomorphisms at $s$. We set $V_s:=H^{n}(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q})$ to be the $\mathbb{Q}$-HS corresponding to $s$ and let $\mu:=\dim_\mathbb{Q} V_s$.
From \ref{cmstructure} we know that $V_s$ decomposes as $V_s=\Bigsum{i=1}{r} V_i^{\oplus m_i}$, so that $D_s\cong \Bigsum{i=1}{r}M_{m_i}(K_i)$, where the $V_i$ are irreducible CM polarized $\mathbb{Q}$-HS whose algebras of Hodge endomorphisms $K_i$ are CM fields with $n_i:=[K_i:\mathbb{Q}]=\dim_{\mathbb{Q}} V_i$. In particular, $\mu=\Sum{i=1}{r}m_i n_i$.
Let $p=p(v)$ be the characteristic of the residue field $\kappa(v):=\mathcal{O}_{L_v}/m_{L_v}$ and fix a prime $l\neq p$. We then notice that the $\bar{\mathbb{Q}}_l$-algebra $\bar{D}_{s,l}:=D_s\otimes_\mathbb{Q} \bar{\mathbb{Q}}_l\cong (D_s\otimes_\mathbb{Q} \mathbb{Q}_l)\otimes_{\mathbb{Q}_l} \bar{\mathbb{Q}}_l$ is such that $\bar{D}_{s,l}^{\times}$ contains all of the closed $\bar{\mathbb{Q}}_l$-points of a $\mu$-dimensional torus.
From \ref{unipotentaction} we know that the inertia group $I_{L_v}$ acts quasi-unipotently on $H^n_{\text{\'et}} (\bar{X}_{s,v},\mathbb{Q}_l)$. Therefore up to a finite extension of $L_v$ we may and do assume that it in fact acts unipotently. In this case we get an associated nilpotent endomorphism $N_v$, as we saw in our earlier discussion, whose exponential determines the action of the inertia group.
In this case, from \ref{endinert}, we have that\begin{center}
$\bar{D}_{s,l}^{\times}\hookrightarrow \GL(H^{n}_{\text{\'et}}(\bar{X}_{s,v},\mathbb{Q}_l)\otimes_{\mathbb{Q}_l}\bar{\mathbb{Q}}_l )^{N_v}$,
\end{center}where $N_v$ is the above nilpotent matrix associated to the action of the inertia group. The result now follows from \ref{toricriterion}. Indeed, if $N_v\neq 0$ then \ref{toricriterion} gives $\dim_{\bar{\mathbb{Q}}_l}\bar{D}^{\times}_{s,l}\leq \mu -\rank(N_v)<\mu$, contradicting the fact that $\bar{D}_{s,l}^{\times}$ contains a $\mu$-dimensional torus. Hence $N_v=0$, the inertia group acts trivially after a finite extension, and the representation has potentially good reduction.
\end{proof}
\subsubsection*{Height bounds via the purity argument}
\begin{theorem}\label{application1}Let $f:X\rightarrow S$, $S'$, $h$, $A_0$ and $s_0$ be as above. We assume that $f$ has a good arithmetic model over $\mathcal{O}_K$.
We consider the set \begin{center}
$\Sha(S)_1:= \{ P \in S(\bar{\mathbb{Q}}) : P \text{ satisfies the conditions in \ref{remedy} and condition } \star \}$.
\end{center}Then, there exist positive constants $C_1$, $C_2$ such that $h(P)\leq C_1 [K(P):K]^{C_2}$ for all $P\in \Sha(S)_1$.
\end{theorem}
\subsection*{Height bounds via Gabber's lemma}
The second result of this type we prove follows the ideas in Andr\'e's exposition. Namely, we use Andr\'e's argument that employs Gabber's lemma and the theory of N\'eron models to establish the globality of the relations we create. The drawback of this method is that we have to replace conditions $\star$ with weaker ones and make assumptions on the reduction of the abelian variety $A_0$ at all finite places.
We assume $A_0$ is isogenous to $A_{0,1}^{n_1} \times \ldots \times A_{0,k}^{n_k}$, where the $A_{0,j}$ are simple and pairwise non-isogenous abelian varieties. We may and do assume, without loss of generality, that all of these abelian varieties are defined over the field $K$. Furthermore, for our argument to work, we need to assume that $A_0$ has potentially good reduction at all finite places of the field $K$.
\begin{defn}With notation as above, we say that the point $P\in S(\bar{\mathbb{Q}})$ satisfies condition $(G)$ if the following hold true:
\begin{enumerate}
\item for all $1\leq i\leq r $ we have that $M_{m_i}(D_i)\not\hookrightarrow M_h(\mathbb{Q})$, and
\item for all $v\in \Sigma_{K,f}$ we have that there exists no pair $(i,j)$ such that $M_{m_i}(D_i)\hookrightarrow \End^{0}((A_{0,j}^{n_j})_{\mathbb{F}_{q(v)}} )$.
\end{enumerate}
\end{defn}
[\textbf{Note to self:} If an abelian variety over some number field $K$ is \underline{simple} and has good reduction at some place $v$, its reduction may still factor into abelian subvarieties. Similarly, if $Hom_{\bar{\mathbb{Q}}}(A,B)=0$ for two abelian varieties defined over $K$, is the same true for their reductions at a prime of good reduction? I think the answer to this question is no. The above condition $(G)$ is therefore insufficient, or at least needs to be altered somewhat.]
\begin{theorem}\label{application2}Let $f:X\rightarrow S$, $S'$, $h$, $A_0$ and $s_0$ be as above.
We consider the set \begin{center} $\Sha(S)_2:= \{ P \in S(\bar{\mathbb{Q}}) : P \text{ satisfies the conditions in \ref{remedy} and condition } (G) \}$.
\end{center}Then, there exist positive constants $C_1$, $C_2$ such that $h(P)\leq C_1 [K(P):K]^{C_2}$ for all $P\in \Sha(S)_2$.
\end{theorem}
\begin{rmks}1. Our main point of divergence from Andr\'e's result is that we require neither that the degeneration at $s_0$ be completely multiplicative nor any condition on the parity of the dimension $g$.
In contrast to Andr\'e, however, we do not deal with abelian schemes whose generic algebra of endomorphisms is larger than $\mathbb{Q}$.\\
2. In both of these results the restrictions on the points we are considering are much more stringent than the very elegant condition ``$\End^{0}(X_s) \not\hookrightarrow M_g(\mathbb{Q})$'' of Andr\'e.
This necessity arises from two issues. The first is that we have fewer G-functions once we assume that the degeneration is not completely multiplicative. This justifies essentially the necessity for the conditions in \ref{remedy}, and is evident by the construction in \ref{section:pseudocmrelations}.
The second issue has to do with establishing the globality of the relations we created. Just as before we do this by establishing the impossibility of ``$v$-adic proximity'' of certain points to the degeneration. In order to guarantee this we need more complicated conditions on the points we consider if our degeneration is ``less aggressive'' than a completely multiplicative one.
\end{rmks}
Following our earlier exposition we divide what follows into sections. In \ref{section:abeliangfunctions} we prove a variant of Theorem $1$ in Ch. IX, \S $4$ of \cite{andre1989g} that guarantees the existence of G-functions among the relative periods. We also establish the analogue of \ref{changeofplace} in the case of abelian schemes, which guarantees that the polynomials created in \eqref{eq:actualrelation} define non-trivial relations among the values of the aforementioned G-functions at points that are archimedeanly close to $s_0$ and satisfy the conditions in \ref{remedy}.
\section{Existence of G-functions}\label{section:abeliangfunctions}
We fix $S$, $S'$, $X$, $X'$, $s_0$, $h$ the dimension of the toric part of the degeneration at $s_0$, and $A_0$ as in the above discussion all defined over some number field $K$. We also consider $x$ a local parameter of $S'$ at the point $s_0$ as before.
In this section we fix an embedding $K\hookrightarrow \mathbb{C}$ and consider a unit disk $\Delta\subset S'^{an}_{\mathbb{C}}$ centered at the point $s_0$ as before. By working with the connected N\'eron model we are guaranteed that the local monodromy around $s_0$ acts unipotently.[Need further information.]
Our main aim is to establish the following theorem.
\begin{theorem}\label{abeliangfunctions}
For the above objects there exists a basis $\omega_1,\ldots,\omega_{2g}$ of $H^1_{DR}(X/S)$ over some dense open subset $U$ of $S$, such that the Taylor expansions in $x$ of the relative periods $\frac{1}{2\pi i} \int_{\gamma_j}^{} \omega_i$ are G-functions for $1\leq j\leq h$, where the $\gamma_j$ constitute a local frame of the sheaf $M_0R_1 (f^{an}_{\mathbb{C}})_{*}(\mathbb{Q})|_{V}$, where $V\subset \Delta^{*}$ is some open analytic subset.
\end{theorem}
Note to self: Might be better to set $k=\bar{K}$ and work over this field from the get go as Andr\'e does.
\subsection{Proof of \ref{abeliangfunctions}}
We follow closely the proof of Theorem 1, Ch. X, \S 4 of \cite{andre1989g} which we are trying to generalize.
\begin{proof}Consider a unit disk $\Delta\subset S'^{an}_{\mathbb{C}}$ centered at the point $s_0$ as above. We start by working locally at the disk $\Delta$. We set $Y:=X'^{an}_{\Delta}$ and $Y^{*}:= X^{an}_{\Delta^{*}}$.
As in the proof of loc. cit. we have the exponential exact sequence of sheaves on the disk $\Delta$\begin{equation}\label{eq:exposequence}
0\rightarrow \Gamma \rightarrow \text{\underline{Lie}}Y/\Delta\rightarrow Y\rightarrow 0,
\end{equation}with $\Gamma:= R_1 (f_{\Delta}')_{*}(\mathbb{Z})$.(??)
Let us also consider $\Gamma^f$, the disjoint union of all the sections of $\Gamma$ passing through the fiber of the sheaf $\Gamma$ at the center $0$ of $\Delta$. As shown in loc. cit. the sheaf $\Gamma$ and the disjoint union $\Gamma^f$ satisfy the following properties:
\begin{enumerate}
\item we have that $\Gamma_{\Delta^{*}}$ is a lattice bundle in $\text{\underline{Lie}} Y^{*}/\Delta^{*}$, in particular $(\Gamma_{\Delta^{*}})_z=H_1(Y_z,\mathbb{Z})$ for all $z\in \Delta^{*}$(??),
\item for all $z\in \Delta^{*}$ we have that $\Gamma^f_z\otimes_{\mathbb{Z}} \mathbb{Q}= H_1(Y_z,\mathbb{Q})^{\pi_1(\Delta^{*},z)}$, and
\item $H^0(\Delta, \Gamma\otimes \mathbb{Q}) =H^0(\Delta^{*}, \Gamma_{\Delta^{*}}\otimes \mathbb{Q})$.
\end{enumerate}
Since by assumption we have that $Y_0=(\mathbb{G}_m^h)_\mathbb{C}^{an} \times (A_0)_{\mathbb{C}}^{an}$ we get that $\dim_\mathbb{Q} H^0(\Delta, \Gamma\otimes \mathbb{Q}) =\dim_{\mathbb{Q}} \Gamma_0\otimes \mathbb{Q}=2(g-h)+h=2g-h$.\\
We consider the coherent $\mathcal{O}_{S}$-module with connection $(H^1_{DR}(X/S),\nabla)$. By(??) Deligne's work on differential equations with regular singular points \cite{deligneregular} and Griffiths' regularity theorem, it is known that there exists a canonical extension $\mathcal{E}_{can}$ of this sheaf, i.e. a locally free coherent $\mathcal{O}_{S'}$-module restricting to it on $S$.
We note that in loc. cit. it is shown (??) that $e^{*}\Omega^{1}_{Y/\Delta}$ is a subbundle of $\mathcal{E}_{can}|_{\Delta}$.\\
We now let $U\subset S'$ be a Zariski neighborhood of $s_0$ and consider an ordered basis $\omega_1,\ldots, \omega_{g}$(??) of $e^{*}\Omega^{1}_{X'/S'}|_U$. We order the aforementioned basis $\omega_i$ so that the first $h$ elements $\omega_1,\ldots,\omega_h$ are such that $\omega_i(0)\in H^0(\mathbb{G}_m^h, \Omega^1)$ and the rest of the elements are such that $\omega_i(0)\in H^0(A_0,\Omega^1)$.[Can I do this?]\\
Similarly, we consider $\gamma_1,\ldots,\gamma_{2g-h}$ a basis of $H^0(\Delta, \Gamma\otimes \mathbb{Q})$ such that $\gamma_i(0)$ are a basis of $H_1 ((\mathbb{G}_m^h)^{an},\mathbb{Q})$ for $1\leq i\leq h$ and $\gamma_i(0)$ are a basis of $H_1(A_0^{an},\mathbb{Q})$ for $h+1\leq i\leq 2g-h$.
With these choices we get the relative periods we are interested in, which will be functions on the non-empty analytic subset $U^{an}\cap \Delta$, given by $\frac{1}{2\pi i} \int_{\gamma_j}^{} \omega_i$ for $1\leq i,j\leq h$. We also note that by construction we will have that $\frac{1}{2\pi i}\int_{\gamma_j}^{} \omega_i =0$ for $1\leq j\leq h$ and $h+1\leq i\leq 2g-h$.
Since we are focusing only on the toric part of the degeneration, the relative periods with $1\leq i,j\leq h$ specialize at $0$ to the periods of the $\omega_i(0)\in H^0(\mathbb{G}_m^h,\Omega^1)$. These are just residues that belong to our original field $K$.
The non-degeneracy of the pairing $H^0(\mathbb{G}_m^h,\Omega^1)\times H_1(\mathbb{G}_m^h,\mathbb{Q})\rightarrow K$ induced by the residue guarantees that the $h\times h$ matrix $(\frac{1}{2\pi i} \int_{\gamma_j}^{} \omega_i)_{1\leq i,j\leq h}$ is in $\GL_h(K)$. Then, as in loc. cit.(??), Nakayama's Lemma allows us to extend the basis $\omega_1,\ldots,\omega_{2g-h}$ to a basis of sections $\omega_1,\ldots,\omega_{2g}$ of $\mathcal{E}_{can}$ over some possibly smaller Zariski neighborhood $U'$ of $s_0$ so that $\int_{\gamma_j}^{} \omega_i=0$ for $1\leq j\leq h$ and $i\geq h+1$.
In other words, extending the $\gamma_i$ to a frame $\{\gamma_j\}_{1\leq j\leq 2g }$ of $R_1(f^{an})_{*}(\mathbb{Q}_{X^{an}_{\mathbb{C}}})|_{V}$, for $V\subset U^{an}\cap \Delta^{*}$, we get that with these choices the relative period matrix $(\frac{1}{2\pi i} \int_{\gamma_j}^{}\omega_i)$ will be such that its first $h$ columns are of the form
\[
P_h=\begin{bmatrix}
\left(\dfrac{1}{2\pi i}\int_{\gamma_j}^{}\omega_i\right)_{1\leq i\leq h}\\[6pt]
0\\
\vdots\\
0
\end{bmatrix},
\]
where the first $h$ rows consist of the periods $\frac{1}{2\pi i}\int_{\gamma_j}^{}\omega_i$ and the remaining $2g-h$ rows are zero.
Let $K\{x\}$ be the henselization of the local ring $K[x]_{(x)}$ and let $\partial=x\frac{d}{dx}$, viewed as a derivation on $K\{x\}$.
By the theory of the Gauss-Manin connection it is well known that $\nabla_{\partial}$ is represented by a matrix $G\in M_{2g}(K\{x\})$ with respect to the basis $\omega_i$ of $\mathcal{E}_{can}$ that we chose.
We also know that [Should I write more details here?] $P_h$ satisfies the differential system \begin{equation}\label{eq:differentialequation}
\partial X=G\cdot X,
\end{equation}as does the full relative period matrix and all of its columns.[Should I expand on this??]\\
By construction(??)\footnote{Here Andr\'e misquotes Deligne and doesn't explain further on the unipotence issue. I suspect it has to do with the fact that we chose the connected N\'eron model, essentially killing the torsion of the local monodromy.} we get that $G(0)$, which coincides with the residue of the connection at $0$, is nilpotent. This allows us to establish the following claim.\\
\textbf{Claim 1:} There exists a matrix $D\in M_{2g\times h}(K)$ such that $P_h$ can be written as \begin{equation}\label{eq:claimandreproof}
P_h=P_G\cdot D,
\end{equation}where $P_G:=Y_G \exp(G(0)\log x )$ and $Y_G$ is the normalized uniform part of the solution of \eqref{eq:differentialequation}.\\
\begin{proof}[Proof of Claim 1]We quickly review some aspects from Ch. III of loc.cit. The system \eqref{eq:differentialequation} is equivalent to the two systems
\begin{equation}\label{eq:equivalentsystems}
\left.
\begin{aligned}
\partial (Z^{-1}X)=CZ^{-1}X \\
\partial Z= G\cdot Z-Z\cdot C
\end{aligned}
\right\}.
\end{equation}
In other words $X= Z\exp(C \log x )$, from the first of the two systems. The fact that $G(0)$ is nilpotent allows us to choose $C=G(0)$. The normalized uniform part $Y_G$ of $X$ is then the unique solution in $\GL_{2g}(K[[x]])$ of the second system in \eqref{eq:equivalentsystems} that satisfies $Y_G(0)=I$.
The existence of the normalized uniform part, and the fact that it lies in $\GL_{2g}( K[[x]])$, follow from the discussion in Ch. III, \S $1.4$ of loc. cit.\\
We then know that there must exist a matrix $D\in M_{2g\times h}(\mathbb{C})$ such that $P_h=Y_G \exp(G(0)\log x )\cdot D$. All we need to do now is to show that $D$ has coefficients in the field $K$.
This follows from the fact that $P_h(0)\in M_{2g\times h}(K)$, that $G(0)\in M_{2g}(K)$, and that $Y_G \in \GL_{2g}(K[[x]])$.[?? Add more details to make sure that nothing goes wrong.]
\end{proof}
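To illustrate the shape of the solutions appearing in Claim 1, consider the following toy computation, which is not tied to the specific geometric setup above: if $2g=2$ and \begin{center}
$G(0)=\begin{pmatrix} 0&1\\0&0\end{pmatrix}$,
\end{center}then $G(0)^2=0$, and so \begin{center}
$\exp(G(0)\log x)=I+G(0)\log x=\begin{pmatrix}1&\log x\\0&1\end{pmatrix}$.
\end{center}Any solution of \eqref{eq:differentialequation} is then of the form $X=Y_G\exp(G(0)\log x)\cdot D$ with $Y_G\in \GL_2(K[[x]])$ and $Y_G(0)=I$, so that the multivaluedness of $X$ is concentrated entirely in the factor $\exp(G(0)\log x)$.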
The previous claim guarantees that $P_h\in M_{2g\times h}(K[[x]])$.\\
What is left(??) is to show that the entries of this matrix satisfy ``geometric differential equations''.
\end{proof}
\subsection{Non-trivial relations}
Need to establish the equivalent of Lemma p. $209$ of Andr\'e or \ref{changeofplace} in my file.
\subsection{Some more complicated examples}
We start by highlighting some examples where \ref{maintheorem} applies. We fix for the remainder some general notation as in the main part of our exposition.
We consider $f:X\rightarrow S$ a G-admissible variation of Hodge structures with fibers of odd dimension $n$ and generic special Mumford-Tate group $Sp(\mu,\mathbb{Q})$, where $\mu:=\dim_\mathbb{Q} H^n(X_z^{an},\mathbb{Q})$ for any $z\in S^{an}$ and let $h:= \dim_{\mathbb{Q}} \im{(N^{*})^{n}}$.
As before, for a point $s\in S(\bar{\mathbb{Q}})$ we let $V_s:=H^n(X^{an}_s,\mathbb{Q})$ and assume that the decomposition of $V_s$ into simple polarized sub-$\mathbb{Q}$-HS is given by $V_1^{m_1}\oplus \ldots \oplus V_r^{m_r}$. We also let $D_s=M_{m_1}(D_1)\oplus \ldots \oplus M_{m_r}(D_r)$ be its algebra of Hodge endomorphisms. \\
For our process to kick in we need to have that
\begin{equation}\label{eq:examplescondition1}
h> \frac{\dim_\mathbb{Q} V_j}{[Z(D_j):\mathbb{Q}]} \text{ for some }j\text{, or that }
\end{equation}
\begin{equation}\label{eq:examplescondition2}
h\geq \min\{ \frac{\dim_\mathbb{Q} V_i}{[Z(D_i):\mathbb{Q}] } : i \text{ such that } D_i=\End_{HS}(V_i) \text{ is of type IV } \}.
\end{equation}
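To make these two conditions concrete, consider the following hypothetical numerical setup, meant only to illustrate the inequalities: suppose $r=1$, $\dim_\mathbb{Q} V_1=8$, and $D_1$ is a quaternion algebra whose center $Z(D_1)$ is a field of degree $4$. Then \eqref{eq:examplescondition1} reads \begin{center}
$h>\frac{\dim_\mathbb{Q} V_1}{[Z(D_1):\mathbb{Q}]}=\frac{8}{4}=2$,
\end{center}so it holds as soon as $h\geq3$, while, if $D_1$ is of type IV, \eqref{eq:examplescondition2} already holds when $h=2$.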
\subsubsection*{Some examples of CM-fields and totally real fields}
\textbf{The fields $F_{0,\beta}$ and $F_\beta$:} Let $\beta\in \mathbb{N}$ and let $p$ be a prime with $p\equiv1\mod 2\beta$. It is well known that the field $\mathbb{Q}(\zeta_p)$ is a cyclic extension of $\mathbb{Q}$ with totally real subfield $\mathbb{Q}(\zeta_p+\zeta_p^{-1})$. We can therefore find a subfield $F_{0,\beta}$ of $\mathbb{Q}(\zeta_p+\zeta_p^{-1})$ with $[F_{0,\beta}:\mathbb{Q}]=\beta$. Note that by construction this extension is also cyclic and totally real.
By Frobenius density it follows\footnote{See \cite{janusz} Ch. IV, Corollary $5.4$.} that the set \begin{center}
$In(F_{0,\beta}):=\{ l\in \Sigma_{\mathbb{Q},f}: l \text{ is inert in } F_{0,\beta}\}$
\end{center} is infinite. Fix $l_1,l_2\in In(F_{0,\beta})$ with $l_1\neq l_2$. Then from quadratic reciprocity we can find a prime $q=q(l_1,l_2)$ such that in the extension $F_\beta:=F_{0,\beta}(i\sqrt{q})$ the prime ideals $l_j\mathcal{O}_{F_{0,\beta}}$, $j=1$, $2$, split.\\
In conclusion we get that for the field $F_\beta$ constructed above there are distinct primes $l_1$, $l_2\in\mathbb{Q}$ whose splitting in $F_\beta$ is given by
\begin{center}
$l_j\mathcal{O}_{F_\beta} =w_{j,1}\cdot w_{j,2}$.
\end{center}In particular we get that $[(F_\beta)_{w_{j,i}}:\mathbb{Q}_{l_j}]=\beta$ for all $i,$ $j$ and that, trivially by construction, if $\sigma$ denotes complex conjugation in $F_\beta/\mathbb{Q}$, we have $\sigma w_{j,1}=w_{j,2}$.
Note that the family of CM-fields constructed above is infinite, for fixed $\beta$, by varying either the infinitely many pairs $(l_1,l_2)$ or the infinite choices $q(l_1,l_2)$.
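For concreteness we record one member of this family for $\beta=2$; the specific numerical choices here are only illustrative. Take $p=13\equiv 1\mod 4$, so that $\mathbb{Q}(\zeta_{13}+\zeta_{13}^{-1})$ has degree $6$ over $\mathbb{Q}$ and its unique quadratic subfield is $F_{0,2}=\mathbb{Q}(\sqrt{13})$. The squares modulo $13$ are $\{1,3,4,9,10,12\}$, so the primes $l_1=5$ and $l_2=7$ are inert in $F_{0,2}$. Choosing for example $q=3$ we obtain the CM field \begin{center}
$F_2=\mathbb{Q}(\sqrt{13},i\sqrt{3})$,
\end{center}and one checks locally, using the fact that every element of $\mathbb{F}_{l}^{\times}$ is a square in $\mathbb{F}_{l^2}^{\times}$, that each $l_j\mathcal{O}_{F_2}$ splits as $w_{j,1}\cdot w_{j,2}$ with $[(F_2)_{w_{j,i}}:\mathbb{Q}_{l_j}]=2=\beta$ and $\sigma w_{j,1}=w_{j,2}$.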
\subsubsection*{Examples of algebras of Type IV}
Consider $F_\beta$ a field as those constructed above. Let $D$ be a central division algebra over $F_\beta$ and let $d^2=[D:F_\beta]$. For such an algebra to be of type IV in Albert's classification it needs to satisfy the following conditions:\begin{enumerate}
\item for any finite place $v\in \Sigma_{F_\beta,f}$ we have that $\inv_v(D)+\inv_{\sigma(v)}(D)=0$, and
\item $\inv_v(D)=0$ for all such places that satisfy $\sigma (v)=v$.
\end{enumerate}
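As a sketch of how these two constraints interact with the places constructed above, consider a quaternion algebra $D$ over $F_\beta$ ramified exactly at the four finite places $w_{1,1}$, $w_{1,2}$, $w_{2,1}$, $w_{2,2}$, so that \begin{center}
$\inv_{w_{j,i}}(D)=\frac{1}{2}\in\mathbb{Q}/\mathbb{Z}$ for $1\leq i,j\leq 2$.
\end{center}Since $\sigma w_{j,1}=w_{j,2}$ we get $\inv_{w_{j,1}}(D)+\inv_{\sigma(w_{j,1})}(D)=\frac{1}{2}+\frac{1}{2}=0$ in $\mathbb{Q}/\mathbb{Z}$, so condition $(1)$ holds; condition $(2)$ is vacuous since no ramified place is fixed by $\sigma$; and the global reciprocity constraint $\sum_{v}\inv_v(D)=0$ is satisfied as well.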
By construction of our fields $F_\beta$ we get places $w_{j,i}$, $1\leq i,j\leq 2$, that come in pairs with $\sigma (w_{j,1})=w_{j,2}$ for $j=1,$ $2$. We want, in view of the conditions $\star$ introduced in \ref{section:conditions}, to work with algebras $D$ that ramify at these finite places. For that reason we consider the following set of central division algebras, or CDA for short, with center $F_\beta$
\begin{center}
$\mathcal{D}_{IV}(\beta,d):= \{D: CDA/F_\beta, \text{ } [D:F_\beta]=d^2, \text{ } \inv_{w_{j,i}} (D)\not\in \mathbb{Z} \text{ for all } i,j \}$.
\end{center}We note that the set $\mathcal{D}_{IV}(\beta,d)$ actually depends on the numbers $\beta$, $l_j$, $q(l_1,l_2)$, and $d$ so it would be perhaps more accurate to denote this by $\mathcal{D}_{IV}(d,\beta,l_1,l_2,q(l_1,l_2))$, though we avoid this for notational brevity.
We note that for all choices of $\beta$, $l_j$, and $q(l_1,l_2)$ as above the set $\mathcal{D}_{IV}(\beta,2)$, i.e. the set of quaternion algebras satisfying the above conditions, is non-empty. This follows from the classification theorem of quaternion algebras over global fields, see \cite{vigneras} Ch. III, Theoreme $3.1$.\\
Let $f:X\rightarrow S$ be as above and let $s\in S(\bar{\mathbb{Q}})$ and assume that one of the algebras $D_k$ that appear in the decomposition of the algebra of Hodge endomorphisms $D_s$ at the point $s$ is an element of $\mathcal{D}_{IV}(\beta,d)$ for some choice of $(\beta,d,l_1,l_2, q(l_1,l_2))$. Then we can find simple conditions to check whether $s$ is in the set $\Sigma $ of \ref{maintheorem}.
Indeed, assuming that \begin{equation}\label{eq:cor1condition0}
2\beta\geq \frac{\dim_\mathbb{Q} V_k}{h}
\end{equation}is enough to guarantee the validity of \eqref{eq:examplescondition2}. After this we just need to check the validity of at least one of the conditions in \ref{section:conditions}. In our case, by construction of the fields $F_\beta$, conditions $\star_4$ and $\star_7$ translate into easy-to-check conditions, mainly thanks to the fact that $[(F_\beta)_{w_{j,i}}:\mathbb{Q}_{l_j}]=\beta$.
In more detail, by our construction in the case $d=2$ condition $\star_4$ follows from \begin{equation}\label{eq:star4typeiv}
4\beta m_k\not| h^B_j\text{ for all } j,
\end{equation}where $h^B_j$ are as in \ref{section:reviewmonodromy}. Condition $\star_7$, only applicable in our case when $d=d_k\geq 3$, follows from \begin{equation}\label{eq:star7typeiv}
m_kd\beta\not| h^B_j\text{ for all } j.
\end{equation}
Finally, for the case $d=1$, so that $D_k=F_\beta$, or in other words the case where $V_k$ is a CM-HS while the other $V_{t}$ are arbitrary polarized sub-HS of $V_s$, it is easy to create a condition analogous to the conditions $\star$ created in \ref{section:conditions}. Indeed, it is easy to check, using arguments as in \ref{vadicproximity}, that the condition \begin{equation}\label{eq:star4variantcm}
m_k \beta\not| h^B_j\text{ for all }j,
\end{equation}is enough to guarantee the impossibility of $v$-adic proximity for all finite places $v\in\Sigma_L$, where $L=K(s)$ with our usual notation.\\
With these observations in mind we define $\Sigma_{IV}\subset S(\bar{\mathbb{Q}})$ to be the set that consists of the points $s\in S(\bar{\mathbb{Q}})$ whose corresponding algebra of Hodge endomorphisms satisfies the above hypothesis, i.e. for some $k$ we have that $D_k\in\mathcal{D}_{IV}(\beta,d)$ for some choice of $(\beta,d,l_1,l_2, q(l_1,l_2))$, condition \eqref{eq:cor1condition0} holds, and condition \eqref{eq:star4typeiv} holds if $d_k=2$, or \eqref{eq:star7typeiv} holds if $d_k\geq3$, or \eqref{eq:star4variantcm} holds if $d_k=1$.
By construction, the points in $\Sigma_{IV}$ satisfy the conditions needed to lie in the set $\Sigma$ of \ref{maintheorem}. We thus have the following corollary.
\begin{cor} \label{corollary1}Let $f:X\rightarrow S$ be a morphism over $K$ defining a G-admissible variation of $\mathbb{Q}$-HS satisfying the conditions of \ref{maintheorem}. Let $\Sigma_{IV}$ be the above set of points.
Then, there exist constants $C_{1}$, $C_2>0$ such that for all $s\in\Sigma_{IV}$ we have\begin{center}
$h(s)\leq C_1 [K(s):K]^{C_2}$,
\end{center}where $h$ is a Weil height on $S'$.
\end{cor}
\begin{exam}
The simplest example of this nature that we could find is the following:\\
Assume that $\mu=\dim_{\mathbb{Q}} V_s=16$ and that $h=h_0^B=4$ for a G-admissible variation with $n=3$ satisfying the conditions of \ref{maintheorem}.
In this case we let $\beta=2$ and $d=2$ above. In other words we can consider points for which $D_s$ is a quaternion algebra over a CM-field constructed as above. Note that \eqref{eq:cor1condition0} is satisfied, leaving us with checking $8\not| h^B_j$ for all $j$.
We have $h^B_0=h^B_6=4$, so all we need in order to check the validity of \eqref{eq:star4typeiv} is to make sure that $8\not| h^B_j$ for $j=1,\: 2$, and $3$. To establish this we note that $W_3^B\neq W_0^B$ and $W_3^B\neq W_5^B$ in the notation of \eqref{eq:weightmonodromyfiltration}, which can be shown for example using the description of the $W_i^B$ in \cite{steenbrinkzucker}, remark $(2.3)$. Since in our case $\dim_\mathbb{Q} W_5^B=12$ and $\dim_\mathbb{Q} W_0^B=4$ the result follows.
\end{exam}
\subsubsection*{Algebras of types I-III}
The totally real fields $F_{0,\beta}$ with $[F_{0,\beta}:\mathbb{Q}]=\beta$ that we created above help us construct convenient examples in which to check the conditions in \ref{section:conditions}, mainly since they are cyclic. In fact, every such field constitutes a type I algebra in Albert's classification.\\
By the aforementioned classification of quaternion algebras over number fields, see \cite{vigneras} or \cite{voight}, we have a bijection between the set
\begin{center}
$\{ \text{Quaternion algebras over }F_{0,\beta} \text{ up to isomorphism} \} $
\end{center} and the set\footnote{Normally we would have to make sure that the subsets $P$ in question do not contain any complex archimedean places. Since our fields are totally real this condition is simply vacuous.} \begin{center}
$\{ P \subset \Sigma_F \text{ finite}: |P|\equiv 0\mod 2 \}$
\end{center}given by $B\mapsto \ram(B)$, with $\ram(B)$ the set of places over which the quaternion algebra $B$ ramifies.
With this in mind we define, in parallel to the type IV case above, the following sets of quaternion algebras $D/F_{0,\beta}$:
\begin{center}
$\mathcal{D}_{II}(\beta):= \{D: \ram(D) \cap\Sigma_{F,\infty}=\emptyset,\: w_{j,i}\in \ram(D) \text{ for all } j,i\}$
\end{center}
\begin{center}
$\mathcal{D}_{III}(\beta):= \{D: \Sigma_{F,\infty}\subset \ram(D),\: w_{j,i}\in \ram(D) \text{ for all } j,i\}$.
\end{center}
\begin{rmk}
We note that by the aforementioned classification theorem it follows that these sets are in fact infinite, for fixed $\beta$ and pair of primes $(l_1,l_2)$.
\end{rmk}
Note that for these algebras we will have by construction the following:\begin{enumerate}
\item $[(F_{0,\beta})_{w_{j,i}}:\mathbb{Q}_{l_j}]= [F_{0,\beta}:\mathbb{Q}]=\beta$ for all $1\leq i,j\leq 2$, and
\item for $D\in \mathcal{D}_{III}(\beta)\cup \mathcal{D}_{II}(\beta)$ we will have that $\inv_{w_{j,i}}(D)\neq 0$.
\end{enumerate}
Given these, it is much easier to check the validity of the conditions for a point $s\in S(\bar{\mathbb{Q}})$ to be in the set $\Sigma$ of \ref{maintheorem}, assuming that one of the algebras $D_k$ appearing in the decomposition of the algebra of Hodge endomorphisms $D_s$ is an element of $\mathcal{D}_{III}(\beta)\cup \mathcal{D}_{II}(\beta)$, or even that $D_k=F_{0,\beta}$ for some $\beta$.
Indeed, assume that for some $k$ we have $D_k\in \mathcal{D}_{III}(\beta)\cup \mathcal{D}_{II}(\beta)$. Then, condition \eqref{eq:examplescondition1} translates to simply checking \begin{equation}\label{eq:cor2condition0}\beta>\frac{\dim_{\mathbb{Q}}V_k}{h}.
\end{equation}
On the other hand, checking $\star_4$, which is the strongest out of the conditions in \ref{section:conditions}, becomes straightforward. Indeed, letting $h^B_j$ be the dimensions of the quotients resulting from the weight monodromy filtration as in \ref{section:reviewmonodromy}, $\star_4$ in this case follows from
\begin{equation}\label{eq:star4simpler1}
4m_k \beta\not| h^B_j\text{ for all }j.
\end{equation}
In the case where $D_k=F_{0,\beta}$ it is easy to check, with the same arguments as in the proof of \ref{vadicproximity}, that the condition \begin{equation}\label{eq:star4varianttotreal}
m_k \beta\not| h^B_j\text{ for all }j,
\end{equation}is enough to guarantee the impossibility of $v$-adic proximity for all finite places $v\in\Sigma_L$, where $L=K(s)$ with our usual notation.
With the above in mind, for our fixed morphism $f:X\rightarrow S$, we consider the set $\Sigma_{I-III}\subset S(\bar{\mathbb{Q}})$ consisting of the points $s\in S(\bar{\mathbb{Q}})$ for which the following holds: writing the corresponding algebra of Hodge endomorphisms as $D_s=M_{m_1}(D_1)\oplus \ldots \oplus M_{m_r}(D_r)$, there exist $1\leq k\leq r$ and $\beta$, $l_1$, and $l_2$ as above such that \eqref{eq:cor2condition0} holds and either $D_k\in \mathcal{D}_{III}(\beta)\cup \mathcal{D}_{II}(\beta)$ and \eqref{eq:star4simpler1} holds, or $D_k=F_{0,\beta}$ and \eqref{eq:star4varianttotreal} holds.
In this case, \ref{maintheorem} applies to such points and we have the following corollary.
\begin{cor}\label{corollary2}Let $f:X\rightarrow S$ be a morphism over $K$ defining a G-admissible variation of $\mathbb{Q}$-HS satisfying the conditions of \ref{maintheorem}. Let $\Sigma_{I-III}$ be the above set of points.
Then, there exist constants $C_{1}$, $C_2>0$ such that for all $s\in\Sigma_{I-III}$ we have\begin{center}
$h(s)\leq C_1 [K(s):K]^{C_2}$,
\end{center}where $h$ is a Weil height on $S'$.
\end{cor}
\begin{rmks}1. We note that in both \ref{corollary1} and \ref{corollary2} the conditions imposed on the points $s$ revolve around only one of the division algebras $D_k$ that appear in the decomposition of the algebra of Hodge endomorphisms $D_s$. The rest of the algebras $D_t$ with $t\neq k$ could have arbitrary properties.\\
2. We could simplify the situation by considering points $s$ for which the algebra $D_s$ is equal to one of the algebras constructed in the above examples, i.e. we are in the case where $r=m_1=1$ and $D_s$ is a central simple algebra itself.
\end{rmks}
\section{A short review of G-functions in Arithmetic Geometry}\label{section:reviewgfunctions}
G-functions were first introduced by Siegel in \cite{siegel}. We start with a short review of G-functions and we list some of their main properties.
\begin{defn}
Let $K$ be a number field and let $y=\Sum{n=0}{\infty}a_n x^n\in K[[x]]$. Then $y$ is called a \textbf{G-series at the origin} if the following are true:
\begin{enumerate}
\item $\forall v\in \Sigma_{K,\infty}$ we have that $i_v(y)\in \mathbb{C}_v[[x]]$ defines an analytic function around $0$,
\item there exists a sequence $(d_n)_{n\in\mathbb{N}}$ of natural numbers such that
\begin{itemize}
\item $d_n a_m\in \mathcal{O}_K$ for all $m\leq n$,
\item there exists $C>0$ such that $d_n\leq C^n$ for all $n\in \mathbb{N}$,
\end{itemize}
\item $y$ satisfies a linear homogeneous differential equation with coefficients in $K(x)$.
\end{enumerate}
\end{defn}
Examples of G-series at the origin are elements of $\bar{\mathbb{Q}}(x)$ without a pole at $0$, the expansion of $\log(1+x)$ at $0$, and any element of $\bar{\mathbb{Q}}[[x]]$ which is algebraic over $\bar{\mathbb{Q}}(x)$.
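For concreteness, here is a sketch checking the three conditions of the definition for the logarithm example (the choice of $d_n$ and the bound are ours, but standard):

```latex
% log(1+x) as a G-series at the origin:
y=\log(1+x)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\,x^{n},
\qquad a_n=\frac{(-1)^{n+1}}{n}\in\mathbb{Q}.
% (1) every i_v(y) defines an analytic function on |x|_v<1;
% (2) take d_n=\operatorname{lcm}(1,\ldots,n), so that d_n a_m\in\mathbb{Z}
%     for all m\leq n, and d_n\leq C^n for a suitable C>e
%     by the prime number theorem;
% (3) differentiating gives the linear ODE (1+x)y''+y'=0 over \mathbb{Q}(x).
```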
We note, see \cite{dwork}, that we can naturally define ``\textit{G-series at} $\zeta$'', for any $\zeta \in \mathbb{C}$. We also remark that the number field $K$ can be replaced by $\bar{\mathbb{Q}}$ without problems thanks to the third condition, which implies that the $a_i$ are all in some finite extension of $\mathbb{Q}$. Finally, we note that the set of G-series at $\zeta$ forms a ring.
\begin{defn}A \textbf{G-function} is a multivalued locally analytic function $y$ on $\mathbb{C}\backslash {S}$, with $|S|<\infty$, such that
for some $\zeta\in\mathbb{C}\backslash {S}$, $y$ can be represented by a G-series at $\zeta$.\end{defn}
Thanks to the Theorem of Bombieri-Andr\'e and the Theorem of Chudnovsky we know that the global nature of a G-function is in fact very much dependent on the fact that it can locally be written as a G-series. That is why, essentially following \cite{andre1989g}, we identify the two notions, especially since we will only be interested in power series centered at the origin.\\
For more on G-functions we direct the interested reader to the excellent introductory text \cite{dwork} and the more advanced \cite{andre1989g}.
\subsection{A Hasse Principle for G-functions}\label{section:hasseprinciple}
The main tool we will need from the theory of G-functions is a theorem of Andr\'e, that generalizes work of Bombieri in \cite{bombg}, which plays the role of a ``Hasse Principle" for G-functions. First we need some definitions. For the rest of this section consider $y_0,\ldots,y_{m-1}$ to be G-functions with coefficients in some number field $K$. We also define $Y:= (y_0,\ldots,y_{m-1})\in K[[x]]^{m}$, we fix some homogeneous polynomial $p\in K[t_1,\ldots,t_m]$, and a $\xi\in K$.
\begin{defn}
1. We say that a relation $p(y_0(\xi),\ldots,y_{m-1}(\xi))=0$ \textbf{holds $v$-adically for some place $v$ of $K$} if \begin{center}
$i_v(p)(i_v(y_0)(i_v(\xi)),\ldots,i_v(y_{m-1})(i_v(\xi)))=0$.
\end{center}
2. Such a relation is called \textbf{non-trivial} if it does not come by specialization at $\xi$ from a homogeneous relation of the same degree with coefficients in $K[x]$ among the $y_i$. Respectively, we call it \textbf{strongly non-trivial} if it does not occur as a factor of a specialization at $\xi$ of a homogeneous irreducible relation among the $y_i$ of possibly higher degree.\\
3. A relation $p(y_0(\xi),\ldots,y_{m-1}(\xi))=0$ is called \textbf{global} if it holds $v$-adically for all places $v$ of $K$ for which $|\xi|_v<\min \{1, R_v(Y) \}$, where $R_v(Y)$ denotes the $v$-adic radius of convergence of $Y$.\end{defn}
\begin{theorem}[Hasse Principle for G-functions,\cite{andre1989g}, Ch VII, \S5.2]\label{hasse}Assume that $Y\in\bar{\mathbb{Q}}[[x]]^m$ satisfies the differential system $\frac{d}{dx}Y=\Gamma Y$ where $\Gamma\in M_m(\bar{\mathbb{Q}}(x))$ and that $\sigma(Y)<\infty$. Let $\Sha_{\delta}(Y)$, resp. $\Sha_\delta'(Y)$, denote the set of ordinary points or apparent singularities $\xi\in\bar{\mathbb{Q}}^*$ where there is some non-trivial, resp. strongly non-trivial, and global homogeneous relation of degree $\delta$.
Then, \begin{center}
$h(\Sha_\delta(Y))\leq c_1(Y) \delta^{3(m-1)}(\log \delta +1)$, and
\end{center}\begin{center}
$h(\Sha_\delta'(Y))\leq c_2(Y) \delta^{m}(\log \delta +1)$.
\end{center}In particular any subset of $\Sha_\delta(Y)$ with bounded degree over $\mathbb{Q}$ is finite.
\end{theorem}
\begin{rmk}
The quantity $\sigma(Y)$ is called the size of $Y$; G-functions have finite size.\footnote{For this fact and the definition of the notion of ``size'' of a power series see \cite{andre1989g}, Chapter I.}
\end{rmk}
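To illustrate the shape of the hypothesis in \ref{hasse}, here is a toy instance (ours, not drawn from \cite{andre1989g}): the vector assembled from the G-functions $1$ and $\log(1+x)$ satisfies a first-order system of exactly the required form,

```latex
Y=\begin{pmatrix}y_0\\ y_1\end{pmatrix}
 =\begin{pmatrix}1\\ \log(1+x)\end{pmatrix},
\qquad
\frac{d}{dx}Y=\Gamma Y,
\qquad
\Gamma=\begin{pmatrix}0 & 0\\ \frac{1}{1+x} & 0\end{pmatrix}
\in M_2(\bar{\mathbb{Q}}(x)),
```

since $y_0'=0$ and $y_1'=\frac{1}{1+x}=\frac{1}{1+x}\,y_0$.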
\subsection{Periods and G-functions}\label{section:periodsandgfunctions}
Our primary interest in the theory of G-functions stems from the connection between G-functions and relative periods. We give a brief review of the results in \cite{andre1989g} that highlight this connection together with some basic facts and definitions that we will use later on.\\
Let $T$ be a smooth connected curve over some number field $k\subset {\mathbb{C}}$, $S=T\backslash\{s_0\}$, where $s_0\in T(k)$ is some closed point, and let $x$ be a local parameter of the curve $T$ at $s_0$.
We also consider $f:X\rightarrow S$ a proper smooth morphism and we let $n=\dim X-1$. We then have the following isomorphism of $\mathcal{O}_{S_{\mathbb{C}}^{an}}$-modules\begin{center}
$P^{\bullet}_{X/S}:
H^{\bullet}_{DR}(X/S)\otimes_{\mathcal{O}_S}\mathcal{O}_{S_{\mathbb{C}}^{an}}\rightarrow R^{\bullet}f^{an}_{*}\mathbb{Q}_{X^{an}_\mathbb{C}}\otimes_{\mathbb{Q}_{S^{an}_\mathbb{C}}}\mathcal{O}_{S^{an}_\mathbb{C}}$.\end{center}
In what follows we will be focusing on the isomorphism $P^{n}_{X/S}$, which from now on we will simply denote by $P_{X/S}$. We also let $\mu=\dim_\mathbb{Q} H^{n}(X_z^{an},\mathbb{Q})$ for any $z\in S(\mathbb{C})$.
This isomorphism is the relative version of Grothendieck's isomorphism between algebraic de Rham and Betti cohomology and it can be locally represented by a matrix. Namely, if we choose a basis $\omega_i$ of $H^{n}_{DR}(X/S)$ over some affine open subset $U\subset S$ and a frame $\gamma_{j}$ of $R_nf_{*}^{an}\mathbb{Q}_{X^{an}_\mathbb{C}}$ over some open analytic subset $V$ of the analytification $U^{an}_{\mathbb{C}}$, $P_{X/S}$ is represented by a matrix with entries of the form $\int_{\gamma_j}\omega_i$.
\begin{defn}We define the \textbf{relative n-period matrix} (over $V$) to be the $\mu\times\mu$ matrix
\begin{center}
$\bigg(\frac{1}{(2\pi i)^n} \int_{\gamma_j}\omega_i \bigg)$.
\end{center}Its entries will be called the \textbf{relative n-periods}.
\end{defn}
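For orientation (a standard special case, not needed in the sequel): for a family of elliptic curves one has $n=1$ and $\mu=2$, and with a basis $\omega_1,\omega_2$ of the de Rham cohomology and a frame $\gamma_1,\gamma_2$ of the homology the definition reads

```latex
P=\frac{1}{2\pi i}
\begin{pmatrix}
\int_{\gamma_1}\omega_1 & \int_{\gamma_2}\omega_1\\
\int_{\gamma_1}\omega_2 & \int_{\gamma_2}\omega_2
\end{pmatrix},
```

whose entries recover, up to the factor $\frac{1}{2\pi i}$, the classical periods of the family.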
A result we will need in what follows guarantees the existence of G-functions among the relative n-periods under the hypothesis that the morphism $f$ extends over all of $T$. Namely, let us assume $f$ extends to a projective morphism $f_T:X_T\rightarrow T$ with $X_T$ a smooth $k$-scheme, such that $Y:=f^{-1}(s_0)$ is a union of smooth transversally crossing divisors $Y_i$ entering the fiber with multiplicity $1$.
Under these assumptions we know, see \cite{peterssteenbrink} Corollary $11.19$, that the local monodromy is unipotent. Let $\Delta $ be a small disk embedded in $T^{an}$ and centered around $s_0$. We let $2\pi iN^{*}$ be the logarithm of the local monodromy acting on the sheaf $R_n(f^{an}_{\mathbb{C}})_{*}(\mathbb{Q})|_{\Delta^{*}}$.
\begin{defn}We denote the image of the map $(2\pi iN^{*})^n$ by $M_0 R_n(f^{an}_\mathbb{C})_{*}(\mathbb{Q})|_{\Delta^{*}}$. We call $M_0$-$n$-period any relative $n$-period over a cycle $\gamma$ in $M_0 R_n(f^{an}_\mathbb{C})_{*}(\mathbb{Q})|_{\Delta^{*}}$.
\end{defn}
By the formalism of the limit Hodge structure we have that for all $z\in \Delta^{*}$ the group $\pi_1(\Delta^{*},z)$ acts unipotently on the fiber $(R_n(f^{an}_\mathbb{C})_{*}(\mathbb{Q})|_{\Delta^{*}})_z$. We also get that, letting $2\pi iN^{*}_z$ be the nilpotent logarithm of the image of a generator of $\pi_1(\Delta^{*},z)$ via the monodromy representation, $(M_0 R_n(f^{an}_\mathbb{C})_{*}(\mathbb{Q})|_{\Delta^{*}})_z=\im{(2\pi iN^{*}_z)^n}$.
\begin{theorem}[\cite{andre1989g}, p.185]\label{existence} There exists a basis of sections $\omega_i$ of $H^n_{DR}(X/S)$ over some dense open subset of S, such that for any section $\gamma$ of $M_0 R_n(f^{an}_\mathbb{C})_{*}(\mathbb{Q})|_{\Delta^{*}}$ the Taylor expansions in $x$ of the relative $M_0$-$n$-periods $\frac{1}{(2\pi i)^n}\int_{\gamma}\omega_i$ are globally bounded G-functions.\end{theorem}
\begin{rmk} We may assume without loss of generality that the G-functions created have coefficients in $k$, when $k\subset \bar{\mathbb{Q}}$. For more on this see the proof of \ref{changeofplace}.
\end{rmk}
\section{Endomorphism algebras of Hodge Structures}\label{section:hodgereview}
One of the central notions we will employ in what follows are the endomorphism algebras of polarized $\mathbb{Q}$-Hodge structures of pure weight. We present here a quick review of the main facts we will need later on about the structure of these algebras, as well as a few standard definitions and notation on Hodge-theoretic notions that we will use.\\
Given a $\mathbb{Q}$-Hodge structure, or $\mathbb{Q}$-HS for short, of pure weight on a $\mathbb{Q}$-vector space $V$, we get a group homomorphism $\tilde{\varphi}:\mathbb{S}\rightarrow GL(V)_{\mathbb{R}}$ of $\mathbb{R}$-algebraic groups, where $\mathbb{S}$ is the Deligne torus. Let $\mathbb{U}_1$ be the $\mathbb{R}$-subtorus of $\mathbb{S}$ with $\mathbb{U}_1(\mathbb{R})=\{z\in\mathbb{C}^*:|z|=1\}$ and let $\varphi:=\tilde{\varphi}|_{\mathbb{U}_1}$.
\begin{defn}
Let $V$ be a pure weight $\mathbb{Q}$-HS and $\tilde{\varphi}$ and $\varphi$ be as above. The \textbf{Mumford-Tate group} of $V$, denoted by $G_{mt}(V)$, is defined as the $\mathbb{Q}$-Zariski closure of $\tilde{\varphi}(\mathbb{S}(\mathbb{R}))$. The \textbf{special Mumford-Tate group} of $V$, denoted by $G_{smt}(V)$, is defined as the $\mathbb{Q}$-Zariski closure of $\varphi(\mathbb{U}_1(\mathbb{R}))$.
\end{defn}
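A standard example to keep in mind (not needed later): for the weight $1$ Hodge structure $V=H^1(E^{an},\mathbb{Q})$ of a complex elliptic curve $E$ one has

```latex
G_{mt}(V)\simeq
\begin{cases}
\GL_{2,\mathbb{Q}}, & E \text{ without CM},\\
\operatorname{Res}_{F/\mathbb{Q}}\mathbb{G}_{m,F}, & E \text{ with CM by the imaginary quadratic field } F,
\end{cases}
```

while $G_{smt}(V)$ is $\mathrm{SL}_{2,\mathbb{Q}}$, respectively the norm-one subtorus of $\operatorname{Res}_{F/\mathbb{Q}}\mathbb{G}_{m,F}$.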
\subsubsection*{Irreducible Hodge Structures: Albert's Classification}
It is well known that the category of polarizable $\mathbb{Q}$-Hodge structures is semi-simple. This implies that for a polarizable $\mathbb{Q}$-HS $V$, its endomorphism algebra $D:=\End(V)^{G_{smt}(V)}$ is a semi-simple $\mathbb{Q}$-algebra. If, furthermore, the polarizable $\mathbb{Q}$-HS $V$ that we consider is simple, then $D$ is a simple division $\mathbb{Q}$-algebra equipped with a positive involution, naturally constructed from the polarization.
Such algebras are classified by Albert's classification.
\begin{theorem}[Albert's Classification, \cite{mumfordabelian}]\label{albert} Let $D$ be a simple $\mathbb{Q}$-algebra with a positive (anti-)involution $\iota$, denoted $a\mapsto a^{\dagger}$. Let $F=Z(D)$ be the center of $D$, $F_0=\{a\in F: a=a^{\dagger}\}$, $e_0=[F_0:\mathbb{Q}]$, $e=[F:\mathbb{Q}]$, and $d^2=[D:F]$. Then $D$ is of one of the following four types:\\
\textbf{Type I:} $D=F=F_0$ is a totally real field, so that $e=e_0$, $d=1$, and $\iota$ is the identity.\\
\textbf{Type II:} $D$ is a quaternion algebra over the totally real field $F=F_0$ that also splits at all archimedean places of $F$. If $a\mapsto a^{*}=tr_{D/F}(a)-a$ denotes the standard involution of this quaternion algebra, then there exists $x\in D$ with $x=-x^{*}$ such that $a^{\dagger}=xa^{*}x^{-1}$ for all $a\in D$. Finally, in this case $e=e_0$ and $d=2$.\\
\textbf{Type III:} $D$ is a totally definite\footnote{We remind the reader that a quaternion algebra $B$ over a number field $F$ is called totally definite if for all archimedean places $v\in\Sigma_{F,\infty}$ we have that the algebra $B$ is ramified at $v$. This requires that $F$ is totally real so that $B\otimes_F F_v\simeq \mathbb{H}$, with $\mathbb{H}$ the standard quaternion algebra over $\mathbb{R}$, for all $v\in\Sigma_{F,\infty}$.} quaternion algebra over the totally real field $F=F_0$. In this case $\iota$ is the standard involution of this quaternion algebra and as before $e=e_0$ and $d=2$.\\
\textbf{Type IV:} $D$ is a division algebra of rank $d^2$ over the field $F$, which is a CM-field with totally real subfield $F_0$, i.e. $e=2e_0$. Finally, the involution $\iota$ corresponds, under a suitable isomorphism $D\otimes_{\mathbb{Q}}\mathbb{R}\overset{\simeq}{\rightarrow}M_d(\mathbb{C})\times\ldots\times M_d(\mathbb{C})$, with the involution $(A_1,\ldots,A_{e_0})\mapsto(^t\bar{A}_1,\ldots,^t\bar{A}_{e_0})$.
Furthermore, in this case we have that for $\sigma$ a generator of $\gal(F/F_0)$ the following must hold:\begin{enumerate}
\item if $v\in\Sigma_{F,f}$ is such that $\sigma(v)=v$ we have that $\inv_v(D)=0$, and
\item for all $v\in \Sigma_{F,f}$ we must have that $\inv_v(D)+\inv_{\sigma(v)}(D)=0$.
\end{enumerate}\end{theorem}
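As a concrete instance of the Type IV conditions (our illustration, not part of the classification): let $F$ be an imaginary quadratic field, so that $F_0=\mathbb{Q}$ and $\sigma$ is complex conjugation, and let $v\neq\sigma(v)$ be a pair of conjugate finite places of $F$. A division algebra $D$ over $F$ with $d=3$ and

```latex
\inv_v(D)=\tfrac{1}{3},\qquad
\inv_{\sigma(v)}(D)=-\tfrac{1}{3},\qquad
\inv_w(D)=0 \ \text{ for all other places } w,
```

satisfies both conditions; such a $D$ exists by the fundamental exact sequence of class field theory, since the local invariants sum to $0$.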
\subsubsection*{The general case}
Let $(V,\phi)$ be a polarized $\mathbb{Q}$-HS of weight $n$. Then, combining the semi-simplicity of the category of polarized $\mathbb{Q}$-HS and \ref{albert} we get a good description of the endomorphism algebra $D=\End(V)^{G_{smt}(V)}$.
Indeed, we know that there exist simple polarized weight $n$ sub-$\mathbb{Q}$-Hodge structures $(V_i,\varphi_i)$ with $1\leq i\leq r$, such that $V_i\not\simeq V_j$ for all $i\neq j$ and we have a decomposition\begin{equation}\label{eq:decompirrhs}
V=V_1^{m_1}\oplus\ldots\oplus V_r^{m_r}.
\end{equation}Denoting by $D_i:=\End(V_i)^{G_{smt}(V_i)}$ the corresponding endomorphism algebras and by $F_i:=Z(D_i)$ their respective centers, we then have a decomposition
\begin{equation}\label{eq:decompalgebras}
D=M_{m_1}(D_1)\times \ldots\times M_{m_r}(D_r).
\end{equation}Finally, this implies that the center $F$ of $D$ is such that\begin{equation} \label{eq:centersemi}
F=F_1\times\ldots\times F_r,
\end{equation}where each $F_i$ is diagonally embedded into $M_{m_i}(D_i)$, and the maximal commutative semi-simple sub-algebra $E$ of $D$ may be written as\begin{equation}\label{eq:maxcommss}
E=F_1^{m_1}\times\ldots \times F_r^{m_r}.
\end{equation}
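To fix ideas, a hypothetical instance of the decompositions \eqref{eq:decompirrhs}--\eqref{eq:maxcommss}: take $r=2$, $m_1=2$, $m_2=1$, with $D_1=\mathbb{Q}$ (Type I) and $D_2=F_2$ an imaginary quadratic field (Type IV with $d=1$). Then

```latex
V=V_1^{2}\oplus V_2,\qquad
D=M_2(\mathbb{Q})\times F_2,\qquad
F=\mathbb{Q}\times F_2,\qquad
E=\mathbb{Q}^{2}\times F_2.
```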
For a proof of Albert's classification see \cite{mumfordabelian}, \S 21. For more on Mumford-Tate groups we direct the interested reader to our sources for this section, which are mainly \cite{moonennotes} and \cite{ggkbook}.
\section{The main setting-notational conventions}\label{section:notations}
Before delving into the technical parts of our argument, we devote this section to describing in more detail the general setting in which we will be working. We give the definitions of the main objects and introduce the notation that we will, unless otherwise stated, keep uniform throughout our exposition.
Let $S'$ be a smooth proper geometrically irreducible curve over some number field $K\subset\bar{\mathbb{Q}}$, let $\Sigma_S\subset S'(K)$ be a finite set of $K$-points and $s_0\in \Sigma_S$ be a fixed such point. We let $S=S'\backslash\Sigma_S$ be the complement of $\Sigma_S$ in $S'$. We also fix $x$ a local parameter of the curve $S'$ at $s_0$ and $\eta$ the generic point of $S$.
Let us consider $f:X\rightarrow S$ a smooth projective morphism and let $n=\dim X-1$. Assume $f$ extends to a projective morphism $f':X'\rightarrow S'$ with $X'$ a smooth $K$-scheme and that $Y=f^{-1}(s_0)$ is a simple normal crossings divisor.
The map $f$ defines a variation of polarized $\mathbb{Q}$-HS of weight $n$ over $S^{an}_{\mathbb{C}}$ given by $R^nf^{an}_{*}\mathbb{Q}_{X^{an}_\mathbb{C}}$. We denote by $G_{mt,p}$, respectively by $G_{smt,p}$, the Mumford-Tate group, or respectively the special Mumford-Tate group, associated to the $\mathbb{Q}$-HS associated to the point $p\in S(\mathbb{C})$. We also let $G_{mt,\eta}$, respectively $G_{smt,\eta}$, be the generic Mumford-Tate group, or respectively the generic special Mumford-Tate group, of the variation. For each $p\in S(\mathbb{C})$ we also let $V_p=H^n(X^{an}_p,\mathbb{Q})$ be the fiber of the local system $R^nf^{an}_{*}\mathbb{Q}_{X^{an}_\mathbb{C}}$ and let $\mu=\dim_\mathbb{Q} V_p$.
Consider $z\in S(\mathbb{C})$ to be a Hodge generic point for the above variation of $\mathbb{Q}$-HS. The main invariant of the variation we will be interested in is the $\mathbb{Q}$-algebra\begin{center}
$D:=\End(V_z)^{G_{smt,z}}=\End(V_z)^{G_{smt,\eta}}$.
\end{center}
Similarly, for $s\in S(\mathbb{C})$ we let \begin{center}
$D_s:= \End(V_s)^{G_{smt,s}}$.
\end{center}
\begin{defn}Let $X$, $S$, $s\in S(\mathbb{C})$, $D_s$, and $D$ be as above.
We call $D_s$ the \textbf{algebra of Hodge endomorphisms at $s$}.
\end{defn}
\begin{defn}
A variation of Hodge structures as above, meaning a weight $n$ geometric variation of $\mathbb{Q}$-HS parameterized by $S=S'\backslash\Sigma_S$ whose degeneration at some $s_0\in \Sigma_S\subset S'$ is as above, with all of the above defined over some number field $K$, will be called \textbf{G-admissible}.
\end{defn}
\begin{rmk}
We remark that under these assumptions \ref{existence} applies by letting $T=S'\backslash (\Sigma_S\backslash \{ s_0\})$ and $f_T$ be the pullback of $f'$ over $T$. In particular we have the existence of G-functions among the entries of the relative period matrix as described in \ref{section:periodsandgfunctions}.
\end{rmk}
\textbf{Notation:} We fix some notation that appears throughout the text. By $\Sigma_K$, $\Sigma_{K,f}$, and $\Sigma_{K,\infty}$ we denote the set of all places of a number field $K$, respectively the sets of finite and infinite places of $K$. For $v\in \Sigma_K$ we let $i_v:K\rightarrow \mathbb{C}_v$ denote the inclusion of $K$ into $\mathbb{C}_v$. For $y\in K[[x]]$ we let $i_v(y)$ denote the element of $\mathbb{C}_v[[x]]$ given via $i_v$ acting coefficient-wise on $y$.
For a scheme $Y$ defined over a field $k$ we let $\bar{Y}:= Y\times_{\spec k}\spec \bar{k}$ and $Y_{L}:=Y \times_{\spec k}\spec L$ for any extension $L/k$.
\subsection{The non-relative case}
\textbf{Notation:} Let $X/k$ be a smooth projective variety over a subfield $k$ of $\mathbb{C}$ and let $n=\dim_k X$.\\
\subsubsection*{Short review on polarizing forms}
For all $d\in \mathbb{N}$ there exist non-degenerate bilinear polarizing forms
\begin{center}$\langle ,\rangle_{DR} : H^d_{DR} (X/k)\otimes_k H^d_{DR} (X/k)\rightarrow k$, and
\end{center}\begin{center}
$\langle , \rangle_{B} : H^d(X^{an}_{\mathbb{C}},\mathbb{Q})\otimes_{\mathbb{Q}} H^d(X^{an}_{\mathbb{C}},\mathbb{Q})\rightarrow (2\pi i)^{-d}\mathbb{Q}=\mathbb{Q}(-d)$,
\end{center}on de Rham cohomology and Betti cohomology respectively. We also write $\langle ,\rangle_{B}=(2\pi i)^{-d} \langle ,\rangle $, where $\langle ,\rangle $ has values in $\mathbb{Q}$ and is of the same type, i.e. symmetric or skew-symmetric, as $\langle ,\rangle_{B}$.
These two are the polarizing forms of the corresponding cohomology group. Their existence follows from the fact that $X$ is projective and smooth and they are constructed via a very ample line bundle \cite{delhodge2}.
We also have that, via the two embeddings $k\hookrightarrow \mathbb{C}$ and $(2\pi i)^{-d} \mathbb{Q}\hookrightarrow \mathbb{C}$, and the comparison isomorphism
\begin{center}
$P^{d}_{X}: H^d_{DR}(X/k)\otimes_{k} \mathbb{C} \rightarrow H^{d} (X^{an}_{\mathbb{C}} ,\mathbb{Q})\otimes_{\mathbb{Q}} \mathbb{C}$
\end{center}the two bilinear forms $\langle , \rangle_{DR}$ and $\langle , \rangle_{B}$ are compatible under $P^d_{X}$, meaning that
\begin{equation}\label{eq:polarscomp}\langle v,w \rangle_{DR}=\langle P^d_{X}(v) ,P^d_{X}(w) \rangle_{B}, \forall v,w\in H^d_{DR} (X/k)\otimes_k\mathbb{C}.
\end{equation}
\subsubsection*{Relations on periods-Notation}
From now on we assume that $d=n=\dim_kX$. We can and do consider from now on the above polarizing forms $\langle , \rangle_{DR}$, $\langle , \rangle_{B}$, and the form $\langle , \rangle$ as vectors in the spaces $H^n_{DR} (X/k)^{*} \otimes_kH^n_{DR} (X/k)^{*}$, $(H^n(X^{an}_{\mathbb{C}},\mathbb{Q})^{*} \otimes_{\mathbb{Q}} H^n(X^{an}_{\mathbb{C}},\mathbb{Q})^{*})(-n)$, and $H^n(X^{an}_{\mathbb{C}},\mathbb{Q})^{*} \otimes_{\mathbb{Q}} H^n(X^{an}_{\mathbb{C}},\mathbb{Q})^{*}$ respectively.
In this case, i.e. $d=n$, via Poincar\'e duality, these forms will correspond to elements $t_{DR} \in H^n_{DR}(X/k)\otimes_{k}H^n_{DR}(X/k)$, $t_B\in (H^n(X^{an}_{\mathbb{C}},\mathbb{Q})\otimes_{\mathbb{Q}} H^n(X^{an}_{\mathbb{C}},\mathbb{Q}))(n)$, and $t\in H^n(X^{an}_{\mathbb{C}},\mathbb{Q})\otimes_{\mathbb{Q}} H^n(X^{an}_{\mathbb{C}},\mathbb{Q})$, respectively.
The compatibility of the comparison isomorphism $P^n_{X}$ with Poincar\'e duality implies that $P^n_X\otimes P^n_X (t_{DR}) =t_B=(2\pi i)^{n}t$. In particular $t_{DR}$ is a Hodge class defined over the field $k$. For cycles such as this it is known\footnote{See page $169$ of \cite{andre1989g}.} that they impose polynomial relations among the $n$-periods with coefficients in the field $k((2\pi i)^n)$.
In what follows we show that the relations constructed by $t_{DR}$ are in fact the Riemann relations, i.e. they are the equations imposed on the $n$-periods by \eqref{eq:polarscomp}. This is used without proof by Andr\'e in, essentially, the case where $n=1$. The author expects that this part is known to experts in the field and includes it only for the sake of completeness of the exposition.\\
\textbf{Notation:} We consider from now on a fixed basis $\{ \gamma_i: 1\leq i\leq \mu:=\dim_\mathbb{Q} H^n(X^{an}_{\mathbb{C}},\mathbb{Q})\}$ of $H_n(X^{an}_{\mathbb{C}},\mathbb{Q})$ and we let $\gamma_i^{*}$ be the elements of its dual basis, which constitutes a basis of $H^n(X^{an}_{\mathbb{C}},\mathbb{Q})$. We also consider $\omega_i$, $1\leq i\leq \mu$, a fixed $k$-basis of $H^n_{DR}(X/k)$.
With respect to these choices the isomorphism $P^n_X$ corresponds to the matrix $( \int_{\gamma_j} \omega_i )$. We denote this matrix also by $P^n_X$, so that the isomorphism is nothing but $P^n_X(v)=\prescript{t}{}{v} P^n_X$, where on the right the matrix acts on row vectors from the right. Vectors in the various spaces will be considered as column vectors in the various bases. Finally, we denote the matrix of the $n$-periods by $P:= (2\pi i)^{-n} P^n_X$.\\
With the above notation fixed we let $\langle \omega_i,\omega_j\rangle_{DR}=d_{i,j}$ and let $M_{DR}=(d_{i,j})\in \GL_{\mu}(k)$, which will be the matrix corresponding to the form $\langle,\rangle_{DR}$, i.e. \begin{center}
$\langle v,w\rangle_{DR}=\prescript{t}{}{v}M_{DR} w$.
\end{center}Considering, alternatively as above, $\langle,\rangle_{DR}$ as an element of the space $H^n_{DR} (X/k)^{*}\otimes_{k} H^n_{DR} (X/k)^{*}$, the above are equivalent to
\begin{center}
$\langle ,\rangle_{DR} =\Sum{i,j=1}{\mu} d_{i,j} \omega^{*}_i\otimes \omega^{*}_j$.
\end{center}
Similarly we let $q_{i,j}=\langle \gamma^{*}_i,\gamma^{*}_j\rangle \in \mathbb{Q}$ and set $M_B=(q_{i,j})\in \GL_\mu(\mathbb{Q})$. This implies that $\langle \gamma_i^{*} ,\gamma_{j}^{*}\rangle_{B} =(2\pi i)^{-n} q_{i,j}$. Same as above these relations can be rewritten as
\begin{center}
$\langle v,w\rangle=\prescript{t}{}{v} M_B w$ and $\langle v,w\rangle_{B}=\prescript{t}{}{v} ((2\pi i)^{-n} M_B)w$,
\end{center}for all $v,w \in H^n(X^{an}_{\mathbb{C}} ,\mathbb{C})$. Alternatively, if we were to consider $\langle,\rangle $ and $\langle ,\rangle_{B}$ as elements of $H^n(X^{an}_{\mathbb{C}},\mathbb{C})^{*}\otimes_{\mathbb{C}} H^n(X^{an}_{\mathbb{C}},\mathbb{C})^{*}$ we can write these as $\langle ,\rangle =\sum_{i,j=1}^{\mu} q_{i,j} \gamma_i\otimes\gamma_j$, and $\langle ,\rangle_{B} =\sum_{i,j=1}^{\mu}(2\pi i)^{-n} q_{i,j} \gamma_i\otimes\gamma_j $ respectively.
We now consider the Poincar\'e duality isomorphisms $\Pi_{DR}:H^n_{DR} (X/k)\rightarrow H^n_{DR} (X/k)^{*}$ and $\Pi_B:H^n(X^{an}_{\mathbb{C}},\mathbb{Q})\rightarrow H^n(X^{an}_{\mathbb{C}},\mathbb{Q})^{*}$, which exist since $\dim_k X=n$. With respect to the bases $\{ \omega_i \}$ and $\{\omega^{*}_i \}$ the isomorphism $\Pi_{DR}$ corresponds to an invertible matrix which we denote by $A_{DR}\in\GL_{\mu}(k)$. Similarly, with respect to the bases $\{ \gamma_i\}$ and $\{\gamma^{*}_i\}$ we get the invertible matrix $A_B$ corresponding to $\Pi_B$.
Finally, let us write $t_{DR} =\sum_{i,j=1}^{\mu}\lambda_{i,j} \omega_i\otimes \omega_j$, $t=\sum_{i,j=1}^{\mu}\tau_{i,j}\gamma_i^{*} \otimes \gamma_{j}^{*}$ and $t_B=\sum_{i,j=1}^{\mu}(2\pi i)^{n}\tau_{i,j}\gamma_i^{*} \otimes \gamma_{j}^{*}$, where $\lambda_{i,j}\in k$ and $\tau_{i,j}\in\mathbb{Q}$. We also define $\Lambda_{DR}=(\lambda_{i,j})$ and $\Lambda_{\mathbb{Q}}=(\tau_{i,j})$.
\subsubsection*{Classes and forms}
With the above notation fixed from now on we turn to describing the relation between the classes $t_{DR}$, and $t$ and the respective forms.\\
By definition we have $\Pi_{DR}^{\otimes 2}(t_{DR})=\langle ,\rangle_{DR}$. This implies that \begin{equation}\label{eq:basiscyclesdr}
\sum_{i,j=1}^{\mu} \lambda_{i,j} \Pi_{DR}(\omega_i) \otimes \Pi_{DR}(\omega_j)=\sum_{i,j=1}^{\mu} d_{i,j} \omega_i^{*}\otimes \omega_j^{*}.
\end{equation}We know that $\Pi_{DR}(\omega_i)=\sum_{j} a_{i,j} \omega_j^{*}$, with $A_{DR} =(a_{i,j})$. Applying this to \eqref{eq:basiscyclesdr} it is easy to see, with a few elementary computations, that $\prescript{t}{}{A}_{DR} \Lambda_{DR}A_{DR} =M_{DR}$, or equivalently we get the equality\begin{equation}\label{eq:matricesdr}
\Lambda_{DR}=\prescript{t}{}{A}^{-1}_{DR} M_{DR} A^{-1}_{DR}.
\end{equation}
Similarly for the pair $t$ and $\langle ,\rangle $ we find that\begin{equation}\label{eq:matricesbetti}
\Lambda_{\mathbb{Q}}=\prescript{t}{}{A}_{B}^{-1} M_B A_B^{-1},
\end{equation}coming from the equality $\Pi_B^{\otimes 2} (t)=\langle ,\rangle$.\\
\begin{flushleft}
\textbf{The relation given by $t_{DR}$.}
\end{flushleft}
We review how a relation on the $n$-periods is constructed from $t_{DR}$. We start from the equality $(2\pi i)^{-n} (P_X^n)^{\otimes 2} (t_{DR} )=t$. This in turn implies that for all $l,m$, with the notation as above, we have \begin{equation}
\sum_{i,j=1}^{\mu} \lambda_{i,j} ((2\pi i)^{-n} \int_{\gamma_l}^{} \omega_i) ( (2\pi i)^{-n} \int_{\gamma_m}^{} \omega_j ) =(2\pi i)^{-n} \tau_{l,m}.
\end{equation}
These equations are the relations between $n$-periods that we alluded to earlier. Put together, they are equivalent to the equality
\begin{equation}\label{eq:relationsmatrix}
\prescript{t}{}{P} \Lambda_{DR} P = (2\pi i )^{-n} \Lambda_{\mathbb{Q}}.
\end{equation}
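Indeed, writing $P_{i,l}=(2\pi i)^{-n}\int_{\gamma_l}\omega_i$ for the entries of $P$, the left-hand side of the componentwise relation is exactly an entry of the matrix product:

```latex
\sum_{i,j=1}^{\mu} \lambda_{i,j}
\Big((2\pi i)^{-n} \int_{\gamma_l} \omega_i\Big)
\Big((2\pi i)^{-n} \int_{\gamma_m} \omega_j\Big)
=\sum_{i,j=1}^{\mu} P_{i,l}\,\lambda_{i,j}\,P_{j,m}
=\big(\prescript{t}{}{P}\, \Lambda_{DR}\, P\big)_{l,m}.
```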
\begin{flushleft}
\textbf{Comparing the matrices $A_B$ and $A_{DR}$.}
\end{flushleft}
Earlier on we had the matrices $A_{DR}$ and $A_B$ corresponding to the respective Poincar\'e duality isomorphisms. We saw in \eqref{eq:matricesdr} and \eqref{eq:matricesbetti} how these matrices relate the ``$\Lambda$-matrices'' and ``$M$-matrices''. We would like to replace the ``$\Lambda$-matrices'' in \eqref{eq:relationsmatrix} by the corresponding ``$M$-matrices'', thus showing that the relations created are nothing but the Riemann relations\footnote{See \ref{section:subsectionnotationsnontrivial} for a definition and the reason why we need these.}. The first step is to describe how the matrices $A_{DR}$ and $A_B$ relate to one another.\\
We had the isomorphisms $\Pi_{DR}$ and $\Pi_B$ and the matrices $A_{DR}$ and $A_B$ that represent them with respect to the bases we have chosen. The comparison isomorphism $P^n_X$ respects Poincar\'e duality up to a Tate twist: the de Rham trace on $H^{2n}_{DR}(X/k)$ corresponds, under the comparison isomorphism, to $(2\pi i)^{n}$ times the Betti trace, so the following diagram commutes up to a factor of $(2\pi i)^{n}$:\begin{center}
$\begin{tikzcd}
H^n_{DR} (X/k)\otimes_k \mathbb{C}\arrow[d, "\Pi_{DR}\otimes_{k}\mathbb{C}"'] \arrow[r, "P^n_X"] & H^n(X^{an}_{\mathbb{C}}, \mathbb{Q})\otimes_{\mathbb{Q}} \mathbb{C} \arrow[d, "\Pi_B\otimes_{\mathbb{Q}}\mathbb{C}"] \\
H^n_{DR} (X/k)^{*}\otimes_k\mathbb{C} & H^n(X^{an}_\mathbb{C},\mathbb{Q})^{*}\otimes_{\mathbb{Q}}\mathbb{C} \arrow[l, "(P^n_X)^{\vee}"]
\end{tikzcd}$\end{center}
where $(P^n_X)^{\vee} (f) =f\circ P^n_X$ for all $f\in H^n(X^{an}_\mathbb{C}, \mathbb{Q})^{*}\otimes_{\mathbb{Q}}\mathbb{C}$.
Looking at what the resulting relation, i.e. $(2\pi i)^{n}(\Pi_{DR}\otimes_{k}\mathbb{C})=(P_X^n)^{\vee}\circ (\Pi_B\otimes_{\mathbb{Q}}\mathbb{C})\circ P^n_X$, does to the basis $\{\omega_i\}$, and using the fact that with respect to the bases $\{\gamma_j\}$ and $\{ \omega_i^{*}\}$ the matrix representing $(P^n_X)^{\vee}$ is the matrix $\prescript{t}{}{(\int_{\gamma_j}\omega_i )}$, i.e. the transpose of $P^n_X$, we conclude that \begin{equation}\label{eq:theamatrices}
A_{DR} = (2\pi i)^{-n}(\int_{\gamma_j}\omega_i )\cdot A_B \cdot \prescript{t}{}{(\int_{\gamma_j}\omega_i )}.
\end{equation}
\begin{flushleft}
\textbf{Conclusions}
\end{flushleft}
Combining \eqref{eq:relationsmatrix} with \eqref{eq:matricesdr} and \eqref{eq:matricesbetti} we get \begin{eqnarray}\label{eq:almosttheend}
\prescript{t}{}{P} (\prescript{t}{}{A}^{-1}_{DR} M_{DR} A^{-1}_{DR}) P= (2\pi i)^{-n} (\prescript{t}{}{A}_B^{-1} M_B A_B^{-1}).
\end{eqnarray}
From \eqref{eq:theamatrices} we get\begin{equation}\label{eq:thematrices1}
\prescript{t}{}{P} \prescript{t}{}{A}^{-1}_{DR} =\frac{1}{(2\pi i)^{n}} \prescript{t}{}{A}_B^{-1} P^{-1}, \text{ and}
\end{equation}
\begin{equation}\label{eq:theamatrices2}
A^{-1}_{DR} P =\frac{1}{(2\pi i)^{n}} \prescript{t}{}{P}^{-1} A_B^{-1}.
\end{equation}
Using \eqref{eq:thematrices1} and \eqref{eq:theamatrices2} together with \eqref{eq:almosttheend} we get\begin{equation}\label{eq:riemannrelationsmatrixorm}
PM_B \prescript{t}{}{P} =(2\pi i)^{-n} M_{DR}.
\end{equation}But, this is the relation we get between the above matrices by looking at the equation \eqref{eq:polarscomp} and translating it in terms of matrices. Indeed, \eqref{eq:polarscomp} translates to\begin{center}
$\prescript{t}{}{v} P^n_X ((2\pi i)^{-n} M_B) { } \prescript{t}{}{(\prescript{t}{}{w} P^n_X)}=\prescript{t}{}{v}M_{DR}w$ for all $v,w\in H^n_{DR} (X/k)\otimes_k\mathbb{C}$.
\end{center}From this we recover \eqref{eq:riemannrelationsmatrixorm} by multiplying on both sides by $(2\pi i)^{-n}$ and noting that $P=(2\pi i)^{-n} P^n_X$.\\
What is actually of use to us is not exactly \eqref{eq:riemannrelationsmatrixorm} but rather the same relation for the transpose of $P$. To obtain this, first from \eqref{eq:riemannrelationsmatrixorm} we get trivially\begin{center}$(2\pi i)^{n} M_B = P^{-1} M_{DR}\prescript{t}{}{P}^{-1}$,
\end{center}then taking inverses on both sides we get \begin{equation}\label{eq:riemannwewant}
\prescript{t}{}{P} M_{DR} ^{-1} P =(2\pi i)^{-n} M_{B}^{-1}.
\end{equation}
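The passage from \eqref{eq:riemannrelationsmatrixorm} to \eqref{eq:riemannwewant} is pure matrix algebra, and can be sanity-checked numerically (a verification sketch only; the matrix $P$ below is a random invertible complex matrix, not an actual period matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 1, 4  # the weight n and the size mu of the matrices
c = (2j * np.pi) ** n  # the factor (2 pi i)^n

# Random complex "period" matrix P and real form matrix M_B,
# both invertible with probability 1.
P = rng.normal(size=(mu, mu)) + 1j * rng.normal(size=(mu, mu))
M_B = rng.normal(size=(mu, mu))

# Define M_DR so that P M_B tP = (2 pi i)^{-n} M_DR holds by construction.
M_DR = c * P @ M_B @ P.T

# Then tP M_DR^{-1} P = (2 pi i)^{-n} M_B^{-1}, the relation (riemannwewant).
lhs = P.T @ np.linalg.inv(M_DR) @ P
rhs = np.linalg.inv(M_B) / c
assert np.allclose(lhs, rhs)
```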
\subsection{The relative case}
\textbf{Setting:} We consider $f:X\rightarrow S$ a smooth projective morphism of $k$-varieties itself defined over the same subfield $k$ of $\mathbb{C}$. We also assume that $S$ is a smooth connected curve which is not necessarily complete over $k$ and the dimension of the fibers of $f$ is $n$.
We then have, for all $d\in\mathbb{N}$, the relative version of the comparison isomorphism between the algebraic de Rham and the Betti cohomology\begin{equation}\label{eq:relativeisomorphism} P^d_{X/S}:H^d_{DR}(X/S)\otimes_{\mathcal{O}_S} \mathcal{O}_{S^{an}_{\mathbb{C}}}\rightarrow R^df^{an}_{*}\mathbb{Q}_{X^{an}_\mathbb{C}}\otimes_{\mathbb{Q}_{S^{an}_\mathbb{C}}}\mathcal{O}_{S^{an}_{\mathbb{C}}}.
\end{equation}Once again we let, in parallel to the non-relative case we studied earlier, $\mu$ denote the rank of these sheaves.
We once again have the same picture, as far as polarizing forms are concerned, as in the non-relative case. In other words we have $\langle,\rangle_{DR}$ a polarizing form of the de Rham cohomology sheaves $H^d_{DR}(X/S)$ which is defined over the field $k$, and a polarizing form $\langle,\rangle_B=(2\pi i)^{-d}\langle ,\rangle $ of the sheaves on the right of \eqref{eq:relativeisomorphism}. These two forms are compatible with the relative isomorphism \eqref{eq:relativeisomorphism}, meaning that\begin{equation}\label{eq:relativecompatibility}
\langle P^d_{X/S}(v),P^d_{X/S}(w)\rangle_{B}=\langle v,w\rangle_{DR},
\end{equation}holds for all sections $v,w$ of the sheaf on the left of \eqref{eq:relativeisomorphism}.\\
From now on we focus on the case $d=n$.
We choose $U\subset S$ a non-empty affine open subset. Then the form $\langle,\rangle_{DR}|_{U}$ will map, via the relative version of the Poincar\'e duality isomorphism, to a class $t_{DR} \in H^n_{DR}(X/S)\otimes_{\mathcal{O}_S}H^n_{DR}(X/S)|_{U}$.
Similarly we repeat this process for the forms $\langle,\rangle$ and $\langle,\rangle_B$, over the analytification $U^{an}_{\mathbb{C}}$, to get elements $t\in (R^nf_{*}\mathbb{Q}\otimes_{\mathbb{Q}_{S^{an}_{\mathbb{C}}}}R^nf_{*}\mathbb{Q})|_{U^{an}_{\mathbb{C}}}$ and $t_B\in(R^nf_{*}\mathbb{Q}\otimes_{\mathbb{Q}_{S^{an}_{\mathbb{C}}}}R^nf_{*}\mathbb{Q})(n)|_{U^{an}_{\mathbb{C}}}$ with $t_B=(2\pi i)^{n}t$.
Compatibility of Poincar\'e duality with the relative comparison isomorphism shows that $P^n_{X/S}\otimes P^n_{X/S}|_{U}(t_{DR})=t_B$. In other words the class $t_{DR}$ is a relative Hodge class thus defining polynomial relations among the relative $n$-periods.
Now we can repeat the arguments we made in the non-relative case. First, we choose $\{\omega_i\}$ a basis of sections of $H^n_{DR}(X/S)$ over the affine open subset $U\subset S$ and $\{\gamma_j^{*}\}$ a frame of $R^nf_{*}^{an}\mathbb{Q}_{X^{an}_{\mathbb{C}}}|_{V}$, or equivalently a frame $\{\gamma_j\}$ of the relative homology $R_nf^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}}|_{V}$, where $V\subset U^{an}_\mathbb{C}$ is some open analytic subset. We get that the matrix $P_{X/S}=((2\pi i)^{-n} \int_{\gamma_j}\omega_i)$ satisfies \begin{equation}
\label{eq:relativematrix1} P_{X/S} M_B \prescript{t}{}{P}_{X/S}=(2\pi i)^{-n} M_{DR},
\end{equation}where $M_B$ and $M_{DR}$ are the matrices of the forms $\langle,\rangle$ and $\langle,\rangle_{DR}$ with respect to the basis of sections and the frame chosen above.
The same process as before shows us that \eqref{eq:relativematrix1} is equivalent to the validity of the polynomial relations on the relative $n$-periods defined by the relative Hodge class $t_{DR}$. Finally, the same elementary argument as before shows the validity of the relative analogue of relation \eqref{eq:riemannwewant}, i.e. the Riemann relations that we use in \ref{section:subsectionnotationsnontrivial}.
\section{Hodge Endomorphisms and De Rham Cohomology}\label{section:derhamendo}
Let $K$ be a number field and $f:X\rightarrow S$ be a smooth projective $K$-morphism of $K$-varieties, with $S$ a curve as above. Let us also consider a point $s\in S(L)$ for some finite extension $L/K$ and set $Y:=X_s$ which is a smooth projective variety defined over $L$.
In what follows we will need the existence of a natural action of the algebra of Hodge endomorphisms of $H^n(Y,\mathbb{Q})$ on both sides of the comparison isomorphism \begin{equation}
P^n:H^n_{DR}(\bar{Y}/\bar{L})\otimes_{\bar{L}} \mathbb{C} \rightarrow H^n(\bar{Y}_{\mathbb{C}}^{an},\mathbb{Q})\otimes_{\mathbb{Q}} \mathbb{C},
\end{equation}such that these actions commute with this isomorphism.
In the case of abelian varieties this is automatic from the fact that the algebra of Hodge endomorphisms is naturally realized as the algebra of endomorphisms of the abelian variety. This in turn acts naturally on both sides of the comparison isomorphism and the actions commute with the isomorphism itself. In a general variety $Y$ we cannot hope for such a description without assuming the validity of the absolute Hodge Conjecture.
It is the author's belief that the results in this section are known to experts in the field. Since we were not able to find an exact reference for the results we needed, we have dedicated this section to providing proofs of them.
\subsection{Existence of the action}
For the rest of this subsection we fix a number field $L$ and a smooth projective $n$-dimensional variety $Y$ defined over $L$.
\begin{prop}\label{propendodr}Let $Y$ be a smooth projective variety of dimension $n$ over the number field $L$. Let $V:=H^n(\bar{Y}^{an}_{\mathbb{C}}, \mathbb{Q})$ and let $D:=\End_{HS}(V)$ be the algebra of Hodge endomorphisms. Then, \textbf{assuming the absolute Hodge Conjecture}, there exist a finite Galois extension $\hat{L}$ of $L$ and an injective homomorphism of algebras \begin{center}
$i: D\hookrightarrow \End_{\hat{L}}(H^n_{DR}(Y/L)\otimes_{L} \hat{L})$.
\end{center}
Moreover, we have that $P^n(i(d) v)= d\cdot P^n(v)$ for all $d\in D$ and all $v\in H^n_{DR}(\bar{Y}/\bar{L} )\otimes_{\bar{L}} \mathbb{C}$. In other words, the action of the algebra $D$, that is induced by $i$, on de Rham cohomology coincides with the usual action of $D$ on the Betti cohomology as endomorphisms of the Hodge structure under the comparison isomorphism $P^n$.
\end{prop}
\begin{proof}
We start with some well-known observations. First of all, the natural isomorphism \begin{center}
$\alpha_0:\End_{\mathbb{Q}}(V)\cong V\otimes V^{*}$
\end{center} is an isomorphism of $\mathbb{Q}$-HS. In particular, via $\alpha_0$ the elements of $D$ correspond to Hodge classes\footnote{See Lemma $11.41$ of \cite{voisinhodge1}. }.
It is also known that the isomorphism $\alpha : H^n(\bar{Y}^{an}_{\mathbb{C}},\mathbb{Q})^{*}\rightarrow H^n(\bar{Y}^{an}_{\mathbb{C}},\mathbb{Q})(n)$, given by Poincar\'e duality, is an isomorphism of $\mathbb{Q}$-HS. As a consequence we get that the induced isomorphism
\begin{center}
$\alpha_1: V\otimes_{\mathbb{Q}} V^{*} \overset{\cong}{\rightarrow } (V\otimes_{\mathbb{Q}} V)(n)$
\end{center}is also an isomorphism of $\mathbb{Q}$-HS. Moreover, it is known that the injection \begin{center}
$\alpha_2:(H^n(\bar{Y}^{an}_{\mathbb{C}},\mathbb{Q})\otimes_{\mathbb{Q}} H^n(\bar{Y}^{an}_{\mathbb{C}},\mathbb{Q}) )(n)\hookrightarrow H^{2n}(\bar{Y}^{an}_{\mathbb{C}}\times \bar{Y}^{an}_{\mathbb{C}} , \mathbb{Q} )(n)$,
\end{center}given by the K\"unneth formula is also an injective homomorphism of $\mathbb{Q}$-HS.\\
\subsubsection{Bounds on the degree of the extension}
Later on we will want some control on the degree of the Galois extension $\hat{L}/L$ constructed in the proof of \ref{propendodr}. In particular, we want an upper bound on the degree $[\hat{L}:L]$ that is independent of the smooth projective variety $Y/L$ and of the field $L$ itself, depending only on the dimension of $Y$ and its $n$-th Betti number. By analogy with the case of abelian varieties, we want upper bounds akin to those achieved in \cite{silverberg}.
\begin{prop}\label{propdegreebound}
Assume the \textbf{absolute Hodge Conjecture} is true. Let $Y$ be a smooth $n$-dimensional projective variety defined over the number field $L$. Then the field extension $\hat{L}/L$ constructed in \ref{propendodr} may be chosen so that for its degree we have
\begin{center}
$[\hat{L}:L]\leq ( (6.31) m^2)^{m^2} $,
\end{center}where $m=\dim_\mathbb{Q} H^n(\bar{Y}^{an}_\mathbb{C},\mathbb{Q} )$ is the $n$-th Betti number.
\end{prop}
\begin{proof}
Let $\beta $ be a $\mathbb{Q}$-basis of $D$. From the proof of \ref{propendodr} we have an injective homomorphism of $\mathbb{Q}$-algebras $D\hookrightarrow \End_{\hat{L}}(H^{n}_{DR} (Y_{\hat{L}}/ \hat{L}))$, given on the basis elements by $d\mapsto \tilde{d}$ in the notation of the proof of \ref{propendodr}.
By base change we have a natural action of the finite Galois group $\gal(\hat{L}/L)$ on de Rham cohomology $H^n_{DR}(Y_{\hat{L}}/\hat{L})$, as an $L$-vector space. This induces a natural action of the same group on $\End_{\hat{L}}(H^{n}_{DR} (Y_{\hat{L}}/ \hat{L}))$, viewed as an $L$-vector space again. We start by proving the following claim.\\
\textbf{Claim:} The above action of the Galois group induces an action on the embedding of $D$ in $\End_{\hat{L}}(H^{n}_{DR} (Y_{\hat{L}}/ \hat{L}))$. In other words for all $\sigma \in \gal(\hat{L}/L)$ we have that $\sigma(D)=D$. \\
\begin{proof}[Proof of the claim]
Assuming the absolute Hodge Conjecture, by our earlier construction, for every element $d$ of the basis $\beta$ we get an element $\tilde{d}=i(d)\in \End_{\hat{L}} (H^n_{DR} (Y_{\hat{L}}/\hat{L}))$. By the previous proof, via Poincar\'e duality and the K\"unneth formula, we get classes $\tilde{\phi}_d\in H^{2n}_{DR} (Y_{\hat{L}}\times Y_{\hat{L}}/\hat{L})$ that map to Hodge classes $\phi_d\in H^{2n}(Y^{an}\times Y^{an},\mathbb{Q})$. As we did in our earlier proof we let $Z:=Y\times_{L} Y$. In the above construction we implicitly consider a fixed embedding $\sigma_0:\hat{L}\hookrightarrow \mathbb{C}$.
By our assumption that the absolute Hodge Conjecture holds true, we get that for all embeddings $\sigma :\hat{L}\hookrightarrow \mathbb{C}$ the class $\tilde{\phi}_d\in H^{2n}(\sigma (Z_{\hat{L}})/\mathbb{C})$ is Hodge. Here $\sigma (Z_{\hat{L}})$ denotes the complex variety obtained from $Z_{\hat{L}}$ when we base change via the embedding $\sigma$ to $\mathbb{C}$.
From the embedding $\sigma_0:\hat{L}\hookrightarrow \mathbb{C}$ that we fixed earlier we get an embedding $i_0:L\hookrightarrow \mathbb{C}$. Any embedding $\sigma:\hat{L}\hookrightarrow\mathbb{C}$ with $\sigma|_L=i_0$ corresponds to an element of the Galois group $\gal(\hat{L}/L)$ via the bijection $\gal(\hat{L}/L)\rightarrow \{\sigma:\hat{L}\hookrightarrow \mathbb{C} : \sigma|_L=i_0\} $ given by $\tau\mapsto \sigma_0\circ\tau$. For notational brevity we suppress $\sigma_0$ from our notation from now on and identify $\tau\in\gal(\hat{L}/L)$ with $\sigma_0\circ\tau$; in other words, we identify the elements of $\gal(\hat{L}/L)$ with the corresponding embeddings $\hat{L}\hookrightarrow \mathbb{C}$. With this convention we may and will write from now on $Y_\mathbb{C}$, respectively $Z_\mathbb{C}$, for the complex variety we would otherwise denote by $\sigma_0Y_{\hat{L}}$, respectively $\sigma_0Z_{\hat{L}}$.
For the above $\sigma$, since $Y$ and hence also $Z$ are defined over the field $L$, by the above remarks $H^{2n}_{DR}(\sigma Z_{\hat{L}}/\mathbb{C})$ may be identified with $H^{2n}_{DR}(Z_{\mathbb{C}}/\mathbb{C})$. Via this identification $\tilde{\phi}_d$ will get mapped to $\sigma^{*}(\tilde{\phi}_d)\in H^{2n}_{DR}(Z_{\hat{L}}/\hat{L})$. Here $\sigma^{*}: H^{2n}_{DR}(Z_{\hat{L}}/\hat{L})\rightarrow H^{2n}_{DR}(Z_{\hat{L}}/\hat{L})$ denotes the isomorphism of $L$-vector spaces induced by $\sigma \in\gal(\hat{L}/L)$ on cohomology.
Now, since $Y$ and $Z$ are both defined over the field $L$, both the Poincar\'e duality isomorphism and the K\"unneth formula on de Rham cohomology are defined over the field $L$ as well. These maps, by construction, commute with the isomorphisms $\sigma^{*}$ so we get that $\sigma^{*}(\tilde{d})$ maps to $\sigma^{*}(\tilde{\phi}_d)\in H^{2n}(Z_\mathbb{C}/\mathbb{C})$ via the map $\alpha_{DR}$ we had in the proof of \ref{propendodr}.
Writing $P$ for Grothendieck's comparison isomorphism we have that $P(\sigma^{*}(\tilde{d}))\in D\subset \End_{\mathbb{Q}} H^n(Y^{an}_\mathbb{C},\mathbb{Q})$ is a Hodge endomorphism. Thus $\sigma^{*}(\tilde{d})\in i(D)$ with the notation of \ref{propendodr} and the result follows.
\end{proof}
By the claim we therefore get an action of $G:=\gal(\hat{L}/L)$ on the $\mathbb{Q}$-vector space $D$, or more precisely on its image in $\End_{\hat{L}}(H^{n}_{DR} (Y_{\hat{L}}/ \hat{L}))$. Let $\dim_{\mathbb{Q}}D= m_0$ and note that trivially $m_0\leq m^2$. We may and do assume, without loss of generality, that the field extension $\hat{L}/L$ constructed in the previous proof is minimal with the property that every class $\tilde{d}$, for $d$ in the above basis $\beta$, is defined over $\hat{L}$. This implies that the corresponding group homomorphism $\gal(\hat{L}/L)\rightarrow \aut(D)$ is in fact injective.
Let $\Lambda_0$ be a lattice in $D$, and consider $\Lambda := \Sum{g\in G}{} g(\Lambda_0)$. This is a lattice in $D$ that is invariant under $G$. From the $G$-invariance of $\Lambda$ we get a group homomorphism\begin{center}
$ G\rightarrow \GL(\Lambda )$.
\end{center}This homomorphism will be injective as well by our earlier assumption about the minimality of the extension $\hat{L}/L$.
Let $N\geq 3$. Then, it is known that the kernel of the surjective map $\GL(\Lambda)\rightarrow \GL(\Lambda/N\Lambda)$ contains no nontrivial element of finite order of the group $\GL(\Lambda)$. As a result we get an injection $G\hookrightarrow \GL(\Lambda/N\Lambda)$, which implies that $|G|$ divides $|\GL(\Lambda/N\Lambda)|=|\GL_{m_0}(\mathbb{Z}/N\mathbb{Z})|$.
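We note, as an aside justifying the hypothesis $N\geq 3$, the following elementary example for $N=2$: one has
\begin{center}
$-I_{m_0}\in \ker\big(\GL(\Lambda)\rightarrow \GL(\Lambda/2\Lambda)\big)$ and $(-I_{m_0})^2=I_{m_0}$,
\end{center}
so for $N=2$ the kernel of the reduction map does contain non-trivial elements of finite order.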
Following the notation of \cite{silverberg} we let $g_r(N):=|\GL_{r}(\mathbb{Z}/N\mathbb{Z})|$ and $G(r):=\gcd \{ g_r(N):N\geq 3 \}$. From Theorem $3.1$ of \cite{silverberg} we have that \begin{equation}\label{eq:silverbound}
G(r)< ((6.31)r)^r.
\end{equation}
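As a quick sanity check of \eqref{eq:silverbound} in the smallest case $r=1$: here $g_1(N)=|\GL_1(\mathbb{Z}/N\mathbb{Z})|=\varphi(N)$, Euler's totient, and since $\varphi(N)$ is even for $N\geq 3$ while $\varphi(3)=2$, we get
\begin{center}
$G(1)=\gcd\{\varphi(N):N\geq 3\}=2<6.31=((6.31)\cdot 1)^1$.
\end{center}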
From the above argument we get that $|G|$ divides $G(m_0)$. Combining this with \eqref{eq:silverbound}, the fact that $m_0\leq m^2$, and the fact that $r\mapsto ((6.31)r)^r$ is increasing, we get that \begin{equation}
|G|< ( (6.31) m^2)^{m^2}.
\end{equation}
\end{proof}
\section{The action of the Local Monodromy}\label{section:monodromy}
We start by reviewing a key property of the local monodromy that we will need during this process. This follows the ideas in Chapter X, Lemma $2.3$ of \cite{andre1989g}.\\
Let $\Delta$ be a small disc embedded in $S'^{an}_\mathbb{C}$, centered at $s_0$, and such that $\Delta^{*}\subset S^{an}_\mathbb{C}$. We have already remarked in \ref{section:periodsandgfunctions} that the logarithm of the local monodromy of $\Delta^{*}$ acting on $R_n(f^{an}_{\mathbb{C}})_{*}(\mathbb{Q})|_{\Delta^{*}}$ defines the local subsystem $\mathcal{M}_0:=M_0 R_n(f^{an}_\mathbb{C})_{*}(\mathbb{Q})|_{\Delta^{*}}$. This is contained in the maximal constant subsystem of $R_n(f^{an}_{\mathbb{C}})_{*}(\mathbb{Q})|_{\Delta^{*}}$, since $2\pi iN^{*}$, the nilpotent logarithm associated with the action of monodromy on the limit Hodge structure, has degree of nilpotency $\leq n+1$.
We recall that, since the map $f:X\rightarrow S$ is smooth and projective, we have a bilinear form $\langle,\rangle$ on the local system $R_nf^{an}_{*} \mathbb{Q}$ induced by the polarizing form.
\begin{lemma}\label{maxisotropic}
The local system $\mathcal{M}_0$ is a totally isotropic subsystem of $R_nf^{an}_{*}\mathbb{Q}|_{\Delta^{*}}$ with respect to the polarizing form $\langle,\rangle$.
\end{lemma}
\begin{proof}The skew-symmetric form $\langle, \rangle $ defines a morphism of local systems \begin{center}
$R_nf^{an}_{*} \mathbb{Q}|_{\Delta^{*}}\otimes R_nf^{an}_{*} \mathbb{Q}|_{\Delta^{*}}\rightarrow \mathbb{Q}(n)|_{\Delta^{*}}$.
\end{center}Therefore it is invariant under the local monodromy and we conclude that for any $z\in\Delta^{*}$ and for all $v,w\in (R_nf^{an}_{*} \mathbb{Q})_z$ we have \begin{equation}\label{eq:localmonodromypolar}
\langle N^{*}_z v,w\rangle +\langle v,N^{*}_z w\rangle =0.
\end{equation}
Now let $v,w$ be any two sections of $\mathcal{M}_0$. Then for any $z\in \Delta^{*}$ there exist $v_{0,z},w_{0,z}\in (R_nf^{an}_{*} \mathbb{Q})_z$ such that $v_z=(2\pi iN_z^{*})^n(v_{0,z})$ and $w_z=(2\pi iN_z^{*})^n(w_{0,z})$. Using \eqref{eq:localmonodromypolar} we thus get \begin{center}
$\langle v_z,w_z\rangle= \langle (2\pi iN_z^{*})^n(v_{0,z}),(2\pi iN_z^{*})^n(w_{0,z})\rangle =$
$=-\langle (2\pi iN_z^{*})^{n-1}(v_{0,z}),(2\pi iN_z^{*})^{n+1}(w_{0,z})\rangle =0$,\end{center}where the last equality follows from the fact that $N_z^{*}$ has degree of nilpotency $\leq n+1$.
Therefore we get that for all $v,w\in \mathcal{M}_0$ we have $\langle v,w\rangle =0$. Hence $\mathcal{M}_0$ is a totally isotropic local subsystem.\end{proof}
\section{Trivial relations}\label{section:trivialrelations}
\subsection{Our setting and notations}\label{section:subsectionnotationsnontrivial}
Let $f:X\rightarrow S$ be a smooth projective morphism of $k$-varieties, where $k$ is a subfield of $\bar{\mathbb{Q}}$. We also fix an embedding $\bar{\mathbb{Q}}\hookrightarrow \mathbb{C}$ so that we may consider $k$ as a subfield of $\mathbb{C}$. Assume that $S$ is a smooth irreducible curve, that the fibers of $f$ are $n$-dimensional, and let $\mu:=\dim_{\mathbb{Q}} H^n(X^{an}_s,\mathbb{Q})$ for some $s\in S(\mathbb{C})$. Throughout this section we assume that $n$ is \textbf{odd}, that $S=S'\backslash\Sigma_S$ for some finite subset $\Sigma_S$, and that there exists a $k$-point $s_0\in\Sigma_S$ at which our VHS has a non-isotrivial degeneration.
We consider\begin{center}
$P^n_{X/S}:H^n_{DR}(X/S)\otimes_{\mathcal{O}_S}\mathcal{O}_{S^{an}_{\mathbb{C}}}\rightarrow R^nf^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}} \otimes_{\mathbb{Q}_{S^{an}_{\mathbb{C}} }} \mathcal{O}_{S^{an}_{\mathbb{C}}}$,
\end{center} the relative period isomorphism.
\subsubsection*{Choosing bases-The Riemann relations}
Let $\omega_i$, $1\leq i\leq \mu$, be a basis of $H^n_{DR}(X_{\eta})$ over $k(S)$, where $\eta$ is the generic point of $S$. Then there exists some dense affine open subset $U$ of $S$ over which these $\omega_i$ are sections of the sheaf $H^n_{DR}(X/S)$. We also fix a trivialization $\gamma_i$ of $R_nf_{*}^{an}\mathbb{Q}_{X^{an}_{\mathbb{C}}}$, i.e. the relative homology, over an analytic open subset $V$ of $U^{an}_{\mathbb{C}}$. Since we are interested in describing the relations among the periods archimedeanly close to the point of degeneration $s_0$, we may and do assume that the set $V$ is simply connected and contained in a fixed small punctured disk $\Delta^{*}$ around $s_0$.
The matrix of $P^n_{X/S}$ with respect to this basis and trivialization will have entries in the ring $\mathcal{O}_V$. We multiply the entries of this matrix by $(2\pi i)^{-n}$ and, by abuse of notation, denote the resulting $\mu\times \mu$ matrix of relative $n$-periods by
\begin{center}
$P_{X/S}:=\left((2\pi i)^{-n} \int_{\gamma_j} {\omega_i}\right)$.
\end{center}
Since the morphism $f:X\rightarrow S$ is smooth, projective, and is also defined over $k$, it defines a polarization which will be defined over $k$ as a form on de Rham cohomology. In particular we get, since the weight $n$ of our variation is odd,\begin{itemize}
\item a skew-symmetric form $\langle,\rangle_{DR}$ on $H^n_{DR}(X_\eta)$ with values in $k(S)$ and
\item a skew-symmetric form $\langle,\rangle_B=(2\pi i)^{n} \langle,\rangle$ on $R_nf^{an}_{*} \mathbb{Q}$ with values in $\mathbb{Q}(n)$.
\end{itemize}
These two skew-symmetric forms are compatible with the isomorphism $P^n_{X/S}$, in the sense that the dual form of $\langle , \rangle_B$ coincides with the form induced by $\langle,\rangle_{DR}$ via the isomorphism $P^n_{X/S}$.
The compatibility of the polarizing forms translates to relations among the periods. These relations can be described succinctly by the equality\begin{equation}\label{eq:riemanrelationsnontrivialitychapter}
\prescript{t}{}{P} M_{DR}^{-1} P= (2\pi i)^{-n}M_B^{-1},
\end{equation}where $M_{DR}$ and $M_B$ are the matrices of $\langle,\rangle_{DR}$ and the dual of $\langle,\rangle_{B}$ respectively with respect to some basis and trivialization.
For more on this see \ref{appendixpolarizations}. The relations given on the periods by \eqref{eq:riemanrelationsnontrivialitychapter} are practically a direct consequence of the well known Hodge-Riemann bilinear relations defining a polarization of a Hodge structure. For this reason from now on we shall refer to \eqref{eq:riemanrelationsnontrivialitychapter} as the \textbf{Riemann relations} for brevity.
With this in mind, we may and do select the above basis $\omega_i$ and trivialization $\gamma_j$ so that the following are satisfied:\begin{itemize}
\item the $\omega_i$ form a symplectic basis of $H^n_{DR}(X_{\eta})$, such that $\omega_1,\ldots, \omega_{\mu/2}$ constitute a basis of the maximal isotropic subspace $F^{\frac{n+1}{2}}H^n_{DR}(X_\eta)$ and the remaining elements $\omega_{\mu/2+1},\ldots, \omega_\mu$ form a basis of a Lagrangian transverse to $F^{\frac{n+1}{2}} H^n_{DR}(X_\eta)$, and
\item the $\gamma_j$ form a symplectic trivialization of $R_nf^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}}|_V$, such that $\gamma_1,\ldots,\gamma_h$ are a frame of the space $\mathcal{M}_0|_{V} $ and $\gamma_1,\ldots, \gamma_{\mu/2}$ are a frame of a maximal totally isotropic subsystem that contains $M_0R_nf^{an}_{*}\mathbb{Q}|_V$.
\end{itemize}
With these choices we may and do assume from now on that the matrices that correspond to the two aforementioned forms are
$M_{DR}=M_B=J_\mu=\begin{pmatrix}
0&-I\\
I&0
\end{pmatrix}$.
With this \eqref{eq:riemanrelationsnontrivialitychapter} translates to \begin{equation}\label{eq:riemannrelationodd}
\prescript{t}{}{P} J_\mu P= (2\pi i)^{-n}J_\mu.
\end{equation}
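To make \eqref{eq:riemannrelationodd} concrete in the smallest case $\mu=2$: for any $2\times 2$ matrix $P$ one checks directly that $\prescript{t}{}{P}J_2P=\det(P)\, J_2$, so \eqref{eq:riemannrelationodd} amounts to the single relation
\begin{center}
$\det P_{X/S}=(2\pi i)^{-n}$, i.e. $\int_{\gamma_1}\omega_1\int_{\gamma_2}\omega_2-\int_{\gamma_2}\omega_1\int_{\gamma_1}\omega_2=(2\pi i)^{n}$,
\end{center}
which for $n=1$ is a twisted analogue of the classical Legendre relation for elliptic curves.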
\subsubsection*{The main result}
Let $y_{i,j}$, with $1\leq i\leq \mu$ and $1\leq j\leq h$, be the entries of the first $h$ columns of the matrix $P_{X/S}$. The aforementioned work of Andr\'e, see \ref{existence}, guarantees that these are G-functions. This holds with respect to a local parameter $x$ of $S'$ at $s_0$, in terms of which the $y_{i,j}$ can be written as power series.
For the remainder of this section we consider the above notation fixed. The rest of this section is dedicated to describing the generic, or ``trivial'', relations among the G-functions $y_{i,j}$. Indeed, we prove the following:
\begin{prop}\label{goalnontriviality} With the above notation, assume that the generic special Mumford-Tate group of the variation of $\mathbb{Q}$-HS on $S^{an}_{\mathbb{C}}$ given by $R^nf^{an}_{*} \mathbb{Q}_{X^{an}_{\mathbb{C}}}$ is $Sp(\mu,\mathbb{Q})$.
Then, the Zariski closure of the $\mu\times h$ matrix $Y:=(y_{i,j})$ over $\bar{\mathbb{Q}}[x]$ in $\mathbb{A}^{\mu\times h}$ is the variety whose ideal is given by the Riemann relations.
\end{prop}
\subsection{Trivial relations over $\mathbb{C}$ for the period matrix}
Under the notations and assumptions of \ref{section:subsectionnotationsnontrivial} and \ref{goalnontriviality} we have the following:
\begin{lemma}\label{monoatgeneric}
Let $z\in V\subset U^{an}$ be a Hodge generic point of the $\mathbb{Q}$-VHS given by $R^nf^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}}$. Then the monodromy group $H_z$ at $z$ is $Sp(\mu,\mathbb{Q})$.
\end{lemma}
\begin{proof} Let $\rho_H:\pi_1(S^{an},z)\rightarrow GL(H^n(X^{an}_z,\mathbb{Q}))$ be the monodromy representation at $z$. Then, by Andr\'e's Theorem of the fixed part \cite{andrefixed} we know that $H_z$, which is the connected component of the $\mathbb{Q}$-algebraic group $\rho_H(\pi_1(S^{an},z))^{\mathbb{Q}-Zar}$, is a normal subgroup of the derived subgroup of the Mumford-Tate group $G_{mt,z}$ at $z$. In other words \begin{center}
$H_z\trianglelefteq DG_{mt,z}$.
\end{center}
On the other hand we have that $DG_{mt,z}\leq G_{smt,z}$ and trivially that $DG_{smt,z}\leq DG_{mt,z}$, where $G_{smt,z}$ is the special Mumford-Tate group at $z$. But, by assumption, we know that $G_{smt,z}\simeq Sp(\mu,\mathbb{Q})$, since $z$ is Hodge generic for our variation. It is classical that $Sp(\mu,\mathbb{Q})$ satisfies $DSp(\mu,\mathbb{Q})=Sp(\mu,\mathbb{Q})$. Hence we have $DG_{mt,z}= Sp(\mu,\mathbb{Q})$.
We thus get that $H_z\trianglelefteq Sp(\mu,\mathbb{Q})$. Finally, $Sp(\mu,\mathbb{Q})$ is a simple $\mathbb{Q}$-algebraic group, therefore $H_z=1$ or $H_z =Sp(\mu,\mathbb{Q})$. But if we had $H_z=1$, then the variation of $\mathbb{Q}$-HS in question would be isotrivial, and would hence extend to $T=S^{an}\cup \{s_0\}$. This gives a contradiction, since the local monodromy at $s_0\in S'(\mathbb{C})$ is non-trivial by assumption.\end{proof}
From now on, by taking a finite \'etale cover of $S$ if necessary, we may and do assume that $\rho_H(\pi_1(S^{an},z))^{\mathbb{Q}-Zar}$ is connected, i.e. that for the Hodge generic points $z\in V$ we have $H_z=\rho_H(\pi_1(S^{an},z))^{\mathbb{Q}-Zar}$.
\subsubsection{The matrix of Periods and differential equations}
Let us denote by $M_{\mu}$ the variety of $\mu\times \mu$ matrices over $\mathbb{C}$, where, as before, $\mu:=\dim_{\mathbb{Q}}H^n(X^{an}_{s},\mathbb{Q})$ for any $s\in S(\mathbb{C})$.
The period matrix $P_{X/S}$ defines a holomorphic map\begin{center}
$\phi :V\rightarrow M_{\mu}$.
\end{center}We let $Z\subset V\times M_{\mu}$ be the graph of this function. The first step in our process is determining the $\mathbb{C}$-Zariski closure of $Z$.
\begin{lemma}\label{czarclosure}Let $Z$ be as above. Then the $\mathbb{C}$-Zariski closure of $Z$ is\begin{center}$S_{\mathbb{C}}\times \{M: \prescript{t}{}{M}J_{\mu} M=(2\pi i)^{-n} J_\mu\}$.
\end{center}
\end{lemma}
In order to prove this we will employ the monodromy action in an essential way. For this purpose we will need to review some further properties of the isomorphism $P^n_{X/S}$.
To this end, let us consider\begin{center}
$Q^n_{X/S} : R^nf^{an}_{*}\mathbb{C}_{X^{an}}\otimes_{\mathbb{C}_{S^{an}}} \mathcal{O}_{S^{an}} \overset{\sim}{\rightarrow}H^n_{DR} (X/S)\otimes_{\mathcal{O}_S}\mathcal{O}_{S^{an}}$,
\end{center}the inverse of $P^n_{X/S}$.
It is known, see \cite{katzalgsol} Prop.$4.1.2$, that this isomorphism restricts to an isomorphism of local systems\begin{center}
$Q:R^nf^{an}_{*} \mathbb{C}_{X^{an}}\overset{\sim}{\rightarrow} \mathbb{R}^nf^{an}_{*}\Omega^{\bullet}_{X^{an}_\mathbb{C}/\mathbb{C}}\overset{\sim}{\rightarrow} (H^n_{DR}(X/S) \otimes_{\mathcal{O}_S}\mathcal{O}_{S^{an}})^{\nabla}$
\end{center}where $(H^n_{DR}(X/S) \otimes_{\mathcal{O}_S}\mathcal{O}_{S^{an}})^{\nabla}\subset \mathbb{R}^{n}f^{an}_{*} \Omega^{\bullet}_{X^{an}/S^{an}}$ is the local system of horizontal sections with respect to the Gauss-Manin connection.
Note that we have an inclusion of local systems $R^nf^{an}_{*} \mathbb{Q}\hookrightarrow R^nf^{an}_{*} \mathbb{C}$ on $S^{an}$. This leads to a commutative diagram \begin{center}
\begin{tikzcd}
\pi_1(S^{an},z)\arrow[r, "\rho_{H,\mathbb{C}}"] \arrow[d, "\rho_H"'] & \GL(H^n(X^{an}_z,\mathbb{C}))\\
\GL(H^n(X^{an}_z,\mathbb{Q})) \arrow[ru, hook] &
\end{tikzcd}\end{center}
for any point $z\in S^{an}$.
In particular, we get, under our assumptions on the connectedness of the group $(\rho_H(\pi_1(S^{an},z)))^{\mathbb{Q}-Zar}$, that the group $G_{mono,z}:=(\rho_{H,\mathbb{C}}(\pi_1(S^{an},z)))^{\mathbb{C}-Zar}$, i.e. the $\mathbb{C}$-Zariski closure of the image of the fundamental group under $\rho_{H,\mathbb{C}}$, is such that\begin{equation}\label{eq:monocomp1}
G_{mono,z}= H_z\otimes_{\mathbb{Q}}\mathbb{C}.
\end{equation}
Earlier we saw that we have an isomorphism $Q$ of local systems over $S^{an}$.
By the equivalence of categories between local systems over $S^{an}$ and representations of the fundamental group $\pi_1(S^{an},z)$ we thus have that the representations \begin{center}
$\rho_{H,\mathbb{C}}:\pi_1(S^{an},z) \rightarrow \GL(H^n(X^{an}_z,\mathbb{C}))$, and
$\rho_{DR}:\pi_1(S^{an},z) \rightarrow \GL((H^n_{DR} (X/S)\otimes_{\mathcal{O}_S} \mathcal{O}_{S^{an}})^{\nabla})$,
\end{center}are conjugate. In fact, keeping in mind that all actions are on the right, we have that $\rho_{DR}(\lambda)= Q(z)^{-1} \rho_{H,\mathbb{C}} (\lambda) Q(z)$, for all $\lambda\in \pi_1(S^{an},z)$, where $Q(z)$ is the fiber of $Q$ at $z$. From this we get that
\begin{equation}\label{eq:conjugatemono}
G_{DR,z}:=(\rho_{DR}(\pi_1(S^{an},z)))^{\mathbb{C}-Zar}=Q(z)^{-1} G_{mono,z}Q(z).
\end{equation}
Let $B$ be the matrix of the isomorphism $Q|_{V}$ with respect to the frame $\{\gamma_j^{*}:1\leq j\leq \mu\}$ of the trivialization of $R^nf^{an}_{*}\mathbb{Q}|_V\subset R^nf^{an}_{*}\mathbb{C}|_V$, i.e. the dual of the frame given by the $\gamma_j$ on $R_nf^{an}_{*}\mathbb{Q}|_V$, and the basis $\{\omega_i:1\leq i\leq \mu\}$ chosen above. We then have that the rows $b_i$ of $B$, which correspond to $Q|_V(\gamma_i^{*})$ written in the basis $\omega_i$, constitute a basis of the space $\Gamma(V, (H^n_{DR}(X/S)\otimes_{\mathcal{O}_S}\mathcal{O}_{S^{an}})^{\nabla})$. In other words $B$ is a complete solution of the differential equation $\nabla(\omega)=0$ defined by the Gauss-Manin connection. We note that in our setting the Gauss-Manin connection is known to be defined over the field $k$ by work of Katz and Oda; see \cite{katzoda} and \cite{katznilpo}.
Let $\Gamma\in M_\mu(k(S))$ be the (local) matrix of $\nabla$ on $U$ with respect to the basis given by the $\omega_i$. Writing $\nabla(\omega)=d\omega+\omega\Gamma$, and identifying $\omega$ with the $1\times \mu$ matrix given by the coefficients of $\omega$ in the basis given by the $\omega_i$, we may rewrite the equation $\nabla(\omega)=0$ as $d\omega=-\omega\Gamma$. The corresponding matricial differential equation then becomes\begin{equation}\label{eq:matricialde}
X'=-X\Gamma.
\end{equation}
The monodromy representation $\rho_{DR}$ defines analytic continuations of solutions at $z$ of the differential equation \eqref{eq:matricialde}. So, in considering the value at the point $z$ of the analytic continuation $B^{\lambda}$ of the matrix $B$ along the cycle $\lambda\in\pi_1(S^{an},z)$, corresponding to a loop $\gamma$ passing through $z$, all we are doing is multiplying the matrix $B_z$ by $\rho_{DR}(\lambda)$. In other words, for $\lambda\in \pi_1(S^{an},z)$ we have\begin{equation}\label{eq:derhamaction}
(B^{\lambda})_z= B_z\rho_{DR}(\lambda).\end{equation}
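An elementary rank-one example, outside our geometric setting, illustrates \eqref{eq:derhamaction}: for $\Gamma=\frac{a}{x}$ with $a\in\mathbb{C}$, the equation \eqref{eq:matricialde} on a punctured disc becomes $X'=-\frac{a}{x}X$, with solution $B=x^{-a}=e^{-a\log x}$. Continuing $B$ along the loop $\lambda$ winding once counterclockwise around the origin replaces $\log x$ by $\log x +2\pi i$, so that
\begin{center}
$(B^{\lambda})_z=e^{-2\pi ia}B_z$, i.e. $\rho_{DR}(\lambda)=e^{-2\pi ia}$.
\end{center}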
We apply the ideas presented in the above discussion to prove the following lemma.
\begin{lemma}\label{lemmancont}
Let $A$ be the matrix of the isomorphism $P^n_{X/S}$ on the open analytic set $V$ with respect to the basis $\omega_i$ and the frame $\gamma_j^{*}$ chosen above. Let $z\in V$ and let $\lambda \in \pi_1(S^{an},z)$. Then the value at $z$ of the analytic continuation $A^{\lambda}$ of $A$ along the loop that corresponds to $\lambda$ is given by \begin{center}
$(A^{\lambda})_z =A_z \rho_{H,\mathbb{C}}(\lambda)^{-1} $,
\end{center}where $\rho_{H,\mathbb{C}}$ is the above representation on Betti cohomology.\end{lemma}
\begin{proof}
We have $A\cdot B=I_\mu$ hence $A^{\lambda}\cdot B^{\lambda}=I_\mu$. Using \eqref{eq:derhamaction} we get that $(A^{\lambda})_z=\rho_{DR}(\lambda)^{-1} B_z^{-1} = \rho_{DR}(\lambda)^{-1}A_z$.
On the other hand, with the above notation we have that $\rho_{DR}(\lambda) =B_z^{-1} \rho_{H,\mathbb{C}}(\lambda)B_z$. This combined with the above leads to the result.\end{proof}
\begin{rmk}
The same relation holds for the value $P_{X/S}(z)$ at $z$ of the matrix of relative periods $P_{X/S}$, since $P_{X/S}=(2\pi i)^{-n} A$.
\end{rmk}
We are now in the position to prove \ref{czarclosure}.
\begin{proof}[Proof of \ref{czarclosure}]
Let $Z\subset V\times M_\mu\subset S_{\mathbb{C}} \times M_\mu$ be the graph of the isomorphism $P_{X/S}|_V$. Let $\tilde{Z}$ be the union of the graphs of all possible analytic continuations of $Z$. It is easy to see via analytic continuation that we have $(\tilde{Z})^{\mathbb{C}-Zar} =Z^{\mathbb{C}-Zar}$. We also note that for all $z\in V$ we have $(\tilde{Z}_z)^{\mathbb{C}-Zar}\subset (\tilde{Z}^{\mathbb{C}-Zar})_z$ for trivial reasons.
We focus on the points $z\in V$ that are Hodge generic for the variation of $\mathbb{Q}$-HS given by $R^nf^{an}_{*} \mathbb{Q}_{X^{an}_{\mathbb{C}}}|_V$. We note that the set of such $z$ in $V$, which we denote by $V_{Hgen}$, is uncountable.
By \ref{lemmancont}, and the fact that the rows of the matrix $B$ above are a complete solution of the differential system \eqref{eq:matricialde}, we know that $\tilde{Z}_z=P_{X/S}(z) \rho_{H,\mathbb{C}} (\pi_1(S^{an},z)) $. From this we get that $(\tilde{Z}_z)^{\mathbb{C}-Zar}= P_{X/S}(z)G_{mono,z} $.
From \eqref{eq:monocomp1} we know that $G_{mono,z}=H_z\otimes_{\mathbb{Q}}\mathbb{C}$ while from \ref{monoatgeneric} we know that, since $z\in V_{Hgen}$, we have $H_z\simeq Sp(\mu,\mathbb{Q})$, hence $G_{mono,z}\simeq Sp(\mu,\mathbb{C})$. Hence we have $(\tilde{Z}_z)^{\mathbb{C}-Zar}= P_{X/S}(z)Sp(\mu, \mathbb{C})$.
Using \eqref{eq:riemannrelationodd} together with the above we arrive through elementary reasoning to\begin{equation}\label{eq:tildefiber}
(\tilde{Z}_z)^{\mathbb{C}-Zar} = \{ M\in \GL_\mu(\mathbb{C}) : \prescript{t}{}{M} J_\mu M =(2\pi i)^{-n} J_\mu\}.
\end{equation}Applying this to the fact that for all $z\in V$ we have $(\tilde{Z}_z)^{\mathbb{C}-Zar} \subset (\tilde{Z}^{\mathbb{C}-Zar})_z=(Z^{\mathbb{C}-Zar})_z$, we get that \begin{equation}\label{eq:almostthere}
V_{Hgen} \times \{ M\in \GL_\mu(\mathbb{C}) : \prescript{t}{}{M} J_\mu M =(2\pi i)^{-n} J_\mu \} \subset Z^{\mathbb{C}-Zar}.
\end{equation}Now, using the fact that $V_{Hgen}$ is uncountable and taking Zariski closures in \eqref{eq:almostthere} we get that \begin{center}
$S_{\mathbb{C}}\times \{ M\in \GL_\mu(\mathbb{C}) : \prescript{t}{}{M}J_\mu M =(2\pi i)^{-n} J_\mu \} \subset Z^{\mathbb{C}-Zar}$.
\end{center}
On the other hand, once again from \eqref{eq:riemannrelationodd}, we know that \begin{center}
$Z\subset S_{\mathbb{C}}\times \{ M\in \GL_\mu(\mathbb{C}) : \prescript{t}{}{M} J_\mu M =(2\pi i)^{-n} J_\mu \}$
\end{center} which, by once again taking Zariski closures, gives the reverse inclusion.
\end{proof}
\subsection{Trivial relations over $\mathbb{C}$ for the G-functions}
As we remarked earlier, the entries of the first $h$ columns of our matrix $P_{X/S}$ are G-functions, under our choice of basis and trivialization. Let us denote by $y_{i,j}$ these entries and by $Y$ the respective $\mu\times h$ matrix they define. Consider the projection map $\pr :M_\mu\rightarrow \mathbb{A}^{\mu \times h}$ that maps a matrix $(a_{i,j})\in M_{\mu}$ to the $\mu\times h$ matrix that consists of its first $h$ columns. This maps $P_{X/S}$ to $Y$.
\begin{lemma}\label{gfunczar}Let $T$ be the subvariety of $\mathbb{A}^{\mu\times h}$ defined by the following set of polynomials \begin{center}
$\{ \prescript{t}{ }{b_i} J_\mu b_j: 1\leq i,j \leq h\}$,
\end{center}where $b_i$ denotes the $i$-th column of a matrix of indeterminates.
Then $Y^{\mathbb{C}(S)-Zar}=T_{\mathbb{C}(S)}$.
\end{lemma}
\begin{proof}
Let $Z_Y\subset V\times M_{\mu\times h}(\mathbb{C})$ denote the graph of $Y$ as a function $Y:V\rightarrow M_{\mu\times h}(\mathbb{C})$. It suffices to show that $Z_Y^{\mathbb{C}-Zar} =S\times T$.
The inclusion $Z_Y^{\mathbb{C}-Zar} \subset S\times T$ follows trivially from \eqref{eq:riemannrelationodd}, which shows that $Z_Y\subset V\times T(\mathbb{C})$. On the other hand, we have that the map $\id_S\times \pr: S\times M_\mu \rightarrow S\times \mathbb{A}^{\mu\times h}_{\mathbb{C}}$ is topologically closed with respect to the Zariski topology. This implies that $Z_Y^{\mathbb{C}-Zar} = (\id\times\pr ) (Z^{\mathbb{C}-Zar})$.
By construction, the columns $c_i$ of any $\mu\times h$ matrix $C\in T(\mathbb{C})$ form a basis of an isotropic subspace of dimension $h$ with respect to the symplectic form defined by $J_{\mu}$ on $\mathbb{C}^{\mu}$. It is easy to see that we can extend this set of vectors to a basis $\{c_j:1\leq j\leq \mu \}$ of $\mathbb{C}^\mu$ that satisfies\begin{enumerate}
\item $\prescript{t}{}{c_i} J_\mu c_j=0$ for all $i,j$ with $|i-j|\neq \mu/2$, and
\item $\prescript{t}{}{c_i} J_\mu c_j=(2\pi i)^{-n}$ for $i=j+\mu/2$.
\end{enumerate}In other words, this is a symplectic basis ``twisted'' by a factor $(2\pi i)^{-n/2}$. The $\mu\times \mu$ matrix with columns $c_i$ will then be such that $(s,M_C)\in Z^{\mathbb{C}-Zar}$ by \ref{czarclosure} and by construction $\pr(M_C)=C$.
Combining the above with the fact that $Z_Y^{\mathbb{C}-Zar} = (\id\times\pr ) (Z^{\mathbb{C}-Zar})$ we have that $S\times T\subset Z_Y^{\mathbb{C}-Zar}$ and our result follows.
\end{proof}
\subsection{Trivial relations over $\bar{\mathbb{Q}}$}
So far we have not used any arithmetic information about the $y_{i,j}$, namely the fact that they are G-functions.
Let $\xi\in \bar{\mathbb{Q}}$. Then a trivial polynomial relation with coefficients in $\bar{\mathbb{Q}}$ at the point $\xi$ among the $y_{i,j}\in \bar{\mathbb{Q}}[[x]]$ is a relation that satisfies the following:\begin{enumerate}
\item there exists $p(x_{i,j})\in \bar{\mathbb{Q}}[x_{i,j}]$ such that the relation we have is of the form $p(y_{i,j}(\xi))=0$,
\item the relation holds $v$-adically for some place $v$ of $\bar{\mathbb{Q}}$, i.e. letting $i_v:\bar{\mathbb{Q}}\hookrightarrow \bar{\mathbb{Q}}_v$ we have that the $y_{i,j}$ converge at $i_v(\xi)$ and the above relation is an equality in $\bar{\mathbb{Q}}_v$,
\item there exists a polynomial $q(x)(x_{i,j})\in \bar{\mathbb{Q}}[x][x_{i,j}:1\leq i\leq \mu,1\leq j\leq h]$ of the same degree as $p$ with respect to the $x_{i,j}$, such that $q(\xi)(x_{i,j})=p(x_{i,j})$.
\end{enumerate}
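To fix ideas, here is a schematic instance of conditions $(1)$--$(3)$ with hypothetical series $y_{1,1},y_{1,2}\in\bar{\mathbb{Q}}[[x]]$, not the G-functions of our variation:

```latex
% Hypothetical example of a trivial relation of degree 1 at \xi.
% Suppose the functional identity below holds in \bar{\mathbb{Q}}[[x]]:
\[
q(x)(x_{1,1},x_{1,2}) := x\,x_{1,1}-x_{1,2},
\qquad x\,y_{1,1}(x)-y_{1,2}(x)=0 .
\]
% Then p := q(\xi) = \xi\,x_{1,1}-x_{1,2} satisfies
% p(y_{1,1}(\xi),y_{1,2}(\xi)) = 0 in \bar{\mathbb{Q}}_v at every place v
% for which the y_{1,j} converge at i_v(\xi), so all three conditions hold.
```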
Therefore, to describe the trivial relations among the values of our G-functions $y_{i,j}$ at some $\xi\in\bar{\mathbb{Q}}$, it is enough to determine the $\bar{\mathbb{Q}}[x]$-Zariski closure of the matrix $Y$. We do this in the following lemma, which is essentially a more detailed rephrasing of \ref{goalnontriviality}.
\begin{lemma}\label{trivialrelationsfinal} Let $Y$ be the $\mu\times h$ matrix defined above. Then the ${\bar{\mathbb{Q}}[x]}$-Zariski closure $Y^{\bar{\mathbb{Q}}[x]-Zar}$ of $Y$ is the subvariety of $\mathbb{A}^{\mu\times h}_{\bar{\mathbb{Q}}[x]}$ defined by the following set of polynomials\begin{center}
$\{ \prescript{t}{ }{b_i} J_\mu b_j : 1\leq i,j \leq h\}$,
\end{center}where $b_i$ denotes the $i$-th column of a matrix of indeterminates.
\end{lemma}
\begin{proof}We let $\Sigma$ be the set of polynomials above and let $I_R$ be the ideal generated by $\Sigma$ in the ring $R[x_{i,j}]$, where $R$ denotes various coefficient rings in the course of the proof.
From \ref{gfunczar} we know that $Y^{\mathbb{C}(S)-Zar}$ is equal to $V(I_{\mathbb{C}(S)})$. Note that the elements of $\Sigma$ all have coefficients in $\bar{\mathbb{Q}}[x]$, in fact in $\bar{\mathbb{Q}}$, so the same polynomials cut out the $\bar{\mathbb{Q}}[x]$-Zariski closure. In other words, $Y^{\bar{\mathbb{Q}}[x]-Zar} =V(I_{\bar{\mathbb{Q}}[x]})$. \end{proof}
\begin{rmk}
Implicit in the previous proof is the fact that we have a polarization that is defined over $k\subset \bar{\mathbb{Q}}$ as a cycle in some de Rham cohomology group.\end{rmk}
\section{Towards relations for exceptional points}\label{section:pseudocmrelations}
Let $f:X\rightarrow S$ be a G-admissible variation of $\mathbb{Q}$-HS. We start with the following definition.
\begin{defn}
Let $s\in S(\bar{\mathbb{Q}})$. Assume that in the decomposition of $V_s$ into irreducible $\mathbb{Q}$-Hodge structures, as in \eqref{eq:decomprepeat}, there exists at least one irreducible factor $V_i$ whose algebra of endomorphisms $D_i$ is of type IV in Albert's classification. We then say that the point $s$, or equivalently the corresponding $\mathbb{Q}$-HS, is \textbf{pseudo-CM}.
\end{defn}
\begin{rmk}We note here that all CM-points $s\in S(\bar{\mathbb{Q}})$ of the variation will satisfy the above definition. The term ``pseudo-CM'' reflects the fact that the center of a type IV algebra in Albert's classification is a CM field. We note that the points considered here are far more general, at least in principle, than special points.
\end{rmk}
\subsection{Notational Conventions}
Let $f:X\rightarrow S$ be a G-admissible variation as above and let $s\in S(L)$ with $L\subset \bar{\mathbb{Q}}$.
First of all, note that from the semisimplicity of the category of polarized Hodge structures, we know that we may write \begin{equation}\label{eq:decomprepeat}
V_s=V_1^{m_1}\oplus\ldots \oplus V_r^{m_r},
\end{equation}with $ (V_i,\varphi_i)$ irreducible polarized $\mathbb{Q}$-HS that are non-isomorphic to each other. Let $D_i:=\End(V_i)^{G_{mt}(V_i)}$ be the respective endomorphism algebras so that \begin{center}
$D_s=M_{m_1}(D_1)\times \ldots \times M_{m_r}(D_r)$.
\end{center}
From \ref{propendodr} we know that, assuming the absolute Hodge conjecture, there exists a finite extension $\hat{L}$ of $L$ such that $D_s$ acts on $H^n_{DR}(X_{s,\hat{L}} /\hat{L})$ and that this action is compatible with the comparison isomorphism between algebraic de Rham and singular cohomology. Again assuming the absolute Hodge conjecture, we know from \ref{propdegreebound} that the degree $[\hat{L}:L]$ of the extension is bounded by a bound independent of the point $s$. We assume from now on that $\hat{L}=L$ and return to this issue in the proof of \ref{maintheorem}.
We let $F_i$ denote the center of the algebra $D_i$ for $1\leq i\leq r$ and note that these are number fields due to Albert's classification. We introduce the following notation \begin{itemize}
\item $\hat{E}_s= F_1^{m_1}\times \ldots \times F_r^{m_r}$ the maximal commutative semi-simple algebra of $D_s$,
\item $\hat{F}_i$ the Galois closure of the field $F_i$ in $\mathbb{C}$,
\item $\hat{F}_s$ the compositum of the fields $\hat{F}_i$ together with the field $L$.
\end{itemize}
\subsection{Splittings in cohomology and homology}
Let us assume $f:X\rightarrow S$ is a G-admissible variation as above and let $s\in S(L)$, where $L/K$ is a finite extension. We assume that $s$ is archimedeanly close to the point $s_0$ on $S'$, with respect to a fixed inclusion $L\hookrightarrow \mathbb{C}$. In particular we assume that it is in the image of the inclusion of a punctured unit disc $\Delta^{*}\subset S^{an}_{\mathbb{C}}$ centered at $s_0$.\\
Under the above assumption that $L=\hat{L}$, we know that we have two splittings.
Namely, on the one hand we get a splitting \begin{equation}\label{eq:pseudocmhomsplitting}
H_n(X^{an}_{s,\mathbb{C}},\mathbb{Q})\otimes_\mathbb{Q} \hat{F}_s= \Bigsum{\sigma:\hat{E}_s\rightarrow \mathbb{C}}{ }\hat{W}_{\sigma},
\end{equation}induced from the splitting $\hat{E}_s\otimes_{\mathbb{Q}} \hat{F}_s= \Bigsum{\sigma:\hat{E}_s\rightarrow \mathbb{C}}{} \hat{F}_s^{\sigma}$, where $\hat{F}^\sigma_s$ denotes the field $\hat{F}_s$ viewed as an $\hat{E}_s$-module, with $\hat{E}_s$ acting through multiplication by $\sigma$. We also note that $\hat{E}_s$ acts on $\hat{W}_\sigma$ through the character $\sigma$ as well.
On the other hand, we have a splitting
\begin{equation}\label{eq:pseudocmdrsplitting}
H^{n}_{DR}(X_s/L)\otimes_{L} \hat{F}_s=\Bigsum{\sigma:\hat{E}_s\rightarrow \mathbb{C}}{}\hat{W}^{\sigma}_{DR},
\end{equation}
which once again comes from the above splitting of $\hat{E}_s\otimes_{\mathbb{Q}} \hat{F}_s$. In particular, we note that the action of $\hat{E}_s$ on $\hat{W}^{\sigma}_{DR}$ comes once again via $\sigma$.
\subsubsection{Duality of the splittings}
We start by highlighting how the two splittings interact with one another via the comparison isomorphism\begin{center}
$P^n_{X_s}:H^n_{DR} (X_s/L)\otimes_{L} \mathbb{C}\rightarrow H^n(X^{an}_{s,\mathbb{C}},\mathbb{Q})\otimes_{\mathbb{Q}}\mathbb{C}$.
\end{center}The following lemma is already noted as a property of the splittings by Andr\'e, we include a short proof for the sake of completeness.
\begin{lemma}\label{dualitycmsplit}
For all $\sigma\neq \tau$, if $\omega\in \hat{W}^{\tau}_{DR}$ and $\gamma\in \hat{W}_{\sigma}$ then \begin{center}
$\int_{\gamma} \omega=0$.
\end{center}\end{lemma}
\begin{proof}Let us fix $\sigma\neq \tau$ as above and let $\omega \in \hat{W}^{\tau}_{DR}$ and $\gamma\in \hat{W}_\sigma$.
For all $d\in \hat{E}_s$ we have that $P^n_{X_s}(d\omega)=P^n_{X_s} (\tau(d)\omega)=\tau(d)P^n_{X_s}(\omega)=d\cdot P^n_{X_s}(\omega)$, where the last equality follows from the moreover part of \ref{propendodr}. The algebra $\hat{E}_s$, and in particular its group of invertible elements $\hat{E}_s^{\times}$, acts by definition on $V_s$ as endomorphisms of the Hodge structure. The action of $\hat{E}_s^{\times}$ on the dual space $V_s^{*}$ will thus be the dual of that of $V_s$.
In particular for any $\gamma\in \hat{W}_\sigma$, for any $e\in\hat{E}_s^{\times }$, and for any $\delta\in V_s$, we get that $(e\cdot \gamma) (e\cdot \delta)=\gamma(\delta)$. Taking $\delta=P^n_{X_s}(\omega)$ we get that for all $\gamma \in \hat{W}_\sigma$ and for all $e\in\hat{E}_s^{\times}$\begin{center}
$ \int_{\gamma}^{}\omega=\gamma(P^n_{X_s}(\omega))=(e \cdot \gamma)(e\cdot P^n_{X_s}(\omega))$.
\end{center}But we know that $(e \cdot \gamma)(e\cdot P^n_{X_s}(\omega))= (\sigma (e^{-1}) \gamma ) (\tau(e)P^n_{X_s}(\omega))$, where we used the duality between the actions of $\hat{E}_s^{\times}$ on $V_s$ and $V_s^{*}$. Putting everything together we get that for all $e\in\hat{E}_s^{\times}$ we will have that
\begin{center}
$ \int_{\gamma}^{}\omega= \sigma(e)^{-1}\tau(e) \int_{\gamma}^{}\omega$.
\end{center}Since $\sigma\neq\tau$ we can find such an $e$ with $\sigma(e)\neq \tau(e)$ and the lemma follows.
\end{proof}
\subsection{Involutions and symplectic bases}
To create the relations we want, we will need symplectic bases with particular properties. To construct these, we review some facts about the involutions of the algebras of Hodge endomorphisms and see how they interact with the splittings we have.\\
For the weight $n$ $\mathbb{Q}$-HS given by $V_s$ we denote by $\langle,\rangle $ the symplectic form defined by the polarization on $V_s$. By duality we get a polarized $\mathbb{Q}$-HS of weight $-n$ on the dual space $V_s^{*}:=H_n(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q})$, and we denote the symplectic form given by the polarization again by $\langle,\rangle$. We note that these two symplectic forms are dual.\\
The algebra $D_s$ comes equipped with an involution, which we denote by $d\mapsto d^{\dagger}$, that is defined by the relation \begin{equation}\label{eq:involdef}
\langle d\cdot v,w\rangle = \langle v,d^{\dagger}\cdot w\rangle ,
\end{equation}for all $d\in D_s$ and for all $v,w \in V^{*}_s$, or equivalently for all $v,w\in V_s$.
In the decomposition \eqref{eq:decomprepeat} of $V_s$, or its dual $V^{*}_s$, the polarization on each $V_i$, or $V^{*}_i$ respectively, is given by the restriction of the polarization of $V_s$, or its dual respectively. Therefore the involution $d\mapsto d^{\dagger}$ of $D_s$ restricts to the positive involutions of the respective algebras $D_i$.
The algebra homomorphisms $\sigma:\hat{E}_s\rightarrow \mathbb{C}$ have a convenient description. Writing \begin{center}
$\hat{E}_s= F_1^{m_1}\times \ldots \times F_r^{m_r}$,
\end{center} we let $\pr_{j,l}:\hat{E}_s\rightarrow F_j$, where $1\leq j\leq r$ and $1\leq l\leq m_j$, denote the projection of $\hat{E}_s$ onto the $l$-th factor of $F_j^{m_j}$, which will act respectively on the $l$-th factor of $V_j^{m_j}$ that appears in the decomposition. Then any algebra homomorphism $\sigma:\hat{E}_s\rightarrow \mathbb{C}$ can be written as \begin{equation} \sigma=\tilde{\sigma}\circ \pr_{j,l}
\end{equation}for some $j$ and $l$ as above and some $\tilde{\sigma}:F_j\hookrightarrow\mathbb{C}$. For convenience, from now on we define the notation\begin{equation}\label{eq:charofmax}
\tilde{\sigma}_{j,l}:=\tilde{\sigma}\circ \pr_{j,l}.
\end{equation}
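To fix ideas with a hypothetical minimal example, not arising from our variation: take $r=1$, $m_1=2$ and $F_1=\mathbb{Q}(i)$, so that $\hat{E}_s=\mathbb{Q}(i)\times\mathbb{Q}(i)$.

```latex
% The two embeddings \tilde{\sigma}_{\pm}:\mathbb{Q}(i)\hookrightarrow\mathbb{C},
% i \mapsto \pm i, composed with the projections \pr_{1,1}, \pr_{1,2},
% give the four algebra homomorphisms \hat{E}_s \to \mathbb{C}:
\[
(\tilde{\sigma}_{\pm}\circ\pr_{1,1})(a,b)=\tilde{\sigma}_{\pm}(a),
\qquad
(\tilde{\sigma}_{\pm}\circ\pr_{1,2})(a,b)=\tilde{\sigma}_{\pm}(b).
\]
```

Each of these factors through exactly one of the projections, in accordance with \eqref{eq:charofmax}.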
\begin{lemma}\label{lemmainvolutions}Consider the splitting \eqref{eq:pseudocmhomsplitting}. Then for the subspaces $\hat{W}_\sigma$ the following hold:\begin{enumerate}
\item If $\sigma =\tilde{\sigma}_{j,l}$ then $\hat{W}_\sigma$ is contained in the $l$-th factor of $(V_j^{*})^{m_j}$,
\item Let $\sigma =\tilde{\sigma}_{j,l}$ and let $\tau$ be some non-zero algebra homomorphism $\hat{E}_s\rightarrow \mathbb{C}$. Consider non-zero vectors $v\in \hat{W}_\sigma$ and $w\in \hat{W}_\tau$. If we assume that $\langle v,w\rangle \neq 0$ then one of the following cases holds \begin{enumerate}
\item $\sigma =\tau$ and the algebra $D_j$ of Hodge endomorphisms is of Type I, II or III in Albert's classification, or
\item $\sigma =\bar{\tau}$, where $\bar{(\cdot)}$ denotes complex conjugation, and $D_j$ is of Type IV in Albert's classification.
\end{enumerate}
\end{enumerate}\end{lemma}
\begin{proof}The first part of the lemma is trivial.
For the second part let $v\in \hat{W}_\sigma$ and $w\in \hat{W}_\tau$ be non-zero vectors as above with $\langle v,w\rangle \neq 0$. From the preceding discussion there exists a pair $(j',l')$ for $\tau $ such that $\tau=\tilde{\tau}\circ \pr_{j',l'}$, where $\tilde{\tau}:F_{j'}\hookrightarrow\mathbb{C}$. From the first part of this lemma we also know that $\hat{W}_\tau$ is contained in the $l'$-th factor of $(V_{j'}^{*})^{m_{j'}}$.
The subspaces $V_i^{*}$ of $V_s^{*}$ are symplectic, with their symplectic inner product being the restriction of that of $V_s^{*}$. This immediately implies that $(j,l)=(j',l')$.
For any $d\in \hat{E}_s$ we have that
$\langle d\cdot v,d\cdot w\rangle = \langle \sigma(d) v,\tau(d)w\rangle = \sigma(d)\tau(d) \langle v,w\rangle$. On the other hand using the defining property of the involution we get \begin{equation}\label{eq:involskew}
\langle d\cdot v,d\cdot w\rangle=\langle v,(d^{\dagger}d)\cdot w\rangle =
\tau(d^\dagger d) \langle v,w\rangle .\end{equation}
Since, by assumption $\langle v,w\rangle \neq0$ the above relations imply that for all $d\in\hat{E}_s$ we have\begin{equation}\label{eq:characters1}
\sigma(d)\tau(d)=\tau(d^{\dagger})\tau(d).
\end{equation}
Let $F_j$ be the center of the algebra $D_j$. Then \eqref{eq:characters1} implies that for all $d\in F_j$ \begin{equation}\label{eq:characters2}
\tilde{\sigma}(d)\tilde{\tau}(d)=\tilde{\tau}(d^{\dagger})\tilde{\tau}(d).
\end{equation}In particular, this implies that for all $d\in F_j$ we have that \begin{equation}\label{eq:characters3}\tilde{\tau}(d^{\dagger})=\tilde{\sigma}(d).\end{equation}
If $D_j$ is of Type I in Albert's classification then the involution restricts to the identity and we get trivially that $\tilde{\tau}=\tilde{\sigma}$, and hence also $\sigma=\tau$. So our result follows in this case.
If $D_j$ is of Type II then we have that $F_j$ is a totally real field, $D_j$ is a quaternion algebra over $F_j$ and there exists $a\in D_j$ such that the involution is given by $d^{\dagger }=a d^{*}a^{-1}$ on $D_j$, where $d^{*}=\tr_{D_j/F_j}(d)-d$. Note that for $d\in F_j=Z(D_j)$ we have that $\tr_{D_j/F_j}(d)=2d$, so that $d^{\dagger}=d$ for all $d\in F_j$. Combining these observations with \eqref{eq:characters3} we get that $\tau=\sigma$.
The same argument we just used for the case of Type II algebras works for the case of Type III algebras, though we do not need to introduce any element $a$ as above since the involution in this case is equal to the canonical involution.
Finally, let us assume that $D_j$ is of type IV in Albert's classification. In this case $F_j$ is a CM-field, and the involution is known to restrict to complex conjugation on $F_j$. In other words $d^{\dagger}=\bar{d}$ for $d\in F_j$. This, together with \eqref{eq:characters3}, implies that $\tilde{\sigma}(d)=\tilde{\tau}(\bar{d})$. Since $F_j$ is a CM-field this implies that $\tilde{\sigma}=\bar{\tilde{\tau}}$ and by extension $\sigma =\bar{\tau}$.
\end{proof}
\begin{rmk}
The above lemma shows that the splitting \eqref{eq:pseudocmhomsplitting} of $V_s^{*}$ consists of two types of mutually skew-orthogonal symplectic subspaces. On the one hand, we have the symplectic subspaces $\hat{W}_\sigma$ contained in some $V_j^{*}$ with $D_j$ of Type I, II or III, and on the other hand we have the symplectic subspaces of the form $\hat{W}_{\tau}\oplus \hat{W}_{\bar{\tau}}$, where $\hat{W}_\tau$ is contained in some $V_j^{*}$ with $D_j$ of Type IV. For the second type, note that we also have that $\hat{W}_\tau$ and $\hat{W}_{\bar{\tau}}$ are transverse Lagrangians of these symplectic subspaces.
\end{rmk}
\subsection{Constructing relations}
We return to our original G-admissible variation of $\mathbb{Q}$-HS, restricted to $\Delta^{*}$. We let $s\in S(L)$ be a fixed point which we assume is in $\Delta^{*}$. We then have the totally isotropic local subsystem of rank $h$ over the ring $\mathcal{O}_{S^{an}_{\mathbb{C}}}|_{\Delta^{*}}$\begin{center}
$\mathcal{M}_0:=M_0 R_n(f^{an}_{\mathbb{C}})_{*}(\mathbb{Q})|_{\Delta^{*}}$
\end{center} of the local system $R_n(f^{an}_{\mathbb{C}})_{*}(\mathbb{Q})|_{\Delta^{*}}$, which has rank $\mu:= \dim_\mathbb{Q} V_s$.
We fix a basis $\{\omega_i:1\leq i\leq \mu\}$ of $H^n_{DR}(X/S)$ over some dense open subset $U\subset S$ and a trivialization $\{\gamma_j:1\leq j\leq \mu\}$ of $R_nf^{an}_{*} \mathbb{Q}|_V$ where $V$ is some open analytic subset of $U^{an}$ with $s\in V\subset \Delta^{*}$. We may and do choose these so that the following conditions are satisfied:\begin{enumerate}
\item the matrices of the skew-symmetric forms on $H^n_{DR}(X/S)$ and $R_nf^{an}_{*} \mathbb{Q}$ induced by the polarization written with respect to the basis $\{\omega_i\}$ and trivialization $\{\gamma_j\}$ respectively are both equal to $J_\mu$,
\item $\gamma_1,\ldots, \gamma_h\in \mathcal{M}_0|_V$ and $\gamma_1,\ldots, \gamma_{\mu/2}\in \mathcal{M}^{+}$,
\end{enumerate}where $\mathcal{M}^{+}$ is a maximal totally isotropic local subsystem of $ R_n(f^{an}_{\mathbb{C}})_{*}(\mathbb{Q})|_V$ that contains $\mathcal{M}_0|_V$.
Let us now consider the relative comparison isomorphism\begin{center}
$P^n_{X/S}:H^n_{DR}(X/S)\otimes_{\mathcal{O}_S}\mathcal{O}_{S^{an}_{\mathbb{C}}}\rightarrow R^nf^{an}_{*}\mathbb{Q}_{X^{an}_{\mathbb{C}}} \otimes_{\mathbb{Q}_{S^{an}_{\mathbb{C}} }} \mathcal{O}_{S^{an}_{\mathbb{C}}}$,
\end{center}and restrict it over the set $V$. With respect to the above choices we set $P_{X/S}=\frac{1}{(2\pi i)^n}\big(\int_{\gamma_j}\omega_i\big)$, the matrix of periods of $f$.
Let us write $P_{X/S}=\begin{pmatrix}
\Omega_1& \Omega_2\\
N_1&N_2
\end{pmatrix}$. From \ref{existence}, we know that the first $h$ columns of this matrix have entries that are G-functions. It is among their values at $\xi=x(s)$ that we want to find some relation that reflects the action of $\hat{E}_s$.
\begin{lemma}\label{constpseudocm}Assume that $h\geq 2$ and that for the point $s\in S(L)$ one of the following is true\begin{enumerate}
\item there exists $\tau:\hat{E}_s\rightarrow \mathbb{C}$ such that $h> \dim_{\hat{F}_s} \hat{W}_\tau$, or
\item $s$ is a pseudo-CM point and \begin{center}
$h\geq \min\{\dim\hat{W}_\tau: \hat{W}_\tau\subset V_{i(\tau)} \text{, with } V_{i(\tau)}\text{ of type IV }\}$.
\end{center}
\end{enumerate}
Then, there exists an algebraic relation among the values at $\xi=x(s)$ of the entries of the first $h$ columns of the matrices $\Omega_1$ and $N_1$. Moreover, this relation corresponds to some homogeneous polynomial with coefficients in $\hat{F}_s$ and degree $\leq 2$.
\end{lemma}
\begin{proof} We take cases depending on the interplay between $\mathcal{M}_{0,s}\otimes \hat{F}_s$ and the splitting \eqref{eq:pseudocmhomsplitting}. We also assume that $V_s=H^n(\bar{X}_{s,\mathbb{C}},\mathbb{Q})$ has a decomposition as in \eqref{eq:decomprepeat}.
\textbf{Case 1:} Assume there exists some $\tau :\hat{E}_s\rightarrow \mathbb{C}$ such that the following holds \begin{equation}\label{eq:condicase1}
\bigg(\Bigsum{\underset{\sigma\neq \tau}{\sigma:\hat{E}_s\rightarrow \mathbb{C}}}{}\hat{W}_{\sigma}\bigg)\cap (\mathcal{M}_{0,s}\otimes \hat{F}_s)\neq 0,
\end{equation}then we obtain at least one relation of degree $1$.
Note that for dimension reasons \eqref{eq:condicase1} is satisfied for the $\tau$ of the first condition above.\\
Indeed, let $\gamma \in \Gamma (V,R_n(f^{an}_\mathbb{C})_{*}(\mathbb{Q}))$ be a section such that $\gamma(s)$ belongs to the non-zero intersection in \eqref{eq:condicase1}. From \ref{dualitycmsplit} we get that for all $\omega \in \hat{W}^{\tau}_{DR}$ we have
\begin{equation}\label{eq:case1relationps}
\frac{1}{(2\pi i)^n}\int_{\gamma(s)}\omega=0.
\end{equation}
Writing $\gamma$ as an $\hat{F}_s$-linear combination of the $\gamma_j$ with $1\leq j\leq h$ and $\omega$ as an $\hat{F}_s$-linear combination of the $\omega_i$ with $1\leq i\leq \mu$, we have that \eqref{eq:case1relationps} leads to a linear equation among the values of the G-functions in question at $\xi$.\\
\textbf{Case 2:} Assume that for all $\tau :\hat{E}_s\rightarrow \mathbb{C}$ we have \begin{equation}\label{eq:condicase2}
\bigg(\Bigsum{\underset{\sigma\neq \tau}{\sigma:\hat{E}_s\rightarrow \mathbb{C}}}{}\hat{W}_{\sigma}\bigg)\cap (\mathcal{M}_{0,s}\otimes \hat{F}_s)= 0,
\end{equation}then we want to show that we can create a relation of degree $2$.\\
First of all, we may and do assume from now on that $\dim_{\hat{F}_s}\hat{W}_\sigma\geq h$ for all $\sigma$; otherwise, for dimension reasons, we are in Case 1.
The first step in creating the relations we want is defining symplectic bases with particular properties, which we do in the following claims.\\
\textbf{Claim 1:} There exists a symplectic basis $e_1,\ldots,e_{\mu/2},f_1,\ldots,f_{\mu/2}$ of the symplectic vector space $V^{*}_s\otimes_\mathbb{Q}\hat{F}_s:=H_n(\bar{X}_{s,\mathbb{C}},\mathbb{Q})\otimes \hat{F}_s$ that satisfies the following properties\begin{enumerate}
\item $\langle e_i,e_j\rangle =\langle f_i,f_j\rangle =0$ and $\langle e_i,f_j\rangle=\delta_{i,j}$ for all $i,j$.
\item $e_j=\gamma_j(s)$ for $1\leq j\leq h$.
\item There exists $\tau:\hat{E}_s\rightarrow \mathbb{C}$ such that\begin{enumerate}
\item $\hat{W}_\tau$ is contained in $V_{i(\tau)}^{*}\otimes \hat{F}_s$, where $V_{i(\tau)}^{*}$ is some irreducible sub-Hodge Structure of $V^{*}_s$ which is of Type IV, and
\item $f_j\in \hat{W}_\tau$ for $1\leq j\leq h$,
\item $\dim_{\hat{F}_s} \hat{W}_\tau=h$.
\end{enumerate}
\end{enumerate}
\begin{proof}[Proof of Claim 1]
From \ref{maxisotropic} we know that choosing any basis of local sections $\gamma_j(s)$ of $\mathcal{M}_{0,s}$, its vectors will satisfy $\langle \gamma_i(s),\gamma_j(s)\rangle =0$ for all $i,j$. Assume that we have fixed one such basis as above and fix an indexing of the set $\{\sigma :\sigma:\hat{E}_s\rightarrow \mathbb{C} \}=\{ \sigma_i:1\leq i\leq m(s)\}$. We can then write uniquely \begin{equation} \gamma_j(s)= w_{j,1}+\ldots +w_{j,m(s)}
\end{equation}where $1\leq j\leq h$ and $w_{j,i}\in \hat{W}_{\sigma_i}$.
By assumption the $\mathbb{Q}$-HS $V_s$ is pseudo-CM, therefore there exists $\tau$ such that $\hat{W}_\tau$ is as we want in the claim and the same holds for $\hat{W}_{\bar{\tau}}$. Without loss of generality assume that $\bar{\tau}=\sigma_1$. Since we are in Case 2, we also know that $\dim_{\hat{F}_s}\hat{W}_{\bar{\tau}}\geq h$ and that \eqref{eq:condicase2} holds. From \eqref{eq:condicase2} we get that the vectors $w_{j,1}\in\hat{W}_{\bar{\tau}}$ are in fact linearly independent.
By \ref{lemmainvolutions} we know that $\hat{W}_{\bar{\tau}}\oplus \hat{W}_{\tau}$ is a symplectic vector space with $\hat{W}_{\tau}$ and $\hat{W}_{\bar{\tau}}$ being transverse Lagrangians.
Let $v_j$ with $1\leq j\leq \dim_{\hat{F}_s}\hat{W}_{\bar{\tau}}$ be a basis of $\hat{W}_{\bar{\tau}}$ with $v_j=w_{j,1}$ for $1\leq j\leq h$. We complete this to a symplectic basis $\{v_i,f_j\}$, with $1\leq i,j\leq \dim_{\hat{F}_s}\hat{W}_{\bar{\tau}}$, of $\hat{W}_{\bar{\tau}}\oplus \hat{W}_{\tau}$ such that the $f_j$ form a basis of $\hat{W}_{\tau}$. Then we have, by construction and by \ref{lemmainvolutions}, that
\begin{equation}
\langle \gamma_i(s),f_j\rangle =\delta_{i,j}
\end{equation}for all $1\leq i,j\leq h$.
Therefore, setting $e_i :=\gamma_i(s)$ for $1\leq i\leq h$ the result follows by extending the set of vectors $\{e_i, f_i:1\leq i\leq h\}$ to a symplectic basis of $V_s^{*}\otimes \hat{F}_s$. Finally, note that the $\tau$ was arbitrary with $\hat{W}_{\tau}$ being contained in a type IV sub-Hodge structure of $V_s^{*}$. Therefore, by the assumption that $h\geq \min\{\dim\hat{W}_\tau: \hat{W}_\tau\subset V_{i(\tau)}^{*} \text{, with } V_{i(\tau)}^{*}\text{ of type IV }\}$ in our lemma and the assumption in this second case that $\dim\hat{W}_\sigma\geq h$ for all $\sigma$ we get that we may find such a $\tau$ that also satisfies the last condition of our claim.
\end{proof}
From now on we fix the $\tau$ we found in Claim 1. Having created a symplectic basis for $H_n(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q})\otimes \hat{F}_s$ we want to construct a symplectic basis of $H^{n}_{DR}(X_s)\otimes_{L} \hat{F}_s$ in a way that lets us take advantage of \ref{dualitycmsplit}.\\
\textbf{Claim 2:} There exists a symplectic basis $e^{DR}_1,\ldots,e^{DR}_{\mu/2}, f^{DR}_1,\ldots, f^{DR}_{\mu/2}$ of $H^n_{DR}(X_s)\otimes_{L} \hat{F}_s$ such that the following holds \begin{enumerate}
\item for all $j$ we have that $e^{DR}_j\in \hat{W}_{DR}^{\sigma}$ for some $\sigma \neq \tau$,
\item for $1\leq j\leq h$ we have that $f^{DR}_j\in \hat{W}^\tau_{DR}$,
\item for $h+1\leq j\leq \mu/2$ we have that $f^{DR}_j\in \hat{W}^{\sigma}_{DR}$ for some $\sigma\neq \tau$.
\end{enumerate}
\begin{proof}[Proof of Claim 2]We start by noting that, due to our assumption that $L=\hat{L}$, the results of \ref{lemmainvolutions} carry over, via duality under $P^{n}_{X_s}$, to the splitting \eqref{eq:pseudocmdrsplitting}. In particular, via duality we get that for $\sigma:\hat{E}_s\rightarrow \mathbb{C}$ the subspaces $\hat{W}_{DR}^{\sigma}$ are once again divided into two categories\begin{itemize}
\item $\hat{W}_{DR}^{\sigma}$ that are symplectic subspaces, corresponding to $\hat{W}_\sigma$ that are contained in simple sub-Hodge structures of $V_s^{*}$, after these are tensored with $\hat{F}_s$, that are of Type I, II or III, and
\item $\hat{W}_{DR}^{\sigma}$ that are isotropic subspaces appearing in pairs such that $\sigma$ and $\bar{\sigma}$ are both algebra homomorphisms $\hat{E}_s\rightarrow \mathbb{C}$ and $\hat{W}_{DR}^{\sigma}\oplus\hat{W}_{DR}^{\bar{\sigma}}$ is a symplectic subspace. These correspond via duality to the $\hat{W}_\sigma$ that are contained in simple sub-Hodge structures of $V_s^{*}$, again after these are tensored with $\hat{F}_s$, that are of Type IV.
\end{itemize}
With that in mind, for each $\sigma$ we pick vectors $e^{\sigma}_i$ so that\begin{itemize}
\item the $e^{\sigma}_i$ are the basis of a Lagrangian subspace of $\hat{W}_{DR}^{\sigma}$ if we are in the first case above, so that in this case $1\leq i\leq \frac{1}{2}\dim_{\hat{F}_s}\hat{W}_{DR}^{\sigma}$,
\item the $e^{\bar{\tau}}_i$ are a basis of $\hat{W}_{DR}^{\bar{\tau}}$ of our fixed $\tau$, and
\item in the second case above, for the pairs $(\sigma,\bar{\sigma})$ with $\sigma\neq \tau,\bar{\tau}$, we pick one member $\sigma$ of each pair and let $e^{\sigma}_i$ be a basis of $\hat{W}_{DR}^{\sigma}$.
\end{itemize}
Let $e^{DR}_j$, with $1\leq j\leq \mu/2$, be any indexing of the set of all the $e^{\sigma}_i$ above. Together these span a Lagrangian subspace of $H^n_{DR}(X_s)\otimes_{L} \hat{F}_s$. In a similar manner, by the above remarks derived from \ref{lemmainvolutions}, we can construct a basis $f^{DR}_j$ of a Lagrangian transverse to the one spanned by the $e^{DR}_j$, with the $f^{DR}_j$ also elements of the various $\hat{W}_{DR}^\sigma$. It is also straightforward from the above that we may pick $f^{DR}_1,\ldots,f^{DR}_h\in \hat{W}^{\tau}_{DR}$.
\end{proof}
\subsubsection{Some cleaning up}
The technical conditions
\begin{equation}\label{eq:conditionoflemma1}
\exists\tau:\hat{E}_s\rightarrow \mathbb{C} \text{ such that } h> \dim_{\hat{F}_s} \hat{W}_\tau
\end{equation}
and
\begin{equation}\label{eq:conditionoflemma2}
h\geq \min\{\dim\hat{W}_\tau: \hat{W}_\tau\subset V_{i(\tau)} \text{, with } V_{i(\tau)}\text{ of type IV }\}.
\end{equation}that appear in \ref{constpseudocm} are by no means aesthetically pleasing. We dedicate this short section to remedying this fact by proving the following lemma.
\begin{lemma}\label{remedy}
Condition \eqref{eq:conditionoflemma1} is equivalent to the condition\begin{center}
$h> \frac{\dim_\mathbb{Q} V_j}{[Z(D_j):\mathbb{Q}]}$ for some $j$,
\end{center}and condition \eqref{eq:conditionoflemma2} is equivalent to the condition\begin{center}
$h\geq \min\{ \frac{\dim_\mathbb{Q} V_i}{[Z(D_i):\mathbb{Q}] } : i \text{ such that } D_i=\End_{HS}(V_i) \text{ is of type IV } \}.$
\end{center}
\end{lemma}
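For orientation, consider the following hypothetical CM-type instance, which is purely illustrative and not tied to our variation:

```latex
% Hypothetical instance: an irreducible factor V_i of type IV with
% D_i = F_i a CM field satisfying [F_i:\mathbb{Q}] = \dim_\mathbb{Q} V_i.
% Then
\[
\frac{\dim_\mathbb{Q} V_i}{[Z(D_i):\mathbb{Q}]}
=\frac{[F_i:\mathbb{Q}]}{[F_i:\mathbb{Q}]}=1,
\]
% so the reformulated version of \eqref{eq:conditionoflemma2} holds as
% soon as h >= 1.
```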
To prove this we work in greater generality with modules of semisimple algebras over $\mathbb{Q}$. The material in this section is definitely not new but we include it for the sake of completeness of our exposition.\\
Let us fix some notation. We consider a $\mathbb{Q}$-HS $V$ with $\mu:=\dim_\mathbb{Q} V$ that decomposes as $V=V_1^{m_1} \oplus \cdots\oplus V_r^{m_r}$. We write $D= M_{m_1} (D_1) \oplus \cdots \oplus M_{m_r }(D_r)$ for the algebra of Hodge endomorphisms of $V$, where $D_i$ is the algebra of Hodge endomorphisms of $V_i$. For each $i$ we let $F_i:= Z(D_i)$ be the center of $D_i$ and $f_i:=[F_i:\mathbb{Q}]$. Finally, we let $\hat{F}$ be the Galois closure of the compositum of the fields $F_i$ and $\hat{E}:= F_1^{m_1} \oplus \cdots\oplus F_r^{m_r}$ be the maximal commutative semisimple algebra of $D$.
For the non-trivial homomorphisms of algebras $\sigma : \hat{E} \rightarrow \hat{F}$ we write $\sigma=\tilde{\sigma}_{j,l}$ as we did earlier. The above result then follows from the following lemma.
\begin{lemma} The $\hat{E}\otimes_\mathbb{Q}\hat{F}$-module $V\otimes_\mathbb{Q} \hat{F}$ has a decomposition as an $\hat{E}\otimes_\mathbb{Q}\hat{F}$-module as
\begin{center}
$V\otimes_\mathbb{Q} \hat{F}= \Bigsum{\sigma:\hat{E}\rightarrow \hat{F}}{ }\hat{W}_{\sigma}$,
\end{center}
where $\hat{W}_\sigma$ are $\hat{F}$-subspaces of $V\otimes_\mathbb{Q} \hat{F}$ on which $\hat{E}\otimes_{\mathbb{Q}}\hat{F}$ acts via multiplication by $\sigma$. Moreover, $\dim_{\hat{F}} \hat{W}_\sigma = \frac{\dim_\mathbb{Q} V_{i(\sigma)}}{f_{i(\sigma)}}$ where $i(\sigma)\in\{1,\ldots,r\}$ is such that $\sigma =\tilde{\sigma}_{i(\sigma),l}$ for some $l$ and $\tilde{\sigma}$ with our previous notation.
\end{lemma}
\begin{proof}
First of all, note that for every $i$ we trivially have $F_i\hookrightarrow \End_\mathbb{Q} V_i$. Therefore $V_i$ is isomorphic, as an $F_i$-module, to \begin{equation}\label{eq:isomfimodules}
V_i\simeq F_i^{t_i}
\end{equation}for some $t_i$. Counting dimensions of these as $\mathbb{Q}$-vector spaces we get that $t_i= \frac{\dim_\mathbb{Q} V_i}{f_i}$.
Tensoring both sides of \eqref{eq:isomfimodules} by $\otimes_\mathbb{Q}\hat{F}$ we get that $V_i\otimes_{\mathbb{Q}} \hat{F}\simeq (F_i\otimes_{\mathbb{Q}} \hat{F})^{t_i}$ as $F_i$-modules. Now note that since $\hat{F}$ is a Galois extension that contains $F_i$ we have that $ F_i\otimes_{\mathbb{Q}} \hat{F} \simeq\Bigsum{\tilde{\sigma}:F_i\rightarrow \hat{F}}{ }\hat{F}^{\tilde{\sigma}}$, where $\hat{F}^{\tilde{\sigma}}$ is just $\hat{F}$ viewed as an $F_i$-module via the action of the embedding $\tilde{\sigma}:F_i\hookrightarrow \hat{F}$. Combining the above we get\begin{center}
$V_i\otimes_{\mathbb{Q}} \hat{F}\simeq \Bigsum{\tilde{\sigma}:F_i\rightarrow \hat{F}}{ }\big(\hat{F}^{\tilde{\sigma}}\big)^{t_i}$.
\end{center}
The result now follows trivially.
\end{proof}
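The splitting $F_i\otimes_{\mathbb{Q}} \hat{F}\simeq \bigoplus_{\tilde{\sigma}}\hat{F}^{\tilde{\sigma}}$ used in the proof is an instance of the Chinese remainder theorem: writing $F_i=\mathbb{Q}[x]/(m)$, the minimal polynomial $m$ splits into distinct linear factors over the Galois closure $\hat{F}$, one per embedding. A minimal computational sketch (for the hypothetical toy case $F_i=\hat{F}=\mathbb{Q}(i)$, chosen by us for illustration only):

```python
from sympy import symbols, I, factor, expand

x = symbols('x')

# F = Q(i) has minimal polynomial m(x) = x^2 + 1 over Q.
m = x**2 + 1

# Over the Galois closure F-hat = Q(i) itself, m splits into distinct
# linear factors, one per embedding sigma : F -> F-hat.
f = factor(m, extension=I)

# By CRT, F (x)_Q F-hat = F-hat[x]/(x - i) + F-hat[x]/(x + i) = F-hat + F-hat,
# one summand F-hat^sigma per embedding sigma, matching the lemma's splitting.
assert expand(f - (x - I)*(x + I)) == 0
```

Counting $\hat{F}$-dimensions on both sides recovers the formula $\dim_{\hat{F}}(F_i\otimes_{\mathbb{Q}}\hat{F})=[F_i:\mathbb{Q}]$, i.e. one copy of $\hat{F}$ per embedding.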
\section{Non-trivial relations}
The relations created in \ref{constpseudocm} were created after fixing a place $v\in \Sigma_{L,\infty}$, corresponding to an inclusion $i_v:L\rightarrow \mathbb{C}$. This is because we assumed that $s$ is archimedeanly close to $s_0$, with respect to this fixed embedding $L\hookrightarrow \mathbb{C}$.
\begin{defn}
Let $s\in S(L)$ for some $L/K$ and set $\xi=x(s)\in L$. For a place $v\in \Sigma_{L}$ we say that $s$ is $v$-adically close to $s_0$ if $|\xi|_v < \min\{1,R_v(\vec{y})\}$, where $R_v(\vec{y}):=R_{v}(y_1,\ldots,y_{h\mu})$ and the $y_j$ are the G-functions we had earlier.
\end{defn}
We want to create relations among the values of the G-functions $y_i\in K[[x]]$ at $\xi =x(s)$ for all places $v\in \Sigma_{L,\infty}$. In order to create these we will need the following technical lemma, following the exposition in Ch.X, \S $3.1$ of \cite{andre1989g}. We fix a priori the matrix \begin{equation}\label{eq:matrixofgfunctions}
G=\begin{pmatrix}
y_1&\cdots& y_h\\
\vdots&\ddots& \vdots\\
y_{h\mu-h+1}&\cdots& y_{h\mu}
\end{pmatrix}\in M_{\mu\times h}(K [[x]])
\end{equation}
Let $\iota :K \hookrightarrow \mathbb{C}$ be an arbitrary complex embedding of $K$. We then have the complex Taylor series $\iota (y_i)$. We also let $G_{\iota}$ be the matrix defined analogously to $G$ with the $y_i$ replaced by the $\iota(y_i)$.
\begin{lemma}\label{changeofplace}For any $\iota$ as above the matrix $G_\iota$ is again the matrix that consists of the entries in the first $h$ columns of a period matrix with respect to the same basis of local sections of $H^n_{DR}(X/S)$ and to some local frame of the local system $R_n(f^{an}_{\iota,\mathbb{C}})_{*}(\mathbb{Q}_{X^{an}_\mathbb{C}})$.
\end{lemma}
\begin{rmk} Here by $f^{an}_{\iota,\mathbb{C}}$ we denote the analytification of the morphism $f_{\iota,\mathbb{C}}$, where $f_{\iota,\mathbb{C}}$ is the morphism induced from $f:X\rightarrow S$ via the base change given by $\iota :\spec\mathbb{C}\rightarrow \spec K$.\end{rmk}
\begin{proof} This follows essentially from the proof of Theorem 2 in Ch.X, \S 4.1 of \cite{andre1989g}, which constitutes \S 4.4 of the same chapter. We review the main parts we need. We let $S_1 = S\cup \{s_0\}$ and write $f_1:X_1\rightarrow S_1$ for the pullback of $f':X'\rightarrow S'$ via $S_1\hookrightarrow S'$. We note that the pair $(S_1,X_1)$ will be the pair ``$(S',X')$'' in the notation of loc. cit.\\
\textbf{Step 1: A short review of the construction.} For each point $Q\in Y^{[n]}$ we can find an affine open subset $U^Q$ of $X_1$ admitting algebraic coordinates $x_{Q,1},\ldots,x_{Q,n+1}$ such that $Y_i\cap U^Q=Z(x_{Q,i})$ and the local parameter $x$ of $S_1$ at $s_0$ lifts to $x=x_{Q,1}\cdots x_{Q,n+1}$. To ease our notation we write simply $x_i$ for $x_{Q,i}$. We also fix the inclusion $i_Q:U^Q\rightarrow X_1$.
Then loc.cit. describes a horizontal map $T_Q:H^n_{DR}(X_1/S_1(\log Y)) \rightarrow K[[x]]$.
This map $T_Q$ also has an analytic description. To define it one needs some cycles ${i_Q}_{*}\gamma_Q$. We briefly review the definition of these cycles. For each $z\in \Delta$ we have the cycle $\gamma_{Q,z}\in H_{n} ((U^{Q}_z)^{an},\mathbb{Z})$ defined by the relations $|x_2|=\ldots =|x_{n+1} |=\epsilon$ and $x_1x_2\cdots x_{n+1}=x(z)$, where $\epsilon>0$ is small. These cycles glue together to define a section $\gamma_Q \in H^{0}( \Delta, R_{n}((f_1)_{\Delta}\circ i_Q )_{*} \mathbb{Q} )$ which we can push-forward to a cycle ${i_Q}_{*} \gamma_Q \in H^{0} (\Delta , R_n((f_1)_{\Delta} )_{*} \mathbb{Q})$.
From this it follows that $({i_Q}_{*} \gamma_Q)_z\in H_{n} (X^{an}_z,\mathbb{Q})$ is also invariant by the action of $\pi_1(\Delta^{*},z)$. In fact from the exposition in loc. cit. we know that the cycles $({i_Q}_{*} \gamma_Q)_z$ span the fiber $M_0R_n(f^{an}_\mathbb{C})_{*} (\mathbb{Q})_z$ for $z\in\Delta^{*}$.
From the analytic description of $T_Q$ we get that for $1\leq i\leq h\mu $ there exists a point $Q\in Y^{[n]}(\bar{\mathbb{Q}})$ such that the entry $y_i|_{\Delta}$ is equal to \begin{center}
$T_Q(\omega) =\frac{1}{(2\pi i)^n} \int_{{i_Q}_{*}\gamma_Q}^{} \omega$
\end{center}for some $\omega\in H^n_{DR}(X_1/S_1(\log Y))$, where $\Delta$ is a unit disk centered at $s_0$.\\
To be able to work over $K$, instead of $\bar{\mathbb{Q}}$ as loc. cit. does, we assume without loss of generality that all of the above, i.e. the points $Q$, the algebraic coordinates, and the coefficients of the $y_{i}$, are actually defined over our original field $K$. To achieve this we might have to base change everything a priori, i.e. $f:X\rightarrow S$ and $f':X'\rightarrow S'$, by some fixed finite extension $\hat{K}$ of our original field $K$. This does not affect our results, so we may and do assume it.\\
\textbf{Step 2: Changing embeddings.} Implicit in the definition of the cycles ${i_Q}_{*}\gamma_Q$ is the fixed embedding $K\hookrightarrow\mathbb{C}$. Shifting our point of view to the embedding $\iota:K\hookrightarrow \mathbb{C}$ we get a similar picture. Given a $K$-variety $Z$ we define $Z_{\iota}:=Z\times_{\spec K,\iota}\spec \mathbb{C}$ the base change of $Z$ via $\iota:\spec \mathbb{C}\rightarrow \spec K$ and similarly for the base change of a morphism $\phi:Z_1\rightarrow Z_2$ between $K$-varieties. In other words we suppress reference to the original embedding $K\hookrightarrow\mathbb{C}$ but keep track of the new embeddings.
The algebraic coordinates $x_1,\ldots ,x_{n+1}$ on $U^Q$ pullback to algebraic coordinates $\iota^{*}x_1,\ldots,\iota^{*}x_{n+1}$ on $U^Q_\iota$. We write $x_{i,\iota}$ for $\iota^{*} x_i$ and also consider a unit disk $\Delta_\iota \subset (S_1)_{\iota}^{an}$ centered at $s_0$.
Once again we have that $x_{1,\iota}\cdots x_{n+1,\iota}=x_{\iota}$. We define the cycles $\gamma_{Q,\iota}$ similarly:\\
for $z\in \Delta_{\iota}$ we let $\gamma_{Q,\iota,z}\in H_{n} ((U^Q_{\iota})^{an},\mathbb{Z})$ be defined by $|x_{2,\iota}|=\ldots =| x_{n+1,\iota}|=\epsilon$ and $x_{1,\iota}x_{2,\iota}\cdots x_{n+1,\iota}=x_{\iota}(z)$. Once again these glue together to give cycles ${i_{Q,\iota}}_{*}\gamma_{Q,\iota}\in H^{0} (\Delta_\iota, R_{n}((f_1)_{\iota,\Delta_\iota})^{an}_{*}\mathbb{Q})$. \\
The cycles $({i_{Q,\iota}}_{*}\gamma_{Q,\iota })_z$, for $Q$ varying in the set $Y^{[n]}$, will span the fiber of the local system $M_0 R_n(f^{an}_{\iota} )_{*} (\mathbb{Q}_{X^{an}_{\iota, \mathbb{C}}})_z $ for $z\in{\Delta_{\iota}^{*}} $. This follows from the exposition in loc.cit. since the proof does not depend on the embedding $K\hookrightarrow\mathbb{C}$.
Among these we may choose a frame of $M_0R_n(f^{an}_{\iota} )_{*} (\mathbb{Q}_{X^{an}_{\iota, \mathbb{C}}})|_{V}$ and then extend that to a frame of $R_n(f^{an}_{\iota} )_{*} (\mathbb{Q}_{X^{an}_{\iota, \mathbb{C}}})|_{V}$, where $V\subset \Delta_\iota^{*}$ is some open subset of $\Delta_{\iota}^{*}$. We thus get a relative period matrix $P_1$ of the morphism $f$. Finally, Remark 1 page 21 of \cite{andre1989g} together with the exposition in the aforementioned proof show that in fact $G_1=G_\iota$, where $G_1$ is the matrix that consists of the first $h$ columns of $P_1$, and the result follows.
\end{proof}
\subsubsection{Construction of the actual relations}
Let $s\in S(L)$ be a point of the variation satisfying either of the conditions of \ref{constpseudocm}. We assume that $s$ is $v_0$-adically close to $s_0$ for some fixed $v_0\in \Sigma_{L,\infty}$. Considering the embedding $i_{v_0}:L\rightarrow \mathbb{C}$, which we drop from the notation from now on, writing just $L\hookrightarrow \mathbb{C}$, the construction of \ref{constpseudocm} goes through.
We consider now $G$, as in \eqref{eq:matrixofgfunctions} above, to be the matrix of G-functions created with respect to that embedding. For any other place $v\in \Sigma_{L,\infty}$ such that $s$ is $v$-adically close to $s_0$ we repeat the process of \ref{constpseudocm}, this time replacing $K$ by $i_v(K)$, $L$ by $i_v(L)$, and $X_s$ by $X_s\times_{L}i_v(L)$. Thanks to \ref{changeofplace} we may choose trivializations so that the corresponding $\mu\times h$ matrix of G-functions we are interested in is $G_{i_v}$.
As a result, for any such archimedean place $v$ we get a polynomial $q_v$ with coefficients in $L$ such that $i_v(q_v)(i_v(y_1)(i_v(\xi)),\ldots,i_v(y_{h\mu})(i_v(\xi)) )=0$. We let \begin{equation}\label{eq:actualrelation}q=
\prod_{\substack{v\in \Sigma_{L,\infty}\\ |\xi|_v<\min\{1,R_v(\vec{y})\}}}q_v.\end{equation}
The relation we were looking for is the one coming from this polynomial. This relation holds $v$-adically for all archimedean places for which $|\xi|_v<\min\{1,R_v(\vec{y})\}$, by construction.
Later on, we describe conditions that guarantee that $s$ cannot be $v$-adically close to $s_0$ for $v\in \Sigma_{L,f}$. This will, effectively, make \eqref{eq:actualrelation}, or to be more precise the relation it induces at $\xi$, global in that case.
Leaving the proof of globality for later, we note that the relation induced from \eqref{eq:actualrelation} satisfies the other key property we want. Namely, we have the following lemma.
\begin{lemma}\label{proofnottrivial}
The relations created above are non-trivial, assuming that the generic special Mumford-Tate group of our variation is $Sp(\mu,\mathbb{Q})$.
\end{lemma}
\begin{proof}This follows by comparing the relations in \eqref{eq:case1relationps} and \eqref{eq:case2final} with the polynomials described in \ref{trivialrelationsfinal}. For Case $2$ above, it is easier to see the non-triviality of the relations in question by looking at \eqref{eq:relationsp2-4} instead of \eqref{eq:case2final}.
\end{proof}
\section{Review on the action of the inertia group}
Here we review some notions about the action of the inertia group on \'etale cohomology of varieties over local fields. We then apply these results to our case of interest, namely G-admissible variations of $\mathbb{Q}$-HS.
\subsection{Grothendieck's monodromy Theorem}
Let $X/K$ be a smooth projective variety with $K$ a local field whose residue field has characteristic $p$. We let $G_K:=Gal(K_s/K)$ be the absolute Galois group of the field $K$, $I_K\leq G_K$ be its inertia subgroup, and we also let $X_{K_s}=X\times_K K_s$. We have a natural action of these groups, which we denote by \begin{center}
$\rho_l:G_K\rightarrow Aut (H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l))$,
\end{center}on the \'etale cohomology groups of $X_{K_s}$ for $l\neq p$. This action for the inertia group is described by the following classical theorem of Grothendieck.
\begin{theorem}[\cite{serretate}]\label{unipotentaction}
Let $X$ be a smooth projective variety over $K$. Then the inertia group $I_K$ acts quasi-unipotently on the \'etale cohomology group $H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l)$.
\end{theorem}
From this we get that there exists a finite field extension $L/K$ for which the inertia group $I_L$ acts unipotently on $H^i_{\acute{e}t}(X_{K_s},\mathbb{Q}_l)$. The unipotency of this action can be described more explicitly and provides an ascending filtration, called the \textbf{monodromy filtration}, $M_{\bullet}$ of $H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l)$. We present a short review of these facts here.
Let us choose a uniformizer $\varpi\in\mathcal{O}_L$ and consider, for each $n\geq 0$, the map \begin{center}
$t_{l,n}:I_L\rightarrow \mu_{l^n}$
\end{center}which is defined by $\sigma (\varpi^{\frac{1}{l^n}})=t_{l,n}(\sigma)\varpi^{\frac{1}{l^n}}$. We then define the map\begin{center}
$t_l:I_L\rightarrow \mathbb{Z}_l(1)$,
\end{center}as the inverse limit of the maps $t_{l,n}$.
Then there exists a nilpotent map, called the monodromy operator, $N: H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l)(1)\rightarrow H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l)$ such that for all $\sigma \in I_L$ we have $\rho_l(\sigma)=\exp(N t_l(\sigma))$.
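Since $N$ is nilpotent, the exponential series $\exp(N t_l(\sigma))$ is a finite sum, and the resulting operator is unipotent. A toy illustration in exact arithmetic (the $3\times 3$ Jordan block is our own choice, not an object from the text):

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp_nilpotent(N):
    """exp(N) for a nilpotent n x n matrix N: the series stops at N^{n-1}."""
    n = len(N)
    result = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    term = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(1, n + 1):               # after this, term = N^k / k! (eventually 0)
        term = mat_mul(term, N)
        term = [[x / k for x in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# An illustrative nilpotent "monodromy operator".
N = [[Fraction(0), Fraction(1), Fraction(0)],
     [Fraction(0), Fraction(0), Fraction(1)],
     [Fraction(0), Fraction(0), Fraction(0)]]

U = mat_exp_nilpotent(N)

# U is unipotent: (U - 1)^3 = 0, mirroring the quasi-unipotence in the theorem.
V = [[U[i][j] - Fraction(int(i == j)) for j in range(3)] for i in range(3)]
V3 = mat_mul(mat_mul(V, V), V)
assert all(x == 0 for row in V3 for x in row)
```

In particular the action of every $\sigma\in I_L$ is a polynomial in $N$, which is the reason it preserves the filtration constructed below.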
\subsubsection*{The monodromy filtration}
From the above operator $N$ we can construct the monodromy filtration on $H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l)$ written as
\begin{center}
$0=M_{-i-1}\subset M_{-i}\subset\ldots\subset M_{i-1}\subset M_i=H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l).$
\end{center}We also define the $j$-th graded quotient of the filtration to be $Gr_{j}^{M}:=M_j/M_{j-1}$.
We record the main properties of the monodromy filtration in the following lemma.
\begin{lemma}\label{monoprop}
Let $M_{\bullet}$ be the above monodromy filtration. Then the following hold\begin{enumerate}
\item $NM_j(1)\subset M_{j-2}$ for all $j$,
\item the map $N^{j}$ induces an isomorphism $Gr_j^{M}(j)\overset{\cong}{\rightarrow} Gr_{-j}^{M}$ for all $j$,
\item the monodromy filtration is the unique ascending filtration satisfying the above two properties, and
\item the inertia group $I_L$ acts trivially on $Gr_{j}^{M}$ and $\big(H^{i}_{\acute{e}t}(X_{K_s},\mathbb{Q}_l)\big)^{I_L}\subset M_0$.
\end{enumerate}
\end{lemma}
\begin{proof}
This follows from \cite{deligneweil2}, Prop. $1.6.1$ and the above discussion.
\end{proof}
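Properties (1) and (2) of \ref{monoprop} are easy to verify by hand when $N$ is put in Jordan form: a Jordan block of size $s$ contributes basis vectors of weights $s-1, s-3, \ldots, -(s-1)$, and $M_j$ is spanned by the vectors of weight at most $j$. A numerical sanity check of both properties for an illustrative Jordan type of our own choosing:

```python
import numpy as np

def jordan_nilpotent(sizes):
    """Direct sum of nilpotent Jordan blocks, together with the weight
    of each basis vector in the associated monodromy filtration."""
    n = sum(sizes)
    N = np.zeros((n, n), dtype=int)
    weights = []
    pos = 0
    for s in sizes:
        for i in range(s):
            weights.append((s - 1) - 2 * i)   # v, Nv, N^2 v, ... lose weight 2 per step
            if i + 1 < s:
                N[pos + i + 1, pos + i] = 1   # N sends the i-th vector to the next
        pos += s
    return N, np.array(weights)

# Illustrative Jordan type (our choice, not from the text).
N, w = jordan_nilpotent([3, 2, 1])
wmax = int(np.abs(w).max())

for j in range(-wmax, wmax + 1):
    # Property 1: N(M_j) is contained in M_{j-2} (weights drop by exactly 2 under N).
    for col in range(N.shape[0]):
        if w[col] <= j:
            for r in np.nonzero(N[:, col])[0]:
                assert w[r] <= j - 2
    # Property 2: dim Gr_j = dim Gr_{-j}, as forced by N^j : Gr_j -> Gr_{-j}.
    assert np.sum(w == j) == np.sum(w == -j)
```

Uniqueness, property (3), then follows because any filtration satisfying (1) and (2) must assign these same weights to a Jordan basis.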
\subsubsection{Filtrations and endomorphisms }
In what follows we will need the following lemma.
\begin{lemma}\label{filtrationsandendomorphisms} Let $k$ be a field of characteristic $0$ and let $A=A_1\oplus \ldots\oplus A_r$ be a semisimple algebra over $k$, where the $A_i$ are its simple summands,
and assume that $A\hookrightarrow (\End( V))^{N}$, the subalgebra of endomorphisms commuting with a nilpotent endomorphism $N$ of the finite-dimensional $k$-vector space $V$.
Let $W_{\bullet}$ be the ascending filtration of $V$ defined by $N$ and let $h_i:=\dim_{k} \gr_{i}^{W_{\bullet}}$. Then for each $1\leq i\leq r$ there exists $j(i)$ with \begin{center}
$ A_i \hookrightarrow \End ( \gr_{j(i)}^{W_{\bullet} } ) \simeq M_{h_{j(i)}} (k)$.
\end{center}
\end{lemma}
\begin{proof}We assume without loss of generality that $i=1$ and proceed by induction on the degree $n+1$ of nilpotency of $N$, and hence the length $2n$ of the filtration $W_{\bullet}$,\begin{center}
$0\subset W_{-n} \subset W_{-n+1} \subset \ldots \subset W_{n-1} \subset W_{n} = V$.
\end{center}
Let $\psi_1: \End ( V)^{N}\rightarrow \End(W_{-n})$ be the homomorphism of $k$-algebras defined by $F\mapsto F|_{W_{-n}}$. Since the algebra $A_1$ is simple, one of two things happens\begin{enumerate}
\item $\ker\psi_1 \cap A_1= \{0\}$, in which case we are done, or
\item $A_1\subset \ker\psi_1$.
\end{enumerate}
From now on we assume that we are in the \underline{second case}.\\
Let $N_{1}$ be the nilpotent endomorphism induced by $N$ on the quotient $W_{n-1}/W_{-n}$. We note that the degree of nilpotency of $N_{1}$ is $n$, i.e. it has dropped by $1$. We then make the following
\begin{claim} There is a natural embedding of $k$-algebras \begin{center}
$A_1 \hookrightarrow \End (W_{n-1}/W_{-n})^{N_{1}}$.
\end{center}\end{claim}
\begin{proof}[Proof of the Claim] First of all, note that we have the map\begin{center}
$\phi_1: \ker\psi_1 \rightarrow \End (W_{n-1}/W_{-n})^{N_{1}}$,
\end{center}given by $F\mapsto F|_{W_{n-1}} \mod W_{-n}$.
Since $A_1\subset \ker\psi_1$ is simple we get that, once again, either $A_1\subset \ker\phi_1$ or $A_1\cap \ker \phi_1=\{0\}$. We clearly cannot have the first case since all elements of $\ker\phi_1$ are nilpotent, in fact $F^3=0$ for all such $F$. Therefore, $A_1\cap \ker \phi_1=\{0\}$ and the algebra homomorphism $\phi_1$ restricts to an embedding of $A_1$ into $\End (W_{n-1}/W_{-n})^{N_{1}}$ as we wanted.
\end{proof}
The result now follows by induction, the base case $n=1$ being handled directly by the above argument.
\end{proof}
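A concrete instance of the lemma (our own toy example, not taken from the text): take $V=k^2\otimes k^2$ with $N=a\mapsto (\mathrm{id}\otimes n)(a)$ for a $2\times 2$ Jordan block $n$; then $A=M_2(k)$ acts as $a\otimes \mathrm{id}$, commutes with $N$, preserves $W_{-1}=\operatorname{im} N$, and embeds into $\End(\gr_{-1}^{W_\bullet})$ exactly as the lemma predicts.

```python
import numpy as np

n = np.array([[0, 0],
              [1, 0]])            # 2x2 nilpotent Jordan block
I2 = np.eye(2, dtype=int)
N = np.kron(I2, n)                # N = id (x) n on V = k^2 (x) k^2

def act(a):
    """The simple algebra A = M_2(k) acting on V as a (x) id."""
    return np.kron(a, I2)

# A lands inside End(V)^N: every a (x) id commutes with id (x) n.
rng = np.random.default_rng(0)
a = rng.integers(-5, 5, size=(2, 2))
F = act(a)
assert np.array_equal(F @ N, N @ F)

# W_{-1} = im N = span(e_1, e_3) (0-indexed) is A-stable, and A acts on
# gr_{-1} = W_{-1} by the matrix a itself: an embedding
# M_2(k) -> End(gr_{-1}), with j(1) = -1 in the notation of the lemma.
basis = [1, 3]                    # basis of im N
restricted = F[np.ix_(basis, basis)]
assert np.array_equal(restricted, a)
```

Here the first alternative of the proof ($\ker\psi_1\cap A_1=\{0\}$) occurs already at the first step, so no induction is needed.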
\subsection{Inertia and endomorphisms}\label{section:inertiaendom}
We believe the results in this subsection are well known to experts; unable to find a reference for them, we include them here for the sake of completeness.\\
Let us consider $f:X\rightarrow S$ a G-admissible variation of $\mathbb{Q}$-HS defined over the fixed number field $K$ as usual. We also fix a point $s\in S(L)$ that is $v$-adically close to $s_0$ for some $v\in \Sigma_{L,f}$. Our main goal in this section is a brief study of the relation between the algebra of Hodge endomorphisms at $s\in S(L)$ and the endomorphisms of $H^{n}_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l)$, where $l\neq p(v)$ and $\bar{X}_{s,v}=(X_s\times_L L_v)\times_{L_v} \bar{L}_v$.
This relation is captured by the following proposition.
\begin{prop}\label{endinert}Assume the Hodge conjecture is true for $X_s$ and that the action of the inertia group $I_{L_v} $ on $H^n_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l)$ is unipotent. Then
\begin{center}
$D_s\otimes\mathbb{Q}_l:=(\End(H^n(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q})))^{M_s} \otimes_{\mathbb{Q}}\mathbb{Q}_l \hookrightarrow (\End(H^n_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l)))^{I_{L_v}}$.
\end{center}
\end{prop}
In order to prove this we follow the same strategy of proof as that of Theorem 1 of \cite{ribet}. We first start with the following lemma.
\begin{lemma}\label{compthm}
There is a natural embedding of $\mathbb{Q}_l$-algebras
\begin{center}
$(\End(H^{n}(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q})))^{G_{mt,s}} \otimes_{\mathbb{Q}}\mathbb{Q}_l\hookrightarrow \End(H^n_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l))$.
\end{center}
\end{lemma}
\begin{proof}As a corollary\footnote{See Corollary 4.3, Ch. VI of \cite{milnetale} applied to the field extension $\bar{L}_v/\bar{L}$ and $\mathbb{C}/\bar{L}$.} of the Smooth base change Theorem for lisse $l$-adic sheaves we have that \begin{equation}\label{eq:compar1}
H^{n}_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l)\cong H^{n}_{\acute{e}t}(\bar{X}_{s},\mathbb{Q}_l)\cong H^{n}_{\acute{e}t}(\bar{X}_{s,\mathbb{C}},\mathbb{Q}_l),
\end{equation}
where $\bar{X}_s=X_s\times_L \bar{L}$ and $\bar{X}_{s,\mathbb{C}}=\bar{X}_s\times_{\bar{L}}\mathbb{C}$.
Applying Artin's comparison theorem for lisse $l$-adic sheaves, we get\begin{equation}\label{eq:compar2}
H^{n}_{\acute{e}t}(\bar{X}_{s,\mathbb{C}},\mathbb{Q}_l)\cong H^{n}(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q}_l).
\end{equation}
Combining \eqref{eq:compar1} with \eqref{eq:compar2} we get
\begin{center}
$H^{n}_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l)\cong H^{n}(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q}_l)$,
\end{center}
and the inclusion map we want follows.
\end{proof}
\begin{proof}[Proof of \ref{endinert}]It suffices to show that given $\sigma\in I_{L_v}$ and $f\in (\End(H^{n}(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q})))^{G_{mt,s}}$ the corresponding elements of $\End(H^n_{\acute{e}t}(\bar{X}_{s,v},\mathbb{Q}_l))$ commute with each other.
Since $f\in (\End(H^{n}(\bar{X}_{s,\mathbb{C}}^{an},\mathbb{Q})))^{G_{mt,s}}$ the Hodge conjecture implies that $f$ is defined over some finite extension $\tilde{L}/L$. Therefore, by compatibility of the cycle class maps in \'etale and singular cohomology with Artin comparison, $f$ commutes with $\sigma^k$ for some $k\geq 1$. The result then follows from Lemma $1.2$ of \cite{ribet} since the action in question is unipotent.\end{proof}
Magnets with geometrical frustration\cite{moessner:06pt} have
received much attention as models of strongly interacting electronic
systems with unusual ground states, thermodynamic phases, and
excitations. The hallmark of strong frustration is a conspicuously
large degeneracy of the classical ground state: essentially, a
finite {\em fraction} of the degrees of freedom remains
unconstrained to the lowest temperatures. For discrete spins, this
manifests itself in the number of ground states scaling
exponentially with the system volume and thus giving rise to a
nonzero entropy density at absolute zero temperature. Well-known
examples of that are the Ising antiferromagnet on the triangular
lattice\cite{wannier:1950, houtappel:1950} and spin
ice.\cite{bramwell:science} For continuous spins --- most saliently
for the Heisenberg antiferromagnet on the pyrochlore lattice --- the
classical ground states form a \textit{manifold} whose dimension is
proportional to the system volume.\cite{Moessner98PRB} In
that particular case, the classical model exhibits strong
short-range spin correlations but fails to exhibit any form of
conventional magnetic order down to the lowest temperatures
accessible in Monte Carlo simulations. The strong correlation
between the local motions of spins in this liquid-like phase
manifests itself as an emergent gauge structure in the low-temperature limit
and results in a dipolar form of the asymptotic spin correlations at
large separations. \cite{isakov:2004prl, henley:2005prb}
At the same time, the large degeneracy of the ground state makes
this system susceptible to all kinds of perturbations, which
certainly exist in real compounds. For instance, the spin-lattice
coupling, arising from the dependence of exchange strength on the
atomic displacements,\cite{Kittel60} lifts the degeneracy through a
spin analog of the Jahn-Teller effect\cite{Tch02PRB} observed in
spinels ZnCr$_2$O$_4$\cite{PhysRevLett.84.3718} and
CdCr$_2$O$_4$.\cite{chung:247204}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{fig1}
\caption{Second- and third-neighbor pairs on the pyrochlore lattice.
Since exchange paths giving rise to $J_3$ and $J_3'$ are inequivalent,
the two couplings may be different. Numbers from 0 to 3 label the four
fcc sublattices.}
\label{fig:pyrochlore}
\end{figure}
This naturally leads one to ponder the following questions. Can the
interplay of a weak perturbation with strong frustration lead to
interesting ordered phases? Are there any (intermediate) partially
ordered phases? What is the nature of the phase transitions between
such phases? In this paper we discuss these questions in the
context of a classical Heisenberg antiferromagnet on the pyrochlore
lattice with interactions going beyond nearest neighbors. Following
previous work by Reimers \textit{et al.}\cite{reimers:1991prb} and
by Tsuneishi \textit{et al.},\cite{tsuneishi:2007jpcm} we consider
the classical Heisenberg antiferromagnet on the pyrochlore lattice
with the Hamiltonian
\begin{equation}
\mathcal{H} = J_1\sum_{\langle ij\rangle}
\mathbf S_i\cdot\mathbf S_j
+ J_2\sum_{\langle\langle ij\rangle\rangle}
\mathbf S_i\cdot\mathbf S_j,
\label{eq-H0}
\end{equation}
where $\langle ij\rangle$ and $\langle\langle ij\rangle\rangle$
indicate pairs of first and second neighbors, respectively. Given
the short-range nature of exchange forces, we work in the limit $|J_2|
\ll J_1$. It is reasonable to expect that the influence of $J_2$
becomes noticeable only at low temperatures of order $|J_2| S^2$, when
the system is already in the strongly correlated paramagnetic state,
in which it is constrained to fluctuate around the ground states of
the nearest neighbor exchange. Using a combination of Monte Carlo
simulations and analytical arguments, we have mapped out the phase
diagram in the $J_2$--$T$ plane shown in Fig.~\ref{fig:phase-dgm}.
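The degeneracy of the nearest-neighbor ground states referred to above is already visible on a single tetrahedron: since $\sum_{\langle ij\rangle}\mathbf S_i\cdot\mathbf S_j = \tfrac{1}{2}\big(|\sum_i \mathbf S_i|^2 - 4\big)$ for four unit spins with all pairs coupled, every configuration with zero total spin has the same energy $-2J_1$. A quick numerical check (our own illustration, with $J_1=1$ and $S=1$):

```python
import numpy as np

def tetra_energy(spins, J1=1.0):
    """Nearest-neighbor energy of 4 unit spins on one tetrahedron (all pairs coupled)."""
    E = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            E += J1 * np.dot(spins[i], spins[j])
    return E

z = np.array([0.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0])

# Collinear up-up-down-down state with zero total spin.
collinear = np.array([z, z, -z, -z])

# A non-collinear zero-total-spin state: two antiparallel pairs along different axes.
coplanar = np.array([z, -z, x, -x])

E1, E2 = tetra_energy(collinear), tetra_energy(coplanar)
# Both satisfy sum S_i = 0, hence E = (|sum|^2 - 4)/2 = -2 for J1 = 1.
assert abs(E1 + 2.0) < 1e-12 and abs(E2 + 2.0) < 1e-12
```

It is this continuum of zero-total-spin configurations per tetrahedron that the weak $J_2$ must select among at temperatures of order $|J_2| S^2$.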
\begin{figure}
\includegraphics[width=0.8\columnwidth]{fig2}
\caption{Phase diagram of the model with antiferromagnetic first and
weak second-neighbor exchange of either sign on the pyrochlore
lattice. Open circles are numerically determined locations of
thermodynamic phase transitions (all first order); filled circles denote the
stability boundary of the collinear phase. Solid lines are interpolated
phase boundaries; the dashed line is a boundary of local stability of
the collinear phase. The wavenumber of the incommensurate magnetic
phase is $h \approx 3/4$.} \label{fig:phase-dgm}
\end{figure}
Antiferromagnetic second-neighbor exchange, $J_2>0$, significantly
reduces the frustration by selecting states in which spins within
any of the four fcc sublattices that comprise the pyrochlore lattice
are parallel to one another. We find a collinearly ordered phase of
the type $\langle \mathbf S_0 \rangle = -\langle \mathbf S_1 \rangle
= -\langle \mathbf S_2 \rangle = \langle \mathbf S_3 \rangle$,
where the subscripts enumerate the fcc sublattices
[Fig.~\ref{fig:pyrochlore} and Fig. \ref{fig-afj2}(a)]. The
transition between the paramagnetic and antiferromagnetic phases is
discontinuous.
Ferromagnetic second-neighbor exchange, $J_2<0$, leaves the system
strongly frustrated. A mean-field calculation by Reimers \textit{et
al.}\cite{reimers:1991prb} predicted a ground state with
incommensurate magnetic order. While Tsuneishi \textit{et
al.}\cite{tsuneishi:2007jpcm} indeed observed Bragg peaks in the
spin structure factor obtained through a Monte Carlo simulation for
$J_2=-0.1 J_1$, they also noted that the spins remained dynamic,
failing to freeze. We show that the observed locations of the Bragg
peaks are compatible with the results of Reimers \textit{et al.}, so
that the low-temperature phase is most likely magnetically ordered.
The main focus of our paper is a peculiar \textit{partially ordered}
phase sandwiched between the paramagnet and the magnetically ordered
state for weak enough ferromagnetic $J_2$, namely $-0.09 J_1
\lesssim J_2 < 0$. In the intermediate phase, the spins display
collinear order; furthermore, they exhibit magnetic order within a
thin $\{100\}$ layer but no order across different layers. The
partial order can be characterized by a combination of a director
$\hat{\mathbf n}$ specifying a global spin axis, a Potts ($Z_3$)
variable $q = (100)$, (010), or (001) specifying the direction of
the layers, and an Ising ($Z_2$) variable $\sigma_n$ {\em for each
layer} identifying one of the two possible spin orientations within
a layer. The order is partial in the sense that the Ising variables
$\{\sigma_n\}$ randomly pick values of $+1$ and $-1$ with no
discernible correlations between adjacent layers. The partially
ordered state is bounded by first-order transitions on both the high
and low-temperature sides.
Similar partial order has been previously found in a $1/S$ treatment
of the Heisenberg antiferromagnet on the checkerboard lattice, also
known as the square lattice with crossings, a two-dimensional analog
of the pyrochlore.\cite{ot:2003prb} In both systems, the distinct
layered states are \textit{not} related to one another by a symmetry
of the Hamiltonian and simply arise as different local minima of the
free energy. Free-energy barriers separating them may be large enough in
practice for the system not to be ergodic and instead to remain in one of these
minima forever.
Since the energy of the partially ordered collinear state is greater
than that of the low-temperature multiple-$\mathbf q$ magnetic
order, entropic selection plays a crucial role in the stabilization
of the intermediate phase. This is consistent with the general
observation that states with collinear spins tend to have softer
thermal fluctuations and therefore have a lower free energy at
finite temperatures.\cite{Moessner98PRB,henley:89} A similar
collinear phase has been reported in the Monte Carlo study of a
$J$-$J'$ model which interpolates between the pyrochlore and the fcc
lattices. \cite{pinettes:02}
While we have focused on the role of second-neighbor exchange $J_2$
in the formation of magnetic order on the pyrochlore lattice, our
results also shed light on the role of third-neighbor interactions
$J_3$ (see Fig.~\ref{fig:pyrochlore}). In view of strong
correlations between nearest-neighbor spins developing at
temperatures well below $J_1 S^2$, the properties of the system
depend not on $J_2$ and $J_3$ separately but on their linear
combination $J_2 - J_3$. Indeed, the relative shift in energy for
any pair of ground states of the nearest-neighbor exchange due to a
small $J_3$ is identical to the effect of a $J_2$ of the same
magnitude and opposite sign. Thus our findings should also be of
relevance for the more general case of a pyrochlore antiferromagnet
with small $J_2$ and $J_3$.
The remainder of this paper is organized as follows. In
Sec.~\ref{sec:low-T} we briefly discuss the nature of magnetically
ordered phases at low temperatures for both signs of the
second-neighbor coupling $J_2$. Sec.~\ref{sec:int-T} presents the
main subject of this work, the partially ordered phase found at
intermediate temperatures on the ferromagnetic side of $J_2$.
Stability of the partially ordered state and its phase boundaries
are examined in Sec.~\ref{sec:stability}. We conclude with a
discussion of these results in Sec.~\ref{sec:discussion}.
\section{Low-temperature ordered phases}
\label{sec:low-T}
Since the phase transitions shown in Fig. \ref{fig:phase-dgm} are
strongly discontinuous and occur at very low temperatures, the
metastable states close to the coexisting region are rather
long-lived. Conventional histogram methods with local Metropolis
updates are ineffective in determining the critical points due to a
large energy barrier separating the metastable state from the true
ground state. Instead, we settled on using a method proposed by M.
Creutz \textit{et al.}, \cite{creutz:79} in which a mixed phase with
the two coexisting states each occupying half the lattice is
constructed first. By thermalizing the mixed phase at various
temperatures, the critical point is determined when neither of the
two states prevail the system during the relaxation process. Since
the multiple-$\mathbf q$ magnetic order has an extended unit cell
with a period of about 4 cubic lattice constants, systems used in
our mixed-phase simulations contain $8^3$ cubic unit cells, with a
total of $N = 16\times 8^3$ spins.
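A minimal sketch of the local Metropolis updates mentioned above, for classical unit-length Heisenberg spins; to stay self-contained it uses a toy antiferromagnetic chain rather than the pyrochlore lattice, and the system size, temperature, and sweep count are illustrative choices of ours:

```python
import numpy as np

def random_unit_vectors(rng, n):
    """Uniform points on the unit sphere via the Gaussian trick."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def chain_energy(S, J=1.0):
    return J * np.sum(S[:-1] * S[1:])    # sum_i S_i . S_{i+1}, open chain

def metropolis_sweep(S, T, rng, J=1.0):
    n = len(S)
    for i in rng.permutation(n):
        new = random_unit_vectors(rng, 1)[0]
        h = np.zeros(3)                  # local field from chain neighbors
        if i > 0:
            h += S[i - 1]
        if i < n - 1:
            h += S[i + 1]
        dE = J * np.dot(new - S[i], h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            S[i] = new

rng = np.random.default_rng(1)
S = random_unit_vectors(rng, 20)
E0 = chain_energy(S)
for _ in range(2000):
    metropolis_sweep(S, T=0.02, rng=rng)
# At T << J S^2 the chain relaxes toward the Neel-like minimum E = -(n-1) J.
assert chain_energy(S) < E0
```

On the pyrochlore lattice the update is identical except that the local field $\mathbf h$ collects the six nearest neighbors (plus the $J_2$ neighbors), which is where the energy barriers that motivate the mixed-phase method arise.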
\subsection{Antiferromagnetic $J_2$: low frustration}
\begin{figure}
\includegraphics[width = 0.30\columnwidth]{fig3a}
\includegraphics[width = 0.68\columnwidth]{fig3b}
\caption{\label{fig-afj2} (a) A ${\bf q} = 0$ N\'eel order for the model
with an antiferromagnetic $J_2$ (ferromagnetic $J_3$). The order
parameter is one of the three staggered magnetizations, e.g. ${\bf L}_3 =
({\bf S}_0-{\bf S}_1 - {\bf S}_2 + {\bf S}_3)/4S$.\cite{chern:060405}
(b) The phase transition between the
paramagnetic and antiferromagnetic phases for $J_2= 0.01 J_1$. The
simulated system has $16\times 8^3$ spins. The energy density
$\varepsilon = (E-E_0)/6 N_s$, where $E_0 = -N_s J_1$ is the ground
state energy of nearest-neighbor interactions.}
\end{figure}
In the limit $J_2 \ll J_1$, magnetic ordering takes place at a
temperature $T_c = \mathcal O(J_2 S^2)$. The nature of this ordering
is best understood by appealing to the fact that a weak
third-neighbor coupling $J_3 \ll J_1$ (Fig.~\ref{fig:pyrochlore})
selects among the nearest-neighbor ground states in the same way as
a second-neighbor coupling $J_2$ of the same strength and opposite
sign, as explained in Appendix \ref{app:equiv}. (We here note in
passing that, since the strength of coupling depends on the exchange
paths and not the interatomic distance alone, sometimes $J_3$ may be
as big as $J_2$. For instance, \textit{ab initio} calculations show
that in CdCr$_2$O$_4$ $J_3$ exceeds $J_2$ in
magnitude.\cite{chern:060405, yaresko:2008}) This insight is useful
as the resulting ordered pattern can be understood in a more
straightforward way by analyzing the effect of $J_3$. To see that,
note that the pyrochlore lattice consists of four fcc sublattices
and that third neighbors on the pyrochlore lattice belong to the
same fcc sublattice (Fig.~\ref{fig:pyrochlore}). Thus a
ferromagnetic exchange $J_3<0$ is not frustrated and will be
absolutely minimized by a state where spins within the same fcc
sublattice are parallel to one another.
A translationally invariant four-sublattice ground state was
predicted for the pyrochlore antiferromagnet with a ferromagnetic
$J_3$ by Reimers \textit{et al.}\cite{reimers:1991prb} The same can
be expected for an antiferromagnetic second-neighbor coupling
$J_2>0$. In both cases the energy of the further-neighbor exchange
is minimized by a ferromagnetic order $\langle \mathbf S_i \rangle$
within the individual sublattices. Consequently any configuration
satisfying $\sum_{i=0}^3 \langle \mathbf S_i \rangle = 0$ is a
ground state at the mean-field level. Thermal fluctuations
nonetheless favor those with collinear spins. \cite{Moessner98PRB}
This is indeed what we obtained in the Monte Carlo simulations
(Fig.~\ref{fig-afj2}): a $\mathbf q=0$ N\'eel state with an
up-up-down-down spin configuration on every tetrahedron is found to
be the ground state for an antiferromagnetic $J_2$. This collinear
magnetic state is separated by a discontinuous transition line from
the high-temperature cooperative paramagnetic state. As shown in
Fig. \ref{fig-afj2}(b), both the energy density $\varepsilon$ and
the staggered magnetization $\mathbf L_3 = (\mathbf S_0 - \mathbf
S_1 - \mathbf S_2 + \mathbf S_3)/4S$ show a clear jump at the
transition temperature $T_c \approx 3.2\, J_2 S^2$.
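The thermodynamics above was obtained with standard Metropolis updates of classical Heisenberg spins. As a minimal illustration (our own sketch with unit spins, $S=1$, and our own function names; not the production code used for the $16\times 8^3$-spin systems), one can anneal a single tetrahedron with only the nearest-neighbor coupling $J_1$ and verify that it reaches the ground-state condition $\sum_i \mathbf S_i = 0$ with energy $-2J_1S^2$:

```python
import numpy as np

J1 = 1.0  # nearest-neighbor exchange (antiferromagnetic)

def energy(spins):
    # E = J1 sum_{i<j} S_i.S_j = (J1/2)(|sum_i S_i|^2 - 4) for 4 unit spins
    M = spins.sum(axis=0)
    return 0.5 * J1 * (M @ M - 4.0)

def sweep(spins, T, rng, step=0.3):
    for i in range(4):
        # propose a small rotation of spin i, then renormalize
        trial = spins[i] + step * rng.normal(size=3)
        trial /= np.linalg.norm(trial)
        # energy change from the local exchange field acting on spin i
        h = J1 * (spins.sum(axis=0) - spins[i])
        dE = (trial - spins[i]) @ h
        if dE < 0 or rng.random() < np.exp(-dE / T):
            spins[i] = trial

rng = np.random.default_rng(1)
spins = rng.normal(size=(4, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)

E_samples = []
for t in range(6000):
    sweep(spins, T=0.02, rng=rng)
    if t >= 4000:
        E_samples.append(energy(spins))
print(np.mean(E_samples))  # close to the ground-state value -2
```

At $T = 0.02\,J_1$ the measured energy sits $\mathcal O(T)$ above the ground state, as expected from equipartition over the stiff directions of $|\sum_i\mathbf S_i|^2$.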
\subsection{Ferromagnetic $J_2$: high frustration}
\label{sec-high-frustration}
The case of a ferromagnetic second-neighbor coupling, $J_2<0$, is
similar to that of $J_3>0$. An antiferromagnetic coupling on an fcc
lattice is frustrated, so that this time one may expect a more complex
magnetic order. Indeed, Reimers's mean-field calculation yields an
incommensurate magnetic order with a wavevector $\mathbf q=2\pi(h,h,0)$
in the case of a ferromagnetic $J_2$.
\begin{figure}
\includegraphics[scale = 0.32]{fig4a}
\includegraphics[scale = 0.32]{fig4b}
\caption{\label{fig-fq} (a) The spin structure factor of the
low-temperature ordered state at wavevectors $\mathbf q =
2\pi(h,h,l)$. The state was obtained from a Monte Carlo simulation
on a system of $16\times 8^3$ spins for a ferromagnetic $J_2$;
the temperature was $T = 0.2\,|J_2|$. (b) Minimum eigenvalue of the
exchange matrix $J_{mn}(\mathbf q)$ at wavevectors $\mathbf q =
2\pi(h,h,l)$ for $J_2=-J_1/10$. The satellite peaks at ${\bf
q}\approx 2\pi(\frac{5}{4},\frac{5}{4},\pm 0.1)$ might be a
finite-size effect associated with the incommensurate spin order.}
\end{figure}
We have performed Monte Carlo simulations on the pyrochlore lattice with
periodic boundary conditions measuring 8 cubic unit cells in each
direction. The simulations were done for $J_2 = -0.1\,J_1$. They
revealed a state with magnetic Bragg peaks at incommensurate lattice
momenta near $2\pi\{3/4,\,3/4, 0\}$ and other equivalent positions.
Fig.~\ref{fig-fq}(a) shows two inequivalent Bragg peaks, $\mathbf q
\approx 2\pi(3/4,\,3/4,0)$ and $-2\pi(3/4,\,3/4,0)$, the rest being
related to these two by a reciprocal lattice vector. Bragg peaks
with comparable intensities are found at other wavevectors related
to the above two by point-group symmetries. This multiple-$\mathbf
q$ N\'eel order is consistent with the ground states of
(\ref{eq-H0}) in the spherical approximation, in which the local
length constraints $|\mathbf S_i|=S$ are replaced by a global one
$\sum_{i=1}^N\,|\mathbf S_i|^2 = N S^2$. Introducing the Fourier
transform $\mathbf S_i = \sum_{\bf q} \mathbf S_m({\bf q})
e^{i\mathbf q\cdot\mathbf r_i}$ [the site index $i = (m,\mathbf
r_i)$, where $m$ is the sublattice index], the exchange interaction
(\ref{eq-H0}) becomes
\begin{equation}
\label{eq-H1}
\mathcal{H} = \frac{N}{4}\sum_{{\bf q}}
\sum_{m,n=0}^3 J_{mn}({\bf q})
\,{\bf S}_m({\bf q})\cdot{\bf S}_n(-{\bf q}).
\end{equation}
The Fourier components $\mathbf S_m(\mathbf q)$ are subject only to
a global constraint $\sum_{m,\bf q} |\mathbf S_m({\bf q})|^2 = 4 S^2$.
The matrix $J_{mn}(\mathbf q)$ is the Fourier transform of the
exchange interaction $J_{ij}=J_{mn}({\bf r}_i - {\bf r}_j)$. Its
explicit form with interactions up to the fourth nearest neighbors
can be found in Ref. \onlinecite{reimers:1991prb}.
Expanding $\mathbf S_m(\mathbf q) =\sum_a U^a_{\mathbf q,\,m}\,\bm
\Phi^a_{\mathbf q}$ in terms of the eigenvectors $U^a_{\mathbf q,
m}$ of the exchange matrix $J_{mn}(\mathbf q)$ yields the energy as
a function of the expansion coefficients $\bm \Phi^a_{\mathbf q}$:
\begin{equation}
E = \frac{N}{4} \sum_{\mathbf q}\sum_{a=1}^4
\lambda^a_{\mathbf q} |\bm \Phi^a_{\mathbf q}|^2,
\end{equation}
where $\lambda^a_{\mathbf q}$ is the corresponding eigenvalue of
$J_{mn}(\mathbf q)$. With the normalization
$\sum_{m=0}^3|U^a_{\mathbf q,m}|^2=1$, the vectors $\bm
\Phi^a_{\mathbf q}$ satisfy $\sum_{\mathbf q} \sum_a
|\bm\Phi^a_{\mathbf q}|^2 = 4 S^2$. The ground state energy of
(\ref{eq-H1}) is thus $E_0 = N S^2 \lambda_{\rm min}$, where
$\lambda_{\rm min}$ is the lowest eigenvalue $\lambda^a_{\mathbf
q}$.
For the nearest-neighbor interaction only, the two lowest
eigenvalues are $\mathbf q$-independent, $\lambda^1_{\mathbf q}
=\lambda^2_{\mathbf q} = -J_1$, reflecting the degenerate nature of
the magnetically ordered ground state. This degeneracy is lifted by
the introduction of $J_2$ as discussed by Reimers {\em et al.}
\cite{reimers:1991prb} A contour plot of the lowest eigenvalue of
the exchange matrix as a function of the wavevector $\mathbf
q=2\pi(h,h,l)$ for $J_2<0$ is shown in Fig. \ref{fig-fq}(b). It can
be seen from Fig. \ref{fig-fq} that the peaks of the spin structure
factor appear at the same locations as the minima of exchange
energy, namely at 12 incommensurate wavevectors $\mathbf q^* =
2\pi\{h^*,h^*,0\}$, where $h^*\approx 3/4$ depends weakly on the
ratio $J_2/J_1$.
For small $J_2/J_1$,
$h^*=a_0+a_1 (J_2/J_1)+\mathcal{O}((J_2/J_1)^2)$, where
\begin{eqnarray}
a_0 &=&\frac{1}{\pi}\arccos[(4\sqrt{3}-9)/3]=0.7427,\nonumber \\
a_1 &=& \frac{44}{3\pi\sqrt{9654+5574\sqrt{3}}}=0.0336.
\label{eq:h-j2}
\end{eqnarray}
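These statements are easy to check numerically. The sketch below (our own check, not from the paper) assumes the standard fcc sublattice offsets $\mathbf r_m$ of the pyrochlore lattice and the normalization $J_{mn}(\mathbf q) = J_1\cos[\mathbf q\cdot(\mathbf r_m-\mathbf r_n)]$ for the nearest-neighbor block, which reproduces the flat bands at $-J_1$; it also evaluates the constants $a_0$ and $a_1$ of Eq.~(\ref{eq:h-j2}):

```python
import numpy as np

J1 = 1.0
# fcc sublattice offsets of the pyrochlore lattice (cubic constant a = 1)
r = np.array([[0, 0, 0], [0, .25, .25], [.25, 0, .25], [.25, .25, 0]])

def J_nn(q):
    """Nearest-neighbor block J_mn(q) = J1 cos(q.(r_m - r_n)), zero diagonal."""
    phase = r @ q
    M = J1 * np.cos(phase[:, None] - phase[None, :])
    np.fill_diagonal(M, 0.0)
    return M

rng = np.random.default_rng(0)
for _ in range(5):
    q = 2 * np.pi * rng.uniform(-1, 1, size=3)
    ev = np.linalg.eigvalsh(J_nn(q))   # ascending order
    assert np.allclose(ev[:2], -J1)    # two flat bands at -J1 for every q

# constants of the small-J2 expansion of the ordering wavevector h*
a0 = np.arccos((4 * np.sqrt(3) - 9) / 3) / np.pi
a1 = 44 / (3 * np.pi * np.sqrt(9654 + 5574 * np.sqrt(3)))
print(round(a0, 4), round(a1, 4))  # 0.7427 0.0336
```

The two flat bands follow from the identity $\cos[\mathbf q\cdot(\mathbf r_m-\mathbf r_n)] = c_m c_n + s_m s_n$ with $c_m = \cos(\mathbf q\cdot\mathbf r_m)$ and $s_m = \sin(\mathbf q\cdot\mathbf r_m)$: the matrix is a positive-semidefinite rank-2 update of $-J_1$ times the identity, so two eigenvalues are pinned at $-J_1$ and the other two never fall below it.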
\begin{figure}
\includegraphics[height=0.55\columnwidth]{fig5}
\caption{\label{fig-trans-m} The phase transition between the
paramagnetic and antiferromagnetic phases for $J_2= -0.1 J_1$. The
simulated system has a total of $N=16\times 8^3$ spins. The
normalized energy density $\varepsilon = (E-E_0)/6 N$, where $E_0 =
-N J_1 S^2$ is the ground state energy of nearest-neighbor
interactions. $\phi_M$ is the second moment of the magnetic order
parameters.}
\end{figure}
The magnetic order is described by an order parameter composed of
12 vector amplitudes $\bm \Phi_{\mathbf q^*}$.\cite{reimers:1991prb}
A detailed characterization of this magnetic state is deferred to a
future publication. Fig.~\ref{fig-trans-m} shows the temperature
dependence of the energy density $\varepsilon$ and the magnitude of
the order parameters $\phi_M = \sum_{\mathbf q^*} |\bm\Phi_{\mathbf
q^*}|^2$. Both exhibit a clear jump at $T_c \approx 0.95|J_2| S^2$,
indicating a first-order transition. This is also confirmed by a
double-peak structure in the energy histogram at the transition
temperature. Similar results were obtained for $J_2 \lesssim
-0.09\,J_1 $ where the magnetic phase is separated from the
high-temperature spin liquid phase by a first-order phase transition
as indicated in Fig.~\ref{fig:phase-dgm}.
\section{Partially ordered phase}
\label{sec:int-T}
As discussed in the Introduction, an intermediate phase with
collinear spins exists at finite temperatures for a small
ferromagnetic coupling $J_2<0$. The appearance of collinearity is
not totally unexpected as it is well known that collinear states are
in general favored by thermal fluctuations in magnets with
frustrated exchange interactions. \cite{Henley89PRL} The fact that
the system remains frustrated even in the presence of a
ferromagnetic $J_2$ makes the existence of the nematic phase
possible. From another perspective, the classical nearest-neighbor
Heisenberg spins on the pyrochlore lattice evade the thermal
selection only marginally.\cite{Moessner98PRB} The introduction of a
ferromagnetic $J_2$ reduces the dimension of the ground-state manifold,
thus permitting thermal fluctuations to stabilize collinear states.
\begin{figure}
\includegraphics[height=0.52\columnwidth]{fig6a}
\includegraphics[height=0.52\columnwidth]{fig6b}
\caption{\label{fig-transition} Transitions between (a) the
paramagnetic and nematic phases, and (b) the nematic and N\'eel
phases, for $J_2=-0.01 J_1$. A parallel-tempering Monte Carlo method
was employed to simulate a system with $16\times 4^3$ spins. The
normalized energy density $\varepsilon = (E-E_0) /6 N$, where $E_0 =
-N J_1 S^2$ is the ground state energy of nearest-neighbor
interactions. $Q$ is the spin nematic order parameter.}
\end{figure}
\subsection{Nematic order}
To demonstrate that spins indeed become collinear in the
intermediate phase, we have obtained from Monte Carlo simulations
the nematic order parameter $Q$ defined as the largest eigenvalue of
the traceless tensor $Q_{\mu\nu} = \langle S_{\mu} S_{\nu}/S^2 -
\delta_{\mu\nu}/3\rangle$, \cite{Chaikin} where $S_\mu$ represents
Cartesian components of a spin. It vanishes in a totally disordered
state and attains the maximal value of 2/3 for parallel spins.
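In a simulation $Q$ is obtained directly from a spin configuration. The sketch below (our illustration, not the simulation code) checks the two limits just quoted:

```python
import numpy as np

def nematic_Q(spins):
    """Largest eigenvalue of Q_munu = <S_mu S_nu / S^2 - delta_munu / 3>
    for an array of spins of shape (N, 3)."""
    s = spins / np.linalg.norm(spins, axis=1, keepdims=True)
    Qmat = (s[:, :, None] * s[:, None, :]).mean(axis=0) - np.eye(3) / 3
    return np.linalg.eigvalsh(Qmat)[-1]

rng = np.random.default_rng(0)

# collinear spins (random +/- z directions): maximal value 2/3
collinear = np.outer(rng.choice([-1.0, 1.0], size=1000), [0.0, 0.0, 1.0])
print(nematic_Q(collinear))   # ~ 2/3

# isotropically distributed spins: Q -> 0 as N grows
isotropic = rng.normal(size=(20000, 3))
print(nematic_Q(isotropic))   # ~ 0
```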
The thermodynamic behavior of the system with $J_2 = -0.01\,J_1$ in
the vicinity of the phase transitions is illustrated in
Fig.~\ref{fig-transition}. The simulation was done on the pyrochlore
lattice with periodic boundary conditions measuring 4 cubic unit
cells in each direction, giving a total of $N = 16 \times 4^3 =
1024$ spins. To improve the equilibration process, we employed
parallel tempering\cite{hukushima:1996jpsj, trebst:2004pre} with 30
replicas. The energy density $\varepsilon$ and the nematic order
parameter $Q$ are shown as functions of temperature near $T_{c1}$
[paramagnet to partially ordered phase, Fig.~\ref{fig-transition}
(a)] and $T_{c2}$ [partially ordered phase to antiferromagnet, Fig.
\ref{fig-transition} (b)]. The energy density shows a clear
discontinuity at both transitions. Extrapolating the energy curve
from the partially ordered phase to $T=0$ yields a density
$\varepsilon_L = -|J_2|/3$ characteristic of a layered state to be
discussed below. Likewise, the order parameter $Q$ extrapolates to
the maximal attainable value of 2/3 characteristic of collinear
spins. Below $T_{c2}$, the antiferromagnetic state seems to have a
residual nematic order with $Q\approx 0.05$, which may be intrinsic
to the low-temperature ordered state, or a finite-size effect.
\subsection{Bond order}
Nematic order alone does not provide a full characterization of this
phase: four spins on a tetrahedron have three distinct collinear
states not related to each other by a global rotation of the spins.
They are labeled red, green, and blue in Fig.~\ref{fig:rgb}. These
states differ from one another by the location of frustrated bonds
$\langle ij \rangle$ that involve parallel spins. Since the global
direction of the spins is already captured by the nematic order
parameter $Q_{\mu\nu}$, further characterization can be made by
using scalar quantities, such as bond variables $f_{ij} \equiv
\langle \mathbf S_i \cdot \mathbf S_j \rangle$. At temperatures
well below $J_1 S^2$ only two (out of six) bond variables of a
tetrahedron are independent:\cite{Tch02PRB}
\begin{eqnarray}
f_1 &=& \frac{f_{01} + f_{23} + f_{02} + f_{13} - 2 f_{03} - 2 f_{12}}
{\sqrt{12}},
\nonumber\\
f_2 &=& \frac{f_{01} + f_{23} - f_{02} - f_{13}}{2}.
\label{eq:f}
\end{eqnarray}
The vector $\mathbf f = (f_1, f_2)$ takes on values in a triangular
domain with the three collinear states in its corners.
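The geometry of this domain can be verified directly from Eq.~(\ref{eq:f}). The sketch below (our own check, with collinear unit spins $S=1$ encoded by Ising signs $\eta_i$) evaluates the doublet for the three collinear states; after normalization by its magnitude $4/\sqrt 3$ it lands exactly on the corners used in Fig.~\ref{fig-fAB}:

```python
import numpy as np

PAIRS = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]

def bond_doublet(eta):
    """(f1, f2) of Eq. (eq:f) for a collinear tetrahedron with signs eta."""
    f = {(i, j): eta[i] * eta[j] for i, j in PAIRS}  # f_ij = S_i.S_j / S^2
    f1 = (f[0, 1] + f[2, 3] + f[0, 2] + f[1, 3]
          - 2 * f[0, 3] - 2 * f[1, 2]) / np.sqrt(12)
    f2 = (f[0, 1] + f[2, 3] - f[0, 2] - f[1, 3]) / 2
    return np.array([f1, f2])

# the three up-up-down-down states, labeled by their frustrated bonds
states = {"01,23": (1, 1, -1, -1),   # -> ( 1/2,  sqrt(3)/2)
          "02,13": (1, -1, 1, -1),   # -> ( 1/2, -sqrt(3)/2)
          "03,12": (1, -1, -1, 1)}   # -> (-1, 0)
for name, eta in states.items():
    f = bond_doublet(eta)
    print(name, f / np.linalg.norm(f))
```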
\begin{figure}
\includegraphics[width=0.92\columnwidth]{fig7}
\caption{The three distinct collinear states of a tetrahedron.
Frustrated bonds (with parallel spins) are shown as dashed lines.}
\label{fig:rgb}
\end{figure}
What kind of bond order might one expect in the intermediate phase?
To answer this question, let us again use the equivalence between a
ferromagnetic $J_2$ and an antiferromagnetic $J_3$. The latter
promotes antiparallel orientations for spins 3 and $3'$
(Fig.~\ref{fig:pyrochlore}), which means---for a collinear state of
spins---that one of the bonds 03 and $03'$ is frustrated and the
other is satisfied. (Bergman \textit{et al.}\cite{bergman:134409}
showed that such states---satisfying the ``bending rule" for
frustrated bonds in zero applied field---are also favored by quantum
fluctuations of spins.) In other words, adjacent tetrahedra will be
in states of different color. This is reminiscent of the
antiferromagnetic Potts model with 3 states: red, green and blue in
Fig.~\ref{fig:rgb}. A collinear state of the pyrochlore
antiferromagnet is fully specified by the global spin director and
the colors of all tetrahedra. Note however that colors of
tetrahedra are not completely independent: the number of satisfied
bonds ($\mathbf S_i \cdot \mathbf S_j = -S^2$) must be even along
any closed loop. Nonetheless, the parameterization in terms of Potts
variables serves a useful purpose. One of the phases of the
antiferromagnetic Potts model on a bipartite lattice has a broken
sublattice symmetry (BSS): one sublattice is dominated by one color,
while the other is randomly populated by the two remaining
colors.\cite{grest:1981prl, lapinskas:1998prl} With this state in
mind, we have measured the average bond variables in the
intermediate phase in the Monte Carlo simulations.
\begin{figure}
\includegraphics[scale=0.95]{fig8a}
\includegraphics[scale=0.95]{fig8b}
\caption{\label{fig-fAB} The distributions of the bond vectors
$\mathbf f_A$ and $\mathbf f_B$ of the two sublattices in the nematic phase.
The simulated system has (a) 8 layers and (b) 6 layers of tetrahedra
in one sublattice. The bond vector $\mathbf f$ has been normalized
such that the three collinear states, blue, red, and green, are at
vertices $(-1,0)$, $(\frac{1}{2}, \frac{\sqrt{3}}{2})$, and
$(\frac{1}{2}, \frac{-\sqrt{3}}{2})$, respectively.}
\end{figure}
The Monte Carlo averages of the bond doublet (\ref{eq:f}) for
sublattices $A$ and $B$ are shown in Fig.~\ref{fig-fAB}. The value
of $\mathbf f$ for sublattice $A$ is narrowly distributed in the
vicinity of the collinear blue state, indicating that all tetrahedra
of sublattice $A$ are in this state. There are no blue tetrahedra on
sublattice $B$, as one might expect from the analogy with the
antiferromagnetic Potts model. For the BSS phase, where each site
is red or green with equal probabilities, one expects a continuous
distribution of $\mathbf f$ centered at the midpoint of the triangle
edge connecting the green and red corners. Instead, we find
that sublattice $B$ has discrete fractions of red tetrahedra: e.g.
0, 1/4, 1/2, 3/4, and 1 in a system with 8 layers of tetrahedra in
one sublattice [Fig. \ref{fig-fAB}(a)].
This discreteness is a finite-size effect: an examination of
individual microstates shows that the intermediate phase has a
layered structure for bond variables on sublattice $B$: tetrahedra
within the same layer in the $xy$ plane have the same color. The
origin of the layered structure on one of the sublattices can be
traced to the same constraint on the colors around a closed loop.
See Appendix \ref{app:constraint} for details. For example, the
simulated system of Fig. \ref{fig-fAB}(a) contained 8 layers of
tetrahedra within a sublattice. If the layers could be colored red
and green independently of one another, the fraction of either color
would be quantized in units of 1/8. However, periodic boundary
conditions constrain the number of satisfied bonds in the direction
perpendicular to the layers, so that each sublattice can only have
an even number of layers of either color. Hence the fractions come
in units of 1/4. Similarly, for a system in
which each sublattice has 6 layers of tetrahedra, the fraction of
red layers is 0, 1/3, 2/3, and 1 [Fig. \ref{fig-fAB}(b)].
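The counting behind these fractions is pure parity and can be sketched in a few lines (an illustration of the argument, not a simulation):

```python
def allowed_fractions(n_layers):
    # Periodic boundary conditions force an even number of layers of
    # either color, so both the red and the green counts must be even.
    return sorted({r / n_layers for r in range(n_layers + 1)
                   if r % 2 == 0 and (n_layers - r) % 2 == 0})

print(allowed_fractions(8))  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(allowed_fractions(6))  # fractions 0, 1/3, 2/3, 1
```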
\begin{figure}
\includegraphics[width=0.74\columnwidth]{fig9}
\caption{\label{fig-hist} Histogram of 17 distinct collinear
layered structures obtained by replica-exchange Monte Carlo
simulation. The system has $16\times 4^3$ spins. The configuration
number labels 17 topologically distinct layered states subject
to the periodic boundary condition.}
\end{figure}
To verify this observation more directly, we performed a
replica-exchange Monte Carlo simulation on a system with $4^3$
conventional cubic cells. The $16\times 4^3$ spins are divided into 8
layers of tetrahedra in each sublattice. A particular layered state
with collinear spins is described by a sequence of Ising variables
$\{\sigma_1,\sigma_2,\cdots\sigma_8\}$ (see Appendix
\ref{sec:layered-state}). With periodic boundary conditions there
are 17 distinct configurations, each of which serves as a replica
in the replica-exchange Monte Carlo simulation. The Ising sequences
corresponding to these 17 layered
states are listed in Table I. In each exchange cycle, a fixed number
of Metropolis sweeps are performed on individual replicas of the
system, each of which corresponds to a particular layered state.
Then different replicas are exchanged according to detailed balance,
thus ensuring thermodynamic equilibrium. A histogram of the
occurrence of the 17 configurations in a chosen replica is shown in
Fig. \ref{fig-hist}. The almost equal probability of occurrence
implies a vanishing spin order after averaging over the different
configurations.
The layered structure of the intermediate phase spontaneously breaks
the rotational and translational symmetries of the pyrochlore
lattice. A collinear N\'eel order exists within an individual layer
of tetrahedra but not across the layers if the colors on one
sublattice are indeed random. At the mean-field level, the collinear
states in the partially ordered phase belong to a larger class of
(generally non-collinear) layered states with the same exchange
energy. A discussion of the general layered states is presented in
Appendix \ref{sec:layered-state}. As mentioned previously, collinear
spins tend to have a softer magnon spectrum, so the layered states
with collinear spins are favored by thermal fluctuations.
The two phase boundaries enclosing the intermediate phase are both
discontinuous transitions. The critical temperatures determined by
the mixed-phase method \cite{creutz:79} are linear in $J_2$ in the
limit $J_2 \to 0$: $T_{c1} \sim 1.87\,|J_2|S^2$ and $T_{c2} \sim
0.26\,|J_2|S^2$. Our numerical simulations seem to indicate
that the intermediate phase is globally stable in the temperature
regime $T_{c2} < T < T_{c1}$: in the mixed state, the collinear phase
gradually takes over the entire lattice. We do not have analytical
arguments to back up the global stability of the intermediate collinear
phase: such an analysis would require knowledge of the free energy
of the magnetically ordered phase, which has not yet been obtained.
\section{Local stability of the partially ordered phase}
\label{sec:stability}
Even an analysis of the local stability of the partially ordered collinear
phase is not exactly straightforward. The standard
large-$S$ method of computing the magnon
contribution to the free energy fails because of the existence
of unstable modes with a negative stiffness at zero temperature. The
instability merely reflects the fact that the collinear
states are not a local minimum of energy (\ref{eq-H0}). The instability
is avoided at a (sufficiently high) finite temperature: the free energy
of spin fluctuations contributes a positive term to the spin stiffness.
In this Section we analyze the local stability of the collinear phase.
\subsection{Unstable modes}
To analyze the stability of a collinear state, we express the energy
of the system in terms of transverse spin fluctuations
$\bm\sigma_i\perp\hat{\mathbf n}$. By substituting $\mathbf S_i
\approx S (1-\bm\sigma_i^2/2 S^2)\, \eta_i \hat{\mathbf n} + \bm
\sigma_i$ into Eq. (\ref{eq-H0}) we obtain a spin-wave Hamiltonian
in the harmonic approximation,
\begin{equation}
\label{eq-H-sigma2}
\mathcal{H}^{(2)}=E_L + (J_1-2 J_2)\sum_i\bm\sigma_i^2
+\frac{1}{2}\sum_{i,j}J_{ij}\,\bm\sigma_i\cdot\bm\sigma_j,
\end{equation}
where $E_L$ is the energy of the layered state. The Ising variables
$\{\eta_i\}$ specifying the direction of a spin are absent from the
harmonic Hamiltonian (\ref{eq-H-sigma2}). They affect the dynamics
of the system through the canonical commutation relations for the
transverse components of the spins.
\begin{figure}
\includegraphics[scale=0.302]{fig10a}
\includegraphics[scale=0.302]{fig10b}
\caption{\label{fig-unstable} (a) The energy dispersion of the spin-wave
band with unstable modes.
(b) Regions in momentum space $\mathbf q=2\pi(h,h,l)$
where the spectrum of energy fluctuations has negative eigenvalues
$\Lambda_{\mathbf q}^a$. $J_1=1$, $J_2=-0.1$.}
\end{figure}
The quadratic form (\ref{eq-H-sigma2}) must be positive definite to
guarantee stability of the collinear state. Its eigenvalues
$\Lambda$ are obtained by a Fourier transform followed by
diagonalization of a $4 \times 4$ matrix (the pyrochlore lattice is
an fcc lattice with a four-site basis):
\begin{equation}
\Lambda^a_{\mathbf q} = (J_1-2 J_2)+\lambda^a_{\mathbf q},
\end{equation}
where $\lambda^a_{\mathbf q}$ are eigenvalues of $J_{mn}(\mathbf q)$
defined in Sec.~\ref{sec-high-frustration}. The dispersion has
degenerate zero modes along lines $\mathbf q=2\pi\{1,h,0\}$
corresponding to magnetic spirals along one of the three cubic axes.
These spirals belong to the degenerate manifold of non-collinear
layered states discussed in Appendix \ref{sec:layered-state}.
Furthermore, there are regions in momentum space with
$\Lambda_{\mathbf q} < 0$, as shown in Fig. \ref{fig-unstable}. The
most unstable modes are found at wavevectors $\mathbf
q^*=2\pi\{h^*,h^*,0\}$ with $h^*$ given by Eq.~(\ref{eq:h-j2}). For
small $J_2/J_1$, the lowest eigenvalue is
\[
\frac{\Lambda_{\rm min}}{J_1}= (28-16\sqrt{3})\,\frac{J_2}{J_1}
+\frac{32}{3}(56\sqrt{3}-97)\,\Bigl(\frac{J_2}{J_1}\Bigr)^2
+\cdots.
\]
Since $\Lambda_{\rm min}<0$ for a ferromagnetic $J_2$, the collinear
ground states are unstable at zero temperature.
\subsection{Hartree-Fock calculation}
\begin{figure}
\includegraphics[scale=1.1]{fig11a}
\includegraphics[scale=1.1]{fig11b}
\caption{\label{fig-comparison} (a) Energy density $\varepsilon$ and
(b) nematic order parameter $Q$ as functions of temperature, obtained
from Monte Carlo simulations and the self-consistent Hartree-Fock
calculation. The calculation was done with $J_2=-0.01 J_1$. The
dashed line is a linear fit to the Monte Carlo data. Note that the
transition temperature obtained from Monte Carlo simulation is
$T_{c2}\approx 0.076\,|J_2| S^2$.}
\end{figure}
At finite temperatures the collinear layered states are stabilized
by thermal fluctuations. To demonstrate this, we go beyond the
harmonic term of the classical Holstein-Primakoff expansion and
consider the interactions between spin waves,\cite{hizi:2007jpcm}
\begin{eqnarray}
\mathcal{H}^{(4)} = \frac{1}{8 S^2}\sum_{i,j} J_{ij}
\Bigl[\eta_i\,\eta_j\,\bm{\sigma}_i^2\,\bm{\sigma}_j^2
-\frac{1}{2}\bm{\sigma}_i\cdot\bm{\sigma}_j\,
(\bm{\sigma}_i^2+\bm{\sigma}_j^2)\Bigr].
\end{eqnarray}
Since the system is unstable at the harmonic order, a perturbation
expansion based on the quadratic Hamiltonian (\ref{eq-H-sigma2}) is
not possible. Instead, following Hizi and
Henley,\cite{hizi:2007jpcm} we construct an effective (mean-field)
quadratic Hamiltonian
\begin{equation}
\mathcal{H}_\mathrm{MF} = \sum_{i,j} \tilde
H^{(2)}_{ij} \bm\sigma_i\cdot\bm\sigma_j
\label{eq:H-MF}
\end{equation}
that provides the best
approximation to $\mathcal H^{(2)} + \mathcal H^{(4)}$. To this end,
we use the standard mean-field recipe to decouple the quartic
Hamiltonian. We first write every possible pair of operators in
$\mathcal H^{(4)}$ in terms of its thermal average plus a
fluctuation term. Dropping terms quartic in the fluctuations yields the
quadratic form (\ref{eq:H-MF}) with the following coefficients
$\tilde{H}^{(2)}_{ij}$:
\begin{eqnarray}
\label{eq-HF}
&(J_1-2 J_2)+\frac{1}{2 S^2}\,\sum_k\,J_{ik}\,
(\eta_i\,\eta_k\,G_{kk}-G_{ik}) & (i=j),
\nonumber\\
&\frac{1}{2}\,J_{ij}\,\Bigl[1 + \frac{1}{S^2} \eta_i\,\eta_j\,G_{ij}
-\frac{1}{2 S^2}\,(G_{ii}+G_{jj})\Bigr] & (i\neq j).
\quad\quad
\end{eqnarray}
Here $G_{ij}=\langle\sigma^x_i\sigma^x_j\rangle
=\langle\sigma^y_i\sigma^y_j\rangle$ is the correlation function of
spin fluctuations calculated self-consistently in the thermal
ensemble of the mean-field Hamiltonian (\ref{eq:H-MF}):
\begin{equation}
\label{eq-Gij}
G_{ij} = \frac{\int\,D\bm{\sigma}\,\,\sigma^x_i\sigma^x_j
\,e^{-\beta\mathcal{H}_{\rm MF}}}
{\int\,D\bm{\sigma}\,e^{-\beta\mathcal{H}_{\rm MF}}}.
\end{equation}
\begin{figure}
\includegraphics[width=0.75\columnwidth]{fig12}
\caption{\label{fig-t-bd} Stability boundary $T^*$ obtained using
the Hartree-Fock calculation and the Monte Carlo simulations. The error
bars shown for the Monte Carlo data are equal to the temperature step
$\Delta T$ used in the simulation.}
\end{figure}
Numerically, an iteration process is used to obtain the correlation
functions $G_{ij}$. After self-consistency is reached,
the energy of the magnet is given by
\begin{eqnarray}
E_{\rm MF} &=& E_L + 2\sum_i (J_1-2 J_2)\,G_{ii}
+ \sum_{i,j} J_{ij}\,G_{ij} \\
&+&\frac{1}{2 S^2}\sum_{i,j}J_{ij}\,\Bigl[\eta_i\eta_j
(G_{ii} G_{jj}+G_{ij}^2)-G_{ij}(G_{ii}+G_{jj})\Bigr].
\nonumber
\end{eqnarray}
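The structure of this iteration can be illustrated with a scalar caricature (ours; the constant $c$, the mixing parameter, and the reduction to a single stiffness $K$ are purely illustrative): a stiffness obeying $K = cT/(8|J_2|+K)$, iterated with mixing until converged and compared against the closed-form root:

```python
import math

def solve_stiffness(T, J2, c=1.0, mix=0.5, tol=1e-12, max_iter=10000):
    """Damped fixed-point iteration for K = c*T / (8|J2| + K)."""
    K = 0.0
    for _ in range(max_iter):
        K_new = (1 - mix) * K + mix * c * T / (8 * abs(J2) + K)
        if abs(K_new - K) < tol:
            return K_new
        K = K_new
    return K

T, J2 = 0.01, -0.1
K = solve_stiffness(T, J2)
# closed-form root of K^2 + 8|J2| K - c T = 0 for comparison
K_exact = (-8 * abs(J2) + math.sqrt(64 * J2**2 + 4 * T)) / 2
print(K, K_exact)
```

Mixing successive iterates in this way is a standard stabilizer for self-consistency loops whose right-hand side decreases with the unknown; the calculation in the text iterates the full matrix $G_{ij}$ of Eq.~(\ref{eq-Gij}) rather than a single scalar.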
Fig. \ref{fig-comparison} (a) shows the computed energy density as a
function of temperature. The result agrees very well with that
obtained from Monte Carlo simulations. Both the simulation and
calculation were done for $J_2=-0.01\,J_1$ on a pyrochlore lattice
of $16\times 4^3$ spins with periodic boundary conditions.
on each side. The self-consistent method can also be used to compute
the nematic order parameter. For $\hat{\mathbf n} = +\hat{\mathbf z}$,
the tensor $\langle S_{\mu}S_{\nu} \rangle$ becomes diagonal with
elements $\langle S_x S_x \rangle = \langle S_y S_y \rangle = \bar G$
and $\langle S_z S_z \rangle = S^2-2\,\bar G$, where $\bar G =
\sum_i G_{ii}/N$. The nematic order parameter is then
\begin{equation}
\label{eq-Q}
Q = \frac{2}{3} - \frac{2}{N S^2}\sum_i G_{ii}.
\end{equation}
The result is shown in Fig. \ref{fig-comparison} (b) and the
agreement with that obtained from Monte Carlo simulation seems
satisfactory: the discrepancy between the two methods is less than
3\%. The nearly saturated nematic order parameter $Q$ observed in
Monte Carlo simulations implies $\bm\sigma^2\ll S^2$, justifying the
Holstein-Primakoff expansion about the collinear state.
Below a certain temperature $T^*$ the energy spectrum of spin waves
acquires some negative eigenvalues and the collinear phase gives way
to the low-temperature ordered state. Since the transition is first
order, the $E-T$ diagram exhibits hysteresis. The thermodynamic
transition takes place at a temperature $T_{c2}>T^*$, at which the
collinear phase is still locally stable.
The dependence of $T^*/|J_2|$ on the ratio $|J_2|/J_1$ obtained from
the Hartree-Fock calculation is shown in Fig. \ref{fig-t-bd}. The
points collapse onto a straight line, implying the scaling
relation $T^*\sim J_2^2/J_1$. A numerical estimate of the stability
boundary $T^*(J_2)$, obtained as the lowest temperature at which the
intermediate phase was still observed in Monte Carlo runs, is also
plotted in Fig. \ref{fig-t-bd}; the result is in satisfactory
agreement with that of the mean-field calculation.
\subsection{Analytic results: red-and-green state}
An analytical derivation of the stability temperature $T^*\sim
J_2^2/J_1$ is difficult to obtain for the most general layered
state. We have evaluated the stability for the simplest state of
this kind, in which all layers of a sublattice have the same color. A state of
this sort (sublattice $A$ is red and sublattice $B$ is green) was
studied in Ref. \onlinecite{chern:060405}. This particular state has
a higher symmetry than a typical layered structure: the color
variables violate only the inversion symmetry exchanging the two
sublattices of tetrahedra.
\begin{figure}
\includegraphics[width=0.45\columnwidth]{fig13}
\caption{\label{fig:J-dist} Renormalized nearest-neighbor bonds of
the red-and-green state in the mean-field calculation. The
renormalized first-neighbor exchange constants: $J_1-K_1-K_2$
(dashed bonds), $J_1-K_1$ (dash-dotted bonds), and $J_1-K_2$ (solid
bonds).}
\end{figure}
In the mean-field Hamiltonian (\ref{eq-HF}), the main effect of the
quartic interaction $\mathcal{H}^{(4)}$ is to renormalize the
first-neighbor exchange $J_1$ to $J_{ij} = J_1 + \delta J_{ij}$,
which is now bond-dependent:
\begin{eqnarray}
\label{eq:delta-J}
\delta J_{ij} = -\frac{J_1}{2 S^2} (G_{ii}+G_{jj}-2\,\eta_i\eta_j G_{ij}).
\end{eqnarray}
Assuming that exchange renormalizations $\delta J_{ij}$ respect the
symmetries of the red-and-green state, we have 3 independent
variational parameters $\delta J_{01}$, $\delta J_{02}$, and $\delta
J_{03}$ (Fig. \ref{fig:J-dist}). If we further assume that the
correlations $G_{ij}$ are dominated by the pyrochlore zero modes,
the number of variational parameters reduces to 2. This is so
because zero modes satisfy $\sum_{i=0}^3\sigma_i = 0$, hence
$\langle \sigma_0 \sigma_1\rangle =
-\langle\sigma_0^2\rangle-\langle\sigma_0\sigma_2\rangle
-\langle\sigma_0\sigma_3\rangle$. It follows then that $\delta
J_{01} = \delta J_{02} + \delta J_{03}$. We parameterize the
exchange renormalizations in terms of $K_1$ and $K_2$ such that
\begin{eqnarray}
\label{eq:J-A}
& & \delta J_{01} = \delta J_{23} = - K_1 - K_2, \nonumber \\
& & \delta J_{02} = \delta J_{31} = - K_1, \\
& & \delta J_{03} = \delta J_{12} = - K_2 \nonumber
\end{eqnarray}
on the red sublattice.
We then compute the spectrum and the eigenmodes of energy
fluctuations with the renormalized exchange interaction. The two
zero-energy bands that were flat in the absence of $J_2$ and $K_i$
now acquire a dispersion; one becomes gapped ($\Lambda_\mathbf{q}^a$
is strictly positive), while the other has a vanishing energy at the
wavevector $\mathbf q_0 = 2\pi(0,0,1)$. This zero mode corresponds
to a global rotation of spins. Correlation functions are dominated
by fluctuations in the lowest band in the vicinity of $\mathbf q_0$.
For small $\mathbf k$, the energy eigenvalue is
\begin{eqnarray}
\label{eq-disp1}
\Lambda_\mathbf{q_0 + k} \approx \frac{1}{32}\bigl[
2 K_1 k_{\perp}^2 + (8|J_2|+K_2) k_z^2\bigr],
\end{eqnarray}
where $k_{\perp}^2 = k_x^2+k_y^2$.
In order to obtain the correlations $G_{ij}$, we need first to
obtain the eigenmodes. To this end, we use an orthonormal basis of
the two zero modes of $J_1$ for given values of $\mathbf k$. We then
treat $K_i$ and $J_2$ as perturbations and use degenerate
perturbation theory to obtain the eigenmodes. To the lowest order in
$\mathbf k$, they are
\begin{eqnarray}
u_0(\mathbf q_0 + \mathbf k) &=& -i/2 - (k_x - k_y + k_z)/16, \nonumber \\
u_1(\mathbf q_0 + \mathbf k) &=& +1/2 -i(k_x + k_y + k_z)/16, \nonumber \\
u_2(\mathbf q_0 + \mathbf k) &=& -1/2 -i(k_x + k_y - k_z)/16, \nonumber \\
u_3(\mathbf q_0 + \mathbf k) &=& +i/2 - (k_x - k_y - k_z)/16.
\end{eqnarray}
As can be easily checked, the total spin of a tetrahedron
$\sum_m\sigma_m = \sum_m u_m e^{i(\mathbf q_0+\mathbf k)\cdot\mathbf
r_m} = 0$ at this order of $k$. The spin correlation function is
\begin{eqnarray}
\label{eq-Gmn}
G_{mn} = \frac{1}{N'}\sum_{\mathbf q}
\frac{T}{\Lambda_{\mathbf q}}
u^*_m(\mathbf q)\,u_n(\mathbf q)\,
e^{i\mathbf q\cdot(\mathbf r_m-\mathbf r_n)},
\end{eqnarray}
where $N' = N/4$ is the number of unit cells, and $m$, $n$ are
sublattice indices. Expanding to second order in $\mathbf k$ and
using (\ref{eq:delta-J}), we obtain the following self-consistency
equations for $K_1$ and $K_2$:
\begin{eqnarray}
\label{eq:K1}
\frac{J_1 T}{4N' S^2}\sum_{\mathbf k}
\frac{k_z^2}{2 K_1\,k_{\perp}^2+(8|J_2|+K_2)k_z^2} = K_1, \\
\label{eq:K2}
\frac{J_1 T}{2N' S^2}\sum_{\mathbf k}
\frac{k_{\perp}^2}{2K_1\,k_{\perp}^2+(8|J_2|+K_2)k_z^2} = K_2.
\end{eqnarray}
Although these equations can be solved numerically, we are
interested in an approximate solution of $K_1$ and $K_2$ in the
low-temperature regime, $T\ll |J_2| S^2$. Since the effective spin
stiffnesses $K_1$ and $K_2$ are generated by thermal fluctuations,
they are expected to be small compared to $|J_2|$. To the lowest
order we neglect $K_1$ and $K_2$ in Eq.~(\ref{eq:K1}) and obtain
\begin{eqnarray}
K_1 \approx \frac{J_1 T}{32\, |J_2| S^2}.
\end{eqnarray}
On the other hand, because the integral for $K_2$ is divergent as
$K_1\to 0$, we must keep $K_1$ in Eq.~(\ref{eq:K2}). Substituting
the result for $K_1$ into Eq.~(\ref{eq:K2}), we obtain
\begin{eqnarray}
K_2 \approx \frac{\pi}{3\sqrt{2} S}\sqrt{J_1 T}
\end{eqnarray}
to the lowest order in $T$.
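As a consistency check, the self-consistency equations (\ref{eq:K1}) and (\ref{eq:K2}) can be iterated to a fixed point on a finite wavevector grid and compared against the low-temperature expressions above. The sketch below is purely illustrative and not the calculation used in the paper: the parameter values ($J_1=1$, $J_2=-0.1$, $S=1$, $T=0.01$), the grid size, and the damping scheme are all assumptions.

```python
import math
import numpy as np

# Illustrative parameters (assumed): chosen so that T << |J2| S^2
J1, J2, S, T = 1.0, -0.1, 1.0, 0.01
L = 24                                   # linear size of the k-grid (assumed)
k = 2.0 * math.pi * np.fft.fftfreq(L)    # allowed wavevector components in (-pi, pi]
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
kperp2, kz2 = KX**2 + KY**2, KZ**2
Nk = L**3                                # plays the role of N' (number of unit cells)

K1, K2 = 1e-3, 1e-2                      # initial guesses
for _ in range(500):                     # damped fixed-point iteration
    denom = 2.0 * K1 * kperp2 + (8.0 * abs(J2) + K2) * kz2
    denom[0, 0, 0] = 1.0                 # k = 0 is excluded; both numerators vanish there
    K1_new = J1 * T / (4.0 * Nk * S**2) * float(np.sum(kz2 / denom))
    K2_new = J1 * T / (2.0 * Nk * S**2) * float(np.sum(kperp2 / denom))
    K1, K2 = 0.5 * (K1 + K1_new), 0.5 * (K2 + K2_new)

# low-temperature analytic approximations quoted in the text
K1_approx = J1 * T / (32.0 * abs(J2) * S**2)
K2_approx = math.pi / (3.0 * math.sqrt(2.0)) / S * math.sqrt(J1 * T)
print(K1, K1_approx, K2, K2_approx)
```

On such a grid the fixed-point value of $K_1$ lands slightly below the analytic estimate (the neglected $K_1$, $K_2$ terms only increase the denominator), while $K_2$ carries larger finite-size corrections from the $k_z=0$ rows.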
These results provide a glimpse into the physics of the transition
between the intermediate and low-temperature phases. Fig.
\ref{fig-trans-n} shows the renormalized dispersion
$\Lambda_\mathbf{q_0 + k}$ (\ref{eq-disp1}) along the line $\mathbf
q=2\pi(h,h,1)$ at various temperatures. As the temperature
decreases, a dip of the dispersion curve starts to develop at
$h\approx 0.2$. Eventually this local minimum touches zero at the
stability temperature $T^*$; below $T^*$ the collinear state is
unstable: it decays by emitting spin waves with $\mathbf q \approx
2\pi(1/4,1/4,1)$, which is related to $2\pi(3/4, 3/4, 0)$ by a
reciprocal lattice vector.
\begin{figure}
\includegraphics[width=0.65\columnwidth]{fig14}
\caption{\label{fig-trans-n} Variation of the spin-wave energy $\Lambda$
(in units of $|J_2|$) along the $\mathbf q = 2\pi(h,h,1)$ line. The
calculation was done with $J_2 = -0.01 J_1$. The curves correspond
to temperatures $T/|J_2|=$ 0.018, 0.0165, 0.01526, 0.0145, 0.013
(from top to bottom); $T^* = 0.01526\, |J_2| S^2$ is the
temperature at which the soft mode becomes unstable.}
\end{figure}
It should be noted that the scenario displayed in Fig.
\ref{fig-trans-n} is only a qualitative description of the real
transition. Our self-consistent treatment only takes into account
spin waves close to the $\mathbf q_0=2\pi(0,0,1)$ Goldstone mode.
This is valid at temperatures well above $T^*$ since these spin
waves are the lowest-energy excitations of the magnet. However, as
$T\to T^*$, spin waves with wavevectors $\mathbf q \approx
2\pi(3/4,3/4,0)$ become soft and should also be included in a
self-consistent calculation. Additionally, we have studied the
energy of spin waves as a proxy for the instability, whereas the
proper calculation at a finite temperature should involve the free
energy. We do this next.
\subsection{Stability boundary: red-and-green state}
We now provide an estimate of the stability temperature $T^*$ by
computing the magnon contribution to the system free energy. An
expression (\ref{eq-F-unstable}) for the change of free energy
associated with an unstable mode is derived in Appendix
\ref{sec-fe-unstable}. Here we apply the result to the red-and-green
state. We consider the most dangerous modes, namely those with
wavevectors near $\mathbf q^* = 2\pi\{h^*,h^*,0\}$ where $h^*\approx
3/4$. In the presence of such an unstable mode with amplitude $\phi$
superimposed on the red-and-green state, the free energy changes by
an amount given by
\begin{equation}
\label{eq-DF}
\Delta F = \Bigl(\Lambda^* S^2 + \sum_{mn}
G_{mn}\,\Delta_{nm}\Bigr)\phi^2,
\end{equation}
where the correlation function $G_{mn}$ is given by (\ref{eq-Gmn}),
and $\Delta_{mn}$ is the perturbation to the mean-field Hamiltonian
$\mathcal{H}_{\rm MF}$ caused by the unstable mode. In our case, the
real-space eigenvector of the unstable mode with $\mathbf q^*
=2\pi(h^*,h^*,0)$ is
\begin{equation}
\mathbf m_{n}(\mathbf r) = U^*_{n}\,
\bigl[\hat{\mathbf x}\cos(\mathbf q^*\cdot\mathbf r)
+\hat{\mathbf y}\sin(\mathbf q^*\cdot\mathbf r)\bigr],
\end{equation}
where the corresponding momentum-space eigenvector for $\mathbf q^*$
is
\begin{equation}
\label{eq-eigenU}
\mathbf U^*
= (\cos\theta,\,-\sin\theta,\,-\sin\theta,\,\cos\theta)/\sqrt{2},
\end{equation}
where $\theta \approx 0.27 \pi$ depends only weakly on $J_2$. We
write the energy of the unstable mode as $\Lambda^* =
-\gamma\,|J_2|$, where $\gamma \approx 0.2$ is a dimensionless
number. The change of free energy is then
\begin{equation}
\Delta F/\phi^2 = -\gamma|J_2| S^2 + \frac{J_1 T}{4 N'}
\sum_{\mathbf k} \frac{\Delta_\mathbf k}
{2 K_1\,k_{\perp}^2+(8|J_2|+K_2)k_z^2},
\end{equation}
where
\begin{equation}
\Delta_{\mathbf k} =\sum_{m,n} \Delta_{mn}
u^*_n(\mathbf k)u_m(\mathbf k)e^{i(\mathbf q_0+\mathbf k)
\cdot(\mathbf r_m-\mathbf r_n)}.
\end{equation}
Since $\Delta_{\mathbf k} = \Delta
+\mathcal{O}(k^2)$, we neglect the $\mathbf k$
dependence of $\Delta_{\mathbf k}$ in what follows as a
lowest-order approximation. With the aid of Eqs.~(\ref{eq:K1}) and
(\ref{eq:K2}), the integral evaluates to
\begin{equation}
\frac{\sqrt{2}\,\Delta}{16\pi}\,\sqrt{J_1 T}.
\end{equation}
The condition $\Delta F = 0$ thus gives an estimate of the stability
temperature
\begin{equation}
\label{eq-T-star}
T^*=\Bigl(\frac{16\pi \gamma S}{\sqrt{2}\Delta}\Bigr)^2
\,\frac{J_2^2}{J_1}.
\end{equation}
This expression overestimates (by a factor of about 10) the
stability temperature compared with numerical results. However, as
mentioned previously, the discrepancy is due to the fact that we
neglect contributions from the unstable modes themselves when
approaching the transition temperature. Those modes with wavevector
centered about the 12 unstable $\mathbf q^*=2\pi\{h^*,h^*,0\}$
become extremely soft as $T\to T^*$ and should be included in the
calculation in a self-consistent way. Nevertheless,
(\ref{eq-T-star}) provides an upper bound of the stability boundary
and gives a scaling relation consistent with the numerical data.
\section{Discussion}
\label{sec:discussion}
We have studied the classical Heisenberg antiferromagnet on the
pyrochlore lattice with first and second-neighbor exchange
interactions. Ferromagnetic second-neighbor exchange $J_2<0$ is
frustrated and lifts the vast degeneracy of the nearest-neighbor
model only partially, setting the stage for a nontrivial phase diagram
in the $(J_2, T)$ plane. We have used a combination of Monte Carlo
simulations and analytical calculations to characterize the phases
of this model. In our opinion, the low-temperature phase, discussed
previously by Tsuneishi {\it et al.},\cite{tsuneishi:2007jpcm} is
the incommensurate, and likely non-collinear, ordered phase
predicted earlier by Reimers {\it et al}.\cite{reimers:1991prb} A
full characterization of its magnetic order remains to be done, and
its fate in the presence of strong quantum fluctuations is an
interesting topic for future study.
Our simulations have uncovered the existence of another,
partially-ordered phase at intermediate temperatures for a weak
enough $|J_2|$. In the intermediate phase, the spins are on average
collinear, which is manifested by a nonzero nematic order parameter.
The order is fully characterized by a combination of a global
nematic director $\hat\mathbf n$ and a 3-state Potts variable
(color) on every tetrahedron indicating the location of frustrated
bonds (Fig.~\ref{fig:rgb}). The second-neighbor interaction $J_2<0$
acts like an antiferromagnetic Potts coupling forcing unlike colors
on neighboring tetrahedra.
The color structure of this phase resembles the ordered state with
broken sublattice symmetry (BSS) of the antiferromagnetic Potts
model: \cite{grest:1981prl} one sublattice of tetrahedra is
dominated by one color (say, blue) while the other exhibits a
mixture of the remaining two colors (red and green). However, unlike
in the BSS state, the two colors on the second sublattice are not
distributed in a completely random way: they form uniform layers in
the plane associated with the colors (in this case, $xy$). The
colors of individual layers appear to be random, hence
\textit{partial} order.
The partial order can be described by an individual $Z_2$ variable
$\sigma_i$ for each such layer---in addition to a global direction
of the spins $\hat\mathbf n$ and the color of the other sublattice.
States with different sets of $\{\sigma_i\}$ are local minima of the
free energy. Accessing one such minimum from another by means of a
uniform rotation of spins within one layer of tetrahedra requires
climbing over a free-energy barrier that grows as the number of
spins in that layer and thus becomes impossible in the thermodynamic
limit. A more plausible route to changing the color of a layer is by
nucleating a bubble of the opposite $\sigma_i$, which will grow if
the new state has a lower free energy once the bubble is large enough for the
gain in bulk energy to outweigh the cost in interface energy. Since the
distinct layered states are not related by symmetry, their free energies are
generally different and the nucleation route may well lead to a
selection within this set of states. Since such nucleation may involve
large energy barriers, it can be difficult to observe
\cite{leggett:1984prl,schiffer:1992prl}, and indeed we have not found it in our
simulations.
It is worth stressing that the ideal collinear states do not
minimize the exchange energy---either globally or locally. They owe
their stability to thermal fluctuations, which effectively
renormalize exchange couplings and turn these spin configurations
into minima of the free energy. As the temperature falls, the
couplings return to their bare values and the collinear states
become locally unstable at a temperature $T^* = \mathcal
O(J_2^2/J_1)$, in agreement with our Monte Carlo simulations. The
most unstable spin-wave mode has approximately the same wavenumber
as the low-temperature incommensurate magnetic order. The simulated
phase transition is strongly discontinuous.
Simulations on the high-temperature side show that the intermediate
phase persists up to a temperature $\mathcal O(J_2)$. A
discontinuous phase transition takes it into the paramagnetic phase.
The presence of strong local spin correlations in the paramagnetic
phase means that the effect of third-neighbor couplings $J_3$ (but
not of $J_3'$, see Fig.~\ref{fig:pyrochlore}) is equivalent, up to a
change of sign, to that of the second-neighbor coupling, at least to
the first order. Therefore we expect that the state of our system
depends on these couplings mostly through their difference
$J_2-J_3$. If correct, this observation would extend the results of
our study to a broader class of pyrochlore antiferromagnet with both
$J_2$ and $J_3$ present.
\section*{Acknowledgments}
It is a pleasure to thank J. Chalker and G. Jackeli for useful
discussions. This work was supported in part by the NSF under Grant
No. DMR-0348679 and by Research Corporation.
\section{Introduction}
Let $G$ be an abelian group and $A$ be a subset of $G$. We use
$S_A$ to denote the collection of all subset sums of $A$:
$$S_A:= \Bigl\{\sum_{a \in B} a \;\Big|\; \emptyset \neq B \subset A,\ |B| < \infty \Bigr\}. $$
\noindent We will keep this notation when $A$ is a sequence of (not
necessarily distinct) elements of $G$. In this case $S_A$ is the
collection of all subsequence sums of $A$. $Z_n$ denotes the
cyclic group of order $n$. \vskip2mm
{\it Example.} Take $G= Z_{11}$. If $A$ is the subset
$\{1,2,3 \}$, then $S_A=\{1,2,3,4,5,6\}$. If $A$ is the sequence
$\{1,1,3\}$, then $S_A= \{1,2,3,4,5 \}$. \vskip2mm
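The example is small enough to check by brute force; the following sketch enumerates the nonempty subsequence sums directly.

```python
from itertools import combinations

def subseq_sums(seq, n):
    """Sums of all nonempty subsequences of seq, reduced mod n."""
    out = set()
    for r in range(1, len(seq) + 1):
        for idx in combinations(range(len(seq)), r):
            out.add(sum(seq[i] for i in idx) % n)
    return out

print(sorted(subseq_sums((1, 2, 3), 11)))  # [1, 2, 3, 4, 5, 6]
print(sorted(subseq_sums((1, 1, 3), 11)))  # [1, 2, 3, 4, 5]
```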
Following Erd\H os \cite{erd}, we say that $A$ is
{\it complete} if $S_A=G$ and {\it incomplete } otherwise. If $G$
is finite, the {\it critical} number of $G$, $c(G)$, is the
smallest integer $m$ such that any subset $A \subset G \backslash
\{0\}$ with size $m$ is complete. This parameter has been studied
for a long time and its exact value is known for most groups.
\begin{theorem} \label{theorem:old} Let $G$ be a finite abelian
group of order $n=ph$, where $p$ is the smallest prime divisor of
$n$. Then the following holds:
\begin{itemize}
\item If $p=2$ and $h \ge 5$ or $G= Z_2 \oplus Z_2 \oplus Z_2$,
then $c(G)= h$. If $p=2$ and $h \le 4$ and $G \neq Z_2 \oplus Z_2
\oplus Z_2$, then $c(G)= h+1$. \item If $h$ is a prime, then
$p+h-2 \le c(G) \le p+h-1$. Furthermore, if $h=p \ge 3$ or $h \ge
2p+1$, then $c(G)= p+h-2$. \item If $p \ge 3$ and $h$ is
composite, then $c(G) = p+h-2$.
\end{itemize}
\end{theorem}
The first statement is due to Diderrich and Mann \cite{DM}. The
second combines results of Mann and Wou \cite{MW} (who studied
the case $h=p$) and Diderrich \cite{D1}. The last statement has
been known as Diderrich's conjecture, posed in \cite{D1}, and was
proved by Gao and Hamidoune \cite{GH} more than twenty years
later.
In this paper, we would like to study the following question
\vskip2mm \centerline{\it What is the structure of a relatively
large incomplete set?} \vskip2mm
Technically speaking, we would like to have a characterization for
incomplete sets of relatively large size. Such a characterization
has been obtained recently in \cite{GHLS} for sets of size at
least $n/(p+2)$. In this paper, we will be able to treat much
smaller sets. (In fact, our assumption on ``relatively large'' is
almost sharp; see Theorem \ref{theorem:dir2}.) The method used in
our proofs is different from those used in previous papers. As a
by-product, one obtains a new proof for a good portion of Theorem
\ref{theorem:old}, including a new proof for Didderich's
conjecture for large $n$ (see the remarks following Theorem
\ref{theorem:dir1}).
{\it Notation.} $<A>$ denotes the subgroup generated by $A$.
${\hbox{\bf E}}(X)$ denotes the expectation of a random variable $X$. All
logarithms have natural base, if not specified otherwise.
\section {The characterization of large incomplete sets}
Let us start with a simple fact, whose proof is left as an exercise.
\begin{fact} \label{fact1} Let $p$ be a prime and $A$ be a sequence of $p-1$
non-zero elements in $Z_p$. Then $S_A \cup \{0\} = Z_p$. On the
other hand, there is a sequence of $p-2$ non-zero elements of
$Z_p$ such that $S_A \cup \{0\} \neq Z_p$.
\end{fact}
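Fact \ref{fact1} is easy to confirm exhaustively for small primes; the sketch below checks every sequence of $p-1$ nonzero elements of $Z_p$ (up to reordering, which does not affect the subsequence sums), and exhibits the all-ones sequence of length $p-2$ as a witness for the second claim.

```python
from itertools import combinations_with_replacement

def subseq_sums_with_zero(seq, p):
    """All subsequence sums of seq mod p, including the empty sum 0."""
    sums = {0}
    for a in seq:
        sums |= {(s + a) % p for s in sums}
    return sums

def fact1_holds(p):
    """S_A u {0} = Z_p for every sequence of p - 1 nonzero elements of Z_p."""
    # order of a sequence is irrelevant for its subsequence sums: multisets suffice
    return all(
        subseq_sums_with_zero(seq, p) == set(range(p))
        for seq in combinations_with_replacement(range(1, p), p - 1)
    )

print([fact1_holds(p) for p in (3, 5, 7)])       # [True, True, True]
print(sorted(subseq_sums_with_zero((1, 1, 1), 5)))  # [0, 1, 2, 3]: p - 2 ones miss p - 1
```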
Let $G$ be an abelian group of size $n$ and $q$ be a prime divisor
of $n$. Let $H$ be a subgroup of size $n/q$. A direct corollary of
Fact \ref{fact1} is the following
\begin{fact} \label{fact1'} If $A$ is an incomplete subset of $G$ and
$S_{A \cap H}= H$, then $A \backslash H$
has at most $q-2$ elements. Consequently $A$ has at most $n/q
+q-2$ elements. \end{fact}
\begin{definition} \label{definition:nice} Let $G$ be an abelian group of size
$n$. A subset $A$ of $G$ is {\it nice} if there is a subgroup $H$
of $G$ such that $|G/H|$ is a prime and $S_{A \cap H} = H$.
\end{definition}
Given a subgroup $H$ in $G$ and an element $a \in G$, we use $a/H$
to represent the coset of $H$ which contains $a$. $a/H$ can be
viewed as an element of the quotient group $G/H$. If $B$ is a
subset of $G$, then $B/H :=\{b/H| b \in B\}$ is a sequence in
$G/H$.
\begin{fact} \label{fact2} If $A$ is a nice incomplete set in a
finite abelian group $G$ of size $n$, then $|A| \le \frac{n}{p} +
p-2 $, where $p$ is the smallest prime divisor of $n$.
Furthermore, $(A \backslash H) /H$ is an incomplete sequence in
$Z_q = G/H$.
\end{fact}
\begin{proof} (Proof of Fact \ref{fact2}) If $A$ is a nice incomplete set
then $|A| \le \frac{n}{q} + q-2 $, for some prime divisor $q$ of
$n$. On the other hand, it is easy to see that $\frac{n}{q} + q
\le \frac{n}{p} +p$, where $p$ is the smallest prime divisor of
$n$. \end{proof}
Our leading idea is that relatively large incomplete sets are
nice. A special case has been verified by Gao, Hamidoune,
Llad\'o and Serra \cite{GHLS}. Their theorem can be reformulated
in the current setting as follows
\begin{theorem} \label{theorem:GHLS} Let $G$ be an abelian group
of order $n=ph$, where $p \ge 5$ is the smallest prime divisor of
$n$ and $h \ge 15p$ is composite. Let $A$ be an incomplete subset of
at least $\frac{n}{p+2} +p$ elements. Then $A$ is nice.
Furthermore, there is a subgroup $H$ of size $n/p$ such that $S_{A
\cap H} = H$.
\end{theorem}
For any positive ${\epsilon} \le 1$, define
\begin{equation} \label{Cep} C({\epsilon}):= \sqrt{ \frac{40/{\epsilon}^2}{ \log
(2/{\epsilon})} }
\end{equation}
\noindent and let $n({\epsilon})$ be the smallest integer $m$ such that
for any $n \ge m$
\begin{equation} \label{Nep} n \ge C({\epsilon}) \sqrt {n \log n} >
\frac {4}{{\epsilon}^2}.
\end{equation}
\begin{remark} $n({\epsilon})$ is relatively small.
One can take, say, $n({\epsilon}) = 500/{\epsilon}^4$.
\end{remark}
Now we are ready to state our first theorem.
\begin{theorem} \label{theorem:dir1} Let $\delta$ be a positive
constant at most $1/6$ and $p_1 \le p_2 \le \dots \le p_t$, $t \ge
2$, be primes satisfying three conditions:
\begin{itemize}
\item $p_2 \ge 3$; \item $n:=\prod_{i=1}^t p_i \ge n(\delta) $;
\item $p_1 \le \frac{1}{3C(\delta)} \sqrt {n/ \log n}$.
\end{itemize}
Let $G$ be an abelian group of order $n$ and $A$ be an incomplete
subset of $G$ of size at least $(5/6 +\delta)\frac{n}{p_1}$. Then
$A$ is nice and there is a subgroup $H$ such that $n/|H|$ is one
of the $p_i$, $|A \backslash H| < 3p_1$ and $S_{A \cap H} = H$.
\end{theorem}
\begin{remark} Let us have a few comments on this theorem.
\begin{itemize}
\item Using Theorem \ref{theorem:dir1} and Fact \ref{fact1'}, we
can recover a large portion of Theorem \ref{theorem:old}. To see
this, consider an incomplete set $A$ which does not contain zero.
If $|A| \le n/p_1$, there is nothing to prove. If $|A| \ge n/p_1$,
and $n=|G|$ satisfies the assumptions of Theorem
\ref{theorem:dir1}, apply this theorem to obtain the subgroup $H$.
As $A$ does not contain zero,
$|A \cap H| \le |H|-1$. By Fact
\ref{fact1'}, $|A| \le n/q +q-3$, where $q= n/|H|$. But $n/q +q
\le n/p_1 +p_1$, so $|A| \le n/p_1+p_1 -3$.
\item The third assumption $p_1 \le \frac{1}{3C(\delta)} \sqrt {n/ \log n}$
in Theorem \ref{theorem:dir1} can be dropped if we assume $t \ge 3$
(i.e., $n/p_1$ is composite) and $n$ sufficiently large. In that
case $p_1 \le n^{1/3} \ll \sqrt{n /\log n}$. It follows that the
assumptions of Theorem \ref{theorem:dir1} are satisfied whenever
$p_1 \ge 3$, $h$ is composite and $n$ is sufficiently large. Thus,
we have a new proof of Diderrich's conjecture for sufficiently large
$n$. \item Unlike Theorem \ref{theorem:GHLS}, one cannot conclude
that $H$ has size $n/p_1$. It is easy to give examples where
$|G|/|H|$ can be any of the $p_i$.
\end{itemize}
\end{remark}
The next question is to find the best lower bound on $|A|$ that
guarantees niceness. Our second theorem gives an almost complete
answer for this question.
\begin{theorem} \label{theorem:dir2} For any positive constant $\delta$
there is a positive constant $D(\delta)$ such that the following
holds. Let $p_1 \le p_2 \le \dots \le p_t$, $t \ge 3$, be primes such
that $p_1 p_2\le \frac{1}{D(\delta)} \sqrt {n/ \log n}$, where
$n:= \prod_{i=1}^t p_i$. Let $G$ be an abelian group of order $n$
and $A$ be an incomplete subset of $G$ with cardinality at least
$(1+ \delta)\frac{n}{p_1 p_2}$. Then $A$ is nice. Furthermore, the
lower bound $(1+ \delta)\frac{n}{p_1 p_2}$ cannot be replaced by
$\frac{n}{p_1 p_2} + n^{1/4-\alpha}$, for any constant $\alpha > 0$.
\end{theorem}
Finally, let us discuss the case when $G=Z_n$, where $n$ is a
prime. This case has not been covered by the results presented so
far. Olson \cite{O2}, improving upon a result of Erd\H os and
Heilbronn \cite{EH}, shows that $c(Z_n) \le \sqrt{4n-3} +1$. His
bound was improved by da Silva and Hamidoune \cite{daH} to
$\sqrt {4n-7}$. As far as characterization results are concerned,
we know of the following two results.
\begin{theorem} \label{theorem:DF}
Let $n$ be a prime and $A$ be an incomplete subset of ${\mathbf Z}_n$ of
size at least $(2n)^{1/2}$. Then there is some non-zero element $b
\in {\mathbf Z}_n$ such that
$$\sum_{a \in b A} \| a\| \le n +O(n^{3/4} \log n), $$
where $\|x\|$ denotes the distance from $x$ to $0$ in ${\mathbf Z}_n$,
i.e., $\min(\bar x,\, n-\bar x)$ for the representative $\bar x \in [0,n)$ of $x$.
\end{theorem}
\begin{theorem} \label{theorem:NSV} Let $n$ be a prime and $A$ be an incomplete
subset of ${\mathbf Z}_n$ of size at least $1.99 n^{1/2}$. Then there is
some non-zero element $b \in {\mathbf Z}_n$ such that
$$\sum_{a \in b A} \| a\| \le n +O(n^{1/2}). $$
\end{theorem}
Theorem \ref{theorem:DF} is due to Deshouillers and Freiman
\cite{DF}. Theorem \ref{theorem:NSV} is due to Nguyen, Szemer\'edi
and Vu \cite{NSV}. The error term $O(n^{1/2})$ in Theorem
\ref{theorem:NSV} is best possible, as shown by a construction in \cite{Des2}.
The rest of the paper is organized as follows. Section 3 contains
the main lemma to the proofs, which states that if $A$ is
sufficiently large, then $S_A$ contains a subgroup of size
comparable to $|A|$. The proofs of the theorems come in Sections 4
and 5. Section 6 is devoted to concluding remarks.
\section {The existence of a large subgroup in $S_A$}
Our key tool is the following statement, which asserts that if $A$
is a sufficiently large subset of $G$, then $S_A$ contains a large
subgroup of $G$. Recall the definition of $C({\epsilon})$ and $n({\epsilon})$
from \eqref{Cep} and \eqref{Nep}.
\begin{theorem} \label{theorem:subgroup} Let $0 < {\epsilon} < 1$ be a
constant and $G$ be an abelian group of size $n$, where $n \ge
\max \{ \frac{4}{{\epsilon}^2}, C({\epsilon})\sqrt { n \log n } \}$. Let $A$ be
a subset of $G$ with at least $ \max \{ \frac{4}{{\epsilon}^2},
C({\epsilon})\sqrt { n \log n } \}$ elements. Then $S_A$ contains a
subgroup of size at least $(1-{\epsilon})|A|$.
\end{theorem}
\begin{remark} The bound $(1-{\epsilon})|A|$ is asymptotically sharp,
as $A$ itself can be a subgroup. The lower bound $C({\epsilon}) \sqrt {n
\log n}$ for $|A|$ is sharp up to the logarithmic term. To see
this, consider $G= Z_{p^2} $ and $A=\{0,1, \dots, p \}$. It is
clear that $|A| > p = \sqrt {|G|}$. On the other hand, $S_A$ does
not contain any nontrivial proper subgroup of $G$. It is interesting to see
\end{remark}
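The sharpness example in this remark can be verified directly for small $p$: in $G = Z_{p^2}$ with $A = \{0,1,\dots,p\}$, the only nontrivial proper subgroup is $pZ_{p^2}$, and its element $(p-1)p$ exceeds the largest attainable sum $p(p+1)/2$ once $p > 3$. A minimal sketch:

```python
def subset_sums_with_zero(A, n):
    """All subset sums of A mod n, including the empty sum 0."""
    sums = {0}
    for a in A:
        sums |= {(s + a) % n for s in sums}
    return sums

def contains_nontrivial_subgroup(p):
    """Does S_A contain the subgroup p * Z_{p^2} for A = {0, ..., p}?"""
    n = p * p
    S = subset_sums_with_zero(range(p + 1), n)
    return set(range(0, n, p)) <= S

print([contains_nontrivial_subgroup(p) for p in (5, 7, 11, 13)])
# [False, False, False, False]
```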
\begin{remark}
The theorem also holds for non-abelian groups; see Theorem
\ref{theorem:subgroup1} at the end of this section.
\end{remark}
By definition of $n({\epsilon})$, if $n \ge n({\epsilon})$ then
$$n > C({\epsilon}) \sqrt { n \log n } >
\frac{4}{{\epsilon}^2} . $$ \noindent In this case we have the following
corollary, which is easier to use.
\begin{corollary} \label{cor:subgroup}
Let $0 < {\epsilon} < 1$ be a
constant and $G$ be an abelian group of size $n \ge n({\epsilon})$. Let
$A$ be a subset of $G$ with at least $C({\epsilon}) \sqrt { n \log n } $
elements. Then $S_A$ contains a subgroup of size at least
$(1-{\epsilon})|A|$.
\end{corollary}
To prove Theorem \ref{theorem:subgroup}, we use the following
result of Olson \cite{O1} (see also \cite[Chapter 12]{TVbook}).
For a set $A$ and a positive integer $l$, define
$$lA:=\{a_1 + \dots + a_l \mid a_i \in A \}. $$
\noindent Also recall that $<A>$ denotes the subgroup generated by
$A$.
\begin{theorem} \label{theorem:olson} Let $G$ be a finite abelian group, $l$
be a positive integer and $0 \in A$ be a finite subset of $G$.
Then either $lA=<A>$ or $|lA| \ge |A| + (l-1) (\frac{|A|}{2} +1)$.
\end{theorem}
Since $|A| + (l-1) (\frac{|A|}{2} +1) \ge (l+1)|A|/2$, the
following corollary is immediate.
\begin{corollary} Let $G$ be a finite abelian group, $l$
be a positive integer and $0 \in A$ be a finite subset of $G$ such
that $(l+1)|A| \ge 2|G|$. Then $$lA = <A>. $$
\end{corollary}
We also need the following result of Olson \cite{O1}, which
refines an earlier result of Szemer\'edi \cite{Sz} (Szemer\'edi
proved the theorem for an unspecified constant instead of 3).
\begin{theorem} \label{theorem:szemeredi} Let $G$ be an abelian
group of order $n$ and $A$ be a subset of at least $3 \sqrt n$ elements. Then
$0\in S_A$.
\end{theorem}
We also need the following simple lemma:
\begin{lemma} \label{lemma:cor:1} Let $G$ be an abelian group and $A$ be a subset of
$G$. Let $l$ be a positive integer and $S$ a subset of $G$ such
that every element of $S$ can be represented as the sum of two
different elements of $A$ in at least $2l-1$ ways (not counting
permutations). Then $l S \subset S_A$.
\end{lemma}
\begin{proof} (Proof of Lemma \ref{lemma:cor:1})
Let $x_1, \dots x_l$ be (not necessarily different) elements of
$S$. We represent $x_1 +\dots + x_l$ as a sum of different
elements of $A$ using the greedy algorithm. To start, represent
$x_1 =a_1 + a_1'$ where $a_1 \neq a_1'$ are different elements of
$A$. Assume that we have represented $x_1= a_1 +a_1', \dots, x_i =
a_i +a_i'$, where $1\le i < l$ and $a_1,a_1', \dots, a_i, a_i'$
are all different. Now look at $x_{i+1}$. Each of the $2i$
elements $a_1,a_1', \dots, a_i, a_i'$ appear in at most one
representation of $x_{i+1}$. Since $x_{i+1}$ has at least $2l-1 >
2i$ representations, we can find a representation $x_{i+1} =
a_{i+1}+ a_{i+1}'$ where both $a_{i+1}$ and $a_{i+1}'$ are
different from $a_1,a_1', \dots, a_i, a_i'$. This concludes the
proof.
\end{proof}
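The greedy argument in this proof is easy to animate in code. In the sketch below, the set $A$, the modulus, and the value of $l$ are arbitrary illustrative choices; the set $S$ is built, as in the lemma, from the elements having at least $2l-1$ representations as a sum of two distinct elements of $A$.

```python
import itertools
import random

def greedy_represent(xs, reps):
    """Greedily write each x in xs as a + a' with all summands distinct.

    reps[x] lists the unordered pairs (a, a'), a != a', with a + a' = x.
    """
    used, parts = set(), []
    for x in xs:
        for a, b in reps[x]:
            if a not in used and b not in used:
                used.update((a, b))
                parts.append((a, b))
                break
        else:
            return None  # cannot happen when every x has >= 2*len(xs) - 1 pairs
    return parts

n, l = 101, 4                        # illustrative modulus and number of summands
A = range(1, 41)                     # an arbitrary subset of Z_n
reps = {}
for a, b in itertools.combinations(A, 2):
    reps.setdefault((a + b) % n, []).append((a, b))
S = sorted(x for x, r in reps.items() if len(r) >= 2 * l - 1)

rng = random.Random(0)
for _ in range(100):
    xs = [rng.choice(S) for _ in range(l)]
    parts = greedy_represent(xs, reps)
    assert parts is not None
    summands = [a for pair in parts for a in pair]
    assert len(summands) == len(set(summands))   # all summands distinct
    assert sum(summands) % n == sum(xs) % n      # the sums agree in Z_n
```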
\begin{proof} (Proof of Theorem \ref{theorem:subgroup})
For each element $x \in G$, let
$m_x$ be the number of ways to represent $x$ as the sum of two
different elements of $A$ (not counting permutations). A double
counting argument gives
\begin{equation} \label{equ:cor:1-1} \sum_{x \in G} m_x = {|A| \choose 2}. \end{equation}
Notice that $m_x$ is at most $M:=|A|/2$. Set $K:= 2/{\epsilon}$.
Let $S_j$ be the collection of those
$x$ where $ K^{-j} M < m_x \le K^{-j+1} M$ for $j=1, \dots, j_0$,
where $j_0$ is the largest integer such that $K^{-j_0} M \ge 1$.
Let $S_{j_0+1}$ be the collection of those $x$ where $1 \le m_x
\le K^{-j_0} M$. By the definition of $S_j$
\begin{equation} \label{equ:cor:1-2} \sum_{j=1}^{j_0+1} K^{-j+1} M |S_j |
\ge \sum_{x \in G} m_x,
\end{equation}
\noindent which, together with \eqref{equ:cor:1-1} imply
\begin{equation} \label{equ:cor:1-3} \sum_{j=1}^{j_0+1} K^{-j+1} M |S_j |
\ge {|A| \choose 2 }
\end{equation}
Call a set $S_j$ {\it small} ($j=1, \dots, j_0+1$) if it has at
most $(1-{\epsilon})|A|$ elements and {\it large} otherwise. The
contribution from the small $S_j$ on the left hand side is at most
$$\sum_{j=1}^{j_0+1} K^{-j+1} M (1-{\epsilon})|A| \le \frac{K}{K-1} M(1-{\epsilon})|A|
= (1- \frac{{\epsilon}}{2-{\epsilon}}) \frac{|A|^2}{2} $$ \noindent taking into
account that $K= 2/{\epsilon}$ and $M=|A|/2$. Since $|A| \ge
\frac{4}{{\epsilon}^2}$, we have
$$ (1- \frac{{\epsilon}}{2-{\epsilon}}) \frac{|A|^2}{2} \le (1 -{\epsilon}/2)
\frac{|A|^2} {2} - \frac{|A|}{2}. $$ \noindent From this and
\eqref{equ:cor:1-3}, we have
\begin{equation} \label{equ:cor:1-4}
\sum_{S_j \,\, \hbox{large}} K^{-j+1} M |S_j | \ge {|A| \choose
2} - (1-{\epsilon}/2) \frac{|A|^2}{2} + \frac{|A|}{2} = {\epsilon}
\frac{|A|^2}{4}.
\end{equation}
\noindent The bound $|A| \ge C({\epsilon}) \sqrt {n \log n} $ guarantees
that
\begin{equation} \label{equ:cor:1-5} {\epsilon} \frac{|A|^2}{4} \ge 5 K n \log_{2/{\epsilon}} n.
\end{equation}
(In fact, $C({\epsilon})$ is defined so that this inequality holds.) Set
$l_j:= K^{-j} M$. Since the number of large $S_j$ is at most
$j_0+1 \le \lfloor \log_{2/{\epsilon}} |A|/2\rfloor +1$,
\eqref{equ:cor:1-4}, \eqref{equ:cor:1-5} and the pigeonhole
principle imply that there is a large $S_j$ such that
$$ l_j |S_j| \ge 4 n . $$
Notice that $|S_j| \le |G|= n$. It follows that $ (\lfloor l_j/2
\rfloor +1) |S_j| \ge 2n$. Applying the corollary of Theorem
\ref{theorem:olson} with $l := \lfloor l_j/2 \rfloor$ and $S:= S_j \cup \{0\}$,
we conclude that $lS =<S>$. On the other hand, by the definition of
$S$
$$lS= \cup_{i=1}^l i S_j \cup \{0\}. $$
\noindent By Lemma \ref{lemma:cor:1},
$$i S_j \subset S_A, $$
\noindent for all $1\le i \le l$. Finally, $0 \in S_A$ by Theorem \ref{theorem:szemeredi}.
Thus $S_A$ contains $<S>$, which has at least $(1-{\epsilon})|A|$ elements since $S_j$
is large and $|<S> | \ge |S| \ge |S_j|$. This concludes the proof.
\end{proof}
All the tools used in the proof (Theorems \ref{theorem:olson} and
\ref{theorem:szemeredi}, Lemma \ref{lemma:cor:1}) hold for
non-abelian groups. Thus, Theorem \ref{theorem:subgroup} also
holds for this case. The proof requires only two simple
modifications. First, in Lemma \ref{lemma:cor:1}, $2l-1$ is
replaced by $4l-3$. The reason is that in the proof, each of the
elements $a_1, a_1', \dots, a_i, a_i'$ can now appear in at most 2
representations of $x_{i+1}$. The second is that in the proof of
Theorem \ref{theorem:subgroup}, we need to fix an ordering on the
elements of $G$ and when we consider a sum $x+y$, we always assume
that $x$ precedes $y$ in this ordering. The rest of the proof
remains the same.
\begin{theorem} \label{theorem:subgroup1} For any constant $0 < {\epsilon} < 1$
there are constants $n_1({\epsilon})$ and $C_1({\epsilon})$ such that the
following holds. Let $G$ be a group of size $n$, where $n \ge
n_1({\epsilon})$. Let $A$ be a subset of $G$ with at least $C_1({\epsilon})\sqrt
{ n \log n }$ elements. Then $S_A$ contains a subgroup of size
at least $(1-{\epsilon})|A|$.
\end{theorem}
The values of $n_1({\epsilon})$ and $C_1 ({\epsilon})$ might be slightly
different from those of $n({\epsilon})$ and $C({\epsilon})$, due to the
modifications.
\section{Proof of Theorem \ref{theorem:dir1}}
\begin{lemma} \label{lemma:dir2} Let $G$ be a finite additive
group and $A$ be a subset of $G$ with cardinality at least
$\lfloor |G|/2 \rfloor +2$. Then $S_A= G$.
\end{lemma}
\begin{proof} (Proof of Lemma \ref{lemma:dir2}) Let $x$ be an
arbitrary element of $G$. There are exactly $\lfloor |G|/2
\rfloor$ (unordered) pairs $(a,b)$ of different elements of $G$
such that $a+b=x$. The claim follows by the pigeonhole
principle. One can improve the bound slightly but from our point
of view it is not important.
\end{proof}
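For cyclic groups of small order, Lemma \ref{lemma:dir2} can also be confirmed by exhaustive search; the following minimal sketch checks every subset of the stated size (the range of orders tested is an arbitrary small sample).

```python
from itertools import combinations

def nonempty_subset_sums(A, n):
    """Sums of all nonempty subsets of A, reduced mod n."""
    sums = set()
    for r in range(1, len(A) + 1):
        for B in combinations(A, r):
            sums.add(sum(B) % n)
    return sums

def lemma_holds(n):
    """Every subset of Z_n of size floor(n/2) + 2 is complete."""
    k = n // 2 + 2
    return all(
        nonempty_subset_sums(A, n) == set(range(n))
        for A in combinations(range(n), k)
    )

print(all(lemma_holds(n) for n in range(5, 11)))  # True
```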
\begin{proof} (Proof of Theorem \ref{theorem:dir1})
Let $A_1$ be an arbitrary subset of $A$ with cardinality $(1+ 2
\delta)\frac{n}{3p_1}$. By the upper bound on $p_1$, we can
assume that $|A_1| \ge C(\delta) \sqrt {n \log n}$, which enables
us to apply Corollary \ref{cor:subgroup} to $A_1$ and obtain a
subgroup $H \subset S_{A_1}$ where
$$|H| \ge (1-\delta)|A_1| = (1-\delta) (1+ 2\delta) \frac{n}{3p_1} >
\frac{n}{3p_1}. $$ The assumption $p_2 \ge 3$ shows that $|H| >
\frac{n}{p_1p_2}$. It follows that $|H| = n/q$ where $q$ is one
of the $p_i$ ( $1\le i \le t$). Furthermore,
$$q < 3 p_1 .$$
Consider the sequence $B:= \{a/H | a \in
A \backslash A_1 \}$ in the quotient group $ G/H=Z_q$. If $B$ has
at least $q-1$ non-zero elements, then by Fact \ref{fact1} $S_B$
contains $Z_q \backslash \{0\}$, which implies that
$$G \subset S_{A_1} + S_{A\backslash A_1} \subset S_A $$
\noindent a contradiction as $A$ is incomplete. Thus, $B$ has at
most $q-2$ non-zero elements. So we can conclude that all but at
most $q-2$ elements of $A \backslash A_1$ lie in $H$. Let $A_2$
denote the set of these elements. We have
$$|A_2| > |A \backslash A_1| -(q-2) \ge |A|-|A_1| - 3p_1 +2 \ge
\Big(\frac{5}{6} +\delta - (1+ 2\delta) \frac{1}{3} \Big)
\frac{n}{p_1} - 3p_1 +2. $$ The rightmost formula equals
$$ \Bigl(\frac{1}{2} + \frac{1}{3} \delta\Bigr) \frac{n}{p_1} -
3p_1 +2 \ge \frac{n}{2p_1} +2,
$$
\noindent since $\frac{1}{3} \delta \frac{n}{p_1} \ge 3p_1$ by the
assumption $p_1 \le \frac{1}{3C(\delta)} \sqrt {n / \log n}$ and
the definition of $C(\delta)$.
On the other hand, $|H|$ is at most $ \frac{n}{p_1}$. Thus,
$|A_2| \ge |H|/2 +2$ and so by Lemma \ref{lemma:dir2},
$S_{A_2}=H$. Notice that $A_2 \subset H \cap A$. Thus $S_{A \cap
H} = H$ which means that $A$ is nice, completing the proof.
\end{proof}
\section {Proof of Theorem \ref{theorem:dir2} }
Without loss of generality, we can
assume that $\delta \le 1/2$ and $A$ has exactly $(1+\delta) \frac{n}{p_1p_2}$ elements.
Let $A_1$ be a
subset of $A$ of size $(1+\delta/2) \frac{n}{p_1p_2}$. Setting
$D(\delta)$ sufficiently large, one can assume that $n$ is
sufficiently large and $|A_1| \ge C(\delta/4) \sqrt {n \log n}$
(where $C$ is defined as in \eqref{Cep}), thanks to the assumption
$$p_1p_2 \le \frac{1}{D(\delta)} \sqrt{n /\log n}. $$
\noindent This enables us to apply Corollary \ref{cor:subgroup}
to $A_1$ and conclude that $S_{A_1}$ contains a subgroup $H$ of
size at least
$$(1-\delta/4) |A_1| = (1-\delta/4) (1+\delta/2)\frac{n}{p_1p_2} >
\frac{n}{p_1p_2}.
$$
The critical point here is that $|H|$ is larger than
$\frac{n}{p_1p_2}$. This forces $|H| = n/q$ where $q$ is one of
the primes $p_i$. It would be easy to finish the proof now if
$A$ had at least $(2+\delta) \frac{n}{p_1p_2}$ (instead of only
$(1+\delta) \frac{n}{p_1p_2}$) elements. The reason is that in
this case we still have $(1+\delta/2)\frac{n}{p_1p_2}$ elements
outside $A_1$ to play with. Arguing as in the previous proof, we
can show that most of these elements should be in $H$ and span it.
As we lack these extra elements, we need an additional trick that
helps us to show that actually most elements of $A_1$ are already
in $H$. The heart of this trick is Lemma \ref{lemma:random} below.
Before presenting the lemma, let us make some observations. Set
$A_2 := A \backslash A_1$. As $A_1$ was chosen arbitrarily, $A_2$
is an arbitrary subset of $A$ with $\frac{\delta n}{2 p_1p_2}$
elements. Since $A$ is incomplete, $|A_2 \backslash H| \le q-2$,
where $q=|G|/|H| \le p_1p_2$. By setting $D(\delta)$ sufficiently
large, we can assume
$$p_1p_2 \le \frac{\delta^2}{20} \frac{n}{p_1p_2} =
\frac{\delta}{10} |A_2| $$
\noindent which implies
$$|A_2 \cap H| \ge |A_2| - p_1p_2 \ge (1-\delta/10) |A_2|. $$
To summarize, $A$ has the property that for any subset $A_2$ of
size $\frac{\delta n}{2 p_1p_2} = \frac{\delta}{2(1+\delta)} |A|$,
there is a maximal subgroup $H$ of $G$ such that $|A_2 \cap H| \ge
(1-\delta/10) |A_2|$.
\begin{lemma} \label{lemma:random}
Let $S$ be a subset of $G$ of size $(1+\delta) \frac{n}{p_1p_2}$
such that no maximal subgroup of $G$ contains $(1-\delta/2)$
fraction of $S$. Then there is a subset $S' \subset S$ of size
$\frac{\delta}{2(1+\delta)} |S|$ such that no maximal subgroup of
$G$ contains $(1-\delta/10 )$ fraction of $S'$.
\end{lemma}
Assuming the lemma for a moment, we can conclude the proof as
follows. By the lemma and its preceding paragraph, there is a
maximal subgroup $H$ such that
$$|H \cap A| \ge (1- \delta/2) |A| = (1-\delta/2)(1+\delta) \frac{n}{p_1p_2} \ge
(1+\delta/4) \frac{n}{p_1p_2} $$ as $\delta \le 1/2$. Since $|H|
\le n/p_1$ and the smallest prime divisor $p'$ of $|H|$ is either
$p_1$ or $p_2$, it is easy to verify that
$$|H \cap A| \ge \frac{|H|} {p'} + p' . $$
\noindent Thus we can apply Theorem \ref{theorem:old} or Theorem
\ref{theorem:dir1} for $H$ and $A \cap H$ to deduce that $A \cap
H$ is complete in $H$. Therefore, $S_{A \cap H} = H$ and $A$ is
nice.
Now we prove Lemma \ref{lemma:random}, using a probabilistic
argument.
\begin{proof} (Proof of Lemma \ref{lemma:random}) Set $s:=
|S|= (1+\delta) \frac{n}{p_1p_2}$ and ${\epsilon}:= \delta/10$. Consider
a random subset $S_1$ of $S$ obtained by selecting each element
$a \in S$ to be in $S_1$ with probability $\rho:= (1+2{\epsilon})
\frac{\delta}{2(1+\delta)}$, independently. Let $H$ be a
maximal subgroup of $G$. By linearity of expectation and the assumption of
the lemma, we have
$${\hbox{\bf E}} (|H \cap S_1|) = \rho |H \cap S| \le \rho (1-\delta/2) s = \rho (1-5{\epsilon}) s. $$
\noindent On the other hand, ${\hbox{\bf E}}(|S_1| ) = \rho s$. Both $H \cap
S_1$ and $S_1$ have binomial distribution. By property of the
binomial distribution, there is a positive constant $c_0$
depending only on ${\epsilon}$ such that with probability at least $1-
\exp(-c_0\rho s)$
\begin{equation} \label{S1-1} (1-{\epsilon}) \rho s \le |S_1| \le (1+{\epsilon}) \rho s,
\end{equation} \noindent and
\begin{equation} \label{S1-2} |H \cap S_1| \le (1+{\epsilon}) \rho (1-5{\epsilon}) s.
\end{equation}
\noindent It is well known (and easy to prove) that the number
of maximal subgroups of $G$ is at most $|G|= n$. If $D(\delta)$
(and so $n$) is sufficiently large, then
$$2n \le \exp( c_0\rho s ). $$
\noindent Thus, we can use the union bound to conclude that there exists a set
$S_1$ such that \eqref{S1-1} holds and \eqref{S1-2} holds
simultaneously for every maximal subgroup $H$. Let $S'$ be any
subset of $S_1$ of size $\frac{\delta n}{2 p_1p_2}=
\frac{1}{1+2{\epsilon}} \rho s$. For any maximal subgroup $H$
$$|S' \cap H|/ |S'| \le |S_1 \cap H| /|S'| \le \frac{(1+{\epsilon})(1-5{\epsilon})\rho
s}{\frac{1}{1+2{\epsilon}} \rho s} < (1-{\epsilon}) =(1-\delta/10). $$
\noindent This concludes the proof of the lemma. \end{proof}
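As a quick numerical illustration of the concentration step used above (ours, not part of the proof), one can sample random subsets and check that $|S_1|$ almost always falls within $(1\pm{\epsilon})\rho s$; the parameters below are arbitrary choices.

```python
import random

random.seed(0)

# Each of s elements is kept independently with probability rho;
# |S_1| is Binomial(s, rho) and concentrates sharply around rho * s.
s, rho, eps = 10_000, 0.3, 0.1
trials = 200

within = 0
for _ in range(trials):
    size = sum(random.random() < rho for _ in range(s))
    if (1 - eps) * rho * s <= size <= (1 + eps) * rho * s:
        within += 1

print(within / trials)  # a fraction very close to 1.0
```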
The following example shows that the lower bound $(1+\delta)
\frac{n}{p_1p_2}$ cannot be reduced to $\frac{n}{p_1p_2} +
n^{1/4-\alpha}$, for any fixed $\alpha > 0$.
{\it Example.} Take $n:= p^2 q$ where $1 < p < q $ are large
primes. Consider $G= Z_{p^2} \oplus Z_q$. Given any $\delta >0$
and any function $D(\delta)$, by choosing $p$ and $q$ properly we can
guarantee that
$$ n^{1/2-\alpha} \le p^2 \le \frac{1}{D(\delta)} \sqrt {n/ \log n}. $$
We write an element $a \in G$ as $a= (x,y)$
where $x \in Z_{p^2} $ and $y \in Z_q$. Let $m$ be the largest
integer such that $\sum_{i=0}^m i < p^2-1$. Set
$$A:= \{ (x, 0) | 0 \le x \le m \} \cup \{ (0, y)|
0\le y \le q -1 \} .$$ It is easy to show that $A$ is incomplete
and not nice, thanks to the fact that $\sum_{i=0}^m i < p^2 -1$.
On the other hand,
$$|A| = m + q = m + \frac{n}{p^2} = m + \frac{n}{p_1p_2} \ge n^{1/4 -\alpha}
+ \frac{n}{p_1p_2} . $$
The proof of the theorem is complete.
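As a sanity check (ours, not part of the proof), the construction can be verified by brute force for a small instance: the subset sums of $A$ never reach the element $(p^2-1, 0)$, so $A$ is incomplete.

```python
def subset_sums(elems, mod):
    """All subset sums (including the empty sum) of `elems`
    in Z_{mod[0]} x Z_{mod[1]}, by dynamic programming."""
    sums = {(0, 0)}
    for (x, y) in elems:
        sums |= {((a + x) % mod[0], (b + y) % mod[1]) for (a, b) in sums}
    return sums

# Small instance of the construction: n = p^2 * q with p = 3, q = 5.
p, q = 3, 5
mod = (p * p, q)

# m is the largest integer with 0 + 1 + ... + m < p^2 - 1.
m = 0
while sum(range(m + 2)) < p * p - 1:
    m += 1

A = {(x, 0) for x in range(m + 1)} | {(0, y) for y in range(q)}
S_A = subset_sums(A, mod)

# Sums of first coordinates never exceed 1 + ... + m < p^2 - 1,
# so (p^2 - 1, 0) is never reached and A is incomplete.
print(len(S_A), (p * p - 1, 0) in S_A)  # -> 35 False
```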
\section{Concluding remarks}
One can use the additional trick in the proof of Theorem
\ref{theorem:dir2} to improve upon the constant $(5/6+\delta)$ in
Theorem \ref{theorem:dir1}. However, this requires some
modification on the assumptions. We prefer to present Theorem
\ref{theorem:dir1} in the simplest way in order to illustrate the
ideas.
One can also use the method presented here to study incomplete
sets with size less than $\frac{n}{p_1p_2}$. However, the
characterization obtained in this case is more technical and less
appealing.
\section{Introduction}
Recent rapid advances in image manipulation tools and deep image synthesis techniques have made generating fake videos easy. In addition, with the spread of SNS, the existence of fake videos has become a major threat to the credibility of the international community. Accordingly, detecting tampered videos/images has become an urgent issue \cite{verdoliva2020media}.
In particular, fake videos can easily be produced by manipulating the relationships among frames, such as the insertion, deletion, and permutation of video frames, called temporal operations. However, most forgery detection methods are not useful for such temporal operations. In addition, they are not robust enough against various types of benign, content-preserving transforms such as resizing and compression. Since most SNSs are known to carry out such transforms, conventional methods are not effective in such cloud environments~\cite{chuman2019image, chuman2017image}. To overcome this issue, various methods robust against these operations have been studied for still images~\cite{tanaka2021detection, iida2020privacy, arnia2006fast}. Accordingly, we propose a novel method for robustly detecting temporally operated videos using a robust hashing algorithm. The proposed method allows us not only to detect temporally operated videos with high accuracy but also to reduce the amount of hash values by synthesizing an ``extended frame'' from multiple frames.
\section{Related Works}
\subsection{Robust Hashing}
Most hashing methods such as secure hash algorithms (SHA) generally output significantly different hash values from slightly different input data sets. In contrast, robust hashing methods are designed to output similar hash values from similar input data sets. Accordingly, hash values are robust against input data including distortion caused with compression and image resizing, so robust hashing has been used as a method for image retrieval.
Hash values for fake-image detection are required to be robust against a number of types of image operation, such as image compression and resizing, since such operations do not alter the content of images, although they may reduce image quality. Therefore, we focus on using a robust hash method that aims to robustly retrieve images similar to query images. In contrast, hash values generated by a robust hash method have to be sensitive to the manipulations used for generating tampered images, such as copy-move and GANs.
Under these requirements, various robust hashing methods~\cite{li2015robust,kozat2004robust,gong2012iterative,venkatesan2000robust, iida2019robust, itagaki2021robust, du2020image} have been compared in terms of sensitivity and robustness, and Li~\textit{et al}.'s method~\cite{li2015robust} was demonstrated to have a suitable performance for tampered image detection.
In this paper, we also use Li~\textit{et al}.'s robust hashing method~\cite{li2015robust}, which was confirmed to have high performance for fake-image detection in~\cite{tanaka2021detection}. The method can capture both spatial and chromatic features by using quaternions. The method is carried out as follows:
\begin{enumerate}
\item Apply a Gaussian filter with a kernel size of $5 \times 5$ and a standard deviation of 1 to the image and then resize it to $128 \times 128$ pixels.
\item Extract spatial and chromatic features from the preprocessed image using quaternions.
\item Select some features from them using a feature selection algorithm.
\item Generate a binary hash value of 120 bits in length.
\end{enumerate}
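For illustration only, the following toy sketch mimics the interface that such a robust hash provides (image in, fixed-length bit vector out). It is \emph{not} Li~\textit{et al}.'s quaternion-based method: it simply block-averages each channel and binarizes at the median, with no feature selection, and its hash length differs from the 120 bits above.

```python
import numpy as np

def toy_robust_hash(img):
    """Toy perceptual hash illustrating the robust-hash interface.
    (NOT Li et al.'s quaternion method: block-average each channel
    to an 8x8 grid, then binarize against the per-channel median.)"""
    h, w, c = img.shape
    bh, bw = h // 8, w // 8
    blocks = img[:bh * 8, :bw * 8].reshape(8, bh, 8, bw, c).mean(axis=(1, 3))
    bits = (blocks > np.median(blocks, axis=(0, 1))).astype(np.uint8)
    return bits.flatten()  # 8 * 8 * c bits

rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
# Mild additive noise stands in for a content-preserving transform.
noisy = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)

h1, h2 = toy_robust_hash(img), toy_robust_hash(noisy)
print(len(h1), int(np.sum(h1 != h2)))  # small distance under mild noise
```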
\subsection{Video tampering}
Recent rapid advances in image manipulation tools and deep image synthesis techniques have made it easy to tamper with videos~\cite{nirkin2019fsgan, elharrouss2020image}. Video tampering is classified into two types: intra-frame tampering and inter-frame tampering~\cite{milani2012overview}.
Intra-frame tampering, also called spatial tampering, is an attack on each frame. Examples include adding (copy-move, splicing) and deleting (inpainting) objects. Adding objects is a method for cutting some objects from other frames and pasting them into the original frame. Deleting objects hides some objects in the original frame with a background color.
In contrast, inter-frame tampering, also called temporal tampering, is to manipulate the relationships among frames. Examples include inserting, deleting, rearranging, and replacing frames as shown in Fig.~\ref{interframe}.
Frame insertion (Figure~\ref{interframe1}) inserts frames from another video into the original one. Frame deletion (Figure~\ref{interframe2}) deletes some frames from the original video. Frame reordering (Figure~\ref{interframe3}) changes the order of some or all of the frames in the original video. Frame replacement (Figure~\ref{interframe4}) replaces some frames in the original video with frames from another video.
In this paper, we focus on detecting inter-frame tampering, which is difficult for conventional methods.
\begin{figure}[tb]
\centering
\subfloat[Frame insertion]{\includegraphics[keepaspectratio,width=3.5cm]{niwa_figure/frameaddition.pdf}\label{interframe1}}
\hfil
\subfloat[Frame deletion]{\includegraphics[keepaspectratio,width=3.5cm]{niwa_figure/framedeletion.pdf}\label{interframe2}}\\
\quad
\subfloat[Frame reordering]{\includegraphics[keepaspectratio,width=2.8cm]{niwa_figure/frameshaffle.pdf}\label{interframe3}}
\hfil
\subfloat[Frame replacement]{\includegraphics[keepaspectratio,width=3.8cm]{niwa_figure/framereplace.pdf}\label{interframe4}}
\caption{Example of inter-frame tampering}
\label{interframe}
\end{figure}
\section{Proposed Method}\label{tejun}
An overview of the proposed method is shown in Fig.~\ref{gaiyou1}.
A reference video $V_r = \{ f_{r1},f_{r2},\ldots,f_{rl} \}$ and a query video $V_q = \{ f_{q1},f_{q2},\ldots,f_{ql} \}$ are prepared, where $V_r$ and $V_q$ each consist of $l$ frames, and $f_{ri}$ and $f_{qi}$ denote frames.
Extended frames $F_{ri}, F_{qi}, i = 1, 2,\ldots$ are defined by using the frames, respectively, where each extended frame is produced by using $n\times n$ frames and then a hash value is computed from every extended frame by using Li~\textit{et al}.'s hash method~\cite{li2015robust}. Li~\textit{et al}.'s hash method generates a hash value with $J=120$ bits from an extended frame, so the Hamming distance between two hash values is computed as
\begin{equation}
\label{hamming}
d_H(\vector{r}_i, \vector{q}_i) \triangleq \sum_{k=1}^{J} \delta(r_{ik}, q_{ik}),
\end{equation}
where,
\begin{equation*}
\delta(r_{ik}, q_{ik}) =
\begin{cases}
1 & (r_{ik} \neq q_{ik})\\
0 & (r_{ik}=q_{ik})
\end{cases}.
\end{equation*}
$\vector{r}_i=\{ r_{i1},r_{i2},\ldots,r_{ik},\ldots,r_{iJ}\}$ and $\vector{q}_i=\{ q_{i1}, q_{i2},\ldots,q_{ik},\\\ldots, q_{iJ}\}$, $r_{ik}, q_{ik}\in\{0,1\}$ are hash values computed from the $i$-th extended frames of $F_{ri}$ and $F_{qi}$.
In this method, $F_{qi}$ is judged as an operated frame if $d_H(\vector{r}_i, \vector{q}_i)$ is greater than or equal to the threshold $d$, as follows:
\begin{equation}
\label{eq:d}
F_{qi}=
\begin{cases}
\text{operated}& \text{if $d_H(\vector{r}_i, \vector{q}_i) \geq d$}\\
\text{non-operated}& \text{if $d_H(\vector{r}_i, \vector{q}_i) < d$}
\end{cases}.
\end{equation}
\begin{figure}[htb]
\centering
\includegraphics[keepaspectratio,width=8.8cm]{niwa_figure/gaiyou.pdf}
\caption{Overview of proposed method}
\label{gaiyou1}
\end{figure}
\subsection{Generation of extended frames}
Extended frames are generated from a reference video and a query video, respectively, as follows.
\begin{enumerate}[\IEEEsetlabelwidth{A-}]
\item[(A-1)] A video sequence is divided into frame-blocks, each with $n\times n$ frames (see Fig.~\ref{extend}).
\item[(A-2)] The first extended frame is defined with each frame-block in the order of the frame number (see Fig.~\ref{extend}).
\item[(A-3)] The second extended frame is defined with each frame-block such that frames near the four corners of the first extended frame are moved to the center of the extended frame.
\end{enumerate}
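As an illustration, the tiling in steps (A-1)--(A-2) and one plausible reading of the permutation in (A-3) can be sketched as follows; the exact corner-to-center permutation is our assumption, since the text does not pin it down.

```python
import numpy as np

def extended_frame(frames, n, swap_corners=False):
    """Tile an n*n frame-block (step A-2) into one extended frame.
    With swap_corners=True, build a second extended frame in which
    the four corner positions are exchanged with four central ones --
    one plausible reading of step A-3."""
    assert len(frames) == n * n
    order = list(range(n * n))
    if swap_corners:
        corners = [0, n - 1, n * (n - 1), n * n - 1]
        c = n // 2
        centers = [(c - 1) * n + c - 1, (c - 1) * n + c,
                   c * n + c - 1, c * n + c]
        for a, b in zip(corners, centers):
            order[a], order[b] = order[b], order[a]
    rows = [np.concatenate(row, axis=1)
            for row in np.array_split([frames[i] for i in order], n)]
    return np.concatenate(rows, axis=0)

# Toy 6x8 frames whose pixel value encodes the frame index.
n = 4
frames = [np.full((6, 8), i, dtype=np.uint8) for i in range(n * n)]
F1 = extended_frame(frames, n)
F2 = extended_frame(frames, n, swap_corners=True)
print(F1.shape, F1[0, 0], F2[0, 0])  # (24, 32) 0 5
```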
\begin{figure}[tb]
\centering
\includegraphics[keepaspectratio,width=8.8cm]{niwa_figure/extend.pdf}
\caption{Generating extended frame $(n=4)$}
\label{extend}
\end{figure}
The permutation of frames in (A-3) is motivated by a property of Li~\textit{et al}.'s robust hashing method: the method uses the quaternion polar cosine transform~\cite{li2015robust}, so it is not sensitive to the four corners of an image.
To investigate the necessity of extended frames, we experimented with two videos: a reference video $V_r$ consisting of $n\times n$ white frames, which was used for generating one extended frame $F_r$, and a query video $V_q$ consisting of $n\times n-1$ white frames and one black frame, which was used for generating one extended frame $F_q$, where the black frame in $F_q$ corresponded to a tampered frame.
Figure~\ref{notikan} shows the relationship between the position of the black frame and the Hamming distance between $F_{r}$ and $F_{q}$.
If the black frame is at the $x$-th position, the Hamming distance between $F_{r}$ and $F_{q}$ is shown at the element~$(a,b)$, $(1 \leq a,b\leq n, x=(a-1)\times n+b)$ in Fig.~\ref{fakeframe-hamming}.
As can be seen from Fig.~\ref{fakeframe-hamming}, with Li~\textit{et al}.'s robust hashing method, the Hamming distance between $F_{r}$ and $F_{q}$ becomes almost zero when the black frame is near the four corners of $F_{q}$.
Therefore, to avoid placing tampered frames near the four corners of the image, we propose to prepare two types of extended frames: those with frames in the usual order and those with frames in the permuted order, as shown in Fig.~\ref{notikan}. This technique is expected to make it easier to detect tampering even if the tampered frames are located near the four corners of extended frames (see Fig.~\ref{MAX}).
\begin{figure}[tb]
\centering
\subfloat[With 1st extended frame]{\includegraphics[keepaspectratio,height=3.35cm]{niwa_figure/hamming0.pdf}\label{notikan}}
\hfil
\subfloat[With both 1st and 2nd extended frames]{\includegraphics[keepaspectratio, height=3.35cm]{niwa_figure/hamming1.pdf}\label{MAX}}
\caption{Relationship between the position of the tampered frame and the Hamming distance $(n = 10)$}
\label{fakeframe-hamming}
\end{figure}
\subsection{Detection procedure}
The procedure of the proposed method is summarized as below.
\begin{enumerate}[\IEEEsetlabelwidth{B-}]
\item[(B-1)] Two types of extended frames are generated from a reference video.
\item[(B-2)]
A hash value is computed from every extended frame by using Li~\textit{et al}.'s robust hashing method, and the values are stored.
\item[(B-3)] Two types of extended frames are generated from a query video, and a hash value is computed from every extended frame by using Li~\textit{et al}.'s robust hashing method as well.
\item[(B-4)]Hash values are compared between an extended reference frame and the corresponding extended query frame to detect whether the query video is operated by Eqs.~\eqref{hamming} and \eqref{eq:d}.
\end{enumerate}
\section{Experiment}
\subsection{Experimental setup}
To confirm the effectiveness of the proposed method, we prepared six datasets, as shown in Table~\ref{dataset}. Each dataset consists of frames of size $1280\times 720$ pixels, with a total of 3540 frames, 540 of which are tampered frames.
Three videos were downloaded from a web page, and dataset 0-0 was prepared from the three videos as a reference dataset. Dataset 0-0 was uploaded to Twitter and Instagram, and then dataset 0-1 and dataset 0-2 were produced by downloading dataset 0-0 from Twitter and Instagram, respectively. Thus, both datasets were not equal to dataset 0-0 due to the influence of recompressing and resizing.
Dataset 1-0 was produced by applying temporal manipulations, i.e., inter-frame tampering operations such as frame deletion, reordering, and replacement, to a number of frames in dataset 0-0. Dataset 1-1 was produced by uploading dataset 1-0 to Twitter, and dataset 1-2 was produced by uploading dataset 1-0 to Instagram.
\begin{table}[tb]
\caption{Video datasets used in experiments}
\label{dataset}
\centering
\scalebox{0.88}{
\begin{tabular}{llcccc}
\hline
Dataset & \,\, Information & Compression & Resize & Frame size\\
\hline \hline
Dataset 0-0 & \,\, Reference & - & - & $1280\times 720$\\
Dataset 0-1 & \,\, Uploaded to Twitter & + & - & $1280\times 720$\\
Dataset 0-2 & \,\, Uploaded to Instagram & + & + & $1152\times 648$\\
\hline
Dataset 1-0 &
\begin{tabular}{l}
Including\\[-0.7mm] operated frames
\end{tabular} & - & - & $1280\times 720$\\
Dataset 1-1 & \,\, Uploaded to Twitter & + & - & $1280\times 720$\\
Dataset 1-2 & \,\, Uploaded to Instagram & + & + & $1152\times 648$\\
\hline
\end{tabular}}
\begin{flushleft}
``+'' indicates that frames in that dataset were compressed/resized, while ``-'' indicates that no frames in that dataset were compressed/resized.
\end{flushleft}
\end{table}
\begin{table}[t]
\caption{Definition of indicators}
\label{dif}
\centering
\begin{tabular}{cc|cc}
& & \multicolumn{2}{c}{Predict label} \\ \cline{3-4}
& & Positive & Negative \\ \hline
\multicolumn{1}{c|}{Actual} & Positive & $\mathit{TP}$ & $\mathit{FN}$ \\
\multicolumn{1}{c|}{label} & Negative & $\mathit{FP}$ & $\mathit{TN}$
\end{tabular}
\end{table}
\subsection{Result}
The proposed method was evaluated by using two evaluation indexes: accuracy (Acc) and average precision (AP). Both Acc and AP are in the range of $[0,1]$, and a higher value indicates that tampered frames are detected more correctly. Acc and AP are given by
\begin{equation}
Acc = \frac{\mathit{TP} +\mathit{TN}}{\mathit{TP}+\mathit{FP}+\mathit{TN}+\mathit{FN}},
\end{equation}
\begin{equation}
AP = \sum_j (R_j - R_{j-1})P_j,
\end{equation}
where $\mathit{TP},\mathit{FP}, \mathit{TN}$ and $\mathit{FN}$ are as defined in Table~\ref{dif}, $P_j, R_j$ are precision and recall at the $j$-th threshold also given by $P_j = {\mathit{TP}_j}/(\mathit{TP}_j+\mathit{FP}_j)$, $R_j = {\mathit{TP}_j}/(\mathit{TP}_j+\mathit{FN}_j)$.
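The two metrics can be computed as follows; the implementation sweeps thresholds in descending score order, matching the summation over $j$ above, and the toy labels and scores are ours for illustration.

```python
def accuracy(y_true, y_pred):
    """Acc = (TP + TN) / (TP + FP + TN + FN)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def average_precision(y_true, scores):
    """AP = sum_j (R_j - R_{j-1}) P_j, sweeping thresholds in
    descending score order."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(y_true)
    tp, fp, ap, prev_recall = 0, 0, 0.0, 0.0
    for i in order:
        if y_true[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / n_pos
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

# Toy frames: 1 = operated; scores are Hamming distances to the reference.
y_true = [1, 1, 0, 0, 1, 0]
scores = [40, 20, 10, 25, 30, 5]
y_pred = [int(s >= 23) for s in scores]
print(accuracy(y_true, y_pred), average_precision(y_true, scores))  # 2/3, 11/12
```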
Experimental results are shown in Table~\ref{compare}, where dataset 0-0 was used as the reference video, and the two parameters $n$ and $d$ were experimentally determined as $n = 8$ and $d = 23$. From the results on datasets 0-1 and 0-2, the method was confirmed to be robust enough against recompression and resizing operations. In addition, from the results on datasets 1-0, 1-1, and 1-2, it was verified to be effective in detecting temporally operated videos. In particular, when using two extended frames, the proposed method had a higher detection accuracy.
\begin{table}[tb]
\caption{Comparison of tampering detection results \\for each extended frame $(n = 8, d = 23)$}
\label{compare}
\centering
\begin{tabular}{ccccc}
\hline
Query&\multicolumn{2}{c}{With 1st extended frames}&\multicolumn{2}{c}{With both 1st and 2nd ones}\\
\cline{2-5}
dataset & Acc & AP & Acc & AP\\
\hline \hline
Dataset 0-1 & 1.0000 & not defined & 1.0000 & not defined\\
Dataset 0-2 & 1.0000 & not defined & 1.0000 & not defined\\
\hline
Dataset 1-0 & 0.9825 & 1.0000 & 1.0000 & 1.0000\\
Dataset 1-1 & 0.9825 & 1.0000 & 1.0000 & 1.0000\\
Dataset 1-2 & 0.9825 & 0.9936 & 1.0000 & 1.0000\\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
In this paper, we proposed a novel method with a robust hashing algorithm for detecting temporally operated videos, where Li \textit{et al}.'s robust hashing method was used as the robust hashing algorithm. In addition, the method uses an approach called ``extended frames'' to detect tampered frames with high accuracy while referring to many frames at once. In an experiment, the method was demonstrated not only to be robust against image recompression and resizing but also to give a higher accuracy by using two extended frames.
\bibliographystyle{IEEEtran}
\section{Introduction}
Recent research on language modeling has found that a language model that incorporates explicit notions of hierarchical syntactic structures, Recurrent Neural Network Grammars \citep[RNNGs]{dyer-etal-2016-recurrent}, achieves better perplexity \citep{dyer-etal-2016-recurrent,kim_2019}, systematic syntactic generalization \citep{futrell_2019,wilcox_2019,Hu:et-al:2020}, and correlation with human brain signals \citep{hale_2018} than a comparable LSTM \citep{hochreiter_97} language model, which processes the input string in a sequential and non-hierarchical fashion.
This is in agreement with what linguistic theory suggests as the right model of language \citep{chomsky_1957,structures:not:strings}.
To that end, the syntactic inductive bias of RNNGs is derived from a \textbf{recursive syntactic composition} operation, where the fixed-size vector representation of each constituent is computed by a recursive, learned function of the vector representations of its children \citep[\emph{inter alia}]{goller_1996,socher_2011,socher:2013c,socher:2013b,le-zuidema-2015-compositional,tai:2015,dyer:2015,bowman-etal-2016-fast}. Nevertheless, the benefits of recursive syntactic compositions have---for the most part---been demonstrated within the context of the \mbox{LSTM-based} RNNG syntactic language model that has relatively few parameters, operates only on single-sentence sequences, and is challenging to scale to larger datasets \citep{noji-oseki-2021-effective}. Hence, we aim to answer the following open question: To what extent---if at all---would a similar recursive composition operation \emph{continue} to be advantageous for Transformer models that work well at scale?
To answer this question, we introduce \textbf{Transformer Grammars}---a novel class of Transformer language models that combine: (i) the expressive power, scalability, and strong performance of a Transformer-XL \citep{dai-etal-2019-transformer}; (ii) joint modeling of surface strings $\boldsymbol{x}$ and their corresponding phrase-structure trees $\boldsymbol{y}$, \emph{i.e.,} $p(\boldsymbol{x}, \boldsymbol{y})$; and (iii) an explicit modeling of hierarchical syntactic structures through recursive syntactic compositions. Concretely, Transformer Grammars implement recursive syntactic compositions through their: (i) attention masks, which recursively compose the vector representations of smaller linguistic units, such as words, into fixed-size vector representations of larger linguistic units, such as phrases and sentences; (ii) relative positional encoding, which is based on relative differences in tree depths (as opposed to sequential positions); and (iii) a careful memory update mechanism. Transformer Grammars therefore retain the computational efficiency of standard Transformer-XLs, which can scale much more easily than LSTMs to larger datasets and model sizes by virtue of their higher training parallelism.
We remark that Transformer Grammars differ from RNNGs in three ways. First, they are based on Transformer architectures \citep{NIPS2017_3f5ee243}, which have outperformed LSTM-based models at various NLP tasks \citep{devlin_2019}.
Second, they inherit the scalability of standard Transformers, and can therefore scale well to larger datasets and model sizes---which are key drivers behind recent language modeling success \citep[\emph{inter alia}]{devlin_2019,roberta,brown_2020,kaplan_2020}. Third, they are amenable to the modeling of documents as opposed to single sentences, which enables us to assess whether the benefits of recursive syntactic compositions extend beyond sentence-level language modeling.
Beyond RNNGs, Transformer Grammars are also related to the recent work of \citet{qian-etal-2021-structural}, who proposed a ``generative parsing as language modeling'' approach \citep{choe-charniak-2016-parsing}. To that end, \citet{qian-etal-2021-structural} used syntactic structures to modulate the behavior of a subset of attention heads, which led to a more data-efficient acquisition of human-like syntactic generalization. Transformer Grammars differ from this prior approach in one important way. Concretely, \citet{qian-etal-2021-structural} employed two specialized attention heads---one that attends only to elements \emph{within} the same constituent, and another that attends only to elements \emph{outside} of the constituent---whereas the rest of the attention heads remain unconstrained \citep{strubell-etal-2018-linguistically,astudillo-etal-2020-transition}. In practice, this does not yield explicit compositional representations of partial trees that are computed in a recursive fashion. In contrast, Transformer Grammars use additional transition steps and attention masking procedures that more closely implement recursive syntactic compositions. Hence, given the differences between the two models, our approach sheds more light into whether, and to what extent, the recursive syntactic composition hypothesis---which has been shown to be valuable at the small data and model scale in the case of RNNGs---\emph{continues} to offer additional benefits, above and beyond the syntactically-motivated division of labor between different attention heads.
We evaluate Transformer Grammars against several baselines on three metrics (perplexity, syntactic generalization, and parse reranking), and on two training datasets: (i) the small-scale Penn Treebank \citep[PTB]{marcus:1993} and (ii) the medium-scale BLLIP-\textsc{lg} \citep{Hu:et-al:2020} datasets, with $\sim$1M and $\sim$40M words, respectively. We find that:
\begin{itemizesquish}
\item Transformer Grammars (\textbf{TGs}) achieve better single-sentence language modeling perplexity, syntactic generalization, and parse reranking performance than RNNGs. As TGs retain the scalability of Transformer architectures, they are also much faster to train than the batched RNNG \citep{noji-oseki-2021-effective}---which is \emph{already} much faster than the original RNNG. These findings highlight the speed and performance benefits of incorporating recursive syntactic compositions on top of Transformers, as opposed to RNNs.
\item In terms of single-sentence language modeling perplexity, \emph{both} TGs and the ``generative parsing as language modeling'' approach \citep[\textbf{TXL (CC)}]{choe-charniak-2016-parsing,qian-etal-2021-structural} outperform a standard Transformer-XL model that operates only on the word sequences (\textbf{TXL (terminals)}). This finding demonstrates the benefits of \emph{joint} modeling of words and phrase-structure trees, even in the case of Transformer models that are trained at the medium data scale.
\item Using the syntactic generalization score of \citet{Hu:et-al:2020} to quantify the ability of models to syntactically generalize in a systematic and human-like fashion---an important component of human linguistic intelligence---we find that TGs outperform a strong TXL (CC) baseline that likewise models the joint probability of strings and phrase-structure trees, but does so \emph{without} the TG's contraints to use recursive syntactic compositions. Remarkably, our TG model can outperform a GPT-2-small \citep{radford_2019} model that is trained on 250$\times$ as much data. This finding suggests that recursive syntactic compositions are indeed beneficial for this task.
\item Nevertheless, the TXL (CC) model slightly outperforms TGs in terms of single-sentence language modeling perplexity; we attribute this result to the recursive syntactic compositions within TGs, which---despite their benefits for syntactic generalization---interfere with Transformers' lexical copying ability that is important for some language modeling predictions. This result indicates a partial dissociation between perplexity and syntactic generalization---both of which are important metrics for assessing language modeling success. Hence, we encourage future work to report language modeling results on both metrics, rather than solely relying on perplexity to measure language modeling progress.
\item We find that extending the recursive syntactic composition to document-level language modeling---where subsequent predictions condition on a single \emph{composed} representation of previous sentences---does not provide any tangible benefits: Document-level TGs trail their document-level TXL (terminals) and TXL (CC) counterparts that do not feature any recursive syntactic compositions. To better understand \emph{why} this is the case, we run a regression analysis to better understand where recursive syntactic compositions are beneficial in document-level language modeling, and where they are not.
\end{itemizesquish}
All in all, our findings show that---given a way to scale stronger syntactic biases to larger model sizes and assuming comparable experimental conditions---language models that \emph{do} incorporate notions of syntactic structures (both TXL (CC) \& TG) can outperform those that do not on multiple language modeling evaluation metrics. We further demonstrate that encouraging the model to explain the data through the lens of recursive syntactic compositions---as is the case for our proposed Transformer Grammars approach---is a valuable inductive bias for achieving an even stronger human-like syntactic competence, outperforming the TXL (CC) model that does not integrate explicit notions of recursive syntactic compositions. Furthermore, our findings motivate more research into the best way for augmenting Transformers with syntactic biases (\emph{e.g.,} exploring different ways for integrating recursive syntactic compositions, and identifying where exactly they are beneficial in various NLP tasks). Taken more broadly, our findings motivate the development of \emph{scalable} language models---that nevertheless incorporate stronger notions of syntactic structures---as a promising (albeit relatively under-explored) area of NLP research.
\section{Model}
Transformer Grammars belong to a class of syntactic language models that estimate the \emph{joint} probability of syntax trees $\boldsymbol{y}$---in particular, phrase-structure trees---and strings of words $\boldsymbol{x}$. Following \citet{vinyals:2015,dyer-etal-2016-recurrent,choe-charniak-2016-parsing}, we decompose this generation problem into a sequence of \emph{actions} that construct ($\boldsymbol{x}$, $\boldsymbol{y}$) in a top-down, left-to-right fashion, by interleaving non-terminal phrasal nodes and their corresponding children, as shown in Figure~\ref{choe_charniak}. This linearized representation of ($\boldsymbol{x}$, $\boldsymbol{y}$) consists of three types of actions: (i) opening non-terminals (action type \texttt{ONT}), marking the opening of a new constituent; (ii) generating terminal symbols/leaf nodes (\emph{i.e.,} words or subword tokens), henceforth denoted as \texttt{T}; or (iii) closing the most recent open constituent/incomplete non-terminal symbol, henceforth denoted as \texttt{CNT}.
\begin{figure}[t]
\centering
\tikzstyle{level 1}=[level distance=0.75cm, sibling distance=1cm]
\tikzstyle{level 2}=[level distance=0.75cm, sibling distance=0.5cm]
\begin{align*}
& \Big(\underbrace{\text{the blue bird sings}}_{\text{string }\boldsymbol{x}},
\underbrace{\tikz[baseline=(NP.base)]{
\node{S}
child {node (NP) {NP}
child {node {.}}
child {node {.}}
child {node {.}}
}
child {node {VP}
child {node {.}}
};
}}_{\text{syntax tree }\boldsymbol{y}}\Big) \\
& \underbrace{\text{(S (NP the blue bird NP) (VP sings VP) S)}}_{\text{actions }\boldsymbol{a}}
\end{align*}
\caption{An example that represents a pair of string $\boldsymbol{x}$ and its corresponding phrase-structure tree $\boldsymbol{y}$, which are then represented as a sequence of actions that construct ($\boldsymbol{x}$, $\boldsymbol{y}$) in a top-down, left-to-right fashion \cite{dyer-etal-2016-recurrent,choe-charniak-2016-parsing}.}
\label{choe_charniak}
\end{figure}
Let $\boldsymbol{a} = (a_0, a_1, ..., a_{T-1})$ be a sequence of actions (of length $T$) that generates ($\boldsymbol{x}$, $\boldsymbol{y}$), where each action is part of the action vocabulary $\mathcal{V}$. Transformer Grammars define a probability distribution over $\boldsymbol{a}$ through a left-to-right factorization, \emph{i.e.,} $p(\boldsymbol{x}, \boldsymbol{y}) = p(\boldsymbol{a}) = \prod_i p(a_i \mid \boldsymbol{a}_{< i})$.\footnote{This decomposition is valid because, within the context of a top-down, left-to-right traversal of ($\boldsymbol{x}$, $\boldsymbol{y}$), there is precisely a one-to-one
mapping between the full syntax tree ($\boldsymbol{x}$, $\boldsymbol{y}$) and its corresponding action sequence $\boldsymbol{a}$.} We refer interested readers to \citet{dyer-etal-2016-recurrent} for more details on how to derive the action sequence $\boldsymbol{a}$.
\subsection{Recursive syntactic composition via attention}\label{sec:recursive_composition}
Transformer Grammars feature explicit recursive syntactic compositions, which have been shown to be valuable in the case of RNNs, and at the small data and model scale \citep[\emph{inter alia}]{dyer-etal-2016-recurrent,wilcox_2019,futrell_2019,Hu:et-al:2020}. Concretely, Transformer Grammars implement recursive syntactic composition operations through the Transformer attention mask.
In Transformer models, attention is the only mechanism by which, at a given position $i$, information from other positions $j \neq i$ is incorporated. The rules governing this information flow---\emph{i.e.,} which positions can be attended from which other positions---are defined by the \emph{attention mask}. In most work so far, the attention masking procedure---and how positional information is expressed---is designed to ensure correctness (\emph{e.g.,} causal masking) or to improve computational efficiency, as done by \citet{NEURIPS2020_c8512d14}, \emph{inter alia}. Here we propose to use the Transformer attention masks to implement explicit recursive syntactic compositions.
Schematically, the sequence is processed from left-to-right, updating a stack from one position to the next. When the current position corresponds to a closing non-terminal---hence marking the end of a constituent---we build a \emph{composed} representation by restricting the attention to the positions corresponding only to the parts/children of that constituent. We then form a syntactic bottleneck that prevents future tokens from attending to the already-composed parts; we pop from the stack the positions corresponding to the children nodes, such that subsequent predictions can only condition on the fixed-size vector that represents the composed constituent; we refer to this process as the \textsc{compose} attention. This syntactic bottleneck encourages the model to learn informative representations of composed phrases, and is inspired by the same design principle as RNNGs.
In addition to the \textsc{compose} attention, at every position, we apply the \textsc{stack} attention, where attention is restricted to positions on the stack, and we add the current position to the stack. Both \textsc{stack} and \textsc{compose} are implemented with the exact same parameters and attention heads---what distinguishes them is the rule for computing the set of positions that the model can attend to. As we actually want to perform both \textsc{compose} and \textsc{stack} for a closing non-terminal (\emph{e.g.,} to first compute a composed representation based on its parts/children, and then add the composed representation onto the stack), and to perform exactly one attention operation per token, the original sequence $\boldsymbol{a}$ is transformed by duplicating all closing non-terminals, yielding a sequence $\boldsymbol{a}'$ of length $T'$, \emph{e.g.,} \texttt{(S (NP the blue bird NP) NP) (VP sings VP) VP) S) S)}. The first one of each pair is given the type \texttt{CNT1}, and implements \textsc{compose}, whereas the second is given the type \texttt{CNT2}, and implements \textsc{stack}. To keep the number of prediction events (\emph{i.e.,} the number of times a probability distribution is emitted by the model) constant, no final prediction is made for \textsc{compose} positions. See Figure~\ref{fig:processing} for an example.
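The duplication of closing non-terminals described above can be sketched as follows (a simplified illustration that identifies closing non-terminals by a trailing parenthesis, e.g. \texttt{NP)}):

```python
# Sketch (not the reference implementation): duplicate each closing
# non-terminal in the action sequence a to obtain a', so that the first
# copy (type CNT1) triggers COMPOSE and the second (CNT2) triggers STACK.

def duplicate_closing_nts(actions):
    out = []
    for a in actions:
        out.append(a)
        if a.endswith(")"):   # closing non-terminal: emit CNT1 then CNT2
            out.append(a)
    return out

a = "(S (NP the blue bird NP) (VP sings VP) S)".split()
print(" ".join(duplicate_closing_nts(a)))
# (S (NP the blue bird NP) NP) (VP sings VP) VP) S) S)
```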
\figureExampleTransitionsAndAttentionMatrix{}
The exact procedure for \textsc{stack}/\textsc{compose} is described in Algorithm~\ref{stack/compose attention}. The positions that may be attended are represented as a binary attention mask $\mathbf{A} \in \mathbb{R}^{T' \times T'}$ (see example in Figure~\ref{fig:processing}), such that $A_{ij} = 1$ if and only if the position $j$ may be attended from $i$, and 0 otherwise. Note that the computation of the attention mask is still causal, \emph{i.e.,} no information from positions $j > i$ is used to compute the positions that can be attended from $i$.
\paragraph{Relative positional encoding.} In Transformer-XL, the relative position between an attending position $i$ and an attended position $j$ is the linear position difference $i - j$. This quantity does not reflect nor use the topology of the tree---two consecutive tokens (linear relative position of 1) may belong to two different constituents (\emph{e.g.,} \texttt{bird} and \texttt{sings} in our example), with a distant common ancestor. We thus generalize how relative positions are provided to the attention mechanism such that any matrix $\mathbf{R} \in \mathbb{Z}^{T' \times T'}$ can be used, where $R_{ij}$ is the relative position between $i$ and $j$. For our model, we define $R_{ij} = \mathrm{depth}(i) - \mathrm{depth}(j)$, where $\mathrm{depth}(i)$ is the depth of the $i$-th token in the tree. Note that the relative distance $R_{ij}$ will only be computed if $A_{ij} = 1$ (\emph{i.e.,} $j$ may be attended from $i$). For instance, for the action sequence in Figure~\ref{choe_charniak}, the relative distance between (the positions corresponding to) words \texttt{sings} and \texttt{bird} is never computed, but it will be computed between \texttt{sings} and the NP covering \texttt{the blue bird}.
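As a concrete illustration, token depths can be read off the linearized sequence by counting brackets. The convention below (an opening non-terminal sits at the depth of its node, children one level deeper, and the closing non-terminal returns to the node's depth) is an assumption for illustration; the model's exact definition of $\mathrm{depth}(\cdot)$ may differ:

```python
# Illustrative convention for depth(i), computed directly from the
# linearized action sequence; not necessarily the paper's exact choice.

def token_depths(tokens):
    d, out = 0, []
    for t in tokens:
        if t.startswith("("):      # opening non-terminal, at node depth
            out.append(d)
            d += 1
        elif t.endswith(")"):      # closing non-terminal, back at node depth
            d -= 1
            out.append(d)
        else:                      # terminal word
            out.append(d)
    return out

def relative_position(depths, i, j):
    return depths[i] - depths[j]   # R_ij = depth(i) - depth(j)
```

Under this convention, for the sequence in Figure~\ref{choe_charniak}, \texttt{sings} (depth 2) and the composed \texttt{NP)} (depth 1) are at relative position 1.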
\begin{algorithm}
\caption{\textsc{stack}/\textsc{compose} attention}
\label{stack/compose attention}
\begin{algorithmic}[1]
\Require $\boldsymbol{a}'$ sequence of tokens
\Ensure $\mathbf{A} \in \mathbb{R}^{T' \times T'}$ attention mask
\State $S \gets []$\Comment{Empty stack}
\State $\mathbf{A} \gets 0$
\For{$i \gets 0$ to $T' - 1$}
\If{$\textrm{type}(\boldsymbol{a}'[i]) = \texttt{CNT1}$}
\Comment{\textsc{compose}}
\State $j \gets i$
\While{$\textrm{type}(\boldsymbol{a}'[j]) \neq \texttt{ONT}$}
\State $A_{ij} \gets 1$
\State $j \gets S.pop()$
\EndWhile
\State $A_{ij} \gets 1$
\State $S.push(i)$
\Else
\Comment{\textsc{stack}}
\If{$\textrm{type}(\boldsymbol{a'}[i]) \neq \texttt{CNT2}$}
\State $S.push(i)$
\EndIf
\For{$j \in S$}
\State $A_{ij} \gets 1$
\EndFor
\EndIf
\EndFor
\State \textbf{return} $\mathbf{A}$\Comment{Attention mask}
\end{algorithmic}
\end{algorithm}
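As a concreteness check, Algorithm~\ref{stack/compose attention} translates almost line-for-line into Python (an illustrative sketch over per-token type strings, not the model's actual batched-tensor implementation):

```python
# Transcription of the STACK/COMPOSE attention-mask procedure.
# `types` holds one of "ONT", "T", "CNT1", "CNT2" per position of a'.

def stack_compose_mask(types):
    n = len(types)
    A = [[0] * n for _ in range(n)]
    stack = []
    for i, t in enumerate(types):
        if t == "CNT1":                  # COMPOSE: attend to the children...
            j = i
            while types[j] != "ONT":
                A[i][j] = 1
                j = stack.pop()
            A[i][j] = 1                  # ...and the opening non-terminal
            stack.append(i)              # push the composed representation
        else:                            # STACK: attend to stack contents
            if t != "CNT2":
                stack.append(i)
            for j in stack:
                A[i][j] = 1
    return A

# Types for "(S (NP the blue bird NP) NP) (VP sings VP) VP) S) S)"
types = ["ONT", "ONT", "T", "T", "T", "CNT1", "CNT2",
         "ONT", "T", "CNT1", "CNT2", "CNT1", "CNT2"]
A = stack_compose_mask(types)
```

At the \textsc{compose} position of the first \texttt{NP)} (index 5), the mask covers exactly positions 1--5, \emph{i.e.,} the opening \texttt{(NP}, its children, and the position itself; at \texttt{sings} (index 8), only \texttt{(S}, the composed \texttt{NP)}, \texttt{(VP}, and the position itself are visible.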
\subsection{Segmentation and recurrence}
In the same manner as Transformer-XL, Transformer Grammars are recurrent neural networks that can process arbitrarily long sequences as consecutive segments that contain a fixed number of tokens $L$, maintaining and updating a memory of temporal dimension $M$ from one segment to the next. With $0~\le~\tau~<~\lceil~\frac{T'}{L}~\rceil$, $\boldsymbol{a}'_\tau = \left(a'_{\tau L}, a'_{\tau L+1}, \ldots, a'_{\left(\tau+1\right)L-1}\right)$ is the $(\tau+1)$-th segment. Token embeddings are obtained from an embedding matrix $\mathbf{E} \in \mathbb{R}^{|\mathcal{V}| \times d}$ to form a sequence of $L$ vectors in $\mathbb{R}^d$: $\mathbf{h}^{(0)}_\tau~=~\left(h^{(0)}_{\tau L}, \ldots, h^{(0)}_{\left(\tau+1\right)L-1} \right)$.
The core of the model is composed of $K$ stacked recurrent layers, \emph{i.e.,} for $1 \le k \le K$: $$ \mathbf{h}^{(k)}_\tau, \mathbf{m}^{(k)}_{\tau+1} = \mathrm{Layer}^{(k)}(\mathbf{h}^{(k-1)}_\tau, \mathbf{m}^{(k)}_{\tau}, \mathbf{A}_\tau, \mathbf{R}_\tau)$$ where for each segment $\tau$: \begin{itemizesquish}
\item $\mathbf{h}^{(k)}_\tau~\in~\mathbb{R}^{L \times d}$ is the sequence of hidden states, which form the input for layer $k+1$,
\item $\mathbf{m}^{(k)}_{\tau}~\in~\mathbb{R}^{M \times d}$ is the memory,
\item $\mathbf{A}_\tau~\in~\mathbb{R}^{L \times (M+L)}$ is the attention mask from the current segment to: (i) the current segment, and (ii) the memory,
\item and $\mathbf{R}_\tau~\in~\mathbb{Z}^{L \times (M+L)}$ is the corresponding relative positions matrix.
\end{itemizesquish} The exact algorithm to compute $\mathbf{A}_\tau$ is given in Section~\ref{memory_update} of the appendix. All layers receive the same attention mask and relative positions matrix.
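A minimal sketch of the segmentation (the recurrence over memory is omitted here):

```python
import math

# Split the (duplicated) action sequence a' into consecutive segments of
# at most L tokens; segment tau covers positions tau*L .. (tau+1)*L - 1.

def segments(a_prime, L):
    n_seg = math.ceil(len(a_prime) / L)
    return [a_prime[t * L:(t + 1) * L] for t in range(n_seg)]
```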
Each layer $k$ is composed of a multi-head self-attention sub-layer and a position-wise feed-forward network sub-layer (with residual connections followed by layer normalization---both omitted here for clarity), as well as an operation to update the memory for the next segment: \begin{align*}
\mathbf{h}^{(k-\frac{1}{2})}_\tau &= \mathrm{SelfAttn}_k(\mathbf{h}^{(k-1)}_\tau, \mathbf{m}^{(k)}_{\tau}, \mathbf{A}_\tau, \mathbf{R}_\tau) \\
\mathbf{h}^{(k)}_\tau &= \mathrm{FFN}_k(\mathbf{h}^{(k-\frac{1}{2})}_\tau) \\
\mathbf{m}^{(k)}_{\tau+1} &= \mathrm{MemoryUpdate}(\mathbf{h}^{(k-1)}_\tau, \mathbf{m}^{(k)}_{\tau})
\end{align*}
The output of the last layer, $\mathbf{h}^{(K)}_\tau$, is multiplied by the transpose of the embedding matrix, $\mathbf{E}^T$, to obtain the unnormalized next-token log probabilities.
\paragraph{Self-attention}
Borrowing notation from~\cite{dai-etal-2019-transformer}, let $\mathbf{W}_q$, $\mathbf{W}_{k,E}$, $\mathbf{W}_{k,R}$, $\mathbf{W}_v$, and $u$ and $v$ be the trainable model parameters. Let $\left[ \cdot, \cdot \right]$ denote a concatenation operation along the time dimension. For a single head, we have: \begin{align*}
\mathbf{q} &= \mathbf{h} \mathbf{W}_q &
\mathbf{k} &= \left[ \mathbf{m}, \mathbf{h} \right] \mathbf{W}_{k,E} &
\mathbf{v} &= \left[ \mathbf{m}, \mathbf{h} \right] \mathbf{W}_{v}
\end{align*}
The attention score for an attending position $i$ and an attended position $j$ is \begin{equation*}
s_{ij}= (\mathbf{q}_i + u)^T \mathbf{k}_j + (\mathbf{q}_i + v)^T \mathbf{r}_{ij}
\end{equation*} where $\mathbf{r}_{ij} \in \mathbb{R}^{d}$ is an embedding of the integer relative position $R_{ij}$ (row from $\mathbf{W}_{k,R}$). Much like in Transformer-XL, the second term can be computed efficiently as the relative positions take values within a small interval $\left[R_\textrm{min}, R_\textrm{max}\right]$.
The mask $\mathbf{A}$ (\S\ref{sec:recursive_composition}) is applied element-wise on the scores, which sets masked entries to $-\infty$. The normalized attention weights are then obtained by applying a $\mathrm{softmax}$ activation function to the scores; the final attention outputs are the product of the attention weights and the values. In practice, we use multiple heads---the outputs of each are concatenated and passed onto a linear transformation.
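The single-head computation above can be sketched numerically as follows (toy dimensions; $u$, $v$, and the per-pair relative-position embeddings stand in for trained parameters, and the efficient Transformer-XL factorization of the second term is not reproduced):

```python
import numpy as np

def masked_relative_attention(h, m, W_q, W_kE, W_v, r, u, v, A):
    """Single-head sketch: s_ij = (q_i+u)^T k_j + (q_i+v)^T r_ij,
    then masking, softmax, and value aggregation. r[i, j] embeds R_ij."""
    q = h @ W_q                                  # queries: current segment
    km = np.concatenate([m, h], axis=0)          # keys/values over [mem, h]
    k, vals = km @ W_kE, km @ W_v
    T, S = q.shape[0], k.shape[0]
    s = np.empty((T, S))
    for i in range(T):
        for j in range(S):
            s[i, j] = (q[i] + u) @ k[j] + (q[i] + v) @ r[i, j]
    s = np.where(A == 1, s, -np.inf)             # apply the attention mask
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)           # softmax over attended j
    return w, w @ vals
```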
\paragraph{Memory update}
In Transformer-XLs, the memory is updated by shifting the current input onto it. Here we take advantage of the fact that positions within a subtree that have been \textsc{compose}d are never attended to in the future, and \emph{a fortiori} in the following segments. Hence, only positions that may be attended need to be added/kept in the memory. This requires a careful book-keeping of which position in the memory corresponds to which original position in the input sequence, both to perform the update, and to compute the correct attention mask and relative positions. The update is performed by selecting a subset of rows from $\mathbf{h}^{(k-1)}_\tau$ and $\mathbf{m}^{(k)}_\tau$ to construct $\mathbf{m}^{(k)}_{\tau+1}$, \emph{i.e.,} by multiplying ($\mathbf{h}^{(k-1)}_\tau$, $\mathbf{m}^{(k)}_\tau$) with the relevant update matrices: \begin{equation*}
\mathbf{m}^{(k)}_{\tau+1} = \mathbf{U}^{m \rightarrow m}_{\tau} \mathbf{m}^{(k)}_{\tau} + \mathbf{U}^{h \rightarrow m}_{\tau} \mathbf{h}^{(k-1)}_\tau
\end{equation*}
The full algorithm to compute these matrices $\mathbf{U}_{\tau}$ and $\mathbf{A}_\tau$ is provided in Appendix~\ref{memory_update}.
\subsection{Computational properties}
\paragraph{Recursive composition}
Transformer Grammars accomplish recursive composition via a custom attention mask that reflects the hierarchical structure. Although the mask at a position $i+1$ depends on the mask at position $i$, during training the entire attention mask matrix can be precomputed in advance, and then applied independently to compute multiple syntactic compositions in parallel for the whole segment.
For instance, in the example sequence from Figure~\ref{fig:processing}, during training the representations of NP and VP are computed in parallel, even though their closing non-terminals are at different positions ($6$ and $10$, respectively) in the sequence. Every following layer of Transformer Grammars then takes the composed representations at previous layers, and composes them further. For instance, at position $12$, the second layer will form a composed representation of a sentence constituent \texttt{S)} by using as input the first layer representations of \texttt{NP)} and \texttt{VP)}.
A prerequisite of this approach is that at least $d$ layers are needed for tokens at depth $d$ to affect the topmost composed representation.
\paragraph{\textsc{stack} attention}
Whereas the syntactic composition steps use a \textsc{compose} attention mask at each closing non-terminal of type \texttt{CNT1}, all other transition steps use a \textsc{stack} attention mask.
Under this mask, attention is restricted to the representations of the completed/closed constituents, words, and open non-terminals on the stack. In Figure~\ref{fig:processing}, the word \texttt{sings} will attend to the closed constituent \texttt{NP)}, and the open non-terminals \texttt{(S} and \texttt{(VP}. We remark that---at the position of \texttt{sings}---TGs access the representations of the preceding words only through their composed representation \texttt{NP)}, enforcing the compressive effect of recursive syntactic compositions.
\textsc{stack} and \textsc{compose} attentions have an interesting interaction at the higher layers: The \textsc{stack} attention used in the lower layers of \texttt{sings}, which looks at the representation of the preceding \texttt{NP)}, shall in turn influence the \textsc{compose} attention of \texttt{VP)} at the higher layers. In other words, the left context may inform how a syntactic composition should be computed.
\citet{bowman-etal-2016-fast} have shown that this is a desirable property, because the composition function may need to disambiguate the parts that are being composed; in language modeling, the left context is likely to be informative.
\paragraph{Performance} Up to the slightly increased sequence length due to the duplication of the closing non-terminals, and the minor differences relating to the attention mask and memory update, the computational requirements of Transformer Grammars are very close to that of a standard Transformer-XL, hence enabling the model to scale exactly as well.
\subsection{Hybrid models and ensembles}
\label{hybrid_models}
\textsc{stack}/\textsc{compose} attention is indeed restrictive---only composed representations can be attended to from another syntactic unit---which forms an inductive bias that forces the model to build and use informative phrase-level syntactic features. As language contains non-syntactic phenomena (semantics, co-occurrences, copying new terms, etc.) that should be accounted for, or even as the parse trees used to train the model may contain errors, we explore a few alternatives where a limited amount of model capacity is left for attention to proceed in an \emph{unrestricted} way, denoted as hybrid models.
In a {\em layer-level hybrid}, out of $K$ layers in total, the bottom $K_{TG}$ layers are Transformer Grammars layers with restricted \textsc{stack}/\textsc{compose} attention, whereas the top $K~-~K_{TG}$ are unrestricted layers with a standard causal attention mask. In a {\em head-level hybrid}, in each layer, out of $H$ heads, $H_{TG}$ heads are Transformer Grammars heads with restricted attention, whereas $H - H_{TG}$ are unrestricted attention heads. This is similar to \citet{strubell-etal-2018-linguistically}'s syntactically-informed self-attention where one head is trained to attend to the syntactic parent of the current token, and \citet{qian-etal-2021-structural}'s approach where two heads are restricted.
Lastly, we consider heterogeneous ensembles of Transformer Grammars and Transformer-XLs, as compared to their homogeneous counterparts.
\section{Experiments}
We compare Transformer Grammars to two Transformer-XL baselines: (i) one trained only on the terminal word sequences (\textbf{TXL (terminals)}), and (ii) another trained on the linearized tree sequence as done by \citet{choe-charniak-2016-parsing}, henceforth denoted as \textbf{TXL (CC)}. We remark that model (i) is a word-level language model that estimates the probability of surface strings $p(\boldsymbol{x})$, whereas model (ii) is a syntactic language model that estimates $p(\boldsymbol{x,y})$ in the same fashion as TGs---albeit \emph{without} explicit notions of hierarchical syntactic structures (in contrast to TGs that feature recursive syntactic compositions through the attention masks). We additionally compare against two external models: (i) the ``generative parsing as language modeling'' approach of \citet{qian-etal-2021-structural}, which operates in a similar fashion as the linearized \citet{choe-charniak-2016-parsing} baseline, albeit with two attention heads that are specialized for syntax (though falling short of TGs' explicit recursive syntactic compositions); and (ii) the batched RNNG model of \citet{noji-oseki-2021-effective} that can scale the explicit recursive compositions of RNNGs to larger datasets, albeit on top of LSTM architectures as opposed to Transformers.
\paragraph{Datasets.} We conduct experiments on both the small-scale Penn Treebank \citep[PTB]{marcus:1993} dataset with $\sim1$ million words and the medium-scale BLLIP\textsc{-lg} dataset with $\sim40$ million words. To facilitate a fair comparison with prior work, we use the pre-processed, sentence-level PTB dataset of \citet{dyer-etal-2016-recurrent}, where unseen words and singletons in the training set are mapped to a special set of ``<UNKNOWN>'' word symbols as proposed by \citet{petrov_07}; this unseen-word mapping scheme takes into account orthographic case distinctions and morphological information.
The BLLIP\textsc{-lg} dataset was proposed by \citet{Hu:et-al:2020}, who introduced a series of PTB-style corpora of various sizes that are randomly subsampled from BLLIP 1987-89 Corpus Release 1~\cite{charniak2000bllip}, for which they generated syntactic parses with an accurate phrase-structure parser. Here we experiment solely with the largest version of the corpora, BLLIP\textsc{-lg}. We consider two settings where we: (i) model each sentence independently, which corresponds to the PTB setting and how syntactic language modeling has typically been done in the past; and (ii) model each document---each of which is composed of multiple sentences. In the second setup, we reconstructed the document boundaries from the original BLLIP corpora. For the syntactic language models (both TG and TXL (CC)), we use \citet{Hu:et-al:2020}'s available parse trees; for the non-syntactic model (TXL (terminals)), we extract the words,\footnote{As a result, we count \numprint{72455} words and \numprint{3000} sentences in the test split, and \numprint{34925} words and \numprint{1500} sentences in the validation split.} pre-unkification, from the parsed data. Using the SentencePiece~\cite{kudo-richardson-2018-sentencepiece} tokenizer with the unigram language model subword algorithm~\cite{kudo-2018-subword}, we learn a vocabulary of 32K word-pieces from the training split.
To account for training variance, for each model (Transformer Grammars, TXL (terminals), and TXL (CC)), we train 100 models of the same size (layers, parameters) with independent random initializations and training data shuffling. Each model is trained for a fixed number of gradient updates (steps); we then select the checkpoint that has the lowest validation loss. On PTB, we use 16-layer models with 12M parameters; on BLLIP\textsc{-lg}, we use 16-layer models with 252M parameters. All models are trained under the language modeling objective (whether word-level language modeling in the case of TXL (terminals), or syntactic language modeling in the case of TXL (CC) and TGs) using the Adam~\cite{DBLP:journals/corr/KingmaB14} optimizer. The full hyperparameters are provided in Appendix~\ref{hyperparameters}.
\subsection{Language modeling perplexity}
\paragraph{Experimental setup} We compute the word perplexity of the validation and test splits of the datasets under the models as $\mathrm{PPL}(\mathcal{D}) = \left( \prod_{\boldsymbol{x} \in \mathcal{D}} p(\boldsymbol{x}) \right)^{-\frac{1}{N_w}}$, where $N_w$ is the total number of words.\footnote{$+1$ for the end-of-sequence token to model the termination of the sequence. This is only needed for the word-level language models, and not needed for the syntactic language models, where generation terminates when there is exactly one completed constituent on the stack.} While perplexity can be computed directly for models operating on strings, exact computation of the perplexity is \emph{intractable} for models operating on the joint distribution of strings and syntax trees $p(\boldsymbol{x}, \boldsymbol{y})$, because:
\begin{align*}
p(\boldsymbol{x}) = \sum_{\boldsymbol{y} \in \mathcal{Y}_{\boldsymbol{x}}} p(\boldsymbol{x},\boldsymbol{y}),
\end{align*}
where $\mathcal{Y}_{\boldsymbol{x}}$ denotes the set of all phrase-structure trees that are compatible with the input string $\boldsymbol{x}$; note that the cardinality of $\mathcal{Y}_{\boldsymbol{x}}$ is exponential in the input length $|\boldsymbol{x}|$. As exact computation of $p(\boldsymbol{x})$ is intractable under TGs and TXL (CC), we compute a lower bound on $p(\boldsymbol{x})$ by approximately marginalizing over a much smaller set of proposal trees $\mathcal{Y'}_{\boldsymbol{x}}=\{\boldsymbol{y'}^{(1)}, \cdots,\boldsymbol{y'}^{(J)} \}$, where $\mathcal{Y'}_{\boldsymbol{x}}$ denotes a set of $J$ trees\footnote{We use $J=300$ unique proposal trees for each sentence going forward.} that are sampled \emph{without replacement} from a separately-trained proposal model that estimates $q(\boldsymbol{y} \mid \boldsymbol{x})$; here we use the discriminative RNNG as the proposal model. Formally:
\begin{align*}
\sum_{\boldsymbol{y'}^{(j)} \in \mathcal{Y'}_{\boldsymbol{x}}} p\big(\boldsymbol{x},\boldsymbol{y'}^{(j)}\big) < p(\boldsymbol{x})
\end{align*}
Note that the strict lower bound holds because $\mathcal{Y'}_{\boldsymbol{x}} \subset \mathcal{Y}_{\boldsymbol{x}}$; for the syntactic language models, we derive an upper bound on the perplexity through this lower bound on $p(\boldsymbol{x})$. To facilitate a fair comparison across different syntactic language models, we use the exact same proposal trees for all models.
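The marginalization bound can be made concrete with a small sketch (a numerically stabilized log-sum-exp over hypothetical per-tree joint log-probabilities; the numbers in the test are made up):

```python
import math

def log_marginal_lower_bound(joint_logps):
    """log sum_j p(x, y'^(j)) <= log p(x), from J unique proposal trees."""
    m = max(joint_logps)                     # stabilized log-sum-exp
    return m + math.log(sum(math.exp(lp - m) for lp in joint_logps))

def word_ppl_upper_bound(per_sentence_joint_logps, n_words):
    """Upper bound on word perplexity from the per-sentence lower bounds."""
    total_logp = sum(log_marginal_lower_bound(lps)
                     for lps in per_sentence_joint_logps)
    return math.exp(-total_logp / n_words)
```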
For the document-level language models, given a document that consists of $N_s$ sentences, for each sentence $i$ in the document, we would need to marginalize over all possible syntax trees for each of the $i - 1$ \emph{preceding} sentences in that document. This is even more intractable than in the sentence-level language modeling case. We approximate this by greedily picking the single most likely syntax tree under the model for the first $i - 1$ sentences, before concatenating this single-path prefix with the $J$ tree proposals for the last sentence.
\paragraph{Discussion}
We report the mean and sample standard deviation of perplexity (first 3 columns) in Table~\ref{table:main_results}, and plot their distributions in Figure~\ref{test_distributions}.
\begin{table*}[htpb]
\begin{center}
\begin{tabular}{l|ccc|c|c}
{} & \multicolumn{3}{c|}{Perplexity ($\downarrow$)} & SG ($\uparrow$) & $F_1$ ($\uparrow$) \\
{} & PTB & BLLIP sent. & BLLIP doc. & BLLIP sent. & PTB \\
\hline
TG$^{\dagger}$ & 61.8 $\pm$ 0.2 & 30.3 $\pm$ 0.5 & 26.3 $\pm$ 0.1 & \textbf{82.5 $\pm$ 1.6} & \textbf{93.7 $\pm$ 0.1} \\
TXL (CC)$^{\dagger}$ & \textbf{61.2 $\pm$ 0.3} & \textbf{29.8 $\pm$ 0.4} & \textbf{22.1 $\pm$ 0.1} & 80.2 $\pm$ 1.6 & 93.6 $\pm$ 0.1 \\
TXL (terminals) & 62.6 $\pm$ 0.2 & 31.2 $\pm$ 0.4 & 23.1 $\pm$ 0.1 & 69.5 $\pm$ 2.1 & N/A \\ \midrule
RNNG$^{\diamondsuit}$ \citep{dyer-etal-2016-recurrent} & 105.2 & N/A & N/A & N/A & 93.3 \\
PLM-Mask$^{\diamondsuit}$ \citep{qian-etal-2021-structural} & N/A & 49.18$^{\clubsuit}$ & N/A & $\approx$75$^{\spadesuit}$ & N/A
\end{tabular}
\end{center}
\caption{Results on the \textbf{test} split of the datasets. For our results (top three rows), we report the mean and sample standard deviation of the perplexity, bracketing $F_1$ score, and syntactic generalization (SG) score obtained for 100 models of each of the three model variants. $^{\dagger}$Perplexities reported for TG and TXL (CC) are upper bounds, derived from approximately marginalizing over a set of proposal trees. TXL (terminals) cannot be used to compute bracketing $F_1$ scores. The syntactic generalization test suite assumes models trained on independent sentences from BLLIP\textsc{-lg}. TXL (CC) has the best perplexity overall, whereas TG does best on the tasks that are most related to syntactic structures: SG and bracketing $F_1$ scores. In the last two rows (marked with $^{\diamondsuit}$), we report results that are taken directly from prior work. We include these results for completeness, although we remark that they are not directly comparable because prior work may use different model sizes, tokenization scheme, etc., although the training dataset for each task (\emph{i.e.,} each column) is exactly identical for all models. Result with $^{\clubsuit}$ is obtained from personal correspondence with \citet{qian-etal-2021-structural}, while result with $^{\spadesuit}$ is approximately derived from a chart (Figure~3 of \citet{qian-etal-2021-structural}).}
\label{table:main_results}
\end{table*}
\begin{figure*}[htpb]
\centering
\includegraphics[width=6in,height=4.5in]{distributions.pdf}
\caption{Distributions of the metrics of interest on the test splits of the datasets, over $n=100$ trained models with independent random initializations. All the differences in means are statistically significant ($p < 10^{-3}$).}
\label{test_distributions}
\end{figure*}
Although all three models share the exact same number of model parameters and training dataset, our very first observation is that \emph{both} the Transformer-XL baseline trained on the linearized Choe-Charniak sequences (TXL (CC)) and the proposed Transformer Grammars (TG) model achieve a lower perplexity---even though the reported perplexity is in fact only an upper bound---than a Transformer-XL trained only on terminals (TXL (terminals))---for which the perplexity calculation is exact---on PTB and the sentence-level BLLIP. This shows that joint modeling of syntactic structures and surface strings in Transformers---even \emph{without} any explicit inductive bias for making use of the syntactic information (\emph{e.g.,} TXL (CC))---is still helpful for improving perplexity. We conjecture that the next-token prediction task is made easier by the presence of non-terminals within the context, which restricts the word classes that may appear next. Although there are more such prediction events for the Choe-Charniak sequences than for the words-only model, the predictions of the non-terminals are marginalized out at evaluation time. At training time, it might seem that the learning demands placed on the model are higher, and that having to predict the syntactic structures could produce an interference and reduce the available model capacity for predicting the words. Here we do not find this to be the case, as evidenced by both syntactic language models' better perplexity.
Comparing Transformer Grammars to TXL (CC), perplexity suffers by about 0.5 point (1\%-1.7\% increase relative to the TXL (CC) model's perplexity) on the two sentence-level datasets, and by about 4 points (19\% relative increase) at the document-level on BLLIP-\textsc{lg}. We believe that TGs' restrictions on the attention mask---which use syntactic structures to create a recursive compositional bottleneck---make the learning of fine-grained lexical associations harder. As an example, in the sequence \texttt{(S (NP the blue bird NP) NP) (VP sings VP) VP) S) S)}, the tokens \texttt{the blue bird} cannot be attended to from the position of the \texttt{(VP} for which \texttt{sings} is to be predicted---only a single composed representation for the noun phrase, created where the first closing \texttt{NP)} is located, can be attended. From this perspective, the successful prediction of \texttt{sings} would require the learning and availability of syntactic (\emph{e.g.,} grammatical number) as well as non-syntactic features (animality, propensity for singing, etc.). We examine this further in \S\ref{analysis_of_log_probs}.
We attribute the large difference in perplexity at the document-level to two facts. First, for Transformer Grammars, each previous sentence appearing in the context is condensed into a \emph{single} composed representation, constituting too strong a bottleneck. Second, because we use the differences in tree depths as relative positions, the order of the previous sentences is lost in the case of TGs.
\subsection{Parse reranking}
\paragraph{Experimental setup}
As human-annotated syntax trees are available for the PTB test split, for each sentence, we compare (i) the most likely tree under the model, among the 300 proposal trees generated by a discriminative RNNG, to (ii) the gold tree. In other words, we rerank the tree samples produced by a discriminative model, using the probabilities assigned by a generative syntactic language model, as done by \citet{dyer-etal-2016-recurrent} and \citet{choe-charniak-2016-parsing}. We report bracketing $F_1$ as computed by EVALB \cite{evalb}. This task thus evaluates the model's ability to select the correct tree for a given sentence.
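Schematically, reranking selects the proposal tree with the highest joint score under the generative model; a much-simplified stand-in for EVALB's bracketing $F_1$ (ignoring duplicate spans and EVALB's special cases) might look like:

```python
def rerank(proposals):
    """proposals: list of (linearized_tree, joint_log_prob) pairs."""
    return max(proposals, key=lambda p: p[1])[0]

def brackets(actions):
    """Labeled spans (label, start, end) over word indices, read off the
    linearized action sequence."""
    stack, spans, w = [], [], 0
    for a in actions:
        if a.startswith("("):            # opening non-terminal
            stack.append((a[1:], w))
        elif a.endswith(")"):            # closing non-terminal
            label, start = stack.pop()
            spans.append((label, start, w))
        else:                            # terminal word
            w += 1
    return spans

def bracket_f1(gold, pred):
    """Toy bracketing F1 between two linearized trees (no duplicates)."""
    g, p = brackets(gold.split()), brackets(pred.split())
    match = sum(1 for s in p if s in g)
    if match == 0:
        return 0.0
    prec, rec = match / len(p), match / len(g)
    return 2 * prec * rec / (prec + rec)
```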
\paragraph{Discussion}
We report the mean and sample standard deviation of bracketing $F_1$ (last column) in Table~\ref{table:main_results}, and plot its distribution in Figure~\ref{test_distributions}. We observe that TG does slightly better (+ 0.1\%) than TXL (CC) on this task; the small difference in mean is nevertheless statistically significant (two-sided Welch's unequal variances {\em t}-test, $p < 10^{-3}$). This shows that TG is slightly better than TXL (CC) at scoring parse trees, which may be explained by its restricted attention and by its use of composed syntactic representations.
\subsection{Syntactic generalization}
\paragraph{Experimental setup}
\citet{Hu:et-al:2020} developed a series of test suites that probe the syntactic ability of language models on a large set of syntactic phenomena. The aim of this task is thus to comprehensively assess the ability of language models to syntactically generalize in a human-like fashion, which constitutes a key feature of human linguistic ability. We use the standard set of 31 test suites, grouped into 6 circuits evaluating agreement, licensing, garden-path effects, gross syntactic expectation, center embedding, and long-distance dependencies. A model succeeds on a given test case when the probabilities it assigns to specifically crafted examples obey an inequality (or conjunctions thereof) justified by how humans process language. We report the average syntactic generalization (SG) score across test suites for models trained on independent sentences from BLLIP\textsc{-lg}.
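As a schematic illustration (the actual inequalities vary by suite and may be conjoined), a single agreement item succeeds when the model is less surprised by the grammatical continuation than by the ungrammatical one:

```python
# Schematic scoring of one agreement test item (illustrative only; real
# suites use varied inequalities, and conjunctions thereof).
from math import log

def surprisal(p):
    return -log(p)  # surprisal in nats

def passes_agreement_item(p_grammatical, p_ungrammatical):
    # Success: lower surprisal on the grammatical continuation.
    return surprisal(p_grammatical) < surprisal(p_ungrammatical)

# Made-up continuation probabilities for "The keys to the cabinet (are/is)".
ok = passes_agreement_item(p_grammatical=0.03, p_ungrammatical=0.004)
```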
\paragraph{Discussion}
We report the mean and sample standard deviation of the average syntactic generalization score (fourth column) in Table~\ref{table:main_results}, and plot its distribution in Figure~\ref{test_distributions}. Based on these findings, we make two main observations.
Our first observation is that the average syntactic generalization score is substantially higher for models trained on linearized trees---whether they are TXL (CC) or Transformer Grammars---compared to a Transformer-XL baseline trained on words only. We believe that this result can be explained in three steps. First, the modeling of the structure via the non-terminals by TG and TXL (CC) can be seen, during training, as providing additional syntactic supervision. This enables them to pick---from a large number of candidate trees---good parses for a sentence. Second, as the SG score is computed from inequalities involving model surprisals on \emph{words}, we perform an approximate marginalization step for TG and TXL (CC). In this approximate marginalization, valid parses are therefore heavily weighted. Finally, when the model has a strong preference for syntactically correct parses, the tasks from the test suite become easier, accounting for these models' higher scores.
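The approximate marginalization step can be sketched as follows: summing the joint probabilities $p(x, y)$ over a set of distinct proposal trees $y$ lower-bounds $p(x)$, and the sum is taken in log space for numerical stability (the numbers below are made up):

```python
# Sketch of approximate marginalization over proposal trees: summing the
# joint probabilities p(x, y) over *distinct* trees y lower-bounds p(x),
# hence upper-bounds perplexity. Computed in log space for stability.
from math import exp, log

def logsumexp(xs):
    m = max(xs)
    return m + log(sum(exp(x - m) for x in xs))

# Hypothetical joint log probabilities log p(x, y) for three distinct trees.
joint_logps = [-20.1, -22.8, -25.4]
log_p_lower_bound = logsumexp(joint_logps)
# Adding further distinct proposal trees can only increase the bound.
```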
Our second observation is that, comparing Transformer Grammars to a Transformer-XL trained on linearized trees, our approach is most beneficial on tasks that are most related to modeling structure, \emph{i.e.,} parse reranking, as seen previously, in addition to the comprehensive syntactic generalization test suite. On both tasks, Transformer Grammars achieve higher bracketing $F_1$ and average syntactic generalization scores (two-sided Welch's unequal variances $t$-test, difference in means is statistically significant with $p < 10^{-3}$).
We believe that this is explained by the restricted attention in Transformer Grammars, thus preventing the model from attending to---and being confused by---syntactically irrelevant parts of the input, and encouraging it to learn informative composed representations of subtrees.
In Figure~\ref{sg_circuits}, we present a breakdown of the syntactic generalization results, as split by the six syntactic circuits. As expected from the average score, the Transformer-XL trained only on terminals performs worse than both the TXL (CC) and TG, except for Gross Syntactic State where it nearly reaches 100\%. TG and TXL (CC) have very similar scores on all circuits except licensing, where TG substantially outperforms TXL (CC). Altogether, these results demonstrate the benefits of recursive syntactic compositions for improving language modeling performance at syntax-sensitive benchmarks of human linguistic competence, even in the case of powerful Transformer-based language models that are trained at the medium scale. Furthermore, our findings further shed light on which syntactic constructions benefit the most from explicit recursive syntactic compositions.
\begin{figure*}[htpb]
\centering
\includegraphics[width=6in,height=4in]{sg_circuits.pdf}
\caption{Syntactic generalization results, as split by the six circuits. The error bars represent bootstrapped 95\% confidence intervals of the mean. Overall, syntactic models (TG and TXL (CC)) outperform the TXL (terminals) model that operates only on the word sequences, and TG does particularly well on the licensing circuit.}
\label{sg_circuits}
\end{figure*}
\subsection{Evolution during training}
Figure~\ref{training} shows the evolution of the syntactic generalization scores and perplexity during training, on the test split of the sentence-level BLLIP-\textsc{lg} dataset.\footnote{We used the test split for ease of comparison with the main results. As no modeling decision is based on these results, we consider this a valid use of that split.} For all three models, perplexity improves monotonically during training. The syntactic generalization score, however, increases quickly until \numprint{30000} steps for TG and TXL (CC), and then plateaus. We observe a different pattern for TXL (terminals), where the syntactic generalization score increases until much later, and only plateaus after \numprint{60000} steps. We remark that TG achieves a higher SG score than both TXLs at every point during training, and---after only \numprint{20000} steps---even reaches a higher score than what the TXL (CC) achieves after training for \numprint{100000} steps.
It is indeed intriguing that the syntactic generalization score increases quickly during training, before plateauing fairly early. It may be that: (i) the model continues to learn syntax, but the particular syntactic generalization test suite scores fail to capture this, or that (ii) further improvements would appear only by training for much longer on larger corpora~\citep{warstadt-etal-2020-learning}, or even that (iii) the model stops learning any further syntactic information beyond this point. In the latter case, it may be that the model has learned as much syntax as it can given its capacity, or that it prioritizes using its capacity to learn non-syntactic features of the data distribution to minimize its training objective. Irrespective of which possible interpretation is correct, stronger inductive biases---as featured in the TG model---enable it to reach syntactic competence levels that the baselines fail to achieve.
\begin{figure*}[htpb]
\centering
\includegraphics[width=6in,height=3in]{training.pdf}
\caption{Evolutions of the syntactic generalization (SG) score and perplexity during training, on sentence-level BLLIP-\textsc{lg}. The lines represent the average for 100 models, and the error bars---sometimes shorter than plot line width---represent bootstrapped 95\% confidence intervals of the mean. Perplexity improves throughout training, whereas SG plateaus relatively quickly.}
\label{training}
\end{figure*}
\subsection{Alternatives}
As the design space for our approach is very large, we considered and analyzed the performance of several alternatives, along with the baselines, for which we report results on the validation sets of PTB and BLLIP-\textsc{lg} in Table~\ref{table:validation_results}.
\begin{table*}[htpb]
\begin{center}
\begin{tabular}{l|ccc|c|c}
{} & \multicolumn{3}{c|}{Perplexity ($\downarrow$)} & SG ($\uparrow$) & $F_1$ ($\uparrow$) \\
{} & PTB & BLLIP sent. & BLLIP doc. & BLLIP sent. & PTB \\
\hline
TG$^{\dagger}$ & 76.1 $\pm$ 0.3 & 31.5 $\pm$ 0.6 & 26.9 $\pm$ 0.1 & 82.5 $\pm$ 1.6 & \textbf{93.0 $\pm$ 0.1} \\
TG (distinct MHA)$^{\dagger}$ & 75.6 $\pm$ 0.3 & 31.6 $\pm$ 0.5 & 26.9 $\pm$ 0.1 & 81.3 $\pm$ 1.3 & \textbf{93.0 $\pm$ 0.1} \\
TG (TXL mem.)$^{\dagger}$ & 76.1 $\pm$ 0.3 & 31.5 $\pm$ 0.3 & 26.8 $\pm$ 0.1 & \textbf{82.9 $\pm$ 1.3} & \textbf{93.0 $\pm$ 0.1} \\
TG (TXL mem. \& rel. pos.)$^{\dagger}$ & 76.3 $\pm$ 0.3 & 31.5 $\pm$ 0.6 & 26.2 $\pm$ 0.1 & 82.6 $\pm$ 1.8 & \textbf{93.0 $\pm$ 0.1} \\
TXL (CC)$^{\dagger}$ & \textbf{74.9 $\pm$ 0.3} & \textbf{30.9 $\pm$ 0.4} & \textbf{22.3 $\pm$ 0.1} & 80.2 $\pm$ 1.6 & 92.9 $\pm$ 0.1 \\
TXL (terminals) & 77.3 $\pm$ 0.3 & 32.2 $\pm$ 0.5 & 23.3 $\pm$ 0.1 & 69.5 $\pm$ 2.1 & N/A \\
\end{tabular}
\end{center}
\caption{Results on the \textbf{validation} split of the datasets. $^{\dagger}$Perplexities reported for TG (all variants) and TXL (CC) are upper bounds, derived from approximately marginalizing over a set of proposal trees. We observe similar tendencies as for the test splits---TXL (CC) has better perplexity, whereas TG performs better on the tasks that are most related to syntactic structures.}
\label{table:validation_results}
\end{table*}
Transformer Grammars feature two key operations: one that obtains a composed representation of the subtree, and another that obtains the stack representation (\emph{e.g.,} through the addition of a new element). As these can be seen as two distinct operations, we implemented an alternative which used two distinct sets of model parameters for the multi-head attention---one for the \textsc{stack} operation, and another for the \textsc{compose} operation. This separation thus enables the model to better specialize them for each operation type, albeit at the cost of doubling the number of attention parameters. We report this as \textbf{TG (distinct MHA)}. This model performs very similarly to the default TG configuration, with a (i) small perplexity gain on PTB, (ii) somewhat worse average syntactic generalization score, and (iii) otherwise near-identical performance. We do not consider this alternative further, as the increase in the number of parameters does not conclusively yield a better performance.
We also considered a variant where we used a simplified form of the memory update, which is more similar to how the original Transformer-XL operates. Instead of using the structure to avoid adding activations corresponding to positions that will not be attended to in the future into the memory, we simply shift the current segment onto the memory, and mask the positions that are not to be attended. We report this as \textbf{TG (TXL memory)}. Finally, we considered a variant where the relative position between an attending token $i$ and an attended token $j$ is their linear relative positional difference $i - j$, as in a Transformer-XL, even though this distance does not incorporate tree structures; we report this as \textbf{TG (TXL mem. \& rel. pos.)}.
Neither performs better than the default configuration. In a two-sided Welch's unequal variances {\em t}-test, we fail to reject the null hypothesis that the difference in syntactic generalization score between TG (TXL memory) and the default TG is zero ($p\textrm{-value} = 0.13$). Hence, we select the configuration where the relative position relates most naturally to the tree structure (\emph{i.e.,} differences in tree depth), and where we benefit from restrictive attentions to keep more elements in memory.
\subsection{Hybrid models}
Figure~\ref{hybrid_results} shows the syntactic generalization score and perplexity on the sentence-level BLLIP-\textsc{lg} (validation set), for the two types of hybrid TG models (\S\ref{hybrid_models}). We observe that the perplexity of the head-hybrid models (cf. \S\ref{hybrid_models}) improves as unrestricted heads are added, up to a minimum reached for 6 unrestricted heads. While the reason why keeping a few restricted heads would be beneficial is unclear, the fact that adding unrestricted heads improves perplexity is expected---the model becomes able to attend in ways that are not restricted by syntax. Conversely, the syntactic generalization score gets worse as unrestricted heads are introduced, suggesting that Transformer Grammars' performance on these tasks is not due to facilitating attention in syntactically restricted ways, but to it being prevented from attending in ways that are forbidden by the \textsc{stack}/\textsc{compose} attention.
The overall tendency is similar for layer-hybrid models, which are composed of restricted Transformer Grammar layers followed by unrestricted (much like in a standard Transformer-XL) layers. Concretely, perplexity improves with more unrestricted layers, up to a certain point, whereas the syntactic generalization score tends to get worse with more unrestricted layers. Intriguingly, having a single unrestricted layer seems to be beneficial for \emph{both} perplexity and syntactic generalization. This suggests that there is value in relaxing the attention restrictions in a limited way, as we hypothesized in \S\ref{hybrid_models}. Hence, the number of unrestricted versus restricted layers constitutes an extra hyperparameter that can be tuned for applications requiring the most human-like syntactic behavior from a model.
\begin{figure*}[htpb]
\centering
\includegraphics[width=3in,height=6in]{hh.pdf}\includegraphics[width=3in,height=6in]{lh.pdf}
\caption{Left: Results from {\em head-hybrid} models---TG where $k$ heads are unrestricted. Right: results from {\em layer-hybrid} models---TG where the top $k$ layers are unrestricted (\S\ref{hybrid_models}). Both are done on sentence-level BLLIP-\textsc{lg} (validation). Error bars (dashed lines) represent bootstrapped 95\% confidence intervals of the mean.}
\label{hybrid_results}
\end{figure*}
\subsection{Ensembles}
In order to create models that combine (i) the syntactic generalization properties of Transformer Grammars and (ii) the flexibility and better perplexity of Transformer-XLs, we consider heterogeneous ensembles composed of an equal number of models from the two classes. We compare the heterogeneous ensembles' performance to those of (i) the homogeneous ensembles, and (ii) the single models. Specifically, we consider uniform mixtures of experts, where the probability is the mean probability under the constituent models.
$$ p_{\mathrm{ens}}(a_i \mid \boldsymbol{a}_{< i}) = \frac{1}{K} \sum_{k=1}^{K} p_k(a_i \mid \boldsymbol{a}_{< i}) $$
In Figure~\ref{ensembles}, we report the mean syntactic generalization score and perplexity obtained by 20 heterogeneous ensembles (TG + TXL (CC)), 20 homogeneous ensembles of TGs, and 20 homogeneous ensembles of TXLs (CC). For comparison, we also report the mean metrics for 100 single TGs and 100 single TXLs.
\begin{figure*}[!htpb]
\centering
\includegraphics[width=5in,height=5in]{ensemble.pdf}
\caption{Syntactic generalization score and perplexity of single models and ensembles (BLLIP-\textsc{lg} sentences, validation split). Indicated next to each point is the number of models constituting the ensemble, or 1 for single models. The error bars (dashed) represent bootstrapped 95\% confidence intervals of the mean. The thin dotted lines link ensembles with the same number of constituent models. Adding more models to the ensembles pushes the Pareto frontier towards performance, although the trade-off between perplexity and SG score remains.}
\label{ensembles}
\end{figure*}
As expected, compared to single models, ensembling improves both (i) the syntactic generalization score, and (ii) the language modeling perplexity; this finding holds for both Transformer Grammars and TXLs. We observe substantial gains when going from one to two models, although the gains rapidly diminish beyond two models. Remarkably, ensembles of 10 TXLs do \emph{not} reach the SG score of even a single Transformer Grammar. This result further strengthens our earlier findings, where longer training of TXLs did not lead to better SG scores than those of Transformer Grammars.
Controlling for the same number of models in the ensemble, heterogeneous ensembles exhibit perplexities that are only very slightly worse than those of a homogeneous ensemble of Transformer-XLs. Despite this, the heterogeneous ensemble achieves a roughly 1 point higher SG score than the homogeneous ensemble of Transformer-XLs, suggesting that heterogeneous ensembles constitute a good way of combining the benefits of both classes of models. Nevertheless, for a given number of models in the ensemble, no configuration has both a higher SG score and a lower perplexity than another, suggesting that there is a trade-off between syntactic generalization and perplexity, and that it is difficult to simultaneously improve both.
For completeness, we also considered uniform products of experts $$p_{\mathrm{PoE}}(a_i \mid \boldsymbol{a}_{< i}) \propto \big(\prod_{k=1}^{K} p_k(a_i \mid \boldsymbol{a}_{< i})\big)^\frac{1}{K}$$ and found that these performed worse than uniform mixtures of experts.
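Both ensembling schemes can be sketched on toy next-token distributions (all probabilities below are made up):

```python
# Uniform mixture vs. uniform (geometric-mean) product of experts, sketched
# on made-up next-token distributions over a tiny vocabulary.

def mixture(dists):
    K = len(dists)
    return {w: sum(d[w] for d in dists) / K for w in dists[0]}

def product(dists):
    K = len(dists)
    unnorm = {w: 1.0 for w in dists[0]}
    for d in dists:
        for w in unnorm:
            unnorm[w] *= d[w] ** (1.0 / K)
    Z = sum(unnorm.values())  # renormalize, as in the proportionality above
    return {w: v / Z for w, v in unnorm.items()}

p_tg  = {"sings": 0.6, "sing": 0.1, "the": 0.3}  # hypothetical TG expert
p_txl = {"sings": 0.4, "sing": 0.3, "the": 0.3}  # hypothetical TXL expert
moe = mixture([p_tg, p_txl])
poe = product([p_tg, p_txl])
```

Note that the product of experts sharpens agreement between the constituent models, whereas the mixture simply averages them.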
\subsection{Regression analysis of probabilities}
\label{analysis_of_log_probs}
To quantify when Transformer Grammars are more or less successful than the baseline TXL (CC) at modeling the data, we consider, for each prediction of a terminal, the differences in log probabilities of the true terminal $a_i$ under the models. $$\Delta_i = \log p_{\mathrm{TG}}(a_i \mid \boldsymbol{a}_{< i}) - \log p_{\mathrm{TXL (CC)}}(a_i \mid \boldsymbol{a}_{< i})$$ To reduce variance stemming from random model initialization, we use an ensemble of Transformer Grammars and an ensemble of TXLs (CC), composed of 100 independently trained models each.
\subsubsection{Terminal frequencies}
We hypothesize that the syntactically-restricted attention pattern of Transformer Grammars---where subsequent predictions can only attend to composed representations---prevents it from learning the non-syntactic dimensions of the data distribution, such as rare co-occurrences, to the same extent as TXLs (CC). Based on this hypothesis, we expect the TGs' predictions to be worse for rare tokens.
We therefore compute the empirical unigram distribution of the terminals in the training split of BLLIP-\textsc{lg} documents, observe that it roughly follows a Zipf's law with exponent $s = -1.49$, and partition terminals into high-frequency ($f \ge 10^{-3}$), medium-frequency ($10^{-5} \le f < 10^{-3}$), and low-frequency ($f < 10^{-5}$) buckets. We then define three binary variables, indicating whether the terminal at a given position has a high, medium or low frequency, and use these in an ordinary least squares model to predict the difference in log probabilities on the BLLIP-\textsc{lg} validation split: $\Delta \sim \mathrm{HighFreq} + \mathrm{MediumFreq} + \mathrm{LowFreq}$.
We find an adjusted $R^2$ value of $0.039$, and coefficients $\beta_\mathrm{HighFreq} = -0.0488$, $\beta_\mathrm{MediumFreq} = -0.2419$, $\beta_\mathrm{LowFreq} = -0.5481$, all statistically different from 0 with a $p$-value $< 10^{-3}$.
This shows that---although TGs can predict the terminals appearing most frequently almost as well as TXLs (CC) do---they nevertheless struggle to predict rarer ones. We hypothesize that lexical co-occurrences that cross syntactic units can be learnt directly by TXLs (CC), whereas this is more difficult to do for TGs. Indeed, a consequence of \textsc{stack}/\textsc{compose} attention is that a terminal A can only attend to another terminal B if B is in A's left-context, and if B is a sibling of A. Our result suggests that this is not happening sufficiently often for Transformer Grammars to predict terminals as well as TXLs (CC). While it might be possible that the composed representations built by the model expose non-syntactic features about their constituents, the composition remains a strong bottleneck.
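A useful reading of this regression: because the three indicators are mutually exclusive and the model has no intercept, each OLS coefficient is exactly the mean of $\Delta_i$ within its bucket. A sketch on made-up data:

```python
# With mutually exclusive binary indicators and no intercept, each OLS
# coefficient equals the mean of delta within the corresponding bucket.
# The (delta, bucket) pairs below are made up for illustration.
from statistics import mean

rows = [  # (delta_i, frequency bucket of the true terminal)
    (-0.03, "high"), (-0.06, "high"), (-0.05, "high"),
    (-0.21, "medium"), (-0.27, "medium"),
    (-0.50, "low"), (-0.60, "low"),
]

def bucket_coefficients(rows):
    buckets = {}
    for delta, b in rows:
        buckets.setdefault(b, []).append(delta)
    return {b: mean(vs) for b, vs in buckets.items()}

coef = bucket_coefficients(rows)
# Ordering mirrors the reported coefficients: low < medium < high.
```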
\subsubsection{Copying}
Likewise, we hypothesize that TXLs (CC) are better at copying arbitrary words from the context than their Transformer Grammars counterparts.
We define three binary variables, indicating (i) whether the true terminal to predict appears in the context in a previous sentence, but not in the current one; (ii) whether it appears in the context in the current sentence; or (iii) whether it does not appear in the context at all. We use these in a new ordinary least squares model to predict the difference in log probabilities: $\Delta \sim \mathrm{InContextPrevSentences} + \mathrm{InContextCurSentence} + \mathrm{NotInContext}$.
We find an adjusted $R^2$ value of $0.010$, and coefficients $\beta_\mathrm{InContextPrevSentences} = -0.2871$, $\beta_\mathrm{InContextCurSentence} = -0.1003$, $\beta_\mathrm{NotInContext} = -0.1340$, all statistically different from 0 with a $p$-value $< 10^{-3}$.
This finding suggests that Transformer Grammars perform worse than TXLs (CC) in all three conditions, although the difference is most pronounced for terminals appearing in a previous sentence (but not in the current one). This observation suggests that TXLs (CC) are able to benefit from a priming effect---previously seen tokens becoming more likely---whereas this effect is more difficult to model with Transformer Grammars. The bottleneck of the \textsc{compose} attention, whereby each previous whole sentence is condensed into a single vector representation (per layer), may be too strong for fine lexical associations to be captured across sentences. At the same time, Transformer Grammars \emph{do} achieve a lower perplexity on BLLIP-\textsc{lg} documents than on BLLIP-\textsc{lg} sentences, suggesting that some coarse features are learnt and utilized.
\subsection{Computational performance}
Finally, we examine the computational cost of Transformer Grammars compared to the two TXL models. In Table~\ref{table:performance}, we report the training speed in steps---where each step conducts a gradient update---per second, on BLLIP-\textsc{lg}, on 16 cores of Google Cloud TPU v3. Given an independence assumption (sentences or documents), we use the same model size, batch size, and segment and memory lengths for all models. See Appendix~\ref{hyperparameters} for the full set of model hyperparameters.
\begin{table}[htpb]
\begin{center}
\begin{tabular}{l|cc}
{} & \multicolumn{2}{c}{Steps per second ($\uparrow$)} \\
{} & BLLIP sent. & BLLIP doc. \\
\hline
TG & \textbf{4.06} & \textbf{7.58} \\
TXL (CC) & 3.40 & 6.69 \\
TXL (terminals) & 3.44 & 6.70 \\
\end{tabular}
\end{center}
\caption{Training speed in steps/sec. Values may be compared within each column. Training a TG is slightly faster than training the two TXLs. This is because the syntactically derived quantities are computed on CPU, and fewer relative position values are possible.}
\label{table:performance}
\end{table}
Even though it is not central to our motivation, training Transformer Grammars is, in fact, slightly faster in our implementation. We attribute this speed-up to two factors. First, the computation of the quantities required by this model (attention mask, etc.) is done on CPUs, before passing them to the neural network; hence this step does not pose additional demands on the accelerator. Second, in a Transformer-XL, the relative positions are the differences in linear positions, whereas we use the differences in tree depths for TGs. As these take fewer possible values, we have fewer relative positional embeddings to multiply with the queries, speeding up the attention computation.
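To illustrate the second factor, the sketch below computes token depths for a short linearized tree and compares the number of distinct depth differences against distinct linear-position differences (a sketch only; TG's exact bookkeeping may differ):

```python
# Token depths in a linearized tree: an opening non-terminal sits at the
# current depth and increases it; a closing one decreases it first. Depth
# differences take far fewer values than linear position differences.

def token_depths(tokens):
    depths, depth = [], 0
    for tok in tokens:
        if tok.startswith("("):      # opening non-terminal, e.g. "(NP"
            depths.append(depth)
            depth += 1
        elif tok.endswith(")"):      # closing non-terminal, e.g. "NP)"
            depth -= 1
            depths.append(depth)
        else:                        # terminal
            depths.append(depth)
    return depths

toks = "(S (NP the blue bird NP) (VP sings VP) S)".split()
d = token_depths(toks)
linear_diffs = {i - j for i in range(len(toks)) for j in range(i + 1)}
depth_diffs = {d[i] - d[j] for i in range(len(toks)) for j in range(i + 1)}
print(len(linear_diffs), len(depth_diffs))  # -> 10 5
```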
\section{Related Work}
Our work is related to a long line of prior work that demonstrated several ways of augmenting neural language models with more explicit syntactic inductive biases. Within the context of RNN-based language models, \citet{dyer-etal-2016-recurrent} proposed the RNNG model that conditions subsequent predictions on fixed-size vector representations of composed constituents, which are recursively computed based on the vector representations of their children nodes. A similar form of hierarchical modeling was introduced by \citet{yogatama:2018}, who proposed a stack-structured memory in order to encourage a better modeling of hierarchical structures. Another line of work derived novel RNN update rules that were inspired by hierarchical modeling: \citet{chung_2016} segmented the RNN hidden state update through a multi-scale hierarchical recurrence, whereas \citet{shen_2019} introduced a novel cell-updating mechanism that incorporates the intuition that larger constituents contain information that changes more slowly during the course of sequential processing. Despite their different approaches, this set of prior work showcased the benefits of stronger syntactic biases within RNN-based language modeling; these benefits were primarily shown in terms of single-sentence language modeling perplexity and human-like syntactic generalization, as measured by syntax-sensitive grammaticality judgment tasks like number agreement \citep{linzen-2016} and targeted syntactic evaluations \citep{marvin:2018}, among others.
Beyond language modeling, the benefits of stronger syntactic biases for RNN-based models---either through hierarchical architectures or multi-task learning with syntactic auxiliary loss---had also been shown for other tasks, such as neural machine translation \citep{luong_2016,eriguchi_2016,eriguchi_2017,nadejde_2017,aharoni_2017}, natural language inference \citep{bowman-etal-2016-fast}, sentiment analysis \citep{socher:2013,tai:2015}, semantic role labelling \citep{he2018syntax,syntactic_scaffold}, and others. Despite the benefits of stronger syntactic biases for RNN-based models, it remains an open question whether---and to what extent---stronger syntactic biases are \emph{still} helpful for powerful Transformer models that work well at scale.
To that end, prior work has devised multiple ways of injecting syntactic inductive biases into Transformer-based encoders that observe the bidirectional context, such as BERT \citep[\emph{inter alia}]{wang2019structbert,sundararaman2019syntax,kuncoro_2020,sachan-etal-2021-syntax,bai-etal-2021-syntax}. Nevertheless, some prior work found that---within the context of Transformer encoders---stronger syntactic biases are not necessarily beneficial for some natural language understanding tasks \citep[\emph{inter alia}]{blimp,kuncoro_2020,pruksachatkun-etal-2020-intermediate,sachan-etal-2021-syntax}. One way that our work differs from this line of prior work is that we operate within the context of generative language modeling, as opposed to the bidirectional encoding setup where all tokens---or corrupted versions thereof---are assumed to be observed.
Lastly, our way of injecting syntactic biases into left-to-right Transformer language models combines two modeling traditions: (i) syntactic language modeling that estimates the joint probability of strings and phrase-structure trees \citep[\emph{inter alia}]{jurafsky_1995,chelba:2000,roark:2001,henderson:2004,mirowski_2015,choe-charniak-2016-parsing,kim_2019}, and (ii) constraining self-attention (or cross-attention) patterns in accordance with syntactic structures \citep[\emph{inter alia}]{strubell-etal-2018-linguistically,wang-etal-2019-tree,peng-etal-2019-palm,zhang2019sgnet,nguyen_2020,astudillo-etal-2020-transition}. By implementing hierarchical syntactic processing through the Transformer attention masks, Transformer Grammars retain the training scalability and parallelism afforded by conventional Transformer-based language models. Transformer Grammars are closely related to the work of \citet{qian-etal-2021-structural}, who similarly combined syntactic language modeling with syntax-based Transformer self-attention constraints through a ``generative parsing as language modeling'' approach \citep{choe-charniak-2016-parsing}. Nevertheless, Transformer Grammars differ from the work of \citet{qian-etal-2021-structural} and other prior work by virtue of a novel attention mask that explicitly implements recursive syntactic compositions, which have been shown to be beneficial at the small data and model scale within the context of RNNs \citep[\emph{inter alia}]{dyer-etal-2016-recurrent,kim_2019,wilcox_2019,futrell_2019}. Another key difference is that we explore an extension of sentence-level syntactic language modeling to document-level ones, which span longer sequences and multiple sentences. We argue that this research direction is an important one, as such longer-sequence modeling has been a key feature behind recent language modeling success \citep[\emph{inter alia}]{radford_2019,brown_2020}.
\section{Conclusion}
We introduced Transformer Grammars, a syntactic language model that implements recursive syntactic compositions through a novel attention mask. Transformer Grammars outperform various baselines and prior work on two syntax-sensitive language modeling evaluation metrics: syntactic generalization and parse reranking---all the while retaining the expressive power, excellent performance, and scalability of Transformer architectures. On sentence-level language modeling, Transformer Grammars outperform a strong Transformer-XL model that operates only on the word sequences, although we do not find our particular instantiation of recursive syntactic compositions to improve document-level language modeling perplexity. Our findings motivate the development of language models that incorporate stronger syntactic inductive biases---and yet can work well at scale at the same time---as a promising (albeit relatively under-explored) area of future NLP research.
\section*{Acknowledgements}
We would like to thank Jennifer Hu and Peng Qian for providing us with the BLLIP-\textsc{lg} reparsed data, the partial trees used for the syntactic models evaluated in~\citet{Hu:et-al:2020}, and for their answers to our many questions. We are also grateful to Kris Cao, Laura Rimell, Nando de Freitas, and our colleagues in the DeepMind Language team for their insightful comments and suggestions.
\section{Introduction}
The problem of determining whether a random graph contains a Hamilton cycle, i.e. a cycle passing through every vertex exactly once, dates back to the inception of the study of random graphs. In 1960, Erd\H{o}s and R\'enyi asked whether their eponymous random graph $G_{n, p}$, obtained by including any edge independently with probability $p$, contains a Hamilton path \cite{ErdosRenyi60}. The problem was settled by Koml\'os and Szemer\'edi \cite{KomlosSzemeredi}, who showed that
$$
\lim_{n\to\infty} \Prob{G_{n, p} \text{ is Hamiltonian}} = \lim_{n\to\infty} \Prob{\delta(G_{n, p})\ge 2},
$$
where $\delta$ denotes the minimum degree of a graph. This was strengthened to a hitting time result, independently by Bollob\'as \cite{bollobas84} and by Ajtai, Koml\'os and Szemer\'edi \cite{AjtaiKomlosSzemeredi}, stated as follows. Suppose $G_{n, m}$ is an increasing sequence of random graphs, where $G_{n, m}$ is obtained by adding a uniformly chosen edge to $G_{n, m-1}$, and for integers $k \ge 1$ let $\tau_k$ be the smallest $m$ for which $\delta(G_{n, m}) \ge k$. Then with high probability, $G_{n, \tau_2}$ is Hamiltonian. For integers $k\ge 1$, let $\mathcal{A}_k$ denote the graph property of containing $\fl{k/2}$ edge-disjoint Hamilton cycles, as well as a matching of size $\fl{n/2}$ when $k$ is odd. Bollob\'as and Frieze~\cite{BollobasFrieze} strengthened the hitting time result above by showing that $G_{n, \tau_k}\in \mathcal{A}_k$ with high probability. For a more thorough history, see Frieze's recent survey on Hamilton cycles~\cite{Frieze19}.
In recent years, some attention has been turned to random subgraphs of a host graph $\Gamma_n$. The random subgraph $\Gamma_{n, p}$ is obtained by including any edge of $\Gamma_n$ independently with probability $p$, and the random graph process $\Gamma_{n, m}$ on $\Gamma_n$ is obtained by ordering the edges of $\Gamma_n$ uniformly at random. An early example is the random bipartite graph $G_{n, n, p}$, obtained by letting $\Gamma_n = K_{n, n}$. Frieze~\cite{Frieze85} determined the threshold in this case. Frieze and Krivelevich~\cite{FriezeKrivelevich02} showed that if $\Gamma_n$ is in a certain class of pseudorandom graphs (specified below) then $\Gamma_{n, \tau_2} \in \mathcal{A}_2$ whp. The author~\cite{Johansson20} showed that the same holds when $\delta(\Gamma_n) \ge (1/2+\varepsilon)n$ for some constant $\varepsilon > 0$ (and that it need not hold when $\delta(\Gamma_n) = n/2$). Alon and Krivelevich~\cite{AlonKrivelevich20} showed that $\Gamma_{n, \tau_{2k}} \in \mathcal{A}_{2k}$ whp for any $k=O(1)$ in three dense classes of host graphs, which include some pseudorandom graphs and graphs with $\delta(\Gamma_n) \ge (1/2+\varepsilon)n$.
Both \cite{FriezeKrivelevich02} and \cite{AlonKrivelevich20} consider pseudorandom graphs known as $(n, d, \mu)$-graphs. A graph $\Gamma$ is an $(n,d,\mu)$-graph if it has $n$ vertices, every vertex has degree $d$, and the second largest eigenvalue of its adjacency matrix is at most $\mu$ in absolute value. We strengthen both results in the following special case of our main theorem. Let $\tau_{\mathcal{A}_k}$ be the smallest $m$ for which $\Gamma_{n, m}\in \mathcal{A}_k$.
\begin{theorem*}[Theorem~\ref{thm:1}, pseudorandom graph case]
Let $k = O(1)$. Suppose $\Gamma_n$ is an $(n, d, \mu)$-graph with $\mu \le d(d/n)^\alpha$ for some constant $\alpha > 0$. Then the random graph process on $\Gamma_n$ has $\tau_{\mathcal{A}_k} = \tau_k$ with high probability.
\end{theorem*}
In~\cite{FriezeKrivelevich02} it was required that $\mu = o(d^{5/2} / (n\ln n)^{3/2})$, which only holds if $d \gg n^{3/4}(\ln n)^3$, while~\cite{AlonKrivelevich20} asked that $\mu = O(d^2/n)$ and $d = \Omega(n(\ln\ln n) / \ln n)$. This result strengthens both, with an implicit degree bound of $d = n^{\Omega(1)}$ owing to the fact that $\mu = \Omega(d^{1/2})$ (see e.g.~\cite{Nilli91}).
Our full result concerns a more general inhomogeneous random graph. Suppose $P$ is a symmetric $n\times n$ matrix with entries $P(u, v)\in [0, 1]$. We then define a random graph $G_{n, P}$ by independently including each edge $\{u, v\}$ with probability $P(u, v)$. If $R$ is a symmetric $n\times n$ matrix with $R(u, v) \ge 0$ for all $uv$, we also define a random graph process $G_{n, R}(t)$ as follows. Each pair $\{u, v\}$ is independently assigned a random value $E(u, v)$, exponentially distributed with rate $R(u, v)$, taken to equal $\infty$ if $R(u, v) = 0$. We let
$$
G_{n, R}(t) = ([n], \{uv : E(u, v) \le t\}).
$$
Note that $G_{n, R}(t)$ equals $G_{n, P}$ in distribution when $P(u, v) = 1-e^{-R(u, v)t}$. Note that this framework generalizes $\Gamma_{n, p}$ and $\Gamma_{n, m}$ (now in continuous time), re-obtained by letting $R$ be the adjacency matrix of $\Gamma_n$. Anastos, Frieze and Gao~\cite{AnastosFriezeGao19} considered Hamiltonicity in the stochastic block model, which is $G_{n, P}$ with $P$ in a specific class of block matrices.
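As a concrete illustration of this coupling (our own sketch, not part of the formal development; all names are ours), the exponential clocks can be sampled once and reused for every $t$, which is exactly what makes $(G_{n, R}(t))_{t\ge 0}$ a nested process:

```python
import numpy as np

def sample_clocks(R, rng):
    """Assign each pair {u, v} an independent Exp(R(u, v)) clock E(u, v);
    rate 0 means the edge never appears (clock = infinity)."""
    n = R.shape[0]
    E = np.full((n, n), np.inf)
    for u in range(n):
        for v in range(u + 1, n):
            if R[u, v] > 0:
                # numpy's exponential() takes the scale 1/rate
                E[u, v] = E[v, u] = rng.exponential(1.0 / R[u, v])
    return E

def graph_at(E, t):
    """Edge set of G_{n, R}(t): all pairs whose clock has rung by time t."""
    n = E.shape[0]
    return {(u, v) for u in range(n) for v in range(u + 1, n) if E[u, v] <= t}
```

Since the same clocks serve every $t$, the sampled graphs are nested: `graph_at(E, s)` is contained in `graph_at(E, t)` whenever $s \le t$, and $\tau_k$ is simply the largest value, over vertices $u$, of the $k$-th smallest clock incident to $u$.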
For vertex sets $A, B$ we let $R(A, B) = \sum_{u\in A, v\in B} R(u, v)$. Let $d_R(u) = R(u, V)$ and $d_R(A) = R(A, V)$. A key tool in our proof is the random walk induced by $R$, which jumps from $u$ to $v$ with probability $M(u, v) = R(u, v) / d_R(u)$. The transition matrix $M$ has real eigenvalues $1 = \lambda_1 \ge \lambda_2 \ge\dots\ge \lambda_n\ge -1$ (see e.g.~\cite{LevinPeres17}), and we let $\lambda(R) = \max\{|\lambda_2|, |\lambda_n|\}$. Let $\sigma(u) = d_R(u) / d_R(V)$ be the stationary distribution of the random walk, and define $\|M\| = \max_{u, v} M(u, v)$.
\begin{definition}\label{def:degrees}
Let $\mathrm{RM}$ be the set of rate matrices $R$ with transition matrix $M$ such that there exist constants $\alpha \in [0, 1/2)$, $\gamma > 0$, $b > 0$ such that $\lambda(R) = o(1)$ and
\begin{equation}\label{eq:lamcond}
\lambda(R) \le (n\|M\|)^{-\alpha - \gamma},
\end{equation}
and $R$ satisfies the following:
\begin{enumerate}[(a)]
\item\label{item:scale} there exists a $d = d(n)$ such that $d_R(u) \ge d$ for all $u$, and $d_R(V) \le bdn$.
\item\label{item:powerlaw} for any $A\subseteq V$,
\begin{equation}\label{eq:powerlaw}
\sigma(A) = \frac{d_R(A)}{d_R(V)} \le b\bfrac{|A|}{n}^{1-2\alpha},
\end{equation}
\item $\|R\| \le dn^{-\gamma}$.
\end{enumerate}
\end{definition}
We state our main result.
\begin{theorem}\label{thm:1}
Let $k = O(1)$. If $R\in \mathrm{RM}$, then whp $G_{n, R}(t)$ satisfies
$$
\tau_{k} = \tau_{\mathcal{A}_{k}}.
$$
\end{theorem}
For any symmetric non-negative matrix $R$ on $V$, let
$$
\gamma_k(R) = \sum_{u\in V} d_R(u)^{k-1} e^{-d_R(u)}.
$$
Let $\mathrm{RM}(1)$ be the set of $R\in \mathrm{RM}$ with $\gamma_1(R) = 1$. Note that for any $x > 0$, the graphs $G_{n, R}(\tau_k)$ and $G_{n, xR}(\tau_k)$ are equal in distribution, so it is enough to prove Theorem~\ref{thm:1} for $R\in \mathrm{RM}(1)$.
\begin{theorem}\label{thm:pi}
Let $k = O(1)$. Suppose $P\in \mathrm{RM}$ has $\gamma_k(P) \to \gamma_k \in [0,\infty]$. Then
$$
\lim_{n\to\infty} \Prob{G_{n, P}\in \mathcal{A}_{k}} = e^{-\gamma_k}.
$$
\end{theorem}
Note that if $P$ has constant row sums $d_P(u) = d = \ln n + (k-1)\ln\ln n + c_{n, k}$, then $\gamma_k = \exp\{-\lim_n c_{n, k}\}$.
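This remark is an exact identity rather than merely an asymptotic one: with constant row sums $d = \ln n + (k-1)\ln\ln n + c$ one gets $\gamma_k(P) = nd^{k-1}e^{-d} = (d/\ln n)^{k-1}e^{-c} \to e^{-c}$. A quick numerical sanity check (our own illustration):

```python
import math

def gamma_k(n, k, c):
    """gamma_k(P) for constant row sums d = ln n + (k-1) ln ln n + c."""
    d = math.log(n) + (k - 1) * math.log(math.log(n)) + c
    return n * d ** (k - 1) * math.exp(-d)

# Exact identity: gamma_k = (d / ln n)^{k-1} * e^{-c}, so gamma_k -> e^{-c};
# the convergence is slow, at rate roughly ln ln n / ln n.
```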
\section{Proof outline}\label{sec:Hdef}
Let us first discuss the overarching proof idea. Traditionally, many proofs of Hamiltonicity in random graphs rely on finding so-called {\em booster edges} for a graph $G$, which are edges $e\notin G$ such that $G\cup \{e\}$ is closer to being Hamiltonian, typically meaning that it contains a longer path than $G$ does. If random edges are added to $G$, one argues that some booster is likely to be added.
Montgomery~\cite{Montgomery19} more generally defined boosters as sets $T$ of edges whose addition gets $G$ closer to Hamiltonicity, and used sets $|T| \le 2$. A similar idea was used earlier in~\cite{FriezeJohansson17}. Booster pairs were also used by Alon and Krivelevich in their recent paper~\cite{AlonKrivelevich20}. In this paper we move to general boosters, i.e. edge sets $T$ of any (constant) size whose addition improves $G$. These are found using random alternating walks.
\subsection{Random subgraphs}\label{sec:Hintro}
For proving Hamiltonicity, the most important property of $G_{n, R}(\tau_k)$ is expansion (see Lemma~\ref{lem:H} below for the definition). A major drawback of $G_{n, R}(\tau_k)$ for our purposes is its high average degree, with most vertices having degree $\Omega(\ln n)$. We therefore define a random subgraph $H\subseteq G_{n, R}(\tau_k)$, which retains expansion and connectivity with high probability while containing only $O(n)$ edges. We do not show that $H$ itself is Hamiltonian, but the graph will be important to our proof.
Recall the construction of $G_{n, R}(t)$, in which each edge $\{u, v\}$ is included at a random time $E(u, v)$; write $d_t(u)$ for the degree of $u$ in $G_{n, R}(t)$. Let $D\ge k$ be a constant integer and let
$$
T_D(u) = \inf\{t > 0 : d_t(u) \ge D\}
$$
be the random time at which $u$ attains degree $D$. Define $H(t)\subseteq G_{n, R}(t)$ by including any edge $\{u, v\}$ with $E(u, v) \le t$ and $E(u, v) \le \max\{T_D(u), T_D(v)\}$. Let $H = H(\tau_k)$. In other words, an edge is included in $H$ if it is among the first $D$ edges attached to one of its endpoints in $G_{n, R}(\tau_k)$.
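Operationally (a sketch of ours, assuming a precomputed clock matrix `E` as in the process definition above), $H$ keeps each vertex's $D$ earliest incident edges:

```python
import numpy as np

def sparse_subgraph(E, t, D):
    """H(t): keep edge {u, v} with E[u, v] <= t if it is among the first D
    edges (by clock value) incident to u or to v in G_{n, R}(t)."""
    n = E.shape[0]
    keep = set()
    for u in range(n):
        # clocks of u's edges that have rung by time t, smallest first
        nbrs = sorted((E[u, v], v) for v in range(n) if v != u and E[u, v] <= t)
        for _, v in nbrs[:D]:
            keep.add((min(u, v), max(u, v)))
    return keep
```

Each vertex retains its $\min(D, d_t(u))$ earliest edges, so $d_{H(t)}(u) \ge \min(D, d_t(u))$ while $|E(H(t))| \le Dn$, matching the $O(n)$ edge count claimed above.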
For a graph $G = (V, E)$ and $A\subseteq V$ we let $N(A) = \{u\notin A : \{u, v\}\in E \text{ for some } v\in A\}$ and $\widehat N(A) = A\cup N(A)$. The following lemma is proved in Section~\ref{sec:Hprops}.
\begin{lemma}\label{lem:H}
Suppose $R\in \mathrm{RM}$, and let $k = O(1)$. For a large enough constant $D$, the following holds.
\begin{enumerate}[(i)]
\item (Light tail) Let $\theta$ tend to infinity with $n$ and let $S_{\theta}$ be the set of $u$ with $d_H(u) \ge \theta$ or $d_R(u) \ge \theta d$. Then with high probability, $|\widehat N_H(S_{\theta})| = o(n)$.
\item (Expansion) There exists a constant $\beta = \beta(R, k) > 0$ such that with high probability, every vertex set $A$ with $|A| < \beta n$ satisfies $|N_{H}(A)| \ge k|A|$.
\end{enumerate}
\end{lemma}
Let $\mathrm{SE}_k(R)$ (``sparse expanders'') be the set of graphs satisfying (i)--(ii). Note that if $G\in \mathrm{SE}_k$ and $\Delta(F) \le \ell < k$ then $G\setminus F \in \mathrm{SE}_{k-\ell}$ and $G\cup F\in \mathrm{SE}_k$. Note also that $\mathrm{SE}_k \subseteq \mathrm{SE}_\ell$ for $\ell \le k$.
For $t > 0$ let
\begin{equation}\label{eq:SMLdef}
\mathbb{S}(t) = \{u : T_D(u) > t\} = \{u : d_t(u) < D\},
\end{equation}
and $\mathbb{L}(t) = V\setminus \mathbb{S}(t)$. For $0\le t_0 \le t_1$ define a random graph $G_{n, R}^*(t_0, t_1)$ by including any edge $\{u, v\}$ with $E(u, v)\le t_1$, and any $\{u, v\}$ with one endpoint in $\mathbb{S}(t_0)$ and $E(u, v) \le \tau_k$. Then $G_{n, R}(t_1)\subseteq G_{n, R}^*(t_0, t_1) \subseteq G_{n, R}(\tau_k)$ whenever $t_1 \le \tau_k$, and $H\subseteq G_{n, R}^*(t_0, t_1)$ for any $0 < t_0\le t_1$. Suppose $t_0 < t_1 < t_2$ and define
$$
G_0 = G_{n, R}^*\left(t_0, t_0\right), \quad G_1 = G_{n, R}^*\left(t_0, t_1\right),\quad G_2 = G_{n, R}^*\left(t_0, t_2\right).
$$
For $i=0,1,2$ let $\mathcal{F}_i$ denote the $\sigma$-algebra generated by $G_{n, R}^*(t_0, t_0)$ together with the clock values $E(u, v)$ of its edges, and by the edge set of $G_{n, R}^*(t_0, t_i)$ (without clock values). Then $H$ is $\mathcal{F}_i$-measurable for all $i$, and the following lemma lets us jump between $G_1$ and $G_2$.
\begin{lemma}\label{lem:Gstar}
Suppose $R\in \mathrm{RM}(1)$. Let $0 < t_0 < t_1 < t_2$ and $G_i = G_{n, R}^*(t_0, t_i)$ for $i=0,1,2$. Let $\mathcal{L}_2\in \mathcal{F}_2$. Then for any set $F$ of edges
\al{
\Prob{F\subseteq G_2 \mid \mathcal{F}_1} = \prod_{\substack{\{u, v\}\in F\setminus G_1 \\ u,v\in \mathbb{L}(t_0)}} \left(1 - e^{-R(u, v)(t_2-t_1)}\right), \label{eq:add} \\
\Prob{F\subseteq G_1 \mid \{F\subseteq G_2\}\cap \mathcal{L}_2} \ge \left(\frac{t_1-t_0}{t_2-t_0}\right)^{|F|}. \label{eq:remove}
}
\end{lemma}
\begin{proof}
Any edge in $G_2\setminus G_1$ is fully contained in $\mathbb{L}(t_0)$. Conditional on $G_0$ and $\mathbb{S}(t_0)$, the clock values $E(u, v)$ of the pairs $\{u, v\}\notin G_0$ with $u, v\in \mathbb{L}(t_0)$ are independent exponential random variables with individual rates $R(u, v)$, conditioned to be at least $t_0$. Then \eqref{eq:add} follows from the memoryless property of exponential random variables. For \eqref{eq:remove}, since $x\mapsto 1-e^{-rx}$ is concave, the ratio $(1-e^{-rx})/x$ is non-increasing in $x$, so each factor below is at least $(t_1-t_0)/(t_2-t_0)$:
\begin{multline}
\Prob{F\subseteq G_1 \mid \{F\subseteq G_2\}\cap \mathcal{L}_2} = \prod_{\{u, v\}\in F\setminus G_0}\frac{e^{-R(u, v)t_0} - e^{-R(u, v)t_1}}{e^{-R(u, v)t_0} - e^{-R(u, v)t_2}} \\
= \prod_{\{u, v\}\in F\setminus G_0}\frac{1 - e^{-R(u, v)(t_1-t_0)}}{1 - e^{-R(u, v)(t_2-t_0)}} \ge \left(\frac{t_1-t_0}{t_2-t_0}\right)^{|F\setminus G_0|} \ge \left(\frac{t_1-t_0}{t_2-t_0}\right)^{|F|}.
\end{multline}
\end{proof}
\subsection{The high-level argument}\label{sec:highlevel}
Suppose $R\in \mathrm{RM}(1)$ and let $t_0 = 1/4, t_1 = 1/2, t_2 = 3/4$. Define $G_0, G_1, G_2$ as in the previous section. We will show that $G_2 \in \mathcal{A}_k$ whp. In Section~\ref{sec:degrees} we show that $\tau_k > 3/4$ whp, and since $G_2 \subseteq G_{n, R}(\tau_k)$ when $\tau_k > 3/4$, it follows that
\begin{equation}\label{eq:actualpunchline}
\Prob{G_{n, R}(\tau_k) \in \mathcal{A}_k} \ge \Prob{G_2 \in \mathcal{A}_k\text{ and } \tau_k > 3/4} = 1-o(1).
\end{equation}
Say that $F$ is a $k$-graph if it can be written as the disjoint union of $F_1, \dots, F_{\fl{k/2}}$ where $F_i$ is a path or a cycle for all $i$, as well as a matching $F_0$ when $k$ is odd. Define
$$
s_k(G) = \max\{|F| : F\subseteq G \text{ a $k$-graph}\},
$$
so that $G\in \mathcal{A}_k$ if and only if $s_k(G) = \fl{kn / 2}$.
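For intuition in the case $k = 1$ (our own illustration): $s_1(G)$ is the maximum matching size, so $G\in\mathcal{A}_1$ exactly when $s_1(G) = \fl{n/2}$. A brute-force computation on tiny graphs:

```python
from itertools import combinations

def s1(n, edges):
    """Brute-force s_1(G): the maximum number of pairwise disjoint edges."""
    for r in range(n // 2, 0, -1):
        for F in combinations(edges, r):
            if len({v for e in F for v in e}) == 2 * r:  # F is a matching
                return r
    return 0
```

For instance, the path $P_4$ has a perfect matching (so it lies in $\mathcal{A}_1$), while the star $K_{1,3}$ does not.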
For $i=1,2$ let $\mathcal{M}_i^\ell$ be the event that $s_k(G_i) = \ell$. Recall the definition of $\mathbb{S}(t)$ from \eqref{eq:SMLdef}, and let
$$
\mathcal{L} = \{H\in \mathrm{SE}_k\} \cap \{|\mathbb{S}(t_0)| = o(n)\},
$$
and note that $\mathcal{L}$ is $\mathcal{F}_0$-measurable. Then $\mathcal{M}_i^\ell\cap \mathcal{L} \in \mathcal{F}_i$ for $i=1,2$, and Lemma~\ref{lem:Gstar} gives
$$
\Prob{\mathcal{M}_1^\ell \mid \mathcal{M}_2^\ell \cap \mathcal{L}} \ge \min_{\substack{F \text{ $k$-graph}}} \Prob{F\subseteq G_1\mid \mathcal{L}\cap \{F\subseteq G_2\}} \ge \bfrac12^{kn/2}.
$$
Then for any $\ell < \fl{kn / 2}$,
\al{
\Prob{\mathcal{M}_2^\ell \cap \mathcal{L}} & = \frac{\Prob{\mathcal{M}_1^\ell \cap \mathcal{M}_2^\ell\cap \mathcal{L}}}{\Prob{\mathcal{M}_1^\ell \mid\mathcal{M}_2^\ell\cap\mathcal{L}}} \le \frac{\Prob{\mathcal{M}_2^\ell \mid \mathcal{M}_1^\ell\cap \mathcal{L}}}{2^{-kn/2}}. \label{eq:bayes}
}
Suppose we are able to prove that for any $\ell < \fl{kn/2}$,
\begin{equation}\label{eq:numeratorbound}
\Prob{\mathcal{M}_2^\ell \ \middle|\ \mathcal{M}_1^\ell\cap \mathcal{L}} \le e^{-\Omega(n\sqrt{\ln n})}.
\end{equation}
Then plugging \eqref{eq:numeratorbound} into \eqref{eq:bayes} gives
\al{
\Prob{G_2 \notin \mathcal{A}_k} & \le \Prob{\overline{\mathcal{L}}} + \sum_{\ell < \fl{kn/2}} \Prob{\mathcal{M}_2^\ell\cap\mathcal{L}} \\
& \le \Prob{\overline{\mathcal{L}}} + kne^{-\Omega(n\sqrt{\ln n}) + O(n)} = \Prob{\overline{\mathcal{L}}} + o(1).
}
In Lemma~\ref{lem:degreebound} we will show that $|\mathbb{S}(t_0)| = o(n)$ whp. Together with Lemma~\ref{lem:H}, this shows that $\mathcal{L}$ occurs whp. As noted in \eqref{eq:actualpunchline}, this together with a proof that $\tau_k > 3/4$ whp (Lemma~\ref{lem:threshold} below) shows that $G_{n, R}(\tau_k) \in \mathcal{A}_k$ whp.
It remains to prove \eqref{eq:numeratorbound}. The graph $G_1$ contains $H$ by construction. If $\mathcal{M}_1^\ell\cap \mathcal{L}$ holds then $G_1$ contains some $k$-graph $F$ of size $\ell$. If $k$ is even, suppose $F = F_1\cup \dots\cup F_{k/2}$ for some disjoint paths and cycles $F_i$. Suppose without loss of generality that $|F_1| < n$. Then $G_1$ contains the graph $G = (H\cup F_1) \setminus (F_2\cup \dots \cup F_{k/2}) \in \mathrm{SE}_2$. Let $G^p$ be the graph obtained by independently adding any edge $\{u, v\}$ to $G$ with probability $p(u, v)$, where $p$ is some symmetric function. Letting
$$
p(u, v) = \left\{\begin{array}{ll}
0, & u\in \mathbb{S}(t_0)\text{ or } v\in \mathbb{S}(t_0), \\
0, & \{u, v\} \in F_2\cup\dots\cup F_{k/2}, \\
\frac13R(u, v), & \text{ otherwise},
\end{array}\right.
$$
we have $G^p\subseteq G_2$ by Lemma~\ref{lem:Gstar}. If $s_2(G^p) > s_2(G)$ then $s_k(G_2) \ge s_k(G_1\cup G^p) > s_k(G_1)$, since $G^p$ is disjoint from $F_2\cup\dots\cup F_{k/2} \subseteq G_1$.
If $k$ is odd and $F = F_1\cup\dots\cup F_{(k+1)/2}$, assume without loss of generality that $\Delta(F_1) \le i$ and $|F_1| < in/2$ for some $i\in \{1, 2\}$. Then $G = (H\cup F_1) \setminus (F_2\cup\dots\cup F_{(k+1)/2}) \in \mathrm{SE}_i$, and $s_i(G^p) > s_i(G)$ implies $s_k(G_2) > s_k(G_1)$.
So, \eqref{eq:numeratorbound} follows from the following lemma.
\begin{lemma}\label{lem:thebiglemma}
Let $i\in \{1, 2\}$. Suppose $R\in \mathrm{RM}(1)$ and $G = (V, E)\in \mathrm{SE}_i(R)$ with $s_i(G) < in/2$, and suppose $p(u, v) \ge R(u, v) / 3$ for all $\{u, v\}\notin E$, where $\sum_{\{u, v\}\in E} R(u, v) = o(n\ln n)$. Then
\al{
\Prob{s_i(G^p) = s_i(G)} \le e^{-\Omega(n\sqrt{\ln n})}.
}
\end{lemma}
The remainder of the paper is mainly devoted to proving Lemma~\ref{lem:H} (Section~\ref{sec:Hprops}) and Lemma~\ref{lem:thebiglemma} (Sections~\ref{sec:alternatingwalks} through~\ref{sec:alternatingproofs}).
\section{Preliminaries}\label{sec:prelims}
We state some preliminary, for the most part standard, results, and leave the proofs for Section~\ref{sec:prelimproofs}.
\subsection{Degrees}\label{sec:degrees}
Recall that $d_R(u) = \sum_v R(u,v)$. We will often assume that $\gamma_1(R) = 1$, and note that this implies that $d = \Theta(\ln n)$ where $d = \min d_R(u)$.
\begin{lemma}\label{lem:degreebound}
Let $R\in \mathrm{RM}(1)$.
\begin{enumerate}[(i)]
\item For any integer $D \ge 1$, $u\in V$ and $S\subseteq V$ with $|V\setminus S| = O(1)$, and $t = \Omega(1)$, the number $e_t(u, S)$ of edges between $u$ and $S$ in $G_{n, R}(t)$ satisfies
$$
\Prob{e_t(u, S) < D} = \exp\{-td_R(u) + O(\ln d_R(u))\}.
$$
\item Let $\mathbb{S}(t)$ denote the set of vertices in $G_{n, R}(t)$ with degree less than $D$. If $t = \Omega(1)$ then $|\mathbb{S}(t)| = o(n)$ whp.
\end{enumerate}
\end{lemma}
Recall the definition $\gamma_k(P) = \sum_u e^{-d_P(u)} d_P(u)^{k-1}$.
\begin{lemma}\label{lem:threshold}
Let $k\ge 1$. Suppose $P\in \mathrm{RM}$ has $\gamma_k(P) \to \gamma_k \in [0,\infty]$. Then
$$
\lim_{n\to \infty} \Prob{\delta(G_{n, P}) \ge k} = e^{-\gamma_k}.
$$
If $R\in \mathrm{RM}(1)$ and $\varepsilon \gg \frac{\ln \ln n}{\ln n}$, then $G_{n, R}(t)$ has $1-\varepsilon < \tau_k < 1 + \varepsilon$ whp.
\end{lemma}
We also note the following simple consequence of conditions~(\ref{item:scale}) and~(\ref{item:powerlaw}) of Definition~\ref{def:degrees}.
\begin{lemma}\label{lem:sublinear}
Suppose $R\in \mathrm{RM}$ has stationary distribution $\sigma$, and $A\subseteq V$. Then $\sigma(A) = o(1)$ if and only if $|A| = o(n)$.
\end{lemma}
\subsection{The expander mixing lemma}
For a matrix $A$ indexed by $V$ and sets $S, T\subseteq V$ we write
$$
A(S, T) = \sum_{u\in S, v\in T} A(u, v),
$$
noting that pairs $(u,v)$ with $u,v\in S\cap T$ are counted twice. We will use the following version of the well-known Expander Mixing Lemma~\cite{AlonChung88}.
\begin{lemma}\label{lem:EML}
Suppose $R\in \mathrm{RM}$ has transition matrix $M$ with stationary distribution $\sigma$. Then for any $A, B\subseteq V$,
\al{
M(A, B) = |A|\sigma(B) + O\left(\lambda(R)\sqrt{n|A|\sigma(B)}\right).\label{eq:fullEML}
}
In particular, the following holds.
\begin{enumerate}[(i)]
\item $R(A, B) = \Omega(dn)$ for any $A, B\subseteq V$ with $|A|, |B| = \Omega(n)$.
\item\label{item:kappa} There exists a constant $c > 0$ such that
$$
\left|\left\{u\in A : M(u, A) \ge \bfrac{|A|}{n}^c\right\}\right| = o(|A|) \quad \text{for all $|A| \le n/2$}.
$$
\end{enumerate}
\end{lemma}
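The non-asymptotic inequality behind \eqref{eq:fullEML} is the classical Alon--Chung bound: for a $d$-regular graph, $|e(A, B) - d|A||B|/n| \le \mu\sqrt{|A||B|}$, counting ordered pairs as above. It can be checked exhaustively on a small example; the script below (our own illustration) uses the Petersen graph, where $d = 3$ and $\mu = 2$:

```python
import numpy as np

# Petersen graph: outer 5-cycle, spokes, inner pentagram
n, d = 10, 3
A = np.zeros((n, n), int)
for i in range(5):
    for (u, v) in [(i, (i + 1) % 5), (i, i + 5), (5 + i, 5 + (i + 2) % 5)]:
        A[u, v] = A[v, u] = 1

mu = sorted(abs(np.linalg.eigvalsh(A)))[-2]  # second largest |eigenvalue|

def e(S, T):
    """Ordered pairs (u, v) with u in S, v in T, uv an edge (the paper's R(S, T))."""
    return sum(A[u, v] for u in S for v in T)
```

Since the inequality is exact rather than asymptotic, it holds for every pair of subsets, not just the sampled ones.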
\subsection{Probabilistic bounds}
We consider the following lemma well-known, and state it without proof.
\begin{lemma}\label{lem:exprank}
Suppose $X_1,\dots,X_m$ are independent exponential random variables with finite respective rates $r_1,\dots,r_m > 0$. Let $r = r_1+\dots + r_m$ and suppose $r_i \le \varepsilon r$ for all $i$, for some $\varepsilon > 0$. Let $X_{(D)}$ be the $D$-th smallest value in the family. Then for $D \le 1/(2\varepsilon)$,
$$
\Prob{X_i \le X_{(D)}} \le 2D\frac{r_i}{r}.
$$
\end{lemma}
We will also use the following Chernoff bounds.
\begin{lemma}\label{lem:chernoff}
Suppose $X$ is a finite set and let $\sigma_x \in [0, 1]$ for all $x\in X$. Let $\mu = \sum_{x\in X} \sigma_x$.
\begin{enumerate}[(i)]
\item Suppose $S\subseteq X$ is a random set obtained by including any $x\in X$ independently with probability $\sigma_x$. If $\phi \le \frac12 \mu$, then
\al{
\Prob{|S| \le \phi} & \le \exp\left\{-\frac{\mu}{8}\right\}.\label{eq:chernoff2}
}
\item Suppose $T\subseteq X$ is a random set with $\Prob{A\subseteq T} \le \prod_{x\in A} \sigma_x$ for all $A\subseteq X$. If $\phi \ge 5\mu$, then
\al{
\Prob{|T| \ge \phi} \le \bfrac{\mu}{\phi}^{\phi/2}. \label{eq:chernoff4}
}
\end{enumerate}
\end{lemma}
\begin{proof}
Note that the condition on $T$ implies that $\E{|T|^\ell} \le \E{|S|^\ell}$ for all $\ell\ge 0$. The bounds then follow by standard methods, see e.g. \cite[Section 21.4]{FriezeKaronski}.
\end{proof}
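In the special case of equal inclusion probabilities, $|S|$ is binomial and the lower-tail bound \eqref{eq:chernoff2} can be compared with the exact tail; an illustrative script of ours:

```python
import math

def binom_lower_tail(m, p, phi):
    """P(Bin(m, p) <= phi), computed exactly from the binomial pmf."""
    return sum(math.comb(m, i) * p**i * (1 - p)**(m - i) for i in range(phi + 1))

m, p = 100, 0.5
mu = m * p            # E|S| = 50
phi = int(mu / 2)     # phi = 25 <= mu/2, so the lemma applies
```

Here the exact tail is roughly $3\cdot 10^{-7}$, comfortably below the bound $e^{-\mu/8} \approx 2\cdot 10^{-3}$.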
\subsection{A matrix lemma}
Suppose $I$ is a totally ordered set, $A$ a set and ${\bf a} = (a_1,\dots,a_\ell)\in A^\ell$ a sequence in $A$. Say that a function $f : I\to A$ {\em respects} ${\bf a}$ if the sequence $(f(i))_{i\in I}$ is a subsequence of ${\bf a}$.
\begin{lemma}\label{lem:observation}
Suppose $\tau : I\times J \to A$ and ${\bf a} = (a_1,\dots,a_\ell)\in A^\ell$, $\ell \ge 1$. Suppose $I$ can be totally ordered so that $i\mapsto \tau(i, j)$ respects ${\bf a}$ for each $j\in J$. Let $\pi_I, \pi_J$ be finite measures on $I, J$, respectively. Then there exist $S\subseteq I, T\subseteq J$ with $\pi_I(S) \ge \pi_I(I) / \ell$ and $\pi_J(T) \ge \pi_J(J) / \ell$, such that $\tau$ is constant on $S\times T$.
\end{lemma}
\subsection{Mixing in simple random walks}\label{sec:onestep}
Let $M$ be a transition matrix with stationary distribution $\sigma$ with $\sigma(u) > 0$ for all $u\in V$. For a probability measure $\pi$ on $V$, define
$$
\mu_\sigma(\pi) = \sqrt{\left(\sum_{v\in V} \frac{\pi(v)^2}{\sigma(v)}\right) - 1}.
$$
Note that $\mu_\sigma(\pi) \ge 0$ with equality if and only if $\pi = \sigma$. We use $\mu_\sigma$ as a measure of distance from stationarity.
\begin{lemma}\label{lem:norm}
Suppose $R\in \mathrm{RM}$ has transition matrix $M$. Let $\pi_0$ be a probability measure on $V$, and define $\pi_1(v) = \sum_u \pi_0(u)M(u, v)$. Then
\al{
\mu_\sigma(\pi_1) \le \lambda(R) \mu_\sigma(\pi_0).
}
\end{lemma}
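Lemma~\ref{lem:norm} is a standard spectral contraction estimate, and it can be verified numerically on a small reversible chain (the rate matrix below is an arbitrary example of ours):

```python
import numpy as np

R = np.array([[0, 2, 1, 1],
              [2, 0, 1, 1],
              [1, 1, 0, 2],
              [1, 1, 2, 0]], float)
d = R.sum(axis=1)
M = R / d[:, None]                     # transition matrix M(u, v) = R(u, v)/d_R(u)
sigma = d / d.sum()                    # stationary distribution

# M is similar to the symmetric matrix D^{1/2} M D^{-1/2}, so its spectrum is real
S = np.sqrt(d)[:, None] * M / np.sqrt(d)[None, :]
ev = np.sort(np.linalg.eigvalsh(S))
lam = max(abs(ev[0]), abs(ev[-2]))     # lambda(R) = max(|lambda_2|, |lambda_n|)

def mu_sigma(pi):
    """Distance from stationarity: sqrt(sum_v pi(v)^2 / sigma(v) - 1)."""
    return np.sqrt(np.sum(pi**2 / sigma) - 1.0)
```

The assertion checks $\mu_\sigma(\pi_1) \le \lambda(R)\,\mu_\sigma(\pi_0)$ for a point mass $\pi_0$.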
\section{On alternating walks}\label{sec:alternatingwalks}
Suppose $F = (V, E)$ is a graph, and $R$ is a rate matrix on $V$. A walk $W = (w_0,w_1,\dots,w_\ell)$ on $V$ is {\em $F$-alternating} if $\{w_i, w_{i+1}\}\in F$ for all odd $i$, and {\em strictly $F$-alternating} if also $\{w_i, w_{i+1}\}\notin F$ for all even $i$.
We need a way to measure the size of a family of $F$-alternating walks. This will be slightly cumbersome to define. Firstly, for any walk $W = (w_0,\dots,w_\ell)$ define
$$
R_\mathrm{alt}[W] = \prod_{j = 0}^{\cl{\ell/2}-1} R(w_{2j}, w_{2j+1}).
$$
For a family of walks $\mathcal{W}$ let $R_\mathrm{alt}[\mathcal{W}] = \sum_{W\in \mathcal{W}}R_\mathrm{alt}[W]$.
For any edge set $E$ with $E\cap G = \emptyset$ we also define
$$
R_G[E] = \prod_{\{u, v\}\in E} R(u, v).
$$
If $E\cap G \ne \emptyset$ let $R_G[E] = 0$. If $\mathcal{E}$ is a family of edge sets, let $R_G[\mathcal{E}] = \sum_{E\in \mathcal{E}} R_G[E]$.
For a walk $W$ let $\mathrm{odd}(W)$ be the set of edges $\{\{w_{2i}, w_{2i+1}\} : i \ge 0\}$. Note that if $W$ is a walk which repeats no vertex and is strictly $G$-alternating, then $R_G[\mathrm{odd}(W)] = R_\mathrm{alt}[W]$. If $E$ is an edge set of size $r$, then the number of walks $W$ with $\mathrm{odd}(W) = E$ is $2^rr!$. We conclude that if $\mathcal{W}$ is a family of non-repeating strictly $G$-alternating walks of length $2r-1$ and $\mathrm{odd}(\mathcal{W}) = \{\mathrm{odd}(W) : W\in \mathcal{W}\}$, then
\begin{equation}\label{eq:Galt}
R_G[\mathrm{odd}(\mathcal{W})] \ge \frac{1}{2^rr!} R_\mathrm{alt}[\mathcal{W}].
\end{equation}
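The counting step behind \eqref{eq:Galt} states that when $E$ is a matching of size $r$, exactly $2^rr!$ vertex sequences $W$ satisfy $\mathrm{odd}(W) = E$ (one per ordering of the $r$ edges and orientation of each); this can be verified exhaustively with a short script of ours:

```python
from itertools import permutations, product

def walks_with_odd(E):
    """All vertex sequences W with 2r vertices (walks of length 2r - 1)
    whose odd edges {w_0, w_1}, {w_2, w_3}, ... are exactly the set E."""
    walks = set()
    for order in permutations(E):                     # r! orderings of the edges
        for flips in product([0, 1], repeat=len(E)):  # 2^r orientations
            W = []
            for (u, v), f in zip(order, flips):
                W += [v, u] if f else [u, v]
            walks.add(tuple(W))
    return walks
```

When $E$ is a matching, all of these sequences are distinct, which is where the exact count $2^rr!$ comes from.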
For a walk $W = (w_0,\dots,w_i)$ let $f(W) = w_i$ denote its final vertex. For a family $\mathcal{W}$ of walks and $v\in V$ let $\mathcal{W}^{\to v}$ be the set of $W\in \mathcal{W}$ with $f(W) = v$. Let $(W, v_1,\dots,v_j) = (w_0,\dots,w_i, v_1,\dots,v_j)$, and for two walks $W_1 = (w_0,\dots,w_i)$ and $W_2 = (w_0',\dots,w_j')$ write
$$
W_1\circ W_2 = (w_0,\dots,w_i, w_j',\dots,w_0').
$$
\subsection{Mixing for alternating walks}\label{sec:altmix}
We define a random walk on $V$ as a probability measure $\pi$ on the set $V^\infty$ of infinite walks on $V$. For walks $W$ of length $\ell$, write $\pi(W) = \pi(\mathcal{W}(W))$ where $\mathcal{W}(W)$ is the family of walks agreeing with $W$ for the first $\ell$ steps. Define
$$
\pi(w_{j+1}\mid w_0,\dots,w_j) = \frac{\pi(w_0,\dots,w_{j+1})}{\pi(w_0,\dots,w_j)}
$$
whenever $\pi(w_0,\dots,w_j) > 0$. Define $\pi(v\mid W) = 0$ when $\pi(W) = 0$.
Suppose $G$ is a graph and $R$ a rate matrix. Recall the definition of $\widehat N(A)$ from Section~\ref{sec:Hintro}. Starting at some (possibly random) initial point $w_0$, say that a random walk $\pi$ is an {\em $(R, G)$-alternating random walk} if for all $i\ge 0$,
\al{
\pi(w_{2i + 1} \mid w_0,\dots,w_{2i}) & = M(w_{2i}, w_{2i+1}), \\
\pi(w_{2i + 2} \mid w_0,\dots,w_{2i+1}) & = 0,\quad w_{2i+2}\notin \widehat N_G(w_{2i+1}).
}
We let $\pi_i$ denote the measure on $V$ induced by $w_i$. A special case is the {\em simple, lazy $(R, G)$-alternating random walk} $\pi_{G}$ defined by
$$
\pi_G(w_{2i} \mid w_0,\dots,w_{2i-1}) = \frac{1}{d_G(w_{2i-1}) + 1}, \quad w_{2i} \in \widehat N_G(w_{2i-1}),
$$
for all $i\ge 1$. If the initial vertex $x$ is specified, we denote the measure by $\pi_{G, x}$.
Note that if $W = (w_0,\dots,w_j)$ is a $G$-alternating walk and $\pi$ a random $(R, G)$-alternating walk, then
\begin{equation}
\pi(W\mid w_0) \le \prod_{i=0}^{\cl{j/2}-1}\frac{R(w_{2i}, w_{2i+1})}{d_R(w_{2i})}\le \frac{R_\mathrm{alt}[W]}{d^{\cl{j/2}}}. \label{eq:pitoR}
\end{equation}
For rate matrices $R$ define
\begin{equation}\label{eq:mixdef}
\mathrm{mix}(R) = \cl{2 - 2\frac{\ln (n\|M\|)}{\ln \lambda(R)}},
\end{equation}
and note that $\mathrm{mix}(R) = O(1)$ for $R\in \mathrm{RM}$ since then $\lambda(R) = (n\|M\|)^{-\Omega(1)}$. The name is in reference to the following lemma.
\begin{lemma}\label{lem:weightmix}
Suppose $R \in \mathrm{RM}$, $\Delta(F)\le 2$, and that $\pi$ is an $(R, F)$-alternating random walk. Suppose $\theta \le \lambda(R)^{-1/4}$ tends to infinity with $n$, and that $j\ge \mathrm{mix}(R)$. Suppose $c > 0$ is a constant. There exists a constant $\rho > 0$ such that if $\mathcal{W}$ is a family of $F$-alternating walks $W$ of length $2j$, such that $d_R(v) < \theta d$ for all $v\in W$, and $\pi(\mathcal{W}) \ge c$, then there exists a vertex set $U_{2j}$ of size at least $\rho n$ such that
\al{
\pi(\mathcal{W}^{\to u}) \ge \frac{\rho}{n}, \quad \text{for all $u\in U_{2j}$}.
}
\end{lemma}
Say that a measure $\mu$ on $V$ is {\em near-uniform} if there exists a constant $c > 0$ such that $\mu(v) \le c/n$ for all $v$. Say that a random walk $\pi$ is near-uniform if $\pi_0$ is near-uniform. Note that for a near-uniform $\pi$, \eqref{eq:pitoR} shows that for any family of walks $\mathcal{W}$ of length $j$,
\begin{equation}\label{eq:mixpitoR}
\pi(\mathcal{W}) = \sum_{u} \pi_0(u) \pi(\mathcal{W} \mid u) \le \frac{c}{n} \frac{R_\mathrm{alt}[\mathcal{W}]}{d^{\cl{j/2}}}.
\end{equation}
For $S\subseteq V$ let $\tau(S)$ be the random time at which a walk first visits $S$, and let $\tau_\mathrm{odd}(S)$ be the first odd index for which it occurs.
\begin{lemma}\label{lem:hitlargeset}
Suppose $R\in \mathrm{RM}$ and that $G$ is light-tailed. Suppose $\pi$ is a near-uniform random $(R, G)$-alternating walk.
\begin{enumerate}[(i)]
\item\label{item:nohit} If $j\ge 0$ is constant and $|S| = o(n)$, then $\pi(\tau(S) \le 2j) = o(1)$.
\item\label{item:dohit} If $|S| = \Omega(n)$ then $\pi(\tau(S) \le 1) = \Omega(1)$.
\end{enumerate}
\end{lemma}
Lemma~\ref{lem:hitlargeset}~(\ref{item:nohit}) is particularly interesting for $S = S_\theta$, which has $|S_\theta| = o(n)$ if $R\in \mathrm{RM}$ and $G$ is light-tailed. We let $\mathcal{D}_{j}^\theta$ denote the event that $\tau(S_\theta) \le j$, and let $Z_j^\theta(c)$ be the corresponding vertex set. Then $|Z_j^\theta(c)| = o(n)$.
\begin{lemma}\label{lem:improper}
Suppose $R\in \mathrm{RM}$ and that $G$ is light-tailed. Let $j\ge 0$. Let $\mathcal{C}_j^\theta$ be the set of $G$-alternating walks $W = (w_0,\dots,w_j)$ with $d_G(w_i) < \theta$ and $d_R(w_i) < \theta d$ for all $i$, such that either (a) $|\{w_0,\dots,w_j\}| < j+1$ or (b) $W$ is not strictly $G$-alternating. If $\theta^{j}\|M\| = o(1)$ then
$$
R_\mathrm{alt}[\mathcal{C}_j^\theta] = o(nd^{\cl{j/2}}).
$$
\end{lemma}
\subsection{Matchings and augmenting walks}\label{sec:matchings}
Suppose $G$ is a graph on an even number $n$ of vertices. Let $s_1(G) \le \fl{n/2}$ denote the size of the largest matching in $G$. Suppose $s_1(G) < \fl{n/2}$, and let $\mathcal{F}_1(G)$ be the family of matchings $F$ with $|F| = s_1(G)$. For $F\in \mathcal{F}_1(G)$ let $\mathrm{IP}_F$ be the set of vertices isolated by $F$, i.e. vertices $x$ such that $d_F(x) = 0$. Let $\mathrm{IP}_2 = \mathrm{IP}_2(G)$ be the set of vertex pairs $\{x, y\}$ such that $\{x, y\}\subseteq \mathrm{IP}_F$ for some $F\in \mathcal{F}_1(G)$.
\begin{lemma}\label{lem:IPk}
Suppose $G\in \mathrm{SE}_1$ with $s_1(G) < \fl{n / 2}$. Then $|\mathrm{IP}_2| \ge (\beta n)^2/ 2.$
\end{lemma}
\begin{proof}
Let $F\in \mathcal{F}_1(G)$ and $x\in \mathrm{IP}_F$. Let $Y$ be the set of vertices $y$ such that there exists an $F$-alternating walk $(x = w_0,\dots,w_{2j} = y)$ of even length; in particular $x\in Y$. Then $N_G(Y) = N_F(Y)$. Indeed, suppose $v\in N_G(Y)\setminus N_F(Y)$, and let $W = (x,\dots,y)$ be an even-length $F$-alternating walk such that $yv\in G$. If $v\in \mathrm{IP}_F$ then $(W, v)$ is an augmenting walk, contradicting the maximality of $F$. If $v\notin \mathrm{IP}_F$ then $vw\in F$ for some $w$, and the walk $(W, v, w)$ shows that $w\in Y$, so that $v\in N_F(Y)$, a contradiction. It follows that $|N_G(Y)| \le |N_F(Y)| \le |Y| - 1$. Since $G$ expands, we then have $|Y| \ge \beta n$.
The same argument shows that for every $y\in Y$, there is a set $|X_y| \ge \beta n$ such that $\{x, y\}\in \mathrm{IP}_2$ for each $x\in X_y$. We conclude that $|\mathrm{IP}_2| \ge (\beta n)^2/2$.
\end{proof}
For a graph $G$, integer $r \ge 1$, and $\theta$ tending to infinity, let $\mathcal{T}_r^\theta(G)$ be the family of edge sets $T$ with $|T| = r$ such that no $e\in T$ is contained in $G$ or incident to $S_\theta$, and $s_1(G \cup T) > s_1(G)$.
\begin{proposition}\label{prop:augmentingwalks}
Suppose $R\in \mathrm{RM}$ and $G\in \mathrm{SE}_1$ with $s_1(G) < \fl{n/2}$. Then there exists an $r\le 2\mathrm{mix}(R) + 1$ such that if $\theta$ tends to infinity sufficiently slowly, then
$$
R_G[\mathcal{T}_r^\theta] = \Omega(nd^{r}).
$$
\end{proposition}
\begin{proof}
Let $\mathrm{AW}_{2r-1}^\theta(G)$ be the set of walks $(w_0,\dots,w_{2r-1})$ which (a) repeat no vertex, (b) avoid $S_\theta$, (c) are $F$-alternating with $w_0, w_{2r-1}\in \mathrm{IP}_F$ for some $F\in \mathcal{F}_1(G)$, and (d) are strictly $G$-alternating. Then $\mathrm{odd}(\mathrm{AW}_{2r-1}^\theta(G)) \subseteq \mathcal{T}_r^\theta$, and \eqref{eq:Galt} shows that it is enough to prove that $R_\mathrm{alt}[\mathrm{AW}_{2r-1}^\theta(G)] = \Omega(nd^r)$ for some $r\le 2\mathrm{mix}(R) + 1$ and $\theta$.
For $F\in \mathcal{F}_1(G)$ let $\mathcal{A}_i(F)$ be the set of $F$-alternating walks $(w_0, w_1,\dots,w_i)$ with $w_0\in \mathrm{IP}_F$ and $w_i \notin \mathrm{IP}_F$. Let $(w_0,\dots,w_i)\in \mathcal{B}_i(F)$ if $(w_0,\dots,w_{i-1})\in \mathcal{A}_{i-1}(F)$ and $w_i \in \mathrm{IP}_F$. Let $\mathcal{A}_i, \mathcal{B}_i$ be the walks which are in $\mathcal{A}_i(F), \mathcal{B}_i(F)$ for some $F\in \mathcal{F}_1(G)$, respectively. Note that $\mathcal{B}_i\setminus (\mathcal{C}_i\cup \mathcal{D}_i^\theta) \subseteq \mathrm{AW}_{i}^\theta(G)$.
Suppose $x\in \mathrm{IP}_F$ for some $F\in \mathcal{F}_1(G)$. Let $\pi_{F, x}$ be the simple, lazy $(R, F)$-alternating random walk with starting vertex $x$, as defined in Section~\ref{sec:altmix}. Then for all $i\ge 0$, it holds that $\pi_{F, x}(\mathcal{A}_{2i+1} \cup \mathcal{B}_{2i + 1}) = \pi_{F, x}(\mathcal{A}_{2i})$ and $\pi_{F, x}(\mathcal{A}_{2i+2}) \ge \frac12\pi_{F, x}(\mathcal{A}_{2i+1})$. For any $j\ge 1$, we conclude that there is a constant $c_j$ such that either $\pi_{F, x}(\mathcal{B}_{2i-1}) \ge c_j$ for some $1\le i < j$, or $\pi_{F, x}(\mathcal{A}_{2j}) \ge c_j$. We set $j = \mathrm{mix}(R)$, as defined in \eqref{eq:mixdef}.
For each pair $\{x, y\}\in \mathrm{IP}_2$ pick some $F(x, y)\in \mathcal{F}_1(G)$ with $\{x, y\}\in \mathrm{IP}_F$. Define a random $(R, G)$-alternating walk $\pi$ by
$$
\pi = \sum_{\{x, y\}\in \mathrm{IP}_2} \frac{1}{|\mathrm{IP}_2|}\left(\frac{\pi_{F(x, y), x}}{2} + \frac{\pi_{F(x, y), y}}{2}\right).
$$
In other words, we pick a pair $\{x, y\} \in \mathrm{IP}_2$ uniformly at random, then pick one of $x, y$ as our starting point with probability $1/2$, and run the simple, lazy $(R, F(x, y))$-alternating random walk. Note that $\pi$ is near-uniform, as $|\mathrm{IP}_2| = \Omega(n^2)$.
{\bf The one-sided case.} Suppose $\pi(\mathcal{B}_{2i-1}) = \Omega(1)$ for some $i < j$. Since $\mathcal{B}_{2i-1} \setminus (\mathcal{C}_{2i-1}\cup \mathcal{D}_{2i-1}^\theta) \subseteq \mathrm{AW}_{2i-1}^\theta$, Lemmas~\ref{lem:hitlargeset}~(\ref{item:nohit}) and \ref{lem:improper} imply
\al{
\pi(\mathrm{AW}_{2i-1}^\theta(G)) & \ge \pi(\mathcal{B}_{2i-1}) - \pi(\mathcal{C}_{2i-1}) - \pi(\mathcal{D}_{2i-1}^\theta) = \Omega(1).
}
Then \eqref{eq:mixpitoR} gives $R_\mathrm{alt}[\mathrm{AW}_{2i-1}^\theta(G)] = \Omega(nd^i)$.
{\bf The two-sided case.} Suppose $\pi(\mathcal{B}_{2i-1}) < c_j$ for all $i < j$. Let $\mathcal{A}_{2j}^\theta$ be the set of walks in $\mathcal{A}_{2j}$ which avoid $\mathcal{D}_{2j}^\theta$, and let
$$
\mathrm{XY} = \left\{\{x, y\}\in \mathrm{IP}_2 : \pi_{F(x, y), x}(\mathcal{A}_{2j}^\theta) \ge \frac{c_j}{2}\text{ and } \pi_{F(x, y), y}(\mathcal{A}_{2j}^\theta) \ge \frac{c_j}{2}\right\}.
$$
Then $|\mathrm{XY}| = \Omega(n^2)$. Indeed, $\pi(\mathcal{D}_{2j}^\theta) = o(1)$ by Lemma~\ref{lem:hitlargeset}~(\ref{item:nohit}), so
\al{
c_j-o(1) \le \pi(\mathcal{A}_{2j}^\theta) \le \frac{|\overline{\mathrm{XY}}|}{|\mathrm{IP}_2|}\frac{c_j}{2} + \frac{|\mathrm{XY}|}{|\mathrm{IP}_2|} \le \frac{c_j}{2} + \frac{|\mathrm{XY}|}{|\mathrm{IP}_2|}.
}
Fix some $\{x, y\}\in \mathrm{XY}$ and let $F = F(x, y)$. Let $\mathcal{A}_{x, y} \subseteq \mathcal{A}_{2j}^\theta$ be the set of $F$-alternating walks in $\mathcal{A}_{2j}^\theta$ originating at $x$. Note that $\mathcal{A}_{x, y}\circ \mathcal{A}_{y, x}\subseteq \mathcal{B}_{4j+1}\setminus \mathcal{D}_{4j+1}^\theta$. We have
\al{
R_\mathrm{alt}[\mathcal{A}_{x, y} \circ \mathcal{A}_{y, x}] & = \sum_{u, v} R_\mathrm{alt}[\mathcal{A}_{x, y}^{\to u}] R(u, v) R_\mathrm{alt}[\mathcal{A}_{y, x}^{\to v}].\label{eq:glue}
}
By Lemma~\ref{lem:weightmix} there exists a constant $\rho > 0$ and a set $U_x$ with $|U_x| \ge \rho n$ such that $\pi_{F, x}(\mathcal{A}_{x, y}^{\to u}) \ge \rho/n$ for all $u\in U_x$,
and by \eqref{eq:pitoR} we have
$$
R_\mathrm{alt}[\mathcal{A}_{x, y}^{\to u}] \ge d^j \pi_{F, x}(\mathcal{A}_{x, y}^{\to u}) = \Omega\bfrac{d^j}{n}.
$$
Likewise, $R_\mathrm{alt}[\mathcal{A}_{y, x}^{\to v}] = \Omega(d^j/n)$ for all $v\in U_y$ where $|U_y| \ge \rho n$. Then \eqref{eq:glue} and Lemma~\ref{lem:EML} imply
\al{
R_\mathrm{alt}[\mathcal{A}_{x, y}\circ \mathcal{A}_{y, x}] \ge R(U_x, U_y) \Omega\bfrac{d^j}{n}^2 = \Omega\bfrac{d^{2j+1}}{n}. \label{eq:glue2}
}
Since $\mathcal{A}_{x, y}\circ \mathcal{A}_{y, x}\setminus \mathcal{C}_{4j+1} \subseteq \mathrm{AW}_{4j+1}^\theta(G)$, Lemma~\ref{lem:improper} gives
\al{
R_\mathrm{alt}[\mathrm{AW}_{4j+1}^\theta(G)] & \ge \left(\sum_{(x, y)\in \mathrm{XY}} R_\mathrm{alt}[\mathcal{A}_{x, y}\circ \mathcal{A}_{y, x}] \right) - R_\mathrm{alt}[\mathcal{C}_{4j+1}] = \Omega(nd^{2j+1}).
}
\end{proof}
\subsection{Paths and boosters}\label{sec:posa}
We view paths $P = (x = v_0,\dots,v_\ell = y)$ as being directed from $x$ to $y$, and for any $P$ define a total ordering $\le_P$ of $V$ by $v_i\le_P v_j$ whenever $i\le j$, and $u\le_P v$ whenever $u\in P, v\notin P$ (arbitrarily ordering the vertices not on $P$).
Suppose ${\bf z} = (z_1, z_2,\dots,z_\ell)$ is a sequence of distinct vertices on $P$. Define $\tau({\bf z})$ as the permutation of $[\ell]$ for which $z_{\tau(1)} \le_P z_{\tau(2)} \le_P \dots \le_P z_{\tau(\ell)}$. If ${\bf z}$ repeats a vertex or contains a vertex not on $P$, define $\tau({\bf z}) = \perp$. For a pair of $P$-alternating walks $(X, Y)$ with $X = (x,x_1,\dots,x_i)$ and $Y = (y,y_1,\dots,y_{2j})$, let $\tau(X, Y) = \perp$ if $X$ has odd length and $f(X) \in N_P(Y)$, and $\tau(X, Y) = \tau(y_1,\dots,y_{2j}, x_1,\dots,x_i)$ otherwise (note that $\tau = \perp$ is possible in this case as well). See Figure~\ref{fig:tau}.
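When ${\bf z}$ repeats no vertex and lies entirely on $P$, $\tau({\bf z})$ is simply the argsort of the positions of $z_1,\dots,z_\ell$ along $P$. A minimal sketch, where the test positions are the coordinates of $(y_1, y_2, y_3, y_4, x_1, x_2)$ read off Figure~\ref{fig:tau}:

```python
def tau(positions):
    """Compute tau(z) for a sequence z of vertices on P, given their
    positions along P: the 1-indexed permutation sorting z_1..z_l by
    path order, or None (standing for the symbol "bottom") if a
    position repeats."""
    if len(set(positions)) != len(positions):
        return None  # z repeats a vertex: tau = bottom
    order = sorted(range(len(positions)), key=lambda k: positions[k])
    return [k + 1 for k in order]
```

For the walks in Figure~\ref{fig:tau}, `tau([3, 3.5, 8, 7.5, 5, 4.5])` recovers the permutation $(126543)$ from the caption.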
\begin{figure}[b]
\begin{center}
\begin{tikzpicture}
\node (x) at (0, 0) {};
\node (y) at (10, 0) {};
\node[label node] at (0, -.3) {$x$};
\node[label node] at (10, -.3) {$y$};
\node[fill=black] (y1) at (3, 0) {};
\node[label node] at (3, -.3) {$y_1$};
\node[fill=black] (y2) at (3.5, 0) {};
\node[label node] at (3.5, -.3) {$y_2$};
\node[fill=black] (y3) at (8, 0) {};
\node[label node] at (8, -.3) {$y_3$};
\node[fill=black] (y4) at (7.5, 0) {};
\node[label node] at (7.5, -.3) {$y_4$};
\node[fill=black] (x1) at (5, 0) {};
\node[label node] at (5, -.3) {$x_1$};
\node[fill=black] (x2) at (4.5, 0){};
\node[label node] at (4.5, -.3) {$x_2$};
\node[label node] at (3, -.7) {$[1]$};
\node[label node] at (3.5, -.7) {$[2]$};
\node[label node] at (8, -.7) {$[3]$};
\node[label node] at (7.5, -.7) {$[4]$};
\node[label node] at (5, -.7) {$[5]$};
\node[label node] at (4.5, -.7) {$[6]$};
\draw (x) -- (y);
\draw[bend right,line width=1pt] (y) to (y1);
\draw[line width=1pt] (y1) to (y2);
\draw[bend left,line width=1pt] (y2) to (y3);
\draw[line width=1pt] (y3) to (y4);
\draw[bend left, line width=1pt] (x) to (x1);
\draw[line width=1pt] (x1) to (x2);
\end{tikzpicture}
\end{center}
\caption{Two walks $X, Y$ with $\tau(X, Y) = (126543)$. If $X' = (X, x_3)$, then $\tau(X', Y)$ will take the following values in order as $x_3$ increases from $x$ to $y$ and out of $P$: $\perp$, $(7126543)$, $\perp$, $(1276543)$, $\perp$, $(1265743)$, $\perp$, $(1265437)$, $\perp$. This sequence is the same for any compatible $X, Y$ with the same $\tau(X, Y)$, though some values may be skipped.}
\label{fig:tau}
\end{figure}
\subsubsection{Path rotations}\label{sec:rotations}
Suppose $P = (v_0,\dots,v_\ell)$ is a path of length $\ell$, and let $e = \{v_\ell, v_i\}$ be an edge with $0 < i < \ell-1$. Then $P\triangle (v_\ell, v_i, v_{i+1}) = (v_0,\dots,v_i,v_\ell,v_{\ell-1},\dots,v_{i+1})$ is also a path of length $\ell$. We say that $(v_\ell, v_i, v_{i+1})$ is a rotation walk of length 2.
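A minimal sketch of a single rotation, representing the path as a list of vertices (the list indices below are illustrative, not part of the construction):

```python
def rotate(P, i):
    """Posa rotation: given a path P = [v_0, ..., v_l] and an edge
    {v_l, v_i} with 0 < i < l - 1, return the path
    P triangle (v_l, v_i, v_{i+1}) = [v_0, ..., v_i, v_l, ..., v_{i+1}],
    which again has length l but now ends at v_{i+1}."""
    l = len(P) - 1
    assert 0 < i < l - 1, "rotation requires 0 < i < l - 1"
    # keep the initial segment v_0..v_i, reverse the tail v_{i+1}..v_l
    return P[:i + 1] + P[i + 1:][::-1]
```

For example, `rotate([0, 1, 2, 3, 4, 5], 2)` yields `[0, 1, 2, 5, 4, 3]`: the prefix up to $v_2$ is kept and the tail is traversed backwards from $v_5$.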
In general, suppose $P$ is a path with endpoints $x$ and $y$, and suppose $W$ is a non-repeating even-length strictly $P$-alternating walk starting at $y$ and avoiding $x$. If $P\triangle W$ is a path, we say that $W$ is a {\em rotation walk} for $P$. Let $\mathcal{R}_{2i}(P, y)$ be the set of rotation walks for $P$ of length $2i$ with starting point $y$. Let $\mathcal{R}_{2i+1}(P, y)$ be the set of walks $(w_0,\dots,w_{2i+1})$ with $(w_0,\dots,w_{2i})\in \mathcal{R}_{2i}(P, y)$ and $w_{2i+1}\in P$ with $\mathrm{dist}_P(w_{2i+1}, \{w_0,\dots,w_{2i}, x\}) > 1$.
Suppose $W = (w_0,\dots,w_{2i+1})\in \mathcal{R}_{2i+1}(P, y)$ for some $i \ge 0$. Then there is a unique vertex $v \in N_P(f(W))$ for which $(W, v) \in \mathcal{R}_{2i+2}(P, y)$, and we define $r_P(W)$ as the walk $(W, v)$. Note that $\tau(r_P(W))$ is fully determined by $\tau(W)$, as $v$ is the immediate successor of $f(W)$ along the path $P\triangle W$, viewed as going from $x$ to $f(W)$.
We will need to consider pairs of rotation walks starting at $x$ and $y$. Let $\mathcal{A}_i(P, x)$ be the set of $P$-alternating walks of length $i$ starting at $x$. For $X\in \mathcal{A}_i(P, x)$ and $Y\in \mathcal{R}_{2j}(P, y)$, say that $(X, Y)$ is a {\em compatible pair} if $X\in \mathcal{R}_i(P\triangle Y, x)$, no vertex appears twice in $X\cup Y$, and $\mathrm{dist}_P(f(X), Y) > 1$ if $X$ has odd length. For a compatible pair $(X, Y)$ let $r_{P, Y}(X) = r_{P\triangle Y}(X)$ be the unique walk $(X, v)$ for which $((X, v), Y)$ is compatible. Note that $\tau(r_{P, Y}(X), Y)$ is fully determined by $\tau(X, Y)$.
We summarize this with a lemma. For families of walks $\mathcal{X}$ and $\mathcal{Y}$, say that $(\mathcal{X}, \mathcal{Y})$ is compatible if every $(X, Y)\in \mathcal{X}\times \mathcal{Y}$ is compatible, and there exists a permutation $\tau_0$ such that $\tau(X, Y) = \tau_0$ for all $(X, Y)\in \mathcal{X}\times \mathcal{Y}$.
\begin{lemma}\label{lem:orient}
Let $i, j \ge 0$. Suppose $\mathcal{X}\subseteq \mathcal{A}_{2i+1}(P, x)$ and $\mathcal{Y} \subseteq \mathcal{R}_{2j}(P, y)$ are such that $(\mathcal{X}, \mathcal{Y})$ is compatible. Let $r_{P, \mathcal{Y}}(\mathcal{X}) = \{r_{P, Y}(X) : X\in \mathcal{X}, Y\in \mathcal{Y}\} \subseteq \mathcal{A}_{2i+2}(P, x)$. Then $(r_{P, \mathcal{Y}}(\mathcal{X}), \mathcal{Y})$ is compatible.
\end{lemma}
\begin{proof}
This follows from the fact that $\tau(r_{P, Y}(X), Y)$ is fully determined by $\tau(X, Y)$, which is constant on $\mathcal{X}\times \mathcal{Y}$.
\end{proof}
The relevance of compatible walks is this: if $(\mathcal{X}, \mathcal{Y})$ is a compatible pair of families of even-length walks, then $P\triangle (X\circ Y)$ is a cycle of length $\ell + 1$ for every $(X, Y)\in \mathcal{X}\times \mathcal{Y}$.
\subsubsection{Boosters}
Suppose $G\in \mathrm{SE}_2$ has $s_2(G) < n$, and let $\mathcal{F}_2(G)$ be the set of paths and cycles $F\subseteq G$ with $|F| = s_2(G)$. We first take care of a special case.
\begin{lemma}\label{lem:cycles}
Suppose $G\in \mathrm{SE}_2$ and that $\mathcal{F}_2(G)$ contains a cycle. Let $\mathrm{BW}_1(G)$ denote the set of edges $e$ such that $s_2(G + e) > s_2(G)$. Then $R_G[\mathrm{BW}_1(G)] = \Omega(dn)$.
\end{lemma}
\begin{proof}
The family $\mathcal{F}_2(G)$ contains a cycle $C$ only if the vertex set $U$ of $C$ forms a connected component in $G$. Since $G$ expands, any connected component has size between $\beta n$ and $(1-\beta)n$. Since $U\times \overline U \subseteq \mathrm{BW}_1(G)$, by Lemma~\ref{lem:EML} we have $R_G[\mathrm{BW}_1(G)] \ge R(U, \overline U) = \Omega(dn)$.
\end{proof}
Now suppose $\mathcal{F}_2(G)$ contains only paths. For $U\subseteq V$, let $\mathrm{EP}_2(U)$ be the set of ordered pairs $(x, y)$ such that some $P\in \mathcal{F}_2(G)$ has endpoints $x$ and $y$, and $V(P) = U$.
\begin{lemma}\label{lem:EPk}
Suppose $G\in \mathrm{SE}_2$ with $s_2(G) < n$. Then there exists some $U\subseteq V$ with $|\mathrm{EP}_2(U)| \ge (\beta n)^2$.
\end{lemma}
\begin{proof}
P\'osa's lemma (see e.g.~\cite{FriezeKaronski}) shows that if $(x, y)\in \mathrm{EP}_2(U)$ for some $x, y$, then the set $Y = \{y : (x, y)\in \mathrm{EP}_2(U)\}$ has $|N_G(Y)| < 2|Y|$. Since $G\in \mathrm{SE}_2$ we then have $|Y| \ge \beta n$, and the lemma follows as in Lemma~\ref{lem:IPk}, this time considering ordered pairs.
\end{proof}
We prove the analogue of Proposition~\ref{prop:augmentingwalks} for paths. For a graph $G$, integer $r \ge 1$, and $\theta$ tending to infinity, let $\mathcal{T}_r^\theta(G)$ be the family of edge sets $T$ with $|T| = r$ such that no $e\in T$ is contained in $G$ or incident to $S_\theta$, and $s_2(G \cup T) > s_2(G)$.
\begin{proposition}\label{prop:boosterwalk}
Suppose $R\in \mathrm{RM}$ and $G\in \mathrm{SE}_2$ with $s_2(G) < n$. Then there exists an $r\le 2\mathrm{mix}(R) + 1$ such that if $\theta$ tends to infinity sufficiently slowly, then
$$
R_G[\mathcal{T}_r^\theta] = \Omega(nd^{r}).
$$
\end{proposition}
\begin{proof}
Lemma~\ref{lem:cycles} takes care of the case when $\mathcal{F}_2(G)$ contains a cycle; it only remains to note that the set $E_\theta$ of edges incident to $S_\theta$ has $R[E_\theta] = o(nd)$ by the degree condition \eqref{eq:powerlaw}. Assume $\mathcal{F}_2(G)$ contains no cycles.
For a graph $G$ and integer $r\ge 1$, let $\mathrm{BW}_{2r-1}^\theta(G)$ be the family of walks $W$ of length $2r-1$ such that (a) $W$ is $P$-alternating for some $P\in \mathcal{F}_2(G)$, (b) $W$ is strictly $G$-alternating, (c) no vertex appears more than once in $W$, (d) $W$ avoids $S_\theta$, and (e) $s_2(P\triangle W) > s_2(P)$. By \eqref{eq:Galt}, it is enough to show that $R_\mathrm{alt}[\mathrm{BW}_{2r-1}^\theta(G)] = \Omega(nd^r)$ for some $r\le 2\mathrm{mix}(R) + 1$.
Suppose $P$ is a path from $x$ to $y$ on vertex set $U$. Define $\mathcal{R}_i(P, y)$ as in Section~\ref{sec:rotations}. For $i\ge 0$, let $\mathcal{B}_{2i+1}(P, y)$ be the set of walks $(w_0,\dots,w_{2i+1})$ with $(w_0,\dots,w_{2i})\in \mathcal{R}_{2i}$ and $w_{2i+1} \in \{x\} \cup \overline U$, and note that $s_2(P\triangle W) > s_2(P)$ for $W\in \mathcal{B}_{2i+1}$.
We define $\mathrm{rot}_{P, y}$ as a random $(R, P)$-alternating walk initiated at $y$ with
\al{
\mathrm{rot}_{P, y}(r_P(Y) \mid Y) & = 1, \quad Y\in \mathcal{R}_{2i+1}(P, y).
}
This satisfies
\al{
\mathrm{rot}_{P, y}(\mathcal{R}_{2i+2}) & = \mathrm{rot}_{P, y}(\mathcal{R}_{2i+1}), \\
\mathrm{rot}_{P, y}(\mathcal{R}_{2i+1}\cup \mathcal{B}_{2i+1}) & \ge (1-o(1)) \mathrm{rot}_{P, y}(\mathcal{R}_{2i}).
}
Indeed, if $W = (w_0,\dots,w_{2i+1})\notin \mathcal{R}_{2i+1}\cup \mathcal{B}_{2i+1}$ while $(w_0,\dots,w_{2i}) \in \mathcal{R}_{2i}$, then $\mathrm{dist}_P(w_{2i+1}, \{w_0,\dots,w_{2i}, x\}) < 2$, which happens with probability $o(1)$. We conclude that for any constant $j\ge 1$,
\begin{equation}\label{eq:ABpart}
\mathrm{rot}_{P, y}(\mathcal{R}_{2j}) + \sum_{i = 0}^{j-1} \mathrm{rot}_{P, y}(\mathcal{B}_{2i+1}) = 1-o(1).
\end{equation}
For each $(x, y) \in \mathrm{EP}_2(U)$ pick some $P(x, y)\in \mathcal{F}_2(G)$ on $U$, with $P(x, y)$ and $P(y, x)$ each other's reverses. Define a random $(R, G)$-alternating walk
$$
\mathrm{rot} = \sum_{(x, y)\in \mathrm{EP}_2(U)} \frac{1}{|\mathrm{EP}_2(U)|} \mathrm{rot}_{P(x, y), y}.
$$
Since $|\mathrm{EP}_2(U)| = \Omega(n^2)$, this is near-uniform. Let $j = \mathrm{mix}(R)$, and pick $\theta \le \lambda(R)^{-1/4}$ so that $\theta^{4j+1}\|M\| = o(1)$.
{\bf The one-sided case.} Suppose $s_2(G) = n-\Omega(n)$. Then $|\overline U| = \Omega(n)$, and $\{\tau(\overline U) \le 1\} \subseteq \mathcal{B}_1$. By Lemma~\ref{lem:hitlargeset}, we have
$$
\mathrm{rot}(\mathcal{B}_{1}\setminus \mathcal{D}_1^\theta) = \Omega(1).
$$
Since $\mathrm{rot}$ is near-uniform, \eqref{eq:mixpitoR} shows that $R_\mathrm{alt}[\mathcal{B}_1\setminus \mathcal{D}_1^\theta] = \Omega(nd)$. Since $\mathcal{B}_1 \setminus \mathcal{D}_1^\theta \subseteq \mathrm{BW}_1^\theta(G)$, that finishes this case.
{\bf The two-sided case.} Suppose $s_2(G) = n-o(n)$. Then $|\overline U| = o(n)$, and Lemma~\ref{lem:hitlargeset} gives $\mathrm{rot}(\tau(\overline U\cup S_\theta) \le 2j) = o(1)$, so $\mathrm{rot}(\mathcal{R}_{2j}) = 1-o(1)$. Then there exists a set $\mathrm{XY}\subseteq \mathrm{EP}_2(U)$ with $|\mathrm{XY}| = \Omega(n^2)$ such that all $(x, y)\in \mathrm{XY}$ have $\mathrm{rot}_{P(x, y), x}(\mathcal{R}_{2j}^\theta) = 1-o(1)$ and $\mathrm{rot}_{P(x, y), y}(\mathcal{R}_{2j}^\theta) = 1-o(1)$.
Fix $(x, y)\in \mathrm{XY}$ and let $P = P(x, y)$, directed from $x$ to $y$. We will construct a compatible pair $(\mathcal{X}_{2j}, \mathcal{Y}_{2j})$ of families of walks of length $2j$. Then $\mathcal{X}_{2j}\circ \mathcal{Y}_{2j} \subseteq \mathcal{B}_{4j+1}(P, y)$, and we can apply the same techniques that we used for matchings.
{\bf Construction of $\mathcal{X}_i, \mathcal{Y}_i$.} Initially let $\mathcal{X}_0 = \{(x)\}$. Let $\tau_0$ be a permutation such that $\mathrm{rot}_{P, y}(\tau((x), Y) = \tau_0) \ge \mathrm{rot}_{P, y}(\mathcal{R}_{2j}(P, y)) / 2^j$, and let $\mathcal{Y}_0$ be the set of $Y\in \mathcal{R}_{2j}(P, y)$ with $\tau((x), Y) = \tau_0$. For $i = 1,\dots,2j$ we inductively construct compatible families $\mathcal{X}_i, \mathcal{Y}_i$ with $\mathrm{rot}_{P, x}(\mathcal{X}_i) \ge c_{i, j}$ and $\mathrm{rot}_{P, y}(\mathcal{Y}_i) \ge c_{i, j}$, where $c_{i, j} = (3j)^{-2i}$ for $i = 1,\dots,2j$.
{\bf Construction, odd $i$.} Suppose $(\mathcal{X}_{i-1}, \mathcal{Y}_{i-1})$ is a compatible pair of families. Define $\mathcal{X}_i'$ as the set of one-step extensions of $\mathcal{X}_{i-1}$:
$$
\mathcal{X}_i' = \{(x_0,\dots,x_{i-1}, v) : (x_0,\dots,x_{i-1})\in \mathcal{X}_{i-1}, v \in V\}.
$$
We aim to apply Lemma~\ref{lem:observation} to $\tau(X, Y)$ for $(X, Y)\in \mathcal{X}_i' \times \mathcal{Y}_{i-1}$. In order to do this, we need to define a total ordering on $\mathcal{X}_i'$ such that $X\mapsto \tau(X, Y)$ respects some common sequence for all $Y\in \mathcal{Y}_{i-1}$.
Let $\le_\tau$ be some arbitrary ordering of the set $\tau(\mathcal{X}_i') = \{\tau(X) : X\in \mathcal{X}_i'\}$. Note that $|\tau(\mathcal{X}_i')| \le i+1$. Define a total ordering $\le$ on $\mathcal{X}_i'$ such that $X_1 \le X_2$ whenever $\tau(X_1) \le_\tau \tau(X_2)$, or $\tau(X_1) = \tau(X_2)$ and $f(X_1) \le_P f(X_2)$.
Let $\tau' \in \tau(\mathcal{X}_i')$ and consider the set $\mathcal{X}_i'(\tau')$ of $X\in \mathcal{X}_i'$ with $\tau(X) = \tau'$. Restricted to this set, our ordering orders walks by their final vertex. Suppose $X_1, X_2\in \mathcal{X}_i'(\tau')$ and $Y\in \mathcal{Y}_{i-1}$ are such that $f(X_1) \le_P f(X_2)$ with no vertex of $Y$ in the interval $[f(X_1), f(X_2)]$ (note that no vertex of $X_1\cup X_2$ is in this interval as then $\tau(X_1)\ne \tau(X_2)$). Then $\tau(X_1, Y) = \tau(X_2, Y)$.
As $X$ runs through $\mathcal{X}_i'(\tau')$ according to the ordering $\le$, the value of $\tau(X, Y)$ changes only when $f(X)$ enters or exits $\widehat N_P(Y)$, which happens at most $2j + 1$ times. This shows that the map $X\mapsto \tau(X, Y)$, restricted to $\mathcal{X}_i'(\tau')$, respects some sequence of length at most $2j+2$, common to all $Y\in \mathcal{Y}_{i-1}$ (see Figure~\ref{fig:tau}). Since $|\tau(\mathcal{X}_i')| \le i+1$, the map $X\mapsto \tau(X, Y)$ on all of $\mathcal{X}_i'$ respects some common sequence of length at most $\ell = 2i(j+1) \le (3j)^2$.
We apply Lemma~\ref{lem:observation} with measures $\mathrm{rot}_{P, x}$ and $\mathrm{rot}_{P, y}$ on $\mathcal{X}_i'$ and $\mathcal{Y}_{i-1}$, respectively. The lemma asserts that there exist sets $\mathcal{X}_i \subseteq \mathcal{X}_i'$ and $\mathcal{Y}_i\subseteq \mathcal{Y}_{i-1}$ such that $\tau$ is constant on $\mathcal{X}_i\times \mathcal{Y}_i$, and
$$
\mathrm{rot}_{P, x}(\mathcal{X}_i) \ge \frac{\mathrm{rot}_{P, x}(\mathcal{X}_{i-1})}{\ell}\quad \text{ and }\quad \mathrm{rot}_{P, y}(\mathcal{Y}_i) \ge \frac{\mathrm{rot}_{P, y}(\mathcal{Y}_{i-1})}{\ell}.
$$
By induction, both quantities are at least $c_{i-1, j} / \ell \ge c_{i,j}$. Note that if $\tau(X, Y) = \perp$ then either $X\in \mathcal{B}_i(P\triangle Y, x)$, or $\mathrm{dist}_P(f(X), Y) < 2$. The latter has probability $O(\|M\||Y|) = o(1)$. Since $(x, y)\in \mathrm{XY}$, for any $Y\in\mathcal{Y}_{i-1}$ we then have
$$
\mathrm{rot}_{P, x}(\tau(\ \cdot\ , Y) = \perp) \le \mathrm{rot}_{P, x}(\tau(\overline U) \le i) + o(1) = o(1).
$$
We conclude that the common value of $\tau$ on $\mathcal{X}_i\times \mathcal{Y}_i$ is not $\perp$, so $(\mathcal{X}_i, \mathcal{Y}_i)$ is compatible.
{\bf Construction, even $i$.} Suppose $\mathcal{X}_{i-1}, \mathcal{Y}_{i-1}$ have been constructed. Lemma~\ref{lem:orient} shows that $\mathcal{X}_i = r_{P, \mathcal{Y}_{i-1}}(\mathcal{X}_{i-1})$ and $\mathcal{Y}_i = \mathcal{Y}_{i-1}$ are compatible. We have $\mathrm{rot}_{P, x}(\mathcal{X}_i) = \mathrm{rot}_{P, x}(\mathcal{X}_{i-1})$.
{\bf Gluing.} We proceed exactly as in \eqref{eq:glue2} to obtain
\al{
R_\mathrm{alt}[\mathcal{X}_{2j}\circ \mathcal{Y}_{2j}] & = \sum_{u, v} R_\mathrm{alt}[\mathcal{X}_{2j}^{\to u}] R(u, v) R_\mathrm{alt}[\mathcal{Y}_{2j}^{\to v}] = \Omega\bfrac{d^{2j+1}}{n}.
}
Since $\mathcal{X}_{2j}\circ \mathcal{Y}_{2j}\setminus (\mathcal{C}_{4j+1} \cup \mathcal{D}_{4j+1}^\theta) \subseteq \mathrm{BW}_{4j+1}^\theta(G)$ for any $(x, y)\in \mathrm{XY}$, summing over $(x, y)\in \mathrm{XY}$ and applying Lemmas~\ref{lem:hitlargeset}~(\ref{item:nohit}) and \ref{lem:improper} gives
$$
R_\mathrm{alt}[\mathrm{BW}_{4j+1}^\theta(G)] = \Omega(nd^{2j+1}).
$$
\end{proof}
\section{Sprinkling}\label{sec:sprinkling}
Suppose $X$ is a finite set, $p : X\to [0, 1]$ a function, and $\mathcal{G} = (X, \mathcal{E})$ a graph. Let $X_p \subseteq X$ be the random set obtained by independently including any $x\in X$ with probability $p(x)$. Suppose for some $r \ge 1$ that $\mathcal{T}$ is a set of paths on $r$ vertices in $\mathcal{G}$. We are interested in the probability that some $T\in \mathcal{T}$ is contained in $X_p$.
For a set $A\subseteq X$ write $p(A) = \sum_{x\in A} p(x)$ and $p[A] = \prod_{x\in A} p(x)$, and for a family $\mathcal{A}$ of sets write $p[\mathcal{A}] = \sum_{A\in \mathcal{A}} p[A]$. Let $\Delta$ be the maximum value of $p(N_\mathcal{G}(x))$ for $x\in X$.
\begin{lemma}\label{lem:sprinkling}
Suppose $r\ge 1$ is an integer, and $\mathcal{T}$ is a set of paths on $r$ vertices in $\mathcal{G} = (X, \mathcal{E})$, such that $p[\mathcal{T}] \ge 6(3\Delta)^{r-2}p(X)$. Then
\al{
\Prob{T\nsubseteq X_p, \text{ all $T\in \mathcal{T}$}} \le \exp\left\{-\frac{p[\mathcal{T}]}{(3\Delta)^{r-1}r^r}\right\}.
}
\end{lemma}
We begin our proof with the following lemma.
\begin{lemma}\label{lem:PQ}
Suppose $r > 1$. Let $\mathcal{T}_1$ be the random set of paths $(x_2,\dots,x_r)$ such that $(x_1,\dots,x_r)\in \mathcal{T}$ for some $x_1\in X_p$. If $p[\mathcal{T}] \ge 6\Delta^{r-2}p(X)$, then
\al{
\Prob{p[\mathcal{T}_1] < \frac{p[\mathcal{T}]}{3\Delta}} \le \exp\left\{-\frac{p[\mathcal{T}]}{3\Delta^{r-1}}\right\}.
}
\end{lemma}
\begin{proof}
For $x\in X$ let $\mathcal{Q}(x)$ be the set of $(x_2,\dots,x_r)$ such that $(x,x_2,\dots,x_r)\in \mathcal{T}$. For $S\subseteq X$ let $\mathcal{Q}(S) = \cup_{x\in S} \mathcal{Q}(x)$. For $S\subseteq X$ and $x\notin S$, say that $x$ is {\em $S$-useful} if
$$
p[\mathcal{Q}(x) \setminus \mathcal{Q}(S)] \ge \frac{p[\mathcal{T}]}{3p(X)}.
$$
\begin{claim}\label{cl:useful}
Suppose $S\subseteq X$ has $p(\mathcal{Q}(S)) < p[\mathcal{T}]/3\Delta$. Then the set $U\subseteq X\setminus S$ of $S$-useful elements has $p(U) \ge p[\mathcal{T}] / 3\Delta^{r-1}$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{cl:useful}]
For $T = (x_2,\dots,x_r)$ let $\mathcal{Q}^{-1}(T)$ be the set of $x$ such that $(x,x_2,\dots,x_r)\in \mathcal{T}$. Then $\mathcal{Q}^{-1}(T) \subseteq N_\mathcal{G}(x_2)$, and $p(\mathcal{Q}^{-1}(T)) \le \Delta$. Let $\mathcal{M}(S)$ be the set of $(x_1,\dots,x_r)\in \mathcal{T}$ with $(x_2,\dots,x_r)\notin \mathcal{Q}(S)$. Then
\begin{equation}\label{eq:MSlo}
p[\mathcal{M}(S)] \ge p[\mathcal{T}] - \Delta p(\mathcal{Q}(S)) \ge \frac23 p[\mathcal{T}].
\end{equation}
For any $x\in X$ we have $p[\mathcal{Q}(x)] \le \Delta^{r-1}$, so
\al{
p[\mathcal{M}(S)] & \le \sum_{x\notin S} p(x) p[\mathcal{Q}(x)\setminus \mathcal{Q}(S)] \\
& \le p(U) \Delta^{r-1} + p(X\setminus S) \frac13 \frac{p[\mathcal{T}]}{p(X)} \le p(U)\Delta^{r-1} + \frac13 p[\mathcal{T}].\label{eq:MShi}
}
Combining \eqref{eq:MSlo} and \eqref{eq:MShi} gives $p(U) \ge p[\mathcal{T}]/3\Delta^{r-1}$.
\end{proof}
Consider sampling $S \subseteq X_p$ by the following procedure.
\begin{enumerate}
\item Initially let $S_0 = \emptyset$ and $Z_0 = \emptyset$. Set $i = 1$.
\item Let $x_i\notin Z_{i-1}$ be an $S_{i-1}$-useful element. Let $Z_i = Z_{i-1}\cup \{x_i\}$, and with probability $p(x_i)$ let $S_i = S_{i-1}\cup\{x_i\}$, otherwise let $S_i = S_{i-1}$.
\item If $p[\mathcal{Q}(S_i)] \ge p[\mathcal{T}] / 3\Delta$, declare \textsc{success} and end the procedure. If $p(Z_i) \ge p[\mathcal{T}] / 3\Delta^{r-1}$, declare \textsc{failure} and end the procedure. Otherwise, increase $i$ by 1 and go to step 2.
\end{enumerate}
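The three-step procedure can be sketched as a simple exploration loop. In the sketch below, `useful(S, Z)` and `Q_mass(S)` are hypothetical oracles standing in for "some $S$-useful element outside $Z$" and the quantity $p[\mathcal{Q}(S)]$ from the proof; they are not part of the original construction.

```python
import random

def explore(p, useful, Q_mass, pT, Delta, r):
    """Sketch of the sampling procedure: grow S by exposing useful
    elements until SUCCESS (Q(S) is heavy enough) or FAILURE (too much
    probability mass p(Z) has been spent).  `p` maps elements to
    probabilities; `useful` and `Q_mass` are hypothetical oracles for
    the quantities defined in the proof."""
    S, Z, pZ = set(), set(), 0.0
    while True:
        x = useful(S, Z)            # step 2: pick an S-useful element
        if x is None:
            return "FAILURE", S
        Z.add(x)
        pZ += p[x]
        if random.random() < p[x]:  # include x with probability p(x)
            S.add(x)
        # step 3: stopping rules
        if Q_mass(S) >= pT / (3 * Delta):
            return "SUCCESS", S
        if pZ >= pT / (3 * Delta ** (r - 1)):
            return "FAILURE", S
```

Claim~\ref{cl:useful} is what guarantees that the call to `useful` succeeds while neither stopping rule has fired.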
By Claim~\ref{cl:useful}, Step 2 can be carried out as long as neither \textsc{success} nor \textsc{failure} has been declared, since then $p(U\setminus Z_{i-1}) > 0$ where $U$ is the set of $S_{i-1}$-useful elements.
Since each $x_i$ is $S_{i-1}$-useful at time of sampling, we have
$$
p[\mathcal{Q}(S_\ell)] \ge \frac{p[\mathcal{T}]}{3p(X)} \sum_{i = 1}^\ell \xi_i,
$$
where the $\xi_i$ are independent indicator random variables. Letting $\xi(\ell) = \sum_{i\le \ell} \xi_i$, we have $\E{\xi(\ell)} = p(Z_\ell)$. If \textsc{failure} is declared, there exists some $\ell$ for which $p(Z_\ell) \ge p[\mathcal{T}] / 3\Delta^{r-1}$ while $\xi(\ell) < p(X) / \Delta = o(p(Z_\ell))$. By the Chernoff bound \eqref{eq:chernoff2} we have
\al{
\Prob{\xi(\ell) < \frac{p(X)}{\Delta} \text{ and } p(Z_\ell) \ge \frac{p[\mathcal{T}]}{3\Delta^{r-1}}} \le \exp\left\{- \frac{p[\mathcal{T}]}{24\Delta^{r-1}}\right\}.
}
Since $p[\mathcal{T}_1] \ge p[\mathcal{Q}(S_\ell)]$, the lemma follows.
\end{proof}
We can now prove Lemma~\ref{lem:sprinkling}.
\begin{proof}[Proof of Lemma~\ref{lem:sprinkling}]
If $r = 1$ then $\mathcal{T}$ is a collection of elements of $X$, and
\begin{equation}\label{eq:ris1}
\Prob{\mathcal{T}\cap X_p = \emptyset} = \prod_{x\in \mathcal{T}} (1-p(x)) \le e^{-p[\mathcal{T}]}.
\end{equation}
Suppose $r > 1$. Let $X_1,\dots,X_r$ be independent random subsets of $X$, each sampling any $x\in X$ with probability $p'(x) = p(x) / r$. Then any $x$ is independently in $X_1\cup\dots \cup X_r$ with probability $1 - (1-p(x)/r)^r \le p(x)$.
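The inclusion bound used here, $1 - (1 - p(x)/r)^r \le p(x)$, is an instance of Bernoulli's inequality $(1+t)^r \ge 1 + rt$ with $t = -p(x)/r$; a quick numerical spot-check:

```python
# Spot-check of 1 - (1 - p/r)**r <= p (Bernoulli's inequality), which
# bounds the probability that x lands in X_1 cup ... cup X_r by p(x).
for p in [0.0, 0.1, 0.5, 0.9, 1.0]:
    for r in [1, 2, 3, 10]:
        assert 1 - (1 - p / r) ** r <= p + 1e-12
```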
Let $\mathcal{T}_0 = \mathcal{T}$, and for $0 < i < r$ let $\mathcal{T}_i$ be the random set of $(x_{i+1},\dots,x_r)$ such that $(x_1,\dots,x_i,x_{i+1},\dots,x_r)\in \mathcal{T}$ for some $x_j\in X_j$, $1\le j \le i$.
Let $\mathcal{E}_i$ denote the event that $p'[\mathcal{T}_i] \ge p'[\mathcal{T}] / (3\Delta)^i$. Lemma~\ref{lem:PQ} shows that for $0 < i < r$,
\al{
\Prob{\overline{\mathcal{E}_i} \mid \mathcal{E}_{i-1}} \le\exp\left\{-\frac{1}{3^i\Delta^{r-1}} p'[\mathcal{T}]\right\}.
}
We then have
$$
\Prob{\overline{\mathcal{E}_{r-1}}} \le \sum_{i=1}^{r-1}\Prob{\overline{\mathcal{E}_i}\mid \mathcal{E}_{i-1}} \le (r-1) \exp\left\{- \bfrac{1}{3\Delta}^{r-1} p'[\mathcal{T}] \right\}.
$$
Finally, note that $\mathcal{T}_{r-1}$ is a set of elements in $X$. Repeating the argument behind \eqref{eq:ris1} gives
$$
\Prob{T\nsubseteq X_p,\text{ all $T\in \mathcal{T}$} \mid \mathcal{E}_{r-1}} \le \exp\left\{- \bfrac{1}{3\Delta}^{r-1} p'[\mathcal{T}]\right\}.
$$
With $p'[\mathcal{T}] = p[\mathcal{T}] / r^r$, this finishes the proof.
\end{proof}
\section{Finishing the high-level proof}
We can now prove Lemma~\ref{lem:thebiglemma}. Suppose $R\in \mathrm{RM}$ and $G\in \mathrm{SE}_i$ with $s_i(G) < \fl{in / 2}$ for some $i\in \{1, 2\}$. Note that $d = \Theta(\ln n)$ since $\gamma_1(R) = 1$. Let $\theta$ tend to infinity arbitrarily slowly, and let $E_\theta$ be the set of edges incident to $S_\theta = S_\theta(R, G)$. Propositions~\ref{prop:augmentingwalks} and \ref{prop:boosterwalk} show, for $i=1, 2$ respectively, that there exists an $r \le 2\mathrm{mix}(R) + 1$ and a set $\mathcal{T}_r$ of edge sets $T$ with $|T| = r$ and $T\cap (G\cup E_\theta) = \emptyset$ such that
$$
R_G[\mathcal{T}_r] = \sum_{T\in \mathcal{T}_r} \prod_{uv\in T} R(u, v) = \Omega(nd^r).
$$
Suppose $E$ is an edge set with $R(E) = o(n\ln n)$, and that $p$ satisfies $p(u, v) \ge R(u, v) / 3$ for all $\{u, v\}\notin E$. Let $X = \binom{V}{2} \setminus (G \cup E_\theta)$. We then have
\al{
\Prob{s_i(G^p) = s_i(G)} = \Prob{T\nsubseteq X_p, \text{ all $T\in \mathcal{T}_r$}}.
}
Let $\mathcal{G} = (X, \mathcal{E})$ be the graph on $X$ where $u_1v_1, u_2v_2\in X$ are adjacent if $G$ contains an edge between $\{u_1,v_1\}$ and $\{u_2,v_2\}$. Then
$$
\Delta = \max_{uv\in X} p(N_\mathcal{G}(uv)) \le 2\theta d.
$$
Let $\mathcal{T}_r(E)$ be the set of $T\in \mathcal{T}_r$ which intersect $E$. Picking $\theta$ so that $R(E)\theta^{r-1} = o(nd)$, we have
$$
R_G[\mathcal{T}_r(E)] \le R(E)\Delta^{r-1} = o(nd^r).
$$
It follows that
$$
p[\mathcal{T}_r] \ge \frac13R_G[\mathcal{T}_r \setminus \mathcal{T}_r(E)] = \Omega(nd^r).
$$
Note that $p(X) = O(nd)$. Lemma~\ref{lem:sprinkling} then gives
$$
\Prob{T\nsubseteq X_p, \text{ all $T\in \mathcal{T}_r$}} = \exp\left\{-\Omega\bfrac{nd^r}{\Delta^{r-1}}\right\} = \exp\left\{-\Omega\bfrac{nd}{\theta^{r-1}}\right\}.
$$
Letting $\theta^{r-1} = o(\sqrt{d})$ and recalling that $d = \Theta(\ln n)$ finishes the proof.
\section{Proofs for Section~\ref{sec:altmix}: alternating walks mix}\label{sec:alternatingproofs}
Suppose $G$ is a graph and $R$ a rate matrix on $V$, with associated transition matrix $M$. In Section~\ref{sec:altmix} we defined the simple, lazy $(R, G)$-alternating random walk, which is a special case of the following definition. Recall that $\widehat N(A) = A\cup N(A)$.
\begin{definition}
Given a graph $G$, a random walk $\pi$ on $V$ is a {\em random $(R, G)$-alternating walk} if the following hold for all $j \ge 0$:
\al{
\pi(w_{2j+1} \mid w_0,\dots,w_{2j}) & = M(w_{2j}, w_{2j+1}), \\
\pi(w_{2j+2} \mid w_0,\dots,w_{2j+1}) & = 0, \quad w_{2j+2}\notin \widehat N_G(w_{2j+1}).
}
\end{definition}
In short, a random $(R, G)$-alternating walk alternates between memoryless transitions weighted by $M$, and (lazy) steps restricted to the edges of $G$. We use $\pi_j$ to denote the measure induced by the $j$-th vertex $w_j$, and note that the initial distribution $\pi_0$ may be any distribution on $V$.
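A minimal sketch of one odd/even step pair of the simple, lazy walk. Here `M_row` (returning the row $M(w,\cdot)$ as a dictionary) and the adjacency map `G_adj` are hypothetical stand-ins for $M$ and $G$:

```python
import random

def alternating_step(w, M_row, G_adj):
    """One odd step (memoryless, weighted by the transition matrix M)
    followed by one even step (lazy, uniform on the closed
    neighbourhood N_hat(w') = {w'} + N_G(w'))."""
    weights = M_row(w)                        # dict v -> M(w, v)
    vs = list(weights)
    w_odd = random.choices(vs, weights=[weights[v] for v in vs])[0]
    # lazy G-step: stay put or move to a G-neighbour, uniformly
    w_even = random.choice([w_odd] + sorted(G_adj.get(w_odd, ())))
    return w_odd, w_even
```

Iterating `alternating_step` from a sample of $\pi_0$ generates the walk $(w_0, w_1, w_2, \dots)$ of the definition.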
Before going into mixing of the random $(R, G)$-alternating walk, we restate and prove Lemma~\ref{lem:improper}. Define $S_{\theta}$ as the set of vertices $u$ with $d_G(u) \ge \theta$ or $d_R(u) \ge \theta d$, and say that a walk avoids $S_\theta$ if it contains no vertex of $S_\theta$.
\begin{lemma}
Suppose $R\in \mathrm{RM}$ and that $G$ is light-tailed, and let $j = O(1)$. Let $\mathcal{C}_j^{\theta}$ be the set of $G$-alternating walks of length $j$ which avoid $S_{\theta}$ and either (a) repeat some vertex, or (b) are not strictly $G$-alternating. If $\theta^{j+1}\|M\| = o(1)$ then $R_\mathrm{alt}[\mathcal{C}_j^{\theta}] = o(nd^{\cl{j/2}})$.
\end{lemma}
\begin{proof}
If a $G$-alternating walk $W = (w_0,\dots,w_j)$ repeats a vertex or has $\{w_{2i}, w_{2i+1}\}\in G$ for some $i$, there must exist some $i$ such that $w_{2i+1}\in \widehat N_G(w_0,\dots,w_{2i})$. If $d_G(w_i) < \theta$ for all $i$, then
$$
M(w_{2i}, \widehat N_G(w_0,\dots,w_{2i})) \le \|M\| (2i+1)\theta,
$$
so $\pi(\mathcal{C}_j^\theta) = O(\theta\|M\|)$ for any random $(R, G)$-alternating walk $\pi$. Note that for any walk $W = (w_0,\dots,w_j)$ which avoids $S_\theta$,
\begin{multline}
R_\mathrm{alt}[W] \le \prod_{i=0}^{\cl{j/2}-1} \frac{\theta d}{d_R(w_{2i})} R(w_{2i}, w_{2i+1}) \prod_{i=0}^{\fl{j/2} - 1} \frac{\theta}{d_G(w_{2i+1})+1}\\
\le \theta^jd^{\cl{j/2}} \times n\pi_{G}(W),
\end{multline}
where $\pi_{G}$ is the simple, lazy $(R, G)$-alternating walk initiated at a vertex chosen uniformly at random. Since $\theta^{j+1}\|M\| = o(1)$, it follows that $R_\mathrm{alt}[\mathcal{C}_j^\theta] = o(nd^{\cl{j/2}})$.
\end{proof}
\subsection{Mixing for the random alternating walk}
The $R$-steps of the $(R, G)$-alternating walk bring $\pi_j$ closer to stationarity by Lemma~\ref{lem:norm}, while the $G$-steps may pull it back. The following lemma bounds the harm done.
\begin{lemma}\label{lem:semirandom}
Suppose $R\in \mathrm{RM}$ with $\lambda(R) = \lambda$ and stationary distribution $\sigma$. Suppose $\pi$ is a random $(R, G)$-alternating walk for some graph $G$ with maximum degree $\theta - 1$ and $d_G(u) = 0$ whenever $d_R(u) \ge \theta d$. Then for $i\ge 0$,
\al{
\mu_\sigma(\pi_{2i+1}) & \le \lambda \mu_\sigma(\pi_{2i}), \label{eq:semirandomodd}\\
\mu_\sigma(\pi_{2i+2})^2 & \le \theta^4\left(\mu_\sigma(\pi_{2i+1})^2 + 1\right).\label{eq:semirandomeven}
}
In particular, if $\lambda\theta^2 = o(1)$ then for all $i\ge 0$,
\begin{equation}\label{eq:justaddxi}
\mu_\sigma(\pi_{2i+1})^2 \le \lambda^{2i}\theta^{4i} \mu_\sigma(\pi_1)^2 + O(\lambda^2\theta^4).
\end{equation}
\end{lemma}
\begin{proof}
Lemma~\ref{lem:norm} immediately gives \eqref{eq:semirandomodd}, and we prove \eqref{eq:semirandomeven}. Note that for any $i\ge 1$ and $u\in V$,
\al{
\pi_{2i}(u) & = \sum_{v\in \widehat N(u)} \pi_{2i-1}(v)\pi(w_{2i}=u\mid w_{2i-1}=v) \le \theta \max_{v\in \widehat N(u)}\pi_{2i-1}(v).
}
Suppose $v\in \widehat N(u)$. Then $u = v$ or $d_R(u) \le \theta d$, since $d_G(u) = 0$ whenever $d_R(u) \ge \theta d$. Since $d_R(v) \ge d$, in either case we conclude that $\sigma(u) / \sigma(v) = d_R(u) / d_R(v) \le \theta$, and
\al{
\mu_\sigma(\pi_{2i})^2 & \le \sum_v \frac{\pi_{2i}(v)^2}{\sigma(v)}\le \sum_v \frac{\theta^2}{\sigma(v)} \max_{u\in \widehat N(v)}\pi_{2i-1}(u)^2.
}
Any vertex $u$ is counted at most $\theta$ times in this sum, so
\al{
\mu_\sigma(\pi_{2i})^2 \le \theta^3\sum_u \frac{\pi_{2i-1}(u)^2}{\sigma(u)}\max_{v\in \widehat N(u)}\frac{\sigma(u)}{\sigma(v)} \le \theta^4 \left(\mu_\sigma(\pi_{2i-1})^2 + 1\right). \label{eq:oddtoeven}
}
This shows \eqref{eq:semirandomeven}. Repeatedly applying \eqref{eq:semirandomodd} and \eqref{eq:semirandomeven} with $\lambda\theta^2 = o(1)$ gives \eqref{eq:justaddxi}.
\end{proof}
Recall that for a family $\mathcal{W}$ of walks and a vertex $v$, $\mathcal{W}^{\to v}$ is the set of walks in $\mathcal{W}$ ending at $v$. Say that a walk $W = (w_0,\dots,w_j)$ is {\em non-lazy} if $w_{i} \ne w_{i+1}$ for all $0\le i < j$.
For any random $(R, G)$-alternating walk $\pi$ we define a variant $\pi^\theta$ by the following holding for any $W = (w_0,\dots,w_{2j-1})$: if $w_{2j-1}\in S_\theta$ then $w_{2j} = w_{2j-1}$, and if $w_{2j-1}\notin S_\theta$ then
\begin{equation}
\pi^\theta(w_{2j}\mid W) = \left\{\begin{array}{ll}
0, & w_{2j}\in S_\theta, \\
\pi(w_{2j}\mid W) + \sum_{v\in S_\theta} \pi(v\mid W), & w_{2j} = w_{2j-1}, \\
\pi(w_{2j}\mid W), & w_{2j}\notin \{w_{2j-1}\}\cup S_\theta.
\end{array}\right.\label{eq:pith}
\end{equation}
Then $\pi^\theta$ is a random $(R, G^\theta)$-alternating walk, where $G^\theta\subseteq G$ is obtained from $G$ by removing any edge incident to $S_\theta$. The walk $\pi^\theta$ is designed to satisfy the conditions of Lemma~\ref{lem:semirandom} as well as satisfying $\pi^\theta(\mathcal{W}) = \pi(\mathcal{W})$ for any family $\mathcal{W}$ of walks which either (a) are non-lazy and avoid $S_\theta$, or (b) have $w_i\notin \widehat N(S_\theta)$ for odd $i$.
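The even-step modification in \eqref{eq:pith} can be sketched as a transformation of a single kernel row. Below, `kernel_row` (a dictionary $v \mapsto \pi(w_{2j} = v \mid W)$) is a hypothetical stand-in for the conditional law of $w_{2j}$:

```python
def lazy_truncate(kernel_row, w_prev, S_theta):
    """Even-step kernel of pi^theta: the mass pi places on S_theta is
    moved onto the lazy step w_{2j} = w_{2j-1}, and a walk already at
    S_theta stays put with probability 1."""
    if w_prev in S_theta:
        return {w_prev: 1.0}
    row, shifted = {}, 0.0
    for v, q in kernel_row.items():
        if v in S_theta:
            shifted += q          # mass to redistribute
        else:
            row[v] = row.get(v, 0.0) + q
    # the lazy entry absorbs everything that pointed into S_theta
    row[w_prev] = row.get(w_prev, 0.0) + shifted
    return row
```

Since mass is only moved, not created or destroyed, each row of the modified kernel still sums to 1, and rows supported off $\widehat N(S_\theta)$ are unchanged.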
\begin{lemma}\label{lem:RGmix}
Suppose $R \in \mathrm{RM}$ with $\lambda(R) = \lambda$, suppose $F$ is a graph with maximum degree $\Delta(F)\le 2$, and suppose $\pi$ is a random $(R, F)$-alternating walk. Suppose $\theta \le \lambda^{-1/4}$ tends to infinity with $n$, and let $j\ge 2-2\frac{\ln (n\|M\|)}{\ln\lambda}$ be an integer. Let $c > 0$ be constant. Suppose $\mathcal{W}$ is a set of non-lazy $S_\theta(R, F)$-avoiding walks of length $2j$, such that $\pi(\mathcal{W}) \ge c$. Then there exist a constant $\rho > 0$ and a set $U_{2j}$ with $|U_{2j}|\ge \rho n$ such that any $u\in U_{2j}$ has $\pi(\mathcal{W}^{\to u}) \ge \rho/ n$.
\end{lemma}
\begin{proof}
Let $S_\theta = S_{\theta}(R, F)$ and consider the walk $\pi^\theta$ defined above. Since $\theta \le \lambda^{-1/4}$ and $\lambda=o(1)$, Lemma~\ref{lem:semirandom} implies that
\begin{equation}\label{eq:downtopi1}
\mu_\sigma(\pi_{2j-1}^\theta)^2 \le \lambda^{j-1}\mu_\sigma(\pi_1^\theta)^2 + O(\lambda).
\end{equation}
We have $\pi_1^\theta(u) = \sum_v \pi_0^\theta(v)M(v, u) \le \|M\|$ for all $u$, and since $\sigma(u) \ge 1/bn$ for all $u$,
$$
\mu_\sigma(\pi_1^\theta)^2 \le \sum_u \frac{\pi_1^\theta(u)^2}{\sigma(u)} \le bn^2\|M\|^2.
$$
So for $j\ge 2 - 2\ln(n\|M\|)/\ln(\lambda)$, \eqref{eq:downtopi1} becomes $\mu_\sigma(\pi_{2j-1}^\theta)^2 = O(\lambda)$. Define $T$ as the set of vertices $u$ with $|\pi_{2j-1}^\theta(u) - \sigma(u)| < \lambda^{1/3}\sigma(u)$. Then by definition of $\mu_\sigma$,
\al{
O(\lambda) = \mu_\sigma(\pi_{2j-1}^\theta)^2 \ge \sum_{v\notin T} \left(\frac{\pi_{2j-1}^\theta(v)}{\sigma(v)} - 1\right)^2\sigma(v) \ge \lambda^{2/3}\sigma(\overline T).
}
Then $\sigma(T) = 1-O(\lambda^{1/3})$. By definition of $T$,
\al{
\pi_{2j-1}^\theta(T) & \ge (1-\lambda^{1/3})\sigma(T) = 1-O(\lambda^{1/3}), \label{eq:pi2jT}
}
and for any vertex set $A$,
\begin{equation}\label{eq:Abound}
\pi_{2j-1}^\theta(A) \le (1+\lambda^{1/3}) \sigma(A\cap T) + \pi_{2j-1}^\theta(\overline T) \le \sigma(A) + O(\lambda^{1/3}).
\end{equation}
Let $\mathcal{W}_{2j-1}\subseteq \mathcal{W}$ be the set of walks obtained by removing the final vertex from walks in $\mathcal{W}$. Note that $c \le \pi(\mathcal{W}_{2j-1}) = \pi^\theta(\mathcal{W}_{2j-1})$. For any $v\in V$ let $\mathcal{W}_{2j-1}^{\to v}$ be the set of walks in $\mathcal{W}_{2j-1}$ which end at $v$, and let $U_{2j-1}$ be the set of $v$ such that $\pi^\theta(\mathcal{W}_{2j-1}^{\to v}) \ge c/2n$. Then
\begin{multline}
c \le \pi^\theta(\mathcal{W}_{2j-1})
= \sum_v \pi^\theta(\mathcal{W}_{2j-1}^{\to v}) \\
\le \frac{c}{2n}|\overline{U_{2j-1}}| + \pi_{2j-1}^\theta(U_{2j-1})
\le \frac{c}{2} + \sigma(U_{2j-1}) + O(\lambda^{1/3}).
\end{multline}
Since $\lambda = o(1)$, we conclude that $\sigma(U_{2j-1}) \ge c/3$. By Lemma~\ref{lem:sublinear} we then have $|U_{2j-1}| \ge c' n$ for some constant $c' > 0$. Let
$$
U_{2j} = \left\{v : \pi^\theta(\mathcal{W}^{\to v}) \ge \frac{1}{3}\frac{c}{2n}\right\}.
$$
Since $\Delta(F)\le 2$, every $u\in U_{2j-1}$ has at least one neighbour in $U_{2j}$, counting self-loops, and $|U_{2j}| \ge |U_{2j-1}| / 3 \ge c' n / 3$. Since $\pi(\mathcal{W}^{\to v}) = \pi^\theta(\mathcal{W}^{\to v})$ for all $v$, this finishes the proof with $\rho = \min\{c'/3, c/6\}$.
\end{proof}
\subsection{Hitting times for sets}
For a random walk $(w_0,w_1,\dots)$ and $S\subseteq V$, recall that $\tau(S)$ is the smallest $i$ for which $w_i\in S$.
\begin{lemma}\label{lem:badwalks}
Suppose $R\in \mathrm{RM}$ and that $G$ is light-tailed. Suppose $\pi$ is a random $(R, G)$-alternating walk with $\pi_0(u) = 1/|A|$ for $u\in A$, where $A\subseteq V$ has size $|A| = \Omega(n)$.
\begin{enumerate}[(i)]
\item If $j\ge 0$ is constant and $|S| = o(n)$, then
$$
\pi(\tau(S) \le 2j) = o(1).
$$
\item If $|S| = \Omega(n)$ then
$$
\pi(\tau(S) \le 1) = \Omega(1).
$$
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (i). Let $\theta = o(n/|S|)$ with $\theta \le \lambda^{-1/4}$ tend to infinity with $n$. We may assume that $S_\theta\subseteq S$, since replacing $S$ by $S\cup S_\theta$ only increases the probability in question. Note that since $G$ is light-tailed,
$$
|\widehat N(S)| \le |\widehat N(S\setminus S_\theta)| + |\widehat N(S_\theta)| \le \theta|S| + o(n) = o(n).
$$
Note that either $\tau(S) = 0$ or $\tau(S) \ge \tau_\mathrm{odd}(\widehat N(S))$. We then have
\begin{equation}\label{eq:piz}
\pi(\tau(S) \le 2j) \le \pi_0(S) + \pi(\tau_\mathrm{odd}(\widehat N(S)) < 2j \mid \overline S).
\end{equation}
The first term is at most $|S| / |A| = o(1)$.
Let $\pi^\theta$ be the modification of $\pi$ defined in \eqref{eq:pith}. Note that since $S_\theta \subseteq S$, any walk $W\in \{\tau_\mathrm{odd}(\widehat N(S)) < 2j\}$ has $\pi^\theta(W) = \pi(W)$. Since $\sigma(u) \ge 1/bn$ for all $u$ for some constant $b\ge 1$,
\al{
\mu_\sigma(\pi_0^\theta)^2 & = \left(\sum_{u\in A} \frac{(1/|A|)^2}{\sigma(u)}\right) - 1 \le \frac{bn}{|A|} - 1 = O(1).\label{eq:pi0lin}
}
By Lemma~\ref{lem:semirandom} and since $\theta\le\lambda^{-1/4}$, for any odd $i \ge 1$ we have $\mu_\sigma(\pi_{i}^\theta)^2 \le \mu_\sigma(\pi_1^\theta)^2 + O(\lambda) = O(\lambda)$.
As in \eqref{eq:Abound}, we have $\pi_i^\theta(\widehat N(S)) \le \sigma(\widehat N(S)) + o(1)$. Then
$$
\pi(\tau(S)\le 2j) \le o(1) + \pi^\theta(\tau_\mathrm{odd}(\widehat N(S)) < 2j) \le \sum_{\substack{i < 2j \\ i \text{ odd}}} \pi_i^\theta(\widehat N(S)) = o(1).
$$
Part (ii) follows from $\mu_\sigma(\pi_1)^2 = O(\lambda)$. Applying \eqref{eq:Abound} to the complement of $S$, we have
$$
\pi(\tau(S) \le 1) \ge \pi_1(S) \ge \sigma(S) - o(1),
$$
and since $|S| = \Omega(n)$, Lemma~\ref{lem:sublinear} gives $\sigma(S) = \Omega(1)$.
\end{proof}
\section{A low-degree expander}\label{sec:Hprops}
Recall that $G_{n, R}(t)$ is constructed by letting $E(u, v)$ be independent exponential random variables with rate $R(u, v)$ for all $\{u, v\}$, including any edge $\{u, v\}$ with $E(u, v) \le t$ (note that $E(u, v) = E(v, u)$). Let $D\ge k$ be an integer and define
$$
T_D(u) = \inf\{t > 0 : |\{v : E(u, v) \le t\}| \ge D\}
$$
to be the random time at which $u$ attains degree $D$. We define a graph $H(t) \subseteq G_{n, R}(t)$ by including an edge $\{u, v\}$ whenever $E(u, v)\le t$ and $E(u, v) \le \max\{T_D(u), T_D(v)\}$, and let $H = H(\tau_k)$.
\begin{lemma}\label{lem:Hprops}
Suppose $R \in \mathrm{RM}$ and $k\ge 1$. There exists some $D = O(1)$ such that the following hold.
\begin{enumerate}[(i)]
\item Let $\theta$ tend to infinity with $n$. Letting $S_{\theta}$ denote the set of $u$ with $d_H(u) \ge \theta$ or $d_R(u) \ge \theta d$, with high probability $|\widehat N_H(S_{\theta})| = o(n)$.
\item There exists a constant $\beta = \beta(R, k) > 0$ such that with high probability, every $|A| < \beta n$ has $|N_{H}(A)| \ge k|A|$.
\end{enumerate}
\end{lemma}
We prove Lemma~\ref{lem:Hprops} over the next few sections.
\subsection{$H$ and the $D$-out graph}\label{sec:Dout}
For all ordered pairs $(u, v)$, let $X(u, v)$ be independent exponential random variables with rate $R(u, v) / 2$. Define
\al{
T_D^+(u) & = \inf\{t > 0 : |\{v : X(u, v) \le t\}| \ge D\}, \\
T_D^-(v) & = \inf\{t > 0 : |\{u : X(u, v) \le t\}| \ge D\}.
}
Define two undirected graphs on $V$ by
\al{
H_D^+ & = \{\{u, v\} : X(u, v) \le T_D^+(u)\}, \\
H_D^- & = \{\{u, v\} : X(u, v) \le T_D^-(v)\}.
}
Then $H_D^+$ and $H_D^-$ are equal in distribution, the common distribution being the $R$-weighted $D$-out random graph $\mathcal{G}_{R, D}$, defined as follows. Each $u$ independently samples $D$ vertices $N(u)$ chosen without replacement with probability proportional to $R(u, \cdot)$. Let $\vec{\mathcal{G}}_{R, D}$ be the graph with edges $(u, v)$ for $v\in N(u)$. Then $\mathcal{G}_{R, D}$ is obtained by ignoring orientations and merging parallel edges in $\vec{\mathcal{G}}_{R, D}$.
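For concreteness, sampling $\mathcal{G}_{R, D}$ can be sketched as below (illustrative only; the sequential weighted draws implement choosing $D$ vertices without replacement with probability proportional to $R(u, \cdot)$):

```python
import random

def d_out_graph(R, D):
    """Sample the R-weighted D-out graph: each u picks D distinct
    neighbours without replacement, proportionally to R(u, .); then
    orientations are dropped and parallel edges merged."""
    n = len(R)
    edges = set()
    for u in range(n):
        pool = [v for v in range(n) if v != u and R[u][v] > 0]
        weights = [R[u][v] for v in pool]
        for _ in range(min(D, len(pool))):
            # one weighted draw from the remaining pool
            r = random.random() * sum(weights)
            acc = 0.0
            for i, w in enumerate(weights):
                acc += w
                if r <= acc:
                    break
            v = pool.pop(i)
            weights.pop(i)
            edges.add((min(u, v), max(u, v)))
    return edges
```

Since each vertex chooses $D$ distinct neighbours, every vertex of the merged graph has degree at least $D$, and the graph has at most $nD$ edges.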
We couple $H_D^+$ and $H_D^-$ to $H$ by letting $E(u, v) = \min\{X(u, v), X(v, u)\}$. Then $H\subseteq H_D^+\cup H_D^-$. Indeed, suppose $\{u, v\}\in H$. If $X(u, v) \le X(v, u)$ then
\al{
X(u, v) & = E(u, v) \le \max\{T_D(u), T_D(v)\} \le \max\{T_D^+(u), T_D^-(v)\},
}
so $\{u, v\}\in H_D^+\cup H_D^-$. The same argument with the signs reversed holds if $X(v, u) \le X(u, v)$.
\subsection{Lemma~\ref{lem:Hprops} (i): degrees in $H$}
We have $S_\theta = A_\theta \cup B_\theta$ where $A_\theta = \{v : d_H(v) \ge \theta\}$ and $B_\theta = \{u : d_R(u) \ge \theta d\}$, and
\al{
|\widehat N(S_{\theta})| & \le |\widehat N(B_\theta\setminus A_\theta)| + |\widehat N(A_{\theta})| \le \theta |B_\theta| + |\widehat N(A_\theta)|.
}
Letting $c = 1/(20D)$, we bound
\begin{multline}
|\widehat N_H(A_\theta)| \le \sum_{\ell \ge \theta} (\ell+1)|\{v : d_H(v) = \ell\}| = \theta|A_\theta| + \sum_{\ell \ge \theta} |A_\ell| \\
\le \theta|A_\theta \setminus B_{c\theta}| + \theta|B_{c\theta}| + \sum_{\ell \ge \theta}(|A_\ell\setminus B_{c\ell}| + |B_{c\ell}|).
\end{multline}
We bound $|A_\ell \setminus B_{c\ell}|$. The discussion in Section~\ref{sec:Dout} shows that $H\subseteq H_D^+\cup H_D^-$ where $H_D^+ \stackrel{d}{=} H_D^- \stackrel{d}{=} \mathcal{G}_{R, D}$. Letting $d(u)$ denote degrees in $\mathcal{G}_{R, D}$,
\al{
\Prob{d_H(u) \ge \ell} \le 2\Prob{d(u) \ge \ell/2}.
}
Let $X_u$ be the number of vertices $v$ with $u\in N(v)$. If $d(u) \ge \ell/2$ then $X_u \ge \ell/2-D \ge \ell/4$. Vertices $v\ne u$ independently have $u\in N(v)$ with probability at most $DM(v, u)$ by Lemma~\ref{lem:exprank}. Since $M(v, u) = d_R(u)M(u, v)/d_R(v)$, and $d_R(u)/d_R(v) \le \ell/(20D)$ for $u\notin B_{c\ell}$, we have
$$
\E{X_u} = \sum_{v} \Prob{u\in N(v)} \le D\sum_v M(v, u) \le \frac{\ell}{20}.
$$
By the Chernoff bound~\eqref{eq:chernoff4}, we have
\begin{equation}\label{eq:dHbound}
\Prob{d_H(u) \ge \ell} \le 2\Prob{X_u \ge \frac{\ell}{4}} \le 2\bfrac{1}{5}^{\ell/2},\quad u\notin B_{c\ell}.
\end{equation}
It follows that $\mathbb{E}|A_\ell\setminus B_{c\ell}| = ne^{-\Omega(\ell)}$. Since $R\in \mathrm{RM}$, there are constants $b, b_1 > 0$ and $\alpha\in [0, 1/2)$ such that
\begin{equation}\label{eq:detpowlaw}
\frac{t|B_t|}{b_1n} \le \sigma(B_t) \le b\bfrac{|B_t|}{n}^{1-2\alpha}.
\end{equation}
If $\alpha > 0$ then $|B_t| \le b_2 t^{-\frac{1}{1-\alpha}} n$ for some $b_2 > 0$. Then
\al{
\mathbb{E}|\widehat N(S_\theta)| & \le \theta|B_\theta| + \theta|B_{c\theta}| + \theta\mathbb{E}|A_\theta| + \sum_{\ell \ge \theta} |B_{c\ell}| + \mathbb{E}|A_\ell\setminus B_{c\ell}| \\
& \le \left(\frac{b_2}{\theta^{\frac{\alpha}{1-\alpha}}} + \frac{b_2}{(c\theta)^{\frac{\alpha}{1-\alpha}}} + \theta e^{-\Omega(\theta)} + \sum_{\ell\ge\theta}\frac{b_2}{(c\ell)^{\frac{1}{1-\alpha}}} + e^{-\Omega(\ell)} \right)n.
}
For $\theta$ tending to infinity, Markov's inequality shows that $|\widehat N(S_\theta)| = o(n)$ whp. If $\alpha = 0$ then~\eqref{eq:detpowlaw} gives $|B_t| = 0$ for any $t$ tending to infinity, and we again conclude that $|\widehat N(S_\theta)| = o(n)$ whp.
\subsection{Lemma~\ref{lem:Hprops} (ii): expansion in $H$}
Note that the distribution of $H$ is unaffected by scaling $R$, and we may assume that $R$ is scaled so that $\gamma_1(R) = 1$, and in particular $1-\varepsilon \le \tau_k \le 1+\varepsilon$ whp for any $\varepsilon \gg \frac{\ln\ln n}{\ln n}$, by Lemma~\ref{lem:threshold}. Let $\mathbb{S}$ be the set of vertices $u$ with degree less than $D$ in $G_{n, R}(1-\varepsilon)$. For $A\subseteq V$, let $A_1 = A\cap \mathbb{S}$ and $A_2 = A\setminus \mathbb{S}$, and note that
\al{
|N_{H}(A)| & = |N_H(A_1)\setminus A_2| + |N_H(A_2) \setminus \widehat N_H(A_1)| \\
& \ge |N_H(A_1)| + |N_{H}(A_2)| - |A_2| - e_H(A_2, \widehat N(\mathbb{S})).
}
We proceed in three parts. Firstly, we show that whp no vertex in $H$ has two neighbours in $\mathbb{S}$, and that $\mathbb{S}$ contains no edges, which implies that $|N_H(A_1)| \ge k|A_1|$ since $H$ has minimum degree at least $k$. Secondly, we note that $N_H(A_2) = N_{H(\infty)}(A_2)$ since $A_2\cap\mathbb{S} = \emptyset$, and show that whp $|N_{H(\infty)}(A)| \ge \frac{D}{16}|A|$ for all $|A| \le \beta n$, if $D$ is large enough. Lastly, we show that whp $e_H(u, \widehat N(\mathbb{S})) \le \frac{D}{17}$ for all $u$, if $D$ is large enough. We conclude that if $D$ is large enough then whp, for all $|A| \le \beta n$,
$$
|N_H(A)| \ge k|A_1| + \frac{D}{16}|A_2| - |A_2| - \frac{D}{17}|A_2| \ge k|A|.
$$
\subsubsection{Part 1}
Let $t = 1-\varepsilon$. For an edge set $F$, let $\mathbb{S}_F(t)\subseteq \mathbb{S}(t)$ be the set of vertices $u$ with degree less than $D$, not counting the edges in $F$. Letting $\mathcal{T}$ denote the event that $t \le \tau_k\le 2$, we have
\al{
\{u, v\in \mathbb{S}(t)\}\cap \{F\subseteq H\}\cap \mathcal{T} \subseteq \{u,v\in \mathbb{S}_F(t)\} \cap \{F\subseteq G_{n, R}(2)\}.
}
The two events in the right-hand side are independent, and we first use Lemma~\ref{lem:degreebound} to bound
\al{
\Prob{e_H(\mathbb{S}(t)) > 0}
& \le \Prob{\overline \mathcal{T}} + \sum_{u, v} \Prob{u, v\in \mathbb{S}_{\{uv\}}(t)}\Prob{E(u, v) \le 2} \\
& \le o(1) + 2\|R\|\sum_{u, v}p_\varepsilon(u)p_\varepsilon(v), \label{eq:ss1}
}
for some $p_\varepsilon(u) = e^{-(1-\varepsilon)d_R(u) + O(\ln d_R(u))}$. Likewise, the probability that some $w$ has two neighbours in $\mathbb{S}$ is bounded by
\al{
4\sum_{u,v,w}p_\varepsilon(u)p_\varepsilon(v) R(u, w)R(v, w) & \le 4\|R\|\sum_{u, v} p_\varepsilon(u)p_\varepsilon(v)d_R(v) \\
& \le \|R\|\sum_{u, v}p_\varepsilon(u)p_\varepsilon(v), \label{eq:ss2}
}
where $4d_R(v)$ is absorbed into the error term of $p_\varepsilon(v)$. We bound $\sum_u p_\varepsilon(u)$. Recall that $\gamma_1(R) = \sum_u e^{-d_R(u)} = 1$ by choice of scaling. Let $U$ be the set of $u$ with $d_R(u) \le 2\ln n$. Then, as $p_\varepsilon(u) \le e^{-(1-2\varepsilon)d_R(u)}$,
\al{
\sum_u p_\varepsilon(u) & \le e^{4\varepsilon\ln n}\sum_{u\in U} e^{-d_R(u)} + |\overline U|e^{-(1-2\varepsilon)2\ln n} \le 2n^{4\varepsilon}.
}
We have $\|R\|n^{8\varepsilon} = o(1)$, and conclude that both \eqref{eq:ss1} and \eqref{eq:ss2} are $o(1)$.
\subsubsection{Part 2}
Let $m\ge 1$ be some integer to be chosen. Consider the $2m$-out graph $\mathcal{G}_{R, 2m}$. Let $N(u)$ be the $2m$ vertices chosen by $u$, independent for all $u$. Fix some set $A\subseteq V$ with $|A| = a \le \beta n$, let $\kappa = ((m+1)a/n)^c$ with $c > 0$ as in Lemma~\ref{lem:EML}~(\ref{item:kappa}). Note that $\kappa$ can be made smaller than any positive constant by letting $\beta$ be small enough, and we choose $\beta$ sufficiently small to allow the Chernoff bounds below.
Consider the following procedure. Initially set $B_0 = A$. For $1 \le i \le 3a/4$ do as follows. Let $u_i\in A \setminus\{u_1,\dots,u_{i-1}\}$ be such that $M(u_i, B_{i-1}) < \kappa$. Note that this is possible for $i\le 3a/4$ by Lemma~\ref{lem:EML}~(\ref{item:kappa}) since $A\subseteq B_{i-1}$ and $|B_{i-1}| \le (m+1)a$. Reveal vertices of $N(u_i)$ until (a) at least $m$ vertices not in $B_{i-1}$ have been found, in which case we add those $m$ vertices to $B_{i-1}$ to form $B_i$, or (b) all of $N(u_i)$ has been revealed. Let $X_i = 1$ if (a) occurs and 0 otherwise.
When a vertex of $N(u_i)$ is revealed it has probability at most $2\kappa$ of being in $B_{i-1}$ (with the factor 2 accounting for the choices already made). So, conditional on the procedure so far, the probability that $X_i = 0$ is at most
\al{
\Prob{\Bin{2m}{2\kappa} \ge m} \le \bfrac{4\kappa m}{m}^{m/2},
}
by the Chernoff bound \eqref{eq:chernoff4}. With $p = (4\kappa)^{m/2}$ we then have
\al{
\Prob{|N(A)| < \frac{m}{2}|A|} \le \Prob{\sum_{i=1}^{3a/4} X_i < \frac{a}{2}} \le \Prob{\Bin{\frac{3a}{4}}{p} \ge \frac{a}{4}}.
}
Again applying \eqref{eq:chernoff4}, we obtain
\al{
\Prob{\exists |A| \le \beta n : |N(A)| < \frac{m}{2}|A|} & \le \sum_{a\le \beta n} \binom{n}{a} \bfrac{3ap/4}{a/4}^{a/8} \\
& \le \sum_{a\le \beta n} \left(\frac{ne}{a} (3p)^{1/8}\right)^a.
}
We have $p^{1/8} = f(m)(a/n)^{cm/16}$ for some function $f(m)$. Choosing $m > 16/c$, and $\beta$ small enough, we conclude that this sum is $o(1)$. So $\mathcal{G}_{R, 2m} \in \mathcal{E}_{m/2}$ whp, where
$$
\mathcal{E}_{m/2} = \left\{|N(A)| \ge \frac{m}{2}|A| \text{ for all $|A| \le \beta n$}\right\}.
$$
Let $D = 4m$. Condition on the whp events $H_{D/2}^+\in\mathcal{E}_{m/2}$ and $H_{D/2}^-\in \mathcal{E}_{m/2}$, and let $|A| \le \beta n$. Note that for each $u$, $N_{H(\infty)}(u)$ contains at least one of $N^+(u)$ and $N^-(u)$. Let $A^+$ be the set of $u$ with $N^+(u)\subseteq N_{H(\infty)}(u)$, and suppose without loss of generality that $|A^+| \ge |A|/2$. Then
\al{
|N_{H(\infty)}(A)| & \ge |N^+(A^+)| \ge \frac{m}{2}|A^+| \ge \frac{D}{16}|A|.
}
For $D$ large enough, we conclude that $H(\infty) \in \mathcal{E}_{D/16}$ whp.
\subsubsection{Part 3}
Let $t = 1+\varepsilon$. Fix $u\in V$ and let $\mathbb{S}_u\supseteq \mathbb{S}$ be the set of vertices $v$ with degree less than $D$ in $G_{n, R}(t)$, not counting edges incident to $u$. Let $X(u) = |N_t(u)\cap \widehat N_t(\mathbb{S}_u)|$. If $\tau_k\le t$ we then have $e_H(u, \widehat N_H(\mathbb{S})) \le X(u)$ since $H\subseteq G_{n, R}(t)$. Note that $N_t(u)$ and $\widehat N_t(\mathbb{S}_u)$ are independent. Conditional on $E(v, w)$ for all $v,w\ne u$, the expected value of $X(u)$ is
$$
tR(u, \widehat N_t(\mathbb{S}_u)) \le 2\|R\| |N_t(\mathbb{S}_u)| \le 2D \|R\||\mathbb{S}_u|.
$$
Let $\phi$ be such that $\|R\|^{\phi/2} \le n^{-2}$. The Chernoff bound \eqref{eq:chernoff4} then gives
\al{
\Prob{X(u) \ge \phi\ \middle|\ |\mathbb{S}_u| = n^{o(1)} } & \le \bfrac{\E{X}}{\phi}^{\phi/2} \\
& = O\left((\|R\||N_t(\mathbb{S}_u)|)^{\phi/2}\right) = o(n^{-1}).
}
Since $\tau_k \le 1+\varepsilon$ whp by Lemma~\ref{lem:threshold}, we conclude that $e_H(u, \widehat N_H(\mathbb{S})) < \phi$ for all $u$ whp.
\section{Proofs for Section~\ref{sec:prelims}}\label{sec:prelimproofs}
\subsection{Degrees}
\begin{proof}[Proof of Lemma~\ref{lem:degreebound}]
Suppose $X = X_1 + \dots + X_n$ where the $X_i$ are independent indicator random variables with $\E{X_i} = 1-e^{-\mu_i}$ and $\mu = \mu_1+\dots+\mu_n$, where $\mu_i /\mu \le \varepsilon = o(1)$ for all $i$, and $\mu$ tends to infinity with $n$. It is not hard to show that
\al{
\Prob{X \le \ell} & = \frac{e^{-\mu}\mu^\ell}{\ell!}\left(1 + O\bfrac{\varepsilon}{\mu}\right).\label{eq:Pxl}
}
Define $\mathrm{Po}(\mu, \ell) = e^{-\mu}\mu^\ell/\ell!$.
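As a quick numerical illustration of \eqref{eq:Pxl} (a sanity check only, with arbitrarily chosen parameters): for indicators with $\E{X_i} = 1-e^{-\mu_i}$ one has $\Prob{X = 0} = e^{-\mu}$ exactly, and for fixed $\ell$ and large $\mu$ the ratio $\Prob{X\le\ell}/\mathrm{Po}(\mu, \ell)$ is close to 1.

```python
import math

# 200 indicators, each with mu_i = 0.1, so mu = 20 and eps = 0.005
mus = [0.1] * 200
mu = sum(mus)
ps = [1 - math.exp(-m) for m in mus]
l = 2

# exact P(X = j) for j <= l via dynamic programming over the indicators
probs = [1.0] + [0.0] * (l + 1)
for p in ps:
    for j in range(l + 1, 0, -1):
        probs[j] = probs[j] * (1 - p) + probs[j - 1] * p
    probs[0] *= 1 - p
exact = sum(probs[: l + 1])
approx = math.exp(-mu) * mu**l / math.factorial(l)
ratio = exact / approx
```

Here `probs[0]` reproduces $e^{-\mu}$ up to floating-point error, and the ratio is $1 + O(\ell/\mu)$ rather than exactly 1, consistent with the top Poisson term dominating the tail.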
Let $R\in \mathrm{RM}(1)$ and let $t = \Omega(1)$. For any vertex $u$ and any set $S\subseteq V$ with $|V\setminus S| = O(1)$, $e_t(u, S)$ satisfies the above with
$$
\E{e_t(u, S)} = tR(u, S) = td_R(u) - o(1).
$$
With $d_R(u)\ge d$ tending to infinity we then have
\al{
\Prob{e_t(u, S) \le \ell} = (1+o(1))\mathrm{Po}(td_R(u), \ell) = e^{-td_R(u) + O(\ln d_R(u))}.
}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:threshold}]
Let $P\in \mathrm{RM}$ be a matrix with $\gamma_k(P) \to \gamma_k\in (0, \infty)$. Note that $d = \min d_P(u) = \Theta(\ln n)$ tends to infinity with $n$. Let $U$ be a set of $\ell$ distinct vertices, let $k > 0$, and let $0 < k_u \le k$ for each $u\in U$, with $\sum_{u\in U}(k-k_u) = 2m$ for some $m\ge 0$. Consider the graph $G_{n, P}$. By~\eqref{eq:Pxl},
\al{
\Prob{e(u, \overline U) < k_u, \text{ all $u\in U$}} & = \prod_{u\in U} \Prob{e(u, \overline U) < k_u} \\
& = (1+o(1)) \prod_{u\in U} \mathrm{Po}(d_P(u), k_u-1) \label{eq:exactdegree} \\
& \le \frac{1+o(1)}{d^{2m}} \prod_{u\in U} \mathrm{Po}(d_P(u), k-1). \label{eq:approxdegree}
}
Let $\mathcal{E}_m$ be the event that $U$ contains exactly $m$ edges. Then $\mathcal{E}_m$ is independent of $\{e(u, \overline U) : u\in U\}$, and $\Prob{\mathcal{E}_m} = O(\|P\|^m)$ and $\Prob{\mathcal{E}_0} = 1-o(1)$. For $k > 0$ we then have, using both \eqref{eq:exactdegree} and \eqref{eq:approxdegree},
\al{
\Prob{d(u) < k\ \forall u\in U} & = \sum_m\Prob{\mathcal{E}_m}\sum_{\sum k_u = k\ell-2m} \Prob{e(u, \overline U) < k_u \forall u\in U} \\
& \le \left(\prod_{u\in U} \mathrm{Po}(d_P(u), k-1)\right)\left(\Prob{\mathcal{E}_0} + \sum_{m > 0} O\bfrac{\|P\|^m}{d^{2m}}\right) \\
& = (1+o(1))\prod_{u\in U}\mathrm{Po}(d_P(u), k-1).
}
Letting $X_k$ denote the number of vertices $u$ of $G_{n, P}$ with $d(u) < k$,
\al{
\E{\binom{X_k}{\ell}} & = \sum_{|U| = \ell} \Prob{d(u) < k, \text{ all $u\in U$}} \\
& = \sum_{|U| = \ell}(1+o(1))\prod_{u\in U} \mathrm{Po}(d_P(u), k-1) = (1+o(1)) \frac{\gamma_k(P)^\ell}{\ell!}.
}
If $\gamma_k(P)$ converges to some $\gamma_k < \infty$, the method of moments (see e.g.~\cite{Durrett10}) implies that $X_k$ converges to a Poisson random variable with expected value $\gamma_k$, and
$$
\lim_{n\to \infty} \Prob{\delta(G_{n, P}) \ge k} = \lim_{n\to \infty} \Prob{X_k = 0} = e^{-\gamma_k}.
$$
If $\gamma_k(P)$ diverges to infinity, we note that $\Var{X_k} = o(\E{X_k})$, and Chebyshev's inequality implies that $\Prob{X_k > 0} \to 1$.
To obtain a bound for $\tau_k$, let $\varepsilon \gg \frac{\ln\ln n}{\ln n}$ and suppose $\gamma_1(R) = 1$. Note that $G_{n, R}(1+\varepsilon) \stackrel{d}{=} G_{n, P}$ with $P(u, v) = 1 - e^{-(1+\varepsilon)R(u, v)}$. This matrix has $d_P(u) + O(\ln d_P(u)) \ge (1+\varepsilon/2)d_R(u)$ for all $u$, where we use the fact that $\|R\| = o(\varepsilon)$. We have
\al{
\gamma_k(P) = \sum_u e^{-d_P(u)} d_P(u)^{k-1} & \le \sum_u e^{-(1+\varepsilon/2)d_R(u)} \\
& \le e^{-\varepsilon d/2} \gamma_1(R) = o(1).
}
We conclude that $\delta(G_{n, R}(1+\varepsilon)) \ge k$ whp. By the same token, $\delta(G_{n, R}(1-\varepsilon)) < k$ whp.
\end{proof}
\subsection{A matrix lemma}
\begin{proof}[Proof of Lemma~\ref{lem:observation}]
If $\ell = 1$, take $S = I$ and $T = J$. We prove the case $\ell > 1$ by induction. By rescaling, we may assume that $\pi_I(I) = 1$ and $\pi_J(J) = 1$.
For each $j\in J$ let $I_\ell(j)$ be the set of $i\in I$ with $\tau(i, j) = a_\ell$. Let
$$
J' = \left\{j\in J : \pi_I(I_\ell(j)) < \frac{1}{\ell}\right\}.
$$
If $\pi_J(J') < 1-\ell^{-1}$, let $S = \cap_{j\notin J'}I_\ell(j)$ and $T = J\setminus J'$. Then $\tau = a_\ell$ on $S\times T$ and $\pi_I(S) \ge \ell^{-1}$, $\pi_J(T) \ge \ell^{-1}$.
If $\pi_J(J') \ge 1-\ell^{-1}$, let $I' = \cap_{j\in J'} (I\setminus I_\ell(j))$ and consider the matrix
$$
\tau'(i, j) = \tau(i, j), \quad i \in I', j\in J'.
$$
This takes values $\{a_1,\dots,a_{\ell-1}\}$, and by induction there exist $S\subseteq I', T\subseteq J'$ with $\pi_I(S) \ge (\ell-1)^{-1}\pi_I(I')$ and $\pi_J(T) \ge (\ell-1)^{-1}\pi_J(J')$ such that $\tau'$, and therefore $\tau$, is constant on $S\times T$. We have
$$
\pi_I(I') \ge \min_{j\in J'} \pi_I(I\setminus I_\ell(j)) \ge 1-\ell^{-1}, \quad \pi_J(J') \ge 1-\ell^{-1},
$$
so
\al{
\pi_I(S) & \ge (\ell-1)^{-1}\pi_I(I') \ge \ell^{-1}, \\
\pi_J(T) & \ge (\ell-1)^{-1}\pi_J(J') \ge \ell^{-1}.
}
\end{proof}
\subsection{Mixing in simple random walks}
\begin{proof}[Proof of Lemma~\ref{lem:norm}]
This proof is more or less taken from~\cite{LevinPeres17}, with slight modifications. We first note that for $R\in \mathrm{RM}$, the transition matrix $M$ is reversible:
$$
\sigma(u)M(u, v) = \frac{d_R(u)}{d_R(V)} \frac{R(u, v)}{d_R(u)} = \frac{d_R(v)}{d_R(V)} \frac{R(v, u)}{d_R(v)} = \sigma(v)M(v, u).
$$
For vectors $f,g : V\to \mathbb{R}$ we define an inner product
\begin{equation}\label{eq:normdef}
\langle f, g\rangle_\sigma = \sum_v f(v)g(v)\sigma(v),
\end{equation}
and the associated norm $\|f\|_\sigma = \langle f, f\rangle_\sigma^{1/2}$. Then for probability measures $\pi$,
\begin{equation}\label{eq:mudef}
\left\|\frac{\pi(\cdot)}{\sigma(\cdot)} - \mathbf{1}\right\|_\sigma = \sqrt{\left(\sum_v \frac{\pi(v)^2}{\sigma(v)}\right) - 1} = \mu_\sigma(\pi),
\end{equation}
where $\mathbf{1} = (1,1,\dots,1)$.
Let $1 = \lambda_1 \ge \lambda_2\ge\dots\ge\lambda_n\ge -1$ be the eigenvalues of $M$, with a corresponding eigenbasis $\mathbf{1} = f_1,\dots,f_n$, orthonormal with respect to the $\langle\cdot, \cdot\rangle_\sigma$ inner product. Then (see e.g.~\cite{LevinPeres17})
\al{
\frac{\pi M(v)}{\sigma(v)} - 1 & = \sum_{j = 2}^n \sum_u \pi(u)f_j(u)f_j(v)\lambda_j \\
& = \sum_{j=2}^n \lambda_jf_j(v)\sum_u\left[\left(\frac{\pi(u)}{\sigma(u)} - 1\right)f_j(u)\sigma(u) + f_j(u)\sigma(u)\right] \\
& = \sum_{j=2}^n \lambda_jf_j(v)\left[\left\langle \frac{\pi}{\sigma} - \mathbf{1}, f_j\right\rangle_\sigma + \langle f_j, \mathbf{1}\rangle_\sigma\right].
}
For any $j > 1$, orthonormality implies $\langle f_j, \mathbf{1}\rangle_\sigma = 0$. Let $F(j) = \langle \pi/\sigma-\mathbf{1}, f_j\rangle_\sigma$, and note that $\mu_\sigma(\pi)^2 = \sum_{j=1}^n F(j)^2$. Then
\al{
\mu_\sigma(\pi M)^2 & = \sum_v \sigma(v)\left(\sum_{j=2}^n \lambda_j f_j(v)F(j)\right)^2 \\
& = \sum_{j\ge 2} \lambda_j^2F(j)^2 \|f_j\|_\sigma^2 + 2\sum_{k > j \ge 2} \lambda_j\lambda_kF(j)F(k) \langle f_j, f_k\rangle_\sigma
}
Since the $f_j$ are orthonormal in the $\langle\cdot,\cdot\rangle_\sigma$ inner product, we are left with
\al{
\mu_\sigma(\pi M)^2 & = \sum_{j\ge 2} \lambda_j^2F(j)^2 \le \lambda^2\sum_{j=1}^n F(j)^2 = \lambda^2\mu_\sigma(\pi)^2.
}
\end{proof}
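The contraction $\mu_\sigma(\pi M)^2 \le \lambda^2\mu_\sigma(\pi)^2$ can be checked numerically on a small reversible chain (an illustration only; the random symmetric $R$ below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# random symmetric rate matrix R -> reversible transition matrix M
R = rng.random((n, n)); R = (R + R.T) / 2; np.fill_diagonal(R, 0)
d = R.sum(axis=1)
M = R / d[:, None]                 # M(u, v) = R(u, v) / d_R(u)
sigma = d / d.sum()                # stationary distribution

# lambda = second-largest |eigenvalue|, via the symmetrized chain
A = np.diag(np.sqrt(sigma)) @ M @ np.diag(1 / np.sqrt(sigma))
lam = sorted(abs(np.linalg.eigvalsh(A)))[-2]

def mu_sq(pi):                     # mu_sigma(pi)^2 = sum pi^2/sigma - 1
    return (pi**2 / sigma).sum() - 1

pi = np.zeros(n); pi[0] = 1.0      # start at a single vertex
contraction_ok = mu_sq(pi @ M) <= lam**2 * mu_sq(pi) + 1e-12
```

The matrix $A$ is symmetric precisely because $M$ is reversible, which is why its eigenvalues are real and the spectral argument of the proof applies.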
\subsection{The expander mixing lemma}
\begin{proof}[Proof of Lemma~\ref{lem:EML}]
For $A\subseteq V$ let $\mathbf{1}_A : V\to \{0,1\}$ be the indicator for $A$. Let $\pi(u) = \mathbf{1}_{A} / |A|$. One easily checks that
\al{
\frac{1}{|A|}M(A, B) - \sigma(B) = \left\langle\frac{\pi M(\cdot)}{\sigma(\cdot)} - \mathbf{1}_V, \mathbf{1}_B\right\rangle_\sigma.
}
Cauchy-Schwarz' inequality and Lemma~\ref{lem:norm} then give
\al{
\left|\frac{1}{|A|}M(A, B) - \sigma(B)\right|^2 \le \left\|\frac{\pi M}{\sigma} - \mathbf{1}_V\right\|_\sigma^2 \|\mathbf{1}_B\|_\sigma^2 \le \lambda^2\mu_\sigma(\pi)^2 \sigma(B).
}
Since $\mu_\sigma(\pi)^2 \le \sum_{u\in A}\pi(u)^2/\sigma(u) \le bn/|A|$, \eqref{eq:fullEML} follows.
To see how (i) follows, note that $R(u, v) \ge dM(u, v)$ for all $u, v$. Lemma~\ref{lem:sublinear} gives $\sigma(B) = \Omega(1)$ whenever $|B| = \Omega(n)$. Since $\lambda(R) = o(1)$, for $|A|, |B| = \Omega(n)$ we then have
$$
R(A, B) \ge dM(A, B) \ge d\left(|A|\sigma(B) - O\left(\lambda\sqrt{n|A|\sigma(B)}\right)\right) = \Omega(dn).
$$
For (ii), let $c > 0$ be some constant to be chosen, and for each $|A| \le n/2$ define $A'$ as the set of $u\in A$ with $M(u, A) \ge (|A|/n)^c$. If $|A| < (n^c\|M\|)^{-1/(1-c)}$ then any $u\in A$ has
$$
M(u, A) \le \|M\||A| < \bfrac{|A|}{n}^c,
$$
so $A' = \emptyset$. Suppose $(n^c\|M\|)^{-1/(1-c)} \le |A| \le n/2$. Then \eqref{eq:fullEML} and the power law condition \eqref{eq:powerlaw} give, for some constant $0\le \alpha < 1/2$,
\begin{multline}
\bfrac{|A|}{n}^c \le \frac{1}{|A'|} M(A', A) \le \sigma(A) + \lambda\sqrt{\frac{bn\sigma(A)}{|A'|}} \\
\le \left[\bfrac{|A|}{n}^{1-2\alpha-2c} + \lambda\sqrt{\frac{b|A|}{|A'|}} \bfrac{n}{|A|}^{\alpha+2c}\right]\bfrac{|A|}{n}^{2c}.\label{eq:EMLstuff}
\end{multline}
For $2c < 1-2\alpha$, the first term in square brackets is at most 1. Since $\lambda \le (n\|M\|)^{-\alpha-\gamma}$ for some constant $\gamma > 0$, we have for $|A| \ge (n^c\|M\|)^{-1/(1-c)}$ that
\al{
\bfrac{n}{|A|}^{\alpha+2c} & \le \bfrac{n}{n^{-\frac{c}{1-c}}\|M\|^{-\frac{1}{1-c}}}^{\alpha+2c} = (n\|M\|)^{\frac{\alpha+2c}{1-c}}.
}
We have $n\|M\| \ge 1$ since $M$ is a transition matrix. Since $\lambda = o(1)$ and $\lambda \le (n\|M\|)^{-\alpha-\gamma}$ for some constant $\gamma > 0$, we conclude that for $c > 0$ small enough, $\lambda(n/|A|)^{\alpha+2c} = o(1)$. From \eqref{eq:EMLstuff} we then have, for $|A| \le n/2$,
\al{
1 < 2^c \le \bfrac{n}{|A|}^c \le 1 + o\left(\sqrt{\frac{b|A|}{|A'|}}\right).
}
We conclude that $|A'| = o(|A|)$.
\end{proof}
\bibliographystyle{plain}
Microscopic swimmer suspensions constitute an interesting playground for non-equilibrium physics. Energy is continuously taken from the nutrients dissolved in the solution and used to produce directed motion. As an effect of the mutual interaction, swimmers present coherent motion with features similar to turbulence when the suspension is considered as an effective fluid~\cite{Cisneros,Wensink}. By their motion, swimmers also agitate the fluid and it has been observed that this fluid agitation induces enhanced diffusion~\cite{Wu2000,Valeriani,Mino2010,Lin2011,Zaid2011} and generates directed motion~\cite{Sokolov,DiLeonardo}. From a mechanical point of view, swimmers are autonomous objects and, therefore, the total force acting on them vanishes. Consequently, the net force exerted on the fluid vanishes as well and, at first order, swimmers can be modeled as force dipole tensors. Depending on whether the dipole is tensile or contractile, the swimmers are classified as pushers or pullers, respectively~\cite{HernandezOrtiz2005, Saintillan2007, Baskaran2009}. In the first category we find bacteria like \emph{Escherichia coli}, while algae like \emph{Chlamydomonas reinhardtii} belong to the second category.
Swimmer suspensions present high fluctuations in particle density and also in the orientation field when they align in domains. It has been argued that giant density fluctuations develop as a consequence of the coupling with the orientation field in presence of self-propulsion \cite{Giant1,Giant2,Giant3}. Also, the orientation field shows long wavelength fluctuations in the form of Goldstone modes that are extremely soft \cite{Goldstone1,Goldstone2}.
Thanks to the fluctuations in orientation, swimmer suspensions---even in the ordered phase---do not show long-range order. At large scales they look homogeneous and isotropic.
It is interesting to question whether these large fluctuations can generate macroscopic phenomena. When the fluctuating fields are limited to some modes due to the presence of boundary conditions---for example, due to immersed bodies---the Casimir effect can appear.
The presence of this effect can have important consequences for the motion and self-assembly of immersed objects.
Normally, when two large bodies are immersed in the fluctuating medium, there is a pressure difference between the region bounded by the bodies and the exterior region, giving rise to a force. This pressure difference emerges as a result of the renormalization of the pressure by the fluctuations \cite{Soto2007}.
Microswimmers are governed by hydrodynamics at low Reynolds numbers, where the generated stresses are linear in the force dipole intensity. As a result, when averaging over the different swimmer orientations it is expected that no renormalization of the pressure or fluid flow is possible, leading at first sight to a vanishing Casimir effect. In this article we will investigate the emergence of the Casimir effect and show that, thanks to the large density fluctuations, Casimir effects can develop. Indeed, the coarse grained equations that describe the dynamics of the suspension have noise terms that are proportional to the square root of the density, implying that the stochastic equations are non-linear \cite{Dean,Chavanis}. A second, minor concern is that at low Reynolds number, the total force over any body immersed in the fluid adds up to zero \cite{Kim,Happel}. However, in this Stokes regime it is not the force but the drag on the immersed bodies that is the relevant quantity, and it will turn out to be finite due to the fluctuations. Recently, it has been proposed that a Casimir-like effect can originate in the momentum transfer at swimmer-wall collisions (steric interactions) \cite{Reichhardt}, which can be a complementary mechanism to the one proposed here.
The article is organized as follows. Section \ref{sec.flucthydro} presents the fluctuating description of the swimmer suspension and how the average drag is obtained in terms of coarse grained fields. In Section \ref{sec.casimir} it is shown that the noise terms, which are non-linear, imply that the primary fluctuating fields are non-Gaussian. By making a change of variables we generate a framework that allows us to compute the Casimir drag in terms of correlation functions. Section \ref{sec.mech2} presents a complementary mechanism that also generates a Casimir effect that is due to non-linear couplings that emerge near the ordering transition. The drag generated by this mechanism is obtained by performing similar computations to the principal case under study. To analyze both cases, a model for the relevant correlations that are needed for the calculation is introduced in Section \ref{sec.mediumrange}, giving rise to explicit expressions for the drag. Finally, conclusions and perspectives are presented in Section \ref{sec.conclusions}.
\section{Fluctuating description of an active suspension} \label{sec.flucthydro}
We consider a suspension of swimmers in a fluid in three dimensions, that we assume to be homogeneous and isotropic on the large scale. Each swimmer is described by its director $\hat{\mathbf{n}}$, which points along its direction of motion. Axisymmetric swimmers are characterized by a force dipole tensor acting on the fluid
\begin{align}
S_{jk}=\sigma_0 n_jn_k,
\end{align}
where $\sigma_0$ is the dipole intensity, negative for pushers and positive for pullers.
The effect of the force dipole is to move the fluid around the swimmer, with a velocity field that is obtained by solving the Stokes equations, valid at low Reynolds number. Also, each swimmer generates a stress field (pressure plus viscous stresses), but as indicated in the introduction, the total force on any immersed object vanishes and therefore the stresses cannot produce a Casimir effect. We concentrate then on the velocity field that, for a swimmer located at $\mathbf{r}_0$, is given by
\begin{align}
u_i(\mathbf{r})=J_{ij,k}(\mathbf{r}-\mathbf{r}_0)S_{jk} ,\label{eqn:Umicro}
\end{align}
where $J_{ij,k}$ is the gradient of the Oseen tensor along the direction $k$
\begin{align}
J_{ij,k}(\mathbf{r}) &= \frac{1}{8\pi\eta r^3}\left(\delta_{ik}r_j + \delta_{jk}r_i - \delta_{ij}r_k - 3\frac{r_ir_jr_k}{r^2} \right)
\end{align}
and summation over repeated indices is assumed~\cite{Kim,Happel}.
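As a quick numerical sanity check (ours, not part of the original derivation), the identity $J_{ij,k}\delta_{jk}=0$, used later to discard the isotropic part of the dipolar density, can be verified directly from the expression above:

```python
import numpy as np

def grad_oseen(r, eta=1.0):
    """Gradient of the Oseen tensor J_{ij,k}(r) as written above."""
    rn = np.linalg.norm(r)
    d = np.eye(3)
    J = np.zeros((3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                J[i, j, k] = (d[i, k]*r[j] + d[j, k]*r[i] - d[i, j]*r[k]
                              - 3*r[i]*r[j]*r[k]/rn**2) / (8*np.pi*eta*rn**3)
    return J

r = np.random.default_rng(0).normal(size=3)  # arbitrary test point
# An isotropic dipole s_{jk} proportional to delta_{jk} drives no flow.
print(np.allclose(np.einsum('ijk,jk->i', grad_oseen(r), np.eye(3)), 0.0))  # True
```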
When several swimmers are placed in the fluid, by the linearity of the Stokes equations, the resulting flow field is the sum of the effects produced by each swimmer. In a suspension of $N$ swimmers placed in a volume $V$, the dipolar density is defined as
\begin{align}
s_{jk}(\mathbf{r})&=\sum\limits_{\alpha=1}^N\delta\left(\mathbf{r}-\mathbf{r}^{\alpha}\right)S_{jk}^{\alpha},\label{eqn:Stress}
\end{align}
where $S_{jk}^{\alpha}=\sigma_0 n_j^{\alpha}n_k^{\alpha}$ is the dipolar tensor of the $\alpha$-th swimmer located at $\mathbf{r}^{\alpha}$ and we have assumed that all swimmers have the same dipolar intensity $\sigma_0$. Note that, since $\hat{\mathbf{n}}$ is a unit vector, $\Tr s_{jk}(\mathbf{r}) = \sigma_0 \rho(\mathbf{r})$, where $\rho$ is the local number density of swimmers. In terms of the dipolar density, the velocity field is
\begin{align}
u_i(\mathbf{r})&=\int \limits_V \! d^3 r'\, J_{ij,k}(\mathbf{r}-\mathbf{r}')s_{jk}(\mathbf{r}'). \label{eqn:Vel}
\end{align}
We are interested in the ensemble average $\langle u_i(\mathbf{r}) \rangle$, which depends on $\left\langle s_{jk} \right\rangle$. To describe its statistical properties, we consider a coarse grained description for the dominant fields, which are the swimmer density $\rho$, the polar density field ${\bf p}$, related to the average director $\boldsymbol{\tau}={\bf p}/\rho$, and the already described dipolar density tensor field $\boldsymbol{s}$. The coarse grained descriptions adopt the form of fluctuating hydrodynamic equations that in general are coupled equations of the form~\cite{Dean,Chavanis}
\begin{align}
\partial_t \rho &= g_1[\rho, \boldsymbol{p}, \boldsymbol{s}] +\sqrt{\rho} \eta, \label{eqn:GeneralEQ1}\\
\partial_t p_{i} &= g_2[\rho, \boldsymbol{p}, \boldsymbol{s}] + \sqrt{\rho}\xi_{i}, \label{eqn:GeneralEQ2}\\
\partial_t s_{jk} &= g_3[\rho, \boldsymbol{p}, \boldsymbol{s}] + \sqrt{\rho}\zeta_{jk}, \label{eqn:GeneralEQ3}
\end{align}
where $g_n$ are functionals of the fields, which depend on the symmetries, conservation
laws, interactions and models for activity~\cite{Goldstone1,Igor07,Marchetti08,Bertin09,Marchetti12,Igor12}. The noise terms are modeled---as is usually done---as white noise which, under isotropic conditions, is characterized by the following statistical properties
\begin{align*}
&\left\langle \eta(\mathbf{r},t)\right\rangle =
\left\langle \xi_i(\mathbf{r},t)\right\rangle =
\left\langle \zeta_{ij}(\mathbf{r},t)\right\rangle = 0\\
&\left\langle\eta (\mathbf{r},t) \xi_i(\mathbf{r}',t')\right\rangle =
\left\langle\eta(\mathbf{r},t) \zeta_{jk}(\mathbf{r}',t')\right\rangle =
\left\langle\xi_i(\mathbf{r},t) \zeta_{jk}(\mathbf{r}',t')\right\rangle =0\\
&\left\langle \eta(\mathbf{r},t)\eta(\mathbf{r}',t')\right\rangle = \Gamma_1 \delta(\mathbf{r}-\mathbf{r}')\delta(t-t')\\
&\left\langle \xi_{i}(\mathbf{r},t)\xi_{j}(\mathbf{r}',t') \right\rangle = \Gamma_2 \delta_{ij} \delta(\mathbf{r}-\mathbf{r}')\delta(t-t')\\
&\left\langle \zeta_{ij}(\mathbf{r},t)\zeta_{kl}(\mathbf{r}',t') \right\rangle = \left[\Gamma_3 \delta_{ij}\delta_{kl}
+\Gamma_4 (\delta_{ik}\delta_{jl}
+ \delta_{il}\delta_{jk}) \right]\\
& \phantom{\left\langle \zeta_{ij}(\mathbf{r},t)\zeta_{kl}(\mathbf{r}',t') \right\rangle = }
\times \delta(\mathbf{r}-\mathbf{r}')\delta(t-t').
\end{align*}
We remark that the noise intensities depend on density because the system is particulate. Indeed, the fluctuations originate from the displacements and interactions of the swimmers, events that in the limit of large numbers are described by Poissonian statistics, leading to deviations that are proportional to the square root of the number of individuals \cite{Dean,Chavanis}.
\section{Casimir effect} \label{sec.casimir}
Models with explicit expressions for Eqs. (\ref{eqn:GeneralEQ1}-\ref{eqn:GeneralEQ3}) have been described in several cases~\cite{Igor07,Marchetti08,Bertin09,Marchetti12,Igor12}. Here, without going into specific details of these models, we will show that they generally present Casimir effects.
As the equations for the fields are coupled and the noise terms enter multiplicatively, in general the fluctuations of the dipole field $s_{jk}$ around its equilibrium value will not be linear in the noise. Therefore, its stationary probability distribution function will not be Gaussian and its average will not vanish, giving rise to Casimir effects.
Note that in particulate systems the noise terms are always multiplicative. However, in many cases the systems are approximately incompressible and this non-linearity is irrelevant. Swimmer suspensions, on the contrary, present large fluctuations and this dependence cannot be neglected.
The coupling with the noise terms is made linear---additive noise---by defining new fields $\phi$, $\boldsymbol{\psi}$, and $\boldsymbol{\chi}$ such that $\rho= \rho_0 + \phi$, $p_i=\sqrt{\rho}\psi_i$, and $s_{jk}=\sqrt{\rho}\left[ \frac{\sigma_0}{3}\sqrt{\rho_0}\delta_{jk} + \chi_{jk}\right]$. Replacing these expressions in Eqs. (\ref{eqn:GeneralEQ1}-\ref{eqn:GeneralEQ3}) and linearizing the equations in the new fields we obtain
\begin{align}
\partial_t \phi &= \tilde{g}_1[\phi, \boldsymbol{\psi}, \boldsymbol{\chi}]+\eta ,\label{eq.linear1}\\
\partial_t \psi_i &= \tilde{g}_2[\phi, \boldsymbol{\psi}, \boldsymbol{\chi}]+\xi_i ,\label{eq.linear2}\\
\partial_t \chi_{jk} &= \tilde{g}_3[\phi, \boldsymbol{\psi}, \boldsymbol{\chi}] + \zeta_{jk} ,\label{eq.linear3}
\end{align}
where now the noise terms enter additively and the functionals $\tilde{g}_n$ are linear. Consequently, now all the fluctuating fields have Gaussian statistics with zero mean.
In terms of the new fields, the average dipolar density is
\begin{align}
\langle s_{jk}(\mathbf{r}) \rangle = \frac{\rho_0\sigma_0}{3} \delta_{jk} + \frac{1}{2\sqrt{\rho_0}} \langle \phi(\mathbf{r}) \chi_{jk}(\mathbf{r}) \rangle,
\end{align}
which is now quadratic in the linearly fluctuating fields; hence, its average will be generally different from zero.
The isotropic part of the stress does not contribute to the velocity field, a property that is expressed in Stokesian flows by the relation $J_{ij,k}\delta_{jk}=0$. Therefore, we are left to compute the cross correlation $\langle \phi \chi_{jk} \rangle$.
The Casimir effect emerges in non-equilibrium systems because the value of this cross correlation depends on the geometry. In particular, it is modified by the presence of immersed bodies that introduce boundary conditions on the fluctuating fields. Here, the potential Casimir effect would consist in a drag on the immersed bodies and therefore they should not be considered as fixed objects. However, if the drag velocity is small, we can consider that the swimmers see the intruders as impenetrable bodies. Consequently, they impose a non-flux boundary condition for the swimmer density that translates into a non-flux boundary condition for $\phi$. We do not have a natural boundary condition for the dipolar density $\boldsymbol{s}$ and the associated field $\boldsymbol{\chi}$, which should be obtained from kinetic models that include swimmer-object interactions (for example \cite{Marconi}). Lacking such models, we consider for simplicity that there are also non-flux boundary conditions for $\boldsymbol{\chi}$, but other boundary conditions can be studied in an analogous way, leading to similar results, although the sign of the effect may be reversed, as it happens in the critical Casimir effect \cite{CriticalCasimir}.
To perform the calculation we consider a geometry and a protocol similar to the one used in Ref. \cite{Cattuto} (Fig. \ref{fig.geom}). That is, two equal bodies are immersed in the fluid. If the separation between the bodies is small compared to their size, the volume in between can be modeled to have non-flux boundary conditions on the bodies' surfaces and periodic boundary conditions in the other directions. The activity in the region inside will generate a drag on the objects that should be subtracted from a similar drag on the other side of the objects. To make an illustrative calculation of the Casimir drag and, especially, to show that it gives non-vanishing results, we will simplify the geometry to that of a parallelepiped.
We proceed in a similar way as in Refs. \cite{Soto2007, SotoGranular}, considering a volume $V=L_x \times L_y \times L_z$ with non-flux boundary conditions for $\phi$ and $\boldsymbol{\chi}$ at $x=0$ and $x=L_x$, while the fields are periodic in $y$ and $z$. Using these boundary conditions the fluctuating density field is expanded as
\begin{align}
\phi(\mathbf{r},t) &= V^{-1}\sum\limits_{k_x} \sum\limits_{k_y}\sum\limits_{k_z}\phi(\mathbf{k},t) \cos (k_xx)e^{ik_yy}e^{ik_zz},
\end{align}
where $k_x=\pi n_x/L_x$, $k_y=2\pi n_y/L_y$, $k_z=2\pi n_z/L_z$, $n_x=0,1,2,\ldots$; $n_y,n_z=\ldots,-1,0,1,\ldots$. Analogous expressions are used for $\chi_{ij}$.
\begin{figure}[htb]
\includegraphics[width=.8\columnwidth]{geom.eps}
\caption{Geometry used for the calculation of the Casimir drag. The bodies confine an active suspension in a region of size $L_x \times L_y \times L_z$. There are non-flux boundary conditions at $x=0$ and $x=L_x$, while the fields are periodic in $y$ and $z$. In Eq. \reff{eqn:MeanV} the cross sections of the bodies are modeled as circles of radius $R$. The total drag on the bodies results from the subtraction of the drag produced by the region inside (separation $L_x$) and outside (separation $L_x'$) the bodies. To obtain the final expression \reff{Eq:finaldrag} the limit $L_x'\to\infty$ is considered.}
\label{fig.geom}
\end{figure}
The density-dipole correlation in Fourier space results in
\begin{align}
\left\langle \phi (\mathbf{k})\chi_{jk}(\mathbf{q})\right\rangle &= \gamma_{k_x} V {G}_{jk}(\mathbf{k}) \hat{\delta}_{\mathbf{k},\mathbf{q}},
\end{align}
where ${G}_{jk}(\mathbf{k})$ is the density-dipole structure factor in the bulk that can be obtained by solving the coarse grained equations \reff{eq.linear1}-\reff{eq.linear3} with full periodic boundary conditions. The prefactor $\gamma_{k_x}$ ($\gamma_{k_x} =1/2$ if $k_x=0$ and $\gamma_{k_x}=1$ if $k_x\neq 0$) and the modified Kronecker delta
$\hat{\delta}_{\mathbf{k},\mathbf{q}} = \delta_{k_x,q_x}\delta_{k_y,-q_y}\delta_{k_z,-q_z}$ appear from the use of the non-flux boundary conditions~\cite{Soto2007}.
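A plausible way to see the origin of the factor $\gamma_{k_x}$ is through the norm of the cosine modes: with non-flux boundary conditions the $k_x=0$ mode has twice the norm of the others. The following short check (ours, purely illustrative) verifies the mode norms numerically:

```python
import numpy as np

# Norms of the cosine modes used in the expansion of phi, with kx = pi*n/Lx.
Lx = 1.0
x = np.linspace(0.0, Lx, 100001)
dx = x[1] - x[0]

def norm2(n):
    f = np.cos(np.pi*n*x/Lx)**2
    return (f[:-1] + f[1:]).sum()*dx/2  # composite trapezoid rule

print(np.isclose(norm2(0), Lx))    # True: the kx = 0 mode has norm Lx
print(np.isclose(norm2(2), Lx/2))  # True: any kx != 0 mode has norm Lx/2
```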
Going back to real space, we compute
\begin{align}
C_{jk}(\mathbf{r}) &= \left\langle \phi(\mathbf{r}) \chi_{jk}(\mathbf{r})\right\rangle \nonumber\\
&= V^{-1} \sum\limits_{\mathbf{k}} \! ^\prime {G}_{jk}(\mathbf{k})\cos^2 (k_xx),
\end{align}
where the prime in the sum indicates that the term $k_x=0$ is multiplied by $1/2$; the correlation depends only on $x$ due to the periodic boundary conditions in $y$ and $z$. It is possible now to compute the velocity field
\begin{align*}
\left\langle u_{i}(x)\right\rangle =& \frac{1}{2\rho_0^{1/2}} \int d^3r' J_{ij,k}(x-x',y',z') C_{jk}(x')\\
=& \frac{1}{2\rho_0^{1/2}V} \sum\limits_{\mathbf{k}} \! ^\prime \int d^3r' \cos^2 (k_xx') \nonumber \\
& \times J_{ij,k}(x-x',y',z'){G}_{jk}(\mathbf{k}).
\end{align*}
For an isotropic system, the bulk structure factor can be generally expressed as
\begin{align}
{G}_{jk}(\mathbf{k}) &= A(k)\delta_{jk} + B(k)\frac{k_jk_k}{k^2} \label{tensorAB}
\end{align}
in terms of two scalar functions of the wavenumber. Again the isotropic part does not contribute to the velocity field and we have
\begin{align}
J_{ij,k}(\mathbf{r}){G}_{jk}(\mathbf{k}) &= \frac{1}{8\pi\eta} B(k)\frac{r_i}{r^3}\left[1 - 3\frac{(\mathbf{r}\cdot \mathbf{k})^2}{r^2k^2} \right].
\end{align}
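The contraction above is straightforward index algebra; as a cross-check (ours, with arbitrary test vectors), one can confirm numerically that the $A(k)$ part drops out and the $B(k)$ part reproduces the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(size=3)   # arbitrary position
k = rng.normal(size=3)   # arbitrary wavevector
A, B, eta = 0.7, 1.3, 1.0  # arbitrary test values of A(k), B(k), viscosity
rn, kn = np.linalg.norm(r), np.linalg.norm(k)
d = np.eye(3)

# Gradient of the Oseen tensor J_{ij,l}(r)
J = np.zeros((3, 3, 3))
for i in range(3):
    for j in range(3):
        for l in range(3):
            J[i, j, l] = (d[i, l]*r[j] + d[j, l]*r[i] - d[i, j]*r[l]
                          - 3*r[i]*r[j]*r[l]/rn**2) / (8*np.pi*eta*rn**3)

G = A*d + B*np.outer(k, k)/kn**2
lhs = np.einsum('ijl,jl->i', J, G)
rhs = B/(8*np.pi*eta) * r/rn**3 * (1 - 3*(r @ k)**2/(rn**2*kn**2))
print(np.allclose(lhs, rhs))  # True
```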
The swimmer suspension produces a net velocity field as an effect of the confinement and the non-linear fluctuations of the dipolar field. This velocity field generates a Casimir effect: the velocity at the intruders' surfaces does not vanish, i.e., they are dragged by the fluid.
The relevant drag takes place in the $x$ direction, and has the form
\begin{align}
\left\langle u_{x}(x)\right\rangle =& \frac{1}{16 \pi\eta \rho_0^{1/2}V} \sum\limits_{\mathbf{k}} \! ^\prime \int d^3r' \cos^2(k_x x') B(k) \nonumber \\
& \times \frac{l_x}{l^3} \left[1 - 3 \frac{\left({\bf l}\cdot{\bf k}\right)^2}{ l^2k^2}\right], \label{uxB}
\end{align}
where ${\bf l}=(x-x',y',z')$ and we recall that the integration in ${\bf r}'$ is over the position of the dipole sources while $x$ is the position where the field is evaluated. To compute the drag on the wall, the velocity field must be evaluated at its location ($x=0$) to be further averaged over the wall surface. Here we note a peculiarity of the Stokes flows: as a consequence of the incompressibility condition, when a point source is placed in a fluid, the integrated velocity across an infinite surface vanishes identically \cite{Happel,Kim}. By linearity, a distribution of dipolar sources produces the same result. Then, if the velocity field \reff{uxB} were averaged over an infinite surface it would also vanish. However, the immersed bodies are finite. In order to achieve simple results, we consider an immersed body of circular cross section of radius $R$. The average velocity reads
\begin{align}
\left\llangle u_{x}(0)\right\rrangle &= -\frac{1}{16 \pi\eta \rho_0^{1/2}L_x} \sum\limits_{\mathbf{k}} \! ^\prime \int d x' \cos^2(k_x x') \nonumber \\
& \qquad \times B(k) \frac{x' \left(k^2-3 k_x^2\right)}{k^2\left(R^2+x'^2\right)^{3/2}},\label{eqn:MeanV}
\end{align}
where $\left\llangle \ldots\right\rrangle$ means an ensemble average over the noise and an average over the body surface.
\section{Complementary mechanism} \label{sec.mech2}
There is a second mechanism that can also generate a non-vanishing average of $\boldsymbol{s}$ and therefore induce a Casimir effect.
Pusher suspensions (e.g. bacterial baths) and rod-like swimmers with steric interactions can present a polar transition where the average director field $\boldsymbol{\tau}$ is finite through a spontaneous symmetry breaking \cite{Goldstone1,Goldstone2}. Close to the transition, but still in the isotropic phase, the density and director fields become slow variables and the other fields are enslaved to them. Specifically, the dipole tensor field is found to be $s_{jk} = \sigma_0 \left[(1-\lambda \tau_l \tau_l)\rho\delta_{jk}/3 + \lambda\rho\tau_j\tau_k \right]$ with a positive dimensionless constant $\lambda$ \cite{aranson2006theory}. Therefore, apart from the isotropic term, it is quadratic in the fluctuating field.
Deep in the polar phase the average director field $\boldsymbol{\tau}$ has finite norm. But the system is not in a global symmetry broken state because the orientation has only a finite correlation length due to Goldstone mode fluctuations. Therefore, at large length scales the system is globally isotropic and the analysis we have performed can be applied here. In the perfectly locally ordered case ($|\boldsymbol{\tau}|\approx 1$), the dipole tensor is $s_{jk} = \rho_0\sigma_0 \tau_j \tau_k$, being also quadratic in the fluctuating field.
In an isotropic medium the average $\langle \tau_i\tau_j\rangle$ can also be written as in \reff{tensorAB}. The derivation then follows in exactly the same way as in the previous section, with an induced Casimir drag given by expression \reff{eqn:MeanV}, albeit with a different prefactor.
Since the expressions are identical, we continue the discussion using the notation of the mechanism described in the previous section.
\section{Model with medium range order} \label{sec.mediumrange}
To proceed with the calculation of the Casimir effect, we need to provide a model for the density-dipole structure factor $B(k)$. To our knowledge, this function has not been measured in swimmer suspensions. Here we consider a model with medium range order that can describe situations where the suspension displays structures with a finite correlation length.
The tensorial structure factor ${G}_{ij}(\mathbf{k})$ needs to meet two conditions: it must vanish for $k\to \infty$ and it must be single-valued at the origin. From this second requirement, it follows that $B(0)=0$, and we can write $B(k)=k^2 \tilde{B}(k)$. The simplest assumption we can make is that $\tilde{B}$ is characterized by a single correlation length $k_0^{-1}$ that, eventually, can diverge at a critical point as it could happen in a swarming phase or in other phases with collective order~\cite{Goldstone1}. Following our approach in \cite{Parra2013} we take, therefore, ${B}$ as a simple rational function with a correlation length $k_0^{-1}$
\begin{align}
B(k) &= \sigma_0 \Gamma \frac{k^2}{\left({k_0}^2+k^2\right)^2}. \label{Bmedium}
\end{align}
The prefactor $\Gamma$ is a measure of the correlation intensity, which will be a function of the noise intensities $\Gamma_{1,2,3,4}$, and we have factored out the dependence on the dipole strength. The sign of $\Gamma$ depends on the particular model that describes the swimmer suspension. A stochastic extension of the kinetic model presented in Ref. \cite{aranson2006theory} predicts a positive value \cite{ParraToBePublished}. In the case of the second mechanism where the dipolar density is enslaved to the director field, the sign of $\Gamma$ is given by the sign of $[3\langle(\hat{\bf k}\cdot\boldsymbol{\tau}({\bf k}))^2\rangle-\langle |\boldsymbol{\tau}({\bf k})|^2\rangle]$, which is positive if the director field develops longitudinal structures and negative if vortex-like structures are formed.
More accurate models, obtained from experiments, discrete element simulations~\cite{Wensink} or continuous models~\cite{rheology}, will only modify the picture below quantitatively if they are characterized by a single correlation length. More complex models, with different scaling at large distances, should be worked out separately.
Once Eq.~\reff{Bmedium} is substituted into \reff{eqn:MeanV}, the sums over the transverse wavevectors $k_y$ and $k_z$ can be replaced by integrals when $R$ is large compared with $L_x$. To integrate, we introduce a cutoff for large wavevectors $2\pi/a$ in order to take into account that the continuous model is valid up to the microscopic length $a$, resulting in the expression
\begin{widetext}
\begin{align}
\left\llangle u_{x}(0)\right\rrangle &= - \frac{\tilde{R}^2\sigma_0\Gamma}{32\pi^2\eta \tilde{L}_x\rho_0^{1/2}} \sum\limits_{\tilde{k}_x} \! ^\prime \int d \tilde{x}' \frac{\cos^2 (\tilde{k}_x\tilde{x}')\tilde{x}'}{\left(\tilde{R}^2+\tilde{x}'^2\right)^{3/2}}\left[\tanh ^{-1}\left(\frac{2\pi^2}{2\pi^2+\tilde{a}^2\left(1+\tilde{k}_x^2\right)}\right)-\frac{2\pi^2\left(1+3\tilde{k}_x^2\right)}{\left(1+\tilde{k}_x^2\right)\left(4\pi^2+\tilde{a}^2\left(1+\tilde{k}_x^2\right)\right)}\right] ,\label{Eq:uxmodel}
\end{align}
\end{widetext}
where $\tilde{L}_x=k_0L_x$, $\tilde{R}=k_0R$, $\tilde{x}'=k_0x'$, $\tilde{k}_x=k_x/k_0$ and $\tilde{a}=k_0 a$.
The most relevant case for the Casimir effect is when there is a finite correlation length, much larger than the microscopic cutoff, in which case $\tilde{a}\ll 1$. If in Eq. \reff{Eq:uxmodel} we use $\cos^2(\tilde{k}_x \tilde{x}') = 1/2 + \cos(2\tilde{k}_x \tilde{x}')/2$ the constant contribution goes as $1/\tilde{a}$ for small $\tilde{a}$, while the oscillatory one goes as $\log \tilde{a}$. Therefore, in the relevant regime, $\tilde{a}\ll 1$, we can consider only the constant contribution that is the dominant one. The sums can be done numerically and the results, computed for the cases $\tilde{R}\ll 1$, $\tilde{R}\sim 1$ and $\tilde{R}\gg 1$, are all well fitted by the expression
\begin{align}
\left\llangle u_{x}(0)\right\rrangle &= - \frac{\tilde{R}^2\sigma_0\Gamma}{32\pi^2\eta \rho_0^{1/2}} \frac{c_0 \tilde{L}_x^2}{\tilde{a}\tilde{R}(\tilde{L}_x^2 +c_1 \tilde{R}^2)},
\end{align}
where $c_0=0.29$ and $c_1=1.62$. For large distances compared to the intruders' size ($\tilde{L}_x\gg \tilde{R}$) the expression saturates to a constant value, while for small distances it grows like $\tilde{L}_x^2$.
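The two limits quoted above follow directly from the fitted expression; a minimal numerical sketch (ours, using only the dimensionless shape factor with the fitted constants $c_0$ and $c_1$) illustrates them:

```python
# Dimensionless shape of the fitted drag: c0 * Lx^2 / (Lx^2 + c1 * R^2)
c0, c1 = 0.29, 1.62

def shape(Lx, R):
    return c0 * Lx**2 / (Lx**2 + c1 * R**2)

R = 1.0
print(abs(shape(100.0, R) - c0) < 1e-3)         # True: saturates to c0 for Lx >> R
print(3.8 < shape(0.2, R)/shape(0.1, R) < 4.0)  # True: ~Lx^2 growth for Lx << R
```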
As usual when considering Casimir effects, the total drag on a surface is obtained by subtracting the drag generated in the region at one side of the intruder from the drag generated on the other side (see Fig. \ref{fig.geom}). A simple case corresponds to considering that the region to the left of the body is large ($\tilde{L}_x\gg \tilde{R}$), such that the asymptotic expression can be used, resulting in
\begin{align}
u^{\rm total}_{x} = \frac{\sigma_0\Gamma}{32\pi^2\eta \rho_0^{1/2}} \frac{c_0 R}{ \tilde{a}}
\left[1- \frac{L_x^2}{L_x^2 +c_1 R^2} \right] \label{Eq:finaldrag}
\end{align}
that, remarkably, does not depend on the correlation length.
The range of the Casimir drag scales with the size of the immersed body. This property is a consequence of the long-range nature of the hydrodynamic interactions. A similar result was obtained for the fluid velocity-velocity correlation function in a swimmer suspension, even if the swimmer correlations were short range \cite{Parra2013}. If $\Gamma\sigma_0>0$ the Casimir drag is positive, meaning that immersed objects are attracted, while in the opposite case the intruders are repelled. The stochastic extension of the kinetic model for swimmers predicts $\Gamma>0$ \cite{aranson2006theory,ParraToBePublished}; therefore, a suspension of pushers ($\sigma_0<0$) would lead to a repulsive drag.
Note that the drag depends on the cutoff length $a$. Normally, in the Casimir effect in quantum electrodynamics or critical fluids this is not the case. In non-equilibrium systems the results depend on the specific system under study, with cases that depend on the cutoff \cite{Cattuto} while others are cutoff-independent \cite{Soto2007}. Nevertheless, this is not a serious issue because here there is a natural cutoff given by the swimmer size, and there is no \emph{a priori} reason to expect an ultraviolet regularization.
\section{Conclusions} \label{sec.conclusions}
We have shown that a Casimir effect is present in low Reynolds number swimmer suspensions. It consists in an average drag on immersed objects, which results from the fluctuating dipolar density field. Although the deterministic dynamics at low Reynolds number is linear, the stochastic dynamics that governs fluctuations is non-linear because the noise intensities are proportional to the square root of the density, which is also a fluctuating field. Changing variables to new fields whose linear fluctuations are Gaussian, the drag on an immersed body turns out to be a quadratic function of the new fields. The average drag receives contributions from the different modes of the fluctuating fields, which results in a Casimir effect when the allowed modes are different on both sides of the immersed objects.
The intensity of the Casimir drag depends on the correlation function of the rescaled density and dipolar density tensor fields. These correlations have not been measured in experiments or discrete element simulations and we propose a simple model with medium range order for a medium that is isotropic and homogeneous at the large scale. The resulting drag range depends on the body size and separation, but not on the correlation length, which is a result of the long range interactions in Stokes flows.
In order to make more quantitative predictions, measurements of the relevant correlation functions are needed, which could be done for example by confocal microscopy methods as in Ref. \cite{confocal} where it was possible to track simultaneously the position and orientation of microscopic objects. Also, a proper modeling of the dipolar density boundary condition on immersed bodies is needed as other boundary conditions than the one used here (non-flux for the density and dipolar density tensor) could change the sign of the effect, as it has been observed for example in critical Casimir forces \cite{CriticalCasimir}.
In non-equilibrium systems the Casimir effect can lead to new phenomena, as compared to its equilibrium counterparts. Notably, there is the possibility that a single immersed object of asymmetric shape can experience a drag on its own leading to self-propulsion originated in fluctuation-induced phenomena \cite{Buenzli,QEDNonEq}. In principle, there is no reason \emph{a priori} to exclude this possibility, but precise calculations or experiments would need to be performed to confirm this.
\section*{Acknowledgment}
This research is supported by Fondecyt Grant No. 1140778. C.P.-R. acknowledges the support of a Becas Chile CONICYT No. 72140425.
\section{Introduction}
Cosmological inflation is currently considered to be the best paradigm
for describing the early stages of the universe
\cite{Linde:1990-bk,Liddle-Lyth:2000-bk}.
Inflation leads to the existence of a causal mechanism for producing
fluctuations on cosmological scales (when measured today), which at
the time of matter-radiation equality had a physical wavelength larger
than the Hubble radius. Thus, it solves several conceptual problems of
standard cosmology and leads to a predictive theory of the origin of
cosmological fluctuations.
During inflation, the physical wavelength corresponding to a fixed
comoving scale decreases exponentially as time decreases whereas the
Hubble radius is constant. Thus, as long as the period of inflation is
sufficiently long, all scales of cosmological interest today originate
inside the Hubble radius during inflation. Recently, it has been
realized that if inflation lasts slightly longer than the minimal time
(i.e. the time it needs to last in order to solve the horizon problem
and to provide a causal generation mechanism for CMB fluctuations),
then the corresponding physical wavelength of these fluctuations at
the beginning of inflation will be smaller than the Planck
length. This is commonly referred to as the {\it trans-Planckian
problem of inflation}
\cite{Brandenberger-Mart:2000,Martin-Bran:2000,Niemeyer:2000}.
Naturally, a considerable amount of attention has been devoted to
examining the possibility of detecting trans-Planckian imprints on the
CMB
\cite{Tanaka:2000,Niemeyer-Pare:2001,Kempf-Niem:2001,Starobinsky:2001,Easther-Green:2001a,Lemoine-Lubo:2001,Bastero-Gil-Fram:2001,Easther-Green:2001b,Shanki:2002,Danielsson:2002a,Brandenberger-Ho:2002,Hassan-Sloth:2002,Easther-Green:2002,Kaloper-Kleb:2002,Martin-Bran:2003,Bastero-Gil-Free:2003,Shanki-Srir:2004a,Shanki-Srir:2004b,Brandenberger-Mart:2004,Tsujikawa-Maar:2003,Sriram-paddy:2004,Greene-Scha:2004,Calcagni:2004a}.
Broadly, there have been two approaches in the literature in order to
study these effects. In the first approach, the specific nature of
trans-Planckian physics is not presumed, but is rather described by the
boundary conditions imposed on the mode at the cut-off scale. In the
second approach, which is of interest in this work, one incorporates
quantum gravitational effects by introducing the fundamental length
scale into the standard field theory in a particular fashion.
In almost all the analyses performed so far in the literature, the
trans-Planckian effects are introduced into the scalar and tensor
perturbation equations in an {\it ad hoc manner}. In the standard
inflation, the scalar and tensor perturbation equations are derived,
from the first principles, in a gauge-invariant manner
\cite{Kodama-Sasa:1984,Mukhanov-Feld:1992}. However, in the
trans-Planckian inflationary scenario, to our knowledge, such a
calculation has never been performed and the scalar/tensor
power spectrum, with the trans-Planckian corrections, were
obtained from the presumed perturbation equations.
With the possibility of detecting the trans-Planckian signatures in
the current and future CMB experiments\footnote{For the current status
and developments of WMAP and PLANCK, see the following URLs:
http://map.gsfc.nasa.gov/,
http://astro.estec.esa.nl/SA-general/Projects/Planck}, it becomes {\it
imperative} to obtain the scalar and tensor perturbation equations from
the first principles. (For recent work on the trans-Planckian
constraints from the CMB, see
Refs. \cite{Martin-Ring:2003a,Martin-Ring:2004a,Martin-Ring:2004b,Easther-Kinn:2004,Easther-Kinn:2005}.) In this work, we
perform the gauge-invariant cosmological perturbation theory for the
single-scalar field inflation with the trans-Planckian
corrections. The model we shall consider in this work is a
self-interacting scalar field in (3 + 1)-dimensional space-time
satisfying a linear wave equation with higher spatial derivative
terms. The dispersion relation [$\omega = \omega(k)$] thus differs at
high wave-vectors from that of the ordinary wave equation. Such a
model breaks the local Lorentz invariance explicitly while preserving
the rotational and translational invariance. The particular
dispersion relation we shall study in detail is
\begin{equation}
\omega^2(k)=|\vec{k}|^2 + b_{11} |\vec{k}|^4,
\label{eq:dr}
\end{equation}
where $b_{11}$ is a dimensionful parameter. $b_{11} < 0 $ implies
subluminal group velocity, while $b_{11} >0$ implies superluminal
group velocity. The above dispersion relation is a subset of a general
class of the form $\omega^2=|\vec{k}|^2[1+g(|\vec{k}|/k_0)]$, where $g$ is
a function which vanishes as $k_0 \to \infty$, and $k_0$ is a constant
which sets the scale for the deviation from Lorentz invariance. It
has been suggested that these general modified dispersion relation
might arise in loop quantum gravity
\cite{Gambini-Pull:1998,Alfaro-Mora:1999}, or more generally from an
unspecified modification of the short distance structure of space-time
(see for example Refs. \cite{Amelino-Camelia:1999,Jacobson:1999}).
Possible observational consequences have also been studied. For an
up-to-date review, see Refs. \cite{Mattingly:2005,Jacobson-Libe:2005}.
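In units where $c=1$, the sub/superluminal statement can be checked from the group velocity $v_g = d\omega/dk$ of Eq. (\ref{eq:dr}); the following sketch (ours, purely illustrative) evaluates it for both signs of $b_{11}$:

```python
import math

def v_group(k, b11):
    # omega(k) = sqrt(k^2 + b11*k^4);  v_g = d(omega)/dk = (k + 2*b11*k^3)/omega
    w = math.sqrt(k**2 + b11*k**4)
    return (k + 2*b11*k**3) / w

k = 0.1  # wavenumber well below the scale |b11|**-0.5 set by the correction
print(v_group(k, b11=+1.0) > 1.0)  # True: superluminal group velocity
print(v_group(k, b11=-1.0) < 1.0)  # True: subluminal group velocity
```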
Even though the above dispersion relation breaks the local Lorentz
invariance explicitly, it has been shown that it is possible to write
a Lagrangian for the above field in a generally covariant manner,
consistent with spatial translation and rotation invariance, by
introducing a unit time-like Killing vector field which defines a
particular direction
\cite{Jacobson-Matt:2000,Lemoine-Lubo:2001,Lim:2004} (see
Sec. (\ref{sec:MDR}) for more details). In this work, we use such a
framework in order to obtain the scalar/tensor perturbation equations
for such a model during inflation.
Using the covariant Lagrangian used in Ref. \cite{Jacobson-Matt:2000},
we obtain the perturbed stress-tensor for the scalar and tensor
perturbations about the FRW background. We show that: (i) The
non-linear effects introduce corrections to the perturbed energy
density while the other components of the stress-tensor remains
unchanged. (ii) The non-linear terms contributing to the stress-tensor
are proportional to $k^2$. Hence in the super-Hubble scales the
trans-Planckian contributions to the perturbed energy density, as
expected, can be ignored. (iii) The spatial higher derivative terms
appear {\it only} in the equation of motion of the perturbed
inflaton field ($\delta\varphi$) and {\it not} in the equation of motion of
the scalar perturbations ($\Phi$). (iv) Unlike the canonical scalar
field inflation, the perturbations, in general, are {\it not} purely
adiabatic. The entropic perturbations generated during the inflation,
however, vanish at the super-Hubble scales. The speed of propagation
of the perturbations is a constant and is less than the speed of
light. (v) The tensor perturbation equation remain unchanged
indicating that the well-know consistency relation between the scalar
and tensor ratio will also be broken in this model.
We obtain the equation of motion of the Mukhanov-Sasaki variable
corresponding to the inflaton field. We show that the equation of
motion derived from the gauge-invariant perturbation theory is {\it not}
the same as the one {\it assumed} in the earlier analyses
\cite{Brandenberger-Mart:2000,Martin-Bran:2000,Niemeyer:2000}.
Later, we combine the system of differential equations into a single
differential equation in $\Phi$ and obtain the solutions for the
power-law inflation in different regimes. We also obtain the spectrum
of scalar perturbations in a particular limit and compare with the
earlier results.
This paper is organized as follows: In the following section, the
theory of cosmological perturbations for the canonical single scalar
field inflation is discussed and essential steps leading to the
perturbation equation are reviewed. In Sec. (\ref{sec:MDR}), we
discuss the general covariant formulation of the Lagrangian describing
the scalar field with modified dispersion relation and derive the
corresponding stress-tensor. In Sec. (\ref{sec:per-st}), we obtain the
perturbed stress-tensor for the scalar and tensor perturbations. In
Sec. (\ref{sec:Sca-Per-Eq}), we obtain the scalar perturbation
equation and the equation of motion of the Mukhanov-Sasaki
variable. In Sec. (\ref{sec:clas-ana}), we perform the classical
analysis and obtain the form of the scalar perturbations ($\Phi$) in
the various regimes. In Sec. (\ref{sec:Pow-Spe}), we solve the
perturbation equations in a particular limit and obtain the
power-spectrum of the perturbations. Our results are summarized and
discussed in the last section. In Appendices (A, B) we derive the
equations of motion of the fields in the FRW and perturbed FRW
backgrounds. In Appendix (C), we obtain the equation of motion of the
Bardeen potential in our model.
Throughout this paper, the metric signature we adopt is $(+,-,-,-)$
\cite{Landau-Lifs:1976-bk2}, we set $\hbar~ =~ c = ~1$ and $1/(8
\pi G) = M_{_{\rm Pl}}^2$. Physical quantities with an {\it
over-line} refer to values evaluated in the homogeneous and
isotropic FRW background. A dot denotes a derivative with respect to the
cosmic time ($t$), a prime stands for a derivative with respect to
conformal time ($\eta$), and $_{,i}$ denotes a derivative with respect
to the spatial coordinates. We follow the notation of
Ref. \cite{Mukhanov-Feld:1992} to provide easy comparison.
\section{Gauge-invariant perturbation: Canonical single field inflation}
\label{sec:GIV-Per}
In this section, we obtain the scalar and tensor perturbation
equations for the canonical single scalar field inflation. In the
following subsection, we discuss key properties of the perturbed FRW
metric and the ``gauge problem'' of the scalar perturbations. In the
subsequent subsections, we discuss the matter Lagrangian and provide
key steps in obtaining the scalar/tensor perturbation equations.
\subsection{Perturbed FRW metric}
We consider perturbations about a spatially flat $(3 + 1)$-dimensional
FRW line element
\begin{eqnarray}
ds^2 &= &\overline{g}_{\mu\nu} dx^{\mu} dx^{\nu} \nn \\
&=& dt^2-a^{2}(t)\, d{\bf x}^2 =
a^{2}(\eta)\left(d\eta^{2} - d{\bf x}^2\r),
\label{eq:frw}
\end{eqnarray}
where $t$ is the cosmic time, $a(t)$ is the scale factor and
$\eta=\int \left[dt/a(t)\r]$ denotes the conformal time. $H$ is the
Hubble parameter given by $H \equiv \dot{a}/{a}$ while $\mathcal H \equiv
a'/a$ is related to the Hubble parameter by the relation $\mathcal H =
H\,a$.
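These kinematic definitions are easy to verify directly. The sketch
below (an illustrative numerical check, not part of the analysis; the
choice $a(t) = t^p$ with $p = 2$ and the integration range are
arbitrary) builds $\eta(t)$ from its integral definition and confirms
$\mathcal H = a\,H$ by finite differences:

```python
# Illustrative check of  a'/a = a H  (prime = d/d eta), i.e. H_conf = a*H,
# for an arbitrary power-law a(t) = t^p, with eta built from eta = \int dt/a(t).
p = 2.0                      # illustrative exponent; any smooth a(t) works
n = 200001
t0, t1 = 1.0, 3.0
dt = (t1 - t0) / (n - 1)
ts = [t0 + j * dt for j in range(n)]
a = [t ** p for t in ts]

# conformal time on the same grid, via the cumulative trapezoid rule
eta = [0.0] * n
for j in range(1, n):
    eta[j] = eta[j - 1] + 0.5 * dt * (1.0 / a[j - 1] + 1.0 / a[j])

i = n // 2                                    # an interior grid point
H_conf = (a[i + 1] - a[i - 1]) / (eta[i + 1] - eta[i - 1]) / a[i]  # a'/a in eta
H = p / ts[i]                                 # Hubble parameter adot/a for t^p
print(H_conf, a[i] * H)                       # the two agree to grid accuracy
```

The identity is purely kinematic (it follows from $dt = a\,d\eta$), so
the agreement is independent of the particular scale factor chosen.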
At the linear level, for the canonical single scalar field inflation,
the metric perturbations ($\delta g_{\mu\nu}$) can be categorized into
two distinct types --- scalar and tensor perturbations. Thus, the
perturbed FRW line-element can be written as
\begin{eqnarray}
\label{eq:per-frw}
{\rm d}s^2 &=& a^2(\eta )\left[ (1+2\phi){\rm d}\eta ^2 - 2 {\partial}_i B {\rm d}x^i
{\rm d}\eta \r. \\
& & - \left. \left[(1-2 \psi )\delta _{ij}+2 {\partial}_i {\partial}_j E + h_{ij} \r]
{\rm d}x^i{\rm d}x^j \r]\ , \nn
\end{eqnarray}
where the functions $\phi$, $B$, $\psi$ and $E$ represent the scalar
sector, whereas the tensor $h_{ij}$, satisfying $h_i^i = {\partial}^{i} h_{ij}
= 0$, represents gravitational waves. Note that all these first-order
perturbations are functions of $(\eta, {\bf x})$. For convenience, we
do not write the dependence explicitly.
The tensor perturbations do not couple to the energy density
($\delta\rho$) and pressure ($\delta p$) inhomogeneities. The scalar
perturbations, however, couple to the energy density and pressure,
leading to growing inhomogeneities. At the linear level, the two types
of perturbations decouple and can be treated separately.
The scalar and tensor perturbations have four and two degrees of
freedom respectively. In the case of tensor perturbations, the two
degrees of freedom correspond to the two polarizations of the
gravitational waves and hence are physical. The scalar perturbations
suffer from the gauge problem. (For a detailed discussion, see
Refs. \cite{Kodama-Sasa:1984,Mukhanov-Feld:1992}.) However, it is
possible to construct two gauge-invariant variables, which
characterize the perturbations completely, from the metric variables
alone, i.e.,
\begin{equation}
\label{eq:gimp}
\Phi \equiv \phi +\frac{1}{a}[(B-E')a]', \qquad
\Psi \equiv \psi - \mathcal H (B-E') \, .
\end{equation}
Physically, $\Phi$ corresponds to the Newtonian gravitational potential
and is commonly referred to as the Bardeen potential while $\Psi$ is
related to the perturbations of the $3$-space. For the single
canonical scalar field scenario, we have $\Phi = \Psi$.
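The gauge invariance of $\Phi$ and $\Psi$ can be checked explicitly.
The sketch below is an illustrative numerical check (all perturbation
functions and gauge generators are arbitrarily chosen smooth functions
of $\eta$): it applies the standard transformation rules $\phi \to \phi
- (a\xi^0)'/a$, $\psi \to \psi + \mathcal H \xi^0$, $B \to B + \xi^0 -
\xi'$, $E \to E - \xi$ of Ref. \cite{Mukhanov-Feld:1992} and confirms
that $\Phi$ and $\Psi$ are unchanged:

```python
import math

def d(f, x, h=1e-3):
    """Central finite difference (h kept moderate: derivatives are nested)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# background scale factor and arbitrary smooth perturbation potentials
a   = lambda e: e ** 2
phi = lambda e: math.sin(e)
psi = lambda e: math.cos(2.0 * e)
B   = lambda e: e ** 3
E   = lambda e: math.exp(-e)

# arbitrary gauge generators: eta -> eta + xi0, x^i -> x^i + d^i xi
xi0 = lambda e: 0.1 * math.sin(3.0 * e)
xi  = lambda e: 0.2 * math.cos(e)

def Phi(phi, B, E, e):
    """Bardeen potential  Phi = phi + [(B - E') a]' / a ."""
    return phi(e) + d(lambda x: (B(x) - d(E, x)) * a(x), e) / a(e)

def Psi(psi, B, E, e):
    """Psi = psi - (a'/a) (B - E')."""
    return psi(e) - (d(a, e) / a(e)) * (B(e) - d(E, e))

# gauge-transformed potentials (standard rules, Mukhanov et al. 1992)
phi_t = lambda e: phi(e) - d(lambda x: a(x) * xi0(x), e) / a(e)
psi_t = lambda e: psi(e) + (d(a, e) / a(e)) * xi0(e)
B_t   = lambda e: B(e) + xi0(e) - d(xi, e)
E_t   = lambda e: E(e) - xi(e)

e0 = 1.3
print(Phi(phi, B, E, e0), Phi(phi_t, B_t, E_t, e0))  # equal: Phi is invariant
print(Psi(psi, B, E, e0), Psi(psi_t, B_t, E_t, e0))  # equal: Psi is invariant
```

The two evaluations of each potential differ only by finite-difference
error, reflecting the exact cancellation of the gauge terms.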
$\Phi$ and $\Psi$ are related to the pressure and density
perturbations of a generic perfect fluid {\it via} the perturbed
Einstein's equations. The pressure perturbations, in general, can be
split into adiabatic and entropic (non-adiabatic) parts, by writing
\begin{equation}
\delta p = c_{{\rm s}}^2 \delta\rho + \overline{p}{'} \Gamma \, ,
\label{eq:dPdR}
\end{equation}
where $c_s^2 \equiv {{\overline p}{'}}/{{\overline \rho}{'}}$ is the adiabatic
sound speed
\cite{Wands-Mali:2000,Malik-Wand:2002,Malik-Wand:2004}. The
non-adiabatic part is $\delta p_{\rm nad}\equiv {{\overline
p}{'}}\Gamma$, and
\begin{equation}
\label{defGamma}
\Gamma \equiv \frac{\delta p}{{\overline p}{'}} -
\frac{\delta\rho}{{\overline \rho}{'}} \,.
\end{equation}
The entropic perturbation $\Gamma$, defined in this way, is
gauge-invariant, and represents the displacement between
hyper-surfaces of uniform pressure and uniform density. In the
context of canonical single scalar field inflation, only adiabatic
perturbations are present, i.e., $\delta p = c_s^2 \delta\rho$ (where $c_s^2
= 1$) or $\Gamma = 0$. However, as we shall see in the later sections,
the trans-Planckian inflationary scenario introduces entropic
perturbations as well.
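As a quick numerical illustration of this split (all numbers below are
arbitrarily chosen toy values), $\Gamma$ vanishes exactly when $\delta p
= c_s^2\,\delta\rho$, and the decomposition reconstructs $\delta p$
identically otherwise:

```python
# Toy illustration of the adiabatic/entropic split of the pressure
# perturbation:
#   delta_p = c_s^2 delta_rho + pbar' * Gamma ,  c_s^2 = pbar'/rhobar' ,
#   Gamma   = delta_p/pbar' - delta_rho/rhobar'   (gauge invariant).
pbar_prime   = -0.4                      # illustrative background p-bar'
rhobar_prime = -1.2                      # illustrative background rho-bar'
cs2 = pbar_prime / rhobar_prime          # adiabatic sound speed squared (1/3)

def gamma(dp, drho):
    return dp / pbar_prime - drho / rhobar_prime

# purely adiabatic perturbation: dp = cs2 * drho  ->  Gamma = 0
drho = 0.05
print(gamma(cs2 * drho, drho))           # 0 (up to roundoff)

# non-adiabatic perturbation: Gamma != 0, and the split reconstructs dp
dp = 0.03
g = gamma(dp, drho)
print(abs(cs2 * drho + pbar_prime * g - dp))   # 0: the decomposition is exact
```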
\subsection{Canonical scalar field}
The dominant matter component during inflation is a spatially
homogeneous canonical scalar field $\overline{\varphi}(\eta)$ (inflaton). The
Lagrangian density for the canonical scalar field $\varphi(\eta,{\overline x})$
propagating in a general curved background is given by
\begin{equation}
\label{eq:std-sca}
{\cal L}_{\varphi} = \frac{1}{2} g^{\mu\nu} \,
{\partial}_{\mu}\varphi \, {\partial}_{\nu}\varphi - V(\varphi) \, ,
\end{equation}
where $V(\varphi)$ is the self-interacting scalar field potential. The
equation of motion and the stress tensor of the scalar field
($\varphi$) in the conformally flat FRW background (\ref{eq:frw}) are
given by
\begin{eqnarray}
\label{eq:eom-phi}
& &
{\varphi}'' + 2 \, \mathcal H \, {\varphi}' - \nabla^2\varphi + a^2 \,
V_{{,\varphi}}(\varphi) = 0 \, , \\
\label{eq:ST-phi}
& & {T^{\mu}_{\nu}}^{(\varphi)} = {\partial}^{\mu}\varphi \, {\partial}_{\nu}\varphi
- \left[\frac{1}{2} {\partial}^{\alpha}\varphi \, {\partial}_{\alpha}\varphi - V(\varphi)\r]
\delta^{\mu}_{\nu} \, ,
\end{eqnarray}
where $V_{,\varphi} = \left(dV(\varphi)/d\varphi\r)$ and $\nabla^2$
refers to the Laplacian in the flat space.
Let us consider small inhomogeneous quantum fluctuations on top of a
homogeneous and isotropic classical background. For the scalar field,
we have
\begin{equation}
{\varphi}(\eta, {\bf x}) = \overline{\varphi}(\eta) + \delta \varphi
(\eta, {\bf x}) \, ,
\end{equation}
where one assumes that the perturbation $\delta \varphi$ is small.
The perturbed scalar field ($\delta\varphi$) and the perturbed
stress-tensor of the scalar field ($\delta{T^{\mu}_{\nu}}^{(\varphi)}$),
like the other scalar-type perturbation functions $(\phi, B, \psi,
E)$, suffer from the gauge problem. (For a detailed discussion, see
Refs. \cite{Kodama-Sasa:1984,Mukhanov-Feld:1992}.) Similar to
Eq.~(\ref{eq:gimp}), it is possible to define a gauge-invariant
quantity for the perturbed scalar field and the perturbed
stress-tensor, i.~e.,
{\small
\begin{eqnarray}
\label{eq:gisf}
& & {\delta{\varphi}}^{(gi)} \equiv {\delta{\varphi}} + \overline{\varphi}' (B - E')
\, , \\
& &
\delta{T_{0}^{0}}^{({\rm gi})} \equiv \delta{T_{0}^{0}} + {\overline{T_{0}^{0}}}'
(B - E') \, , \nn \\
\label{eq:giST}
& &
\delta{T_{i}^{j}}^{({\rm gi})} \equiv \delta{T_{i}^{j}} + {\overline{T_{i}^{j}}}'
(B - E') \, , \\
& &
\delta{T^{0}_{i}}^{({\rm gi})} \equiv \delta{T^{0}_{i}} +
\left(\overline{T_{0}^{0}} - \frac{1}{3} \overline{T_{k}^{k}}\r)
(B - E')_{,i} \, . \nn
\end{eqnarray}
}
Separating the homogeneous and perturbed part from
Eq. (\ref{eq:eom-phi}), we have
\begin{eqnarray}
\label{eq:phi-frw}
& & \overline{\varphi}^{''} + 2 \mathcal H \overline{\varphi}^{'} + a^2 V_{,\varphi} = 0 \, , \\
\label{eq:gisfper}
& & {\delta{\varphi}^{(gi)}}'' + 2 \, \mathcal H \, {\delta{\varphi}^{(gi)}}' -
\nabla^2\left({\delta{\varphi^{(gi)}}}\r) \\
& & \qquad \qquad \quad + V_{,\varphi\varphi} \, a^2 \,
{\delta{\varphi}}^{(gi)} - 4 \overline{\varphi}' \Phi' +
2 V_{,\varphi} \, a^2 \, \Phi = 0 \nn \, .
\end{eqnarray}
Similarly, separating the homogeneous and perturbed part in the
stress-tensor (\ref{eq:ST-phi}), we get
{\small
\begin{eqnarray}
\label{eq:ST-frw}
& & \!\!\!\!\!\!\!\!\!\!\!\!\!
\overline{{T^0_0}^{(\varphi)}} = \frac{\overline{\varphi}'^2}{2 a^2}
+ V(\overline{\varphi}) \, ; \quad - \overline{{T^i_j}^{(\varphi)}} =
\left(\frac{\overline{\varphi}'^2}{2 a^2} - V(\overline{\varphi})\r) \delta^i_j \\
& & ^{(gi)}{\delta T^0_0}^{(\varphi)} =
a^{-2} \left[-{\overline{\varphi'}}^2 \Phi + {\overline{\varphi'}} {\delta\varphi^{(gi)}}'
+ V_{,\varphi} a^2 \delta \varphi^{(gi)}\r] \, ,\nn \\
\label{eq:per-std}
& & ^{(gi)}{\delta T^0_i}^{(\varphi)} =
a^{-2} {\overline{\varphi'}} \delta\varphi^{(gi)}_{,i} \, , \\
& & ^{(gi)}{\delta T^i_j}^{(\varphi)} =
a^{-2}\left[{\overline{\varphi'}}^2 \Phi - {\overline{\varphi'}} {\delta\varphi^{(gi)}}'
+ V_{,\varphi} a^2 \delta \varphi^{(gi)}\r] \delta^{i}_{_j} \, . \nn
\end{eqnarray}
}
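A useful consistency check on the above expressions is that the
background stress-tensor obeys the continuity equation $\overline{\rho}'
= -3\mathcal H\,(\overline{\rho}+\overline{p})$ whenever $\overline{\varphi}$
satisfies Eq. (\ref{eq:phi-frw}). The sketch below is an illustrative
numerical check: the functions $a(\eta)$ and $\overline{\varphi}(\eta)$
are arbitrary, and $a^2 V_{,\varphi}$ is defined through the field
equation, so only $V_{,\varphi}$ (not $V$ itself) is needed:

```python
import math

def d(f, x, h=1e-4):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# arbitrary smooth background (illustrative; not a solution of the Friedmann
# equations -- the continuity equation is an identity given the field equation)
a    = lambda e: math.exp(0.5 * e)       # scale factor a(eta)
vphi = lambda e: math.sin(e)             # background inflaton phibar(eta)

def Hc(e):                               # conformal Hubble parameter a'/a
    return d(a, e) / a(e)

def Vp(e):
    """a^2 V_{,phi} = -(phibar'' + 2 H phibar')  from the field equation."""
    return -(d(lambda x: d(vphi, x), e) + 2.0 * Hc(e) * d(vphi, e)) / a(e) ** 2

def kinetic(e):                          # phibar'^2 / (2 a^2)
    return d(vphi, e) ** 2 / (2.0 * a(e) ** 2)

e0 = 0.7
# rho' + 3 H (rho + p), with rho = kinetic + V and p = kinetic - V,
# so rho + p = 2 * kinetic and V' = V_{,phi} phibar':
lhs = d(kinetic, e0) + Vp(e0) * d(vphi, e0) + 3.0 * Hc(e0) * 2.0 * kinetic(e0)
print(lhs)                               # ~ 0 up to finite-difference error
```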
\subsection{Scalar and Tensor Perturbation equations}
In the earlier subsections, we obtained the gauge invariant variables
related to the scalar field $({\delta{\varphi}^{(gi)}}, ^{(gi)}{\delta
T^\mu_\nu}^{(\varphi)})$ and metric perturbations $(\Phi, \Psi)$. In this
subsection, we outline the essential steps leading to the scalar and
tensor equations of motion. Even though this is a standard result,
and can be found in numerous review articles (see, for example, Refs.
\cite{Kodama-Sasa:1984,Mukhanov-Feld:1992}), we give here the key
steps for future reference.
\indent From Eq. (\ref{eq:per-std}), it is easy to see that the non-diagonal
space-space components of the stress-tensor are absent. This leads to
the condition that $\Phi = \Psi$. Thus, the Einstein's equations for
the perturbed FRW metric (\ref{eq:per-frw}) in terms of the gauge
invariant quantities are:
\begin{subequations}
\label{eq:hydro-eq}
\begin{eqnarray}
\label{eq:hydro-eq1}
\nabla^2 \Phi - 3 \mathcal H \Phi' - 3 \mathcal H^2 \Phi =
\frac{1}{2 \, M_{_{\rm Pl}}^2}\, a^2 \,\delta {T^0_0}^{({\rm gi})} & & \\
\label{eq:hydro-eq2}
\frac{(a \Phi)'_{,i}}{a} = \frac{1}{2 \, M_{_{\rm Pl}}^2}\, a^2 \,
\delta {T^0_i}^{({\rm gi})} & & \\
\label{eq:hydro-eq3}
\Phi'' + 3 \mathcal H \Phi' + (2 \mathcal H' + \mathcal H^2) \Phi =
\frac{1}{2 \, M_{_{\rm Pl}}^2}\, a^2 \, \delta {T^i_i}^{({\rm gi})} \, , & &
\end{eqnarray}
\end{subequations}
where $\delta {T^\mu_\nu}^{({\rm gi})}$ is given by
Eq. (\ref{eq:per-std}). The three perturbed Einstein's equations can
be combined to form a single differential equation in $\Phi$:
{\small
\begin{equation}
\label{eq:Std-Phi}
\Phi^{''} - \nabla^{2}\Phi
+ 2 \left(\mathcal H - \frac{\overline{\varphi}^{''}}{\overline{\varphi}^{'}}\r) \Phi^{'}
+ 2 \left(\mathcal H^{'} - \mathcal H \frac{\overline{\varphi}^{''}}{\overline{\varphi}^{'}}\r) \Phi = 0 \, .
\end{equation}
}
The system of perturbation equations (\ref{eq:Std-Phi},
\ref{eq:hydro-eq2}, \ref{eq:gisfper}) is quite complex. To make the
physical content more transparent, these equations are
expressed in terms of two new variables -- $Q$ and $u$ -- which are
linearly related to $\Phi$ and $\delta\varphi^{(gi)}$, i.e.,
\begin{equation}
\label{eq:MS-u-def}
Q = a\left[\delta \varphi + \overline{\varphi}'\frac{\psi }{\mathcal H}\right]~;~
u = \frac{a}{\overline{\varphi}'} \Phi \, .
\end{equation}
$Q$ is a gauge-invariant (Mukhanov-Sasaki)
\cite{Kodama-Sasa:1984,Mukhanov-Feld:1992} variable whose equation of
motion is homogeneous. This ensures that one can quantize $Q$ in the
standard way using the Lagrangian associated with its equation of
motion. At the early stages of inflation where the quantum effects
are important, the equation of motion of $Q$ helps in quantizing the
fields and fixing the initial conditions. However, at the end stages
of inflation where the relevant modes have crossed the Hubble radius
and behave classically, it is easier to analyze the equation of motion
of $u$.
The equation of motion of $Q$ is derived as follows: Substituting
$\delta\varphi$ in terms of $Q$ in Eq.~(\ref{eq:gisfper}), and using the
relations (\ref{eq:Std-Phi}, \ref{eq:phi-frw}, \ref{eq:hydro-eq2}), we
get:
\begin{equation}
\label{eq:MSeq-Std}
Q'' - \, \nabla^2 Q - \frac{z''}{z} Q = 0 \, ,
\end{equation}
where
\begin{equation}
\label{eq:def-z}
z = \frac{a \, (\mathcal H^2 - \mathcal H')^{1/2}}{\mathcal H \, c_s}
= a \frac{\bar{\varphi}'}{\mathcal H} \, .
\end{equation}
The equation of motion of $u$ is obtained by substituting the
transformation (\ref{eq:MS-u-def}) in Eq. (\ref{eq:Std-Phi}):
\begin{equation}
\label{eq:ueq-Std}
u'' - \, \nabla^2 u - \frac{\theta''}{\theta} u = 0 \, ,
\end{equation}
where
\begin{equation}
\theta = \frac{\mathcal H}{a} \left[\frac{2}{3}(\mathcal H^2 - \mathcal H')\r]^{-1/2}
= \frac{\mathcal H}{a \bar{\varphi}'} \, .
\end{equation}
$Q$ is related to another gauge-invariant quantity $\zeta (\equiv -
(\mathcal H/{\overline{\rho}^{'}}) \delta\rho + \psi)$ by the relation $Q = 2 c_s\,z
\zeta$. The quantity $\zeta$ is time-independent on scales larger than
the Hubble radius and, more importantly, is related to the large-scale
CMB anisotropies (via the Sachs-Wolfe effect) \cite{Liddle-Lyth:2000-bk}.
Decomposing $Q$ into Fourier space, we have
\begin{equation}
\label{eq:SPer0}
\mu_{_S}'' + \left[k^2 - \frac{z''}{z} \r] \mu_{_S} = 0 \, ,
\end{equation}
where $k = |{\bf k}|$ and $\mu_{_S} = - Q_k = - 2 c_s\,z \zeta$. The
above equation is similar to a time-independent Schr\"odinger
equation in which the conformal time plays the role of the radial
coordinate and the effective potential is $U_{_{\rm S}}\equiv
{z''}/{z}$ \cite{Wang-Mukh:1997,Martin-Schw:2002}. The
scalar perturbation spectrum per logarithmic interval can then be
written in terms of the modes $\mu_{_S}$ as
\begin{equation}
\left[k^3\; {\cal P}_{S}(k)\right]
=\left(\frac{k^3}{2\pi^2}\right)\, \left(\frac{\vert
\mu_{_S}\vert}{z}\right)^2 \, ,
\label{eq:pS0}
\end{equation}
and the expression on the right hand side is to be evaluated when
the physical wavelength $(k/a)^{-1}$ of the mode corresponding to
the wavenumber ${\bf k}$ equals the Hubble radius $H^{-1}$.
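Equation (\ref{eq:SPer0}) is also easy to integrate numerically. The
sketch below (an illustrative check; the integrator, step size and
integration range are arbitrary choices) does so for the de Sitter
limit $\beta = -2$, where $z''/z = 2/\eta^2$ and the exact
positive-frequency (Bunch-Davies) mode $\mu_{_S} =
e^{-ik\eta}\left(1 - i/(k\eta)\r)/\sqrt{2k}$ is available for
comparison:

```python
import cmath

k = 1.0   # comoving wavenumber (illustrative)

def exact(eta):
    """Exact de Sitter mode: mu = e^{-ik eta} (1 - i/(k eta)) / sqrt(2k)."""
    return cmath.exp(-1j * k * eta) * (1.0 - 1j / (k * eta)) / (2.0 * k) ** 0.5

def exact_prime(eta):
    return cmath.exp(-1j * k * eta) * (
        -1j * k * (1.0 - 1j / (k * eta)) + 1j / (k * eta ** 2)
    ) / (2.0 * k) ** 0.5

def f(e, y):
    """First-order system for mu'' + (k^2 - 2/eta^2) mu = 0 (z''/z = 2/eta^2)."""
    return (y[1], -(k * k - 2.0 / e ** 2) * y[0])

# RK4 integration from deep inside the Hubble radius towards eta -> 0^-
eta, h = -20.0, 5.0e-4
y = (exact(eta), exact_prime(eta))
while eta < -0.05:
    k1 = f(eta, y)
    k2 = f(eta + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
    k3 = f(eta + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
    k4 = f(eta + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
    y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
         y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    eta += h

print(abs(y[0] - exact(eta)) / abs(exact(eta)))   # small relative error
```

The fixed step is chosen small enough to resolve the oscillations near
$\eta \to 0^-$, where the effective potential $2/\eta^2$ dominates.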
Before proceeding to the next section, we obtain the tensor
perturbation equation in the FRW background. As we mentioned earlier,
the tensor perturbations $h_{ij}(\eta, {\bf x})$ do not couple to the
energy density. These represent free gravitational waves and satisfy
the equation:
\begin{equation}
\label{eq:TPer0}
\mu_{_T}^{''} + \bigl( k^2 - \frac{a''}{a} \bigr) \mu_{_T} \, = \, 0 \, ,
\end{equation}
where $\mu_T \, \equiv \, a \, h_k $. This equation is very similar to
the corresponding equation (\ref{eq:SPer0}) for the scalar
inhomogeneities, except that in the effective potential $(U_{_{\rm
T}}\equiv {a''}/{a})$ the function $z(\eta)$ is replaced by the scale
factor $a(\eta)$.
\section{Modified Dispersion relation Lagrangian}
\label{sec:MDR}
In this section, we briefly discuss the general covariant formulation
describing a scalar field with modified dispersion relation and derive
the corresponding stress-tensor.
As discussed in the introduction, to keep the calculations tractable,
we will assume that the scalar field with the high frequency
dispersion relation is of the form
\begin{equation}
\omega^2 = |\vec{k}|^2 + b_{_{11}} |\vec{k}|^4 \, ,
\end{equation}
where $b_{_{11}} > 0$. The above dispersion relation breaks local
Lorentz invariance explicitly while preserving rotational and
translational invariance. Nevertheless, it was shown
in Ref. \cite{Jacobson-Matt:2000b} that a covariant formulation of the
corresponding theory can be carried out by introducing a unit
time-like vector field $u^{\mu }$ which defines a preferred rest
frame.
\subsection{Covariant Lagrangian}
\label{sec:Cov-Lag}
The action for a scalar field with the modified dispersion relation
takes the form~\cite{Jacobson-Matt:2000b,Lemoine-Lubo:2001}
\begin{eqnarray}
\label{eq:mdr-sca}
S &=&\int{\rm d}^4x\sqrt{-g}~({\cal L}_{\varphi}+{\cal L}_{_{\rm
cor}}+{\cal L}_u),
\end{eqnarray}
where ${\cal L}_{\varphi}$ is the standard Lagrangian of a minimally
coupled scalar field given by Eq. (\ref{eq:std-sca}). The last two
terms --- ${\cal L}_{_{\rm cor}}$ and ${\cal L}_u$ --- contribute to
the modified dispersion relation of the scalar field. ${\cal
L}_{_{\rm cor}}$ corresponds to the non-linear part of the dispersion
relation while ${\cal L}_u$ describes the dynamics of the vector field
$u^\mu$. The two corrective Lagrangians have the form
\begin{subequations}
\begin{eqnarray}
\label{eq:lcor}
{\cal L}_{_{\rm cor}}&=& - b_{11}
\left({\cal D}^{2}\varphi\right)^2, \\
\label{eq:lu}
{\cal L}_u&=& - \lambda(g^{\mu\nu}u_\mu u_\nu - 1)-
d_1F^{\mu \nu}F_{\mu \nu} \, ,
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
\label{eq:def-Fmn}
F_{\mu \nu }&\equiv& \nabla _{\mu }\,u_{\nu}-\nabla_\nu\,u_\mu \, , \\
\label{eq:DDvp}
{{\cal D}}^2 \varphi
&=& \perp^{\alpha\beta}\nabla_\alpha\nabla_\beta\varphi
+u^\alpha\nabla_\alpha\varphi\nabla_\beta u^\beta \, , \\
\label{eq:def-per}
\perp_{\mu\nu} &\equiv& - g_{\mu\nu}+ u_{\mu} u_{\nu} \quad .
\end{eqnarray}
The covariant derivative associated with the metric $g_{\mu \nu}$ is
$\nabla_{\mu }$ while $b_{11}$ and $d_1$ are arbitrary (dimensional)
constants. The tensor $\perp_{\mu\nu}$ gives the metric on a slice of
fixed time while ${{\cal D}}^2$ is proportional to the Laplacian operator
on the same surface. The fact that $u^\mu$ is a unit time-like vector
($u^\mu u_{\mu} = 1$) is enforced by the Lagrange multiplier
$\lambda$. In the above, $b_{11}$ and $d_1$ have the dimensions of
inverse mass squared and mass squared, respectively, while $u_{\mu}$ is
dimensionless.
The equation of motion for $\varphi$ and $u_{\mu}$ obtained by varying the
action (\ref{eq:mdr-sca}), respectively, are
{\small
\begin{eqnarray}
\label{eq:DE-vp}
\nabla^{\mu}\nabla_{\mu}\varphi + V_{,\varphi} &=& 2 b_{_{11}}
\left[\nabla_{\mu}\left({\cal D}^2\varphi \, u^{\mu} \, \nabla_{\nu}u^{\nu}\r) \r. \\
& & \qquad \qquad \qquad \left.
- \nabla_{\mu}\nabla_{\nu}\left({\cal D}^2\varphi \, \perp^{\mu\nu}\r) \r] \, , \nn \\
\label{eq:DE-u}
2 d_{1} \nabla_{\nu} F^{\nu\mu} - \lambda u^{\mu}
&=& - b_{_{11}} \left[\nabla^{\mu}\left({\cal D}^2\varphi u^{\nu} {\partial}_{\nu}\varphi\r) \r. \\
& -& \left. \nabla^{\mu}\nabla_{\nu}\varphi \, u^{\nu} {\cal D}^2\varphi
- \nabla_{\nu}\left(\nabla^{\mu}\varphi u^{\nu}\r) {\cal D}^2\varphi \r] \, . \nn
\end{eqnarray}
}
\noindent From the above equation, we get
\begin{eqnarray}
\label{eq:def-lam}
\lambda &=&
b_{_{11}} u_{\mu} \left[\nabla^{\mu}\left({\cal D}^2\varphi u^{\nu} {\partial}_{\nu}\varphi\r)
- \nabla^{\mu}\nabla_{\nu}\varphi \, u^{\nu} {\cal D}^2\varphi \r. \\
& & \left. - \nabla_{\nu}\left(\nabla^{\mu}\varphi u^{\nu}\r) {\cal D}^2\varphi \r]
+ 2 d_1 u_{\mu} \nabla_{\nu} F^{\nu\mu} \, .\nn
\end{eqnarray}
Using the results of Appendix (\ref{sec:FRW-bkg}), it is easy to show
that for the FRW background $\overline{\varphi}$ satisfies
Eq. (\ref{eq:phi-frw}), which is the same as that of the canonical scalar
field. The field equation for $u_{\mu}$ gives $\overline{\lambda} = 0$.
Before we proceed to the computation of the stress-tensor, it is
important to note the current astrophysical constraints on the
parameters of the model \cite{Mattingly:2005}: The constraint on the
parameter $d_1$ comes from big-bang nucleosynthesis
\cite{Carroll-Lim:2004} and the solar-system tests of general relativity
\cite{Graesser-Jenk:2005}. These give:
\begin{equation}
0 < \frac{d_1}{M_{_{\rm Pl}}^2} < \frac{1}{7} \, .
\end{equation}
The constraint on the parameter $b_{11}$ comes from observations
of the highest-energy cosmic rays \cite{Gagnon-Moor:2004}. Using
effective field theory with higher-dimensional operators, which lead
to modified dispersion relations, it was shown
that for various standard-model particles
\begin{equation}
b_{11} \, M_{_{\rm Pl}}^2 < 5 \times 10^{-5} \, .
\end{equation}
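To get a feel for these numbers (in units with $M_{_{\rm Pl}} = 1$):
even with the bound saturated, the $|\vec{k}|^4$ term in the
dispersion relation overtakes the $|\vec{k}|^2$ term only at
$k \simeq b_{11}^{-1/2} \gtrsim 140\, M_{_{\rm Pl}}$, as the trivial
sketch below illustrates:

```python
# Scale at which the |k|^4 correction to  omega^2 = k^2 + b11 k^4
# becomes comparable to the quadratic piece, for the quoted bound on b11
# (Planck units, M_Pl = 1).
b11_max = 5.0e-5                    # b11 * M_Pl^2 < 5e-5 (cosmic-ray bound)

def omega2(k, b11):
    return k * k + b11 * k ** 4

k_star = b11_max ** -0.5            # b11 k^4 = k^2  at  k = 1/sqrt(b11)
print(k_star)                       # ~ 141 M_Pl

# at k = M_Pl the fractional correction is tiny:
print((omega2(1.0, b11_max) - 1.0) / 1.0)   # = b11 = 5e-5
```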
\subsection{Stress tensor}
\label{sec:Str-Ten}
In this subsection, we obtain the stress-tensor for the scalar field
defined in Eq. (\ref{eq:mdr-sca}). Formally, the stress-tensor for a
general Lagrangian containing up to first order derivative in the
metric is given by (cf. Ref. \cite{Landau-Lifs:1976-bk2}, p. 272)
\begin{equation}
\label{ll}
T_{\mu\nu}=-g_{\mu\nu}{\cal L} + 2
\frac{\partial{\cal L}}{\partial g^{\mu\nu}}
- \frac{2}{\sqrt{-g}}\partial_\rho\left(\!\sqrt{-g}
\frac{\partial{\cal L}}{\partial(\partial_\rho g^{\mu\nu})}\right).
\end{equation}
For simplicity, we will separate the contributions from the three
different Lagrangians defined in Eq. (\ref{eq:mdr-sca}), i. e.,
\begin{equation}
T_{\mu\nu} = T^{(\varphi)}_{\mu\nu} +
T^{(u)}_{\mu\nu} +
T^{(cor)}_{\mu\nu} \, .
\end{equation}
The stress-tensor $T_{\mu \nu}^{^{(\varphi)}}$ corresponding to the
canonical scalar field Lagrangian is given in Eq. (\ref{eq:ST-phi}).
The stress-tensor corresponding to the Lagrangian ${\cal L}_{_{u}}$
can easily be obtained and is given by
\begin{eqnarray}
T_{\mu \nu}^{^{(u)}} &=&
d_1 g_{\mu \nu} F_{\varepsilon\varkappa} F^{\varepsilon\varkappa} -
4 d_1 F_{\mu \varepsilon} F_{\nu \varkappa} g^{\varepsilon\varkappa} \nn \\
& & + \lambda \left[g_{\mu \nu}
(g_{\varepsilon\varkappa} u^{\varepsilon} u^{\varkappa} -1) - 2 u_\mu u_\nu \r] \, .
\label{eq:ST-u}
\end{eqnarray}
However, the stress-tensor corresponding to ${\cal L}_{_{cor}}$ is
much more involved. For the sake of continuity, we give below only the
final result, while the steps leading to it are given in
Appendix (\ref{sec:FRW-bkg}). We get,
{\small
\begin{equation}
\label{eq:ST-cor}
T_{\mu\nu}^{^{\rm (cor)}} = b_{_{11}} g_{_{\mu \nu}} ({{\cal D}}^2 \varphi)^2
+ b_{_{11}} E_{_{\mu\nu}} {\cal D}^2\varphi
+ 4 b_{_{11}} C_{_{\mu \nu}}^{\rho \varkappa} \partial_{\varkappa}\varphi \,
\partial_\rho[{{\cal D}}^2 \varphi] \, ,
\end{equation}
}
\noindent where,
\begin{eqnarray}
E_{\mu\nu}& =& 4 \left[
\partial_\rho[\ln(\sqrt{-g})] \, C_{\mu \nu}^{\rho \varkappa}\,
\partial_{\varkappa} \varphi
+ \partial_\rho \left(C_{\mu \nu}^{\rho \varkappa} \partial_{\varkappa} \varphi\r)\r.\nn \\
\label{eq:Def-Emn}
& & - \left. A_{\mu \nu}^{\varepsilon\varkappa} \partial_{\varepsilon} \partial_{\varkappa}
\varphi - B^{\varkappa}_{\mu\nu} \partial_{\varkappa} \varphi \r] \\
C^{\rho \varkappa}_{\mu\nu} &=& \frac{1}{2} \left[ g_{\mu\nu} g^{\varkappa \rho} -
\delta^\rho_\mu \delta^\varkappa_\nu - \delta^\varkappa_\mu \delta^\rho_\nu
+ u_{_\mu} \delta^\rho_\nu u^{\varkappa} - u_{_\mu} u_{_\nu} g^{\varkappa \rho} \r. \nn\\
\label{eq:Def-C}
&+& \left. u_{_\mu} \delta^\varkappa_\nu u^{\rho} + u_{_\nu} \delta^\varkappa_\mu u^{\rho}
- g_{\mu\nu} u^{\rho} u^{\varkappa} + \delta^\rho_\mu u_\nu u^{\varkappa} \r] \, .
\end{eqnarray}
Before proceeding with the evaluation of the perturbed stress-tensor,
we would like to mention the following point: Using the results of
Appendix (\ref{sec:FRW-bkg}), it is clear that $T_{\mu\nu}^{(u)}$ and
$T_{\mu\nu}^{(cor)}$ vanish in the unperturbed FRW background.
Hence, the equations determining the evolution of the scale
factor, i.e.,
\begin{equation}
3 \mathcal H^2 = \frac{a^2}{M_{_{\rm Pl}}^2} \overline{T^0_0}^{(\varphi)}~;~
2 \mathcal H^{'} + \mathcal H^2 = \frac{a^2}{3 M_{_{\rm Pl}}^2} \overline{T^i_i}^{(\varphi)} \, ,
\end{equation}
remain the same as in the canonical scalar field inflation. It is also
worth mentioning that the trans-Planckian corrections do not play any
role in the expansion of the FRW background while, as we will see in
the next section, they do affect the metric
and inflaton perturbations.
In this work, we will focus on the power-law inflation, for which, the
scale factor is given by
\begin{equation}
\label{eq:polaw}
a(t) = \left(a_0 \, t^p\r)\quad {\rm or} \quad
a(\eta) = \left(\frac{-\eta}{-\eta_0} \r)^{(\beta+1)}
\, ,
\end{equation}
where $p > 1$, $\beta \leq -2$ ($\beta = -2$ corresponds to de
Sitter), $a_0$ is a constant,
\begin{equation}
\beta = -\left(\frac{2p-1}{p - 1}\r)
\quad{\rm and}\quad
(- \eta_0) = \frac{a_0^{-1/p}}{(p - 1)} \, .
\end{equation}
The scalar field potential and other background field parameters
are given by ($q = \sqrt{2/p}$)
\begin{eqnarray}
V = v_o M_{_{\rm Pl}}^4
\exp \left(- q \frac{\overline{\varphi}}{M_{_{\rm
Pl}}}\r) &;& \mathcal H = \frac{-(1 + \beta)}{(-\eta)}~;\\
{\overline \varphi} = \sigma_o M_{_{\rm Pl}} \ln\left(\frac{-\eta}{-\eta_0}\r)
&;&\sigma_o = \sqrt{2 \beta (\beta +1)}~; \nn \\
\label{eq:PLaw-para}
a_0 = - \frac{\sqrt{(\beta + 1)(1+2 \beta)}}{\sqrt{v_o} \eta_0 M_{_{\rm
Pl}}} &;&
\frac{z''}{z} = \frac{a''}{a} = \frac{\beta (\beta +1)}{(-\eta)^2} \, . \nn
\end{eqnarray}
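Several of the background relations above are straightforward to
cross-check. The sketch below (an illustrative numerical check with
$p = 3$, i.e. $\beta = -5/2$, and $\eta_0 = -1$; both choices are
arbitrary) builds $a(\eta) = (-\eta/-\eta_0)^{\beta+1}$ and verifies
$\mathcal H = -(1+\beta)/(-\eta)$ and $a''/a = \beta(\beta+1)/(-\eta)^2$
by finite differences:

```python
# Cross-check of the power-law background relations in conformal time.
p = 3.0
beta = -(2.0 * p - 1.0) / (p - 1.0)       # = -5/2 for p = 3
eta0 = -1.0                               # illustrative normalisation

def a(eta):
    return ((-eta) / (-eta0)) ** (beta + 1.0)

def d(f, x, h=1e-5):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

eta = -0.4                                # an arbitrary time during inflation
H_conf = d(a, eta) / a(eta)
print(H_conf, -(1.0 + beta) / (-eta))     # conformal Hubble parameter

a_pp_over_a = d(lambda x: d(a, x), eta) / a(eta)
print(a_pp_over_a, beta * (beta + 1.0) / eta ** 2)  # a''/a = beta(beta+1)/eta^2
```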
\section{Perturbed stress tensor}
\label{sec:per-st}
In this section, we will obtain the perturbed stress-tensor for the
scalar field with modified dispersion relation (\ref{eq:mdr-sca}) in
the perturbed FRW background (\ref{eq:per-frw}). As in the previous
section, we will separate the contributions to the perturbed
stress-tensor from the three different Lagrangians defined in
Eq. (\ref{eq:mdr-sca}), i.e.\footnote{The mixed stress-tensor $\delta
T^{\mu}_{\nu}$ is given by
\begin{equation}
\delta T^{\mu}_{\nu} \equiv \delta \left(g^{\mu\epsilon} T_{\epsilon\nu}\r)
= \overline{g^{\mu\epsilon}} (\delta T_{\epsilon\nu}) + (\delta g^{\mu\epsilon}) \overline{T_{\epsilon\nu}}
\label{eq:rel-sts}
\end{equation}
},
\begin{equation}
\delta T^{\mu}_{\nu} = \delta {T^{\mu}_{\nu}}^{(\varphi)} +
\delta {T^{\mu}_{\nu}}^{(u)} +
\delta {T^{\mu}_{\nu}}^{(cor)} \, .
\label{eq:per-stmn}
\end{equation}
The first term in the RHS of the above expression corresponds to the
perturbed stress-tensor of the canonical scalar field Lagrangian and
is given by Eqs. (\ref{eq:per-std}). In the rest of the section, we
will obtain contributions from the other two terms.
Perturbing Eq. (\ref{eq:ST-u}) and using the fact that $F_{\mu\nu}$
vanishes for the FRW background [See Appendix (\ref{sec:FRW-bkg})], we
get
\begin{equation}
\label{eq:per-stu}
\delta {T^{\mu}_{\nu}}^{(u)} = - 2 \, \delta^\mu_0 \,
\delta^0_\nu \, (\delta\lambda) \, ,
\end{equation}
where $\delta\lambda$ is given by Eq. (\ref{eq:p-lam}). Using the fact
that $\overline{{{\cal D}}^2(\varphi)}$ and $\overline{{\partial}_{\rho}{{\cal D}}^2(\varphi)}$
vanish for the FRW background [See Appendix (\ref{sec:FRW-bkg})],
the perturbation of Eq. (\ref{eq:ST-cor}) takes the following simple
form:
\begin{equation}
\delta T^{(cor)}_{\mu \nu} =
4 b_{11} \left[ \overline{E_{\mu\nu}}\, \delta({\cal D}^2 \varphi) +
\, \overline{C^{\rho 0}_{\mu\nu}} \, \overline{\varphi}' \,
{\partial}_{\rho}(\delta{\cal D}^2\varphi) \r] \, .
\end{equation}
Substituting the relation (\ref{eq:d-DD}) for $\delta({\cal D}^2 \varphi)$ and
Eqs. (\ref{eq:bABCE1}) in the above expression, we get
\begin{eqnarray}
\label{eq:per-stcor}
& & \!\!\!\!\!\!\!\!\!\!\!\!
\delta {T^{\mu}_{\nu}}^{(cor)} = \frac{2 b_{11}}{a^4}
\left[ 5 \frac{{\cal H}}{a} {\overline{\varphi}^{'}}^2 \nabla^2 \xi^{(gi)}
- \frac{1}{a} {\overline{\varphi}'}^2 \nabla^2 {\xi^{(gi)}}' \r. \\
& & \quad ~~ - \left. \left({\overline{\varphi}^{''}} + 4 {\cal H} {\overline{\varphi}^{'}} \r)
\nabla^2(\delta\varphi) + {\overline{\varphi}^{'}} \nabla^2(\delta\varphi)' \r]
\delta^{\mu}_{_{0}}\delta^{0}_{_{\nu}} \, ,\nn
\end{eqnarray}
where $\xi$ is defined in Eq.~(\ref{eq:def-xi}). Substituting
Eqs. (\ref{eq:per-std}, \ref{eq:per-stu}, \ref{eq:per-stcor}) in
Eq. (\ref{eq:per-stmn}), we obtain the perturbed gauge-invariant
stress-tensor to be:
\begin{subequations}
\label{eq:per-fin}
\begin{eqnarray}
{\delta T^0_0}^{(gi)}&=& \frac{1}{a^{2}}
\left[-{\overline{\varphi'}}^2 \Phi + {\overline{\varphi'}} {\delta\varphi^{(gi)}}^{'}
+ V_{,\varphi} a^2 \delta \varphi^{(gi)}\r] \nn \\
\label{eq:per-fin1}
& & \qquad + \,
\frac{4 d_{_1}}{a^2} \left[\nabla^2 \Phi - \frac{1}{a} \nabla^2 {\xi^{(gi)}}^{'}
\r] \, , \\
\label{eq:per-fin2}
{\delta T^i_j}^{(gi)} &=& a^{-2} \left[{\overline{\varphi'}}^2 \Phi - {\overline{\varphi'}} {\delta\varphi^{(gi)}}'
+ V_{,\varphi} a^2 \delta \varphi^{(gi)}\r] \delta^{i}_{_j} \, , \\
\label{eq:per-fin3}
{\delta T^0_i}^{(gi)}&=& a^{-2} {\overline{\varphi'}} \delta\varphi_{,i}^{(gi)} \, .
\end{eqnarray}
\end{subequations}
The following points are worth noting regarding the above result:
Firstly, the two corrective Lagrangians -- ${\cal L}_{u}$ and ${\cal
L}_{_{cor}}$ -- contribute only to the perturbed energy density
$(\delta\rho)$. This implies that the non-diagonal space-space components
of the stress-tensor are absent, leading to the condition that $\Phi =
\Psi$. It also implies that the constraint equation
(\ref{eq:hydro-eq2}) remains unchanged even for trans-Planckian
inflation. Secondly, since the trans-Planckian corrections do not
change the pressure perturbations, the perturbation equations for the
tensor modes remain unchanged. Recently, Lim \cite{Lim:2004} has shown
that general Lorentz-violating models (without taking into account the
higher derivatives of the scalar field) can modify the pressure
perturbations and hence the tensor perturbation equations. However, in
our specific Lorentz-violating model, this is not the case. { This
indicates that the well-known consistency relation between the scalar
and tensor spectra will also be broken in this model
\cite{Hui-Kinn:2001,Ashoorioon-Mann:2004,Ashoorioon-Hovd:2005}.}
Thirdly, it is
interesting to note that the trans-Planckian contributions to the
energy density go as $k^2$. Hence, as one would expect, on
super-Hubble scales {\it only} the canonical scalar field contributes
significantly. Lastly, these corrections can have two significant
implications for the perturbation spectrum: (a) The speed of
propagation of the perturbations ($c_s^2$) can be different from that
of the standard single scalar field inflation, for which we know that
$c_s^2 = 1$; due to the extra contributions to the energy density,
this need no longer be true. (b) The perturbations need not be purely
adiabatic: $\xi$ can act as an extra scalar field during inflation and
hence as a source, introducing non-adiabatic (entropic) perturbations
(see, for example, Refs. \cite{GrootNibbelink-Vant:2001,vanTent:2003}).
We will discuss these points further in the following sections.
\section{Scalar Perturbation equation}
\label{sec:Sca-Per-Eq}
Substituting Eqs. (\ref{eq:per-fin}) in (\ref{eq:hydro-eq}), the
first-order perturbed Einstein's equations take the following form,
\begin{subequations}
\label{eq:Per-EinTP}
{\small
\begin{eqnarray}
& & \nabla^2 \Phi - 3 \mathcal H \Phi' - 3 \mathcal H^2 \Phi =
\frac{2 d_{_1}}{M_{_{\rm Pl}}^2} \left[\nabla^2 \Phi - \frac{1}{a}
\nabla^2 {\xi^{(gi)}}^{'}\r] \\
& & \qquad \qquad \quad
+ \, \frac{1}{2 M_{_{\rm Pl}}^2} \left[-{\overline{\varphi'}}^2 \Phi + {\overline{\varphi'}}
{\delta\varphi^{(gi)}}^{'} + V_{,\varphi} a^2 \delta \varphi^{(gi)}\r] \, , \nn \\
& & \mathcal H \Phi + \Phi' = \frac{1}{2 M_{_{\rm Pl}}^2}\,
{\overline{\varphi'}} \delta\varphi^{(gi)} \, , \\
& & \Phi'' + 3 \mathcal H \Phi' + (2 \mathcal H' + \mathcal H^2) \Phi =
\frac{1}{2 M_{_{\rm Pl}}^2}\,\left[{\overline{\varphi'}}^2 \Phi - {\overline{\varphi'}} \delta\varphi'^{(gi)}
\r. \nn \\
& & \left. \qquad \qquad \qquad \qquad \qquad \qquad \quad
+~~ V_{,\varphi} a^2 \delta \varphi^{(gi)}\r] \, .
\end{eqnarray}
}
\end{subequations}
As in the canonical scalar field inflation, the three perturbed
Einstein's equations (\ref{eq:Per-EinTP}) can be combined to give
\begin{multline}
\label{eq:MDR-Phi}
\Phi^{''} - \left(1 - \frac{2 d_{_1}}{M_{_{\rm Pl}}^2}\r) \nabla^{2}\Phi
+ 2 \left(\mathcal H - \frac{\overline{\varphi}^{''}}{\overline{\varphi}^{'}}\r) \Phi^{'} \\
+ 2 \left(\mathcal H^{'} - \mathcal H \frac{\overline{\varphi}^{''}}{\overline{\varphi}^{'}}\r) \Phi
= \frac{2 d_{_1}}{M_{_{\rm Pl}}^2} \frac{1}{a} \nabla^2 {\xi^{(gi)}}^{'} \, .
\end{multline}
Perturbing the field equations (\ref{eq:DE-vp}, \ref{eq:DE-u}), we get
{\small
\begin{eqnarray}
\label{eq:per-vpTP}
& & \!\!\!\!\!
{\delta{\varphi}^{(gi)}}'' + 2 \, \mathcal H \, {\delta{\varphi}^{(gi)}}' -
\nabla^2\left({\delta{\varphi^{(gi)}}}\r) + V_{,\varphi\varphi} \, a^2 \,
{\delta{\varphi}}^{(gi)} \\
& & - 4 \overline{\varphi}' \Phi' + 2 V_{,\varphi} \, a^2 \, \Phi
+ \frac{2 b_{_{11}}}{a^2}\left[\nabla^4 \delta\varphi^{(gi)} -
\frac{\overline{\varphi}'}{a} \nabla^4 \xi^{(gi)} \r] = 0 \nn \, , \\
& & \!\!\!\!\!
{\partial}_m \left[\left(1 - \frac{c_1}{M_{_{\rm Pl}}^2} \frac{\nabla^2}{a^2}\r)\Phi -
2 \mathcal H \left(1 + \frac{c_1}{2 M_{_{\rm Pl}}^2}\frac{\nabla^2}{a^2}\r)
\Phi'\r] \nn \\
\label{eq:per-uTP}
& & - \frac{1}{a} {\partial}_m \left[{\xi^{(gi)}}^{''} - 3 \mathcal H {\xi^{(gi)}}^{'} -
\frac{b_{11}}{2 d_1} \frac{\overline{\varphi}^2}{a^2} \nabla^2\xi^{(gi)}\r] = 0 \, ,
\end{eqnarray}
}
where $c_1 = (M_{_{\rm Pl}}^4 b_{11})/d_1$ is a dimensionless
constant.
The following points are interesting to note regarding the above results:
(i) The spatial higher derivatives appear {\it only} in the equation
of motion of $\delta\varphi$ and not in the equation for the metric perturbation $\Phi$.
Unlike the standard inflation, the perturbations are not purely
adiabatic and the speed of propagation of the perturbations is less
than unity, i. e. $c_s^2 = 1 - 2 d_1/M_{_{\rm Pl}}^2$. (ii) In the case of
canonical scalar-field inflation, the two dynamical variables $\Phi$
and $\delta\varphi^{(gi)}$ are related by the constraint equation
(\ref{eq:hydro-eq2}). In our model, $\Phi$ and $\delta\varphi^{(gi)}$ are
again related by the same constraint equation, however $\xi$ is
related to the other fields {\it via} the equations of motion. Hence,
unlike the standard inflation, we have two sets of independent
variables. (iii) In the case of canonical scalar-field inflation,
$\Phi$ and $\delta\varphi^{(gi)}$ can be combined into a single variable ---
Mukhanov-Sasaki ($Q$) variable --- in terms of which we can obtain the
perturbation spectrum. However, in this model, as in the case of
multi-field inflation models \cite{GrootNibbelink-Vant:2001}, the
equations of motion in terms of the Mukhanov-Sasaki variables are
coupled. In the rest of the section, we derive the equation of motion
of the Mukhanov-Sasaki variables corresponding to $\delta\varphi^{(gi)}$ and
$\xi$.
Substituting $\delta\varphi$ in terms of $Q$ in Eq. (\ref{eq:per-vpTP}), and
using the relations (\ref{eq:MDR-Phi}, \ref{eq:phi-frw},
\ref{eq:hydro-eq2}), we get
\begin{eqnarray}
Q'' - \left(1 - \frac{2 b_{11}}{a^2}\nabla^2 \r) \nabla^2Q -
\frac{z''}{z} Q = \frac{2 d_1}{M_{_{\rm Pl}}^2} \nabla^2 {\cal S}(\eta) \, ,
\end{eqnarray}
where
\begin{equation}
{\cal S} = \overline{\varphi}^{'} \left[
\frac{Q_{\xi}^{(2)}}{\mathcal H} + \frac{c_1}{a^{1/2}} \nabla^2 Q_{\xi}^{(1)}
\r] \, ,
\end{equation}
and $Q_{\xi}^{(1)}, Q_{\xi}^{(2)}$ are the gauge-invariant variables
associated with $\xi$ and are given by:
\begin{equation}
Q_{\xi}^{(1)} = a^{-3/2} \left[ \xi + \frac{a}{\mathcal H} \psi\r]\, ; \,
Q_{\xi}^{(2)} = \xi^{'} - a \phi \, .
\end{equation}
Substituting for $\xi$ in terms of $Q_{\xi}^{(1)}$ in (\ref{eq:per-uTP}),
we get,
{\small
\begin{eqnarray}
& & \!\!\!\!\!\!\!\!\!
\Bigg[ a^{3/2} \left({Q_{\xi}^{(1)}}^{''} + \left[\frac{3}{2}\mathcal H^{'}
- \frac{9}{4}\mathcal H^{2} -
\frac{b_{_{11}} {\overline \phi^{'}}^2}{2 d_1 a^2} \nabla^2\r]{Q_{\xi}^{(1)}}
\r) \\
& & \!\!\!\!\!\!\!\!\! - \frac{{\overline \phi^{'}}}{M_{_{\rm Pl}}^2 \mathcal H}
\left(\frac{{\overline \phi^{''}}}{{\overline \phi^{'}}} -
\frac{\mathcal H^{'}}{\mathcal H} - \mathcal H - \frac{c_1 \mathcal H}{2 M_{_{\rm Pl}}^2 a^2} \nabla^2\r)
Q - \frac{a}{\mathcal H} \nabla^2{\Phi}\Bigg]_{,m} = 0 \, .\nn
\end{eqnarray}
}
Decomposing $Q, Q_{\xi}^{(1)}, Q_{\xi}^{(2)}$ into Fourier space, we have
{\small
\begin{eqnarray}
\label{eq:SPerTP}
& & \!\!\!\!\!
\mu_{_S}^{''} + \left[ k^2 + \frac{2 b_{_{11}}}{a^2(\eta)} k^4 -
\frac{z^{''}}{z} \r] \mu_{_S} =
- \frac{2 d_1}{M_{_{\rm Pl}}^2} k^2 {\cal S}_{k}(\eta) \\
\label{eq:UPerTP}
& & \!\!\!\!\! \mu_{_\xi}^{''} + \left[\frac{3}{2}\mathcal H^{'}
- \frac{9}{4}\mathcal H^{2} +
\frac{b_{_{11}} {\overline \phi^{'}}^2}{2 d_1 a^2} k^2 \r]{\mu_{_\xi}}
= a^{-3/2} \left[\frac{a k^2}{\mathcal H}\Phi_k \r.
\\
& & \left. \qquad \qquad + \frac{{\overline \phi^{'}}}{M_{_{\rm Pl}}^2 \mathcal H}
\left(\frac{{\overline \phi^{''}}}{{\overline \phi^{'}}} -
\frac{\mathcal H^{'}}{\mathcal H} - \mathcal H + \frac{c_1 \mathcal H}{2 M_{_{\rm Pl}}^2 a^2} k^2\r)
\mu_{_S} \r] \, , \nn
\end{eqnarray}
}
\noindent
where the Fourier transforms of $Q$, $Q_{\xi}^{(1)}$, $Q_{\xi}^{(2)}$,
respectively, are $\mu_{_S}, \mu_{_\xi}, Q_{k}^{(2)}$ and
\begin{equation}
\label{eq:sou-fin}
{\cal S}_k(\eta) = \overline{\varphi}^{'} \left[
\frac{Q_{k}^{(2)}}{\mathcal H} - \frac{c_1}{a^{1/2}} k^2 \mu_{_\xi} \r] \, .
\end{equation}
Eqs. (\ref{eq:SPerTP}, \ref{eq:UPerTP}) are the main results of our
paper, regarding which we would like to stress the following points:
Firstly, in the earlier analyses, the
Mukhanov-Sasaki variable ($Q$) was assumed to satisfy the
differential equation (\ref{eq:SPerTP}) with the source term
set to zero. We have shown explicitly from gauge-invariant
perturbation theory that, in general, this is not true. The
RHS of (\ref{eq:SPerTP}) vanishes on super-Hubble scales (i. e. $k
\to 0$), where the perturbations can be treated as classical. Hence, as
expected, the trans-Planckian effects are negligible there. Secondly, it is
clear from Eq. (\ref{eq:SPerTP}) that the terms on the RHS will
dominate during the trans-Planckian regime and can have interesting
consequences on the primordial spectrum. Lastly, the perturbations (in
general) are not purely adiabatic, i. e., they contain isocurvature
perturbations. However, these perturbations do not contribute
significantly on super-Hubble scales. Taking the Fourier
transformation of the non-adiabatic part of the pressure perturbation
($\delta p_{\rm nad}$), we have
\begin{equation}
{\cal F}(\delta p_{\rm nad}) = - 4 d_1 \frac{k^2}{a^3} Q_k^{(2)} \, .
\end{equation}
\indent From the above expression, it is straightforward to see that, on
super-horizon scales, the entropic perturbations vanish. Following
Refs. \cite{Wands-Mali:2000,Malik-Wand:2002}, we can assume that, on
large scales, the total curvature perturbation $\zeta$ is
conserved. As mentioned earlier, in the FRW background {\it only} the
canonical scalar field contributes to the stress-tensor. Following
Ref. \cite{Malik-Wand:2004}, it is possible to show that, on large
scales, {\it only} the curvature perturbation associated with $\delta\varphi$
contributes to the total curvature perturbation. Hence, it is
sufficient to calculate the power-spectrum associated with the
scalar-field perturbation ($\delta\varphi$). This will be discussed in
Sec. (\ref{sec:Pow-Spe}).
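To fix intuition on the scales involved, the $k$-dependent part of the effective frequency appearing in Eq. (\ref{eq:SPerTP}), $\omega^2 = k^2 + 2 b_{_{11}} k^4/a^2$, can be sketched numerically; the crossover wavenumber where the quartic term overtakes the quadratic one marks the onset of the trans-Planckian regime, $2 b_{_{11}} [k/a]^2 \gg 1$. The value of $b_{11}$ below is assumed purely for illustration.

```python
import numpy as np

# k-dependent part of the effective frequency in the mu_S equation:
#   omega^2(k) = k^2 + 2 b_11 k^4 / a^2
# The quartic (trans-Planckian) piece overtakes the quadratic one at the
# physical wavenumber k/a = 1/sqrt(2 b_11).
b11 = 1.0e-6        # assumed value (Planck units), for illustration only
a = 1.0             # reference scale factor

k_cross = a / np.sqrt(2.0 * b11)

def ratio(k):
    """Quartic-to-quadratic term ratio, 2 b_11 (k/a)^2."""
    return 2.0 * b11 * (k / a) ** 2

print(k_cross, ratio(k_cross))   # the ratio equals 1 at the crossover
```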
\section{Classical Analysis}
\label{sec:clas-ana}
In this section, we combine
Eqs. (\ref{eq:MDR-Phi},\ref{eq:per-vpTP},\ref{eq:per-uTP}) to obtain a
single differential equation of $\Phi$. We show that the resultant
differential equation of $\Phi$ is different from that of the standard
canonical scalar field driven inflation. More importantly, the
differential equation of $\Phi$ in our model is fourth order while in
the standard canonical scalar field it is second order. We obtain the
solutions of $\Phi$ in three regimes --- trans-Planckian (I), linear
(II) and super-Hubble (III) --- for the power-law inflation. In the
following section, we obtain the power-spectrum of the scalar
perturbations, in a particular limit, for the power-law inflation.
\subsection{The Power law inflation.}
Let us decompose the fields in their Fourier modes:
\begin{eqnarray}
\Phi(\eta,\vec{x}) &=& \Phi_k(\eta) e^{i \vec{k}.\vec{x}} \, , \,
\delta \varphi(\eta,\vec{x}) = \delta \varphi_k(\eta) e^{i
\vec{k}.\vec{x}} \nonumber\\
\xi(\eta,\vec{x}) &=& \xi_k(\eta) e^{i \vec{k}.\vec{x}} \, .
\end{eqnarray}
We have dropped the superscript indicating that the quantities are
gauge invariant. Combining the equations, we end up with a fourth
order differential equation in the Bardeen potential. To keep the
presentation light, the derivation of the equation is given in
Appendix C. This equation, for the power-law inflation
(\ref{eq:polaw},\ref{eq:PLaw-para}), reads
{\small
\begin{eqnarray}
\Phi^{(4)}_k &+& \frac{\Gamma_1}{\eta} \Phi^{(3)}_k \\
&+& \left[ \frac{\Gamma_2}{\eta^2} + \left( 1 + \frac{\Gamma_3}{(-\eta)^{(5+3\beta)}}
\right) k^2 + \frac{\Gamma_4}{(-\eta)^{2(1+\beta)}} k^4 \right] \Phi^{(2)}_k \nn\\
&+& \left[ \frac{\Gamma_5}{\eta^3} + \left( \frac{\Gamma_6}{(-\eta)^{3(2+\beta)}}+
\frac{\Gamma_7}{\eta} \right) k^2 +
\frac{\Gamma_8}{(-\eta)^{3+2 \beta}} k^4 \right] \Phi^{'}_k \nn \\
&+& \left[ \frac{\Gamma_{9}}{\eta^4} + \frac{\Gamma_{10}}{\eta^2} k^2 +
\left( \frac{\Gamma_{11}}{(-\eta)^{2 \beta + 4}} +
\frac{\Gamma_{12}}{(-\eta)^{5 + 3 \beta }} \right) k^4 \right] \Phi_k = 0 \, .
\nn
\end{eqnarray}
}
\noindent
The constants $\Gamma_i$ depend on the background and the fundamental
constants in the following way
\begin{eqnarray}
\Gamma_1 &=& 4 (2 + \beta) \, , \quad
\Gamma_2 = 12 + 13 \beta + 3 \beta^2 - 6 \sigma_o^2 \, , \nn\\
\Gamma_3 &=& - \frac{3}{2} \frac{(-1)^{3 \beta} \eta_o^{3+3 \beta}
\sigma_o^2 }{a_o^3} \, \frac{b_{11}}{d_1} M_{pl}^2 \, , \nn\\
\Gamma_4 &=& 2 \, \frac{(-1)^{2 \beta} \eta_o^{2+2 \beta} }{a_o^2} b_{11} \, ,
\quad
\Gamma_5 = - 18 (1+\beta) \sigma_o^2 \, , \nn\\
\Gamma_6 &=& 3 \frac{(-1)^{3 \beta} \eta_o^{3+3 \beta}
\sigma_o^2 }{a_o^3} \, \frac{b_{11}}{d_1} M_{pl}^2 \, , \quad
\Gamma_7 = 6 + 4 \beta \, , \nn\\
\Gamma_8 &=& - 4 (2 + \beta) \, \frac{(-1)^{2 \beta} \eta_o^{2+2 \beta} }{a_o^2}
b_{11} \, , \nn\\
\Gamma_9 &=& 6 a_o^2 v_o \eta_o^2 \frac{q^2(-24+q^2)}{(-6+q^2)^2} M_{pl}^2
\, , \quad \Gamma_{10} = 4 + 7 \beta + 3 \beta^2 \, , \nn\\
\Gamma_{11} &=& 2(1+\beta) (2 + \beta) \, \frac{(-1)^{2 \beta} \eta_o^{2+2 \beta} }{a_o^2}
\, b_{11} \, , \nn\\
\Gamma_{12} &=& \frac{3}{2} \frac{(-1)^{3 \beta} \eta_o^{3+3 \beta}
\sigma_o^2 }{a_o^3} \, \frac{b_{11}}{d_1} (2 d_1 - M_{pl}^2) \quad .
\end{eqnarray}
\subsection{The zeroth order approximation}
We can find approximate solutions for the power law inflation in the
following way. Introducing the quantity $\epsilon$ by
\begin{equation}
\beta = -2 - \epsilon \, ,
\end{equation}
($\epsilon$ vanishes on the de Sitter space) we can make Taylor
expansions of the coefficients $\Gamma_i$ and postulate the same for
the Bardeen potential
\begin{equation}
\Phi_k(\eta) = \sum_{m=0} \epsilon^m \Phi_{k,m}(\eta) \quad .
\end{equation}
The outcome is the following. Each component $\Phi_{k,m}(\eta)$ obeys
a differential equation which is inhomogeneous, the source term
depending on the preceding components $\Phi_{k,0}(\eta), \cdots,
\Phi_{k,m-1}(\eta)$.
As discussed in Ref. \cite{Martin-Bran:2003}, we have to be careful in
the limit of $\epsilon \rightarrow 0$ in the sense that it does not
give the perturbation corresponding to the de Sitter space. This is
related to the fact that on this background the inflaton field is
constant, so that the quantities $X_i,Y_i,Z_i$ are undetermined. To
work out what happens for the de Sitter space, one would have to
consider the earlier equations, in which divisions by the derivatives of
the inflaton had not yet been performed.
The zeroth order contribution is a solution of the equation
\begin{eqnarray}
\label{eq:zeroth-ord}
\Phi^{(4)}_{k,0}(\eta) & + &
\left( k^2 - \frac{2}{\eta^2} + \gamma^2 k^4 \eta^2 \right)
\Phi^{(2)}_{k,0}(\eta)
- 2 \frac{k^2}{\eta} \Phi^{'}_{k,0}(\eta) \nn\\
&+& 2 \frac{k^2}{\eta^2} \Phi_{k,0}(\eta) = 0 \quad .
\end{eqnarray}
We now introduce the dimensionless variable $x$ defined by $x= k \eta$
and the function $f(x)$ given by
\begin{eqnarray}
\Phi_{k,0}(\eta) = f(x) \quad .
\end{eqnarray}
The fourth order equation takes the form
{\small
\begin{equation}
f^{(4)}(x) + \left( 1 - \frac{2}{x^2} + \gamma^2 x^2 \right)
f''(x) - \frac{2}{x} f'(x) + \frac{2}{x^2} f(x) = 0 \, ,
\label{eq:fxde}
\end{equation}
}
where
\begin{equation}
\gamma = \sqrt{2 b_{11} v_0} \, M_{pl} \, .
\end{equation}
As mentioned earlier, we obtain approximate solutions to the above
differential equation in three different regions.
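Equation (\ref{eq:fxde}) is also easy to treat numerically. The sketch below (with an assumed illustrative value of $\gamma$) checks that $f(x) = x$ is an exact solution for any $\gamma$, a fact used in the first region below, and confirms it by integrating with \texttt{scipy}.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 0.1   # illustrative value of gamma = sqrt(2 b_11 v_0) M_pl

def rhs(x, y):
    # y = (f, f', f'', f'''); the fourth-order ODE as a first-order system
    f, f1, f2, f3 = y
    f4 = -(1 - 2/x**2 + gamma**2 * x**2)*f2 + (2/x)*f1 - (2/x**2)*f
    return [f1, f2, f3, f4]

# f(x) = x makes every term cancel, so the residual f'''' is zero ...
residual = rhs(5.0, [5.0, 1.0, 0.0, 0.0])[3]

# ... and a numerical integration started on f = x should stay on it.
sol = solve_ivp(rhs, (1.0, 20.0), [1.0, 1.0, 0.0, 0.0],
                rtol=1e-10, atol=1e-12)
drift = abs(sol.y[0][-1] - 20.0)
print(residual, drift)
```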
\subsubsection{The first region}
In this region, the term $\gamma^2 x^2 $ dominates. In other words,
the trans-Planckian effects are dominant and we are dealing with large
values of $x$. Using the fact that $f(x)=x$ is a solution of the full
equation, one introduces the function $h(x)$ by the relation
\begin{equation} f(x)=
x \int^x h(\zeta) d\zeta
\end{equation}
and obtains the third order equation
\begin{equation}
2 (-1+\gamma^2 x^2) h(x) + \gamma^2 x^3 h{'}(x) + 4 h{''}(x) + x
h^{'''}(x) = 0
\end{equation}
We can get rid of the second derivative by the change of function
\begin{eqnarray} h(x) &=&
\frac{1}{x^{4/3}} R(x) \quad ; \end{eqnarray}
using the fact that we are in the region given by large values of $x$,
one ends up with the differential equation
\begin{eqnarray} R^{'''}(x) + \gamma^2 x^2
R^{'}(x) + \frac{2}{3} \gamma^2 x R(x) = 0 \end{eqnarray}
whose solution is a combination of generalized hypergeometric
functions multiplied by polynomials:
\begin{eqnarray} R(x) &=& C^{'}_1 F_{pq}
\left[ \left\{ \frac{1}{6} \right\} , \left\{ \frac{1}{2} ,
\frac{3}{4} \right\} , - \frac{1}{16} \gamma^2 x^4 \right] \nn\\ &+&
C^{'}_2 \sqrt{\gamma} x F_{pq} \left[ \left\{ \frac{5}{12} \right\} ,
\left\{ \frac{3}{4} , \frac{5}{4} \right\} , - \frac{1}{16} \gamma^2
x^4 \right] \nn\\ &+& C^{'}_3 \gamma x^2 F_{pq} \left[ \left\{
\frac{2}{3} \right\} , \left\{ \frac{5}{4} , \frac{3}{2} \right\} , -
\frac{1}{16} \gamma^2 x^4 \right] \, ,
\end{eqnarray}
where $C'_i$'s are constants to be determined and $F_{pq}$ are the
generalized hypergeometric functions. Thus, the solution to the differential
equation (\ref{eq:fxde}) is
\begin{eqnarray} f(x) &=& C_1(k) \, x \nn\\ &+& C_2(k) \, x^{2/3}
F_{pq} \left[ \left\{ - \frac{1}{12} , \frac{1}{6} \right\} , \left\{
\frac{1}{2} , \frac{3}{4} , \frac{11}{12} \right\} , - \frac{1}{16}
\gamma^2 x^4 \right] \nn\\ &+& C_3(k) \, x^{5/3} F_{pq} \left[ \left\{
\frac{1}{6} , \frac{5}{12} \right\} , \left\{ \frac{3}{4} ,
\frac{7}{6} , \frac{5}{4} \right\} , - \frac{1}{16} \gamma^2 x^4
\right] \nn\\ &+& C_4(k) \, x^{8/3} F_{pq} \left[ \left\{ \frac{5}{12}
, \frac{2}{3} \right\} , \left\{ \frac{5}{4} , \frac{17}{12} ,
\frac{3}{2} \right\} , - \frac{1}{16} \gamma^2 x^4 \right] \, \nn \\
\end{eqnarray}
where $C_i$'s are related to $C'_i$'s. These generalized
hypergeometric functions have a few properties which are worth
mentioning. First, they are highly oscillating. For example, the
function
\begin{eqnarray}
F_{pq} \left[ \left\{ - \frac{1}{12} , \frac{1}{6} \right\} ,
\left\{ \frac{1}{2} , \frac{3}{4} , \frac{11}{12} \right\} , - \frac{1}{16} \gamma^2 x^4
\right]
\end{eqnarray}
goes from $7.7 \times 10^{32}$ to $1.4 \times 10^{194}$ when $x$ goes from $x=50$ to $x=100$,
fixing $\gamma=1/10$ for illustrative purposes.
Let us now say a few words about these generalized hypergeometric
functions; this will help us to quantify their oscillatory
behavior. They are special cases of Meijer functions which can be
defined by integrals on the complex plane \cite{Mathai-Saxe:1973-bk}:
\begin{equation}
G^{m,n}_{p,q} \left( z \vert\begin{array}{cccc}
a_1 & a_2 & ... & a_p \\
b_1 & b_2 & ... & b_q
\end{array} \right) =
\frac{1}{2 \pi i} \int_C \chi(s) z^{-s} ds
\end{equation}
where
\begin{equation}
\chi(s) = \frac{\Pi_{j=1}^{m} \Gamma(b_j+s) \Pi_{j=1}^{n} \Gamma(1-a_j-s)}
{\Pi_{j=m+1}^{q} \Gamma(1-b_j-s) \Pi_{j=n+1}^{p} \Gamma(a_j+s)}
\end{equation}
and three possibilities are allowed for the contour $C$, according to some
conditions on the parameters $a_i,b_j,m,n,p,q$ \cite{Mathai-Saxe:1973-bk}.
Our solutions correspond to $m=n=0$.
The asymptotic behavior which is relevant here is the following.
For large values of $|z|$ with $- (\nu^\star +1) \pi < \arg z < 0$, the dominant
part is roughly given by
\begin{equation}
G^{m,n}_{p,q} \left( z \vert\begin{array}{cccc}
a_1 & a_2 & ... & a_p \\
b_1 & b_2 & ... & b_q
\end{array} \right)
\sim H_{p q}(z e^{i \pi \mu^\star} ) \quad ,
\end{equation}
where
\begin{eqnarray}
\mu^\star &=& q - m - n \, , \quad
\nu^\star = - p + m + n \, , \nn\\
H_{pq}(z) & = & \exp{\left( (p-q) z^{\frac{1}{q-p}} \right) }
z^{\rho^\star} \, , {\rm and} \nn\\
\rho^\star &=& \frac{1}{q-p}
\left( \sum_{j=1}^q b_j - \sum_{j=1}^{p} a_j + \frac{p-q+1}{2} \right)
\end{eqnarray}
In our case, one has to make the replacements
\begin{equation}
z = - \frac{1}{16} \gamma^2 k^4 \eta^4 \, , \quad q = 3 \, , \quad
p = 2 \, ,
\end{equation}
\end{equation}
so that
\begin{eqnarray}
\Phi_{k,0}(\eta) &=&
C_0(k) \, k \eta + \sum_{i=1}^3 C_i(k) \, (\kappa \eta)^{\sigma_i} \\
&\times & \left( - \frac{1}{16} \gamma^2 k^4 \eta^4 \right)^{\rho^\star_i}
\exp{ \left( \frac{1}{16} \gamma^2 k^4 \eta^4 \exp{(i \pi \mu^\star_i)}
\right) } \, , \nn
\end{eqnarray}
where $\sigma_1 = 2/3, \sigma_2 = 5/3, \sigma_3 = 8/3$. From the
above expression, it is easy to see that the
Bardeen potential $\Phi$ is oscillating in this region.
The choice of the constants $C_i(k)$ corresponds to different choices
of initial conditions and thus, in principle, to different choices of
vacua. We will come back to this later.
\subsubsection{The second region}
In the intermediate region, $1$ dominates over $\gamma^2 x^2$. The
solution in this region is
\begin{eqnarray}
f(x) &=& D_1(k) \, x + D_2(k) \, x^2 \nn\\
&+& D_{3}(k) \, \left[ e^{-i x} (-1+ i x ) - x^2 Ei(-ix) \r] \nn\\
&+& D_{4}(k) \, \left[ e^{i x} (i - x ) + i x^2 Ei(ix) \r] \, ,
\end{eqnarray}
where $Ei(x)$ refers to the exponential integral. Using the asymptotic
behavior of the exponential integral
(cf. Ref. \cite{Abramowitz-Steg:1964-bk}, p. 231), we get
\begin{eqnarray}
f(x) &=& D_1(k) \, x + D_2(k) \, x^2 \nn\\
&+& D_{3}(k) \, \left[- e^{-i x} + 2 x \sin(x) \r] \nn\\
&+& D_{4}(k) \, \left[ i \, e^{i x} - 2 x \sin(x) \r] \, .
\end{eqnarray}
As we can see, the Bardeen potential is a sum of plane-waves.
\subsubsection{The third region}
When the term $-2/x^2$ dominates in the coefficient of the second
derivative, the solution can be found and is given by
\begin{eqnarray}
f(x) &=& G_1(k) + G_4(k) \, x + G_3(k) \, x^4 + G_2(k) \, x \ln{x} \, .~~
\end{eqnarray}
From the above expression, we see that on super-Hubble scales the
scalar perturbation has a constant term, as in
canonical scalar-field inflation.
To finish this section, let us remark that in the non trans-Planckian
region, i.e. when
\[ 1 - \frac{2}{x^2} \gg \gamma^2 x^2 \, , \]
the solution to the differential equation (\ref{eq:fxde}) can be obtained
and is given by
\begin{eqnarray}
f(x) &=& H_1(k) \, x + \frac{1}{2}\, H_2(k) \, (2- 2 x+x^2) \nonumber\\
&+& \frac{1}{2} \, H_3(k) \, x \left(- \frac{e^{-i x}}{x} - i Ei(- i x) \right) \nonumber\\
&+& \frac{1}{2} \, H_4(k) \, x \left( \frac{e^{i x}}{x} -
i Ei(i x) \right) \quad ;
\end{eqnarray}
this approximation covers regions II and III simultaneously.
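The four branches quoted in this subsection can be verified symbolically by substituting them into Eq. (\ref{eq:fxde}) with the $\gamma^2 x^2$ term dropped; a short \texttt{sympy} sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def residual(f):
    # LHS of the fourth-order equation with gamma = 0
    expr = (sp.diff(f, x, 4) + (1 - 2/x**2)*sp.diff(f, x, 2)
            - 2*sp.diff(f, x)/x + 2*f/x**2)
    return sp.simplify(sp.expand(expr))

branches = [
    x,                                                  # H_1 branch
    (2 - 2*x + x**2)/2,                                 # H_2 branch
    x*(-sp.exp(-sp.I*x)/x - sp.I*sp.Ei(-sp.I*x))/2,     # H_3 branch
    x*( sp.exp( sp.I*x)/x - sp.I*sp.Ei( sp.I*x))/2,     # H_4 branch
]
residuals = [residual(f) for f in branches]
print(residuals)   # each residual should simplify to zero
```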
\subsection{The first order approximation}
The first order contribution obeys the equation
\begin{eqnarray}
& & \Phi_{k,1}^{(4)}(\eta)
+ \left( k^2 - \frac{2}{\eta^2} + \gamma^2 k^4 \eta^2 \right)
\Phi_{k,1}^{(2)}(\eta) - 2 \frac{k^2}{\eta} \Phi_{k,1}^{'}(\eta) \nn\\
& & \qquad \qquad +~2 \frac{k^2}{\eta^2} \Phi_{k,1}(\eta) = S_k(\eta) \, ,
\end{eqnarray}
where
\begin{eqnarray}
S_k(\eta) &=& - \frac{4}{\eta} \Phi_{k,0}^{(3)}(\eta)
+ \left[ - \frac{5}{\eta^2} +
\frac{b_{_{11}} M_{pl}^5 v_o^{3/2}}{d_1} k^2 \eta \right. \nn\\
&- & \left. \frac{10}{3} b_{_{11}} M_{pl}^2 v_o k^4 \eta^2
+ 4 b_{11} M_{pl}^2 v_o \eta^2 \log{ \left(\frac{\eta}{\eta_o}
\right)}\right] \Phi_{k,0}^{(2)}(\eta) \nn\\
&+ & \left[\frac{12}{\eta^3} - 4 \frac{k^2}{\eta} -
4 b_{11} M_{pl}^2 v_o k^4\eta \right] \Phi_{k,0}^{'}(\eta) \nn\\
&+& \left[ - \frac{24}{\eta^4} + \frac{5 k^2}{\eta^2}
+ 2 b_{11} M_{pl}^2 v_o \right. \nn\\
& & \left. + \frac{b_{11} M_{pl}^3 v_o^{3/2} (- 2 d_1 +
M_{pl}^2 ) }{d_1} \eta \right] \Phi_{k,0}{(\eta)} \, .\nn
\end{eqnarray}
This equation is exactly the one obeyed by the zeroth order
contribution, except for the source term, which is known since we
obtained the approximations of the zeroth order in the three
regions. Let us specialize to one of the regions and call
$Y_1(\eta),Y_2(\eta),Y_3(\eta), Y_4(\eta)$ the four different
solutions of the homogeneous equation given in Eq.~(\ref{eq:zeroth-ord}).
Since the equation is linear and the complete solution of the
homogeneous equation is known ($ \Phi=\sum_{a=1}^4 L_a Y_a $ with $L_a$
constants), we can solve it using the method of variation of
constants. One can show that this can be achieved by the following
system of equations
\begin{eqnarray}
\sum_{a=1}^4 L^{'}_a Y_a = 0 &,&
\sum_{a=1}^4 L^{'}_a Y^{'}_a = 0 \, , \nn \\
\sum_{a=1}^4 L^{'}_a Y^{(2)}_a = 0
&,& \sum_{a=1}^4 L^{'}_a Y^{(3)}_a = S(\eta) \, .
\end{eqnarray}
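This is the standard Wronskian system of the method. As a toy illustration of how it is used, take $y'''' = S(x)$ with the polynomial basis $1, x, x^2, x^3$ standing in for the $Y_a(\eta)$; all values below are illustrative and not taken from the text.

```python
import numpy as np

def Lprime(x, S):
    # Wronskian system: sum_a L'_a Y_a^{(j)} = 0 for j = 0, 1, 2 and
    # sum_a L'_a Y_a^{(3)} = S, with Y_a = 1, x, x^2, x^3.
    W = np.array([[1.0, x,   x**2,   x**3],
                  [0.0, 1.0, 2.0*x,  3.0*x**2],
                  [0.0, 0.0, 2.0,    6.0*x],
                  [0.0, 0.0, 0.0,    6.0]])
    return np.linalg.solve(W, np.array([0.0, 0.0, 0.0, S]))

# For S(x) = 24 the particular solution with zero initial data is x^4.
xs = np.linspace(0.0, 1.0, 2001)
L = np.zeros(4)
for a, b in zip(xs[:-1], xs[1:]):
    L += (b - a) * Lprime(0.5*(a + b), 24.0)   # midpoint quadrature of L'_a

x = xs[-1]
y_p = L @ np.array([1.0, x, x**2, x**3])
print(y_p)   # close to 1.0, the value of x^4 at x = 1
```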
Let us concentrate on the second and third region for example
(the non trans-Planckian zone). One has
\begin{eqnarray}
\Phi_{k,1}(\zeta) &=& k \zeta \int_0^{\zeta} d\eta
\left[ i \frac{(2 i+2 k \eta - i k^2 \eta^2)}{2 k^4 \eta}
Ei(i k \eta) \right. \nn\\
&+& \left. \frac{(-2 i + 2 k \eta + i k^2 \eta^2)}{2 k^4 \eta} Ei(-i k
\eta) \right] S_k(\eta) \nn\\
&+&\left[ 1- k \zeta +\frac{1}{2} k^2 \zeta^2 \right] \int_0^\zeta
d\eta \, 2 \, \frac{1}{k^4 \eta} S_k(\eta) \nn\\
&+& \left[ - \frac{1}{2} i e^{-i k \zeta} + \frac{1}{2} k \zeta Ei(-i k
\zeta) \right] \nn\\
& & \times \int_0^\zeta d\eta \, i \, \frac{(-2+2 i k \eta + k^2
\eta^2)}{k^4 \eta} e^{i k \eta} S_k(\eta) \nn\\
&+& \left( \frac{1}{4} i e^{i k \zeta} - \frac{1}{4} i k \zeta Ei(i k
\zeta) \right) \\
& & \times \int_0^\zeta d\eta \, 2 \frac{(-2-2 i k \eta + k^2
\eta^2)}{k^4 \eta} e^{-i k \eta} S_k(\eta) \, . \nn
\end{eqnarray}
A similar treatment can be applied to the trans-Planckian region but
the formulas are too lengthy and will not be recorded here.
Using the analysis discussed in this section, the power spectrum of
the perturbations can be obtained up to a $k$-dependent constant
factor. In order to obtain the exact power spectrum, we need to
quantize the theory and fix the initial state of the field
\cite{Mukhanov-Feld:1992}. In the following section, we obtain the exact
power spectrum of the perturbations in a particular limit.
\section{Power-spectrum of the perturbations -- Quantum Analysis}
\label{sec:Pow-Spe}
In this section, we calculate the power-spectrum corresponding to
$\mu_{_S}$ during the power-law inflation using the following
approach: (i) We assume that the quantum field $\mu_{_S}$ is coupled
to an external, classical source field ${\cal S}_k(\eta)$ which is
determined by solving the coupled differential equations
(\ref{eq:MDR-Phi}, \ref{eq:per-uTP}). (ii) We solve the equation of
motion of $\mu_{_S}$ in three regions -- Trans-Planckian (I), linear
(II) and super-Hubble (III) -- separately \cite{Martin-Bran:2000}. We
further assume that ${\cal S}_k(\eta)$ will contribute significantly
in the trans-Planckian region while it can be neglected in the linear
and super-Hubble region. (iii) The power-spectrum at the super-Hubble
scales is determined by performing the matching of the modes and its
derivatives at the times of transition between regions I and II
[$(-\eta_{\rm Pl})^{1 + \beta} \equiv (\omega k^2)^{-1/2}$] and regions II and III
[$\eta_{_H} \equiv (1 + \beta)/k$]. We assume that the quantum field
$\mu_{_S}$ is in a minimum energy state at $\eta = \eta_i$
\cite{Brown-Dutt:1978}.
Region (I) corresponds to the limit where the non-linearities of the
dispersion relation play a dominant role, i. e. $2b_{11} [k/a(\eta)]^2
\gg 1$ and $k \eta \gg 1 $. Region (II) corresponds to the limit
where the non-linearities of the modes are negligible i. e. $\omega
\simeq k$ and $k \eta \gg 1 $. Region (III) corresponds to the limit
where $k \eta \ll 1$. In the three regions, the equation of motion of
$\mu_{_S}$ (\ref{eq:SPerTP}) reduces to:
\begin{subequations}
\label{eq:SPerTP-Reg}
\begin{eqnarray}
\label{eq:SPerTP-Reg1}
{\mu_{_S}^{(I)}}^{''} + \omega^2(\eta) \mu_{_S}^{(I)} &\simeq&
- \frac{2 d_1}{M_{_{\rm Pl}}^2} k^2 {\cal S}_{k}(\eta) \, , \\
\label{eq:SPerTP-Reg2}
{\mu_{_S}^{(II)}}^{''} + k^2 \mu_{_S}^{(II)} &\simeq& 0 \, , \\
\label{eq:SPerTP-Reg3}
{\mu_{_S}^{(III)}}^{''} -
\frac{\beta (\beta + 1)}{\eta^2} \mu_{_S}^{(III)} &\simeq& 0 \, ,
\end{eqnarray}
\end{subequations}
where
\begin{equation}
\omega(\eta) = \omega_0 \, \frac{k^2}{(-\eta)^{(1 + \beta)}} \, ; \,
\omega_0 = (2 b_{_{11}})^{1/2} (-\eta_0)^{(1 + \beta)} \, ,
\end{equation}
and ${\cal S}_k$ is given by Eq. (\ref{eq:sou-fin}). The general
solution to the differential equation (\ref{eq:SPerTP-Reg1}) is given
by
\begin{eqnarray}
\mu^{\rm(I)}_{_S}(\eta ) & = & A_1(k)\, (-\eta)^{1/2}
H_{\nu}^{(1)}[\alpha(\eta)] \nn \\
&+& A_2(k) \, (-\eta)^{1/2} H_{\nu}^{(2)}[\alpha(\eta)] + \mu_{_P}(\eta)
\label{eq:Sol-Reg1}
\end{eqnarray}
where $\mu_{_P}(\eta)$ is the particular solution to the inhomogeneous
part of the differential equation and is given by
(cf. Ref. \cite{Morse-Fesh:1953-bk}, p. 529)
\begin{widetext}
{\small \begin{equation}
\label{eq:par-sol}
\mu_{_P}(\eta) = \frac{i \pi}{2} \frac{d_1 k^2}{\beta M_{_{\rm Pl}}^2}
(-\eta)^{1/2} \left[ H_{\nu}^{(1)}[\alpha(\eta)]
\int_{\eta_l}^{\eta} (-s)^{1/2} H_{\nu}^{(2)}[\alpha(s)] {\cal S}_k(s) \, ds
+ H_{\nu}^{(2)}[\alpha(\eta)]
\int_{\eta_l}^{\eta} (-s)^{1/2} H_{\nu}^{(1)}[\alpha(s)] {\cal S}_k(s) \, ds
\r]
\end{equation}
}
\end{widetext}
\begin{equation}
\label{eq:def-nu-a}
\! {\rm and} \quad
\nu = - \frac{1}{2 \beta};~~~
\alpha(\eta) = \alpha_0 (-\eta)^{-\beta};~~~
\alpha_0 = \frac{\omega_0 k^2}{-\beta}
\, ,
\end{equation}
$\eta_l$ ($<\eta_i$) is the epoch in which the integrals in
(\ref{eq:par-sol}) vanish. The quantities $H_{\nu}^{(1)}$ and
$H_{\nu}^{(2)}$ in the above solution are the Hankel functions of the
first and the second kind (of order $\nu$), respectively, and the
$k$-dependent constants $A_1(k)$ and $A_2(k)$ are to be fixed by the
initial conditions for the modes at $\eta _{\rm i}$. Unlike the
canonical scalar field inflation, where one assumes that the field is
in a Bunch-Davies vacuum at $\eta_i$, it is not possible to assume
such an initial condition due to the non-linearities of the modes. As
mentioned earlier, we assume that the field is in the minimum energy
vacuum state at $\eta_i$, i. e.,
\begin{equation}
\label{eq:Ini-Sta}
\mu_{_S}(\eta _{\rm i}) = \frac{1}{\sqrt{2 \omega(\eta_{\rm i})}}~;~
\mu_{_S}'(\eta _{\rm i})= \pm i \sqrt{\frac{\omega(\eta_{\rm i})}{2}}.
\end{equation}
We thus get
\begin{subequations}
\begin{eqnarray}
\label{Ai1sol}
A_1(k) &=& \frac{i \pi \alpha(\eta_i)}{4} (-\eta _{\rm i})^{-1/2}
{\tilde \mu_{_S}(\eta_i)} H_{\nu - 1}^{(2)}[\alpha(\eta_i)] \nn \\
& \times& \biggr[1 + \frac{(-\eta _{\rm i})^{\beta + 1}}{(-\alpha_0 \beta)}
\frac{{\tilde \mu_{_S}}'(\eta _{\rm i})}{{\tilde \mu_{_S}}(\eta _{\rm i})}
\frac{H_{\nu}^{(2)}[\alpha(\eta_i)]}{H_{\nu - 1}^{(2)}[\alpha(\eta_i)]}\biggr], \\
\label{Ai2sol}
A_2(k) &=& - \frac{i \pi \alpha(\eta_i)}{4} (-\eta _{\rm i})^{-1/2}
{\tilde \mu_{_S}(\eta_i)} H_{\nu - 1}^{(1)}[\alpha(\eta_i)] \nn \\
& \times& \biggr[1 + \frac{(-\eta _{\rm i})^{\beta + 1}}{(-\alpha_0 \beta)}
\frac{{\tilde \mu_{_S}}'(\eta _{\rm i})}{{\tilde \mu_{_S}}(\eta _{\rm i})}
\frac{H_{\nu}^{(1)}[\alpha(\eta_i)]}{H_{\nu - 1}^{(1)}[\alpha(\eta_i)]}\biggr] \, ,
\end{eqnarray}
\end{subequations}
where ${\tilde \mu_{_S}}(\eta) = \mu_{_S}^{(I)}(\eta) -
\mu_{_P}(\eta)$. From the above expressions it is evident that the
particular solution to the inhomogeneous differential equation
effectively changes the initial state of the field. Hence, ${\tilde
\mu_{_S}}(\eta_{\rm i})$ can be treated as the new effective initial
state of the field. It is worth mentioning that the particular
solution has not been fixed and is arbitrary.
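The homogeneous part of the Region I solution (\ref{eq:Sol-Reg1}) can be checked numerically: the sketch below verifies, for assumed illustrative parameter values, that $(-\eta)^{1/2}\,H_{\nu}^{(1)}[\alpha(\eta)]$ satisfies $\mu'' + \omega^2(\eta)\,\mu = 0$ to finite-difference accuracy.

```python
import numpy as np
from scipy.special import hankel1

# Illustrative parameter choices (not taken from the text)
beta = -2.1                      # power-law index, close to de Sitter
k, omega0 = 1.0, 1.0
nu = -1.0/(2.0*beta)             # order of the Hankel functions
alpha0 = omega0*k**2/(-beta)

def mu(t):
    # t = -eta > 0; first Hankel branch of the homogeneous solution
    return np.sqrt(t)*hankel1(nu, alpha0*t**(-beta))

def omega(t):
    return omega0*k**2*t**(-(1.0 + beta))

# central finite-difference check of mu'' + omega^2 mu = 0 at t = 1
# (d^2/d eta^2 = d^2/dt^2 for t = -eta)
t, h = 1.0, 1e-4
mu2 = (mu(t + h) - 2.0*mu(t) + mu(t - h))/h**2
res = abs(mu2 + omega(t)**2*mu(t))/abs(omega(t)**2*mu(t))
print(res)   # zero up to finite-difference error
```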
In Region II, the solution to the differential equation
(\ref{eq:SPerTP-Reg2}) is (Minkowski) plane waves, i. e.,
\begin{equation}
\label{eq:Sol-Reg2}
\mu_{_S}^{\rm (II)}(\eta ) \, = \, B_1(k) \exp[-i k \eta] +
B_2(k) \exp[i k \eta] \, ,
\end{equation}
where $B_1(k), B_2(k)$ are $k-$dependent constants and are obtained by
the junction conditions of the mode functions $\mu_{_S}^{\rm (I)},
\mu_{_S}^{\rm (II)}$ and their derivatives at $\eta = \eta_{\rm Pl}$. This
gives,
\begin{widetext}
{\small
\begin{subequations}
\begin{eqnarray}
\label{B1}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\frac{\exp(-i k \eta _{\rm Pl})}{(-\eta_{\rm Pl})^{1/2}} B_1 &=&
\frac{A_1}{2} H_{\nu}^{(1)}[\alpha(\eta_{\rm Pl})]
\left[1 + \frac{i \beta \alpha_0}{k (-\eta_{\rm Pl})^{\beta + 1}}
\frac{H_{\nu - 1}^{(1)}[\alpha(\eta_{\rm Pl})]}{H_{\nu}^{(1)}[\alpha(\eta_{\rm Pl})]}
\r]
+ \frac{A_2}{2} H_{\nu}^{(2)}[\alpha(\eta_{\rm Pl})]
\left[1 + \frac{i \beta \alpha_0}{k (-\eta_{\rm Pl})^{\beta + 1}}
\frac{H_{\nu - 1}^{(2)}[\alpha(\eta_{\rm Pl})]}{H_{\nu}^{(2)}[\alpha(\eta_{\rm Pl})]}
\r] \, , ~~\\
\label{B2}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\frac{\exp(i k \eta _{\rm Pl})}{(-\eta_{\rm Pl})^{1/2}} B_2 &=&
\frac{A_1}{2} H_{\nu}^{(1)}[\alpha(\eta_{\rm Pl})]
\left[1 - \frac{i \beta \alpha_0}{k (-\eta_{\rm Pl})^{\beta + 1}}
\frac{H_{\nu - 1}^{(1)}[\alpha(\eta_{\rm Pl})]}{H_{\nu}^{(1)}[\alpha(\eta_{\rm Pl})]}
\r]
+ \frac{A_2}{2} H_{\nu}^{(2)}[\alpha(\eta_{\rm Pl})]
\left[1 - \frac{i \beta \alpha_0}{k (-\eta_{\rm Pl})^{\beta + 1}}
\frac{H_{\nu - 1}^{(2)}[\alpha(\eta_{\rm Pl})]}{H_{\nu}^{(2)}[\alpha(\eta_{\rm Pl})]}
\r] \, . ~~
\end{eqnarray}
\end{subequations}
}
\end{widetext}
In region III, the solution is
\begin{equation}
\mu_{_S}^{\rm (III)}(\eta ) \, = \, C(k) \, a(\eta ) \, ,
\end{equation}
where $C(k)$ is a constant (not to be confused with the constants used
in the previous section) whose modulus square gives the power spectrum
of the density perturbations and is determined by performing the
matching of the modes $\mu_{_S}^{\rm (II)}, \mu_{_S}^{\rm (III)} $ at
$\eta_{_H} \equiv (1 + \beta)/k$. We thus get,
{\small
\begin{equation}
\!\!\!\! C(k) = \left[\frac{\eta_{_0} k}{1 + \beta}\r]^{1 + \beta} \!\!\!
\left[B_1(k) \exp(-i k \eta_{_H}) + B_2(k) \exp(i k \eta_{_H})\r] \, .
\end{equation}
}
\noindent The spectrum of the perturbations (\ref{eq:pS0}) reduces to
\begin{equation}
\left[k^3\; {\cal P}_{S}(k)\right]
=\left(\frac{1}{4 \pi^2 M_{_{\rm Pl}}^2} \frac{(\beta + 1)}{\beta}\r) \, k^3
\left|C(k)\r|^2\, .
\label{eq:pS}
\end{equation}
We are interested in the leading order behavior of the primordial
power-spectrum and the possible modifications to the primordial
spectrum due to the trans-Planckian effects. In order to do that, we
need to obtain the leading order behavior of the constants $A_1, A_2,
B_1$ and $B_2$. Using the fact that $k \eta \gg 1$ and the asymptotic
behavior of the Hankel functions,
viz. (cf.~Ref.~\cite{Abramowitz-Steg:1964-bk}, p.~364)
\begin{eqnarray}
\lim_{z \to \infty} H^{(1/2)}_{\nu}(z)\longrightarrow
\left({\frac{2}{\pi z}}\r)^{1/2}\, {\rm e}^{\pm i\left[z-(\pi \nu /2)-(\pi /4)\r]},
\end{eqnarray}
we get
\begin{subequations}
\begin{eqnarray}
\label{A1approxJCr}
A_1(k) &\approx & A_0
{\tilde \mu_{_S}(\eta_i)} \exp(- i x_{\rm i})
\left( 1 \mp {\cal I}\r) \, , \\
\label{A2approxJCr}
A_2(k) &\approx & A_0 {\tilde \mu_{_S}(\eta_i)} \exp(i x_{\rm i})
\left( 1 \pm {\cal I}\r) \, ,
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
A_0= \left(\frac{\pi \alpha_0}{8}\r)^{1/2} (-\eta _{\rm i})^{-(\beta + 1)/2}
&;& x_{\rm i} = \alpha(\eta_i) - \frac{\pi \nu}{2} - \frac{\pi}{4} \nn \\
{\cal I}= \frac{1 - \mu_{_P}'(\eta_i)/\mu_{_S}'(\eta_i)}
{1 - \mu_{_P}(\eta_i)/\mu_{_S}(\eta_i)} & & \, .
\end{eqnarray}
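As a numerical aside (not part of the derivation), the large-argument Hankel asymptotics quoted above can be checked directly. The sketch below, using only the Python standard library, builds $J_\nu$ and $Y_\nu$ from their standard integral representations (Abramowitz \& Stegun Eqs. 9.1.22--9.1.23) and compares $H^{(1)}_\nu = J_\nu + i Y_\nu$ against the asymptotic form; the values of $\nu$ and $z$ are illustrative choices, not taken from the paper.

```python
import math, cmath

# Stdlib-only check of the large-z Hankel asymptotics, using the
# integral representations of J_nu and Y_nu (A&S 9.1.22-9.1.23,
# valid for Re z > 0); nu and z below are illustrative.
def bessel_JY(nu, z, n_osc=200001, n_exp=4001, t_max=0.25):
    # Oscillatory integrals over [0, pi], composite trapezoid rule.
    hth = math.pi / (n_osc - 1)
    J = Y = 0.0
    for i in range(n_osc):
        th = i * hth
        w = 0.5 if i in (0, n_osc - 1) else 1.0
        J += w * math.cos(z * math.sin(th) - nu * th)
        Y += w * math.sin(z * math.sin(th) - nu * th)
    J *= hth / math.pi
    Y *= hth / math.pi
    # Exponentially damped tails; integrand ~ exp(-z sinh t) dies fast,
    # so truncating at t_max is harmless for z >> 1.
    ht = t_max / (n_exp - 1)
    TJ = TY = 0.0
    for i in range(n_exp):
        t = i * ht
        w = 0.5 if i in (0, n_exp - 1) else 1.0
        damp = math.exp(-z * math.sinh(t))
        TJ += w * math.exp(-nu * t) * damp
        TY += w * (math.exp(nu * t) + math.exp(-nu * t) * math.cos(nu * math.pi)) * damp
    J -= math.sin(nu * math.pi) * TJ * ht / math.pi
    Y -= TY * ht / math.pi
    return J, Y

nu, z = 1.5, 50.0
J, Y = bessel_JY(nu, z)
h1 = complex(J, Y)  # H^(1)_nu = J_nu + i Y_nu
asym = math.sqrt(2.0 / (math.pi * z)) * cmath.exp(1j * (z - math.pi * nu / 2 - math.pi / 4))
rel_err = abs(h1 - asym) / abs(asym)
print(rel_err)  # next-order correction is O(1/z), i.e. percent level here
```

The residual agrees with the expected $O(1/z)$ size of the first neglected term, $(4\nu^2 - 1)/(8z)$.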
Having obtained $A_1, A_2$ in the limit of $k\eta \gg 1$, our next
step is to evaluate $B_1(k), B_2(k)$ in the same limit. In order to do
that, we need to know the correct matching time $\eta_{\rm
Pl}$. Demanding $\omega^2(\eta_{_{\rm Pl}}) = k^2$ gives $(-\eta_{\rm
Pl})^{1 + \beta} = \omega^{-1/2}/k$. We thus get,
\begin{eqnarray}
\label{B1approxJCr}
B_1 &\approx & A_1 \left(\frac{- 2 \beta}{\pi k}\r)^{1/2} \exp(i x_{\rm Pl}) \\
\label{B2approxJCr}
B_2 &\approx & A_2 \left(\frac{- 2 \beta}{\pi k}\r)^{1/2} \exp(-i x_{\rm Pl})
\end{eqnarray}
where $x_{\rm Pl} = k \eta_{\rm Pl} (\beta + 1)/\beta - \pi \nu/2 - \pi/4$.
Thus, we get,
\begin{eqnarray}
\label{eq:SPSPec-Gen}
\left[k^3\; {\cal P}_{S}(k)\right] &\simeq&
C_0 k^{2(\beta + 2)} \left |1 - \frac{\mu_{_P}(\eta_i)}{\mu_{_S}(\eta_i)}\r |^2 \\
& \times & \left[1 + 2 \cos(x_{_H})
- 2 Im[{\cal I}] \sin(x_{_H})\r] \nn
\end{eqnarray}
where
\begin{eqnarray}
C_0 &=& \left(\frac{1}{16 \pi^2 M_{_{\rm Pl}}^2} \frac{(\beta + 1)}{\beta}\r)
\left(\frac{-\eta_0 \eta_i}{1 + \beta}\r)^{2 (1 + \beta)}~; \nn \\
x_{_H} &=& 2(1 + \beta - x_{_{Pl}} + x_i) \,
\end{eqnarray}
and we have neglected higher order terms like $|{\cal I}|^2$. It is
interesting to note that in the limit of ${\cal S}_k(\eta) \to 0$, the
power-spectrum is the same as that of the standard power-law inflation
spectrum with small oscillations. In this limit, we recover the result
of Refs. \cite{Martin-Bran:2000,Niemeyer-Pare:2001}. In order to
obtain the exact form of the power-spectrum, we need to evaluate
$\mu_{_P}$ which requires the knowledge of ${\cal S}_k(\eta)$.
In the rest of this section, we evaluate the power-spectrum in a
particular limit ($1/c_1 \to 0$). We first obtain the form of ${\cal
S}_k(\eta)$ by solving the system of coupled differential equations
(\ref{eq:MDR-Phi}, \ref{eq:per-uTP}) in two -- sub-Hubble and
super-Hubble -- regimes. As mentioned earlier, the two differential
equations (\ref{eq:MDR-Phi}, \ref{eq:per-uTP}) do not contain higher
order spatial derivatives. Hence, it is sufficient to obtain solutions
in these two regimes. Performing the following transformations
\begin{equation}
u = \frac{a(\eta)}{\overline{\varphi}'(\eta)} \Phi~;~
\xi^{(gi)} = a^{3/2}(\eta) \, {\tilde \xi} \, ,
\end{equation}
and taking the Fourier transform, Eqs. (\ref{eq:MDR-Phi},
\ref{eq:per-uTP}), reduce to
\begin{eqnarray}
\label{eq:CSuk-1}
& & u_{k}'' + \left(c_s^2 k^2 - \frac{\theta''}{\theta} \r) u_k
= - \frac{2 d_1}{M_{_{\rm Pl}}^2} \frac{k^2}{\overline{\varphi}'} {\left(a^{3/2} {\tilde \xi}_k\r)}^{'} \\
\label{eq:CSxi-1}
& & {\tilde \xi}'' + \left[ - \frac{9}{4} \mathcal H^2 + \frac{3}{2} \mathcal H' +
\frac{c_1}{a^2} \frac{\varphi'^{2}}{2 M_{_{\rm Pl}}^2} \frac{k^2}{M_{_{\rm Pl}}^2} \r] {\tilde \xi} \\
&=& a^{1/2} \left[\left(1 + \frac{c_1}{a^2}\frac{k^2}{M_{_{\rm Pl}}^2} \r) \Phi_k'
- 2 \mathcal H \left(1 - \frac{c_1}{2 a^2}\frac{k^2}{M_{_{\rm Pl}}^2} \r) \Phi_k\r] \, .\nn
\end{eqnarray}
In the limit of $1/c_1 \to 0$ (i.e. $d_{1}/M_{_{\rm Pl}}^2 \ll b_{_{11}} M_{_{\rm Pl}}^2$),
the above differential equations can be solved exactly. In this limit,
the above differential equations become:
\begin{eqnarray}
& & u_{k}'' + \left(k^2 - \frac{\theta''}{\theta} \r) u_k = 0 \nn \\
& & \frac{\varphi'^{2}}{2 M_{_{\rm Pl}}^2} \xi_k^{(gi)} = a (\overline{\varphi}' u_k)' \, .
\end{eqnarray}
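The first of these is the standard Mukhanov equation. As a quick stdlib-only sanity check (an illustration, not part of the paper's derivation), assume the usual power-law form $a \propto (-\eta)^{\beta+1}$, so that at $\beta = -2$ (the de Sitter-like case used in Fig. 1) one has $\theta''/\theta = a''/a = 2/\eta^2$ and the exact mode is $u = e^{-ik\eta}\,(1 - i/(k\eta))$; a fixed-step RK4 integration reproduces it from deep inside the Hubble radius out to $|k\eta| \sim 1$.

```python
import cmath

# Illustrative check: for beta = -2 (a ~ 1/(-eta)), the mode equation
# u'' + (k^2 - 2/eta^2) u = 0 has the closed-form solution
# u = exp(-i k eta) (1 - i/(k eta)); RK4 should reproduce it.
k = 1.0

def u_exact(eta):
    return cmath.exp(-1j * k * eta) * (1.0 - 1j / (k * eta))

def up_exact(eta):
    return cmath.exp(-1j * k * eta) * (-1j * k - 1.0 / eta + 1j / (k * eta**2))

def accel(eta, u):
    return -(k * k - 2.0 / eta**2) * u

eta, h, eta_end = -20.0, 1.0e-3, -0.5
u, up = u_exact(eta), up_exact(eta)
while eta < eta_end - 1e-12:
    k1u, k1p = up, accel(eta, u)
    k2u, k2p = up + 0.5*h*k1p, accel(eta + 0.5*h, u + 0.5*h*k1u)
    k3u, k3p = up + 0.5*h*k2p, accel(eta + 0.5*h, u + 0.5*h*k2u)
    k4u, k4p = up + h*k3p, accel(eta + h, u + h*k3u)
    u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
    up += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
    eta += h
rel_err = abs(u - u_exact(eta)) / abs(u_exact(eta))
print(rel_err)  # small; RK4 converges at fourth order in the step size
```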
For the sub-Hubble scales, during the power-law inflation, we get
\begin{eqnarray}
u_{k} &=& D_1(k) \exp(- i k \eta) + D_2(k) \exp(i k \eta) \nn \\
\xi_{k}^{(gi)} &=&
i k \frac{(-\eta)^{(\beta + 2)}}{(-\eta_0)^{(\beta + 1)}}
\sqrt{\frac{2 M_{_{\rm Pl}}^2}{\beta (\beta + 1)}} \nn \\
& & \quad \left[D_1(k) \exp(i k \eta) - D_2(k) \exp(-i k \eta)\r]
\, ,
\end{eqnarray}
where we have neglected the terms of the order $1/(k \eta)$ and
$D_{1}(k),D_{2}(k)$ are $k$-dependent constants with the dimensions
of length squared ($k^{-2}$). Using the condition that the modes are
outgoing, we set $D_2(k) = 0$. In the super-Hubble scales, we have
\begin{equation}
u_{k} \simeq D_{3}(k) \, a(\eta)~;~ \xi_{k}^{(gi)} = D_{3}(k)
\sqrt{\frac{2 \beta}{\beta + 1}} a^2(\eta) \, ,
\end{equation}
where $D_{3}(k)$ is a constant. In the sub-Hubble scales, we have
\begin{equation}
\!
{\cal S}_k(\eta) = 4 \, i \,M_{_{\rm Pl}}^2 b_{_{11}} \, D_1(k) k^5 \,
\left(\frac{-\eta_0}{- \eta}\r)^{\beta + 1} \!\!\!\!\! \exp(-i k \eta) \, .
\end{equation}
Our next task is to obtain $\mu_{_P}$ and the power-spectrum of the
scalar perturbations. From Eq. (\ref{eq:par-sol}) using the asymptotic
limit of Hankel functions, we get
\begin{multline}
\mu_{_S}(\eta) = \left(\frac{b_{_{11}}}{\beta^2}\r)^{\frac{\beta + 1}{4\beta}}
\frac{(-\eta_0)^{\frac{1 - \beta^2}{2 \beta}}}{(-\eta)^{-\frac{\beta +1}{2}}} \,
M_{_{\rm Pl}}^2 k^{1/\beta} \\
\times \cos\left[\alpha(\eta) + \ln\left(\Gamma\left[\frac{\beta -1}{2 \beta}, i
\alpha(\eta)\r]\r)\r] \, ,
\end{multline}
where we have set $D_1 \propto 1/k^2$. Substituting the above
expression in Eq. (\ref{eq:SPSPec-Gen}), we get,
\begin{eqnarray}
\label{eq:SPSPec-Spe}
\left[k^3\; {\cal P}_{S}(k)\right] &=&
C_0 k^{2(\beta + 2)} \left |1 - C_1 \, k^{(1 + 1/\beta)}\r |^2 \\
& \times & \left[1 + 2 \cos(x_{_H})
- 2 Im[{\cal I}] \sin(x_{_H})\r] \nn \, ,
\end{eqnarray}
where $C_1$ depends on $b_{_{11}}$ and parameters of the power-law
inflation.
\begin{figure}[!htb]
\begin{center}
\epsfxsize 3.00 in
\epsfysize 2.50 in
\epsfbox{Fig1.eps}
\caption{Plots of the standard power spectrum (thick curve) and the
modified power spectrum with appropriate normalization.
In plotting these spectra we have assumed that $\mathcal H=10^{14}\,
{\rm GeV}=10^{52}\, {\rm Mpc}^{-1}$, $(\mathcal H/k_{\rm c})= 10^{-4}$ and
$\beta = - 2.04$ where $b_{_{11}} = k_{\rm c}^{-2}$ (cut-off scale) and
$\eta_{_0} = \mathcal H^{-1}$ (inflationary energy scale). The above range
of $k/\mathcal H$ corresponds to $2 < \ell < 100$ where $\ell$ denotes multipoles.}
\label{fig:comp}
\end{center}
\end{figure}
In Fig. (\ref{fig:comp}), we have plotted the standard and the
trans-Planckian inflationary power spectra. As can be seen, the
trans-Planckian power spectrum has oscillations. We would like to
caution the reader that the oscillations are small; here we have
magnified the effect for illustrative purposes. We would also like to
point out the following: the power-spectrum (\ref{eq:SPSPec-Spe}) we have
obtained becomes significantly different from that of
Ref. \cite{Martin-Bran:2000} for very large $k$. For example, for a
particular wave-vector the spectrum vanishes. However, such an effect is not
observable.
The following points are to be noted regarding the above results: (i) We
have obtained the general power-spectrum (\ref{eq:SPSPec-Gen}) of the
scalar perturbations assuming that the scalar field is in the minimum
energy state and that the contribution of the unit vector field to the
energy density can be neglected. We have shown that the power spectrum
depends on the form of the source term ${\cal S}_k$ which can be
obtained analytically in some particular limits. (ii) We have computed the
power-spectrum of perturbations in a particular limit, i.e. $1/c_1
\to 0$. In this limit, we recover the result of
Refs. \cite{Martin-Bran:2000,Niemeyer-Pare:2001}.
\section{Discussion and Conclusion}
In this work, we have computed the gauge-invariant cosmological
perturbations for the single scalar field inflation with the
trans-Planckian effects introduced {\it via} the Jacobson-Corley
dispersion relation. Even though the dispersion relation breaks the
local Lorentz invariance, a covariant formulation of the
corresponding theory can be carried out by introducing a unit
time-like vector field.
Using the covariant Lagrangian, we have obtained the perturbed
stress-tensor for the scalar and tensor perturbations around the FRW
background. We have shown the following: (i) The non-linear effects
introduce corrections to the perturbed energy density while the other
components of the perturbed stress-tensor remain unchanged. Thus,
for the trans-Planckian scenario, we have shown that $\Phi =
\Psi$ and the constraint equation (\ref{eq:hydro-eq2}) remains unchanged.
(ii) The non-linear terms contributing to the stress-tensor are
proportional to $k^2$ and hence in the super-Hubble scales, as
expected, the contribution to the perturbed energy density can be
ignored. (iii) The spatial higher derivative terms appear {\it only}
in the equation of motion of the perturbed inflaton field
($\delta\varphi$) while the speed of propagation of the perturbations [in the
equation of motion of the scalar perturbations ($\Phi$)] is different
from that of the standard inflation. (iv) The speed of propagation of
the perturbations ($c_s^2$) is different from that of the canonical
single scalar field inflation. (v) The perturbations are not purely
adiabatic. $\xi$ acts as an extra scalar field during inflation and
hence can act as a source. This introduces non-adiabatic (entropic)
perturbations. (vi) Since, the trans-Planckian corrections do not
change the pressure perturbations, the perturbation equations for the
tensor modes do not change. Hence, the tensor perturbation equation
remains unchanged. Recently, Lim \cite{Lim:2004} has shown that general
Lorentz violating models (without taking into account the higher
derivatives of the scalar field) can modify the pressure perturbations
and hence the tensor perturbation equations. However, in this model,
this is not the case. Since the tensor perturbations remain the same,
the well-known consistency relation between the scalar and tensor spectra
will also be broken in this model \cite{Hui-Kinn:2001}.
We combined Eqs. (\ref{eq:MDR-Phi},\ref{eq:per-vpTP},\ref{eq:per-uTP})
to obtain a single differential equation of $\Phi$. We have shown that
the resultant differential equation of $\Phi$ is different from that
of the standard canonical scalar field driven inflation. More
importantly, the differential equation of $\Phi$ in our model is
fourth order while in the standard canonical scalar field it is second
order. We also obtained the solutions of $\Phi$ in the three regimes
for the power-law inflation.
We have also obtained the equation of motion of the Mukhanov-Sasaki
variable for the perturbed inflaton field with higher derivatives. In
all the earlier analyses, the Mukhanov-Sasaki variable ($Q$) was
assumed to satisfy the differential equation Eq. (\ref{eq:SPerTP}) in
which the source term (${\cal S}_k$) was assumed to be zero. More
importantly, we have shown that the source term in
Eq. (\ref{eq:SPerTP}) dominates during the trans-Planckian regime. The
Mukhanov-Sasaki variables of the two fields are strongly coupled and
hence obtaining the solution analytically is possible only in a
particular limit.
In this work, we calculated the power-spectrum corresponding to the
inflaton field, during the power-law inflation by assuming that (i)
the quantum field $\mu_{_S}$ is coupled to an external, classical
source field ${\cal S}_k(\eta)$ which is determined by solving the
coupled differential equations (\ref{eq:MDR-Phi}, \ref{eq:per-uTP})
(ii) the quantum field is initially in a minimum energy state and
(iii) $d_1/M_{_{\rm Pl}}^2 \ll b_{_{11}} M_{_{\rm Pl}}^2$. We have shown that in this
particular limit, the power-spectrum is the same as that obtained in
Refs. \cite{Martin-Bran:2000,Niemeyer-Pare:2001}.
The work suggests various possible directions for further study:
\begin{itemize}
\item We have obtained the power-spectrum analytically in the
limit of $d_1/M_{_{\rm Pl}}^2 \ll b_{_{11}} M_{_{\rm Pl}}^2$. The trans-Planckian
corrections in this limit are too small to be observed in the present or
the future CMB experiments. It would be interesting to obtain the
power-spectrum by solving the system of differential equations
numerically and obtain the leading order trans-Planckian corrections
in these models.
\item As we have mentioned earlier, this model introduces non-adiabatic
perturbations which can lead to the non-Gaussianity in the CMB.
Recently in
Refs. \cite{Martin-Ring:2003a,Martin-Ring:2004a,Martin-Ring:2004b,Easther-Kinn:2004,Easther-Kinn:2005},
trans-Planckian constraints from the CMB were studied in detail. It
would be interesting to do a similar analysis for this scenario. The
non-Gaussian signatures may place stringent and independent
constraints on the parameters $b_{_{11}}, d_{1}$.
\item In this work, we have ignored the solenoidal part
of the perturbed $u$ field. The solenoidal part contributes to the
vector perturbations. It would be interesting to see whether the
solenoidal part of the perturbed unit-time like vector field can lead
to the growing large-scale vorticity and hence the production of large
scale primordial magnetic field.
\item In this work, we have ignored the back-reaction of
the field excitations on the perturbed FRW background. There have been
claims in the literature \cite{Tanaka:2000,Starobinsky:2001} that
trans-Planckian modes may affect the evolution of cosmological
fluctuations in the early stages of cosmological inflation in a
non-trivial way. In Ref. \cite{Brandenberger-Mart:2004}, the authors
have discussed in detail the backreaction problem of the
trans-Planckian inflation in a toy model and have shown that the
back-reaction of the trans-Planckian modes may lead to a
renormalization of the cosmological constant driving inflation. It
would be interesting to perform a similar analysis for this model.
\end{itemize}
We hope to return to study some of these issues in the near future.
\section*{Acknowledgments}
The authors wish to thank J. Martin, L. Sriramkumar for comments on
the earlier version of the paper. The authors also wish to thank
S. Bashinsky, U. Seljak and in particular, N. Bartolo for stimulating
discussions. SS thanks D. Mattingly, B. van Tent for useful email
correspondences.
\section{Introduction}
Supernova remnants (SNRs) stand an important factor in the process
of cosmic ray acceleration and matter circulation. Albeit
very important, these processes are still not fully
understood. Various theories were suggested during the last
few decades with a view to understanding the SNR properties.
There is a general belief that the evolution of an SNR is
strongly influenced by the properties of the local interstellar
medium (ISM) in which it evolves. As SNRs are luminous
synchrotron emitters in the radio domain of the electromagnetic
spectrum, the magnetic field
inside them and the energy spectrum of relativistic
particles can be determined. Here, we will mainly focus
on the magnetic field properties such as field strength and
evolution. The most commonly used empirical relation in studies of
SNR evolution properties is the radio surface brightness to diameter
($\Sigma-D$) relation. This is because the only statistically
reliable data samples of SNRs are found in the radio domain. In
order to study SNR evolution issues from a slightly
different perspective, in this paper we apply a method that
transforms $\Sigma-D$ into magnetic field to diameter ($H-D$)
relation. This way, we can discuss SNR evolutionary
properties by comparing theories on $H$ with empirically extracted
$H-D$ relation. A statistical, i.e. empirical, study of $H$
evolution nevertheless requires reliable data samples
of SNRs in different types of interstellar medium.
There is a number of ways to estimate $H$ in SNRs.
Unfortunately, few are reliable and even they are
available only for a few well studied SNRs. The estimates are made
by measuring rotation measures or spectral line splitting. The
estimates can also be extracted from radiation fluxes from
different parts of the electromagnetic spectrum such as radio,
X-rays or $\gamma$-rays. However, there is another problem in
performing a statistical study of $H$ based on
these estimates. The data samples of SNRs are burdened by severe
selection effects through the entire electromagnetic
spectrum. SNRs are mainly identified in the radio domain. Unlike
optical, X-rays or $\gamma$-rays, radio waves are less influenced
by absorption and scattering in the interstellar medium. Also,
radio interferometers have the best resolution among all the
other observational devices, which also helps in the
detection of remnants. Large and reliable data samples are of
crucial importance for a good and well-founded statistical
study of empirical $H-D$ relation. Today, this condition is
partially fulfilled only by data samples in radio domain.
The empirical studies of SNR properties are also severely
influenced by the selection effects. It remains to be hoped
that the observational instruments and techniques in the future
will help us overcome this problem.
The main purpose of this paper is to apply and discuss a method
for the determination of $H-D$ slope from radio luminosity
at a given frequency $\nu$ to diameter ($L_\mathrm{\nu}-D$)
correlation ($\Sigma_\mathrm{\nu}=L_\mathrm{\nu}/(D^2\pi^2)$), in
SNR samples that show existence of such a correlation. The
method is based on the energy equipartition assumption between
magnetic field and relativistic particles. It uses equations for
revised equipartition calculation (REC). The equipartition
calculation is the most commonly used manner of obtaining
$H$ estimates in valid radio SNR samples. The obtained $H-D$ slope
from the only reliable data sample of M82 SNRs is then compared
with the slope arising from the theoretical models of SNR
evolution. We can then argue whether or not M82 SNRs are in
the equipartition state, and thereby give a contribution to
the general evolutionary studies of SNRs. We also try to
give estimates of the magnetic field strengths, particularly for
SNRs in M82. In addition, we discuss the accuracy of magnetic
field strength obtained under REC. This is done by comparing
the values for $H$, obtained herein, with the more reliable
ones available in literature (found for few SNRs from Large
Magellanic Cloud and our Galaxy). It is noteworthy that an
SNR luminosity is mainly determined by the density of environments
in which SNR evolves. This is an important issue for
the discussion of the influence of equipartition arguments
on $H$.
This paper is organized as follows:
Section 2 presents explanations of
the required background topics, which are too broad to be introduced later in the text.
In Section 3 we describe and analyze the method and REC with its assumptions.
Section 4 features a discussion on the obtained results for $H-D$ slope and magnetic field strength.
There, we consider whether M82 SNRs are in the equipartition state or not.
Finally, the conclusions of this work are given in Section 5.
\section{The $H-D$ dependence}
\subsection{History of $H-D$ Relation}
We assume that $H-D$ relation can be written in the form:
\begin{equation}
H \propto D^{-\delta}.
\end{equation}
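In practice, a slope of this form is extracted from data by a least-squares fit in log-log space. The following sketch (purely hypothetical synthetic data, for illustration only) shows how $\delta$ is recovered from scattered $H$-$D$ pairs:

```python
import math, random

# Hypothetical illustration: generate H = H0 * D^{-delta} with
# log-normal scatter and recover delta by least squares in log-log space.
random.seed(1)
delta_true, H0 = 1.5, 100.0

D = [2.0 + 0.5 * i for i in range(20)]  # illustrative diameters
H = [H0 * d**(-delta_true) * math.exp(random.gauss(0.0, 0.05)) for d in D]

x = [math.log(d) for d in D]
y = [math.log(h) for h in H]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
delta_fit = -slope
print(round(delta_fit, 2))  # close to the input delta_true = 1.5
```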
Historically, this form of the magnetic field evolution is used
in all theoretical models that explain the synchrotron
emission from SNRs.
Shklovsky (1960) was the first to theoretically describe
the synchrotron emission from a spherically expanding
nebula. He assumed that magnetic field structure remains unchanged
during the expansion. Consequently, magnetic field flux is
constant and $H \propto D^{-2}$, where $D$ is the diameter of the
remnant. Lequeux (1962) applied Shklovsky's theory to model shell
type remnants, which led to $H \propto (l \times D)^{-1}$,
where $l\propto D$ represents the thickness of the shell. Poveda
\& Woltjer (1968) and Kesteven (1968) also gave their contribution
to the general model of shell type remnant. They assumed that $H$
is gained with the compression of the interstellar medium
magnetic field (leading to $H=const$) and that shell
thickness remains constant during the expansion (which leads
to $H \propto D^{-1}$). Theoretical interpretation of SNR
synchrotron emission by Duric \& Seaquist (1986) used the magnetic
field model with $H \propto D^{-\delta}$, based on the work of
Gull (1973) and Fedorenko (1983). According to the results
of Gull, magnetic field is compressed and amplified in the
convection zone, to finally gain enough strength to power the
bright synchrotron emission. Fedorenko stated that $1.5 \le \delta
\le 2.0$. Tagieva (2002) obtained $H\propto D^{-0.8}$ by using the $\Sigma\propto D^{-2.38}$ relation (Case \& Bhattacharya 1998). However, this result should be taken with great reserve because the $\Sigma-D$ relation from the work of Case \& Bhattacharya is burdened by severe selection effects (Uro{\v s}evi{\' c} et al. 2005). Also, Tagieva did not take into account the influence of the density of the environments in which the SNRs evolve. An interesting discussion about the magnetic field and the equipartition arguments for five Galactic SNRs, based on results empirically obtained from X-ray data, can be found in the work of Bamba et al. (2005). The predecessor of this paper is the work of Vukoti{\'c} et al. (2006).
\subsection{Magnetic Field Calculation from Radio Synchrotron Luminosities}
The magnetic field is calculated from the following formula for
synchrotron emission of relativistic electrons (Beck \& Krause
2005, hereafter BK):
\begin{eqnarray}
L_{\mathrm{\nu}}&=&4 \pi f V c_{\mathrm{2}}({\gamma}) n_{\mathrm{e,0}} \cdot \nonumber \\
&& \cdot \ E_{\mathrm{0}}^\mathrm{{\gamma}}
{(\nu/2c_\mathrm{1})}^{(1-\gamma)/2}{H_\mathrm{\perp}}^{(\gamma+1)/2}.
\end{eqnarray}
We adjusted the formula from BK to suit our needs. Here, $f$ is
a fraction of the radio source volume occupied by the
radiative shell. We assumed that $f=0.25$. This is consistent with
SNRs having strong shocks where compression ratio is $4$. However,
should this not be the case, variation of $f$ will
still not have any significant effect on values for $H$,
because of the small value of exponent $(\gamma+1)/2$ in Eq. (2).
Further, the total volume of SNR is designated by $V$.
Instead of spectral intensity along the radiation ray path
($I_\mathrm{\nu}$) in BK, we used spectral luminosity of the
source, because the majority of sources in used data samples
are seen almost as point-like sources, having only the flux
density data integrated over the whole source available. According
to BK this may lead to the overestimation of values
for $H$. This effect is discussed further in Section 4. The rest
is the same as in BK, $c_{\mathrm{2}}({\gamma})$ (in units
$\mathrm{{erg^{-2}~s^{-1}~G^{-1}}}$) is identical to
$c_{\mathrm{5}}({\gamma})$ in Pacholczyk (1970),
$n_{\mathrm{e,0}}$ is the number density of cosmic ray electrons
per unit energy interval for the normalization energy
$E_{\mathrm{0}}$, $c_\mathrm{1}=3e/(4\pi
m_\mathrm{e}^3c^5)=6.26428\cdot10^{18} {\rm erg^{-2} ~s^{-1}
~G^{-1}}$, $H_\mathrm{\perp}$ is the magnetic field strength in
the sky plane, and finally $\gamma$ represents exponent in the
cosmic ray power law energy spectrum (see Appendix A in BK).
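As a quick cross-check of the numerical constant quoted above, $c_1 = 3e/(4\pi m_\mathrm{e}^3 c^5)$ can be recomputed in CGS units; the physical constants below are standard values inserted here for illustration:

```python
import math

# Cross-check (CGS units) of c1 = 3e/(4*pi*me^3*c^5) quoted in the text.
e = 4.80320e-10      # electron charge [esu]
m_e = 9.10938e-28    # electron mass [g]
c = 2.99792458e10    # speed of light [cm/s]

c1 = 3.0 * e / (4.0 * math.pi * m_e**3 * c**5)
print(f"{c1:.5e}")   # close to the quoted 6.26428e18 erg^-2 s^-1 G^-1
```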
Closer inspection
of Eq. (2) shows that in order to calculate $H$ from
$L_\mathrm{\nu}$, some assumption regarding the relationship
between $H$ and $n_{\mathrm{e,0}}$ has to be made.
\subsection{Data Samples}
Currently, it seems that there is no better way to determine $H$
by using only data on $L_\mathrm{\nu}$ and spectral index $\alpha$
(${\gamma=2\alpha+1}$) than the equipartition or the
minimum-energy assumption. This method is useful for SNR samples
where all other data are lacking. However, Galactic SNR data
samples are strongly biased by selection effects. The farther
the object, the greater its brightness detection limit. The
extragalactic samples suffer from milder selection effects. Their
brightness detection limits (sensitivity lines) do not differ from one SNR to another because
all the SNRs in the sample are approximately at the same distance. In this study, we
have relied on the only statistically reliable sample of
SNRs from the nearby starburst galaxy M82 (Huang et al. 1994). The
equations that we used in calculating $H$ are presented in Section
3. Inspection of those equations shows that any $H-D$
correlation requires the existence of $L_\mathrm{\nu}-D$
correlation. If $L_\mathrm{\nu}-D$ correlation does not exist,
then it makes no sense to extract the $H-D$ relation from
$L_\mathrm{\nu}-D$ data. If SNR data samples show no, or only a poor,
$L_\mathrm{\nu}-D$ correlation, there are two possibilities:
SNR luminosity does not evolve with the diameter, which is
unlikely, or the sample is made of SNRs that evolve in different
environments and is influenced by selection effects. This is
explained in the next paragraph.
In their work, Arbutina et al. (2004) showed that the best
$L_\mathrm{\nu}-D$ correlation exists for SNRs in M82. They also
showed that some correlation exists for Galactic SNRs associated
with large molecular clouds. Arbutina \& Uro{\v s}evi{\' c} (2005)
imply that the evolution of
SNR radio surface brightness depends on the properties of the
interstellar medium, primarily the density. They formed three SNR
data samples from the existing ones (Galactic and extragalactic):
the Galactic SNRs associated with large molecular clouds (GMC),
oxygen-rich and Balmer-dominated SNRs. The main intent of
Arbutina \& Uro{\v s}evi{\' c} (2005) was to group SNRs by their
properties, primarily the density of the interstellar medium in
which they evolve (and also by SN type). They also argued
that the M82 sample is the best possible sample that one can
currently find. All SNRs from M82 are likely to evolve in
similar environment of dense molecular clouds. Consequently,
they are very luminous and being extragalactic, SNRs exhibit
milder selection effects. The reliability of M82 sample is also
discussed in Uro\v{s}evi\'{c} et al. (2005). By performing the
Monte Carlo simulation, the authors showed that the M82 sample is not
severely affected by sensitivity selection effects, as in the case
of other extragalactic samples (LMC, SMC, M31, M33).
In this paper we have applied REC and calculated $\delta$
for the M82 sample, as the best sample for statistical study,
and additionally we have analysed three samples from
Arbutina \& Uro{\v s}evi{\' c} (2005): GMC, oxygen-rich and Balmer
dominated SNRs. We did not use the last three samples to
calculate the slope $\delta$ because they are of poorer
quality. Instead, we used them for checking the consistency
of obtained $H$ values with the global picture of SNRs evolution
in different environments. Also, through literature search we
found the magnetic field strengths for some SNRs from Table 2 and
compared them with the values obtained in this paper.
We searched the catalog of observational data on Galactic SNRs
from Guseinov et al. (2003, 2004a,b) and papers available on
the Web-based Astrophysical Data Service ({\it
http://adswww.harvard.edu/}).\footnote[3]{ADS is NASA-funded
project which maintains three bibliographic databases containing
more than 4.7 million records.}
In calculation we have used the radio flux density per unit
frequency interval $S_{\mathrm{\nu}}$ and radio spectral index
$\alpha$ data (Table 2). These two properties are related as:
\begin{equation}
S_\nu=\beta\nu^{-\alpha},
\end{equation}
where $\beta$ is the flux scale factor. The luminosity is
calculated as $L_\mathrm{\nu}=4 \pi d^2 S_\mathrm{\nu}$, where $d$
is the distance to an SNR. In the case of extragalactic SNRs
we
assume that $d$ is the same for all SNRs, and equal to the distance to the host
galaxy.
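For concreteness, the flux-to-luminosity conversion can be sketched as follows; the distance and flux density below are hypothetical, roughly M82-like numbers chosen only to illustrate the unit handling:

```python
import math

# L_nu = 4 * pi * d^2 * S_nu, in CGS units; the input numbers are
# hypothetical (roughly M82-like), for illustration only.
MPC_TO_CM = 3.0857e24
JY_TO_CGS = 1.0e-23         # erg s^-1 cm^-2 Hz^-1 per Jy

d = 3.6 * MPC_TO_CM         # assumed distance of ~3.6 Mpc
S_nu = 50.0e-3 * JY_TO_CGS  # assumed flux density of 50 mJy
L_nu = 4.0 * math.pi * d**2 * S_nu
print(f"L_nu = {L_nu:.2e} erg s^-1 Hz^-1")
```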
\subsection{Magnetic Field and Relativistic Particles}
Since our studies are based on the radio synchrotron luminosity of
SNRs, we can not treat magnetic field separately from
relativistic particles. These two properties of an SNR are
strongly coupled and it makes no sense to study them
separately.
As mentioned before, calculation of $H$ from Equation (2) requires
an assumption about $n_{\mathrm{e,0}}$. This quantity also
evolves with $D$. In Table 1 and Section 4.2 we present and
discuss various assumptions about $n_{\mathrm{e,0}}(D)$ evolution
and its effect on $H(D)$ evolution (assuming the empirical $L-D$
relation). Some of the $n_{\mathrm{e,0}}$ evolution patterns are
only illustrative and are used for estimating the effect of
different patterns on $\delta$. The pattern we used in our method
to calculate $H$ arises from the equipartition of energies
implying that energy densities stored in the magnetic field and
relativistic particles are approximately equal. The equipartition
is widely used for $H$ strength estimates, based purely on
the radio data, in SNRs, galaxies, etc. It gives reasonably
explainable values for $\delta$ and $H$. Taking all of this into
account we based our method on the equipartition of energies.
Revised equipartition calculation (REC) used to calculate $H$ is
presented in detail in the work of BK. According to BK, REC gives
better results than the classical equipartition calculation (CEC)
presented by Pacholczyk (1970).
\subsection{Evolution of Magnetic Field in SNRs}
In this subsection we present the theoretical values for
$\delta$ that characterize a particular SNR evolution phase.
These values, together with the ones obtained by our
empirical method, are used in Section 4 in the discussion of
the most probable evolution scenarios for SNRs in M82.
If SNRs are young, in early Sedov or free expansion phase, they
expand practically adiabatically, since radiative energy losses
are negligible. Under the adiabatic expansion assumption, i.e.
conservation of energy in cosmic rays and magnetic field
($\frac{d}{dt}(W)=0$), and equipartition conditions
($w_\mathrm{CR}=w_\mathrm{H}$), where $W$ is the total energy and
the quantities $w_\mathrm{CR}$ and $w_\mathrm{H}$ are the energy
densities of cosmic rays and magnetic field respectively, it
follows that $\delta=1.5$. Indeed:
\begin{equation}
\frac{d}{dt}(W)=\frac{d}{dt}(wV) \propto
\frac{d}{dt}(w_\mathrm{H}V) \propto \frac{d}{dt}(H^2D^3),
\end{equation}
\begin{equation}
\frac{d}{dt}(W)=0\Longrightarrow H \propto D^{-3/2},
\end{equation}
where $w$ is the total energy density.
In conclusion, SNRs in the free expansion or early Sedov phase will have $\delta=1.5$ if they are in the equipartition state.
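The scaling implied by Eqs. (4)-(5) can be verified with a short numerical check (illustrative only): holding $W \propto H^2 D^3$ fixed and reading off the logarithmic slope of $H(D)$ gives $-3/2$ directly.

```python
import math

# With equipartition, adiabatic expansion keeps W ~ H^2 D^3 constant;
# the implied logarithmic slope of H(D) is then -3/2.
W = 1.0  # arbitrary units; the value drops out of the slope

def H(D):
    return math.sqrt(W / D**3)

D1, D2 = 2.0, 8.0
slope = math.log(H(D2) / H(D1)) / math.log(D2 / D1)
print(slope)  # -1.5
```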
On the other hand, if SNRs are older, in the late Sedov or
radiative phase, the value may be closer to $\delta=1.25$.
The radiative phase is characterized by significant energy losses,
and the SNR would later expand with velocity $v \propto
D^{-5/2}$ (pressure-driven snowplow). If $n_\mathrm{e,0} \propto
n_\mathrm{p,0} \propto n_\mathrm{H}v$ (Berezhko \& Volk 2004,
hereafter BV), assuming equipartition $H^2 \propto
n_\mathrm{e,0}$, $\delta$ would be 5/4=1.25. The quantity
$n_\mathrm{p,0}$ is the number density of cosmic ray
protons per unit energy interval for the normalization energy
$E_{\mathrm{0}}$, and $n_\mathrm{H}$ is the hydrogen
number density.
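The radiative-phase value follows by combining the stated proportionalities; this small exponent-bookkeeping snippet (our own illustration, not the paper's code) makes the chain explicit.

```python
# Illustrative exponent bookkeeping for the radiative phase:
# v ~ D^(-5/2), n_e0 ~ v, and equipartition H^2 ~ n_e0 give delta = 5/4.
from fractions import Fraction

v_exp = Fraction(-5, 2)   # v proportional to D^(-5/2) (pressure-driven snowplow)
n_exp = v_exp             # n_e0 proportional to v
H_exp = n_exp / 2         # H^2 ~ n_e0  =>  H ~ D^(n_exp / 2)
delta = -H_exp
print(delta)              # -> 5/4
```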
It is generally believed that, during the expansion, SNRs strongly
amplify the interstellar magnetic field. Two basic mechanisms of
magnetic field amplification operate in SNRs. The first one is the
Rayleigh-Taylor instability at the contact discontinuity between
the supernova ejecta and the ISM swept up by the SNR forward shock.
This scenario leads to $1.5\le\delta\le2$ (Fedorenko 1983) and is
preferred in young SNRs. The second mechanism operates right
behind the shock, where magnetic field is amplified by strongly
excited magnetohydrodynamic waves. This is the probable mechanism
for older remnants.
\section{Analysis and Results}
There are two commonly used assumptions regarding the
magnetic field and cosmic ray energy content: 1) the minimum of the
total energy stored in the particles and magnetic field, and 2)
the equipartition between these energies. The minimum energy
assumption gives $4/3$ for the ratio of the energies stored in the
particles and magnetic field, which is close to unity. These two
assumptions are therefore often treated as synonymous, and both
procedures are referred to as the equipartition calculation.
There are also two different methods for obtaining these two
estimates: classical (Pacholczyk 1970) and revised (BK)
equipartition, i.e. minimum-energy calculation. We will only
present the formulas that we have used in calculating $H$; the
reader is referred to the above papers for a detailed
treatment of the subject.
\subsection{Classical Calculation}
The classical formulas are:
\begin{eqnarray}
H^\mathrm{min} &=& 4.5^{2/7} {(1 + k)}^{2/7} \cdot \nonumber \\
&&\cdot \ {c_\mathrm{12}}^{2/7} f^{-2/7}
\left({D}/{2}\right)^{-6/7} L^{2/7},
\end{eqnarray}
\begin{eqnarray}
H^\mathrm{eqp} &=& 6^{2/7} {(1 + k)}^{2/7} \cdot \nonumber \\
&&\cdot \ {c_\mathrm{12}}^{2/7} f^{-2/7}
\left({D}/{2}\right)^{-6/7} L^{2/7}.
\end{eqnarray}
In these expressions we have introduced the following quantities:
$k$ is the ratio of the energies of the heavy relativistic
particles and relativistic electrons, $c_\mathrm{12}$ and
$c_\mathrm{13}$ are functions which are weakly dependent on
$\alpha$ and are tabulated by Pacholczyk (1970). The radio
luminosity $L$ integrated between radio synchrotron spectrum
cutoff frequencies $\nu_\mathrm{1}$ and $\nu_\mathrm{2}$ is
calculated as:
\begin{equation}
L=4 \pi d^2 \int_{\nu_\mathrm{1}=10^7
~\mathrm{Hz}}^{\nu_\mathrm{2}=10^{11}
~\mathrm{Hz}}S_\mathrm{\nu}~d\nu.
\end{equation}
Using Equation (3) we can eliminate $\beta$ and obtain $L$. We
used $k=40$, which should be adequate for strong shocks in SNRs.
Being luminous synchrotron emitters and having small linear
diameters, SNRs from M82 are likely to be young and have strong
shocks, but their true nature is still a subject of debate.
We obtained $\delta$ from Equations (6) and (7) by replacing $L$
with the $L_\nu-D$ relation from Arbutina et al. (2004). Replacing $L$
with $L_\nu$ does not have any noticeable effect on $\delta$. We
also assumed that $H$ depends on $D$ only through $L$ or
$L_\nu$. Therefore,
\begin{equation}
H\propto{\left(D^{-3}L_\nu\right)}^{2/7}\propto{\left(D^{-4.4}\right)}^{2/7},
\end{equation}
if $L_\nu(D)\propto D^{-1.4}$ (Arbutina et al. 2004). This gives
$\delta=1.26$. To verify the assumptions behind Equation (9), we
calculated $L$ from Equations (3) and (8), and $H$ from Equation
(7). We then fitted a linear regression in the $\log H-\log D$ plane
to obtain $\delta=1.26\pm0.08$. This shows that
$c_\mathrm{12}(\alpha)$ does not change with $D$ and hence does not
affect $\delta$, which is why we can calculate
$\delta$ directly from the slope $s$ of the $L_\nu\propto D^{-s}$ relation,
\begin{equation}
\delta=(3+s)\frac{2}{7},
\end{equation}
as in Eq. (9).
Equations (6) and (7) differ only by a constant factor,
and thus give exactly the same $\delta$. In what follows we do not
show results for the minimum energy estimates of $H$.
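For reference, Equation (10) with the $L_\nu\propto D^{-1.4}$ slope reproduces the quoted value (a trivial numerical check of the text, not the paper's code):

```python
s = 1.4                      # slope of L_nu ~ D^(-s), Arbutina et al. (2004)
delta = (3 + s) * 2 / 7      # Equation (10), classical equipartition
print(round(delta, 2))       # -> 1.26
```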
\subsection{Revised Formulas}
The main revision of the classical formulas is in using $K$
instead of $k$. The quantities $K$ and $k$ stand for the ratios of
proton to electron number densities and energy densities,
respectively. In the CEC, the integration of the radiation energy
spectrum is performed between fixed frequency limits. In the REC,
by contrast, the integration is performed over the
energy spectrum of the relativistic particles. This gives more
accurate results (see BK).
The revised formulas are:
\begin{eqnarray}
H_{\mathrm{rev}}^{\mathrm{min}} &=& \Big[ 4 \pi K A(\gamma , L_\mathrm{\nu}, \nu ,V ,f, i)\cdot \nonumber\\
&& \cdot \ C(\gamma , E_\mathrm{2} )( \alpha +1)\Big] ^{1/(
\alpha +3)},
\end{eqnarray}
\begin{eqnarray}
H_\mathrm{rev}^\mathrm{eqp} &=& \Big[ 8 \pi K A(\gamma ,
L_\mathrm{\nu},
\nu , V ,f , i) \cdot \nonumber\\
&& \cdot \ C( \gamma , E_\mathrm{2} )\Big] ^{1/( \alpha +3)},
\end{eqnarray}
where
\begin{eqnarray}
&C( \gamma , E_\mathrm{2} )= E_\mathrm{0}^2 \cdot \bigg\{
\frac{1}{2} \Big( \frac{E_\mathrm{0}}{E_\mathrm{p}}\Big)
^{\gamma-2} + & \nonumber\\
& + \frac{1}{2-\gamma} \bigg[ \Big( \frac{E_\mathrm{0}}{E_\mathrm{2}}
\Big) ^{\gamma-2} - \Big( \frac{E_\mathrm{0}}{E_\mathrm{p}} \Big)
^{\gamma-2} \bigg] \bigg\} \ \ \ \mathrm{for}\ \gamma \neq 2,\ &
\end{eqnarray}
\begin{equation}
C( \gamma , E_\mathrm{2}
)=E_\mathrm{0}^2\left[\frac{1}{2}+\ln\frac{E_\mathrm{2}}{E_\mathrm{p}}\right]
\ \ \mathrm{for}\ \gamma=2,
\end{equation}
and
\begin{equation}
A(\gamma , L_\mathrm{\nu}, \nu , V, f,
i)=\frac{L_\mathrm{\nu}{(\nu/2c_\mathrm{1})}^{(\gamma-1)/2}}{4 \pi
c_\mathrm{2}(\gamma){E_\mathrm{0}}^\gamma f V c_\mathrm{4}(i)}.
\end{equation}
In the above equations the following quantities appear: $K$
is the ratio of proton-to-electron number densities per particle
energy interval for the normalization energy $E_\mathrm{0}$,
$E_\mathrm{2}$ represents the high-energy limit for the spectrum of
cosmic ray particles. The spectral break at low energies for
protons is designated as $E_\mathrm{p}=938.28 \,\mathrm{MeV}
=1.5033\cdot10^{-3} \,\mathrm{erg}$ and finally $c_\mathrm{4}(i)$
is used to replace the projected field component
$H_\mathrm{\perp}$ with the total field $H$ (see Appendix A in
BK), with $i$ being the projection angle.
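The factor $C(\gamma, E_2)$ of Equations (13)-(14) can be sketched as follows (our own transcription; the erg conversion of $E_2$ is an assumption). The two branches agree in the $\gamma \to 2$ limit, which is a quick consistency check on the transcription.

```python
import math

E_P = 1.5033e-3            # erg, proton low-energy break (938.28 MeV)
E_2 = 3e15 * 1.602e-12     # erg, high-energy cutoff (3 x 10^15 eV, Vink 2004)

def C(gamma, E0, E2=E_2, Ep=E_P):
    """C(gamma, E2) of Equations (13)-(14); E0 is the normalization
    energy in erg. The gamma = 2 case is the limit of the general one."""
    if math.isclose(gamma, 2.0):
        return E0**2 * (0.5 + math.log(E2 / Ep))
    return E0**2 * (0.5 * (E0 / Ep)**(gamma - 2)
                    + ((E0 / E2)**(gamma - 2)
                       - (E0 / Ep)**(gamma - 2)) / (2 - gamma))
```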
Equations (11) and (12) were originally taken from BK, with
a few adjustments. To make the equations hold for $\gamma\le2$ we used
$E_\mathrm{2}=3\times~10^{15}~\mathrm{eV}$ (Vink 2004). Instead of
the $K+1$ factor we used only $K$, which is justified for a
proton-dominated plasma, and because the original formulas do not
include the effect of possible synchrotron losses that modify the
electron power-law energy spectrum. Using $K$ instead of $K+1$
may provide an even better approximation when synchrotron losses
are taken into account. To put it simply, it is as
if there were almost no electrons in the cosmic rays, and only
protons remained. This can be justified by the
fact that the protons are far more energetic than the electrons and
suffer smaller synchrotron losses. Such an assumption does not
have any significant effect on the values of $H$ because of the
1/($\alpha$+3) exponent in Equations (11) and (12). In this case,
Equation (9) transforms into
\begin{equation}
H\propto{\left(D^{-3}L_\nu\right)}^{1/(\alpha+3)}\propto{\left(D^{-4.4}\right)}^{1/(\alpha+3)},
\end{equation}
and Equation (10) becomes
\begin{equation}
\delta=(3+s)\frac{1}{\overline{\alpha}+3}.
\end{equation}
In Eq. (16) we applied the $L_\nu-D$ correlation to obtain
$\delta=1.22$, while fitting gives $\delta=1.19\pm0.08$. For
$\alpha$ we used an average spectral index of the whole sample
($\overline{\alpha}=0.6$). The value for $\delta$ from Equation
(17), and the one obtained by fitting calculated values for
$H$ using Equation (12), are almost identical. The
difference is negligible, and we could equally well,
as in the CEC, have calculated $\delta$ from the slope of the $L_\mathrm{\nu}-D$
relation.
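Equation (17), evaluated with the sample's mean spectral index, gives the quoted value (a simple check with inputs taken from the text, not the paper's code):

```python
s = 1.4              # slope of the L_nu ~ D^(-s) relation
alpha_bar = 0.6      # average spectral index of the M82 sample
delta = (3 + s) / (alpha_bar + 3)   # Equation (17), revised equipartition
print(round(delta, 2))              # -> 1.22
```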
In calculating $H$, we assumed that the magnetic field in the radiative shell of
an SNR is completely turbulent and has an isotropic angular
distribution in three dimensions, giving
$c_\mathrm{4}={(2/3)}^{(\gamma+1)/4}$ (Appendix A in BK). This is
the best assumption that can be made when the majority of SNRs are
point-like sources, i.e. without maps of $H$. We also used
$K={(E_\mathrm{p} / E_\mathrm{e})}^{(\gamma-1)/2}$ (Appendix A in
BK), where
$E_\mathrm{e}=511~\mathrm{keV}=8.187\times10^{-7}~\mathrm{erg}$
designates the spectral break at low energies for electrons. The
data for 21 SNRs from M82 from the work of Uro{\v s}evi{\'
c} et al. (2005), and the obtained values for $H$, are shown
in Table 2. As can be seen, the magnetic field strengths are up
to 10 mG. Using $L_\nu/(4 \pi f V)$ in our formulas instead of
$I_\nu/l$ (BK) could lead to an overestimation of the
average field. Nevertheless, if the magnetic field is
significantly overestimated it should not have a significant
effect on the value for $\delta$. There is also a
possibility that M82 remnants are pulsar driven wind nebulae
(PWNe). Unlike shell type SNRs, PWNe have different mechanisms
that maintain magnetic fields. Magnetic field strengths in PWNe
are comparable with the ones we obtained from REC for M82 SNRs.
This possibility is investigated further in Section 3.4.
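For the adopted sample parameters, the factors $c_4$ and $K$ take the following values (our own evaluation of the quoted expressions; $\gamma = 2\alpha + 1$ is the standard synchrotron relation and is assumed here):

```python
alpha = 0.6                  # mean spectral index of the M82 sample
gamma = 2 * alpha + 1        # energy spectral index, gamma = 2*alpha + 1
E_P = 1.5033e-3              # erg, proton break energy (938.28 MeV)
E_E = 8.187e-7               # erg, electron break energy (511 keV)

c4 = (2.0 / 3.0) ** ((gamma + 1) / 4)   # isotropic turbulent-field factor
K = (E_P / E_E) ** ((gamma - 1) / 2)    # proton-to-electron number ratio

print(round(c4, 3), round(K, 1))        # c4 is about 0.72, K about 91
```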
\subsection{Direct Derivation}
It is possible to derive $\delta$ directly from Eq. (2) if there
is an additional assumption concerning the evolution of
$n_\mathrm{e,0}$ with $D$. We consider models used by Shklovsky
(1960), and the assumption of conservation of cosmic ray energy
i.e. adiabatic expansion (e.g. BV). Respectively, these are
\begin{equation}
n_\mathrm{e,0} \propto D^{-(2\alpha+3)}
\end{equation}
and
\begin{equation}
n_\mathrm{e,0} \propto D^{-3}.
\end{equation}
Equation (2) together with $L_\nu-D$ relation gives
\begin{equation}
H \propto
{\left(\frac{D^{-4.4}}{n_\mathrm{e,0}}\right)}^{1/(\alpha+1)}.
\end{equation}
For an average spectral index $\alpha = 0.6$ the results are
presented in Table 1. Here, we found fitting unnecessary because
we already saw in Sections 3.1 and 3.2 that the remaining
quantities in Equation (2) do not change with $D$, at least
not in a way that affects $\delta$. With the direct method we
can only obtain values of $H$ up to a constant factor, because
Equations (18) and (19) are proportionalities.
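The two "direct" values in Table 1 follow from Equation (20) by plain exponent bookkeeping (our own illustration, with $\alpha=0.6$ and the $L_\nu \propto D^{-1.4}$ slope as in the text):

```python
alpha = 0.6      # average spectral index
s = 1.4          # L_nu ~ D^(-s) slope

def delta_direct(n_exp):
    """delta for n_e0 ~ D^(n_exp), from H ~ (D^(-3-s)/n_e0)^(1/(alpha+1))."""
    return (3 + s + n_exp) / (alpha + 1)

shklovsky = delta_direct(-(2 * alpha + 3))   # n_e0 ~ D^-(2*alpha+3)
berezhko_volk = delta_direct(-3)             # n_e0 ~ D^-3
print(round(shklovsky, 3), round(berezhko_volk, 3))   # -> 0.125 0.875
```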
\begin{Code}
\begin{table}
\caption{RESULTS FOR $\delta$ \label{label}}
\begin{center}
\begin{tabular}{cc}\hline\hline
\multicolumn{2}{c}{\textbf{Direct}}\\
\hline
Shklovsky (1960) ($n_\mathrm{e,0} \propto D^{-(2\alpha+3)}$)&0.125\\
Berezhko \& Volk (2004) ($n_\mathrm{e,0} \propto D^{-3}$)&0.875\\
\hline
\multicolumn{2}{c}{\textbf{Classical}}\\
\hline
equipartition&$1.26$\\
\hline
\multicolumn{2}{c}{\textbf{Revised}}\\
\hline
equipartition&1.22\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\end{Code}
\subsection{Calculated and Literature-found $H$ Values for GMC, Oxygen-rich and Balmer-dominated SNRs}
To check the values obtained for $H$, we performed
the same REC on the SNRs associated with large molecular clouds,
and on oxygen-rich and Balmer-dominated SNRs. According to Arbutina \&
Uro{\v s}evi{\' c} (2005), these SNRs form parallel tracks in the
radio surface brightness to diameter plane. If the environmental
density is higher, we expect the SNR to be brighter. The
implication is that SNRs with the same $D$ should have different
luminosities if the environmental densities are different. According
to Equation (12), SNRs that evolve in a denser
environment should also have a stronger $H$ than SNRs with the same
diameter that evolve in a less dense environment. The data used
and the obtained CEC and REC results for all groups of SNRs are
presented in Table 2.
Figure 1 presents a plot of all the REC values from Table 2. It shows
that SNRs in a denser environment (M82, GMC,
oxygen-rich) appear to form a track in the $H-D$ plane, while
Balmer-dominated SNRs form another track that lies beneath the
first one. Due to the dispersion and incompleteness of the data
samples, any statistical study of the tracks should be avoided
for now. We can, however, draw some qualitative conclusions.
In Figure 1 we can see that the REC does not change the $L_\nu-D$
evolution pattern. This is very convenient for estimating
the reliability of $H$ in the M82 SNRs. From Figure
1 it is clear that $H$ values for SNRs in M82 seem consistent with
the values for GMC and oxygen-rich remnants. They all evolve in
a dense environment and accordingly may have a similar
$H-D$ evolution pattern. Their $H$ values, according to Arbutina
\& Uro{\v s}evi{\' c} (2005), are different in comparison to the
values for Balmer-dominated SNRs. This is because Balmer-dominated
SNRs {are likely to evolve} in a low density environment. In
the group that consists of Balmer-dominated, oxygen-rich and GMC
SNRs, used in this work, we did not include PWNe, because
the REC is made for shell-type SNRs. Accordingly, to avoid possible PWNe, we did not include SNRs with $\alpha\le 0.4$, which is characteristic of PWNe (Gaensler \& Slane 2006). From Figure 1 we can
see that most of the SNRs in M82 are probably not PWNe, because
they fit the evolution pattern for SNRs in dense environments. In
addition, the higher spectral indices of the M82 SNRs (average
$\alpha\approx0.6$; see Table 2) are not characteristic of PWNe.
However, the possibility that at least some of these objects
are PWNe should not be easily put aside. For now, we can
only wait for the observational instruments to advance, and for a
possible detection of pulsars in M82.
Table 2 also shows the best available literature-found
values for $H$ inferred from other methods, for Galactic and LMC
SNRs. The agreement of these values with the values obtained
from REC is another way to show the reliability of $H$
estimates for SNRs in M82. This is one of the subjects
discussed in Section 4.
\begin{Code}
\begin{figure}
\begin{center}
\includegraphics[height=7.8cm, width=8.3cm]{novaslika.eps}
\end{center}
\caption{The revised equipartition data in $\log
H-\log D$ plane. The SNRs are presented by: crosses (M82), open circles (oxygen-rich), open triangles (Galactic SNRs associated with large molecular clouds), filled triangles (Balmer-dominated SNRs). \label{label} }
\end{figure}
\end{Code}
\section{Discussion}
\subsection{Values Obtained for $H$}
Both the classical and the revised equipartition calculations
contain various uncertainties and assumptions and, as such,
are of limited applicability (BK). Nevertheless, by performing
CEC and REC
we arrived at the conclusion that none of these imperfections has
a noticeable effect on $\delta$, although they could have a
significant impact on the values of $H$. Inspection of
Table 2 shows that obtained $H$ values are higher than those
found in the literature. Such overestimates are probably due to the
replacement of $I_\mathrm{\nu}$ with $L_\mathrm{\nu}$ (BK). The
assumptions regarding $f$ and $K$ in REC equations are not of
great importance because of the small $1/( \alpha +3)$
exponent. Due to the $L_\mathrm{\nu} \rightarrow
I_\mathrm{\nu}$ replacement, the amount of overestimate is
strongly affected by SNR morphology and consistently shows
considerable variations from one SNR to another (Table 2). The
morphology variations should not depend on diameter, which
means that the overestimates of $H$ arise mainly from
morphology-related factors, which should only produce scatter in the
$H-D$ plane without affecting $\delta$. Table 2 also
shows that an average overestimate by a factor of 2 can be
adopted. Coupled with the explanation of Figure 1 (Section 3.4), this
shows that $H$ values for SNRs in M82 are estimated reliably to an
order of magnitude. This means that M82 does contain SNRs
with magnetic fields of up to $10^{-2}$ G. However, this should be
taken with some caution because of the possibility that some SNRs in
M82 are in fact PWNe.
The REC used in this paper thus gives reliable estimates
accurate to an order of magnitude. This is of little
significance in studies of nearby, well-resolved SNRs with data
from all parts of the electromagnetic spectrum, but may be of
great use in statistical and empirical studies of SNRs
residing in other galaxies, which are unresolved and
often have only radio data available. As already mentioned,
Galactic SNR samples are strongly influenced by selection effects
and cannot be used in statistical and empirical studies
of SNR evolution properties. For now, the only SNR samples
that can be used for reliable statistical and empirical studies
reside in other galaxies. With these samples, the obtained
values of $H$ will probably be overestimated by a factor of 2,
but accurate to an order of magnitude, as in this paper. In
the next section we discuss the results on the magnetic
field evolution obtained when our method is applied to SNRs in
M82. This should illustrate how the method can be used for
getting closer insight into SNR evolution properties, i.e.
SNR evolution phases, and how it can be used to check the
validity of the equipartition assumption.
\subsection{Magnetic Field Evolution of SNRs in M82}
If the sample is statistically reliable, the obtained $H$ may
be overestimated, but chosen REC parameter values should
not have a significant effect on $\delta$. The difference
between $\delta$ obtained from classical and revised methods is
mainly due to the exponents in equations (7) and (12). These
exponents will be equal for an average spectral index
$\alpha=0.5$. For SNRs in M82 $\bar{\alpha}=0.6$ is used, and
therefore we obtain slightly different slopes in the $H-D$ plane. Berkhuijsen (1986) suggested that $\alpha$ could depend on the density of the ISM in which the SNRs evolve as $\alpha=(0.075\pm 0.024)\log n_\mathrm{0}+(0.538\pm 0.012)$, where $n_\mathrm{0}$ is the density of the ISM. According to Eq. (17), this means that the lower track in Figure 1, which consists of SNRs evolving in low-density environments, should have a somewhat shallower slope than the track above it (high-density environments). However, considering Eq. (17) and the above relation, it is clear that for typical values of $\alpha$ and $n_\mathrm{0}$ there will be no significant effect on $\delta$. Consequently, the tracks in Fig. 1 can be considered parallel.
Taking all of the above into account, we conclude that $\delta$ is
strongly affected by the assumptions regarding $n_\mathrm{e,0}$.
The ``directly'' obtained values for $\delta$ of 0.125 and 0.875 (Table
1) are only illustrative. Shklovsky's model has a rather
historical significance, since it assumes no additional particle
acceleration (by the shock) during the evolution (besides the initial
acceleration in the supernova explosion). This leaves us with the
equipartition as our best assumption.
Table 1 shows that the equipartition arguments combined with the
possible $L_\nu-D$ dependence give $\delta\approx1.2$. This value
is slightly lower than the theoretical value $\delta=1.5$ obtained
under the equipartition and adiabatic approximations (Section 2.5). If
SNRs in M82 are young, in early Sedov or free expansion phase,
this difference can be explained by the sensitivity selection
effects related to the M82 sample. The Monte Carlo
simulations in Uro{\v s}evi{\' c} et al. (2005) show that
the measured slopes of extragalactic surface brightness to
diameter ($\Sigma_\nu-D$) relations are shallower due to the
sensitivity selection effects. Therefore, the apparent $\Sigma_\nu-D$ (
and $L_\nu-D$) slope for M82 is lower than the real slope. The lower
$L_\nu-D$ slope gives lower $\delta$. This means that
equipartition arguments for the SNRs in M82 sample may still
be applicable, whereas a small difference between the
theoretical and empirical $\delta$ can be ascribed to
selection effects.
On the other hand, $\delta=1.2$ might indicate that not all SNRs
from the M82 sample are in the equipartition state. If, for example,
the larger ones are in the late Sedov phase, where the magnetic field
remains constant (BV), the empirical $\delta$ would be a compromise
between the values 0 and 1.5. The evolutionary status of SNRs
remains a great uncertainty. The SNRs in M82 may be in the
free expansion, the Sedov, or even the
radiative phase. Chevalier \& Fransson (2001) proposed that M82
SNRs may be in the radiative phase because they evolve in
a very dense environment. In this case, $\delta$ may be
1.25, close to the empirical value. Like the previous ones,
this scenario, too, remains uncertain.
\subsection{Interstellar Magnetic Field in M82}
Condon (1992) estimated the field strength in M82 to be
$H\approx100~\mathrm{\mu G}$ from the classical minimum energy
calculation, considering that the central emitting region of M82 is
$30''\times10''$ and probably 0.5 kpc thick. Hargrave (1974)
estimated the central emitting region in M82 to be $50''\times
15''$. Using the revised equipartition, we estimated a value of
${\approx 190}~\mathrm{\mu G}$ for the average interstellar
magnetic field in the central emitting region of M82, based on the
data $S_\mathrm{1.4~GHz}=8.2~ \mathrm{Jy}$ and $\alpha=0.68$ from
Klein et al. (1988). We assumed that $f=1$ and that M82 radiates
mainly from its central region of $\approx500~\mathrm{pc}$ in
diameter. This estimate is rough and should be taken with
some reserve. Such an ISM magnetic field strength is among the
highest found when compared with other galaxies.
This, however, may imply that the M82 central region
contains interstellar matter in the form of very dense molecular
clouds. This is consistent with the high values of $H$ in M82
SNRs, supporting the possibility that their luminous
synchrotron emission is mainly due to very dense environments and
not due to pulsar driven wind nebulae.
The values of up to 10 mG for $H$ in the M82 SNRs, however, imply
that the magnetic field is strongly amplified above the average
ISM values of $100-200~\mathrm{\mu G}$.
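The luminosity that enters the equipartition estimate follows from the quoted flux density and distance via $L_\nu = 4\pi d^2 S_\nu$; this sketch shows the conversion (the unit constants are our assumptions, not from the paper):

```python
import math

JY = 1e-23            # erg s^-1 cm^-2 Hz^-1 per jansky (assumed conversion)
KPC = 3.086e21        # cm per kiloparsec (assumed conversion)

S_nu = 8.2 * JY       # S_1.4GHz of the M82 central region (Klein et al. 1988)
d = 3.9e3 * KPC       # distance to M82, 3.9 Mpc (as in Table 2)

L_nu = 4 * math.pi * d**2 * S_nu       # monochromatic luminosity
print(f"L_nu = {L_nu:.2e} erg/s/Hz")   # -> L_nu = 1.49e+29 erg/s/Hz
```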
\section{Conclusions}
In this paper we presented and discussed a method for
the determination of the magnetic field evolution pattern in SNRs
only from radio luminosity data samples. Such samples are the
only ones available for statistical and empirical studies of SNR
evolution properties. The best sample, for now, consists of SNRs
in M82, since these remnants seem to evolve in similar
environment and share similar properties, and are not severely
influenced by selection effects.
In order to calculate $H$ from REC we were forced to make some
assumptions. The only assumption with a significant effect on the
values of $H$ is the replacement of $I_\mathrm{\nu}$
in the REC formulas of BK with $L_\mathrm{\nu}$, done in
order to apply the REC to practically point-like sources. The
other assumptions are less important because of the small
exponent in REC equations. Obtained under equipartition
assumption, $\delta$ is a direct consequence of $L_\mathrm{\nu}-D$
slope and has a reasonable theoretical explanation. None of the
assumptions changes the evolutionary picture in the $L_\mathrm{\nu}-D$
plane. This means that our empirical estimate of $\delta$ is
likely to be reliably determined. When compared with the
more reliable values found in the literature, the obtained $H$ values
appear to be overestimated by approximately a factor of 2.
We conclude that $H$ values for all SNRs, even the ones from M82,
are accurate to an order of magnitude.
To answer whether or not M82 SNRs are in equipartition state we
have compared empirical $\delta$ obtained by our method with the
theoretical values. The empirically obtained $\delta$ from
$L_\mathrm{\nu}-D$ correlation under the equipartition assumption
is probably theoretically explainable by the following two
scenarios:
(i) The slight difference between the theoretically derived
$H-D$ slope ($\delta=1.5$) under the adiabatic approximation and
the equipartition assumption, and the slope obtained in this paper
using the empirical $L_\mathrm{\nu}-D$ correlation and REC
($\delta\approx1.2$) can be explained by the sensitivity selection
effects which affected the sample of SNRs in M82. In this way, the
starting assumption concerning the approximate equipartition
between the energy stored in the relativistic particles and
in the magnetic field, could be justified. Therefore, we can
conclude that SNRs in the M82 sample are probably close to
the equipartition state.
(ii) Finally, equipartition conditions may not be fulfilled for
all remnants. If, for instance, they are in different stages
of evolution, $\delta$ may be between 0 and 1.5.
If the SNRs are in the adiabatic phase, the most probable explanation
for the lower empirically obtained value of $\delta$ is the
sensitivity selection effects in the M82 sample, perhaps in
combination with a slight deviation from equipartition, but the
problem is the unresolved evolutionary status of the M82 SNRs.
Additional observations of SNRs in nearby starburst galaxies are
needed for any firmer conclusions to be made.
\begin{Code}
\begin{table*}
\caption{SNRs DATA\tabnotemark{a} AND RESULTS \label{label}}
\begin{center}
\small
\begin{tabular}{@{\extracolsep{-0.7mm}}p{2.4cm}p{1.7cm}p{1.0cm}cccccccc@{}}
\hline\hline
Catalog&Other&Type\tabnotemark{1}&$D$&$S_\mathrm{1}$&$\alpha$&Distance&$H^\mathrm{eqp}$&$H_\mathrm{rev}^\mathrm{eqp}$&$H_\mathrm{l}$\\
name&name&&&\small{flux density}&&&\small{class.}&\small{rev.}&\small{literature}\\
&&&&\small{at 1 GHz}&&&\small{equip.}&\small{equip.}&\\
&&&(pc)&(mJy)&&(kpc)&(G)&(G)&(G)&\\
\hline
M82 39.1+57.4&\nodata&MC&0.9&8.28&0.50&$3.9\times10^3$&6.03E-03&8.76E-03&\nodata\\
M82 39.4+56.1&\nodata&MC&3.23&4.25&0.58&$3.9\times10^3$&1.68E-03&2.10E-03&\nodata\\
M82 39.6+53.4&\nodata&MC&2.65&2.68&0.45&$3.9\times10^3$&1.74E-03&2.96E-03&\nodata\\
M82 40.6+56.1&\nodata&MC&3.02&4.97&0.72&$3.9\times10^3$&1.94E-03&2.24E-03&\nodata\\
M82 40.7+55.1&\nodata&MC&1.93&15.56&0.58&$3.9\times10^3$&3.78E-03&4.64E-03&\nodata\\
M82 41.3+59.6&\nodata&MC&1.02&6.19&0.52&$3.9\times10^3$&4.99E-03&6.85E-03&\nodata\\
M82 42.7+55.7&\nodata&MC&4.30&6.10&0.71&$3.9\times10^3$&1.51E-03&1.78E-03&\nodata\\
M82 42.8+61.3&\nodata&MC&1.97&3.58&0.63&$3.9\times10^3$&2.47E-03&2.92E-03&\nodata\\
M82 43.2+58.4&\nodata&MC&1.05&12.61&0.66&$3.9\times10^3$&6.11E-03&6.83E-03&\nodata\\
M82 43.3+59.2&\nodata&MC&0.60&29.54&0.68&$3.9\times10^3$&1.27E-02&1.35E-02&\nodata\\
M82 44.3+59.3&\nodata&MC&1.96&5.46&0.64&$3.9\times10^3$&2.80E-03&3.27E-03&\nodata\\
M82 44.5+58.2&\nodata&MC&2.25&3.55&0.50&$3.9\times10^3$&2.16E-03&3.13E-03&\nodata\\
M82 45.2+61.3&\nodata&MC&1.12&19.54&0.67&$3.9\times10^3$&6.58E-03&7.28E-03&\nodata\\
M82 45.3+65.2&\nodata&MC&2.05&5.80&0.82&$3.9\times10^3$&2.96E-03&3.32E-03&\nodata\\
M82 45.4+67.4&\nodata&MC&2.23&5.01&0.67&$3.9\times10^3$&2.47E-03&2.86E-03&\nodata\\
M82 45.8+65.3&\nodata&MC&2.13&3.74&0.46&$3.9\times10^3$&2.30E-03&3.79E-03&\nodata\\
M82 45.9+63.9&\nodata&MC&2.22&4.25&0.41&$3.9\times10^3$&2.32E-03&4.70E-03&\nodata\\
M82 46.5+63.9&\nodata&MC&1.39&6.93&0.74&$3.9\times10^3$&4.18E-03&4.60E-03&\nodata\\
M82 46.7+67.0&\nodata&MC&2.95&4.39&0.76&$3.9\times10^3$&1.94E-03&2.25E-03&\nodata\\
M82 41.9+58.0&\nodata&MC&0.52&154.96&0.75&$3.9\times10^3$&2.38E-02&2.32E-02&\nodata\\
M82 44.0+59.6&\nodata&MC&0.79&54.89&0.48&$3.9\times10^3$&1.16E-02&1.80E-02&\nodata\\\hline
G 111.7-2.1&Cas A&O&4.9&$2720\times10^3$&0.77&3.4&1.02E-03&1.23E-03&5.5E-04\tabnotemark{b}\\
G 260.4-3.4&Pup A&O&35.2&$130\times10^3$&0.5&2.2&5.73E-05&8.32E-05&\nodata\\
LMC 0525-69.6&N132 D&O&25&5800&0.7&55&2.07E-04&2.71E-04& $<$ 4E-05\tabnotemark{c}\\
SMC 0103-72.6&\nodata&O&55&250&0.5&65&4.53E-05&6.58E-05&\nodata\\
NGC 4449&\nodata&O&0.6&20&0.75&4200&1.22E-02&1.25E-02&\nodata\\\hline
G 42.8+0.6&\nodata&MC&76.8&$3\times10^3$&0.5&11&2.51E-05&3.64E-05&\nodata\\
G 78.2+2.1&$\gamma$ Cygni&MC&20.9&$340\times10^3$&0.5&1.2&8.34E-05&1.21E-04&\nodata\\
G 84.2-0.8&\nodata&MC&23.6&$11\times10^3$&0.5&4.5&6.00E-05&8.71E-05&\nodata\\
G 89.0+4.7&HB 21&MC&24.2&$220\times10^3$&0.4&0.8&5.20E-05&9.87E-05&\nodata\\
G 132.7+1.3&HB 3&MC&51.2&$45\times10^3$&0.6&2.2&3.10E-05&4.24E-05&\nodata\\
G 166.2+2.5&OA 184&MC&183.8&$11\times10^3$&0.57&8&1.43E-05&2.08E-05&\nodata\\
G 309.8+0.0&\nodata&MC&23&$17\times10^3$&0.5&3.6&6.11E-05&8.88E-05&\nodata\\
G 315.4-2.3&MSH 14-63&MC&28.1&$49\times10^3$&0.6&2.3&5.45E-05&7.34E-05&\nodata\\
G 349.7+0.2&\nodata&MC&8.7&$20\times10^3$&0.5&14.8&3.3E-04&4.80E-04&3.5E-04\tabnotemark{d}\\\hline
G 4.5+6.8&Kepler&B&2.4&$19\times10^3$&0.64&2.9&3.95E-04&4.97E-04&2.15E-04\tabnotemark{b}\\
G 120.1+1.4&Tycho&B&5&$56\times10^3$&0.61&2.3&2.49E-04&3.21E-04&3E-04\tabnotemark{b}\\
G 327.6+14.6&SN 1006&B&19&$19\times10^3$&0.6&2.2&5.67E-05&7.62E-05&1.6E-04\tabnotemark{b}\\
LMC 0505-67.9&DEM L71&B&19&9&0.5&55&3.96E-05&5.75E-05&\nodata\\
LMC 0509-68.7&N103 B&B&7&1100&0.6&55&3.72E-04&4.75E-04&\nodata\\
LMC 0509-67.5&\nodata&B&7&70&0.5&55&1.68E-04&2.43E-04&\nodata\\
LMC 0519-69.0&\nodata&B&8&150&0.5&55&1.86E-04&2.70E-04&\nodata\\
LMC 0548-70.4&\nodata&B&25&100&0.6&55&6.29E-05&8.44E-05&\nodata\\
SMC 0104-72.3&\nodata&B&29&12&0.5&65&3.29E-05&4.78E-05&\nodata\\\hline\hline
\end{tabular}
\end{center}
\vspace{3mm} \footnotesize{ Notes: $^\mathrm{a}$M82 data are taken
from Table A.1 in Uro{\v s}evi{\' c} et al. (2005) with
$S_\mathrm{1}$ being scaled from 1.4 to 1 GHz. The rest of the
used data are same as in papers of Arbutina et al. (2004) and
Arbutina \& Uro{\v s}evi{\' c} (2005), with data of Galactic MC
SNRs being updated for distances from Green (2004);
$^\mathrm{b}$V{\" o}lk et al. (2005); $^\mathrm{c}$Dickel \& Milne (1995); $^\mathrm{d}$Brogan et al. (2000); $^\mathrm{1}$MC -- Associated with giant molecular clouds, O -- Oxygen-rich, B -- Balmer-dominated.
}
\end{table*}
\end{Code}
\acknowledgments
The authors thank Dragana Momi{\'c} and Ivanka Mutavd{\v z}i{\'c} for careful reading and correction of the manuscript, and an anonymous referee for comments and suggestions. The authors would also like to thank
Prof. Rainer Beck for useful comments on the manuscript. This paper
is a part of the project "Gaseous and Stellar Components of
Galaxies: Interaction and Evolution" (No. 146012) supported by the
Ministry of Science and Environmental Protection of Serbia.
\section{Introduction}
Our Sun shows variability on a number of time scales, the most pronounced being: the 5-minute p-mode oscillations, the 25-30 day rotational modulation and the 11-year solar cycle \citep{2017NatAs...1..612S}. While we have witnessed significant progress in our understanding of the first two phenomena over the last decades, our knowledge of the 11-year solar cycle is still deficient. This is especially true of our understanding of the long-term modulation of the amplitude of the 11-year solar cycle, where the occurrence of so-called grand minima, like the 17th century Maunder Minimum, constitutes a huge challenge \citep{1976Sci...192.1189E}. This challenge can be met by making better observations and better models of the Sun, and by comparing these to observations and models of other stars \citep{2008ssma.book.....S}.
Most of the important information we have on the 11-year solar cycle comes from either direct observations of sunspots or from observations of standing oscillations inside the Sun through helioseismology \citep{2002RvMP...74.1073C, 2004SoPh..224..217G}. With helioseismology we have learned about the differential rotation pattern of the Sun's deep interior and about the meridional circulation in the Sun's convection zone.
For other Sun-like stars (stars with masses and radii similar to the Sun), we have so far mainly obtained information on their magnetic cycles through observations of the temporal evolution of the emission in the Ca~{\sc ii} $H$ and $K$ lines, as has been done from the Mount Wilson and Lowell observatories \citep{1995ApJ...438..269B, 2005PASP..117..657W}. Expectations were therefore high when the $Kepler$ mission was launched on 7 March 2009. Though the nominal mission lifetime was only 3.5 years, there was in principle nothing that would prevent the mission from being operational for 10+ years, or so we thought. A 10+ years mission would have allowed us not only to detect stellar cycles in a large number of Sun-like stars, it would also have allowed us to use asteroseismology to sound the effect these cycles would have on the deep interior, or vice versa. This information could be used to test and improve the models we have for the 11-year solar cycle.
In 2009 we therefore started the {\it Sounding stellar cycles with Kepler} program, where we made dual observations of 20 carefully selected Sun-like stars from both the Nordic Optical Telescope (NOT) and $Kepler$. We have earlier presented the strategy behind this program \citep[][Paper I]{2009MNRAS.399..914K} and the first results from the measurements of chromospheric emission \citep[][Paper II]{2013MNRAS.433.3227K}. Here we present the full set of measurements made over the course of the $Kepler$ mission from early 2010 to late 2014.
As $Kepler$ was only able to point to the Cygnus field for 4 years, studies of rotation of Sun-like stars experienced more progress than studies of stellar cycles of Sun-like stars. From early on in the mission, surface rotation was identified in a large number of, mainly active, stars \citep{2013ApJ...775L..11M, 2013A&A...557L..10N, 2014ApJS..211...24M, 2014A&A...572A..34G, 2016MNRAS.461..497B, 2017A&A...605A.111C} and there were even indications of differential rotation \citep{2013A&A...560A...4R, 2014A&A...569A.113B, 2015A&A...583A..65R, 2016MNRAS.463.1740B, 2018ApJ...852...46K, 2018Sci...361.1231B, 2018arXiv181008630B}, though the possibility of accurately extracting differential rotation from photometry has been called into question \citep{2015MNRAS.450.3211A, 2018ApJ...865..142B}. The main result of these measurements was that the observations agreed nicely with theory, especially for the behaviour of differential rotation \citep{2013A&A...560A...4R}. The new observations did, however, also indicate that stars with a relative main-sequence age older than the Sun lose angular momentum more slowly than generally predicted by stellar evolution theory \citep{2016Natur.529..181V}. Though this result has received some criticism, mainly focusing on selection bias and underestimation of the age uncertainties \citep{2016AN....337..810B}, it has later been confirmed by another independent study \citep{2016A&A...592A.156D}. Lately, a number of theoretical explanations for the small angular momentum loss of old solar-type stars have also emerged \citep{2016ApJ...826L...2M, 2017ApJ...845...79B, 2017SoPh..292..126M, 2017ApJ...848...43J}.
Based on asteroseismic analysis of observations from the CoRoT satellite, there were early claims of stellar cycles as short as 120 days \citep{2010Sci...329.1032G}. We now know that $F$-type stars slightly hotter than the Sun, with very thin outer convective zones, tend to show irregular variability with characteristic time scales of a few hundred days and not bona fide cycles \citep{2013A&A...550A..32M, 2014A&A...562A.124M, 2015A&A...583A.134F, 2016A&A...589A.103R}. Great care should thus be taken when claiming the detection of any cycles with only 4 years of $Kepler$ observations. It is, however, very interesting to compare the variability in such different activity indicators, as it can teach us something about the physical mechanisms responsible for generating the variability \citep{2016A&A...596A..31S, 2017ApJ...834..207M}. In particular, it is interesting to compare magnetic activity indicators originating from the surface to indicators originating from the interior. It is therefore very fortunate that a number of recent studies have been able to use asteroseismology to trace magnetic variability \citep{2017A&A...598A..77K, 2018A&A...611A..84S, 2018ApJS..237...17S}.
As $Kepler$ was designed for exoplanet detection, the photometry is not suited for identifying stellar activity cycles or the effect of stellar activity. The main reason for this is that the mean level of the photometry is normalised by the calibration method every three months. A solution to this was found by \citet{2017ApJ...851..116M}, who employed the so-called full-frame images for reconstruction of the photometric long-term variability. They used these measurements to show a transition from spot-dominated to faculae-dominated variability for rotation periods between 15 and 25 days. Another solution was found by \citet{2017A&A...603A..52R}, who used the variability rather than the intensity to search for stellar cycles in active solar-type stars. Both studies suffered from the short lifetime of the $Kepler$ mission and were thus not able to identify bona fide stellar cycle variability in any of their targets.
The issue with the short lifetime of the $Kepler$ mission was, however, not a problem for the high-metallicity Sun-like star KIC 8006161. This star was not only part of our NOT program, it was also observed as part of the Mount Wilson HK project \citep{1995ApJ...438..269B} and the California Planet Search program \citep{2005PASP..117..657W}. We were thus able to reconstruct a time series of the chromospheric activity measurements dating back to the mid-nineties (with even a few data points in the late seventies and early eighties), which allowed us to measure a bona fide cycle period of $7.41\pm1.16$ years. As the rising phase of the last cycle was covered by observations from the $Kepler$ mission, this allowed us to compare the realisation of the magnetic activity in different activity indicators related to the chromosphere, photosphere and interior. The relation between these indicators agreed very nicely with theoretical predictions for this high-metallicity star, indicating that the cycle in KIC 8006161 does indeed have the same nature as the solar cycle \citep{2018ApJ...852...46K}.
This study continues the work in Paper I and Paper II. In Paper I we presented the idea behind the {\it Sounding stellar cycles with Kepler} program and discussed the strategy for selecting targets. In Paper II we presented the first measurements of chromospheric emission and analysed the dependency between the mean values of these measurements and fundamental stellar parameters obtained from asteroseismology. At the time of the submission and publication, we expected that $Kepler$ would continue the observations in the Cygnus field for 10+ years. Shortly after the publication of Paper II the second reaction wheel failed and the Cygnus field was abandoned. As the idea from the beginning of the program was that we would only make simultaneous observations with the NOT as long as $Kepler$ was operational, we thus also interrupted our NOT program -- meaning that we stopped submitting proposals. In this study we thus present 4 years of simultaneous observations of magnetic activity related variability in our 20 target stars.
The main scope of this study is to present the measurements of chromospheric variability from the NOT made simultaneously with the observations by $Kepler$. In order to evaluate the information content in the measurements we also compare our measurements of chromospheric activity variability with asteroseismic measurements \citep{2018ApJS..237...17S} and photometric measurements \citep{2017ApJ...851..116M}.
Though we did interrupt our NOT program when the second reaction wheel on $Kepler$ failed, we have continued working on a solution for multidecadal observations of the 20 targets, with a dedicated spectrograph installed at the Hertzsprung Stellar Observations Network Group (SONG) telescope at Teide Observatory \citep{2017ApJ...836..142G}. We will provide a brief overview of this idea and discuss the lessons learned from our NOT program with respect to S/N, sampling and observation time span.
\section{Observations}
The spectrographic observations are described in Paper II. They were obtained with the high-resolution FIbre-fed Echelle Spectrograph (FIES) mounted on the NOT \citep{2000mons.proc..163F}.
The target list is provided in Table 1. Most stars were observed at 3 epochs per year in 2010, 2011, 2012 and 2013 (12 epochs in total). The low-resolution fibre (R=25,000) was used for the observations. The spectra were obtained with 7-minute exposures, resulting in a S/N above 100 at the blue end of the spectrum for the faintest stars.
The spectra were reduced as described in Paper II using FIEStool\footnote{http://www.not.iac.es/instruments/fies/fiestool/FIEStool.html}.
\begin{table*}
\caption{Target list for the {\it Sounding stellar cycles with Kepler} programme. We also list the Kepler magnitude and $B-V$ values from \citet{2000A&A...355L..27H} and [Fe/H] from \citet{2012MNRAS.423..122B}. The uncertainty on [Fe/H] is 0.06 dex.}
\centering
\begin{tabular}{lcccccccc}
\hline \hline
KIC ID & $\alpha$ (2000) & $\delta$ (2000) & $k_p$ & $B-V$ & [Fe/H] & $\sigma S$ & $\sigma I$ (ppm) & $\sigma ( \delta \nu)$ ($\mu$Hz)\\
\hline
01435467 & 19:28:19.84 & 37:03:35.3 & 8.9 & 0.47$\pm$0.02 & -0.01 & 0.011 & 634 & 0.24\\
02837475 & 19:10:11.62 & 38:04:55.9 & 8.4 & 0.43$\pm$0.02 & -0.02 & 0.018 & 617 & 0.34\\
03733735 & 19:09:01.92 & 38:53:59.6 & 8.4 & 0.41$\pm$0.02 & -0.04 & 0.007 & 1057 & --\\
04914923 & 19:16:34.88 & 40:02:50.1 & 9.4 & 0.62$\pm$0.03 & 0.17 & 0.015 & 658 & 0.07\\
06116048 & 19:17:46.34 & 41:24:36.6 & 8.4 & 0.57$\pm$0.01 & -0.24 & 0.013 & 1919 & 0.11\\
06603624 & 19:24:11.18 & 42:03:09.7 & 9.0 & 0.76$\pm$0.03 & 0.28 & 0.018 & 843 &0.04\\
06933899 & 19:06:58.34 & 42:26:08.2 & 9.6 & 0.59$\pm$0.04 & 0.02 & 0.019 & 736 & 0.059\\
07206837 & 19:35:03.72 & 42:44:16.5 & 9.7 & 0.46$\pm$0.06 & 0.14 & 0.020 & 1354 & 0.227\\
08006161 & 18:44:35.14 & 43:49:59.9 & 7.3 & 0.87$\pm$0.01 & 0.34 & 0.018 & 1941 & 0.276\\
08379927 & 19:46:41.28 & 44:20:54.7 & 6.9 & 0.58$\pm$0.01 & & 0.009 & 1316 & 0.161\\
08694723 & 19:35:50.58 & 44:52:49.8 & 8.8 & 0.48$\pm$0.02 & -0.59 & 0.014 & 1741 & 0.116\\
09098294 & 19:40:21.20 & 45:29:20.9 & 9.7 & 0.68$\pm$0.08 & -0.13 & 0.017 & 1246 & 0.071\\
09139151 & 18:56:21.26 & 45:30:53.1 & 9.1 & 0.52$\pm$0.03 & 0.11 & 0.013 & 1100 & 0.124\\
09139163 & 18:56:22.12 & 45:30:25.2 & 8.3 & 0.49$\pm$0.01 & 0.15 & 0.009 & 1198 & 0.287\\
10124866 & 18:58:03.46 & 47:11:29.9 & 7.9 & 0.57$\pm$0.02 & -0.30 & 0.015 & 756 & --\\
10454113 & 18:56:36.62 & 47:39:23.0 & 8.6 & 0.52$\pm$0.02 & -0.06 & 0.009 & 1151 & 0.221\\
11244118 & 19:27:20.48 & 48:57:12.1 & 9.7 & 0.78$\pm$0.05 & 0.35 & 0.016 & 1312 & --\\
11253226 & 19:43:39.62 & 48:55:44.2 & 8.4 & 0.39$\pm$0.02 & -0.08 & 0.011 & 1307 & 0.225\\
12009504 & 19:17:45.80 & 50:28:48.2 & 9.3 & 0.55$\pm$0.03 & -0.09 & 0.012 & 952 & 0.138\\
12258514 & 19:26:22.06 & 50:59:14.0 & 8.0 & 0.59$\pm$0.01 & 0.04 & 0.012 & 1025 & 0.093\\
\hline
\end{tabular}
\label{tab1}
\end{table*}
\section{Analysis}
The most common activity indicator utilising the Ca~{\sc ii} $H$ and $K$ lines is the so-called $S$ index \citep{1991ApJS...76..383D}:
\begin{equation}
S=\alpha \cdot \frac{H+K}{R+V}
\end{equation}
where $H$ and $K$ are the recorded counts in 1.09 {\AA} full-width at half-maximum triangular bandpasses centred on the Ca~{\sc ii} $H$ and $K$ lines at 396.8 and 393.4 nm, respectively. $V$ and $R$ are two 20 {\AA} wide reference bandpasses centred on 390.1 and 400.1 nm, respectively, while $\alpha$ is a normalisation constant.
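As an illustration, the $S$-index computation of Eq.~(1) can be sketched as follows. This is a minimal sketch assuming a reduced one-dimensional spectrum given as wavelength (in {\AA}ngstr{\"o}m) and flux arrays; the placeholder normalisation \texttt{ALPHA} and the function names are illustrative and not part of our reduction pipeline.

```python
# Minimal sketch of the S-index computation in Eq. (1). ALPHA is a
# placeholder for the instrument-specific normalisation constant.

ALPHA = 1.0  # placeholder; fixed by calibration against Mount Wilson stars

def tri_weight(wl, centre, fwhm=1.09):
    """Triangular bandpass weight with the given FWHM (in Angstrom)."""
    return max(0.0, 1.0 - abs(wl - centre) / fwhm)

def s_index(wavelength, flux):
    """Instrumental S index: triangular 1.09 A FWHM bands in the H and K
    line cores, 20 A wide rectangular reference bands V and R."""
    H = K = V = R = 0.0
    for wl, f in zip(wavelength, flux):
        K += f * tri_weight(wl, 3933.66)   # Ca II K core (393.4 nm)
        H += f * tri_weight(wl, 3968.47)   # Ca II H core (396.8 nm)
        if abs(wl - 3901.07) <= 10.0:      # blue reference band (390.1 nm)
            V += f
        if abs(wl - 4001.07) <= 10.0:      # red reference band (400.1 nm)
            R += f
    return ALPHA * (H + K) / (R + V)
```

Adding chromospheric emission in the line cores increases $H+K$ while leaving the reference bands unchanged, so $S$ grows with activity.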
The normalisation constant ($\alpha$) can be obtained by measuring a number of stars that were part of the Mount Wilson HK project \citep{1991ApJS...76..383D}; the calibration does not have to be linear \citep{2010ApJ...725..875I}. This approach was, however, not followed in Paper II, as there was only one star in common between the Mount Wilson HK project and our targets. Instead, the excess flux $\Delta\mathcal{F}_{\rm Ca}$ was used as the activity indicator. The excess flux is defined as the surface flux arising from magnetic sources, and is calculated by subtracting the photospheric flux and the so-called basal flux from the flux in the Ca~{\sc ii} $H$ and $K$ lines.
In \citet{2018ApJ...852...46K}, however, we identified 21 stars that were observed both by the California Planet Search program \citep{2005PASP..117..657W, 2010ApJ...725..875I} and by some of our programs at the NOT (including the targets of both Paper I and Paper II). These stars were used by \citet{2018ApJ...852...46K} to calculate a linear transformation between the instrumental $S$ indices measured with the NOT and the $S$ indices measured as part of the California Planet Search program. In order to minimise numerical effects in the calculation of the $S$ indices, all spectra from the NOT and the Keck telescope were reanalysed using the same code. The uncertainties of the NOT measurements were obtained using nights with multiple observations, yielding the following relation between the uncertainty of the mean value of the chromospheric activity measured that night and the S/N: $\sigma = 0.011/\sqrt{\rm S/N}$. An additional flat noise term of 0.002 was added in quadrature \citep{2011arXiv1107.5325L}.
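The per-night uncertainty relation above can be written out numerically as follows (a small sketch; the function name is illustrative):

```python
import math

def s_index_uncertainty(snr, scale=0.011, floor=0.002):
    """Per-night S-index uncertainty: the empirical photon-noise term
    0.011/sqrt(S/N) with a flat noise floor of 0.002 added in quadrature
    (values from the NOT calibration described in the text)."""
    sigma_photon = scale / math.sqrt(snr)
    return math.sqrt(sigma_photon ** 2 + floor ** 2)
```

For high S/N the uncertainty is limited by the flat 0.002 floor, while at low S/N the photon term dominates.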
Our calibrated $S$ indices can be used to calculate the excess flux \citep{1984A&A...130..353R}:
\begin{equation}
\mathcal{F}_{\textup{1\text{\normalfont\AA}}}=10^{-14}S\cdot C_{\rm cf}T_{\rm eff}^4,
\end{equation}
where $C_{\rm cf}$ is the conversion factor given by \citet{1982A&A...107...31M}:
\begin{equation}
\log C_{\rm cf}=0.25(B-V)^3-1.33(B-V)^2+0.43(B-V)+0.24.
\end{equation}
A number of asteroseismic estimates are available for the effective temperatures; here we recommend the effective temperatures provided in \citet{2017A&A...601A..67C}.
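Equations~(2) and (3) can be combined into a short numerical sketch (function names are illustrative; the $S$ index, $B-V$ colour and effective temperature are assumed given):

```python
def log_ccf(bv):
    """Eq. (3): log of the Middelkoop (1982) conversion factor C_cf,
    as a polynomial in the B-V colour."""
    return 0.25 * bv ** 3 - 1.33 * bv ** 2 + 0.43 * bv + 0.24

def flux_1A(s, bv, teff):
    """Eq. (2): chromospheric flux in the 1 A bands from the calibrated
    S index, the B-V colour and the effective temperature (K)."""
    return 1e-14 * s * 10 ** log_ccf(bv) * teff ** 4
```

Since $C_{\rm cf}$ and $T_{\rm eff}$ are fixed for a given star, the flux scales linearly with the measured $S$ index.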
Our calibrated $S$ indices can also be used to calculate the so-called $R'_{\rm HK}$ activity indicator, defined as:
\begin{equation}
R'_{\rm HK}=R_{\rm HK}-R_{\rm phot},
\end{equation}
where $R_{\rm phot}$ is the photospheric contribution to the indicator, which can be calculated using the $B-V$ colour index \citep{1984ApJ...279..763N}:
\begin{equation}
\log R_{\rm phot} = -4.898+1.918(B-V)^2-2.893(B-V)^3,
\end{equation}
and $R_{\rm HK}$ is calculated based on the $S$ indices and the conversion factor given above:
\begin{equation}
R_{\rm HK}=1.34\cdot10^{-4}\,C_{\rm cf}\,S.
\end{equation}
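The chain from the calibrated $S$ index to $R'_{\rm HK}$ (Eqs.~(3)--(6)) can be sketched numerically as follows (function names are illustrative):

```python
def log_ccf(bv):
    """Eq. (3): log of the Middelkoop (1982) conversion factor."""
    return 0.25 * bv ** 3 - 1.33 * bv ** 2 + 0.43 * bv + 0.24

def r_hk(s, bv):
    """Eq. (6): R_HK from the calibrated S index and B-V colour."""
    return 1.34e-4 * 10 ** log_ccf(bv) * s

def log_r_phot(bv):
    """Eq. (5): photospheric contribution (Noyes et al. 1984)."""
    return -4.898 + 1.918 * bv ** 2 - 2.893 * bv ** 3

def r_prime_hk(s, bv):
    """Eq. (4): chromospheric activity indicator R'_HK."""
    return r_hk(s, bv) - 10 ** log_r_phot(bv)
```

For solar-like input values ($S \simeq 0.17$, $B-V \simeq 0.65$) this yields $\log R'_{\rm HK} \approx -4.96$, close to the canonical solar value.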
After various tests we decided to use the $S$ index as the activity indicator. Generally, the results do not depend significantly on whether the $S$ index or $R'_{\rm HK}$ is used as the activity indicator. The reason for this is that our 20 targets span a rather small range in effective temperature. Also, the focus of this study is the temporal variability of individual stars, so the calibration is a minor issue.
\section{Results}
The Sun shows a strong direct correlation between chromospheric activity, photometric flux and eigenmode frequencies \citep[see][and references therein]{2018ApJ...852...46K}. The reason for this is that the changes in all three parameters are caused by magnetic flux tubes rising up through the Sun. In the outermost regions of the near-surface convection zone they change the turbulent velocity, affecting the eigenmode frequencies \citep{2004ApJ...600..464D, 2005ApJ...625..548D}; in the photosphere they lead to dark sunspots and bright faculae; and in the chromosphere they lead to plages \citep[see][for a recent review]{2013ARA&A..51..311S}. For KIC 8006161 we were able to measure a similar effect for the rising phase of the cycle that started in 2010. We therefore search for similar correlations for the remaining 19 targets. This is done by comparing our measurements of chromospheric emission to the relative flux of the stars measured in the full-frame images (FFIs) with the method of \citet{2017ApJ...851..116M}, following an approach similar to that used for KIC 8006161 by \citet{2018ApJ...852...46K}. In this work, we include a larger region to search for suitable reference targets, encompassing 125 pixels from the observed image of the star on the detector, including its bleed trail, rather than 125 pixels from the stellar centroid. We note that for three targets, KIC 2837475, 6116048, and 10124866, faint background stars overlap with the PSF of the target star, which, if variable, may affect the photometry at the ppm level. The measurements of chromospheric emission are also compared to the eigenmode frequencies presented in \citet{2018ApJS..237...17S}. We use the frequency shifts calculated as the mean of the 5 central orders of the radial and dipolar modes \citep[see][for a detailed description]{2018ApJS..237...17S}. We present the comparison in Fig.~1. Three stars, KIC 3733735, KIC 10124866 and KIC 11244118, were not included in the analysis by \citet{2018ApJS..237...17S}. The comparison between the chromospheric emission and the relative flux of these stars is presented in Fig.~2.
\begin{figure*}
\centering
\hbox{\includegraphics[width=5.8cm]{FIG/01435467_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/02837475_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/04914923_3.eps}}
\vspace{0cm}
\hbox{\hspace{0cm}\includegraphics[width=5.8cm]{FIG/06116048_3.eps}
\includegraphics[width=5.8cm]{FIG/06603624_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/06933899_3.eps}}
\vspace{0cm}
\hbox{ \hspace{0cm}\includegraphics[width=5.8cm]{FIG/07206837_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/08006161_3.eps}
\includegraphics[width=5.8cm]{FIG/08379927_3.eps}}
\vspace{0cm}
\caption{Comparison of chromospheric activity, photometric flux and eigenmode frequency shifts for the 17 stars with measured eigenmode frequencies in \citet{2018ApJS..237...17S}. Tables with the measured parameters are provided in the supplementary material.}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\centering
\hbox{ \hspace{0cm}\includegraphics[width=5.8cm]{FIG/08694723_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/09098294_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/09139151_3.eps}}
\vspace{0cm}
\hbox{\includegraphics[width=5.8cm]{FIG/09139163_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/10454113_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/11253226_3.eps}}
\vspace{0cm}
\hbox{\hspace{0cm}\includegraphics[width=5.8cm]{FIG/12009504_3.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/12258514_3.eps}}
\caption{continued}
\end{figure*}
\begin{figure*}
\centering
\hbox{\includegraphics[width=5.8cm]{FIG/03733735_2.eps}
\hspace{0cm}\includegraphics[width=5.8cm]{FIG/10124866_2.eps}
\vspace{0cm}\includegraphics[width=5.8cm]{FIG/11244118_2.eps}}
\caption{Comparison of chromospheric activity and photometric flux for the 3 stars without measured eigenmode frequencies in \citet{2018ApJS..237...17S}. Tables with the measured parameters are provided in the supplementary material.}
\end{figure*}
Based on the results presented in Figs.~1 \& 2 we have calculated the standard deviations of each time series. These are provided in Table~1 and in Figs.~3--5 we show the correlation between these standard deviations of the different activity indicators.
We have also calculated the correlation between the chromospheric emission and the eigenmode frequencies (Fig.~6) and between the chromospheric emission and the relative flux (Fig.~7).
\begin{figure}
\includegraphics[width=8cm]{FIG/dssf.eps}
\caption{Relation between the standard deviation of the eigenmode frequencies $\sigma(\delta \nu)$ and that of the chromospheric emission $\sigma S$. Gray symbols mark stars with an $S$ index less than 0.15. Crosses mark G-type dwarfs and diamonds mark F-type dwarfs. The spectral-type classification has been made based on the effective temperatures.}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{FIG/dssl.eps}
\caption{Relation between the standard deviation of the relative flux $\sigma I$ and the chromospheric emission $\sigma S$. Symbols as in Fig.~3.}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{FIG/dsfl.eps}
\caption{Relation between the standard deviation of the relative flux $\sigma I$ and the eigenmode frequencies $\sigma(\delta \nu)$. Symbols as in Fig.~3.}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{FIG/cosf.eps}
\caption{Correlation between the eigenmode frequency shifts and the chromospheric emission as a function of the mean value of the chromospheric emission. Crosses mark G-type dwarfs and diamonds mark F-type dwarfs.}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{FIG/cosl.eps}
\caption{Correlation between the relative flux and the chromospheric emission as a function of the mean value of the chromospheric emission. Symbols as in Fig.~6.}
\end{figure}
We did test all results using $R'_{\rm HK}$ instead of the $S$ index and no significant differences were found.
\section{Discussions}
So far KIC 8006161 is the only star observed by $Kepler$ that is known to show a bona fide cycle comparable to the 11-year solar cycle \citep[see][for a detailed analysis of this star]{2018ApJ...852...46K}. In our analysis it is also the only star to show a correlation above the 95\% significance level between the variability in the chromospheric emission, the eigenmode frequency shifts and the relative flux.
KIC 10124866 shows a very weak correlation between the chromospheric emission and the relative flux, but the correlation is not above the 95\% significance level. No other star shows correlations between the different activity indicators that are above the 95\% significance level.
KIC~8379927 shows a nice correlation between the variability in the chromospheric emission and the eigenmode frequencies, but the significance is only 61\%. This star is known to be a spectroscopic binary with a $1743.3\pm2.5$ day period \citep{2007Obs...127..313G}. KIC~8379927 was flagged by \citet{2018ApJS..237...17S} as one of the stars that showed evidence of activity-related frequency shifts. Though the significance is very low, we agree with this interpretation: between 2010 and early 2012, KIC~8379927 shows a continuous rise and fall in the frequency shifts that is also seen in the chromospheric emission. It is, however, not clearly seen in the relative flux, though the measurements in 2011 seem to show larger than average scatter.
It is still too early to conclude whether the variability we see in KIC~8379927 is related to a stellar cycle. Looking at the combined frequency shifts and chromospheric emission, there could be hints of a 2-year periodic variability, with maxima in mid-2009, mid-2011 and early/mid-2013. The time series are, however, too short to make any claims about the significance of such periodic variability.
According to \citet{2017A&A...601A..67C}, KIC~8379927 is a Sun-like star with a radius of 1.105 R$_{\odot}$, a mass of 1.08 M$_{\odot}$ and an age of 1.65 Gyr. According to \citet{2014A&A...572A..34G}, the star has a rotation period of $16.99\pm1.35$ days. KIC~8379927 is, in other words, likely a very Sun-like star and we would therefore expect any periodic variability to be longer than 2 years \citep{2007ApJ...657..486B}. This, however, assumes a large mass difference between the two binary components.
Recently, there have been a number of claims of a biennial cycle in the Sun \citep{2010ApJ...718L..19F, 2012MNRAS.420.1405B, 2012A&A...539A.135S, 2013ApJ...765..100S}. What we are seeing in KIC~8379927 could thus be the manifestation of a second dynamo \citep{2010ApJ...718L..19F} comparable to the biennial cycle in the Sun. A number of other G-type dwarfs, like HD 190406, HD 78366 and HD 114710, also show such secondary biennial cycles \citep{1995ApJ...438..269B, 2016A&A...590A.133O}. Unlike the variability seen in the F-type dwarfs, these secondary biennial cycles seen in the more Sun-like G-type dwarfs could be true polarity-reversing solar-like activity cycles \citep{2018MNRAS.479.5266J}. More observations are, however, needed before we can make any solid claims. Zeeman-Doppler imaging would be especially useful here, as it can be used to evaluate if we see a polarity flip.
In general, the comparison of the standard deviations of the different activity indicators does not show any correlation (Figs.~3--5). In fact, there might be hints of an inverse correlation between the standard deviation of the frequency shifts and that of the chromospheric emission for stars with an $S$-index standard deviation less than 0.0177 (Fig.~3). The correlation does not seem to be related to any effective-temperature or metallicity effects. As no good explanation for such an inverse correlation exists, it may simply be due to noise.
In Fig.~4 we see an inverse relation between the standard deviation of the relative flux $\sigma I$ and that of the chromospheric emission $\sigma S$ for the stars with mean chromospheric emission larger than 0.15. This behaviour could be explained by a scenario where stars slightly more active than the Sun go from being spot dominated to being faculae dominated, and in this transition $\sigma I$ decreases with increasing chromospheric activity. Such an inverse relation would not show up for the inactive stars, as they would be expected to have a more constant spot-to-faculae ratio throughout the cycle. This means that any spot darkening is balanced out by faculae brightening. The chromospheric emission is, however, not affected by this phenomenon.
Alternately, the inverse relation between $\sigma I$ and $\sigma S$ could be explained by shifts in the distributions of active regions, where the area covered by spots increases, but the area covered by plage becomes more spatially homogeneous. These sorts of patterns are seen in more active stars \citep{2018ApJ...855...75R}.
If the lack of correlation for the inactive stars is due to a degeneracy between the signals from spots and faculae, then the problem could be solved by using multi-band photometry, since, owing to their different temperatures, faculae and spots cannot cancel each other out at all wavelengths.
We note that in Paper II we were able to see a correlation between the mean value of the excess flux and both the rotation period and the age.
We further investigated whether the correlation between the different activity indicators depends on the absolute level of the chromospheric emission (Figs~6 \& 7). Again, no correlation was found. It is not unlikely that the missing correlation is due to the sparse sampling of the measurements of the chromospheric emission and the fact that the relative flux measurements have to be interpolated for this comparison. In other words, the absence of a strong correlation between the different activity indicators does not imply that such correlations are not there.
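As an illustration of the comparison described above, a sparsely sampled chromospheric series can be compared with a denser photometric series by linearly interpolating the latter onto the spectroscopic epochs and computing Pearson's correlation coefficient. This sketch is ours and not the code used in the analysis; function and variable names are illustrative.

```python
def interp(x, xp, fp):
    """Piecewise-linear interpolation of the series (xp, fp) at points x.
    Values outside the range of xp are clamped to the end points."""
    out = []
    for xi in x:
        if xi <= xp[0]:
            out.append(fp[0]); continue
        if xi >= xp[-1]:
            out.append(fp[-1]); continue
        j = max(k for k in range(len(xp)) if xp[k] <= xi)
        t = (xi - xp[j]) / (xp[j + 1] - xp[j])
        out.append(fp[j] + t * (fp[j + 1] - fp[j]))
    return out

def pearson_r(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)
```

With only a few spectroscopic epochs per year, the interpolation step inevitably smooths the dense series, which is one reason why a sparse sampling can suppress a real underlying correlation.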
A correlation between the $R'_{\rm HK}$ calibration and metallicity has been noted by \citet{2006SPD....37.1201S}, \citet{2008A&A...485..571J} and \citet{2012IAUS..286..335S}, indicating that metallicity suppresses the measured $R'_{\rm HK}$ values. We searched for such a correlation between metallicity and the other measured parameters, but did not find any. Again, this could be due to our limited sample size, but also to the fact that our stars span a narrow range in effective temperature.
In general, the main conclusion from our work on {\it sounding stellar cycles with Kepler} is that we need a longer time baseline for the observations. Ideally, this means following the 20 targets in our sample, and maybe even more stars, indefinitely. We did not find it feasible to do this by submitting new observing proposals to the NOT every semester. Instead, we have installed an eShell spectrograph system from Shelyak Instruments at the Hertzsprung SONG telescope at Teide Observatory and are at the moment testing whether it can be used, with minor modifications, to measure chromospheric activity in our targets. If these tests turn out to be successful, we will have the option to monitor the variability of the chromospheric emission in these stars for as long as SONG is operational. We note that the main spectrograph at SONG is optimised for measuring accurate radial velocities with the use of an iodine cell and is therefore optimised for the wavelengths covered by the main iodine lines (4900-6200 {\AA}). This means that the Ca~{\sc ii} $H$ and $K$ lines do not even fall on the detector.
At SONG we would try to follow an observing strategy closer to that used in the Mount Wilson project, where the targets were observed weekly when visible, than to the strategy we have used, where the targets were only observed three times per semester. We propose to follow such a strategy even though it would result in fewer observed targets. There are three main reasons for such a prioritisation. Firstly, as we argue that the missing correlation between the different activity indicators in this study is due to the sparse sampling (and resulting interpolation), a higher sampling cadence should result in higher correlations between the different activity indicators and therefore also more reliable measurements. Secondly, a higher sampling rate would allow a robust mean and standard deviation of the amplitude of the rotational modulation to be calculated. Thirdly, as shown in \citet{2018ApJ...852...46K}, it is possible to use seasonally measured rotation rates to see indications of surface differential rotation caused by active regions at different latitudes. This is, however, only possible if sufficient observations are available each semester for determining the rotation period.
Based on our experience with our NOT program, we recommend that the S/N should be not much less than 100 in the blue part of the spectrum in order to obtain reliable activity measurements.
It is our aim that the new spectrograph for measuring stellar activity with SONG should be fully ready for science observations when NASA's Transiting Exoplanet Survey Satellite (TESS) starts new observations of the $Kepler$ field. This will allow us to test whether a denser sampling leads to a better correlation between the different activity indicators. Apart from this, TESS will provide us with asteroseismic measurements of the fundamental stellar parameters of the stars in which activity cycles have been observed from the Mount Wilson and Lowell observatories and will therefore constitute a nice continuation of our {\it sounding stellar cycles with Kepler} program. This is especially true if TESS is extended beyond the nominal mission, which would allow the $Kepler$ field to be revisited each summer.
\section{Conclusions}
We present four years of observations of the emission in the Ca~{\sc ii} $H$ and $K$ lines in 20 Sun-like stars. The observations were undertaken simultaneously with observations by the $Kepler$ mission. This has allowed us to analyse the relation between cycle-induced changes on the surface and in the interior of the metal-rich Sun-like star KIC 8006161, as seen in the chromospheric emission, in the eigenmode frequencies and in the relative flux \citep{2018ApJ...852...46K}.
The comparison of the different activity indicators for the remaining 19 stars does not, however, show the same agreement as seen for KIC 8006161. We suggest that this is mainly due to two effects. Firstly, the sparse sampling of our spectrographic observations makes it difficult to compare with the photometric $Kepler$ observations and, secondly, due to the failure of $Kepler$'s reaction wheels, we only have 4 years of observations available, which is not sufficient to detect an activity cycle similar to the 11-year solar cycle. We are therefore trying to set up multidecadal observations of the 20 stars with a new dedicated spectrograph at the SONG telescope.
\section*{Acknowledgements}
We would like to thank the referee for thoughtful comments, which significantly improved the paper. The project has been supported by the Villum Foundation. Funding for the Stellar Astrophysics Centre is provided by the Danish National Research Foundation (Grant agreement No.: DNRF106). TSM acknowledges support from a Visiting Fellowship at the Max Planck Institute for Solar System Research. ARGS acknowledges support from the National Aeronautics and Space Administration under Grant NNX17AF27G. The Nordic Optical Telescope is operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrof{\' i}sica de Canarias.
\section{Introduction}
A system consisting of elements can be expressed
by using network representation,
i.e.,
nodes denote the elements and
edges represent their relations.
Many real-life systems,
e.g.,
mobile communication networks,
collaboration networks,
protein-protein interaction networks are analyzed using
network representation~\cite{%
onnela2007PNAS,
newman2001Collaboration,
chen2006Protein}.
A \emph{community} is defined as a group of nodes in a network
where nodes within the same group have more connections with
each other than with
nodes from other groups~\cite{%
girvan2002community}.
\emph{Community detection} is the task of identifying such groups in a network.
Although there is not a universally accepted definition of a community,
the above definition is used by many community detection algorithms~\cite{%
girvan2002community,
newman2004fast,
clauset2004heap,
rosvall2007Infomap,
blondel2008Louvain,
lancichinetti2011OSLOM,
de2014mixing,
raghavan2007LPA,
xie2011community,
gregory2010COPRA,
eustace2015community,
tasgin2018preference}.
There is a comprehensive survey on community detection methods
and algorithms in complex networks
by Fortunato~\cite{%
fortunato2010Survey}.
Different aspects and purposes of community detection
are investigated in a recent
work by Schaub et al.\cite{%
Schaub2017}.
The authors argue that understanding the motivation behind community detection
for a specific problem is important for
selecting the most suitable algorithm or approach,
since community detection has many facets.
Many of the proposed community detection algorithms,
some of which are nearly a decade old or more,
are successful on small networks of hundreds or thousands of nodes.
With the availability of very large network datasets
having millions or billions of nodes and edges
in recent years,
there are challenges for community detection algorithms.
Many of the existing community detection algorithms are not able to
run on such large networks
because of their high time-complexity.
If a community detection algorithm needs to optimize a global value or
a metric defined over the whole network,
then it may need to perform an operation or
calculation involving
all elements of the network (i.e., nodes and edges) many times.
Such an approach is computationally expensive and is not feasible
on very large networks.
Additionally,
processing the whole network may require storing
and accessing all of its data many times,
which is also expensive in terms of storage.
A \emph{local community detection} approach,
which uses local information around a node while identifying its community,
can be a practical solution on very large networks.
When the community of each node is decided
using such limited data and
computation,
the overall time-complexity of the algorithm remains reasonably low
even on very large networks.
Besides their practicality, local algorithms may be
the only viable options
on these networks.
In this paper, we propose a new community detection algorithm
that has a local approach
and tries to find communities by identifying borderlines between them
using boundary nodes.
Initially,
every node is considered to be a boundary node.
Our community detection process naturally
decreases their number by assigning nodes to communities.
In the final state,
only the actual boundary nodes remain,
and they constitute the borderlines between communities.
The outline of the paper is as follows.
We first give background information about our notation,
local algorithms and our method of testing.
Then we briefly explain our community detection approach.
We go into the details of experiments and
present the results of our algorithm on
both generated and real-life networks
and compare it with other algorithms.
\section{Background}
\subsection{Notation}
\label{sec:notation}
Let $G = (V, E)$ be an unweighted and undirected graph
where
$V$ is the set of nodes and
$E$ is the set of edges.
A \emph{community structure} is a partition of $V$.
We label each block in the partition
using a symbol in the set of \emph{community labels}
$\SoL = \{ 1, \dotsc, \hbAbs{V} \}$.
We define function
$L \colon V \to \SoL$,
which maps each node in $V$ to a community label in $\SoL$.
That is,
the community of node $i \in V$ is given as $L(i)$.
If two nodes $i$ and $j$ are in the same community,
then we have $L(i) = L(j)$.
In community detection,
\emph{triangles},
i.e.,
three nodes connected by three edges,
play an important role~\cite{%
radicchi2004Clustering}.
We use two metrics related to triangles.
The first, the \emph{clustering coefficient} $CC_{i}$ of node $i$, is
the probability that two of its neighbors are connected to each other,
given as
\[
CC_{i}= \frac{\bigtriangleup_{i}}{\wedge_{i}}
\]
where
$\bigtriangleup_{i}$ is the number of triangles around node $i$ and
$\wedge_{i}$ is the number of \emph{triplets} centered at $i$,
i.e.,
pairs of nodes that are both connected to
$i$~\cite{%
newman2001clustering}.
The second metric is the number of common neighbors of two nodes,
which is generally used for node similarity.
The \emph{number of common neighbors} of nodes $i$ and $j$ is given as
\[
\mtCommonNeighbors{i}{j} = \hbAbs{\Gamma(i) \cap \Gamma(j)}
\]
where
$\Gamma(i)$ is the \emph{1-neighborhood} of $i$,
i.e.,
the set of nodes whose distances to $i$ are 1.
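These two triangle-related metrics can be sketched in a few lines of Python
(an illustrative implementation of the definitions above, not code from the paper;
the toy graph, function names and adjacency-set representation are our own choices):

```python
def clustering_coefficient(adj, i):
    """CC_i = (# triangles around i) / (# triplets centered at i)."""
    neighbors = adj[i]
    k = len(neighbors)
    if k < 2:
        return 0.0  # no triplet can be centered at i
    triplets = k * (k - 1) // 2
    # a triangle exists for every connected pair of i's neighbors
    triangles = sum(1 for u in neighbors for v in neighbors
                    if u < v and v in adj[u])
    return triangles / triplets

def common_neighbors(adj, i, j):
    """|Gamma(i) & Gamma(j)|, used as a node-similarity score."""
    return len(adj[i] & adj[j])

# toy graph: a triangle 0-1-2 plus a pendant node 3 attached to 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
cc0 = clustering_coefficient(adj, 0)   # 1 triangle / 3 triplets
cn12 = common_neighbors(adj, 1, 2)     # node 0 is their only common neighbor
```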
We use the concepts of Xie and Szymanski~\cite{
xie2011community}
to mark the nodes.
A node $i$ is called an \emph{interior node}
if it is in the same community with all of its 1-neighbors.
If it is not an interior node, it is called a \emph{boundary node}.
Note that boundary nodes are positioned
among nodes from different communities.
\subsection{Local community detection algorithms}
In recent years, several local community detection
algorithms have been proposed~\cite{%
raghavan2007LPA,
xie2011community,
gregory2010COPRA,
eustace2015community,
tasgin2018preference}.
These algorithms generally discover
communities using local interactions of nodes
or local metrics calculated in the 1-neighborhood of nodes in the network.
Instead of performing a search or a
calculation on the whole network (i.e., globally),
the local approach splits the community detection task
into separate subtasks
on individual nodes and their neighborhoods.
Results of these subtasks are then
merged together to get the
community structure of the whole network.
Raghavan et al.~\cite{%
raghavan2007LPA}
proposed label propagation algorithm,
denoted by \emph{LPA},
which updates the community label of each node with
the most popular label in its 1-neighborhood,
i.e.,
majority rule of labels.
Labels of all nodes in the network are updated asynchronously, and
the algorithm terminates
when no further label update is possible in the network.
It is a linear-time algorithm,
which can identify communities in a fast way.
However,
it tends to find a single large community,
especially when community structure is subtle.
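The majority rule at the heart of LPA can be sketched as follows
(our own illustrative reimplementation, not the authors' code;
for reproducibility this sketch visits nodes in a fixed order and
breaks ties deterministically, whereas the original randomizes both):

```python
from collections import Counter

def label_propagation(adj, max_sweeps=100):
    """Asynchronous LPA sketch: each node adopts the most frequent
    label in its 1-neighborhood until a full sweep changes nothing."""
    labels = {i: i for i in adj}          # every node starts alone
    for _ in range(max_sweeps):
        changed = False
        for i in sorted(adj):             # fixed visiting order
            counts = Counter(labels[j] for j in adj[i])
            best = max(counts.values())
            # deterministic tie-break: largest label among the maxima
            new = max(l for l, c in counts.items() if c == best)
            if new != labels[i]:
                labels[i] = new
                changed = True
        if not changed:
            break
    return labels

# two triangles (0-1-2 and 3-4-5) joined by the single edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = label_propagation(adj)
# each triangle collapses onto one label, giving two communities
```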
Xie and Szymanski~\cite{%
xie2011community}
proposed an extension on LPA,
which we denote by \emph{LPAc},
using neighborhood-strength driven approach.
LPAc\ improves the quality of identified communities
by incorporating the number of common neighbors
to the majority rule of labels in LPA.
It calculates the score of each label by
first counting the number of neighbors
carrying that label,
as in LPA.
Then it adds the number of common
neighbors each group has with the node,
multiplied by a constant $c < 1$.
Additionally,
LPAc also decreases the number of execution steps by
avoiding unnecessary label updates.
Only a subset of nodes in the network update their labels,
namely, active boundary nodes.
The algorithm defines a node as \emph{passive} if it
would not change its label when there is
an attempt to update it;
a node that is not passive is called \emph{active}.
It keeps a list of both types and
iteratively selects a node $i$ from the
active boundary list and updates its label, $L(i)$.
After the label update,
status of node $i$ is checked and
if it becomes a passive or an interior node,
it is removed from active boundary list.
After label update, neighbors of $i$ are checked for a change of status,
i.e.,
if they become active boundary nodes,
they are inserted into the list;
if they change from active to passive,
they are removed from the list.
The algorithm iteratively identifies
the labels of nodes in active boundary list and
maintains the list with removals and
insertions of nodes with label updates.
The algorithm terminates when the active boundary list is empty.
Despite the increased quality of communities and its speed,
LPAc\ still tends to find a single large
community, a drawback inherited from LPA.
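A minimal sketch of this scoring rule, reconstructed from the description above
(not Xie and Szymanski's code; the toy graph and names are illustrative):

```python
from collections import defaultdict

def lpac_label_scores(adj, labels, i, c=0.25):
    """Score of each candidate label in the 1-neighborhood of i:
    label frequency (as in plain LPA) plus c times the number of
    common neighbors shared with each neighbor carrying that label."""
    scores = defaultdict(float)
    for j in adj[i]:
        scores[labels[j]] += 1 + c * len(adj[i] & adj[j])
    return dict(scores)

# two triangles joined by the edge 2-3; node 2 chooses between the
# label 'a' of its own triangle and the label 'b' across the bridge
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}
scores = lpac_label_scores(adj, labels, 2)
# 'a' is reinforced by the shared neighbors inside the triangle
```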
\subsection{Method for testing of algorithms}
A community detection algorithm outputs a partition of the set of vertices,
where each block of the partition corresponds to a community.
When we have the ground-truth community structure of the network,
we can compare the partition output of
the algorithm with that of the ground-truth
using Normalized Mutual Information (\emph{NMI})~\cite{%
danon2005NMI}.
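A compact sketch of this comparison, using the normalization of Danon et al.
(our own illustrative implementation; the partitions are toy data):

```python
from math import log
from collections import Counter

def nmi(part_a, part_b):
    """Normalized mutual information between two partitions, given as
    dicts mapping node -> community label:
    NMI = 2 I(A;B) / (H(A) + H(B))."""
    n = len(part_a)
    ca = Counter(part_a.values())
    cb = Counter(part_b.values())
    joint = Counter((part_a[v], part_b[v]) for v in part_a)
    h_a = -sum(c / n * log(c / n) for c in ca.values())
    h_b = -sum(c / n * log(c / n) for c in cb.values())
    mi = sum(c / n * log((c / n) / ((ca[a] / n) * (cb[b] / n)))
             for (a, b), c in joint.items())
    if h_a == 0 and h_b == 0:
        return 1.0          # both partitions trivial, hence identical
    return 2 * mi / (h_a + h_b)

truth = {0: 'x', 1: 'x', 2: 'x', 3: 'y', 4: 'y', 5: 'y'}
found = {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2}
score = nmi(truth, found)   # partitions identical up to relabeling
```

A perfect match up to relabeling yields NMI of 1, while independent partitions yield values near 0.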
We start testing our algorithm on real-life
networks with ground-truth community structure.
The first network is the small network of Zachary karate club~\cite{%
zachary1977Karate}.
Then, we use larger networks provided by
SNAP~\cite{%
datasetSNAP2014},
namely;
DBLP network,
Amazon co-purchase network, YouTube network and
European-email network,
which all have ground-truth communities.
We use the provided
ground-truth communities,
which are created from meta-data related to these networks;
however, Peel et al.~\cite{%
Peel2017}
present a detailed analysis of whether such meta-data can
explain the actual community structure of the corresponding network.
When an algorithm finds communities
on a network that differ from the communities
given by the meta-data,
this is not necessarily a failure of the algorithm;
there may be other reasons,
e.g.,
irrelevant meta-data,
meta-data reflecting different aspects of the network, or
no community structure in the network at all.
On real-life networks,
we run some of the known community detection algorithms;
namely
Newman's fast greedy algorithm
(\emph{NM})~\cite{%
clauset2004heap},
Infomap
(\emph{Inf})~\cite{%
rosvall2007Infomap},
Louvain
(\emph{Lvn})~\cite{%
blondel2008Louvain},
LPA~\cite{%
raghavan2007LPA},
and
LPAc~\cite{%
xie2011community}
and compare their results with the ground-truth.
Execution times of the algorithms are also measured and reported.
Experiments are run on a computer
with a 2.2 GHz, 4-core Intel Core i7 processor.
We also use a set of computer generated networks for testing.
The LFR benchmark networks~\cite{%
lancichinetti2008benchmark},
with planted community structure,
are used for comparison of community detection algorithms.
These networks are generated with a parameter vector of
$[N, \langle k\rangle, k_{max}, C_{min}, C_{max}, \mu]$
where
$N$ is the number of nodes and
$\mu$ is the mixing parameter controlling the fraction of
inter-community edges among all edges of each node in the generated network.
Community structure of an LFR network is
related to the mixing parameter it is generated with.
As $\mu$ increases,
the community structure becomes more subtle and difficult to identify.
LFR algorithm runs in a non-deterministic way and
can create different networks,
given the same parameter vector.
In order to avoid a potential bias of an algorithm to a single network,
we generate 100 LFR networks for each vector and report the averages.
\section{Our Approach}
We propose a new community detection algorithm that finds communities
by identifying borderlines between communities based on boundary nodes.
We first provide an overview of the algorithm,
then we discuss the details.
The algorithm $\proc{Community-By-BoundryNodes}$
is given in \reffig{fig:Community-By-BoundryNodes}.
\begin{figure}[thbp]
\begin{codebox}
\Procname{$\proc{Community-By-BoundryNodes}$}
\li \Comment $V \gets \{ 1, \cdots, |V| \}$: set of nodes
\li \Comment $S$: set of boundary nodes
\li \Comment $L[i]$: the community of $i \in V$
\li \Comment $\proc{Initial-Heuristic}()$: an initial heuristic
\li \Comment $\proc{isBoundryNode}(i)$: $\const{true}$
\li \Comment $ $ if $i \in V$ is a boundary node
\li \Comment $\proc{bestCommunity}(i)$: best community
\li \Comment $ $ for $i \in V$
\li
\li \Comment initialization
\li \While $i$ in $V$
\li \Do
$L[i] \gets i$
\End
\li $\proc{Initial-Heuristic}()$
\li \While $i$ in $V$
\Do
\li \If $\proc{isBoundryNode}(i)$
\Do
\li $S \gets S \cup \{i\}$
\End
\End
\li
\li \Comment iteration
\li \While $S \ne \emptyset$
\li \Do
$i \gets$ randomly selected node in $S$
\li $S \gets S \smallsetminus \{ i \}$
\li $communityOld \gets L[i]$
\li $ L[i] \gets \proc{bestCommunity}(i)$
\li \If $communityOld \ne L[i]$
\li \Do
\While $j$ in $\Gamma(i)$
\li \Do
\If $\proc{isBoundryNode}(j)$
\Do
\li $S \gets S \cup \{ j \} $
\End
\End
\End
\End
\end{codebox}
\caption{
Main algorithm \proc{Community-By-BoundryNodes}.
}
\label{fig:Community-By-BoundryNodes}
\end{figure}
The algorithm keeps track of a set $S$ of boundary nodes.
We start with $|V|$ communities of size 1,
i.e.,
each node is a community by itself.
Since each node is initially a boundary node,
the set
would contain all the nodes.
A set $S$ with $|V|$ elements is too large to process efficiently.
We apply a heuristic,
\proc{Initial-Heuristic} in \reffig{fig:Initial-Heuristic},
to reduce the initial number of communities,
hence,
the initial number of boundary nodes.
For each connected pair of nodes $i,j\in V$,
we calculate the ``benefit score'',
$b_{i}(j)$,
if $i$ assumes the community of $j \in \Gamma(i)$.
Note that $b_{i}(j)$ is calculated synchronously.
We set the community of $i$ to that of $j$ with the maximum benefit score.
Then,
using procedure $\proc{isBoundryNode}$ in \reffig{fig:isBoundryNode},
we identify the boundary nodes and insert them into the set $S$.
\begin{figure}[thbp]
\begin{codebox}
\Procname{$\proc{Initial-Heuristic}$}
\li \Comment $b_{i}(j)$: benefit score if $i$ assumes the community of $j$
\li
\li \While $i$ in $V$
\Do
\li $maxBenefit \gets 0$
\li $maxNode \gets 0$
\li \While $j$ in $\Gamma(i)$
\Do
\li \If $b_{i}(j)> maxBenefit$
\Do
\li $maxBenefit \gets b_{i}(j)$
\li $maxNode \gets j$
\End
\End
\li $L[i] \gets L[maxNode]$
\End
\end{codebox}
\caption{
Procedure \proc{Initial-Heuristic}.
}
\label{fig:Initial-Heuristic}
\end{figure}
\begin{figure}[thbp]
\begin{codebox}
\Procname{$\proc{isBoundryNode}(i)$}
\li \While $j$ in $\Gamma(i)$
\li \Do
\If $L[j] \ne L[i]$
\Do
\li \Return \const{true}
\End
\End
\li \Return \const{false}
\end{codebox}
\caption{
Procedure \proc{isBoundryNode}.
}
\label{fig:isBoundryNode}
\end{figure}
As long as the set is not empty,
the algorithm repeats the following steps.
A node $i$ in the set is selected at random
and removed from the set.
We reconsider the community of the selected node based on its 1-neighborhood.
A new community assignment,
which produces the largest ``benefit score'',
is made.
If the old and the new communities of $i$ are the same,
i.e. no effective change,
then we are done with this pass.
If the community of $i$ is changed,
then this may cause some of its 1-neighbors
to become boundary nodes.
In this case,
the new boundary nodes are inserted into the set.
Note that the selected node is not added to the set during this iteration
even if it is still a boundary node.
It is possible that it may be inserted into the set in some other iteration,
in which one of its 1-neighbors is processed.
Boundary node check is done with procedure $\proc{isBoundryNode}$.
This iteration process terminates
when the set $S$ becomes empty,
which indicates that the system has reached a steady state,
where no further change in community assignment can yield
a larger ``benefit score''.
\subsection{Best Community}
\label{subsection:BestCommunity}
Given a community assignment,
we want to reconsider the community $L(i)$ of a node $i$ by
investigating options in its 1-neighborhood $\Gamma(i)$.
This is the function of the procedure
$\proc{bestCommunity}$,
which is described below.
There are two different approaches:
\textbf{a) Individual approach.}
Consider each neighbor $j \in \Gamma(i)$ of $i$ individually.
Switching to the community of $j$ produces a benefit of $b_{i}(j)$.
Therefore,
$i$ switches to the community of $j$,
which produces the largest benefit.
That is,
\[
L(i)
= L\left(
\argmax {j \in \Gamma(i)}
b_{i}(j)
\right).
\]
If there is more than one community with the maximum benefit,
one of them is selected randomly.
For the value of $b_{i}(j)$,
we consider three metrics:
(i)~\textbf{\emph{I-R}:}
Assign a uniformly random number in the range $[0, 1]$ to $b_{i}(j)$.
Clearly,
this will not reflect any information regarding the properties of a node or its neighborhood.
(ii)~\textbf{\emph{I-CC}:}
Use the clustering coefficient of $j$ as the benefit score,
i.e.,
$b_{i}(j) = CC_{j}$.
(iii)~\textbf{\emph{I-CN}:}
Use the number of common neighbors of $i$ and $j$,
i.e.,
$b_{i}(j) = \mtCommonNeighbors{i}{j}$.\\
\textbf{b) Community groups approach.}
We consider the communities represented by the neighbors.
The neighbors are grouped according to their communities.
We look at the collective contribution of each group.
The community of the group with the largest benefit score
is selected as the new community of $i$.
That is,
\[
L(i)
= L\left(
\argmax{k}
B_{i}(k)
\right)
\]
where
$B_{i}(k)$ is the collective benefit score of
community $k$
in 1-neighborhood of node $i$
and defined as
\[
B_{i}(k)
= \sum_{
\substack{
L(j) = k\\
j \in \Gamma(i)
}
}
b_{i}(j).
\]
For the value of $b_{i}(j)$,
we consider the three metrics
that we used in the individual approach.
The group versions are denoted by
(iv):~\textbf{\emph{G-R}},
(v):~\textbf{\emph{G-CC}}, and
(vi):~\textbf{\emph{G-CN}}.
In addition to these,
we consider one more measure:
(vii):~\textbf{\emph{G-1}}:
We assign $b_{i}(j) = 1$ to each neighbor $j$.
Note that this is similar to the majority rule of labels in LPA.
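To make the whole procedure concrete, here is a compact Python sketch of the
algorithm with the G-CN\ benefit score
(our own illustrative reimplementation of the pseudocode above, not the authors' code;
the initial heuristic here also uses the number of common neighbors as its
benefit score, which the paper leaves as a pluggable choice, and a safety cap
on the iteration count is our own addition):

```python
import random

def common_neighbors(adj, i, j):
    return len(adj[i] & adj[j])

def is_boundary(adj, labels, i):
    return any(labels[j] != labels[i] for j in adj[i])

def best_community_gcn(adj, labels, i):
    """G-CN rule: group the neighbors of i by community and adopt the
    community whose members share the most neighbors with i."""
    scores = {}
    for j in adj[i]:
        scores[labels[j]] = scores.get(labels[j], 0) + common_neighbors(adj, i, j)
    best = max(scores.values())
    return random.choice([k for k, s in scores.items() if s == best])

def communities_by_boundary_nodes(adj, seed=0):
    random.seed(seed)
    labels = {i: i for i in adj}              # singleton communities
    # initial heuristic: adopt the community of the neighbor with the
    # largest benefit score (here: number of common neighbors)
    for i in adj:
        best_j = max(sorted(adj[i]), key=lambda j: common_neighbors(adj, i, j))
        labels[i] = labels[best_j]
    s = {i for i in adj if is_boundary(adj, labels, i)}
    steps = 0
    while s and steps < 100 * len(adj):       # safety cap on iterations
        steps += 1
        i = random.choice(sorted(s))          # pick a boundary node
        s.discard(i)
        old = labels[i]
        labels[i] = best_community_gcn(adj, labels, i)
        if labels[i] != old:                  # a change may create new
            s.update(j for j in adj[i]        # boundary nodes nearby
                     if is_boundary(adj, labels, j))
    return labels

# two triangles (0-1-2 and 3-4-5) joined by the edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = communities_by_boundary_nodes(adj)
```

On this toy graph the bridge nodes 2 and 3 are the only boundary nodes after
the initial heuristic, and the iteration leaves the two triangles as two
separate communities.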
\section{Experiments and Discussion}
\label{section:ExperimentsAndDiscussion}
\subsection{Deciding the benefit score}
We define seven metrics for benefit score.
In order to decide which metric to use,
we try each one on LFR-generated networks of $1,000$ nodes.
NMI scores of identified partitions and execution times of our algorithm are presented
in \reffig{fig:LFR1000_benefitScores}
and \reffig{fig:LFR1000_benefitScoreExecutionTimes},
respectively.
We also run LPA and LPAc\
algorithms on these networks for comparison.
The $c$ parameter of LPAc\ is set as $0.25$.
\begin{figure}[!htbp]
\centering
\subfloat[NMI Scores]{
\label{fig:LFR1000_benefitScores}
\includegraphics
[width=0.45\textwidth]
{figBenefitScoreMethodsNMI.pdf}
}\\
\subfloat[Execution Times]{
\label{fig:LFR1000_benefitScoreExecutionTimes}
\includegraphics
[width=0.45\textwidth]
{figBenefitScoreMethodsExecutionTime.pdf}
}
\caption{
\textbf{(a)}~NMI scores and
\textbf{(b)}~execution times of our new algorithm
with various benefit score candidates,
and
LPA
and LPAc with $c=0.25$
on LFR benchmark network datasets.
LFR parameters:
$[N=1,000, \langle k\rangle=15, k_{max}=50, C_{min}=10, C_{max}=50]$.
(Average of 100 realizations)
}
\label{fig:LFR1000_Networks_benefitScores}
\end{figure}
We observe that all the group-based benefit scores yield
better results than the individual ones.
Even random value assignment at the group level, G-R, gives good results.
Surprisingly,
the uniform benefit score with the group approach, G-1,
has the worst performance of all.
Although it is similar to the majority rule of labels in the LPA,
it is not a good fit for our algorithm.
As an exception among the individual ones,
I-CN\ outperforms LPA.
Benefit scores based on common neighbors,
both at individual and group level,
i.e.,
I-CN\ and G-CN,
produce better results in our tests.
G-CN\ is slightly better than I-CN\ in terms of NMI values.
Both LPA and LPAc\ algorithms have good NMI values,
but when $\mu > 0.6$,
their performances degrade while our algorithm still finds communities.
LPAc\ performs better than LPA.
However, when we look at the execution times of algorithms,
LPAc\ has the worst performance.
Its elapsed time is 2 to 3 times higher than our G-CN\ algorithm.
We conclude that using the number of common neighbors with
the community-groups approach, namely G-CN,
produces the best results in our algorithm.
We use G-CN\ for the rest of the paper.
\subsection{Zachary karate club network}
\begin{figure}[!htbp]
\begin{center}
\includegraphics
[width=0.7\columnwidth]
{fig_karateClub.pdf}
\caption{Zachary karate club: Identified communities by our G-CN\ algorithm}
\label{fig:karateClubCommunities}
\end{center}
\end{figure}
We run our algorithm on Zachary karate club network and
compare the identified communities with the ground-truth.
Our algorithm G-CN\
identifies two communities as seen in \reffig{fig:karateClubCommunities}.
Only node~10 is misidentified by our algorithm.
There is a tie among the benefit scores
offered to node 10 by its neighbors;
so, with random selection among the alternatives,
our algorithm sometimes selects the
\emph{wrong} community.
The community labels of all the other nodes
are identified correctly with respect to
ground-truth community structure.
\begin{table*}[!htbp]
\caption{
Large real-life networks with ground truth
}
\begin{center}
\scalebox{0.60}{
\begin{tabular}{|l|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|}
\hline
\toprule
\multirow{2}{*}{Network} & \multirow{2}{*}{$|V|$} &\multirow{2}{*}{$|E|$}& \multirow{2}{*}{CC} & \multicolumn{7}{c}{\# communities} & \multicolumn{6}{|c|}{NMI wrt GT}& \multicolumn{6}{c|}{Execution time (ms)}\\ \cline{5-23}
{} & {} & {} & {} & GT & G-CN & Inf & LPA &LPAc & Lvn & NM & G-CN & Inf & LPA &LPAc&Lvn &NM & G-CN & Inf & LPA & LPAc &Lvn &NM \\ \hline
European-email &1,005& 16,064 & 0.40 & 42 & 23 & 38 &3 &20& 25 &28 &0.14&0.62&0.13&0.31&0.54&0.46 & 146 & 133 & 32 &704& 69 & 187\\
DBLP & 317,080 & 1,049,866 & 0.63 & 13,477 & 26,873 & 30,811 &36,780 &30,242& 565 &3,165 &0.56&0.65&0.64&0.61&0.13&0.16 & 8,825 & 35,753 & 26,413 &894,858& 8,217 & 4,362,272\\
Amazon & 334,863 & 925,872 & 0.40 & 75,149 & 33,395 & 35,139 &24,045 &30,908& 248 &1,474 &0.57&0.60&0.54&0.57&0.11&0.11 & 7,552 & 43,253 &30,931&997,088& 8,017 & 1,422,590\\
YouTube & 1,134,890 & 2,987,624 & 0.08 & 8,385 & 116,082 &102,125 &89,449 &69,817& 9,616& - &0.07&0.13&0.07&0.05&0.06&- & 295,935& 188,037 &324,641&76,129,367& 52,798 & -\\
\bottomrule
\hline
\end{tabular}
}
\end{center}
\label{tbl:tableLargeNetworks}
\scalebox{0.65}{
\begin{minipage}{0.85\textwidth}%
\begin{center}
\begin{tabular}{ll}
\small GT: Ground-truth
&\small LPA : Label propagation algorithm~\cite{raghavan2007LPA}\\
\small LPAc: Neighborhood-strength driven LPA~\cite{xie2011community}
&\small G-CN: Our algorithm\\
\small Lvn: Louvain community detection algorithm~\cite{blondel2008Louvain}
&\small Inf: Infomap algorithm~\cite{rosvall2007Infomap}\\
\small NM: Newman's fast greedy algorithm~\cite{clauset2004heap}\\
\end{tabular}
\end{center}
\end{minipage}%
}
\end{table*}
\subsection{Large real-life networks}
We run our G-CN\ algorithm on large
real-life networks with ground-truth
communities, provided by SNAP~\cite{
datasetSNAP2014}.
For comparative analysis,
Newman's fast greedy algorithm
(\emph{NM})~\cite{%
clauset2004heap},
Infomap
(\emph{Inf})~\cite{%
rosvall2007Infomap},
Louvain
(\emph{Lvn})~\cite{%
blondel2008Louvain},
Label Propagation
(\emph{LPA})~\cite{%
raghavan2007LPA} and
neighborhood-strength driven LPA
(\emph{LPAc})~\cite{xie2011community}
are also run on these networks.
Newman's algorithm is omitted for YouTube network
due to its long execution time.
The results are presented in \reftbl{tbl:tableLargeNetworks}.
There is no clear winner in \reftbl{tbl:tableLargeNetworks},
which is good news for local algorithms.
That is,
although the local algorithms cannot see the global picture,
they perform well enough.
The numbers of communities detected by
G-CN, Infomap, LPA, and LPAc\ are close to each other
and not far from the ground-truth.
There are exceptions;
on the YouTube network, all four detect too many communities.
On European-email network with 42 ground-truth communities,
LPA merges many communities together and detects only 3 communities
while the other three do a better job.
On the DBLP and Amazon networks,
both Louvain and
Newman's algorithm detect very few communities.
Louvain has the best detection on the YouTube network,
while Newman's algorithm
experiences performance problems.
Our G-CN\ algorithm
performs well on most of the networks.
However,
it performs poorly on YouTube network,
which has the smallest clustering coefficient
of these four networks.
Considering NMI values of all six algorithms,
it is possible that YouTube network may have subtle community structure.
On this network, the best performing algorithm, Infomap,
only gets NMI value of 0.13.
On the DBLP and Amazon networks,
Infomap, LPA, LPAc\ and our algorithm obtain similar NMI values, and
they are much better than Louvain and Newman's algorithm.
On the European-email network,
local algorithms such as LPA,
LPAc\ and ours are not good enough.
It is possible that this network is not well suited to local approaches.
On all of the networks, LPAc\ has the highest
execution times among the local algorithms.
Its execution time on the YouTube network is very high compared to our algorithm and LPA.
For all the real-life networks, we use the provided
ground-truth community structure to evaluate the quality of the partitions found by
each algorithm.
However, no single algorithm performs well on all networks, nor is there
a single network on which all algorithms perform very well.
This may be because the supposed ground-truth for these networks
does not reflect the actual community structure, or
shows different aspects of the network structure,
as discussed in the work of
Peel et al.~\cite{Peel2017}.
For this reason, our tests on real-life networks give
an idea of the relative performance of the
algorithms on different networks,
but do not lead to a conclusion on
whether they perform well on these networks or not.
\subsection{Generated networks}
\label{sec:LFRNetworks}
We test our algorithm, G-CN,
also on generated LFR networks
of $1,000$ and $5,000$ nodes
as reported in \reffig{fig:LFR1000_NMI} and \reffig{fig:LFR5000_NMI},
respectively.
The same algorithms
that we run on real-life networks
are also used for comparative analysis on these networks.
We also measure the execution times of
the algorithms and report the results
in \reffig{fig:LFR1000_executionTimes} and
in \reffig{fig:LFR5000_executionTimes}.
We present the details of the results on LFR
networks of 5,000 nodes in \reftbl{tbl:tableLFRLarge}.
For each parameter set,
we generate 100 LFR networks for a given $\mu$ and
run algorithms on all these datasets and
then average the results for each algorithm.
On LFR networks with 1,000 nodes, our G-CN\ algorithm
is the best algorithm with Infomap when $0.1<\mu<0.5$.
For $0.5<\mu<0.8$,
our algorithm is in the second place after Infomap.
When $\mu>0.7$, most of the algorithms tend to find
a small number of communities, while our algorithm
still identifies a reasonable set of communities.
LPA and LPAc find a single community,
which leads to an NMI value of $0$.
Louvain and Newman's
algorithm also find very few communities on these networks.
\begin{table*}[!htbp]
\centering
\caption{Generated LFR benchmark networks of 5,000 nodes }
\label{tbl:tableLFRLarge}
\scalebox{0.69}{
\begin{tabular}{|l|r|r||r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Network}} & \multicolumn{1}{c|}{\multirow{2}{*}{$|V|$}} & \multicolumn{1}{c|}{\multirow{2}{*}{$\mu$}} & \multicolumn{1}{c|}{\multirow{2}{*}{$|E|$}}
& \multicolumn{1}{c|}{\multirow{2}{*}{CC}} & \multicolumn{7}{c|}{\# communities} & \multicolumn{6}{c|}{NMI wrt GT} & \multicolumn{6}{c|}{Execution time (ms)} \\ \cline{6-24}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{}
& \multicolumn{1}{c|}{GT} & \multicolumn{1}{c|}{G-CN} & \multicolumn{1}{c|}{Inf} & \multicolumn{1}{c|}{LPA} & \multicolumn{1}{c|}{LPAc} & \multicolumn{1}{c|}{Lvn} & \multicolumn{1}{c|}{NM}
& \multicolumn{1}{c|}{G-CN} & \multicolumn{1}{c|}{Inf} & \multicolumn{1}{c|}{LPA} & \multicolumn{1}{c|}{LPAc}& \multicolumn{1}{c|}{Lvn} & \multicolumn{1}{c|}{NM}
& \multicolumn{1}{c|}{G-CN} & \multicolumn{1}{c|}{Inf} & \multicolumn{1}{c|}{LPA} & \multicolumn{1}{c|}{LPAc} & \multicolumn{1}{c|}{Lvn} & \multicolumn{1}{c|}{NM} \\ \hline
LFR-1 & 5,000 & 0.1& 38,928& 0.52& 102& 102& 102& 102&102& 89 & 65& 1.00& 1.00 & 1.00 &1.00& 0.99& 0.93 & 161 & 261 & 51&612&132& 508 \\ \hline
LFR-2 & 5,000 & 0.2& 38,834& 0.37& 101& 102& 101& 100&101& 81 & 32& 1.00& 1.00 & 1.00 &1.00& 0.98& 0.78 & 167 & 273 & 52&647&142& 914 \\ \hline
LFR-3 & 5,000 & 0.3& 38,883& 0.26& 101& 103& 101& 98 &101& 73 & 18& 1.00& 1.00 & 1.00 &1.00& 0.97& 0.65 & 167 & 287 & 55&655&157& 1,504 \\ \hline
LFR-4 & 5,000 & 0.4& 38,939& 0.16& 101& 109& 101& 97 &102& 64 & 12& 0.98& 1.00 & 0.99 &1.00& 0.95& 0.55 & 169 & 309 & 56&691&174& 2,117 \\ \hline
LFR-5 & 5,000 & 0.5& 38,965& 0.10& 101& 131& 101& 94 &104& 53 & 9& 0.93& 1.00 & 0.98 &1.00& 0.93& 0.46 & 169 & 362 & 59&749&196& 2,644 \\ \hline
LFR-6 & 5,000 & 0.6& 38,935& 0.05& 102& 203& 104& 87 &110& 41 & 11& 0.82& 1.00 & 0.85 &0.98& 0.87& 0.30 & 175 & 441 & 58&859&241& 3,106 \\ \hline
LFR-7 & 5,000 & 0.7& 38,857& 0.02& 101& 356& 159& 5 &114& 24 & 15& 0.65& 0.88& 0.19 &0.72& 0.46& 0.14 & 192 & 767 & 53&1,368&279& 3,099 \\ \hline
LFR-8 & 5,000 & 0.8& 38,873& 0.01& 101& 530& 239& 1 &1& 12 & 13& 0.46& 0.37& 0.00 &0.00& 0.10& 0.06 & 193 & 1,238 & 49&1,681&290& 2,645 \\ \hline
LFR-9 & 5,000 & 0.9& 38,909& 0.01& 102& 614& 86& 1 &1& 12 & 13& 0.37& 0.11& 0.00 &0.00& 0.04& 0.04 & 195 & 939 & 47&1,522&305& 2,456 \\ \hline
LFR-10 & 5,000 & 1.0& 38,923& 0.01& 101& 618& 79& 1 &1& 12 & 13& 0.35& 0.09& 0.06 &0.00& 0.03& 0.03 & 189 & 913 & 47&1,553&303& 2,450 \\ \hline
\end{tabular}
}
\end{table*}
\begin{figure*}[!htbp]
\centering
\subfloat[NMI Scores]{
\label{fig:LFR1000_NMI}
\includegraphics
[width=0.9\columnwidth]
{figLFR1000_NMI.pdf}
}
\subfloat[Execution Times]{
\label{fig:LFR1000_executionTimes}
\includegraphics
[width=0.9\columnwidth]
{figLFR1000_ExecutionTime.pdf}
}\\
\subfloat[NMI Scores]{
\label{fig:LFR5000_NMI}
\includegraphics
[width=0.9\columnwidth]
{figLFR5000_NMI.pdf}
}
\subfloat[Execution Times]{
\label{fig:LFR5000_executionTimes}
\includegraphics
[width=0.9\columnwidth]
{figLFR5000_ExecutionTime.pdf}
}
\caption{
Comparison of NMI and execution times of
our method and known algorithms
on LFR benchmark network datasets.
\textbf{(a)} and
\textbf{(b)}
are for LFR network generated with
$[N = 1,000, \langle k\rangle = 15, k_{max} = 50, C_{min} = 10, C_{max} = 50]$.
\textbf{(c)} and
\textbf{(d)}
are for LFR network generated with
$[N = 5,000, \langle k\rangle = 15, k_{max} = 75, C_{min} = 20, C_{max} = 100]$. (Average of 100 realizations)
}
\label{fig:LFR_ComparativeAnalysis}
\end{figure*}
The second set of tests is performed on LFR networks of $5,000$ nodes.
Infomap, LPA, and LPAc are successful
in identifying communities when the mixing parameter is low;
however, their quality degrades as the mixing parameter increases.
LPAc and LPA have slightly better results on
the networks of $5,000$ nodes generated with $0.4<\mu<0.6$
compared to the previous set of networks of $1,000$ nodes.
With increasing $\mu$, the performances of LPA and LPAc get
worse and they tend to find a single community for $\mu>0.7$.
Infomap has a similar tendency but obtains better results on
the LFR networks of $5,000$
nodes compared to the previous
set of networks of $1,000$ nodes.
Newman's algorithm and Louvain
find a small number of communities;
they tend to merge communities,
which may be a consequence of the resolution limit~\cite{%
fortunato2007resolution}.
Our G-CN\ algorithm identifies communities with
high accuracy when $\mu$ is low.
It is the only algorithm to identify communities
when community identification becomes very hard,
i.e.,
for $\mu>0.75$.
Its execution times are lower than those of most of the algorithms;
only LPA has better execution times.
However, considering both the quality of the identified communities
and the corresponding execution times,
the G-CN\ algorithm performs better than LPA.
Newman's algorithm and LPAc\ have
the highest execution times on these networks.
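As a side note, the arithmetic-mean variant of NMI used in such benchmark comparisons is simple to compute. The following self-contained Python sketch (purely illustrative; not the evaluation code used for the figures above) computes it for two partitions given as equal-length label lists:

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions
    (arithmetic-mean normalization), given as label lists."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # mutual information I(A;B) over observed label pairs
    mi = sum((nij / n) * math.log((nij * n) / (ca[a] * cb[b]))
             for (a, b), nij in joint.items())
    # partition entropies H(A), H(B)
    ha = -sum((c / n) * math.log(c / n) for c in ca.values())
    hb = -sum((c / n) * math.log(c / n) for c in cb.values())
    if ha == 0 and hb == 0:   # both partitions trivial, hence identical
        return 1.0
    return 2 * mi / (ha + hb)
```

Identical partitions (up to relabelling) score 1, while independent partitions score 0.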
\section{Conclusion}
We propose a new local community detection algorithm, G-CN,
which is based on identifying borderlines of communities using boundary nodes in the network.
It is a local algorithm that is able to run on very large
networks with low execution times.
It can identify communities with high quality,
regardless of the network size.
On networks with subtle community structure,
it outperforms the other algorithms.
On these networks,
Infomap, LPA, and LPAc\ merge all the nodes into a single community.
This is due to the heuristics of these algorithms,
which lose granular structures and
fail to identify communities for certain kinds of networks.
Our approach, in contrast, preserves granular
communities by focusing on node similarity,
even when a node has many dissimilar neighbors
and only a few similar ones.
It does not force small communities to join
a giant component.
Our algorithm performs successfully
on generated networks with planted
community structure,
i.e.,
where the ground truth is known.
However, on real-life networks,
where the ground truth is created from meta-data,
all of the algorithms in the benchmark find different results.
This may be due to the fact that
the meta-data does not reflect the actual ground-truth
communities, or that the meta-data captures
different aspects of the network structure, as
discussed in the work of Peel et al.~\cite{Peel2017}.
With its local approach, G-CN\ is scalable
and suitable for distributed and parallel processing
(\emph{we have not
implemented a parallel version for this paper}).
The community detection task can be split into separate subtasks
on many computation devices (each holding the necessary piece of network data),
which will enable
real-time community detection on very large networks.\\
The source code of the algorithm is available at
\url{https://github.com/murselTasginBoun/CDBN}
\section*{Acknowledgments}
Thanks to
Mark Newman,
Vincent Blondel and
Martin Rosvall
for the source codes of their community detection algorithms.
Thanks to
Mark Newman,
Jure Leskovec and
Vladimir Batagelj
for the network datasets.
This work was partially supported
by the Turkish State Planning Organization (DPT) TAM Project (2007K120610).
In this paper we consider the following family of nonlinear Fokker--Planck equations
\begin{align}\label{eq:befp}
\partial_t f & = \Delta_v f+\mathrm{div}_v(v f(1+ f^\gamma)),\quad t>0,\;v\in\mathbb{R}^d,
\\\nonumber f(0,\cdot) & = f_0\ge0,
\end{align}
where $\gamma>0$ is a fixed parameter and $f=f(t,v)\ge0$.
We are particularly interested in the case $\gamma=1$,
in which equation~\eqref{eq:befp} is known as the \textit{Kaniadakis--Quarati model} for bosons (KQ).
It was introduced by Kaniadakis and Quarati~\cite{kaniadakis_classical_1994} as a model for quantum particles following Bose--Einstein statistics, obtained by adapting accordingly the transition probability rates in the kinetic model.
\paragraph{Physical background (\texorpdfstring{$\gamma=1$}{gamma=1}).}
KQ differs from the linear Fokker--Planck equation through the additional factor $(1+ f)$ in the drift term. This factor, which renders the equation nonlinear, arises from the assumption of
indistinguishability of identical quantum particles. Indeed, in contrast to classical mechanics, in a quantum system of identical and indistinguishable particles, the presence of particles in a given energy state influences the probability of further quantum particles joining that state. Here we are interested in systems of bosons, whose wave function is symmetric with respect to permutations of particles. This results in an increase in the transition probability, which is encoded, in the continuum model, in the extra factor $(1+f)$. For KQ, $d=3$ is the physically most interesting space dimension. In this case the problem exhibits a finite critical mass $m_c$ above which condensates are expected to emerge in finite time; see below for more details. However, little is known in the literature about the possible formation of condensates in 3D KQ.
\paragraph{Variational structure and steady states.}
Equation~\eqref{eq:befp} has a natural \textit{entropy functional}, given by
\begin{align*}
\mathcal{H}(f):=\int \left(\frac{|v|^2}{2}f+\Phi(f)\right)\,\d v,
\end{align*}
where $\Phi(f):=\frac{1}{\gamma}\int_0^f\log\left(\frac{s^\gamma}{1+s^\gamma}\right)\d s$ and thus $\Phi''(f)=1/h(f)$ for $h(s):=s(1+s^\gamma)$.
Indeed, formally, equation~\eqref{eq:befp} can be rewritten as
\begin{align}\label{eq:gradflow}
\partial_tf = \nabla\cdot\left(h(f)\nabla\frac{\delta \mathcal{H}}{\delta f}(f)\right),
\end{align}
where $\frac{\delta \mathcal{H}}{\delta f}$ denotes the variational derivative of $\mathcal{H}$.
Thus, for any sufficiently regular, positive (and hence mass conserving) solution $f=f(t,v)$ of eq.~\eqref{eq:befp}, one obtains the \textit{entropy dissipation identity}
\begin{align}\label{eq:ediss}
\frac{\d}{\d t}\mathcal{H}(f)=-\int h(f)\left|\nabla\frac{\delta\mathcal{H}}{\delta f}(f)\right|^2\,\d v.
\end{align}
Notice, however, that due to the presence of the (quantum correction) term $s^{\gamma}$ in the definition of $h(s)$, equation~\eqref{eq:gradflow} is not a gradient flow of the functional $\mathcal{H}$ with respect to the classical Wasserstein metric.
The mobility $h(s)$ associated to the nonlinear continuity equation~\eqref{eq:gradflow}
is convex leading to well-known issues of ill-defined Wasserstein-like metrics to render rigorous the gradient flow structure~\cite{DNS} in contrast to the Fermi--Dirac case \cite{CLR,CLSS}.
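For later reference we note that in the KQ case $\gamma=1$ the function $\Phi$ can be integrated explicitly:
\begin{align*}
\Phi(f)=\int_0^f\log\left(\frac{s}{1+s}\right)\d s=f\log f-(1+f)\log(1+f),
\end{align*}
which is the familiar Bose--Einstein entropy density.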
We observe that, given a sufficiently regular positive function $f$, the RHS of equation~\eqref{eq:ediss} is strictly negative unless $\nabla\frac{\delta\mathcal{H}}{\delta f}(f)=0$. The regular solutions of this equation are henceforth referred to as the \textit{steady states} associated with problem~\eqref{eq:befp}. They are explicitly given by
\begin{align}\label{eq:ss}
f_{\infty,\theta}(v)=\left(\mathrm{e}^{\gamma(\frac{|v|^2}{2}+\theta)}-1\right)^{-1/\gamma},\quad\theta\ge0.
\end{align}
Notice that $f_{\infty,\theta}$ is smooth and integrable for $\theta>0$, and the family $\{f_{\infty,\theta}\}$ is strictly ordered and approaches $f_c:=f_{\infty,0}$ from below as $\theta\searrow0$. Furthermore, letting $m_c:=\int f_c$, the map $(0,\infty)\ni\theta\mapsto m_\theta:=\int f_{\infty,\theta}\in(0,m_c)$ is a bijection, and $m_c<\infty$ if and only if $\gamma>\frac{2}{d},$ i.e.\;if and only if the problem is $L^1$-supercritical. While $f_{\infty,\theta}$ is the unique minimiser of $\mathcal{H}$ among non-negative integrable functions of mass $m=m_\theta$, for $m>m_c$ the problem of minimising $\mathcal{H}$ under mass constraint does not have a regular solution.
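The condition $\gamma>\frac{2}{d}$ for $m_c<\infty$ can be read off from the local behaviour of $f_c$ at the origin:
\begin{align*}
f_c(v)=\left(\mathrm{e}^{\gamma|v|^2/2}-1\right)^{-1/\gamma}\sim\left(\frac{2}{\gamma}\right)^{1/\gamma}|v|^{-\frac{2}{\gamma}}\quad\text{as }|v|\to0,
\end{align*}
so that $\int_{B(0,1)}f_c\,\d v\simeq\int_0^1r^{d-1-\frac{2}{\gamma}}\,\d r$, which is finite if and only if $\frac{2}{\gamma}<d$; since $f_c$ decays like a Gaussian as $|v|\to\infty$, integrability is decided at the origin.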
Since $\Phi$ is sublinear at infinity, the natural extension $\mathcal{\widetilde H}$ of the entropy functional to the set of finite non-negative Borel measures $\mathcal{M}^+_b$ is given by
$$
\mathcal{\widetilde H}:\quad\mu\mapsto\int\frac{|v|^2}{2}\,\mu(\d v)+\int\Phi(f)\,\d v,
$$
where $f$ denotes the density of the absolutely continuous part of $\mu$. The extension
is convex and lower-semicontinuous with respect to weak-star convergence in $\mathcal{M}^+_b$~\cite{demengel_convex_1984,abdallah_minimisation_2011}. In~\cite{abdallah_minimisation_2011} it is shown via an explicit calculation that the extended functional has a unique minimiser among finite non-negative measures of mass $m>m_c$, which is given by
\begin{align*}
f_c\cdot\mathcal{L}^d+(m-m_c)\delta_0.
\end{align*}
The above comments on the entropy functional and the steady states of equation~\eqref{eq:befp} apply to the problem posed on the whole space $\mathbb{R}^d$ (assuming sufficient decay as $|v|\to\infty$) as well as to the problem on a centred ball $B(0,R_1)$ subject to no-flux boundary conditions.
\paragraph{Dynamics of the Kaniadakis--Quarati model.}
As noted in~\cite{carrillo_finite-time_2019}, in the $L^1$-subcritical case, $d=1$, KQ is globally well-posed in the classical sense for sufficiently regular initial data, and solutions converge to equilibrium at an exponential rate~\cite{carrillo_1d_2008}. In the $L^1$-critical case, $d=2$, solutions are also globally regular and converge to equilibrium~\cite{canizo_fokkerplanck_2016}---with an exponential rate in the spatially isotropic case $f(t,v)=g(t,|v|)$. The approach in~\cite{canizo_fokkerplanck_2016} exploits the fact that $2$D KQ in isotropic coordinates can be transformed to a linear Fokker--Planck equation, which leads to explicit solutions also for the nonlinear equation.
For $3$D KQ Toscani~\cite{toscani_finite_2012} proved via contradiction the existence of solutions blowing up in finite time. Finite-time blow-up in this reference is obtained for any solution of sufficiently large mass $m$ (above a technical threshold far larger than the critical mass), but also for solutions of arbitrarily small mass provided they are initially sufficiently concentrated near the origin.
Formal results on the dynamics of isotropic solutions to 3D KQ based on matched asymptotic expansions have been obtained in~\cite{sopik_dynamics_2006}. Our numerical simulations will qualitatively confirm some of the main findings in~\cite{sopik_dynamics_2006}, which suggests that the dynamics depicted in this reference give a good hint at the typical behaviour of solutions.
Our numerical experiments will, however, also indicate that the dynamics may, in general, display a richer variety of phenomena. The formal considerations in~\cite{sopik_dynamics_2006} rely on the assumption of a sufficiently spread out initial datum $f_0$. We would also like to emphasize that in contrast to~\cite{sopik_dynamics_2006} our scheme allows for a numerical study beyond the first blow-up time.
\paragraph{$L^1$-supercritical Fokker--Planck model for bosons in 1D.} The one-dimensional case of equation~\eqref{eq:befp} with $\gamma>2$ was recently studied in the ref.~\cite{carrillo_finite-time_2019}, both on the entire line as well as on a centred interval subject to zero-flux boundary conditions.
We would like to point out that the successful numerical experiments reported in the present manuscript,
which are based on the equation for the pseudo-inverse cumulative distribution function
\begin{align}\label{eq:defPsI}
u(x) = \inf\left\{r:\int_{\{r'\le r\}}f(r')\,\mathrm{d} r'\ge x\right\},\quad x\in(0,\|f\|_{L^1}),
\end{align}
of the original density $f=f(t,\cdot)$ (cf.~equation~\eqref{eq:equ} in Section~\ref{sec:icdf} below)
triggered the rigorous analysis in~\cite{carrillo_finite-time_2019}, which is itself based on this reformulation.
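To illustrate the change of variables, the pseudo-inverse~\eqref{eq:defPsI} of a discretised density can be evaluated in a few lines. The following Python sketch (purely illustrative, with hypothetical names; not the code used for the simulations below) does so on a grid:

```python
import numpy as np

def pseudo_inverse_cdf(v, f, x):
    """Pseudo-inverse of the cdf of a sampled density f >= 0 on the grid v:
    u(x) = inf{ r : integral of f over (-R1, r] >= x }."""
    # cumulative mass M(v) via the trapezoidal rule
    M = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(v))))
    # first grid point where the accumulated mass reaches x
    idx = np.searchsorted(M, x, side='left')
    return v[np.clip(idx, 0, len(v) - 1)]
```

Monotonicity of $u$ in the mass variable is automatic, and a Dirac mass of $f$ at a point shows up as a flat piece of $u$.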
Let us briefly review those results of~\cite{carrillo_finite-time_2019} which are of relevance for the present paper: the authors obtain global-in-time existence and uniqueness of solutions $u$ in the viscosity sense for initial data corresponding to sufficiently regular positive densities $f_0$ of finite mass $m$. These solutions are non-decreasing in the mass variable, here denoted by $x$. It is further shown that such solutions $u=u(t,x)$ are smooth away from $\{u=0\}$ and that
the push-forward measure $u(t,\cdot)_\#\mathcal{L}^1_{|(0,m)}=:\mu(t)\in\mathcal{M}^+_b$, generalising the problem in the original variables, has the form
\begin{align*}
\mu(t) = f(t,\cdot)\cdot\mathcal{L}^1+ x_p(t)\delta_0,
\end{align*}
where the map $t\mapsto x_p(t):=\mathcal{L}^1(\{u(t,\cdot)=0\})$ is continuous and the function $f(t,\cdot)\in L^1_+$ is smooth away from the origin, where it satisfies equation~\eqref{eq:befp} in the pointwise sense.
Moreover, whenever the density $f(t,\cdot)$ is unbounded at the origin, its spatial blow-up profile has the form
\begin{align}\label{eq:profile}
f(t,v)=f_c(v)\cdot\left(1+O(|v|)\right)=c_\gamma|v|^{-\frac{2}{\gamma}}\left(1+O(|v|)\right)\quad\text{as }|v|\to0,\qquad c_\gamma=\left(2/\gamma\right)^\frac{1}{\gamma}.
\end{align}
See~\cite{hopf_thesis} for a refinement of~\eqref{eq:profile}. The above framework makes it possible to extend entropy methods globally in time and to deduce convergence to the measure of the same mass which minimises the entropy. In the case $m>m_c$, the minimiser has a positive Dirac mass at the origin, and the solution must eventually have a non-trivial condensate component.
On the other hand, if $m<m_c$, the minimiser is smooth, and from the bound~\eqref{eq:profile} it can easily be deduced that in this case there exists $T\in(0,\infty)$ such that $x_p(t)=0$ for all $t\ge T$ (see~\cite[Cor.~3.16]{carrillo_finite-time_2019}).
From this observation combined with an adaptation of the finite-time blow-up argument in~\cite{toscani_finite_2012} one infers the existence of solutions whose condensate component $x_p=x_p(t)$ is not identically zero but compactly supported in $(0,\infty)$ (see~\cite[Cor.~3.18]{carrillo_finite-time_2019}). We refer to this phenomenon as a \textit{transient condensate}. Below we will see that the $L^1$-supercritical case in 1D of the family of nonlinear Fokker-Planck equations~\eqref{eq:befp}, corresponding to~$\gamma>2$, appears to be a good
caricature for the dynamical behaviour of the physically interesting case of the 3D KQ model in radial coordinates.
Let us finally mention that equation~\eqref{eq:befp} in 1D and without the diffusion term was analysed in \cite{CDT} showing that condensates always form in finite time and that their mass is increasing in time so that, once formed, they never dissolve. The results reported here and in \cite{carrillo_finite-time_2019} show the genuine countereffect of linear diffusion on condensation leading to transient condensates and non-monotonic behaviour of the condensate part $x_p(t)$, proved in one dimension for $\gamma>2$ and conjectured in the three dimensional case for $\gamma=1$.
\paragraph{Main numerical findings.}
The main purpose of this work is to provide strong numerical evidence for the existence of solutions to 3D KQ forming a Bose--Einstein condensate in finite time. Our numerical results suggest that any rotationally symmetric solution above the critical mass will eventually have a non-trivial condensate component. From our simulations a rather clear picture of the dynamical properties of KQ in $3$D in the isotropic case emerges: the long-time asymptotics are identified, towards which the numerical solution converges in entropy at an exponential rate. Numerical evidence is also provided for the possibility that the condensed part fails to be monotonic in time and may even dissolve completely.
Before investigating KQ in $3$D, we will apply the numerical scheme to the caricature of the $L^1$-supercritical case in 1D, i.e.\;\eqref{eq:befp} with $\gamma>2$, in order to numerically reproduce the analytical results established rigorously in~\cite{carrillo_finite-time_2019}, see Section~\ref{ssec:sim1D}. Since non-stationary explicit solutions are not available in $1$D, the $1$D scheme (in the $L^1$-supercritical case) will be validated by numerically analysing the convergence behaviour under mesh refinement with respect to a reference solution on a very fine mesh.
Concerning the scheme for rotationally symmetric solutions of KQ we perform a validation in 2D, where explicit solutions are available.
\paragraph{Numerical scheme.} The proposed numerical scheme is based on the variational formulation of equation~\eqref{eq:befp} using a mass transportation Lagrangian approach. It is motivated by the approach in~\cite{blanchet_convergence_2008,carrillo_numerical_2016}, where the gradient flow with respect to the Wasserstein distance is expressed in terms of the inverse of the cumulative distribution functions. Inherent in this approach is the conservation of mass property, which follows by construction.
We would like to emphasize that concerning the Kaniadakis--Quarati model in 3D studied in the present work, far less is known rigorously as compared to the equations simulated in~\cite{blanchet_convergence_2008,carrillo_numerical_2016} (porous medium equation, critical Keller--Segel) which have been exhaustively studied in the literature. This is partially explained by the fact that the variational structure for this problem cannot directly be exploited by resorting to established tools from optimal transportation theory.
%
In fact, the potential difficulty in our situation lies in the circumstance that we do not have the Wasserstein gradient flow structure in a rigorous sense. We will, however, see that this precise structure is not required and our proposed scheme will be shown to preserve in particular the entropy decay property (rigorously in 1D and 2D for the semidiscrete case, see~Section~\ref{ssec:semidiscrete}). Our numerical scheme is able to go beyond the first blow-up time and allows for exploring the qualitative behaviour after blow-up: blow-up profile, transient condensates and entropy decay. These good numerical properties, consistent in 1D with the existing theory, reassure us in our numerical findings in Section~\ref{ssec:sim3D} concerning the 3D isotropic case. The fact that our numerical experiments clearly support the conjecture that the qualitative behaviour of condensates in 1D proven in \cite{carrillo_finite-time_2019} is expected in the most realistic case of 3D radially symmetric initial data can be regarded as the main contribution of this paper.
There has been an increased interest in related structure preserving Lagrangian schemes in the last years, see for example~\cite{gosse_toscani_2006,blanchet_convergence_2008, carrillo_moll_2009, matthes_osberger_2014, CB16, carrillo_numerical_2016,carrillo2017blob,carrillo_lagrangian_2018}. The numerical analysis of these schemes is still underdeveloped with partial results in~\cite{blanchet_convergence_2008,matthes_osberger_2014,CG,carrillo2017blob,carrillo_lagrangian_2018}.
Let us finally point out that free energy decaying numerical schemes in the original variables based on finite volume schemes have been proposed in~\cite{CCH15,PZ18,ABPP,BCH} and references therein. These schemes fail to go beyond the blow-up time since they cannot resolve the presence of Dirac concentrations while accurately following the evolution of the smooth part of the solution.
\paragraph{Comparison with other models for Bose--Einstein condensation.}
There are many other models in the literature which have been suggested in the context of Bose--Einstein condensation. Of particular interest (due to similar phenomena) is a certain class of kinetic equations generally referred to as quantum Boltzmann equations, which, in contrast to classical Boltzmann equations, are derived using Bose--Einstein statistics.
Let us note that for $\gamma=1$ the steady states~\eqref{eq:ss} coincide with the classical Bose--Einstein distributions and the functional $\int\Phi(f)\,\d v$ agrees (up to a sign convention) with the entropy associated to the homogeneous Boltzmann--Nordheim equation for bosons, see~\cite{escobedo_finite_2015,huang_statistical_1963}. In contrast to equation~\eqref{eq:befp}, the Boltzmann--Nordheim equation formally preserves the kinetic energy $\int\frac{|v|^2}{2}f\,\d v$.
In the last two decades, significant progress has been made in the analysis of the Boltzmann--Nordheim equation in the homogeneous and velocity isotropic case~\cite{escobedo_quantum_2001, escobedo_asymptotic_2004,
lu_boltzmann_2013, escobedo_finite_2015, escobedo_blow_2014, bandyopadhyay_blow-up_2015, lu_2016, lu_strong_2018}.
To roughly summarise the main results, the authors of the cited references are able to establish the existence of generalised mass- and energy-conserving solutions, which form a Bose--Einstein condensate in finite time and converge, in some sense and under certain conditions, to the entropy minimiser in the large-time limit.
The results in the present paper suggest that in the isotropic case the dynamics of condensation in 3D KQ is in some aspects similar to the one of the Boltzmann--Nordheim equation as described rigorously in the references~\cite{escobedo_finite_2015, escobedo_blow_2014, bandyopadhyay_blow-up_2015,lu_2016, lu_strong_2018}.
We note that many questions regarding the nature of singularities
in the Boltzmann--Nordheim equation are still open.
Numerical schemes to approximate the Boltzmann--Nordheim equation or quantum Boltzmann equation for bosons have also been devised and used to understand their qualitative properties, see \cite{MP,BPM,HLP,FHJ} and the references therein. However, only few numerical studies attempt to go beyond the first blow-up time (where the velocity distribution ceases to be bounded). In \cite{semikoz_condensation_1997,semikoz_kinetics_1995,lacaze_dynamical_2001,spohn_kinetics_2010} the authors observe that at the first blow-up time the solution has an integrable power law singularity near the lowest energy state and, in general, there will be a non-trivial flux of particles entering that state. The hypothesis of mass conservation then leads to a law for the time evolution of the condensate component, resulting in a coupled system. The methods do not appear to allow to track in a precise way the evolution after blow-up. Our approach is very different as it does not require distinguishing between the times where the velocity distribution is bounded and the times where it is unbounded, and enables a detailed study of the dynamics of singular solutions. Let us finally mention that other descriptions have been used both analytically and numerically to study the behaviour beyond condensation in the quantum Boltzmann equation. In some of them the kinetic equation is coupled to a nonlinear Schr\"odinger equation (cubic, Gross--Pitaevski) modelling the evolution of the condensate, see \cite{ST,BC,B,escobedo_turbulence_2015} and the references therein for further details.
\paragraph{Plan of the manuscript.}
The remaining part of this manuscript is structured as follows: in Section~\ref{sec:icdf} we discuss the numerical scheme for the 1D caricature of the 3D KQ given by the $L^1$-supercritical 1D Fokker--Planck equation \eqref{eq:befp} with $\gamma>2$ and its generalisation to the radial case in higher dimensions with particular focus on the KQ model, $\gamma=1$. We also briefly discuss the anisotropic case.
Section~\ref{ssec:sim1D} shows that the proposed numerical scheme does capture the main behaviour after blow-up in the 1D case: condensation, transient condensates for subcritical initial mass and convergence towards equilibrium. Section~\ref{ssec:val_2D} validates the discretisation of the radial case by comparing to the explicit solutions given in~\cite{canizo_fokkerplanck_2016}.
In Section~\ref{ssec:sim3D} we present the simulations of 3D~KQ, which
allow us to conclude that the caricature given by the $L^1$-supercritical Fokker--Planck equation~\eqref{eq:befp} in 1D is, numerically, essentially correct for the 3D KQ model with radial initial data.
\section{Numerical method}\label{sec:icdf}
Since we want our scheme to be able to deal with Dirac masses at the origin, our simulations are not based on the formulation~\eqref{eq:befp}; instead we follow and generalise the ansatz in the ref.~\cite{carrillo_finite-time_2019} considering the equation satisfied by the (pseudo-) inverse cumulative distribution function (cdf) of $f(t,\cdot)$. In higher dimensions $d>1$, assuming rotational symmetry, we will consider the inverse of the \textit{radial} cdf of $f(t,\cdot)$ (i.e.\;of the partial mass function)---appropriately normalised.
As in the first part of~\cite{carrillo_finite-time_2019}, we consider our equations posed on a bounded domain, more precisely on the centred ball $B(0,R_1)$ of radius $R_1>0$ with zero-flux boundary conditions.
\subsection{Change of variables}
\subsubsection{One-dimensional case}\label{sssec:1d}
Here, we consider the case $d=1$ and assume that $\gamma>2$, which represents the $L^1$-supercritical regime.
The total mass of the initial datum $f_0$ is denoted by $m$.
Then, the equation satisfied by the pseudo-inverse $u(t,\cdot)$ of the \textit{cumulative distribution function} (cdf)
\begin{align*}
M(t,v)=\int_{-R_1}^v f(t,w)\,\d w,\quad v\in[-R_1,R_1],
\end{align*}
of $f(t,\cdot)$ (cf.~\eqref{eq:defPsI}) formally reads
\begin{align}\label{eq:equ}
\partial_t u=(\partial_x u)^{-2}\partial_x^2 u-u(1+(\partial_x u)^{-\gamma}),
\end{align}
where $x\in(0,m)$ denotes the mass variable.
Formally, $f$ is related to $u$ by the identity
\begin{align*}
\partial_xu=\frac{1}{f(u)}.
\end{align*}
Upon multiplying eq.~\eqref{eq:equ} by the factor $(\partial_x u)^\gamma$, it can be rewritten as
\begin{align}\label{eq:invBefp1D}
(\partial_x u)^{\gamma}\partial_t u-\frac{1}{\gamma-1}\partial_x\left((\partial_x u)^{\gamma-1}\right)+u((\partial_x u)^{\gamma}+1)=0.
\end{align}
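Indeed, for $\gamma\neq1$ one has the elementary identity
\begin{align*}
\frac{1}{\gamma-1}\partial_x\left((\partial_x u)^{\gamma-1}\right)=(\partial_x u)^{\gamma-2}\partial_x^2u,
\end{align*}
so that~\eqref{eq:invBefp1D} is precisely eq.~\eqref{eq:equ} multiplied by $(\partial_x u)^\gamma$.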
While these new coordinates are generally known to be (numerically) favourable when investigating mass concentration phenomena in $1$D, a particular feature of equation~\eqref{eq:invBefp1D} is that the function $u\equiv0$, which at the level of $f$ corresponds to a Dirac delta at the origin, is an actual solution. Since mass conservation is a crucial feature of our Fokker--Planck model, the natural boundary condition for eq.~\eqref{eq:befp} reads $\partial_vf+vf(1+f^\gamma)=0$ on $(0,\infty)\times\{-R_1,R_1\}$. It enforces the flux of particles through the boundary to be zero. Formally, at the level of $u$, this means that the RHS of eq.~\eqref{eq:equ} is zero on $(0,\infty)\times\{0,m\}$. Hence, if the solution $u$ is $C^{1,2}_{t,x}$ near and up to the boundary, this becomes $\partial_tu=0$ or, equivalently,
\begin{align*}
u=u_0\qquad\text{on }(0,\infty)\times\{0,m\}.
\end{align*}
This is the form we use in our numerical scheme.
It corresponds to the Dirichlet conditions $u(t,0)=-R_1$, $u(t,m)=R_1$ for $t>0$.
As explained in Section~\ref{sec:introSim}, given a radius $R_1$ and a mass $m=\|f_0\|_{L^1(-R_1,R_1)}$ there exists a unique $\mu_\infty\in\mathcal{M}^+_b([-R_1,R_1])$ of mass $m$ which minimises the entropy $\mathcal{\widetilde H}$. At the level of $u$, we denote this minimiser by $u_\infty$. We further let $H(u):=\mathcal{H}(f)$ resp.~$\mathcal{\widetilde H}(\mu)$, where $\mu=u_\#\mathcal{L}^1$ is the push-forward measure of the Lebesgue measure on $[0,m]$ under the map $u$ and will, in places, abbreviate $H_\infty:=H(u_\infty)=\mathcal{\widetilde H}(\mu_\infty)$. The dependence of $u_\infty$ on $R_1$ and $m$ will be omitted.
For later reference, let us observe that $H(u)$ is formally given by
\begin{align}\label{eq:Hu}
H(u)=\int_{(0,m)} \left(\frac{|u|^2}{2}+\Psi(u_x)\right)\mathrm{d} x,
\end{align}
where the function
\begin{align}\label{eq:defPsi}
\Psi(s):=s\Phi(1/s)\quad\text{ is convex with }\quad \Psi''(s)=s^{-3}\Phi''(1/s)=\frac{1}{s^3 h(1/s)}.
\end{align}
\subsubsection{Higher dimensions -- isotropic case}\label{sssec:eqhd}
For isotropic solutions $f(t,v)=g(t,|v|),$ $v\in\mathbb{R}^d$, we can perform a similar transformation in higher dimensions.
In radial form, equation~\eqref{eq:befp} reads
\begin{align}\label{eq:radBefp}
\partial_tg = r^{1-d}\partial_r\left(r^{d-1}\partial_rg+r^dg(1+g^\gamma)\right), \;t,r>0.
\end{align}
As a first ansatz one might try to consider the equation for the (pseudo-) inverse $R(t,z)$ of
the \textit{radial cdf} $\bar M(t,r)=\int_0^{r}g(t,s)s^{d-1}\,\d s$. However, for bounded densities $f$ the function $\bar M$ is $O(r^d)$ as $r\to0$, implying that $R(t,\cdot)$ is at most $1/d$-H\"older near $z=0$ and $\partial_zR\gtrsim z^{1/d-1}\to\infty$ as $z\searrow0$, whenever $d>1$.
We therefore consider the normalised version $N(t,s)=\bar M(t,s^{1/d})$ or, equivalently,
\begin{align*}
N(t,s)=\frac{1}{d}\int_0^{s}g(t,\sigma^{1/d})\,\d\sigma,
\end{align*}
which satisfies $\partial_sN(t,s)=\frac{1}{d}g(t,s^{1/d})$, and let $S(t,\cdot)$ denote the pseudo-inverse of $N(t,\cdot)$, so that $S=R^d$. From the formal relation $N(t,S(t,z))=z$ we deduce (omitting the time argument)
\begin{align}\label{eq:f-S}
\partial_zS=\frac{d}{g(R)}.
\end{align}
Then, the equation~\eqref{eq:radBefp} for $g$ leads to the following equation for $S$:
\begin{align*}
\frac{1}{d}\partial_tS - d\frac{S^{2-2/d}}{(\partial_zS)^2}\partial_z^2S+S(1+d^\gamma(\partial_zS)^{-\gamma})=0.
\end{align*}
Since we want our scheme to be able to deal with condensates, i.e.~$S(t,\cdot)\equiv0$ on some subinterval $(0,z(t))$, we multiply this equation by $(\partial_zS)^\gamma$ to obtain
\begin{align}\label{eq:Snonreg}
(\partial_zS)^\gamma\tfrac{1}{d}\partial_tS - d\cdot S^{2-2/d}(\partial_zS)^{\gamma-2}\partial_z^2S+S((\partial_zS)^{\gamma}+d^\gamma)=0.
\end{align}
Notice that if $\gamma\in[1,2)$, the viscosity term has a factor which becomes unbounded when $S$ forms a condensate. We therefore consider
for a small parameter $0<\varepsilon\ll1$
the following regularisation
\begin{align*}
(\partial_zS)^\gamma\tfrac{1}{d}\partial_tS - d\cdot S^{2-2/d}(\partial_zS+\varepsilon)^{\gamma-2}\partial_z^2S+S((\partial_zS)^{\gamma}+d^\gamma)=0
\end{align*}
or, equivalently,
\begin{align*}
\begin{cases}
(\partial_zS)^\gamma\tfrac{1}{d}\partial_tS - \frac{d}{\gamma-1}\cdot S^{2-2/d}\frac{\d}{\d z}(\partial_zS+\varepsilon)^{\gamma-1}+S((\partial_zS)^{\gamma}+d^\gamma) =0, \;\;&\text{ if }\gamma>1,\\[3mm]
(\partial_zS)^\gamma\frac{1}{d}\partial_tS - d\cdot S^{2-2/d}\frac{\d}{\d z}\log(\partial_zS+\varepsilon)+S((\partial_zS)^{\gamma}+d^\gamma) =0,\;\;&\text{ if }\gamma=1.
\end{cases}
\end{align*}
We are mostly interested in the KQ model (where $\gamma=1$), and will thus focus on the equation
\begin{align*}
d^{-1}\partial_zS\partial_tS - d S^{2-2/d}\frac{\d}{\d z}\log(\partial_zS+\varepsilon)+S(\partial_zS+d)=0,
\end{align*}
where $d=2,3$. Notice that a positive $\varepsilon$ decreases the strength of diffusion significantly when $\partial_zS\lesssim \varepsilon$.
In order to counterbalance this effect, which may potentially lead to numerical artefacts when investigating the expected phenomenon of condensation, we propose an artificial viscosity type regularisation of the form
\begin{align}\label{eq:2regS3D}
d^{-1}\partial_zS\partial_tS - d (S+\delta)^{2-2/d}\frac{\d}{\d z}\log(\partial_zS+\varepsilon)+S(\partial_zS+d)=0,
\end{align}
where $0<\delta\ll1$ is a small parameter.
Below $\bar m$ (resp.\;$\bar m_c$) denotes the total mass of the initial datum $f_0$ (resp.\;of $f_c$) on $B(0,R_1)$ \textit{multiplied by the factor} $\frac{1}{|\partial B(0,1)|}$.
Then, as in the $1$D case, the appropriate boundary conditions for equation~\eqref{eq:2regS3D} are $S(t,0)=0$ and $S(t,\bar m)=R_1^d$.
As in Section~\ref{sssec:1d} we denote by $S_\infty=S_\infty(R_1,\bar m)$ the pseudo-inverse normalised radial cdf of the unique (isotropic) minimising measure in $\mathcal{M}^+_b(\overline{B}(0,R_1))$ corresponding to the choice $(R_1,\bar m)$ of parameters, and generally let $H_d(S):=\mathcal{\widetilde H}(\mu)$, where $\mu$ is the unique isotropic measure in $\mathcal{M}^+_b(\overline{B}(0,R_1))$ satisfying $\mu(\overline{B}(0,r))=\nu([0,r^d])\cdot |\partial B(0,1)|$ and $\nu$ denotes the measure associated with the generalised inverse of $S$. We also abbreviate $H_\infty:=H_d(S_\infty)$ and $H(t):=H_d(S(t))$.
\subsubsection{Higher dimensions -- anisotropic case}
Let us briefly indicate how one can perform a related change of variables in higher dimensions without radial symmetry. With this aim, one needs to consider vector-valued transformations $u(t,\cdot):U\to V$, $U,V\subset\mathbb{R}^d$,
which are formally related to the original density $f$ via
\begin{align*}
\det \nabla u(t,x)\cdot f(t,u)=1.
\end{align*}
Similarly to~\cite{evans_diffeo_2005,carrillo_moll_2009}, one finds that the system governing the evolution of $u=(u^1, \dots, u^d)^T$ can formally be written as
\begin{align*}
\quad\left[(\det \nabla u)^2\Psi''(\det \nabla u)\right]\partial_tu^i -\partial_{x_k}\left( \Psi'(\det \nabla u)(\mathrm{cof}(\nabla u))_k^i\right) + u^i = 0
\end{align*}
for $i=1,\dots,d$, where $\Psi$ is defined as in~\eqref{eq:defPsi}.
The entropy $H_{\mathrm{ani},d}(u)$ in the new variables takes the form
\begin{align*}
H_{\mathrm{ani},d}(u) = \int_U\left(\tfrac{1}{2}|u|^2+\Psi(\det\nabla u)\right)\,\mathrm{d} x.
\end{align*}
Observe that in the vectorial case $H_{\mathrm{ani},d}(u)$ is no longer convex but merely polyconvex in $\nabla u$.
This route could potentially allow one to analyse concentrations numerically without radial symmetry in higher dimensions,
as is the case in 2D for aggregation and Keller--Segel type problems close to the blow-up time~\cite{carrillo_numerical_2016}.
While this method deserves further exploration, we focus here on the isotropic case in order to capture the direct generalisation of the 1D behaviour in the realistic 3D setting.
\subsection{The semidiscrete scheme}\label{ssec:semidiscrete}
The scalar equations~\eqref{eq:invBefp1D} and~\eqref{eq:2regS3D} are discretised fully implicitly in time.
We let $\tau$ be the discrete time step and denote by $\{u^{n}\}_{n\in\mathbb{N}}$ the time-discrete solution of the implicit Euler discretisation of equation~\eqref{eq:invBefp1D}. More precisely, given a non-decreasing function $u^{n}$ satisfying $u^{n}(0)=-R_1$ and $u^{n}(m)=R_1$,
the problem for $u=u^{n+1}$ reads
\begin{align}\label{eq:invBefp1Dsemi}
\left(\partial_x u\right)^\gamma \frac{u-u^{n}}{\tau}
-\tfrac{1}{\gamma-1}\partial_x\left((\partial_x u)^{\gamma-1}\right)+u((\partial_x u)^{\gamma}+1)=0
\end{align}
subject to the Dirichlet boundary conditions $u^{n+1}(0)=-R_1, u^{n+1}(m)=R_1$.
Let us here make a short digression to explain the main difference, and the resulting difficulty, of the present problem in comparison with the Wasserstein gradient flows treated in \cite{blanchet_convergence_2008,carrillo_numerical_2016}.
Those works are based on the idea that the Wasserstein gradient flow of the entropy/free energy in the original variables is equivalent to an $L^2$ gradient flow for the problem in the $u$-variables.
Loosely speaking, the semidiscrete $L^2$ gradient flow for $H(u)$ reads as follows: given $\tilde u^n$ formally define $\tilde u^{n+1}$ as a solution of the problem
\begin{align*}
\tilde u^{n+1}\in\mathrm{arg}\inf_{\tilde u}\left\{\frac{1}{2\tau}\|\tilde u-\tilde u^n\|_{L^2}^2+H(\tilde u)\right\}.
\end{align*}
The associated Euler--Lagrange equations, $\frac{\tilde u-\tilde u^n}{\tau}=-\nabla_{L^2} H(\tilde u)$, read
\begin{align*}
\frac{\tilde u-\tilde u^n}{\tau}=-[-\partial_x(\Psi'(\tilde u_x))+\tilde u].
\end{align*}
To compare this with our problem,
we write eq.~\eqref{eq:invBefp1Dsemi} in the more concise equivalent form
\begin{align*}
u_x^2\Psi''(u_x) \frac{u-u^n}{\tau}=-[-\partial_x(\Psi'(u_x))+u],
\end{align*}
which suggests that a gradient flow structure is retained in some sense.
Indeed, as will be shown below, the semidiscrete numerical scheme retains an important property, namely the monotonicity of the entropy.
Recall that in 1D the entropy $H(u)$ in the $u$-variables (see~\eqref{eq:Hu}) is convex in the classical sense, and
it is well-known that the implicit Euler scheme applied to a gradient flow of a convex functional satisfies the semidiscrete entropy inequality $H(\tilde u^{n+1})\leq H(\tilde u^{n})$ for all $n$.
In our situation, thanks to the convexity of the integrand of $H$, the entropy decay along the sequence $\{u^n\}$ can be recovered by a simple estimate:
\begin{align*}
H(u)-H(u^n)&\le \int_{(0,m)} (u(u-u^n)+\Psi'(u_x)(u-u^n)_x)\,\mathrm{d} x
\\&= \int_{(0,m)} (u-\partial_x(\Psi'(u_x)))(u-u^n)\,\mathrm{d} x
\\&=-\tau\int_{(0,m)} u_x^2\Psi''(u_x)\Big|\frac{u-u^n}{\tau}\Big|^2\,\mathrm{d} x\le0.
\end{align*}
Here, we used the fact that in the above integration by parts the boundary terms vanish since, by construction, $u=u^{n}$ on $\partial(0,m)$. This shows the entropy decay property of the semidiscrete scheme~\eqref{eq:invBefp1Dsemi}: $H(u^{n+1})\leq H(u^{n})$ for all $n$. We note that similar properties are established, with a similar strategy of proof, for finite volume schemes of gradient flows \cite{BCH}.
\begin{remark}[Higher dimensions, isotropic case]
In higher dimensions the entropy $H_d(S)$, introduced in Section~\ref{sssec:eqhd}, takes the form (see also~\eqref{eq:f-S})
\begin{align}\label{eq:HdS}
H_d(S) = \int \left(\tfrac{1}{2}S^\frac{2}{d}+ \Psi_d(\partial_zS)\right)\,\mathrm{d} z,
\end{align}
where $ \Psi_d(s) =\Psi(\tfrac{s}{d})$ is again convex.
If $d=2$, thanks to convexity, the implicit Euler discretisation of eq.~\eqref{eq:Snonreg} can be shown to preserve the entropy decay by arguing as in the 1D case. In higher dimensions, $d>2$, this argument breaks down because the kinetic part of the entropy fails to be a convex function of $S$. Notice, however, that convexity in the highest-order term, $\partial_zS$, is maintained.
\end{remark}
\subsection{The fully discrete scheme}
The semidiscrete nonlinear system \eqref{eq:invBefp1Dsemi} is discretised using finite differences and solved by the Newton--Raphson method.
In the one dimensional case, the finite difference approximation in space is chosen in such a way as to preserve the equation's symmetry, viz.
\small
\begin{align}\label{eq:discreteu}
(u_{i+1}^n-u_{i-1}^n)^\gamma(2h)^{-\gamma} \frac{u_i^n-u^{n-1}_i}{\tau}
- ((u_{i+1}^n-u_{i}^n)^{\gamma-1}-(u_{i}^n-u_{i-1}^n)^{\gamma-1})h^{-\gamma}(\gamma-1)^{-1}&
\\\nonumber +u_i^n((u_{i+1}^n-u_{i-1}^n)^\gamma(2h)^{-\gamma}+1)=0,&
\end{align}
\normalsize
for $i=1,\dots,N-1,$ complemented with the boundary conditions $u^n_0=u^0_0=-R_1$ and $u^n_N=u^0_N=R_1$.
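For concreteness, the residual of scheme~\eqref{eq:discreteu} at the interior nodes can be assembled in vectorised form as follows (a minimal Python sketch; the helper name \texttt{residual\_1d} is ours, and the code assumes a monotone iterate so that all fractional powers are taken of positive quantities):

```python
import numpy as np

def residual_1d(u, u_prev, h, tau, gamma):
    """Residual of the fully discrete scheme at the interior nodes i = 1, ..., N-1,
    for a monotone iterate u (all difference quotients positive)."""
    du_c = (u[2:] - u[:-2]) / (2.0 * h)   # centred difference (u_{i+1}-u_{i-1})/(2h)
    du_p = (u[2:] - u[1:-1]) / h          # forward difference  (u_{i+1}-u_i)/h
    du_m = (u[1:-1] - u[:-2]) / h         # backward difference (u_i-u_{i-1})/h
    time_term = du_c**gamma * (u[1:-1] - u_prev[1:-1]) / tau
    diff_term = -(du_p**(gamma - 1.0) - du_m**(gamma - 1.0)) / (h * (gamma - 1.0))
    drift_term = u[1:-1] * (du_c**gamma + 1.0)
    return time_term + diff_term + drift_term
```

For a stationary linear profile the time and diffusion terms vanish, leaving only the drift contribution, which provides a quick sanity check of the implementation.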
We use a similar full discretisation for~\eqref{eq:2regS3D}, viz.
\begin{multline*}
(S^n_{i+1}-S^n_{i-1})(2hd\tau)^{-1}(S^n_i-S^{n-1}_i)\\- d (S_i^n+\delta)^{2-2/d}(\log((S^n_{i+1}-S^n_{i})/h+\varepsilon)-\log((S^n_{i}-S^n_{i-1})/h+\varepsilon))/h
\\+S^n_i((S^n_{i+1}-S^n_{i-1})/(2h)+d)=0
\end{multline*}
for $i=1,\dots,N-1$,
where the boundary conditions are given by $S_0^n=S_0^0=0$ and $S_N^n=S_N^0=R_1^d$.
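The residual of the $S$-scheme is assembled analogously (again an illustrative Python sketch in our own notation; \texttt{eps} and \texttt{delta} denote $\varepsilon$ and $\delta$):

```python
import numpy as np

def residual_S(S, S_prev, h, tau, d, eps, delta):
    """Residual of the fully discrete S-scheme at the interior nodes,
    for a non-decreasing iterate S (difference quotients non-negative)."""
    dS_c = (S[2:] - S[:-2]) / (2.0 * h)
    dS_p = (S[2:] - S[1:-1]) / h
    dS_m = (S[1:-1] - S[:-2]) / h
    time_term = dS_c / d * (S[1:-1] - S_prev[1:-1]) / tau
    diff_term = -d * (S[1:-1] + delta)**(2.0 - 2.0 / d) \
                * (np.log(dS_p + eps) - np.log(dS_m + eps)) / h
    drift_term = S[1:-1] * (dS_c + d)
    return time_term + diff_term + drift_term
```

As in 1D, a stationary linear profile annihilates the time and diffusion terms, which can be used to check the implementation.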
\paragraph{Algorithm.}
Given $u^{n-1}$, the discrete approximation $u^{n}$ at the subsequent time point is computed using a Newton--Raphson iteration. The iteration is stopped as soon as the smallness condition $\|F_{\mathrm{NR}}(u^n,u^{n-1},h,\tau)\|_{l^2}<10^{-8}$ is satisfied, where $F_{\mathrm{NR}}(u^n,u^{n-1},h,\tau)_i$ is given by the LHS of equation~\eqref{eq:discreteu} multiplied by $h^\gamma$. We proceed similarly for $S$.
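A simplified version of this time stepping can be sketched as follows (illustrative Python with a dense finite-difference Jacobian; our actual implementation may differ in details such as the Jacobian assembly, and the monotone rearrangement anticipates the remark below):

```python
import numpy as np

def step_newton(u_prev, h, tau, gamma, tol=1e-8, max_iter=50):
    """One implicit Euler step: solve F(u) = 0 by Newton--Raphson,
    starting from the previous time level. Boundary values stay fixed."""
    def F(u):
        # residual of the fully discrete scheme, scaled by h**gamma as in the text
        du_c = (u[2:] - u[:-2]) / (2.0 * h)
        du_p, du_m = (u[2:] - u[1:-1]) / h, (u[1:-1] - u[:-2]) / h
        r = (du_c**gamma * (u[1:-1] - u_prev[1:-1]) / tau
             - (du_p**(gamma - 1.0) - du_m**(gamma - 1.0)) / (h * (gamma - 1.0))
             + u[1:-1] * (du_c**gamma + 1.0))
        return h**gamma * r
    u = u_prev.copy()
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # finite-difference Jacobian with respect to the interior unknowns
        n = len(u) - 2
        J = np.empty((n, n))
        for j in range(n):
            up = u.copy()
            up[j + 1] += 1e-7
            J[:, j] = (F(up) - r) / 1e-7
        u[1:-1] += np.linalg.solve(J, -r)
        # monotone rearrangement guards against tiny non-monotonicities
        u[1:-1] = np.sort(u[1:-1])
    return u
```

With a small time step the iterate starts close to the solution, and the iteration typically terminates after a few Newton steps.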
\begin{remark}
In the simulations exhibiting the numerically somewhat delicate condensation phenomenon, the inverse cdf becomes slightly non-monotonic during the Newton--Raphson iteration; the resulting fractional powers of negative quantities introduce very small imaginary parts into the above scheme and into the solution at the subsequent time step. In our actual code we therefore rearrange the approximation monotonically in each Newton--Raphson iteration. Alternatively, one can replace the first derivative $u_x$ by its absolute value $|u_x|$ and discretise and simulate the resulting equation.
In practice, the differences between the results using the first and the second option are negligible.
A similar statement applies to the higher-dimensional case, where we choose again the option of the monotonic rearrangement.
\end{remark}
\section{Numerical experiments}
In this section we describe the validation of our scheme, and present and discuss our numerical experiments.
\subsection{\texorpdfstring{$L^1$}{}-supercritical bosonic Fokker--Planck model in 1D: \texorpdfstring{\\}{}
simulations replicating the theory}
\label{ssec:sim1D}
First, we demonstrate the reliability of the proposed numerical scheme in~1D by reproducing the features proved in~\cite{carrillo_finite-time_2019}.
In addition, we use the scheme to predict that the entropy decays at an exponential rate, even after the onset of a condensate.
If not stated otherwise, we choose $\gamma=2.9$ and use a centred Gaussian as initial datum, viz.
\begin{align}\label{eq:init}
f_0(v) = A\mathrm{e}^{-\frac{|v|^2}{2\sigma^2}}
\end{align}
for fixed positive constants $A$ and $\sigma$. Moreover, we always set $R_1=1$.
We remark that for $d=1$ and the above choice of $\gamma$ and $R_1$ the critical mass $m_c$ takes the numerical value $m_c\approx5.37$.
\subsubsection{Validation in 1D}
We begin by validating the $1$D scheme~\eqref{eq:discreteu}, comparing the solution on a given mesh with a numerical reference solution computed on a fixed, much finer mesh.
We set $\sigma=0.7$, $A=4.5$ in~\eqref{eq:init} as well as $T=0.025$. For simplicity, the mass variable $x\in[0,m]$ is often referred to as the \textit{spatial} variable. The numerical reference solution is computed on a grid of $12801$ (equidistant) spatial mesh points and a total number of $1000$ (equidistant) time points. Notice that the values of the parameters $A$ and $\sigma$ coincide with those in~\ref{it:sup} below and observe that, in the simulations based on~\ref{it:sup}, well before the final time $T=0.025$ chosen for our validation, a significant amount of mass has accumulated at the origin (cf.\;Figures~\ref{fig:111} and~\ref{fig:113}). Therefore, our validation covers the case in which condensation occurs.
\begin{table}[H]\footnotesize
\parbox{.49\linewidth}{
\centering
\begin{tabular}{|c c c c|}
\hline
time steps & mesh size & $L^2_{x}$ error & rate \\
\hline \hline
1000 & 50 & 7.3825e-3 & - \\
\hline
1000 & 100 & 2.1290e-3 & 1.7939 \\
\hline
1000 & 200 & 5.6056e-4 & 1.9253 \\
\hline
1000 & 400 & 1.4222e-4 & 1.9788 \\
\hline
1000 & 800 & 3.5598e-5 & 1.9982 \\
\hline
1000 & 1600 & 8.8061e-6 & 2.0152 \\
\hline
1000 & 3200 & 2.0991e-6 & 2.0687 \\
\hline
\end{tabular}
\caption[Validation w.r.t.\;reference at time $T=0.025$ ($d=1,\gamma=2.9,m>m_c$)]{Convergence to reference solution at time $T=0.025$.}
\label{table:1Dmesh}
}
\hfill
\parbox{.49\linewidth}{
\centering
\begin{tabular}{|c c c c|}
\hline
time steps & mesh size & $L^2_{t,x}$ error & rate \\
\hline \hline
10& 50& 6.1372e-3 & - \\
\hline
20& 100& 3.1393e-3 & 0.9671 \\
\hline
40& 200& 1.5817e-3 & 0.9890 \\
\hline
80& 400& 7.8542e-4 & 1.0099 \\
\hline
160& 800& 3.8200e-4 & 1.0399 \\
\hline
320& 1600& 1.7877e-4 & 1.0955 \\
\hline
640& 3200& 7.6728e-5 & 1.2203 \\
\hline
\end{tabular}
\caption[Validation w.r.t.\;reference on space-time grid ($d=1,\gamma=2.9,m>m_c$)]{Convergence to reference solution (on space-time grid).}
\label{table:1Dfull}
}
\end{table}
Table~\ref{table:1Dmesh} displays the discrete $L^2_x$ error of the solution on the coarser mesh with respect to the reference solution, evaluated at the final time $T$, while
Table~\ref{table:1Dfull} indicates the $L^2$ space-time error between computed and reference solution.
The results suggest a second order dependence of the error on the spatial increment and a first order dependence on the temporal increment.
As long as the solution is not degenerate, this is explained by the fact that
we use an implicit Euler scheme in time (which is first-order accurate) and a central finite difference discretisation in space (whose truncation error is of second order); moreover, for the test using purely spatial refinement the temporal resolution is chosen so high that the temporal error is negligible.
Notice, however, that the degenerate case requires more care and that, in this work, we do not provide a rigorous numerical analysis of the scheme.
\begin{remark}
Higher-order implicit time discretisations could be considered to obtain better accuracy. Table~\ref{table:CN} displays the convergence rates upon refinement of the space-time mesh using a Crank--Nicolson-type (CN) time discretisation for \eqref{eq:invBefp1D} with parameters \ref{it:sub} and clearly confirms the second order accuracy of CN.
However, we would like to point out that the initial datum determined by~\ref{it:sub} is mass-subcritical, so that the second-order accuracy is observed for smooth solutions. By contrast, our simulations beyond blow-up indicate that the Newton solver for the implicit Euler scheme copes with condensates more robustly than the CN scheme.
\end{remark}
\begin{table}[H]\footnotesize\centering
\parbox{.49\linewidth}{
\centering
\begin{tabular}{|c c c c|}
\hline
time steps & mesh size & $L^2_{t,x}$ error & rate \\
\hline \hline
10& 50& 5.2392e-3 & - \\
\hline
20& 100& 1.1085e-3 & 2.2408 \\
\hline
40& 200& 2.4257e-4 & 2.1921 \\
\hline
80& 400& 5.6873e-5 & 2.0926 \\
\hline
160& 800& 1.3983e-5 & 2.0241 \\
\hline
\end{tabular}
\caption{Convergence to reference solutions using CN and \ref{it:sub}.}
\label{table:CN}
}
\end{table}
\subsubsection{Comparing simulations and theoretical results}
In order to numerically confirm the dynamical properties of eq.~\eqref{eq:befp} in $1$D, we run our scheme with the following four sets of parameters, covering the mass-supercritical and mass-subcritical regimes, the asymmetric case, as well as the case of an initial datum highly concentrated near the origin $v=0$:
\begin{enumerate}[label=(P\arabic*)]
\item\label{it:sup} $m>m_c:$ $\sigma=0.7$, $A=4.5$, $T=0.4$, $\tau=0.001$, $n=2001$ ($n:=$ number of spatial grid points).
\item\label{it:supasy} Asymmetric \& $m>m_c:$ translated Gaussian
$f_0(v) = A\mathrm{e}^{-|v-v_0|^2/(2\sigma^2)}+0.1$ chosen as initial datum using the parameters $v_0=-1$, $\sigma=0.7$ and $A=4.5$. Moreover, $T=0.4$, $\tau=0.001$, $n=2001$. The shift by $+0.1$ ensures that the cdf of $f_0$ is numerically still well invertible close to $v=R_1$.
\item\label{it:sub} $m<m_c:$ $\sigma=0.7$, $A=1.5$, $T=0.4$, $\tau=0.001$, $n=2001$.
\item\label{it:subbu} Concentrated \& $m<m_c:$ $\sigma=0.1$, $A=1.5$, $T=0.4$, $\tau=10^{-6}$, $n=10001$.
\end{enumerate}
The approximate total mass for each of these simulations is indicated in part~\textbf{(a)} of the corresponding figure: it equals the maximal value displayed on the horizontal axis.
\paragraph{Entropy decay.} The convergence to the minimiser of the entropy can be clearly observed in Figures~\ref{fig:111} and~\ref{fig:141}. Moreover,
Figures~\ref{fig:112},~\ref{fig:142},~\ref{fig:122} and \ref{fig:132}, which show the evolution of the relative entropy $H(u(t,\cdot))-H(u_{\infty})$, indicate an exponential decay of the entropy.
The red lines in these figures indicate the approximate slopes of the graphs averaged over the intervals where they are plotted; the computed slopes imply quantitative decay rates for the entropy of the form $e^{-\alpha t}$
with the following numerical values for~$\alpha$:
$\alpha\approx23.7$ for~\ref{it:sup},
$\alpha\approx23.8$ for~\ref{it:sub},
$\alpha\approx23.1$ for~\ref{it:subbu},
and $\alpha\approx23.0$ for~\ref{it:supasy}.
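The values of $\alpha$ above are obtained from the average slope of the logarithm of the relative entropy; such a fit can be sketched as follows (illustrative Python, not the actual post-processing code):

```python
import numpy as np

def fit_decay_rate(t, rel_entropy):
    """Least-squares fit of rel_entropy ~ C * exp(-alpha * t):
    linear regression on log(rel_entropy); returns alpha."""
    slope, _ = np.polyfit(t, np.log(rel_entropy), 1)
    return -slope
```
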
\begin{remark}
In the mass-subcritical case $m<m_c$ there exists $T=T(u_0)<\infty$ such that
the mapping $u(t,\cdot)$ has no critical point for $t>T$, so that the density $f(t,\cdot)$ of its inverse is smooth (see~\cite[Corollary~4.5]{carrillo_finite-time_2019}). In this case one can exploit the fact that the entropy functional of the bosonic Fokker--Planck equation in 1D coincides with that of a nonlinear diffusion equation with linear drift to which the theory developed in~\cite{unterreiter_entropy_2001} applies
in order to deduce exponential decay of the entropy with rate $\alpha=2$ for $t\ge T$, i.e.
\begin{align*}
H(u(t,\cdot))-H(u_\infty)\le (H(u(T,\cdot))-H(u_\infty))\mathrm{e}^{-2(t-T)},\quad t\ge T.
\end{align*}
This idea was used before in~\cite{carrillo_1d_2008} for~1D~KQ. The rate of convergence in the general case is still open.
\end{remark}
\begin{figure}[H]
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig111}
\subcaption{$u(t,\cdot)$ and $u_\infty$.}
\label{fig:111}
\end{minipage}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig112}
\subcaption{Evolution of the relative entropy.}
\label{fig:112}
\end{minipage}
\vspace{.5cm}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig113}
\subcaption{Evolution of the Dirac part.}
\label{fig:113}
\end{minipage}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig116}
\subcaption{Behaviour near singularity.}
\label{fig:115}
\end{minipage}
\caption[Large-time behaviour for~\ref{it:sup} ($d=1, \gamma=2.9, m>m_c$)]{Long-time behaviour in the mass-supercritical case~\ref{it:sup} ($d=1, \gamma=2.9$).}
\end{figure}
\paragraph{Finite-time condensation for $m>m_c$.} The finite-time condensation in the mass-supercritical case is well confirmed by the simulations \ref{it:sup}\&\ref{it:supasy}.
Recall that the condensate corresponds to the zero level set of $u(t,\cdot)$, which we numerically determine by the criterion $|u(t,\cdot)|<10^{-6}$.
Figure~\ref{fig:113} shows the time evolution of the condensed part relative to the (conserved) total mass. It clearly displays the onset of a condensate at some time $0<t\ll0.025$.
Further figures depicting the formation of condensates are Fig.~\ref{fig:111},~\ref{fig:141} and~\ref{fig:143}. Interestingly, in Figure~\ref{fig:143} the fraction of mass in the condensate is not monotonic, illustrating that, even when the total mass is supercritical, a previously formed condensate may partially dissolve.
\paragraph{Blow-up profile.} Figures~\ref{fig:115} and~\ref{fig:145} show the behaviour of $f(t,v)-f_c(v)$ for $0<v\ll R_1$ at the times $t=0.04$ and $t=0.1$. The figures indicate an error of the form
\begin{align}\label{eq:linearError}
f(t,v)-f_c(v) = c_\pm(t)|v| + o(|v|)\quad\text{ as }v\to0\pm
\end{align}
for suitable constants $c_+(t), c_-(t)\in\mathbb{R}$, which, for asymmetric solutions, need not necessarily coincide.
The asymptotic behaviour in~\eqref{eq:linearError} not only confirms the leading order spatial profile obtained rigorously in~\cite{carrillo_finite-time_2019} (see~\eqref{eq:profile}),
but indicates that the error with respect to $f_c$ may typically be of first order in
$|v|$ and thus smaller than the order $1-2/\gamma$ ensured by formula~\eqref{eq:profile}. (A rigorous derivation of the improved error control can be found in~\cite{hopf_thesis}.)
Let us also mention that in both figures the solution $u(t,\cdot)$ is not uniformly close to $u_\infty$, so that the asymptotic behaviour of the density near the origin at the chosen times is not due to the fact that the long-time limit of the density equals~$f_c$.
\begin{figure}
\begin{minipage}[H]{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig141}
\subcaption{$u(t,\cdot)$ and $u_\infty$.}
\label{fig:141}
\end{minipage} \hfill
\begin{minipage}[H]{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig142}
\subcaption{Evolution of the relative entropy.}
\label{fig:142}
\end{minipage}
\vspace{.5cm}
\begin{minipage}[H]{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig143}
\subcaption{Evolution of the Dirac part.}
\label{fig:143}
\end{minipage} \hfill
\begin{minipage}[H]{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig146}
\subcaption{Behaviour near singularity.}
\label{fig:145}
\end{minipage}
\caption[Long-time behaviour for~\ref{it:supasy} ($d=1, \gamma=2.9, m>m_c$, asymmetric)]{Long-time behaviour for asymmetric mass-supercritical datum~\ref{it:supasy} ($d=1, \gamma=2.9$).
}
\label{fig:image2}
\end{figure}
\begin{figure}[ht!]
\begin{minipage}{.49\textwidth}\centering
\includegraphics[scale=0.36]{fig121}
\subcaption{$u(t,\cdot)$ and $u_\infty$ ($\sigma=0.5$).}
\label{fig:121}
\end{minipage}
\begin{minipage}{.49\textwidth}\centering
\includegraphics[scale=0.36]{fig131}
\subcaption{$u(t,\cdot)$ and $u_\infty$ ($\sigma=0.1$).}
\label{fig:131}
\end{minipage}
\vspace{0.5cm}
\begin{minipage}{.49\textwidth}\centering
\includegraphics[scale=0.36]{fig122}
\subcaption{Evolution of the relative entropy \\($\sigma=0.5$).}
\label{fig:122}
\end{minipage}
\begin{minipage}{.49\textwidth}\centering
\includegraphics[scale=0.36]{fig132}
\subcaption{Evolution of the relative entropy \\($\sigma=0.1$).}
\label{fig:132}
\end{minipage}
\vspace{0.5cm}
\begin{minipage}{.49\textwidth}\centering
\includegraphics[scale=0.36]{fig135}
\subcaption{
Zoomed-in view of Fig.~\ref{fig:131}.}
\label{fig:135}
\end{minipage}
\begin{minipage}{.49\textwidth}\centering
\includegraphics[scale=0.36]{fig133}
\subcaption{Dirac mass ($\sigma=0.1$).}
\label{fig:133}
\end{minipage}
\caption[Large-time behaviour for~\ref{it:sub} and~\ref{it:subbu} ($d=1, \gamma=2.9, m<m_c$)]{The mass-subcritical cases~\ref{it:sub} and~\ref{it:subbu}, $d=1, \gamma=2.9, A=1.5.$}
\label{fig:tc}
\end{figure}
\paragraph{Transient condensates.}
In Figure~\ref{fig:tc} the behaviour of a mass-subcritical, but initially very concentrated solution is compared to the solution emanating from a more spread out datum.
In both cases the entropy decays exponentially. Observe that in the case of high concentration, the solution forms a condensate in finite time which eventually vanishes again. We refer to this phenomenon as a \textit{transient condensate}.
Recall that for $d=1$ and $\gamma>2$ the existence of transient condensates is known rigorously~\cite{carrillo_finite-time_2019}. The simulations based on~\ref{it:subbu} illustrate very explicitly how, after some finite time, the function $u(t,\cdot)$ begins to form a flat part at the horizontal axis, which eventually disappears again as the solution converges to the smooth, non-degenerate equilibrium (cf.~Figure~\ref{fig:135}).
\subsection{Validating KQ by means of explicit solutions in~2D}
\label{ssec:val_2D}
In the case $d=2$, KQ is $L^1$-critical and---as shown in~\cite{canizo_fokkerplanck_2016}---its isotropic form can be transformed explicitly into a linear Fokker--Planck equation, whose solutions are given explicitly by means of the fundamental solution of that problem in $\mathbb{R}^2$. Here we use these explicit solutions to validate the proposed numerical scheme. Since all simulations are performed on a finite domain with zero-flux boundary conditions, the solutions to KQ obtained via this transformation are only approximations of the exact solutions to our problem. However, we obtain a good approximation of the solutions in $B(0,R_1)\subset\mathbb{R}^2$ with zero flux provided $R_1$ is chosen sufficiently large. Indeed, the exact solutions in $\mathbb{R}^2$ emanating from the chosen initial data (Gaussians) decay exponentially in $|v|$, and the same is true for their derivative with respect to $v$, so that on the boundary $\partial B(0,R_1)$ of a centred ball of sufficiently large radius $R_1\gg1$ the flux is negligible. Hence, the exact solutions on $\mathbb{R}^2$ restricted to $B(0,R_1)$ are close to the exact solutions on $B(0,R_1)$ with zero flux.
Next, we recall the transformation leading to the explicit formula of solutions on the whole space, as observed in~\cite{canizo_fokkerplanck_2016}:
the solutions of the linear Fokker--Planck equation
\begin{align}\label{eq:linFP}
\partial_th & = \Delta h+\mathrm{div}(vh),\quad t>0,v\in\mathbb{R}^2,
\\h(0,\cdot) & = h_0\nonumber
\end{align}
are given by means of the fundamental solution
\begin{align*}
F(t,v,w)=a(t)^{-1}K_{b(t)}(a(t)^{-1/2}v-w),
\end{align*}
where
$ a(t)=\mathrm{e}^{-2t},\;\;b(t)=\mathrm{e}^{2t}-1,\text{ and }K_b(z)=(2\pi b)^{-1}\mathrm{e}^{-|z|^2/2b}.$
More precisely, (for sufficiently regular data $h_0$) the solution of equation~\eqref{eq:linFP} takes the form
\begin{align}\label{eq:solFP}
h(t,v) = \int_{\mathbb{R}^2} F(t,v,w)h_0(w)\,\d w.
\end{align}
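These formulas translate directly into code. The following sketch (in our own notation) implements $F(t,v,w)$; by quadrature on a sufficiently large box one can check that it integrates to one in $v$, reflecting conservation of mass for eq.~\eqref{eq:linFP}:

```python
import numpy as np

def K(b, z1, z2):
    """Heat kernel K_b(z) = (2*pi*b)^{-1} exp(-|z|^2/(2b)) in 2D."""
    return np.exp(-(z1**2 + z2**2) / (2.0 * b)) / (2.0 * np.pi * b)

def F(t, v1, v2, w1, w2):
    """Fundamental solution F(t,v,w) = a(t)^{-1} K_{b(t)}(a(t)^{-1/2} v - w)."""
    a = np.exp(-2.0 * t)
    b = np.exp(2.0 * t) - 1.0
    return K(b, v1 / np.sqrt(a) - w1, v2 / np.sqrt(a) - w2) / a
```
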
The relation between non-negative, isotropic solutions $f$ of $2$D KQ and non-negative, isotropic solutions $h$ of eq.~\eqref{eq:linFP} is given by
\begin{align}\label{eq:trafo}
f(t,v) = \frac{h(t,v)}{1+\bar M_h(t,|v|)}\quad \text{ resp. }\quad h(t,v)=f(t,v)\mathrm{e}^{\bar M_f(t,|v|)},
\end{align}
where
\begin{align*}
\bar M_f(t,\rho)=\frac{1}{2\pi}\int_{\{|v|\le \rho\}}f(t,w)\,\d w
= \int_0^\rho g(t,r)r\,\d r.
\end{align*}
We initialise our tests again with a centred Gaussian of the form
\begin{align*}
f_0(v) = A\mathrm{e}^{-\frac{|v|^2}{2\sigma^2}}
\end{align*}
for fixed positive constants $A$ and $\sigma$. Then the initial datum $h_0$ corresponding to $f_0$ via the transformation~\eqref{eq:trafo} is given by
\begin{align*}
h_0(v) = A\mathrm{e}^{-\frac{|v|^2}{2\sigma^2}}\mathrm{e}^{A\sigma^2\left(1-\mathrm{e}^{-\frac{|v|^2}{2\sigma^2}}\right)},
\end{align*}
and from formula~\eqref{eq:solFP} and relation~\eqref{eq:trafo} we infer an expression for the solution $f$, which shows, in particular, that $f(T,\cdot)$ has exponential decay for any positive time $T$.
In our actual code, we use the inverse cdf of $f(T,\cdot)$.
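As a cross-check, the closed form of $h_0$ can be verified against the defining relation $h_0=f_0\,\mathrm{e}^{\bar M_{f_0}}$ from~\eqref{eq:trafo} (a small Python sketch with a midpoint quadrature for $\bar M_{f_0}$; the parameter values $A=4$, $\sigma=0.9$ are those of the tests below):

```python
import numpy as np

A, sigma = 4.0, 0.9

def f0(r):
    return A * np.exp(-r**2 / (2.0 * sigma**2))

def Mbar_f0(rho, n=20000):
    """Midpoint quadrature of int_0^rho f0(r) r dr."""
    r = (np.arange(n) + 0.5) * rho / n
    return np.sum(f0(r) * r) * rho / n

def h0_closed(r):
    """Closed-form h0 stated in the text."""
    return f0(r) * np.exp(A * sigma**2 * (1.0 - np.exp(-r**2 / (2.0 * sigma**2))))
```
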
\paragraph{Details on the tests.}
We choose $R_1$ to be the smallest radius satisfying $f_c(v)\le10^{-4}$ for $|v|\ge R_1$. This guarantees that for any not too large $\sigma>0$, the function $f(t,\cdot)$ is small outside $B(0,R_1)$.
Two different tests are performed using the following common set of parameters:
$A=4$, $\sigma=0.9$, final time $T=0.04$ and size of the coarsest mesh equal to $n_0=25$.
Since the solution to the exact problem remains bounded,
the tests are performed with $\varepsilon=\delta=0$.
In the first test the dependence of the $L^2$ distance at time $T$ between exact and computed solution for different spatial resolutions is analysed.
More precisely, for $j=0,\dots,N=5$ we compute the error
$$E_j= \|S^{(j)}(T,\cdot)-S^{(j)}_\mathrm{exact}(T,\cdot)\|_{l^2(J_j)}\cdot 2^{-j},$$ where
$J_j$ denotes the discrete mesh using a total number of $2^{j}n_0+1$ mesh points intersected with the interval $[0,m/2]$, $S^{(j)}_\mathrm{exact}$ denotes the exact solution restricted to the spatial mesh $J_j$ and $S^{(j)}$ the discrete solution computed on the mesh $J_j$ using a total number of $400$ time steps.
Since we expect a polynomial dependence of the error on the spatial increment, we then let $\mathrm{rate}(j)=\log_2(E_j/E_{j+1})$.
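In code, the empirical orders are obtained as follows (our helper name, shown for transparency):

```python
import numpy as np

def observed_rates(errors):
    """rate(j) = log2(E_j / E_{j+1}) for errors on successively refined meshes."""
    E = np.asarray(errors, dtype=float)
    return np.log2(E[:-1] / E[1:])
```

For instance, the first two errors of Table~\ref{table:2Dmesh} reproduce the reported rate $1.4919$.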
The results of the test can be found in Table~\ref{table:2Dmesh}. Theoretically, in the present case of two space dimensions the original density $f$ remains uniformly bounded in time, so that $\partial_zS$ stays away from zero; hence the spatial discretisation based on central differences should yield a quadratic dependence of the truncation error on the spatial increment. The rates displayed in Table~\ref{table:2Dmesh} are somewhat worse, possibly because the meshes considered are not yet fine enough to capture the asymptotic regime.
In the second test we analyse the dependence of the $L^2$ space-time distance between exact and computed solution on the number of spatial and temporal grid points. The procedure is analogous to the first test except that the $j$-th mesh is obtained by using $2^{j}n_0+1$ spatial and $2^{j}m_0$ temporal grid points, where $m_0=4$, and that now the error is given by
$$E_j= \|S^{(j)}-S^{(j)}_\mathrm{exact}\|_{l^2(I_j\times J_j)}\cdot 2^{-2j},$$
where $I_j$ denotes the discrete temporal mesh consisting of $2^{j}m_0$ time points. The results are displayed in Table~\ref{table:2Dfull} and suggest a linear rate of convergence. This is in line with the backward Euler scheme used for the time stepping.
\begin{table}[H]\footnotesize
\parbox{.5\linewidth}{
\centering
\begin{tabular}{|cccc|}
\hline number of&mesh size&$L^2$ error& rate\\
time points & & (at time $T$) & \\
\hline \hline
4000 & 25 & 6.2783e-3 & - \\
\hline
4000 & 50 & 2.2323e-3 & 1.4919 \\
\hline
4000 & 100 & 7.9661e-4 & 1.4866 \\
\hline
4000 & 200 & 2.6080e-4 & 1.6109 \\
\hline
4000 & 400 & 7.7921e-5 & 1.7428 \\
\hline
4000 & 800 & 1.9283e-5 & 2.0147 \\
\hline
\end{tabular}
\caption{Convergence to exact solution at the final time $T=0.04$.}
\label{table:2Dmesh}
}
\hfill
\parbox{.5\linewidth}{
\centering
\begin{tabular}{|cccc|}
\hline
number of&mesh size& full $L^2$ error& rate\\
time points & & & \\
\hline \hline
4 & 25 & 8.3850e-4 & - \\
\hline
8 & 50 & 4.1295e-4 & 1.0218 \\
\hline
16 & 100 & 2.0813e-4 & 0.9885 \\
\hline
32 & 200 & 1.0427e-4 & 0.9971 \\
\hline
64 & 400 & 5.1996e-5 & 1.0039 \\
\hline
128 & 800 & 2.5774e-5 & 1.0125 \\
\hline
\end{tabular}
\caption{Convergence to exact solution (space-time grid).}
\label{table:2Dfull}
}
\end{table}
\begin{remark}[Validation of regularisation]
For completeness, we also tested the dependence of the computed solution on the regularisation parameters $\varepsilon$ and $\delta$, even though this is not necessary for $2$D KQ since the density is theoretically known to remain bounded.
We observed that the error decreases polynomially as $\varepsilon,\delta\to0$.
\end{remark}
\subsection{Simulations of 3D KQ in radial coordinates}\label{ssec:sim3D}
Here, we simulate equation~\eqref{eq:2regS3D} with $d=3$ for suitable choices of $\varepsilon,\delta$, $0<\varepsilon,\delta\ll1$, where we choose $R_1=1$.
We recall our notation $\bar m_c=\frac{1}{|\partial B(0,1)|}\int_{B(0,R_1)}f_c(v)\,\d v$, where now $|\partial B(0,1)| = 4\pi$ denotes the area of the 2-sphere, and remark that the numerical value of $\bar m_c$ is approximately given by~$\bar m_c\approx1.84$.
We perform three simulations with a mass-supercritical, a mass-subcritical and a highly concentrated initial datum, respectively. More precisely, choosing as initial data again Gaussians of the form $f_0(v)=A\mathrm{e}^{-|v|^2/(2\sigma^2)}$, we run our scheme with the following three sets of parameters:
\setlist[enumerate,1]{start=5}
\begin{enumerate}[label=(P\arabic*)]
\item\label{it:sub3D} $m<m_c:$ $\sigma=0.3$, $A=3$, $T=0.2$, $\tau=0.001$, $n=2001$, $\varepsilon=0$, $\delta=0$.
\item\label{it:sup3D} $m>m_c:$ $\sigma=0.9$, $A=10$, $T=0.25$, $\tau=5\cdot10^{-6}$, $n=50001$, $\varepsilon=10^{-12}$, $\delta=0$.
\item\label{it:subbu3D} $m<m_c:$ $\sigma=0.15$, $A=50$, $T=0.25$, $\tau=5\cdot10^{-5}$, $n=2001$, $\varepsilon=10^{-10}$, $\delta=10^{-10}$.
\end{enumerate}
The quantity $\bar m:=m/|\partial B(0,1)|$ associated with the above choice of parameters takes the value $\bar m\approx 0.335$ for~\ref{it:sub3D}, $\bar m\approx 2.59$ for~\ref{it:sup3D}, and $\bar m\approx 1.41$ for~\ref{it:subbu3D} (see Figures~\ref{fig:321}, \ref{fig:311} and \ref{fig:381}).
The size of the condensate divided by $|\partial B(0,1)|$, i.e.~$\bar x_p(t):=\mathcal{L}^1(\{S(t,\cdot)=0\})$, is numerically determined by replacing the condition $S(t,\cdot)=0$ with the smallness criterion $S(t,\cdot)<10^{-10}$.
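In code, this amounts to counting grid values below the threshold (a minimal sketch in our notation, with $h$ denoting the grid spacing in the mass variable):

```python
import numpy as np

def condensate_size(S, h, thresh=1e-10):
    """Approximate Leb({S(t,.) = 0}) by h times the number of grid values
    falling below the smallness threshold."""
    return h * np.count_nonzero(S < thresh)
```
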
\begin{remark}
The choice of the comparatively fine mesh in~\ref{it:sup3D} was made in order to ensure a sufficiently good approximation of the evolution of the entropy. See Fig.~\ref{fig:312}, which suggests an exponential decay.
\end{remark}
\begin{figure}[H]
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig321}
\subcaption{$S(t,\cdot)$ and $S_\infty$.}
\label{fig:321}\vfill
\end{minipage}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig322}
\subcaption{Evolution of the relative entropy.}
\label{fig:322}
\end{minipage}
\caption[Large-time behaviour for~\ref{it:sub3D} ($d=3, \gamma=1, m<m_c$)]{Long-time behaviour in mass-subcritical case~\ref{it:sub3D} ($\gamma=1, d=3$).}
\end{figure}
\paragraph{Long-time behaviour.}
Our simulations suggest that $3$D KQ has properties which are very similar to the Fokker--Planck model for bosons in $1$D in the $L^1$-supercritical regime. Figures~\ref{fig:321},~\ref{fig:311} and~\ref{fig:381} suggest that in the long-time limit the numerical solution $S(t,\cdot)$ approximates the minimiser of the entropy (at the level of $S$), which we here denote\footnote{For simplicity, in our notation $S_\infty$ for the entropy minimiser we omit its dependence on the given mass $m$ and the radius $R_1$. } by $S_\infty$.
\paragraph{Entropy.}
The decay of the relative entropy appears to be exponential in all three cases~\ref{it:sub3D}--\ref{it:subbu3D}, see Figures~\ref{fig:322},~\ref{fig:312} and~\ref{fig:382}. In each of these plots the red line indicates the approximate slope of the graph, averaged over the plotted interval. Numerically, the relative entropy $H(t)-H_{\infty}$ appears to decay to zero like $e^{-\alpha t}$, where
$\alpha\approx35.3$ for~\ref{it:sub3D}, $\alpha\approx21.1$ for~\ref{it:sup3D}, and $\alpha\approx21.7$ for~\ref{it:subbu3D}.
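The quoted rates can be extracted from the simulation output by a least-squares fit of $\log(H(t)-H_\infty)$ against $t$. A minimal sketch, using synthetic data with a prescribed rate in place of the actual entropy samples:

```python
import numpy as np

# Estimate the decay rate alpha from samples of the relative entropy,
# assuming H(t) - H_inf ~ C * exp(-alpha * t): the slope of the log of the
# relative entropy against t is -alpha.
def decay_rate(t, rel_entropy):
    """Least-squares slope of log(relative entropy) against t; returns alpha."""
    slope, _ = np.polyfit(t, np.log(rel_entropy), 1)
    return -slope

# Synthetic check with a known rate (35.3, the value found for the first case).
t = np.linspace(0.0, 0.2, 50)
h = 0.7 * np.exp(-35.3 * t)
print(decay_rate(t, h))   # ~ 35.3
```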
\begin{figure}[H]
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig3111.eps}
\subcaption{$S(t,\cdot)$ and $S_\infty$.}
\label{fig:311}
\end{minipage}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig3112.eps}
\subcaption{Evolution of the relative entropy.
}
\label{fig:312}
\end{minipage}
\vspace{0.5cm}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig3113.eps}
\subcaption{Evolution of the Dirac part.}
\label{fig:313}
\end{minipage}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig3114.eps}
\subcaption{Behaviour near singularity.
}
\label{fig:314}
\end{minipage}
\caption[Long-time behaviour for~\ref{it:sup3D} ($d=3, \gamma=1, m>m_c$)]{Long-time behaviour in the mass-supercritical case~\ref{it:sup3D} ($d=3, \gamma=1, \varepsilon=10^{-12}, \delta=0$).}
\end{figure}
\paragraph{Condensation.}
In both the mass-supercritical case~\ref{it:sup3D} and the case of high concentration near the origin~\ref{it:subbu3D} we observe the onset of a flat part at the level of $S(t,\cdot)$ at height zero after some finite time, see Fig.~\ref{fig:313} and~\ref{fig:383}. In the original variables this means that mass is gradually absorbed by the origin. Furthermore, Fig.~\ref{fig:383} shows that, similarly to the observations in $1$D (see Section~\ref{ssec:sim1D}), it is possible for mass previously concentrated at velocity zero to escape. In fact, the condensate component may even dissolve completely. Thus, at least numerically, the fraction of particles in the condensate is, in general, not monotonic in time for the 3D Kaniadakis--Quarati model.
\begin{figure}[H]
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig381.eps}
\subcaption{$S(t,\cdot)$ and $S_\infty$.}
\label{fig:381}
\end{minipage}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig385.eps}
\subcaption{Zoomed-in view of Fig.~\ref{fig:381}.}
\label{fig:385}
\end{minipage}
\vspace{.5cm}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig382.eps}
\subcaption{Evolution of the relative entropy.
}
\label{fig:382}
\end{minipage}
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig383}
\subcaption{Evolution of the Dirac part.}
\label{fig:383}
\end{minipage}
\caption[Transient condensate for~\ref{it:subbu3D} ($d=3, \gamma=1, m<m_c$)]{Transient condensate in the mass-subcritical case~\ref{it:subbu3D} ($d=3, \gamma=1, \varepsilon=\delta=10^{-10}$).}
\label{fig:tc3d}
\end{figure}
\begin{remark}
In order to produce the transient condensate in Figure~\ref{fig:tc3d} it was necessary to choose the parameter $\delta$ appearing in equation~\eqref{eq:2regS3D} (and its discrete counterpart)
strictly positive.
The same simulation for $\delta=0$ results in the flat part being trapped at height zero once it has formed. As explained in Section~\ref{sssec:eqhd} and also in view of our results for the $1$D model, this \enquote{stickiness} appears to be a numerical artefact resulting from the circumstance that a regularisation based on a positive $\varepsilon$ but vanishing~$\delta$ is imbalanced and favours condensation.
\end{remark}
\paragraph{Blow-up profile.} At times where the solution has a non-trivial condensate component, we were interested in the spatial behaviour of $S(t,\cdot)$ close to $\{S(t,\cdot)=0\}$.
Owing to the results on the $1$D model, one may expect the function $f(t,\cdot)$ to behave to leading order like the limiting steady state $f_c$, i.e.\;like $2|v|^{-2}$.
Furthermore, the formal expansions in~\cite[Section~III.C]{sopik_dynamics_2006} suggest that for isotropic solutions of 3D KQ the error by which $f(t,\cdot)$ deviates from $f_c$ has the form
\begin{align}\label{eq:Sopik3Dprofile}
f(t,v) - f_c(v) = c(t)|v|^{-1} + o(|v|^{-1})
\end{align}
for some constant $c(t)\in\mathbb{R}$.
Our experiments corroborate formula~\eqref{eq:Sopik3Dprofile}. Indeed, Figures~\ref{fig:314} and~\ref{fig:384}, which display the quantity $f(t,v)/f_c(v)$ at times where $f(t,\cdot)$ is unbounded at the origin, show that numerically this ratio behaves like $1+\tilde c(t)|v|+o(|v|)$ as $|v|\to0$.
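The coefficient $\tilde c(t)$ and the intercept of this expansion can be recovered from a linear fit of the ratio $f(t,v)/f_c(v)$ against $|v|$ near the origin. A minimal sketch with synthetic data in place of the computed ratio (the coefficient value $3.2$ is purely illustrative):

```python
import numpy as np

# Near the origin the ratio f(t,v)/f_c(v), with f_c(v) = 2|v|^{-2}, is
# expected to behave like 1 + c(t)|v| + o(|v|). A linear least-squares fit
# on a small neighbourhood of 0 recovers the intercept (should be ~1) and
# the coefficient c(t).
def profile_fit(v, ratio):
    c, intercept = np.polyfit(v, ratio, 1)
    return intercept, c

v = np.linspace(1e-4, 5e-2, 200)
ratio = 1.0 + 3.2 * v + 0.5 * v**2     # synthetic ratio with c(t) = 3.2
intercept, c = profile_fit(v, ratio)
print(intercept, c)   # intercept ~ 1, c ~ 3.2
```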
\begin{figure}[H]\centering
\begin{minipage}{0.49\textwidth}\centering
\includegraphics[scale=0.36]{fig384}
\vfill
\end{minipage}
\caption[Blow-up profile in \ref{it:subbu3D} ($d=3, \gamma=1, m<m_c$)]{Spatial blow-up profile in \ref{it:subbu3D}.}
\label{fig:384}
\end{figure}
\section{Conclusion}
In this work we propose a numerical scheme for nonlinear Fokker--Planck equations for bosons that is, for the first time, able to cope with Dirac delta concentrations of (partial) mass at the origin in finite time and to continue beyond the blow-up time. This is achieved by
considering appropriately normalised pseudo-inverse distributions and by suitably rescaling the equation to obtain an alternative formulation that admits a Dirac delta concentration at the origin as a possible steady state. These new PDEs are solved by implicit schemes, whose approximations by Newton--Raphson type methods are shown, via mesh refinement, to be numerically convergent even beyond the blow-up time.
The physical entropy associated with these problems is shown to be decreasing for the semidiscrete schemes in 1D and 2D.
We illustrate different phenomena appearing in the 3D radial KQ model, which mimic the phenomena observed, and partially proved, for the 1D caricature of the KQ model in the $L^1$-supercritical case, see \cite{carrillo_finite-time_2019}.
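The combination of an implicit time step with a Newton--Raphson solve mentioned above follows a standard pattern. A hedged sketch of that pattern for a scalar ODE $u'=F(u)$ (not the paper's actual scheme, which acts on the discretised pseudo-inverse formulation; $F$ and its derivative are placeholders):

```python
# Generic pattern: one implicit Euler step u = u_old + dt * F(u),
# solved for u by Newton's method on the residual g(u) = u - u_old - dt*F(u).
def implicit_euler_step(u_old, dt, F, dF, tol=1e-12, max_iter=50):
    """Solve u = u_old + dt * F(u) for u by Newton-Raphson iteration."""
    u = u_old  # initial guess: the previous time step
    for _ in range(max_iter):
        g = u - u_old - dt * F(u)        # residual of the implicit equation
        if abs(g) < tol:
            break
        u -= g / (1.0 - dt * dF(u))      # Newton update with g'(u) = 1 - dt*F'(u)
    return u

# Example: u' = -u with u(0) = 1; one implicit Euler step of size 0.1
# gives u_1 = 1/(1 + 0.1).
u1 = implicit_euler_step(1.0, 0.1, lambda u: -u, lambda u: -1.0)
print(u1)   # ~ 0.90909...
```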
\section*{Acknowledgements}
JAC was partially supported by the EPSRC grant number EP/P031587/1. KH was supported by MASDOC DTC at the University of Warwick, which is funded by the EPSRC grant number EP/HO23364/1. MTW acknowledges partial support by the EPSRC grant number EP/P01240X/1.
\bibliographystyle{abbrv}